
[BUG] Conversational agent overwrites input from SearchIndexTool #2918

Open
reuschling opened this issue Sep 9, 2024 · 12 comments
Labels: bug (Something isn't working), untriaged
@reuschling

Defining a SearchIndexTool inside a conversational flow agent is straightforward and works as expected: the question is inserted into the defined OpenSearch query, and search results come back.

Unfortunately, I cannot get this to work with a conversational agent.

This agent should always execute the same query:

{
	"name": "Test Agent",
	"type": "conversational",
	"description": "Simple agent to test the agent framework",
	"llm": {
		"model_id": "{{ _.LlmModelId4agent }}",
		"parameters": {
			"max_iteration": 5,
			"stop_when_no_tool_found": true,
			"disable_trace": false
		}
	},
	"memory": {
		"type": "conversation_index"
	},
	"app_type": "chat_with_rag",
	"tools": [
		{
			"type": "SearchIndexTool",
			"description": "A tool to search opensearch index with natural language question. If you don't know answer for some question, you should always try to search data with this tool. Action Input: <natural language question>",
			"include_output_in_agent_response": true,
			"parameters": {
				"input": "{\"index\": \"scll_agendaitem\", \"query\": {\"query\": { \"match_all\": {}}} }"
			}
		}
	]
}

I always get this error message, independent of the specified tool input: Failed to run the tool SearchIndexTool with the error message Expected a com.google.gson.JsonObject but was com.google.gson.JsonPrimitive; at path $. It seems that the conversational agent overwrites the parameters.input content with its own generated query, discarding the query specification from the SearchIndexTool.
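To illustrate the mismatch (my own reconstruction of the mechanism, not verified against the code; the question string and field names are made up): the tool expects its input to parse as a JSON object, but the agent replaces it with the LLM's generated action input, which is a plain string:

```json
{
	"configured_input": { "index": "scll_agendaitem", "query": { "query": { "match_all": {} } } },
	"input_received_by_tool": "What was discussed in the last agenda item?"
}
```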

The expected behavior is that you define the OpenSearch query inside the tool specification, and that the output from the previous step of the conversational agent is inserted into this query, e.g. like this:

{
	"type": "SearchIndexTool",
	"description": "A tool to search opensearch index with natural language question. If you don't know answer for some question, you should always try to search data with this tool. Action Input: <natural language question>",
	"parameters": {
		"input": "{\"index\": \"${parameters.index}\", \"query\": ${parameters.query} }",
		"query": {
			"query": {
				"match": {
					"tns_body_chunked768": {
						"query": "${parameters.FORMER_OUTPUT_OF_CONVERSATIONAL_AGENT}"
					}
				}
			},
			"size": "5",
			"_source": "tns_body_chunked768"
		}
	}
}

Maybe the 'input' parameter for the query specification could be renamed or made configurable.

@reuschling reuschling added bug Something isn't working untriaged labels Sep 9, 2024
@yuye-aws
Member

yuye-aws commented Sep 10, 2024

Hi @jngz-es ! Is this the same bug in #2836? Will this PR fix: #2837?

@reuschling
Author

As far as I can see, #2837 will fix it for MLConversationalFlowAgentRunner and MLFlowAgentRunner, but not for MLChatAgentRunner.

MLChatAgentRunner uses the AgentUtils method AgentUtils.constructToolParams to generate the params for a tool. The PR fix creates a new method AgentUtils.getToolExecuteParams that does this for MLConversationalFlowAgentRunner and MLFlowAgentRunner.

@yuye-aws
Member

You are right. Feel free to raise a PR fixing it.

@yuye-aws
Member

@reuschling can you share your query when executing the agent?

@yuye-aws
Member

yuye-aws commented Sep 12, 2024

Hi @reuschling ! I guess you wish to somehow hardcode the search query for the SearchIndexTool. Unfortunately, the tool input will be overridden by the conversational agent.

Since you are already hardcoding the search query, I recommend you try a flow agent to first fetch the results from the SearchIndexTool. You can then chain an ML model tool or connector tool to call the large language model.

You can even specify your prompt in the ML model tool.
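For reference, such a flow agent might look roughly like this (a sketch only: the index, field name, and model id are placeholders, and it assumes the previous tool's output can be referenced as ${parameters.SearchIndexTool.output}, as in the RAG tutorials):

```json
{
	"name": "RAG flow agent",
	"type": "flow",
	"tools": [
		{
			"type": "SearchIndexTool",
			"parameters": {
				"input": "{\"index\": \"scll_agendaitem\", \"query\": {\"query\": {\"match\": {\"tns_body_chunked768\": \"${parameters.question}\"}}, \"size\": 5}}"
			}
		},
		{
			"type": "MLModelTool",
			"description": "A general tool to answer any question",
			"parameters": {
				"model_id": "<llm model id>",
				"prompt": "Answer the question based on this context:\n${parameters.SearchIndexTool.output}\n\nQuestion: ${parameters.question}"
			}
		}
	]
}
```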

@reuschling
Author

If it is a bug in flow agent and conversational flow agent, then it is also a bug inside conversational agent, aka MLChatAgentRunner. The cleanest way of fixing it would be also as part of #2837 . @jngz-es I can support if you want.

@yuye-aws
Member

yuye-aws commented Sep 12, 2024

I doubt whether the same fix would work. In the conversational agent, we let the LLM generate the input parameters and pass them to the tools.

I will discuss with @jngz-es tomorrow to check whether this would introduce a breaking change.

@yuye-aws
Member

I recommend you try a flow agent to first fetch the results from the SearchIndexTool. You can then chain an ML model tool or connector tool to call the large language model.

@reuschling Could you try my workaround?

@reuschling
Author

I want to use the conversational agent, but with a tool (among others, e.g. one for hybrid search) that runs dedicated queries against the index, such as counting documents. The conversational agent should provide the input for this tool, but that input should be inserted into a predefined query template via a matching parameter reference. This is what the SearchIndexTool is made for. Using a flow agent instead is no solution for me.

The SearchIndexTool should behave the same regardless of whether it is used inside a flow or a conversational agent. Maybe using the input variable for the query template definition is then the wrong approach.

But I doubt it would be consistent or understandable if, after merging #2837, all tools with definitions inside the input variable acted differently depending on where they are used (flow or conversational). That is, all tools (if there are others) with necessary input definitions currently don't work, and still won't work, with the conversational agent, because those definitions are overwritten. #2837 fixes this problem only for the flow agents.

Not overwriting the input when there is already an input definition inside the tool specification would solve this for the conversational agent as well. This is what #2837 does.

What is crucial is that the previously LLM-generated action input is available in a variable for referencing inside the query template, like ${parameters.question} in the tutorial example.

@yuye-aws
Member

yuye-aws commented Sep 13, 2024

We just had a discussion. The conclusion is that we will introduce a new field in the tool settings, like "configs", which works like hardcoded parameters.
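The exact shape is not decided yet, but such a "configs" field might look something like this (a purely hypothetical sketch, not an implemented API; the query template is taken from the example above):

```json
{
	"type": "SearchIndexTool",
	"description": "A tool to search an OpenSearch index with a natural language question.",
	"configs": {
		"input": "{\"index\": \"scll_agendaitem\", \"query\": {\"query\": {\"match\": {\"tns_body_chunked768\": \"${parameters.question}\"}}, \"size\": 5}}"
	}
}
```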

@reuschling
Author

This sounds promising, and cleaner than some hidden input prioritization. I'm looking forward to it; the current status is a blocker for me.

@ylwu-amzn
Collaborator

Cool. @reuschling, thanks for reporting this issue. @jngz-es is working on it and will publish a PR soon.
