Hi,
I've connected a chat model along with another tool.
The tool is returning data (objects) that are too large for the chat model to handle.
I'm wondering how I can customize the tool's response to extract only specific keys from the objects before they are sent to the AI model. At what point in the flow should this customization happen, and how can I make sure it runs before the data reaches the model?
I'm aware of the 'Edit Fields' node, but it doesn't quite fit the flow I'm aiming for.
My desired flow is:
Trigger a message → AI model → Request summary from a different tool → Customize the objects → Receive the response → Respond from the chat model
My current flow is:
Trigger a message → AI model → Request summary from a different tool → Receive the response → Chat model cannot handle the data
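To illustrate the "Customize the objects" step I have in mind: something like the following plain-JavaScript sketch, which could run in a Code node (or any scripting step) between the tool and the model. The key names ("id", "title", "summary") and the sample data are placeholders, not the actual fields my tool returns.

```javascript
// Keep only the listed keys from each object, dropping large fields
// (placeholder field names — substitute the real keys from the tool).
function pickKeys(objects, keys) {
  return objects.map((obj) =>
    Object.fromEntries(
      keys.filter((k) => k in obj).map((k) => [k, obj[k]])
    )
  );
}

// Example tool output with an oversized field the model doesn't need:
const toolOutput = [
  { id: 1, title: "Report A", summary: "Short summary", rawHtml: "<div>very large markup</div>" },
  { id: 2, title: "Report B", summary: "Another summary", rawHtml: "<div>very large markup</div>" },
];

// Trimmed objects, small enough to pass to the chat model:
const trimmed = pickKeys(toolOutput, ["id", "title", "summary"]);
console.log(JSON.stringify(trimmed));
```

The question is where in the flow a step like this should sit so the model only ever sees the trimmed objects.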
Thank you,
Noam Ofir
Hi,
I believe you're looking for structured output.
What worked best for me was an AI Agent that handles generation and tool calling, feeding into a simple LLM chain whose sole job is to produce the structured output.
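As a sketch, the output-parsing chain can be given a schema describing only the fields the chat model needs, so everything else is dropped. The field names below are assumptions for illustration:

```json
{
  "type": "object",
  "properties": {
    "id": { "type": "number" },
    "title": { "type": "string" },
    "summary": { "type": "string" }
  },
  "required": ["id", "title", "summary"]
}
```

The agent can then pass its raw tool results through this chain, and only the schema's fields reach the final response.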
Regards,
J.
Hi,
Thank you for replying.
Could you please share the AI agent workflow?