Troubleshooting
This section describes tools for troubleshooting embedded function calling and addresses common errors.
General logging capabilities for Workers also apply to embedded function calling. Tool invocations can be logged with `console.log()`, as in any other Worker.
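For example, a tool can log its own invocations and results. This is a sketch: the `getWeather` tool and its placeholder response are illustrative, following the tool shape (`name`, `description`, `parameters`, `function`) that `runWithTools` accepts.

```typescript
// Illustrative tool definition whose invocations are logged with console.log().
// In a real Worker, this object would be passed in the `tools` array of runWithTools.
const getWeatherTool = {
  name: "getWeather",
  description: "Return the current temperature for a city",
  parameters: {
    type: "object",
    properties: {
      city: { type: "string", description: "City name" },
    },
    required: ["city"],
  },
  // Log every invocation, including the arguments the model supplied
  // and the value returned to the model.
  function: async ({ city }: { city: string }) => {
    console.log(`getWeather invoked with city=${city}`);
    const result = `The temperature in ${city} is 20°C`; // placeholder response
    console.log(`getWeather returning: ${result}`);
    return result;
  },
};
```

These logs appear alongside any other Worker logs, so they can be inspected with the usual Workers observability tooling.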
The `runWithTools` function has a `verbose` mode that emits helpful logs for debugging function calls, as well as input and output statistics.
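Verbose mode is enabled in the options argument of `runWithTools`. The sketch below assumes a minimal Worker; the model name and the `add` tool are illustrative, not prescribed.

```typescript
import { runWithTools } from "@cloudflare/ai-utils";

export interface Env {
  AI: any; // Workers AI binding, configured in the Worker's bindings
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    const answer = await runWithTools(
      env.AI,
      // Illustrative model; use any model that supports function calling.
      "@hf/nousresearch/hermes-2-pro-mistral-7b",
      {
        messages: [{ role: "user", content: "What is 2 + 3?" }],
        tools: [
          {
            name: "add",
            description: "Add two numbers together",
            parameters: {
              type: "object",
              properties: {
                a: { type: "number", description: "First number" },
                b: { type: "number", description: "Second number" },
              },
              required: ["a", "b"],
            },
            function: async ({ a, b }: { a: number; b: number }) => String(a + b),
          },
        ],
      },
      // verbose: true emits debug logs for each function call,
      // along with input and output statistics.
      { verbose: true },
    );
    return Response.json(answer);
  },
};
```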
Responding to an LLM prompt with embedded function calling may require multiple AI inference requests and function invocations, which can impact user experience.
Consider the following to improve performance:
- Shorten prompts (to reduce time for input processing)
- Reduce the number of tools provided
- Stream the final response to the end user (to minimize the time to interaction)
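Streaming can be sketched as follows, assuming `streamFinalResponse: true` makes `runWithTools` resolve to a readable stream; the model name, tool, and response headers are illustrative.

```typescript
import { runWithTools } from "@cloudflare/ai-utils";

export interface Env {
  AI: any; // Workers AI binding
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    const stream = await runWithTools(
      env.AI,
      "@hf/nousresearch/hermes-2-pro-mistral-7b", // illustrative model
      {
        messages: [{ role: "user", content: "What is the weather in Austin?" }],
        tools: [
          {
            name: "getWeather",
            description: "Return the current weather for a city",
            parameters: {
              type: "object",
              properties: { city: { type: "string" } },
              required: ["city"],
            },
            // Placeholder implementation for illustration.
            function: async ({ city }: { city: string }) =>
              `The temperature in ${city} is 20°C`,
          },
        ],
      },
      // Stream the final response so the user sees tokens as they arrive,
      // instead of waiting for all inference rounds to finish.
      { streamFinalResponse: true },
    );
    return new Response(stream as ReadableStream, {
      headers: { "content-type": "text/event-stream" },
    });
  },
};
```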
If you are getting a `BadInput` error, your inputs may exceed the current context window for our models. Try reducing input tokens to resolve this error.
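One way to reduce input tokens, assuming the installed version of `@cloudflare/ai-utils` exports the `autoTrimTools` helper, is the `trimFunction` option of `runWithTools`: the model first selects the tools relevant to the prompt, so unused tool definitions are trimmed from the input. A sketch of the options argument:

```typescript
import { runWithTools, autoTrimTools } from "@cloudflare/ai-utils";

// Options argument for runWithTools: have the model pick the relevant
// tools first, trimming the rest from the prompt to save input tokens.
const options = {
  trimFunction: autoTrimTools,
};
```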