Prompt Engineering Guide
How to write effective prompts for your VolansDB Agents
Include details in your query to get more relevant answers
To get a highly relevant response, make sure your requests include any important details or context; otherwise you are leaving it to the model to guess what you mean. This is especially important when working with domain-specific requests.
| Worse | Better |
|---|---|
| Summarize the meeting notes. | Summarize the meeting notes in a single paragraph. Then write a markdown list of the speakers and each of their key points. Finally, list the next steps or action items suggested by the speakers, if any. |
| Get the total. | Extract the total sales amount from the financial report table in the document. The table lists monthly sales for each product category. Report the total sales amount for the entire year across all categories. |
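If you assemble prompts in code, the same principle applies. The sketch below is only illustrative; it contrasts the vague and detailed prompts from the table above as Python strings:

```python
# Vague: leaves the model to guess the format, scope, and fallback behavior.
vague_prompt = "Summarize the meeting notes."

# Detailed: states the expected format, sections, and what to do if something is missing.
detailed_prompt = (
    "Summarize the meeting notes below in a single paragraph. "
    "Then write a markdown list of the speakers and each of their key points. "
    "Finally, list the next steps or action items suggested by the speakers, "
    "or state that none were mentioned.\n\n"
    "Meeting notes:\n{meeting_notes}"
)
```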
Provide Examples
Also known as “few-shot” prompting, providing examples is one of the most effective ways to steer a model, especially when you want a specific type of response. For example, if you want summaries of documents, you can provide a few examples of the kind of summaries you expect.
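A minimal sketch of a few-shot prompt (the example documents and summaries here are made up; only the pattern of showing input/output pairs before the real input matters):

```python
# Two worked examples show the model the style and length of summary expected,
# then the real document is appended in the same format.
few_shot_prompt = """Summarize each document in one sentence focused on the outcome.

Document: Q1 revenue rose 12% on strong subscription growth, while costs stayed flat.
Summary: Revenue grew 12% in Q1 with flat costs, driven by subscriptions.

Document: The audit found three minor compliance gaps, all resolved before filing.
Summary: An audit surfaced three minor compliance gaps that were fixed before filing.

Document: {document_text}
Summary:"""
```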
Give models time to “think”
If asked to multiply 17 by 28, you might not know the answer instantly, but you can still work it out given a moment. Similarly, models make more reasoning errors when they try to answer right away than when they take time to work out an answer. Asking for a “chain of thought” before the answer helps the model reason its way toward correct answers more reliably.
For example, if you ask an agent to classify a document, you can also ask it to provide the reasoning behind its classification. This helps the agent arrive at a more accurate classification.
Practically, to apply this in your agents with the `output_schema` configuration, you can ask the agent to output a reasoning entity first, and then a classification entity.
For example, a schema that asks for the reasoning entity before the classification, as in the sketch below (the field names and JSON-Schema-style syntax are illustrative, not the exact VolansDB `output_schema` format):
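```python
# Illustrative only: field names and JSON-Schema-style syntax are assumptions,
# not the exact VolansDB output_schema format. The point is that "reasoning"
# comes before "classification", so the model works through its rationale first.
output_schema = {
    "type": "object",
    "properties": {
        "reasoning": {
            "type": "string",
            "description": "Step-by-step explanation of why the document fits the chosen category.",
        },
        "classification": {
            "type": "string",
            "description": "The final document category.",
        },
    },
    "required": ["reasoning", "classification"],
}
```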
works better than a schema that asks only for the classification:
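```python
# Same caveats as above. Here the model is asked for the classification alone,
# with no room to reason before committing to an answer.
output_schema = {
    "type": "object",
    "properties": {
        "classification": {
            "type": "string",
            "description": "The final document category.",
        },
    },
    "required": ["classification"],
}
```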
Split complex tasks into simpler subtasks
If you have a complex task, it is often better to split it into simpler subtasks. This makes it easier for the agent to understand what you are asking, and it also forces the agent to “think” about the task in a structured way.
For example, if extracting a piece of information from a document requires multiple steps, you can split the task: ask the agent to extract the intermediate results first, and then use those results to produce the final answer, as in the sketch below.
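A rough sketch of the pattern (the `run_agent` helper below is a hypothetical stand-in for however you invoke a VolansDB agent; only the two-step structure is the point):

```python
def run_agent(prompt: str) -> str:
    # Hypothetical stand-in: replace with an actual call to your agent.
    raise NotImplementedError

def extract_yearly_total(document_text: str) -> str:
    # Subtask 1: extract the intermediate results (monthly sales per category).
    monthly_sales = run_agent(
        "Extract the monthly sales for each product category from the report below "
        "as a markdown table with columns: month, category, amount.\n\n" + document_text
    )
    # Subtask 2: use the intermediate table to produce the final answer.
    return run_agent(
        "Using the sales table below, add up the amounts and report the total sales "
        "for the entire year across all categories.\n\n" + monthly_sales
    )
```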