Research & Insights

Boost Your Search with LLM Query Rewriting

Search functionality is crucial for your app or website, yet users often struggle to find what they need because their queries are vague or ambiguous. The result is frustration, low engagement, and missed opportunities. Fortunately, large language models (LLMs) can provide a solution.

Imagine seamlessly converting imprecise user queries into focused keywords. This would significantly improve search results, even for complex queries, allowing users to find exactly what they need with ease. Enhanced engagement would follow, making your search feature a competitive advantage. This is the potential of LLM-driven query rewriting, and it is more achievable than you might think.
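Here's a minimal sketch of what query rewriting can look like, assuming the OpenAI Python SDK (any chat-completion client works the same way); the model name, prompt wording, and example output are illustrative, not prescriptive:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

REWRITE_PROMPT = (
    "Rewrite the user's search query into a short list of focused "
    "search keywords. Return only the keywords, comma-separated.\n\n"
    "Query: {query}"
)

def rewrite_query(query: str) -> str:
    """Turn a vague user query into focused search keywords."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative; use whatever model you have access to
        messages=[{"role": "user", "content": REWRITE_PROMPT.format(query=query)}],
        temperature=0,  # keep rewrites deterministic
    )
    return response.choices[0].message.content.strip()

# e.g. rewrite_query("that thing for fixing squeaky doors")
# might yield something like: "door hinge lubricant, squeaky door repair"
```

The rewritten keywords then feed into your existing search index, so the LLM sits in front of the search engine rather than replacing it.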

How to Unlock Advanced Reasoning in LLMs

Chain-of-thought (CoT) prompting encourages the LLM to break down its reasoning into a step-by-step process before providing a final answer. This has several key benefits:

  • Improved accuracy on complex reasoning tasks
  • Greater transparency into the model's thought process
  • Reduced hallucination by grounding the output in a logical sequence

While simply adding "Let's think step by step" to your prompts can help, there are more advanced techniques to make CoT even more effective. Here are three key strategies.
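As a baseline before those advanced strategies, here is what the simple zero-shot trigger looks like in code. This is a sketch, assuming the OpenAI Python SDK; the model name and question are placeholders:

```python
from openai import OpenAI

client = OpenAI()

question = (
    "A bat and a ball cost $1.10 in total. The bat costs $1.00 more "
    "than the ball. How much does the ball cost?"
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative
    messages=[{
        "role": "user",
        # The trailing instruction is the classic zero-shot CoT trigger:
        # it nudges the model to lay out intermediate steps before answering.
        "content": f"{question}\n\nLet's think step by step.",
    }],
)
print(response.choices[0].message.content)
```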

How to Increase Diversity while Maintaining Accuracy in LLM Outputs

When working with large language models (LLMs), it's common to want varied and diverse outputs, rather than the model repeatedly generating similar responses. The go-to solution is often to increase the temperature parameter, which makes outputs more random by flattening the probability distribution. However, simply increasing temperature can lead to incoherent or low-quality outputs.
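To see why, it helps to look at what temperature actually does: it divides the logits before the softmax, so values above 1 flatten the distribution and push probability mass toward unlikely tokens. A self-contained sketch with toy numbers:

```python
import numpy as np

def softmax_with_temperature(logits: np.ndarray, temperature: float) -> np.ndarray:
    """Convert logits to token probabilities, scaled by temperature."""
    scaled = logits / temperature
    exp = np.exp(scaled - scaled.max())  # subtract max for numerical stability
    return exp / exp.sum()

logits = np.array([4.0, 2.0, 1.0, 0.5])  # toy next-token logits

for t in (0.5, 1.0, 2.0):
    print(t, softmax_with_temperature(logits, t).round(3))
# At t=0.5 the top token dominates; at t=2.0 the distribution flattens,
# so low-probability (often low-quality) tokens get sampled far more often.
```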

Fortunately, there are several alternative techniques we can use to generate a wider variety of outputs while still maintaining coherence and quality. In this post, we'll explore 5 practical strategies you can implement today to get more diverse results from your LLMs.

The Role of Examples in Prompt Engineering

In the world of large language models (LLMs), examples play a pivotal role in shaping model behavior. Through a technique called "n-shot prompting", providing a set of well-crafted examples in the input prompt can dramatically improve the model's ability to understand the desired task and generate relevant outputs.

However, not all examples are created equal. Poorly chosen examples can lead to subpar results, wasted resources, and frustration for both developers and end-users. On the other hand, a thoughtfully curated set of examples can unlock the true potential of LLMs, enabling them to tackle complex tasks with ease.
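A minimal sketch of n-shot prompting, again assuming the OpenAI Python SDK; the task (sentiment labeling) and the three examples are illustrative stand-ins for your own curated set:

```python
from openai import OpenAI

client = OpenAI()

# A small, curated set of input/output examples ("shots").
EXAMPLES = [
    ("The checkout flow was seamless.", "positive"),
    ("The app crashes every time I open it.", "negative"),
    ("It arrived on Tuesday.", "neutral"),
]

def classify_sentiment(text: str) -> str:
    """Label sentiment, using the examples above as in-context demonstrations."""
    shots = "\n\n".join(f"Text: {inp}\nLabel: {out}" for inp, out in EXAMPLES)
    prompt = f"{shots}\n\nText: {text}\nLabel:"
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return response.choices[0].message.content.strip()
```

The demonstrations both define the label space and fix the output format, which is exactly the behavior-shaping effect described above.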

The Power of Small, Focused Prompts

As adoption of large language models (LLMs) grows, it's tempting to create highly complex prompts to handle a variety of tasks. After all, if an LLM can engage in open-ended dialogue, surely it can tackle any request we throw at it, right?

Not so fast. My experience building dozens of LLM-powered applications has revealed an important insight: Smaller, single-purpose prompts consistently outperform large, complex ones. Let's dive into why this approach is so effective.
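One way to picture the difference, as a sketch (the pipeline, prompts, and helper names here are hypothetical): rather than one sprawling prompt that summarizes, translates, and formats in a single call, chain three single-purpose calls:

```python
from openai import OpenAI

client = OpenAI()

def run(instruction: str, text: str) -> str:
    """One small, single-purpose LLM call."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative
        messages=[{"role": "user", "content": f"{instruction}\n\n{text}"}],
    )
    return response.choices[0].message.content

def process_article(article: str) -> str:
    # Each step does exactly one thing, so it can be tested,
    # evaluated, and tuned in isolation.
    summary = run("Summarize the following article in three sentences.", article)
    translated = run("Translate the following text into French.", summary)
    return run("Format the following text as a bulleted list.", translated)
```

Decomposing this way trades a little latency for prompts that are easier to debug, measure, and swap out independently.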