Tools

  • Published on
Evaluating the quality of responses generated by Large Language Models (LLMs) is essential for building reliable and effective AI solutions. Unlike traditional software, though, this is not as straightforward as running a unit test that returns a pass/fail result. In this post, we will explore techniques for evaluating LLM responses.
  • Published on
In my previous post on using tools, I discussed how a Retrieval-Augmented Generation (RAG) system can be enhanced with plugins that handle specific queries using predefined functions. While this approach works well for a limited set of query types, it becomes difficult to scale as user queries grow more diverse and complex. In this post, I'll explore how we can use a plugin to generate dynamic SQL queries from natural language, enabling the AI assistant to answer a much broader range of questions.
  • Published on
In my previous post, I discussed using a Retrieval-Augmented Generation (RAG) approach to create an AI assistant that can answer questions based on your own data. However, retrieving records that share the same semantic meaning as the user query is not sufficient for every use case. In this post, I will discuss scenarios where we need more than semantic search to answer user queries, and one way to solve those problems.