LLM

  • Published on
    In my previous post on using tools, I discussed how a Retrieval-Augmented Generation (RAG) system can be enhanced with plugins that handle specific queries using predefined functions. While this approach works well for a limited set of query types, it becomes difficult to scale as user queries grow more diverse and complex. In this post, I’ll explore how a plugin can generate dynamic SQL queries from natural language, enabling the AI assistant to answer a much broader range of questions.
  • Published on
    In my previous post, I discussed using a Retrieval-Augmented Generation (RAG) approach to create an AI assistant that can answer questions based on your own data. Retrieving the records that share semantic meaning with the user query is not sufficient for all use cases. In this post, I will discuss the scenarios where we need more than semantic search to answer user queries, and one way to solve those problems.
  • Published on
    A virtual assistant can help users by answering questions, providing information, and performing tasks. In the past, virtual assistants were built using predefined rules and templates. This approach limited both the number of tasks a virtual assistant could perform and the quality of the responses it could generate.
  • Published on
    Large Language Models (LLMs) are a class of machine learning models trained on large amounts of text data to generate human-like text. One application of LLMs is natural language processing (NLP), which deals with understanding and generating natural language text. NLP enables many intelligent applications, such as chatbots, voice assistants, text analytics, and more.
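The dynamic-SQL plugin idea from the first post above can be sketched roughly as follows. This is a minimal illustration, not code from the posts: the schema, the `llm` callable, and the question are all hypothetical placeholders, and the "LLM" here is stubbed with a canned response so the example runs standalone against an in-memory SQLite database.

```python
import sqlite3

# Hypothetical schema; a real plugin would introspect the actual database.
SCHEMA = "CREATE TABLE orders (id INTEGER, customer TEXT, total REAL)"

def build_prompt(question: str) -> str:
    # Including the schema in the prompt helps the model emit valid
    # table and column names.
    return (
        "Given this schema:\n" + SCHEMA + "\n"
        "Write a single SQLite query answering: " + question
    )

def answer(question, llm, conn):
    # `llm` is any callable that maps a prompt string to generated text.
    sql = llm(build_prompt(question))
    return conn.execute(sql).fetchall()

# Demo with sample data and a stubbed "LLM" returning a canned query.
conn = sqlite3.connect(":memory:")
conn.execute(SCHEMA)
conn.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                 [(1, "Ada", 30.0), (2, "Bob", 12.5)])
stub_llm = lambda prompt: "SELECT customer, total FROM orders WHERE total > 20"
print(answer("Which customers spent more than 20?", stub_llm, conn))
```

In a production setting the generated SQL would need validation (e.g. allowing only read-only statements) before execution, since the model's output is untrusted input.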