• Published on
    In my previous post on using tools, I discussed how a Retrieval-Augmented Generation (RAG) system can be enhanced with plugins that handle specific queries using predefined functions. While this approach works well for a limited set of query types, it becomes difficult to scale as user queries grow more diverse and complex. In this post, I’ll explore how a plugin can generate dynamic SQL queries from natural language, enabling the AI assistant to answer a much broader range of questions.
  • Published on
    In my previous post, I discussed using a Retrieval-Augmented Generation (RAG) approach to create an AI assistant that can answer questions based on your own data. However, retrieving records with the same semantic meaning as the user query is not sufficient for all use cases. In this post, I will discuss the scenarios where we need more than just semantic search to answer user queries, and one of the ways to solve those problems.
  • Published on
    Serverless technology has been around for a while now, and it has revolutionized the way we build and deploy applications. Azure Functions, a serverless compute service from Microsoft Azure, allows you to run code on demand without having to manage infrastructure. In this article, we will explore how running Azure Functions on a dedicated Kubernetes cluster, such as Azure Kubernetes Service (AKS), can provide added benefits and flexibility to your applications.
  • Published on
    A virtual assistant can help users by answering questions, providing information, and performing tasks. In the past, virtual assistants were built using predefined rules and templates. This approach limited the number of tasks a virtual assistant could perform and the quality of the responses it could generate.
  • Published on
    Large Language Models (LLMs) are a class of machine learning models trained on large amounts of text data to generate human-like text. One application of LLMs is natural language processing (NLP), which deals with understanding and generating natural language text. NLP enables many intelligent applications, such as chatbots, voice assistants, text analytics, and more.
  • Published on
    API management is a multi-faceted process that involves several teams and stakeholders. It includes designing, building, and publishing APIs, and it is iterative, involving continuous improvement: adding new APIs, updating existing APIs, or deprecating old ones. It is important to adopt DevOps techniques for managing the API lifecycle to ensure quality, consistency, and improved productivity.