Having formed our understanding of Retrieval Augmented Generation (RAG) in the previous post, we now turn to putting it into practice. In this post, we will build a simple RAG application using:

Retrieval Augmented Generation, aka RAG, is a new paradigm in the world of Generative AI that allows AI systems to provide more contextual, accurate and personalised responses by combining the power of an LLM with rich, proprietary data sets. These data sets can range from internal documents and databases to APIs and research papers. This approach lifts LLMs from providing generic responses to delivering domain-specific ones.
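To make the idea concrete, below is a minimal, framework-free sketch of the retrieve-augment-generate flow in C#. The documents, the question and the naive keyword scoring are illustrative placeholders only; a realistic system would retrieve with embeddings and a vector store, and the final prompt would be sent to an LLM rather than printed.

```csharp
using System;
using System.Linq;

// Step 1 (Retrieve): a toy "data set" standing in for internal documents or a database.
var documents = new[]
{
    "Refunds are processed within 5 business days of approval.",
    "Premium support is available to enterprise customers only.",
    "The API rate limit is 100 requests per minute per key."
};

var question = "How long do refunds take?";

// Naive retrieval: rank documents by how many of the question's words they contain.
// A real RAG system would use embeddings and a vector store here.
var questionWords = question.ToLowerInvariant()
    .Split(' ', StringSplitOptions.RemoveEmptyEntries)
    .Select(word => word.Trim('?', '.', ','));
var topMatches = documents
    .OrderByDescending(doc => questionWords.Count(word => doc.ToLowerInvariant().Contains(word)))
    .Take(2);

// Step 2 (Augment): the retrieved snippets become grounding context in the prompt.
var augmentedPrompt =
    "Answer the question using only the context below.\n\n" +
    $"Context:\n{string.Join("\n", topMatches)}\n\n" +
    $"Question: {question}";

// Step 3 (Generate): in a real application this augmented prompt is sent to the LLM.
Console.WriteLine(augmentedPrompt);
```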

This blog post (fourth in the Uncovering GenAI series) picks apart the RAG paradigm and dives deeper: it explains the basics and then explores what realistic RAG systems look like.

As we continue our journey into uncovering Generative AI (GenAI), and with some basics under our belt, it is beneficial for us as developers to build something tangible and get our hands dirty. In this post, we will dive into building our first single-purpose GenAI application: a lightweight tool that leverages a local LLM to generate responses based on conversational context. Our tech stack will consist of .NET and Semantic Kernel, and we will use Ollama to run the local LLM. By the end of this post, you'll have a working example you can run entirely on your own machine.
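To give a feel for where we are heading, here is a minimal sketch of such a console chat loop: Semantic Kernel talking to a model served locally by Ollama, with the conversation kept in a ChatHistory so each turn sees the prior context. It assumes the prerelease Microsoft.SemanticKernel.Connectors.Ollama connector (whose API may change), a local Ollama instance on its default port, and a model such as llama3.1 already pulled; treat the package, model id and endpoint as placeholders to adjust for your setup.

```csharp
// Assumed packages: Microsoft.SemanticKernel and the prerelease
// Microsoft.SemanticKernel.Connectors.Ollama connector (API subject to change).
using System;
using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.ChatCompletion;

var builder = Kernel.CreateBuilder();

// Point Semantic Kernel at the local Ollama endpoint (default port 11434).
// "llama3.1" is a placeholder; use any model you have pulled with `ollama pull`.
#pragma warning disable SKEXP0070 // the Ollama connector is marked experimental at the time of writing
builder.AddOllamaChatCompletion("llama3.1", new Uri("http://localhost:11434"));
#pragma warning restore SKEXP0070

var kernel = builder.Build();
var chat = kernel.GetRequiredService<IChatCompletionService>();

// Keep the conversational context so every turn sees the earlier ones.
var history = new ChatHistory();
history.AddSystemMessage("You are a concise, helpful assistant.");

while (true)
{
    Console.Write("You: ");
    var input = Console.ReadLine();
    if (string.IsNullOrWhiteSpace(input)) break;

    history.AddUserMessage(input);
    var reply = await chat.GetChatMessageContentAsync(history);
    history.AddAssistantMessage(reply.Content ?? string.Empty);

    Console.WriteLine($"Assistant: {reply.Content}");
}
```

We will unpack each of these pieces as we go.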
The pace at which GenAI has exploded from novelty to necessity is reflected in its landscape of tools, platforms, communities and breakthroughs. The field is evolving so rapidly that staying up to date can easily feel like a full-time job. Hence, this blog focuses on the current state of the ecosystem to help readers get started.
Many products are actively finding innovative GenAI use-cases to help their customer base. As software engineers and architects, we are tasked with building the applications and integrations that sit behind the scenes powering these use-cases. A core belief guiding this approach is that establishing a well-formed understanding of a technology before integrating it helps uncover its maximum benefit. For those who share this belief, or who simply like to go down the rabbit-hole, this blog is the first of the series to uncover the basics of GenAI (i.e., remove the "buzz" from the buzzwords).