GenAI artificial-intelligence prompt-engineering
As we continue our journey into Generative AI (GenAI), now that we have some basics under our belt, it is beneficial for us as developers to build something tangible and get our hands dirty. In this post, we will dive into building our first single-purpose GenAI application: a lightweight tool that leverages a local LLM to generate responses based on conversational context. Our tech stack consists of .NET and Semantic Kernel, and we will use Ollama to run our local LLM. By the end of this post, you'll have a working example you can run entirely on your own machine.
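To give a feel for where we are headed, here is a minimal sketch of the kind of application we will build. It assumes the prerelease `Microsoft.SemanticKernel.Connectors.Ollama` NuGet package is installed, that Ollama is running on its default local endpoint, and that a model (here assumed to be `llama3`) has already been pulled with `ollama pull`; the model name and prompts are illustrative placeholders, not part of the final app.

```csharp
using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.ChatCompletion;

// Build a kernel wired to a local Ollama server.
// "llama3" is an assumed model name; substitute whichever model you pulled.
var builder = Kernel.CreateBuilder();
builder.AddOllamaChatCompletion(
    modelId: "llama3",
    endpoint: new Uri("http://localhost:11434")); // Ollama's default local endpoint

var kernel = builder.Build();
var chat = kernel.GetRequiredService<IChatCompletionService>();

// Maintain conversational context in a chat history and ask for a reply.
var history = new ChatHistory("You are a helpful assistant.");
history.AddUserMessage("Hello! What can you help me with?");

var reply = await chat.GetChatMessageContentAsync(history);
Console.WriteLine(reply.Content);
```

Because everything here talks to `localhost`, no API keys or cloud services are involved; the rest of the post fills in the details step by step.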