This workshop explores the evolving landscape of AI-powered conversational systems, focusing on chatbot development using Microsoft Azure and OpenAI services. Participants will gain practical, hands-on insight into building intelligent, context-aware chatbots powered by large language models (LLMs) and enhanced through Retrieval-Augmented Generation (RAG), a technique that grounds model responses in external knowledge so they are more accurate and reliable.
In this session, I will demonstrate how to develop a RAG-based chatbot using the Azure and OpenAI ecosystem. The workshop will introduce the foundational components of a RAG pipeline and present a streamlined low-code/no-code approach for rapid prototyping and deployment using Azure tools and services.
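As a rough illustration of the pieces such a pipeline involves, the sketch below wires together retrieval and generation with the Azure OpenAI Python SDK: documents are embedded, the closest match to a question is retrieved, and the chat model is asked to answer from that context. The endpoint, API key, deployment names, and the tiny in-memory document store are placeholder assumptions for illustration, not the exact setup used in the workshop.

```python
# Minimal RAG sketch: embed documents, retrieve the closest match for a
# question, and ground the chat completion in the retrieved text.
# Endpoint, key, and deployment names below are placeholders.
import numpy as np
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://<your-resource>.openai.azure.com",
    api_key="<your-api-key>",
    api_version="2024-02-01",
)

documents = [
    "Course registration opens on the first Monday of each semester.",
    "Lab access requires a completed safety training certificate.",
]

def embed(texts):
    # "text-embedding-ada-002" is an assumed embedding deployment name.
    response = client.embeddings.create(model="text-embedding-ada-002", input=texts)
    return np.array([item.embedding for item in response.data])

doc_vectors = embed(documents)

question = "When can I register for courses?"
query_vector = embed([question])[0]

# Cosine similarity against every stored document; keep the best match.
scores = doc_vectors @ query_vector / (
    np.linalg.norm(doc_vectors, axis=1) * np.linalg.norm(query_vector)
)
context = documents[int(np.argmax(scores))]

# Ask the chat model to answer using only the retrieved context.
completion = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed chat deployment name
    messages=[
        {"role": "system", "content": f"Answer using only this context: {context}"},
        {"role": "user", "content": question},
    ],
)
print(completion.choices[0].message.content)
```

In practice, the workshop's low-code/no-code path replaces the hand-rolled retrieval step with managed Azure services, but the flow of embed, retrieve, and generate stays the same.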
This workshop offers a clear and simplified overview of how to integrate different types of data—such as documents and structured content—into language model-based systems to generate reliable, domain-specific outputs. Whether you're interested in educational technologies, research automation, or enterprise applications, this session will equip you with the knowledge and tools to design chatbots tailored to your specific datasets and use cases.
This workshop is designed for anyone interested in integrating AI into their work. No prior experience with chatbot development is required; a basic understanding of cloud platforms such as Azure, general AI terminology, or concepts like RAG and LLMs will be helpful but is not mandatory. The session is intended to be accessible to beginners while still being informative for those already exploring Retrieval-Augmented Generation and large language models.
Presented by Nafisa Islam
Register for this event.