Mary Grygleski is a Java Champion and a passionate Senior Developer Advocate at DataStax, a leading Data Management company that champions OSS and specializes in Big Data/NoSQL, Streaming, Managed Data Cloud platforms, and Real-Time AI systems. She has over 20 years of hands-on software engineering and technical architecture experience in Java and Open Source. She is an active tech community builder outside of her day job and is currently the President of the 3,000+ member-strong Chicago Java Users Group (CJUG).
Boost LLMs with Event Streaming and Retrieval-Augmented Generation
Large language models (LLMs), such as the one behind ChatGPT, are pre-trained offline and have been shown to store factual knowledge in their parameters, achieving state-of-the-art results when fine-tuned on downstream natural language processing (NLP) tasks. However, their ability to access and precisely manipulate knowledge, especially up-to-date knowledge, remains limited, so on knowledge-intensive tasks their performance lags behind task-specific architectures.
To overcome these limitations, Retrieval-Augmented Generation (RAG) is a technique that retrieves data from outside the foundation model and augments the prompt by injecting the relevant retrieved data into the context. RAG has proven more cost-effective and efficient than pre-training or fine-tuning foundation models, and it can help reduce hallucinations in LLMs.
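The pattern boils down to three steps: retrieve relevant documents, inject them into the prompt, and let the model generate an answer grounded in that context. Here is a minimal Java sketch of those steps; the VectorStore and ChatModel interfaces are hypothetical stand-ins for whatever embedding store and LLM client you use, not any particular library's API.

```java
import java.util.List;

// A minimal sketch of the RAG pattern. VectorStore and ChatModel are
// hypothetical interfaces standing in for your embedding store and LLM
// client; they do not represent any specific library's API.
public class RagSketch {

    interface VectorStore {
        // Return the k stored documents most similar to the query.
        List<String> similaritySearch(String query, int k);
    }

    interface ChatModel {
        String complete(String prompt);
    }

    private final VectorStore store;
    private final ChatModel llm;

    RagSketch(VectorStore store, ChatModel llm) {
        this.store = store;
        this.llm = llm;
    }

    String answer(String question) {
        // 1. Retrieve: pull relevant, up-to-date data from outside the model.
        List<String> docs = store.similaritySearch(question, 3);

        // 2. Augment: inject the retrieved data into the prompt's context.
        String prompt = """
                Answer the question using only the context below.
                If the context is insufficient, say so rather than guessing.

                Context:
                %s

                Question: %s
                """.formatted(String.join("\n---\n", docs), question);

        // 3. Generate: the LLM answers grounded in the retrieved context.
        return llm.complete(prompt);
    }
}
```

Because the knowledge lives outside the model, refreshing it is just a matter of updating the store, with no retraining required.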
We'll take a look at how an event-driven streaming approach, using the open source library LangStream, can quickly integrate your existing data in motion into generative AI applications through techniques such as prompt engineering and the RAG pattern.
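As a rough illustration, here is a sketch of what such a pipeline definition can look like (LangStream applications are declared in YAML, with agents wired between topics). The agent types shown follow LangStream's documented agent naming, but the specific field names, models, queries, and topics here are illustrative assumptions rather than a verified configuration.

```yaml
# Illustrative LangStream pipeline sketch: questions stream in,
# RAG-augmented answers stream out. Details are assumptions.
topics:
  - name: "questions-topic"
    creation-mode: create-if-not-exists
  - name: "answers-topic"
    creation-mode: create-if-not-exists

pipeline:
  - name: "compute question embedding"
    type: "compute-ai-embeddings"
    input: "questions-topic"
    configuration:
      model: "text-embedding-ada-002"     # assumed embedding model
      embeddings-field: "value.embedding"
      text: "{{ value.question }}"

  - name: "retrieve related documents"
    type: "query-vector-db"
    configuration:
      datasource: "VectorStore"           # assumed datasource name
      query: "SELECT text FROM documents ORDER BY similarity(embedding, ?) LIMIT 3"
      fields:
        - "value.embedding"
      output-field: "value.related_docs"

  - name: "answer with retrieved context"
    type: "ai-chat-completions"
    output: "answers-topic"
    configuration:
      model: "gpt-3.5-turbo"              # assumed chat model
      completion-field: "value.answer"
      messages:
        - role: user
          content: |
            Use the following context to answer the question.
            Context: {{ value.related_docs }}
            Question: {{ value.question }}
```

Because each agent consumes from and produces to a topic, new events flow through embedding, retrieval, and completion as they arrive, which is what keeps the model's context current with your data in motion.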