Rectified Flow
Rectified Flow is a novel method in generative modeling that simplifies and speeds up the generation of complex data, such as images or 3D shapes. Unlike traditional diffusion models that gradually denoise a sample over many steps, rectified flow directly...
RAGAS
RAGAS (Retrieval-Augmented Generation Assessment System) is an evaluation framework designed to measure the performance of retrieval-augmented generation (RAG) systems. RAGAS focuses on how effectively a system retrieves relevant information and generates coherent, factual responses based on that information, addressing both...
RAG
RAG (Retrieval Augmented Generation) is a technique that enhances generative AI models—such as large language models—by integrating external, domain-specific knowledge sources into the generation process. Rather than relying solely on patterns learned from pre-training, RAG dynamically fetches relevant information from...
Prompt Injection
Prompt Injection is a security vulnerability in AI systems, particularly large language models (LLMs). It occurs when an attacker manipulates model behavior by inserting malicious instructions into user input. As a result, the model may bypass intended restrictions, leak information,...
Prompt Engineering
Prompt Engineering is the process of crafting effective inputs to guide the behavior of large language models (LLMs) and other AI systems. By carefully designing prompts, users can influence the quality, relevance, style, and accuracy of AI-generated outputs. Prompt engineering...
Pre-training
Pre-training is the process of training a machine learning model on a large, general-purpose dataset before adapting it to a specific task. By learning broad patterns, structures, and representations from unlabeled or widely available data, a model develops a rich...
Post-training
Post-training refers to the stage that follows the initial model training process, focusing on refining, optimizing, and preparing the model for deployment in real-world scenarios. After a model has been trained—often through pre-training and fine-tuning—it may still benefit from additional...
Perplexity
In the context of natural language processing (NLP) and machine learning, perplexity is a metric used to evaluate the performance of language models. It measures how well a model predicts a sequence of words, with lower perplexity indicating better performance....