Large language model


A large language model (LLM) is a type of artificial intelligence[1] system that uses machine learning[3] to understand and generate human-like text. These models, such as the GPT series and BERT, are built on the Transformer architecture, first introduced in 2017. LLMs are trained with techniques including tokenization, reinforcement learning, and fine-tuning to improve their performance, and they rely on attention mechanisms operating over a fixed context window. Despite their complexity, the cost of training these models has been decreasing over time, thanks in part to compression techniques such as post-training quantization. LLMs are commonly used in tool integration and intelligent agent[2] systems, where they support decision-making and reinforcement learning scenarios. Their effectiveness is measured with metrics such as entropy, perplexity, and cross-entropy. Understanding the strengths and weaknesses of these models is crucial to future improvements in AI capabilities.
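
As a concrete example of those metrics, perplexity is simply the exponentiated cross-entropy of the model's next-token predictions. A minimal sketch in Python (the token probabilities below are made-up illustrative values, not output from a real model):

import math

# Probabilities the model assigned to each actual next token in a
# held-out sequence (illustrative values, not from a real model).
token_probs = [0.25, 0.10, 0.60, 0.05]

# Cross-entropy: average negative log-probability per token (in nats).
cross_entropy = -sum(math.log(p) for p in token_probs) / len(token_probs)

# Perplexity: exponentiated cross-entropy; lower means the model is
# less "surprised" by the text it is evaluated on.
perplexity = math.exp(cross_entropy)

print(f"cross-entropy: {cross_entropy:.3f} nats, perplexity: {perplexity:.2f}")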

Terms definitions
1. artificial intelligence.
(1) Artificial Intelligence (AI) refers to the field of computer science that aims to create systems capable of performing tasks that would normally require human intelligence. These tasks include reasoning, learning, planning, perception, and language understanding. AI draws on fields including psychology, linguistics, philosophy, and neuroscience. The field is prominent in the development of machine learning models and natural language processing systems, and it plays a significant role in creating virtual assistants and affective computing systems. AI applications extend across sectors including healthcare, industry, government, and education. Despite its benefits, AI also raises ethical and societal concerns that necessitate regulatory policies. AI continues to evolve with advanced techniques such as deep learning and generative AI, opening new possibilities across industries.
(2) Artificial Intelligence, commonly known as AI, is a field of computer science dedicated to creating intelligent machines that perform tasks typically requiring human intellect. These tasks include problem-solving, recognizing speech, understanding natural language, and making decisions. AI is often categorized into two types: narrow AI, designed to perform a specific task such as voice recognition, and general AI, which can perform any intellectual task a human being can. It is a continuously evolving field that draws on computer science, mathematics, psychology, linguistics, and neuroscience. Core concepts of AI include reasoning, knowledge representation, planning, natural language processing, and perception. AI has wide-ranging applications across numerous sectors, from healthcare and gaming to the military and the creative arts, and its ethical considerations and challenges are pivotal to its development and implementation.
2. intelligent agent. An intelligent agent is a component of artificial intelligence that perceives its environment through sensors and interacts with it via actuators. These agents are designed to maximize the value of a performance measure based on their past experiences and knowledge. They are not just reactive, but can adapt to changes in their environment and proactively work towards achieving specific goals. They come in various types, including simple reflex agents, model-based reflex agents, goal-based agents, utility-based agents, and learning agents. Intelligent agents are used in diverse applications, such as developing autonomous systems, creating software agents, and conducting cognitive science studies. They offer a systematic way to test and compare different AI programs, and their study also bridges the gap between AI and economics.

A large language model (LLM) is a language model notable for its ability to achieve general-purpose language generation and other natural language processing tasks such as classification. LLMs acquire these abilities by learning statistical relationships from text documents during a computationally intensive self-supervised and semi-supervised training process. LLMs can be used for text generation, a form of generative AI, by taking an input text and repeatedly predicting the next token or word.
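
The "repeatedly predict the next token" loop can be written out explicitly. The sketch below uses the Hugging Face transformers library with greedy decoding; the gpt2 checkpoint and the 10-token generation length are arbitrary choices for illustration:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Any causal (decoder-only) checkpoint works; gpt2 is a small example.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

input_ids = tokenizer("The capital of France is", return_tensors="pt").input_ids

with torch.no_grad():
    for _ in range(10):  # generate 10 tokens, one at a time
        logits = model(input_ids).logits          # (1, seq_len, vocab_size)
        next_id = logits[0, -1].argmax()          # greedy: most likely next token
        input_ids = torch.cat([input_ids, next_id.view(1, 1)], dim=1)

print(tokenizer.decode(input_ids[0]))

In practice, generation usually samples from the predicted distribution rather than always taking the single most likely token, but the append-and-predict-again structure is the same.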

LLMs are artificial neural networks. As of March 2024, the largest and most capable models are built on a decoder-only transformer architecture, while some recent implementations are based on other architectures, such as recurrent neural network variants and Mamba (a state space model).
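
What makes such an architecture "decoder-only" is the causal mask in self-attention: each position may attend only to itself and earlier tokens. A toy single-head sketch in PyTorch (not any particular model's implementation; the shapes and random weights are purely illustrative):

import torch
import torch.nn.functional as F

def causal_self_attention(x, w_q, w_k, w_v):
    """Single-head self-attention with a causal mask (toy illustration)."""
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    scores = q @ k.transpose(-2, -1) / (k.size(-1) ** 0.5)
    # Causal mask: position i may only attend to positions <= i.
    seq_len = x.size(0)
    mask = torch.triu(torch.ones(seq_len, seq_len, dtype=torch.bool), diagonal=1)
    scores = scores.masked_fill(mask, float("-inf"))
    return F.softmax(scores, dim=-1) @ v

# Toy input: 4 tokens with an embedding size of 8.
torch.manual_seed(0)
x = torch.randn(4, 8)
w_q, w_k, w_v = (torch.randn(8, 8) for _ in range(3))
print(causal_self_attention(x, w_q, w_k, w_v).shape)  # torch.Size([4, 8])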

Up to 2020, fine-tuning was the only way to adapt a model to accomplish specific tasks. Larger models, such as GPT-3, however, can be prompt-engineered to achieve similar results. LLMs are thought to acquire knowledge of the syntax, semantics, and "ontology" inherent in human language corpora, but they also inherit the inaccuracies and biases present in those corpora.
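
Prompt engineering often amounts to demonstrating the task inside the input itself (so-called few-shot or in-context learning), with no change to the model's weights. A hypothetical sentiment-classification prompt might look like this:

# A few-shot prompt: the task is demonstrated in the text itself,
# and the model is expected to continue the pattern. No weights change.
prompt = """Classify the sentiment of each review as Positive or Negative.

Review: The film was a delight from start to finish.
Sentiment: Positive

Review: I walked out halfway through.
Sentiment: Negative

Review: An instant classic that I will rewatch for years.
Sentiment:"""

# This string would be sent to an LLM, whose next-token
# prediction ("Positive") completes the classification.
print(prompt)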

Some notable LLMs are OpenAI's GPT series (e.g., GPT-3.5 and GPT-4, used in ChatGPT and Microsoft Copilot), Google's PaLM and Gemini (the latter of which powers the chatbot of the same name), xAI's Grok, Meta's LLaMA family of open-source models, Anthropic's Claude models, Mistral AI's open-source models, and Databricks' open-source DBRX.
