large language model
English
Noun
large language model (plural large language models)
- (machine learning) A type of neural network specialized in language, typically comprising billions of parameters.
- Synonym: LLM
- Hypernyms: LM, language model, model
- Coordinate terms: SLM, small language model
- 2022 April 15, Steven Johnson, “A.I. Is Mastering Language. Should We Trust What It Says?”, in The New York Times:
- GPT-3 belongs to a category of deep learning known as a large language model, a complex neural net that has been trained on a titanic data set of text: in GPT-3’s case, roughly 700 gigabytes of data drawn from across the web, including Wikipedia, supplemented with a large collection of text from digitized books. GPT-3 is the most celebrated of the large language models, and the most publicly available, but Google, Meta (formerly known as Facebook) and DeepMind have all developed their own L.L.M.s in recent years.
- 2023 May 20, John Naughton, “When the tech boys start asking for new regulations, you know something’s up”, in The Observer:
- Less charitable observers (like this columnist) see two alternative interpretations. One is that it’s an attempt to consolidate OpenAI’s lead over the rest of the industry in large language models (LLMs), because history suggests that regulation often enhances dominance.
- 2025 April 13, Stephen Ornes, “Small Language Models Are the New Rage, Researchers Say. Larger models can pull off a wider variety of feats, but the reduced footprint of smaller models makes them attractive tools”, in Wired:
- Large language models work well because they’re so large. The latest models from OpenAI, Meta, and DeepSeek use hundreds of billions of “parameters” […] With more parameters, the models are better able to identify patterns and connections, which in turn makes them more powerful and accurate. But this power comes at a cost […] huge computational resources […] energy hogs […] In response, some researchers are now thinking small. IBM, Google, Microsoft, and OpenAI have all recently released small language models (SLMs) that use a few billion parameters—a fraction of their LLM counterparts.
Further reading
large language model on Wikipedia