LLaMA: Open and Efficient Foundation Language Models

February 27, 2023 · arXiv:2302.13971

Authors

Hugo Touvron, Gautier Izacard, Marie-Anne Lachaux, Timothée Lacroix, Aurelien Rodriguez

Abstract

We introduce LLaMA, a collection of foundation language models ranging from 7B to 65B parameters. We train our models on trillions of tokens, and show that it is possible to train state-of-the-art models using publicly available datasets exclusively, without resorting to proprietary and inaccessible datasets.

In particular, LLaMA-13B outperforms GPT-3 (175B) on most benchmarks, and LLaMA-65B is competitive with the best models, Chinchilla-70B and PaLM-540B. We release all our models to the research community.
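Since the models were released to the research community, a minimal sketch of loading a LLaMA checkpoint with the Hugging Face transformers library is shown below. Note the assumptions: the official weights were distributed separately under a research license, and the repository id huggyllama/llama-7b is a community conversion used here purely for illustration, not an artifact of the original release.

```python
# Minimal sketch: loading a LLaMA checkpoint via Hugging Face transformers.
# Assumption: "huggyllama/llama-7b" is a community-converted checkpoint,
# not the official release artifact; substitute whatever checkpoint you
# have legitimate access to. Requires `transformers` and `sentencepiece`.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "huggyllama/llama-7b"  # assumed community repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Simple greedy generation to sanity-check the loaded model.
prompt = "The capital of France is"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```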
