A Large Language Model, or LLM, is a machine learning model that generates text by repeatedly predicting the next word in a sequence, assigning a probability to each candidate word.
LLMs are trained on enormous datasets scraped from the internet to improve the accuracy of these predictions.
The "Large" in LLM refers not just to the size of the training data, but to the massive number of parameters that must be adjusted during training.
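The next-word prediction described above can be sketched in a few lines. The probabilities here are invented for illustration; a real LLM would compute them from billions of parameters.

```python
import random

# Hypothetical probabilities for the word following "the cat"
# (illustrative numbers, not taken from any real model).
next_word_probs = {"sat": 0.5, "ran": 0.3, "slept": 0.2}

def sample_next_word(probs):
    """Pick the next word at random, weighted by its probability."""
    words = list(probs)
    weights = [probs[w] for w in words]
    return random.choices(words, weights=weights, k=1)[0]

# Extend the sentence one word at a time, as an LLM does.
sentence = ["the", "cat", sample_next_word(next_word_probs)]
print(" ".join(sentence))
```

Generation is just this step in a loop: the chosen word is appended to the sequence, and the model is asked for the probabilities of the word after that.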
The transformer, an architecture initially developed at Google, enabled the parallelization and attention operations at the heart of modern LLM training.
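A minimal NumPy sketch of the scaled dot-product attention the transformer introduced, shown here with toy random data: every query row is compared against all keys in a single matrix product, which is what makes the operation parallelizable.

```python
import numpy as np

def attention(Q, K, V):
    """Scaled dot-product attention: each query attends to all keys
    at once, so every position is processed in parallel."""
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # similarity of queries to keys
    # Softmax over each row turns scores into attention weights.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V               # weighted sum of the values

# Three token positions with 4-dimensional embeddings (random toy data).
rng = np.random.default_rng(0)
Q = rng.standard_normal((3, 4))
K = rng.standard_normal((3, 4))
V = rng.standard_normal((3, 4))
out = attention(Q, K, V)
print(out.shape)  # → (3, 4): a context-aware vector per token
```

In a real transformer this runs with learned projections, many attention heads, and far larger dimensions, but the core computation is the same.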
LLMs have revolutionized a range of natural language processing and information retrieval tasks, from translation to chatbots to recommendation systems.
They have been productized in applications ranging from Meta AI to Google's Gemini and OpenAI's ChatGPT.
LLMs are a nascent technology evolving at lightning speed; they offer a wealth of opportunities for researchers and companies, and easier access to information for consumers.
Genomic Language Models, or gLMs, are also used to accelerate research on genome interactions, a task on which these models are proving particularly effective.
LLMs have been highly successful so far, but improvements in accuracy remain essential to their future development and productive use.
LLMs are tools that enable fast feedback loops, supporting steady progress in science and helping companies provide better services to their customers.