The transformer model introduced in 2017 initially seemed like a clever hack to many researchers in the language domain.
The arrival of Google's BERT model in 2018, built on transformers, revolutionized language tasks and sparked a surge of interest in 'BERTology.'
The launch of OpenAI's GPT-3 in 2020, more than a hundred times larger than GPT-2, amazed researchers with the capabilities that emerged at that scale.
The field of NLP began to divide between researchers who emphasized scaling up models and those who raised concerns about ethics and low-resource languages.
The introduction of ChatGPT in 2022 had a seismic impact, rendering entire research areas obsolete and forcing many researchers to shift their focus.
Some researchers grew disillusioned with the corporate atmosphere surrounding GPT-3 and with the field's benchmark-driven culture.
The shifting landscape of NLP prompted varied reactions: some researchers embraced the advances, while others stressed the limitations of large language models.
By 2024, NLP was undergoing a period of consolidation and adaptation, with researchers grappling with the influence of funding and corporate involvement.
The emergence of LLMs led to a flood of attention and scrutiny, propelling researchers like Ellie Pavlick into the limelight.
The future of NLP remains uncertain, with ongoing debate over whether the field is experiencing a genuine paradigm shift or simply an evolution of existing ideas.