Specialised AI Models Rise as LLMs Stall
AI is evolving rapidly, but with the rise of large language models (LLMs) like ChatGPT, there are increasing concerns that these models have reached a developmental plateau. In a recent interview with Locaria’s Senior Data Scientist Thorsten Brueckner, we delved into the current challenges facing LLMs, and the growing importance of specialised AI models.
The Limitations of Large Language Models
LLMs have made huge strides over the past few years, enabling new capabilities across a range of industries. However, as Thorsten noted, we may be seeing the limits of this technology. LLMs, which rely on enormous datasets and increasingly powerful computational resources, have begun to show signs of stagnation. The rapid growth in performance seen between 2022 and early 2023 has slowed significantly, with incremental improvements becoming harder to achieve.
At the heart of this challenge is the need for data. Thorsten explained that LLMs require vast amounts of information to improve, but we’ve essentially “scraped the internet” and digitised much of the world’s written content. As a result, there is little room for further growth without access to new, high-quality data sources. Attempts to generate synthetic data, where one LLM creates data for another to learn from, have had limited success. This leads to a cycle of mistakes, sometimes called model collapse, where errors are perpetuated rather than corrected.
Additionally, scaling up the computational resources to improve these models is not sustainable. The energy consumption of data centres dedicated to training LLMs is staggering, and as Thorsten highlighted, OpenAI’s servers already consume more energy than some entire countries. This raises questions about the environmental and economic viability of continuing to build on this model.
The Promise of Specialised Models
In contrast to the broad approach of LLMs, specialised models are gaining traction as a more efficient solution for certain tasks. These models are designed to focus on a specific function or industry, trained on proprietary data that is not accessible to general-purpose models. Thorsten pointed to the example of creating a translation model tailored to a brand, a service already offered by Locaria, where the model is trained on years of proprietary translation data, offering a level of accuracy and context that an LLM simply cannot replicate.
The key advantage of specialised models lies in their ability to perform specific tasks with greater precision and efficiency. By training a model on highly relevant data, organisations can ensure the model is finely tuned to their needs. This is particularly important in industries like translation where nuances in language, tone, and brand voice are crucial. For example, a specialised translation model for an ecommerce brand would be able to accurately translate content across languages while maintaining the brand’s unique identity, a task an LLM would struggle with due to its generalised nature.
Specialised models by Locaria:
PENTA-CON: PENTA-CON is a specialised tool which leverages a brand-trained LLM to analyse any online content, enabling the identification of key issues with precision and efficiency. Fully customisable to each client’s unique needs, it delivers tailored insights to support smarter decision-making and drive impactful results.
CLAISSIFY: CLAISSIFY leverages advanced LLMs to pre-classify keywords with 90% accuracy, saving time and effort while delivering precise results tailored to your needs. With CLAISSIFY, we can easily label keyword lists to track performance and uncover growth opportunities to optimise your content strategy in a tailor-made way.
Smaller, Smarter, and More Tailored
What makes specialised models especially appealing is their scalability. Unlike LLMs, which require immense amounts of data and computational power to improve, specialised models can be developed using smaller datasets and tailored to specific use cases. Thorsten emphasised that these models are often more cost-effective, as they are not reliant on vast external data sources but rather on the organisation’s own proprietary information.
Moreover, specialised models can be used in conjunction with automation tools to create highly effective systems. For example, a company might combine a translation model with an orchestration model that manages various tasks—such as sourcing content, reviewing translations, and ensuring quality control—to create a comprehensive solution. In this way, smaller models can work together like an orchestra, each performing its part to contribute to a larger, more complex task.
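The “orchestra” pattern described above can be sketched in a few lines of Python. This is a minimal illustration, not Locaria’s actual implementation: the step names and stub functions are hypothetical stand-ins for specialised models and automation components.

```python
# Minimal sketch of the "orchestra" pattern: small, single-purpose
# components (stubbed here as plain functions) coordinated by an
# orchestrator that runs each step in order and passes results along.
# All names are illustrative; this is not Locaria's implementation.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Pipeline:
    steps: list[tuple[str, Callable[[str], str]]] = field(default_factory=list)

    def add_step(self, name: str, fn: Callable[[str], str]) -> "Pipeline":
        self.steps.append((name, fn))
        return self

    def run(self, payload: str) -> str:
        for name, fn in self.steps:
            # Each specialised component transforms the payload in turn.
            payload = fn(payload)
        return payload

def source_content(text: str) -> str:
    # Stand-in for a content-sourcing step.
    return text.strip()

def translate(text: str) -> str:
    # Stand-in for a brand-trained translation model.
    return f"[FR] {text}"

def quality_check(text: str) -> str:
    # Stand-in for an automated quality-control pass.
    if not text:
        raise ValueError("empty translation")
    return text

pipeline = (
    Pipeline()
    .add_step("source", source_content)
    .add_step("translate", translate)
    .add_step("qa", quality_check)
)

print(pipeline.run("  Welcome to our store  "))  # → [FR] Welcome to our store
```

The design point is that each step stays small and replaceable: swapping in a different translation model, or adding a review step, does not require retraining one giant system.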
The Future of AI: A Hybrid Approach
The shift toward specialised models does not mean the end of LLMs. Rather, it points to a future where both approaches coexist, each serving its distinct purpose. While LLMs are still valuable for tasks that require a broad understanding of language and general knowledge, specialised models will increasingly be used for applications that demand precision and customisation.
One area where this hybrid approach is already being implemented is at Locaria, where Thorsten’s team combines automation with machine learning models. The company has developed tools that use LLMs in specific cases, such as categorising key terms or translating content for clients. However, the majority of tasks are handled by rule-based automation, which remains more efficient and reliable for many use cases.
For example, one of Locaria’s tools combines automation with a specialised LLM to handle complex categorisation tasks. In this system, most tasks are processed by automation, but if the automation encounters a scenario it cannot resolve, the LLM steps in to find a solution, often by searching the web and using the most relevant results. This approach demonstrates how automation and AI can work together seamlessly to optimise performance.
Looking Ahead: Specialised AI as a Key Trend for 2025
As we look toward the future, it’s clear that specialised AI models will play an increasingly important role in shaping the landscape of artificial intelligence. While LLMs have captured much of the attention in recent years, the limitations of these models are becoming more apparent. As Thorsten pointed out, the next wave of AI development will focus on creating smaller, more focused models that can be tailored to specific industries and use cases.
This shift is particularly exciting for businesses that deal with large amounts of proprietary data. By training specialised models on their own data, organisations can develop AI systems that are not only more efficient and cost-effective but also better suited to their unique needs. The ability to create AI that truly understands and serves the specific requirements of a company will unlock new possibilities for innovation and growth.
The future of AI is not just about bigger and more powerful models; it’s about smarter, more specialised models that can work together to solve complex problems in ways that were previously unimaginable. As we move into 2025, expect to see an increasing number of companies adopting specialised AI, using it to drive more effective solutions and create more personalised experiences for their customers. The rise of these models signals a new era in AI development, one where precision, customisation, and efficiency take centre stage.
Insights from: Thorsten Brueckner, Senior Data Scientist
Written by: Oliver Barham, Global Marketing Manager