Generative AI models like DSLMs outperform LLMs in value
Small, domain-specific generative AI models can be designed to address specific industry needs, proving more cost-effective and efficient for businesses than large language models.
GRAPEVINE, Texas -- Businesses will see more value from generative AI by using smaller, domain-specific language models instead of large language models.
That's according to analysts speaking during the Gartner Tech Growth and Innovation Conference on Tuesday. Domain-specific language models (DSLMs) are generative AI models trained to excel in specific fields or industries. Because they are trained in the language of individual sectors, DSLMs can ingest, understand and generate industry-specific, nuanced lingo. In contrast, large language models (LLMs) are trained on much broader topic areas and are less likely to provide specific, accurate data for niche industries.
Not only do DSLMs deliver greater value for businesses by producing industry-specific outputs, but they're also more cost-effective to train due to their smaller size, Gartner analyst Annette Zimmermann said during a session at the conference. Gartner estimates that more than 60% of generative AI models used by enterprises will be domain-specific by 2028.
Chinese AI startup DeepSeek, which upended the market earlier this year with its low-cost model, demonstrated how new generative AI model architectures can "significantly reduce AI development costs," Zimmermann said.
"We are seeing domain-specific language models being developed, and many providers are providing repeatable efforts to deploy different models across different industries," she said. "We estimate that some models we've observed in our studies have four times more efficiency than LLMs in terms of cost and latency."
Business use cases for DSLMs will grow
Gartner analyst Danielle Casey said DSLMs are the future of generative AI models, and as businesses turn to DSLMs, use cases will change significantly in the coming years.
"DSLMs are trained off of domain-specific data and are designed to support the specific knowledge areas for which they were created," she said during the conference. "This could be an industry domain, a functional domain or a use case domain."
Casey said DSLMs will also become multimodal, meaning they will work with two or more data types, including text, images and video. Multimodality and domain specialization will evolve hand in hand, she said.
Casey said this is because "DSLMs will be designed to support the primary data modalities of the industries for which they were created."
Healthcare organizations, for example, might use a healthcare model that can interact with specific healthcare images, as well as voice and text. A healthcare model's functionality will differ from that of multimodal DSLMs geared toward finance or marketing, she said.
Use cases already exist for DSLMs. IBM implemented a DSLM trained on legal documents to help German court systems earlier this year, Gartner's Zimmermann said. The generative AI model supported judges in pre-categorizing documents in large class-action lawsuits.
"The application reduced the review process by 50%," she said. "A very strong impact on productivity."
Makenzie Holland is a senior news writer covering big tech and federal regulation. Prior to joining Informa TechTarget, she was a general assignment reporter for the Wilmington StarNews and a crime and education reporter at the Wabash Plain Dealer.