Securing large language models (LLMs) will be a top priority as generative AI applications — and threats — proliferate. We examine how buyers are evaluating vendors in the space.
Enterprises deploying large language models (LLMs) are moving to secure their AI applications against emerging cybersecurity threats.
These threats include data poisoning (corrupting or modifying training data to compromise the model), jailbreaking (tricking the LLM into producing malicious outputs that bypass safety protocols), and prompt injection (manipulating the input prompt to override the LLM’s original instructions).
Companies offering generative AI-powered solutions face high stakes, as security incidents could lead to compromised AI models, sensitive data loss, or reputational damage. A notable example occurred when Samsung banned use of generative AI tools after employees disclosed meeting notes and proprietary code to OpenAI’s ChatGPT last year.
As a result, the broader machine learning security (MLSec) market is growing. Startups raised $213M across 23 deals last year, up from $70M across 5 deals the year before. The space is largely nascent with most companies in the early stages of funding. Vendors’ offerings vary, from safeguarding the sensitive data used to train LLMs to detecting and flagging the third-party genAI tools employees are using.
Despite the nascency of the space, Analyst Briefing surveys submitted to CB Insights by startups and interviews with their enterprise customers show that companies are already paying into the hundreds of thousands of dollars to secure AI models.
Want to see more research? Join a demo of the CB Insights platform.
If you’re already a customer, log in here.