Lasso Security Steps into the Spotlight to Tackle LLM Security Challenges

Despite their sophistication, large language models (LLMs) are remarkably vulnerable from a cybersecurity standpoint. A cleverly crafted set of prompts can cause them to divulge thousands of secrets or produce harmful code packages, and poisoned training data can introduce bias and unethical behavior.

Elad Schulman, cofounder and CEO of Lasso Security, emphasized in an interview with VentureBeat that although LLMs are powerful, they shouldn’t be trusted blindly: the very capabilities and complexity that make them useful also make them susceptible to a range of security risks.

Lasso Security aims to tackle these issues head-on. The company launched from stealth today with $6 million in seed funding, backed by Entrée Capital and Samsung Next. Schulman believes the impact of LLMs will exceed that of the cloud and internet revolutions combined, and he argues that risks of comparable magnitude come with that shift.

LLMs are a revolutionary technology that has quickly become essential for businesses seeking to maintain a competitive edge. Their conversational, unstructured, and situational nature makes them easy to use and, unfortunately, just as easy to exploit. When manipulated through prompt injection or jailbreaking, LLMs can expose their training data, sensitive information, proprietary algorithms, and other confidential details. Workers can also leak company data unintentionally, as illustrated by Samsung’s ban on ChatGPT and other generative AI tools after exactly such an incident.

Schulman explained that because the content LLMs generate can be controlled through prompts, users may indirectly gain access to additional functionality of the model. Problems also arise from “data poisoning,” in which tampered training data introduces biases that compromise security and ethics. Insecure output handling, where model responses reach backend systems without adequate validation and hygiene, creates further risk: misuse can lead to cross-site scripting (XSS), cross-site request forgery (CSRF), server-side request forgery (SSRF), privilege escalation, or remote code execution. Additionally, attackers can flood an LLM with requests, degrading the service or shutting it down entirely.
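Purely to make the insecure output handling risk concrete, here is a minimal sketch, assuming a Python backend and illustrative rules of my own choosing rather than anything Lasso has described, of treating model output as untrusted input before it is rendered or acted upon:

```python
import html
import re

# Illustrative rule only: flag output that points at internal-looking addresses (possible SSRF bait).
INTERNAL_URL_PATTERN = re.compile(
    r"https?://(localhost|127\.0\.0\.1|169\.254\.\d{1,3}\.\d{1,3}|[\w.-]+\.internal)",
    re.IGNORECASE,
)


def sanitize_llm_output(raw_output: str) -> str:
    """Treat LLM output as untrusted before it reaches a browser or backend system."""
    if INTERNAL_URL_PATTERN.search(raw_output):
        raise ValueError("LLM output references an internal address; refusing to pass it on")
    # Escape HTML so a crafted completion cannot inject scripts into a page (XSS).
    return html.escape(raw_output)


if __name__ == "__main__":
    print(sanitize_llm_output("<script>alert('hi')</script> Here is your summary."))
```

Real mitigations go much further, but the principle is the same: anything a model returns deserves the same scrutiny as user-supplied input.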

The LLM software supply chain can also be compromised by vulnerable components or services from third-party datasets or plugins, and over-reliance on LLMs as a sole source of information can lead to misinformation and major security incidents. For instance, if a developer asks ChatGPT to suggest a code package, the model may name one that doesn’t exist (a “hallucination”). Hackers can exploit this by publishing a malicious package under that hallucinated name; once a developer installs it, it can serve as a backdoor into the company’s systems.
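As a hedged illustration of how a developer could guard against that scenario, the sketch below queries the public PyPI JSON API to confirm that a suggested package actually exists before anyone installs it; the workflow is an assumption for illustration, not part of Lasso’s product.

```python
import json
import urllib.error
import urllib.request


def check_pypi_package(name: str) -> bool:
    """Return True if the package exists on PyPI; a 404 suggests a hallucinated name."""
    url = f"https://pypi.org/pypi/{name}/json"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            data = json.load(resp)
    except urllib.error.HTTPError as err:
        if err.code == 404:
            print(f"'{name}' is not on PyPI; possibly hallucinated, do not install.")
            return False
        raise
    # A real but brand-new or unmaintained package can still be a squatting attempt,
    # so surface basic metadata for a human to review before installing.
    info = data["info"]
    print(f"'{name}' exists: version {info['version']}, author {info.get('author') or 'unknown'}.")
    return True


if __name__ == "__main__":
    check_pypi_package("requests")                 # well-known package, should exist
    check_pypi_package("totally-made-up-pkg-xyz")  # likely to return 404
```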

Lasso Security’s technology intercepts interactions with LLMs, whether between employees and tools like Bard or ChatGPT, agents like Grammarly, plugins linked to developers’ IDEs, or backend functions making API calls. An observability layer captures and monitors data sent to and from LLMs, using various layers of threat detection. Response actions, such as blocking or issuing warnings, are also applied.
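The article doesn’t describe Lasso’s internals, but the observability pattern it refers to can be sketched in a few lines: wrap the outbound model call so every prompt and response is captured, logged and run through detection hooks. Everything below, the wrapper, the toy detector and the placeholder model call, is a hypothetical illustration rather than the company’s implementation.

```python
import logging
from typing import Callable, List, Optional

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("llm-observability")


def observe(call_llm: Callable[[str], str],
            detectors: List[Callable[[str, str], Optional[str]]]) -> Callable[[str], str]:
    """Wrap a 'prompt in, completion out' function so every exchange is captured and scanned."""
    def wrapper(prompt: str) -> str:
        log.info("prompt sent: %r", prompt[:200])           # capture outbound traffic
        response = call_llm(prompt)
        log.info("response received: %r", response[:200])   # capture inbound traffic
        for detect in detectors:                             # run each threat-detection layer
            finding = detect(prompt, response)
            if finding:
                log.warning("alert: %s", finding)            # alerting; blocking could hook in here
        return response
    return wrapper


if __name__ == "__main__":
    def fake_llm(prompt: str) -> str:                        # placeholder for a real chat completion call
        return f"echo: {prompt}"

    def flags_long_output(prompt: str, response: str) -> Optional[str]:
        return "unusually long response" if len(response) > 1000 else None

    chat = observe(fake_llm, [flags_long_output])
    print(chat("Summarize our Q3 roadmap"))
```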

Schulman advised understanding which LLM tools are being used within the organization and for what purposes. This awareness can spark critical discussions about the organization’s needs and protections.

Lasso Security’s platform includes key features like:
– Shadow AI Discovery: identifying which tools and models are active and which users are using them.
– LLM data-flow monitoring and observability: Tracking and logging all data transmissions.
– Real-time detection and alerting.
– Blocking and end-to-end protection: ensuring that employees’ prompts and the models’ generated outputs comply with security policies (a minimal illustrative sketch of such a check follows this list).
– A user-friendly dashboard.
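To make the blocking and policy enforcement idea concrete, here is a minimal sketch of the kind of check that could run on a prompt before it leaves the organization; the rules below are toy examples assumed for illustration, not Lasso’s detectors.

```python
import re

# Toy policy rules, illustrative only; a real product ships far richer detectors.
POLICY_RULES = {
    "credential in prompt": re.compile(r"(api[_-]?key|password|token)\s*[:=]\s*\S+", re.IGNORECASE),
    "possible email address": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
}


def enforce_prompt_policy(prompt: str) -> str:
    """Return a verdict for a prompt before it is sent to an external LLM."""
    for rule_name, pattern in POLICY_RULES.items():
        if pattern.search(prompt):
            return f"BLOCKED ({rule_name})"
    return "ALLOWED"


if __name__ == "__main__":
    print(enforce_prompt_policy("Draft a blog post about our launch"))   # ALLOWED
    print(enforce_prompt_policy("Debug this: api_key=sk-live-12345"))    # BLOCKED (credential in prompt)
```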

Unlike general-purpose security tools such as data loss prevention (DLP), Lasso is a comprehensive suite focused specifically on the LLM world. Security teams gain full control over every LLM-related interaction and can enforce policies for different groups and users.

Organizations must adopt LLM technologies securely and safely. Blocking technology use isn’t sustainable, and companies without a dedicated risk plan for generative AI will face challenges. Lasso aims to equip organizations with the necessary security tools to embrace progress and leverage this remarkable technology without compromising their security.