Arnica’s CEO Envisions the Future of DevOps Security with Generative AI


VentureBeat recently had a virtual conversation with Nir Valtman, CEO and co-founder of Arnica. With a robust background in cybersecurity, Valtman has led product and data security at Finastra, strengthened security practices as CISO at Kabbage (now part of Amex), and managed application security at NCR. He’s also on the advisory board of Salt Security.

Valtman is recognized as a leading innovator in cybersecurity, with significant contributions to open-source projects and seven patents in software security. He’s a regular speaker at major cybersecurity events like Black Hat, DEF CON, BSides, and RSA. Under his leadership, Arnica is pioneering developer-focused application security tools and technologies.

Here’s an excerpt from VentureBeat’s interview with Valtman:

VentureBeat: How do you see generative AI impacting cybersecurity over the next 3-5 years?
Nir Valtman: We’re beginning to understand where generative AI excels and where it falls short. It can significantly enhance application security by providing tools that make security the default for developers, especially those with less experience.

VB: What new technologies or methods could influence generative AI’s role in security?
Valtman: Developers need actionable solutions for security vulnerabilities. This starts with prioritizing important assets, assigning the right remediation owners, and mitigating risks. Generative AI will be crucial for risk management, but prioritization and ownership may need a more structured approach.
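
To make that structured approach concrete, here is a minimal Python sketch of one way to rank findings and propose remediation owners before any generative AI gets involved. The fields, weights, and names are hypothetical illustrations, not Arnica’s implementation.

```python
# Illustrative sketch only -- not Arnica's product logic. It shows one way
# a structured prioritization layer could rank findings and propose owners;
# all field names and weights here are hypothetical.

from dataclasses import dataclass

@dataclass
class Finding:
    asset: str                 # e.g., a repository or service name
    severity: float            # 0-10, from a scanner's CVSS-like score
    asset_criticality: float   # 0-1, business importance of the asset
    last_committer: str        # candidate remediation owner from git history

def prioritize(findings: list[Finding]) -> list[tuple[float, str, Finding]]:
    """Rank findings by severity weighted by asset criticality,
    and propose the most recent committer as remediation owner."""
    ranked = [
        (f.severity * f.asset_criticality, f.last_committer, f)
        for f in findings
    ]
    return sorted(ranked, key=lambda r: r[0], reverse=True)

findings = [
    Finding("payments-api", severity=9.8, asset_criticality=1.0, last_committer="alice"),
    Finding("internal-wiki", severity=9.8, asset_criticality=0.2, last_committer="bob"),
]
for score, owner, f in prioritize(findings):
    print(f"{f.asset}: score={score:.1f}, proposed owner={owner}")
```

Note how two findings with identical scanner severity end up far apart once asset criticality is factored in; that kind of deterministic ranking is what a structured layer can contribute alongside generative AI.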

VB: Where should organizations focus their investments to harness generative AI in cybersecurity?
Valtman: Invest in solving repetitive and complex problems like specific source code vulnerabilities. As generative AI proves its worth in various use cases, investment priorities will evolve.

VB: How can generative AI shift security from reactive to proactive?
Valtman: For generative AI to be predictive, it needs to be trained on relevant data sets. The accuracy of these models will build trust among technology leaders, enabling AI-driven decisions to mitigate risks proactively. For now, human oversight is crucial at the right moments.

VB: What organizational changes are needed to integrate generative AI in security?
Valtman: Strategic and tactical changes are required. Strategically, leaders need to understand AI’s benefits and risks and align its use with company security goals. Tactically, allocate budget and resources for integrating AI with asset, application, and data discovery tools, and develop a playbook for addressing security incidents.

VB: What security challenges could generative AI introduce, and how should they be addressed?
Valtman: Data privacy and leakage are significant risks. Mitigate these by hosting models internally, anonymizing data before external transmission, and conducting regular audits. Another risk is the security of the models themselves, which requires rigorous vulnerability assessments and penetration testing. Finding practical solutions to these risks without compromising functionality can be challenging.
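
As one illustration of the anonymization step Valtman mentions, the sketch below redacts common PII patterns from a prompt before it leaves the network. The regexes and placeholders are illustrative assumptions; real deployments typically layer dedicated PII-detection tooling on top of an approach like this.

```python
# Minimal sketch of anonymizing data before sending it to an externally
# hosted model. Regex-based redaction is illustrative only; production
# systems usually need stronger PII detection than these three patterns.

import re

REDACTIONS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "<EMAIL>"),   # email addresses
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "<SSN>"),       # US SSN format
    (re.compile(r"\b(?:\d[ -]*?){13,16}\b"), "<CARD>"),    # card-number-like digits
]

def anonymize(text: str) -> str:
    """Replace common PII patterns before the text leaves the network."""
    for pattern, placeholder in REDACTIONS:
        text = pattern.sub(placeholder, text)
    return text

prompt = "Customer jane.doe@example.com (SSN 123-45-6789) reported fraud."
print(anonymize(prompt))
# -> "Customer <EMAIL> (SSN <SSN>) reported fraud."
```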

VB: How can generative AI automate threat detection, security patches, and other processes?
Valtman: By analyzing historical data from networks, logs, email, code, and transactions, generative AI can detect various threats like malware, insider threats, account takeovers, and fraud. Advanced use cases include threat modeling during software design, automated patch deployment, and self-improving automated incident response.
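
The detection idea reduces to a pipeline: learn what “normal” looks like from historical data, then flag departures. The toy Python sketch below uses a simple frequency baseline over normalized log templates as a stand-in for a learned model; every log line and pattern in it is invented for illustration.

```python
# Toy illustration of learning a baseline from historical logs and flagging
# lines whose template was never seen before. A generative model would
# replace this frequency baseline with learned representations, but the
# pipeline shape is the same. All log lines here are invented.

import re
from collections import Counter

def template(line: str) -> str:
    """Normalize variable fields (IPs, hex values, numbers) into placeholders."""
    line = re.sub(r"\b\d{1,3}(?:\.\d{1,3}){3}\b", "<IP>", line)
    line = re.sub(r"0x[0-9a-fA-F]+", "<HEX>", line)
    line = re.sub(r"\d+", "<NUM>", line)
    return line

# Build the baseline from historical "normal" traffic.
history = [
    "accepted login for uid 1001 from 10.0.0.5",
    "accepted login for uid 1002 from 10.0.0.6",
    "accepted login for uid 1001 from 10.0.0.5",
]
baseline = Counter(template(l) for l in history)

# Flag incoming lines whose normalized template has never been observed.
incoming = [
    "accepted login for uid 1003 from 10.0.0.7",   # matches a known template
    "disabled audit logging on host 10.0.0.9",     # never seen before
]
for line in incoming:
    if baseline[template(line)] == 0:
        print("ALERT:", line)
```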

VB: What strategies should companies adopt for generative AI and data protection?
Valtman: Establish clear policies on data collection, storage, usage, and sharing. Define roles and responsibilities aligned with the overall cybersecurity strategy. Support these with incident response, breach notification plans, vendor risk management, and security awareness initiatives.