In recent years, Large Language Models (LLMs) and Generative AI (GenAI) have experienced massive breakthroughs.
Names like ChatGPT, Claude, and Gemini are now a part of everyday conversation, impacting both our personal lives and business operations.
While these technologies boost productivity, they also introduce entirely new attack surfaces and risks.With those technologies, almost anything is possible — from drafting a simple LinkedIn post to launching targeted attacks against businesses via prompt injections, jailbreaks, or data poisoning.Need a report proofread? No problem with your favorite LLM. Need a convincing phishing e-mail for a social engineering campaign? Also no problem – and now easier than ever.But it’s not just attackers who benefit.
Defenders can leverage the same technologies to detect anomalies in logs, spot malicious user inputs, and strengthen system security against novel and evolving threats.This leads to two key aspects:

  • Security of AI – How do we protect LLMs and GenAI from being exploited?
  • AI for Security – How can we use LLMs and AI to improve our defenses?

AI can improve our security, but if we don’t secure it, it becomes just another attack surface.

Join our talk Generative AI in the Crosshairs at Almato DevCon, where we’ll explore opportunities, risks, and real-world attack scenarios.