top of page

Gaspard Baye

Embracing Future Cybersecurity: Hacking Generative AI and Language Models with A.I. Red Teaming and Beyond

Bio:


Gaspard Baye, Ph.D. Candidate - Cyber-AI Researcher

Gaspard Baye is a doctoral candidate in cyber-AI at the University of Massachusetts, with over 4+ years of industry experience in cybersecurity. His work includes conducting over 20+ security audits, uncovering 20+ critical design flaws, and resolving over 100+ security bugs.

Gaspard holds certifications such as OSCP, CEH practical, and PNPT. He has presented his research at DEFCON and other significant cybersecurity conferences, addressing professionals in the field. Additionally, he is recognized for identifying a critical vulnerability in a popular banking application, earning him a CVE designation.

His academic contributions are notable, with 6+ publications and presentations at reputable IEEE conferences and journals such as the Annual Conference on Neural Information Processing Systems (NeurIPS), IEEE International Symposium on Networks, Computers, and Communications (ISNCC), and several other international symposia and workshops focused on software reliability (IEEE ISSRE), and security (IEEE HASP).

Gaspard also advocates for free and open-source software (FOSS), contributing to the review and security of various FOSS projects. His work in both practical and academic aspects of cybersecurity shows his commitment to advancing the field.


Abstract:


This presentation delves into Artificial Intelligence (AI), focusing mainly on Large Language Models (LLMs). We begin by exploring the basic concept of AI and gradually narrow our focus to understand what Large Language Models are and how they differ from other AI technologies.


The talk will elucidate how LLMs are uniquely trained to perform a wide array of specific tasks, providing insight into what makes these models especially powerful in processing and generating human language. To give a comprehensive overview, we will present various types of LLMs, supplemented with real-world examples, demonstrating their versatility and widespread applicability.


However, with great power comes great responsibility. We will unveil potential attack strategies against LLMs, such as prompt injection and hallucination, illustrating how these models can be manipulated or misled. The discussion will extend to the impact of these adversarial actions, not only on the models themselves but also on their broader applications and users.


The presentation will shift to developing adequate safeguards in response to these challenges. This includes strategies to protect LLMs from malicious use and misinformation, ensuring their reliability and integrity.


A critical aspect of our discussion will be the ethical implications and the importance of responsible AI. We will debate the moral responsibilities of developers and users of LLMs, emphasizing the need for ethical considerations in AI development and deployment.


The presentation will conclude with a summary of the key points and a forward-looking perspective on the future of LLMs in an increasingly digital and interconnected world.


This session aims to inform and inspire thoughtful and responsible use of LLMs, fostering an environment where technology serves humanity positively and ethically.

Gaspard Baye
bottom of page