Recent developments in the field of AI, such as access to powerful computers and vast amounts of data, have opened up new possibilities. However, they have also raised significant cybersecurity concerns. DirectIndustry met Alain Bouillé, General Manager of the CESIN (Club of Experts in Information Security and Digital Technologies), during a cybersecurity conference (Les Assises de la Cybersécurité) in Monte Carlo, Monaco, earlier this month to discuss AI's role in cyberdefense and how to guard against the new threats it may create.
Alain Bouillé, a former CISO, founded the CESIN (the French Club of Experts in Information Security and Digital Technologies) in 2012 and currently serves as its General Manager. The CESIN boasts nearly 900 members, including large French companies, SMEs, hospitals, and various government agencies.
The club’s objective is to establish a collaborative ecosystem for CISOs to promote information security.
Today, AI has captured the CESIN's attention: the club will hold a congress dedicated to this theme next month.
We met Alain Bouillé during the cybersecurity conference in Monaco at the beginning of October to discuss the issues surrounding AI, the opportunities it presents, and the risks it may pose to industrial enterprises in terms of cybersecurity.

How is artificial intelligence currently used in the field of cybersecurity?
Alain Bouillé: “I think security monitoring remains the essential domain for fully harnessing the benefits of AI in cybersecurity. There are now solutions that effectively use AI to manage and process large amounts of data. To me, this is more akin to machine learning than true AI. Luc Julia, an AI expert who worked at Apple and Samsung before joining Renault, even stated that ‘AI does not exist and will never exist.’ True artificial intelligence, as it is often imagined, may never be achievable. Machine learning, however, offers significant advantages, especially in monitoring solutions. In cybersecurity, prevention, starting with user awareness, is crucial, but it is essential to recognize that attacks are inevitable, so early detection is vital. Security monitoring systems increasingly record and analyze everything users do with their browser or computer in order to identify potential attack signals, and machine learning plays a key role in this monitoring by seeking out subtle indicators amid the mass of information. The ability to conduct forensic investigations is also essential for understanding past incidents and determining how and why they occurred. Security monitoring systems accumulate massive amounts of data every day, making AI critical for extracting the relevant information.”
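To make the idea concrete, here is a minimal sketch of the kind of machine-learning monitoring Bouillé describes: an unsupervised model flags unusual activity amid a mass of otherwise routine events. The session features, values, and thresholds are invented for illustration and are not drawn from any specific security product.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Toy features per user session: [requests/min, bytes sent (KB), failed logins].
normal_sessions = rng.normal(loc=[20, 500, 0.2], scale=[5, 100, 0.5], size=(1000, 3))
suspicious = np.array([[300, 9000, 12]])  # request burst, large upload, brute-force logins

# Train on normal activity; the model isolates points that don't fit.
model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal_sessions)

# Sessions scored -1 are anomalies worth a human analyst's attention.
print(model.predict(suspicious))           # -> [-1]
print(model.predict(normal_sessions[:3]))  # -> mostly [1 1 1]
```

In practice, such a model would be one signal among many feeding an analyst's queue, not an automatic verdict.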

What are the potential advantages of its use, especially concerning threat detection and incident response?
Alain Bouillé: “The potential advantages of using AI in cybersecurity include threat detection, incident response, and the ability to prevent and mitigate attacks. In detection mode, AI enables quick reactions once a threat is already present, while prevention and protection systems aim to stop the intrusion in the first place. The massive data handling and rapid processing that AI offers speed up detection and thereby reduce the number of incidents. And when an attack does succeed, it is essential to trace its origin; otherwise, restoring the system risks restoring the vulnerability along with it. Investigation plays a key role in that search for the attack's origin, and AI proves valuable in this context.”
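As a toy illustration of the investigative step described above, the sketch below rebuilds a timeline from raw logs around a single indicator of compromise. The log format, events, and flagged IP address are all invented for the example.

```python
from datetime import datetime

raw_logs = [
    "2024-10-03T14:02:11 10.0.0.5 LOGIN_FAIL admin",
    "2024-10-03T14:02:13 10.0.0.5 LOGIN_FAIL admin",
    "2024-10-03T14:02:20 10.0.0.5 LOGIN_OK admin",
    "2024-10-03T14:05:41 10.0.0.5 FILE_READ /etc/passwd",
    "2024-10-03T13:59:02 10.0.0.9 LOGIN_OK alice",
]

INDICATOR = "10.0.0.5"  # IP address flagged by the detection layer

# Keep only events tied to the indicator, then sort chronologically so an
# analyst can see how the intrusion unfolded.
timeline = sorted(
    (line for line in raw_logs if INDICATOR in line),
    key=lambda line: datetime.fromisoformat(line.split()[0]),
)
for event in timeline:
    print(event)
```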

What are the main risks associated with AI in cybersecurity?
Alain Bouillé: “Despite its advancements, AI is not without flaws. It remains a technology like any other: vulnerable, and still relatively easy to deceive. AI's learning processes have their own shortcomings, raising questions about the quality of that learning, and once an AI is in operation, there is the further concern of its ongoing reliability. Another emerging challenge is the proliferation of companies that are not only consuming AI but developing their own AI projects. In cybersecurity, integrating security into projects has always been a priority. We don't expect a car manufacturer to install ABS on cars after they have rolled out of the factory, but rather during the manufacturing process; in the same way, security should be integrated into IT projects from the development phase. AI, despite its specificities, is primarily an IT project, and it is essential to ensure that the development process leaves no vulnerabilities that could allow the injection of false data, compromise the system for malicious purposes, or divert the AI from its original objective. Companies integrating AI into their operations must therefore pay particular attention to security. The risk of dire consequences in case of a security failure is very real, and security must be a central concern, just as it is in any other project.”
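One hedged sketch of what ‘security from the development phase’ can mean for an AI project: validating training data before it reaches the model, so that injected or implausible records are rejected rather than silently learned. The schema and bounds below are assumptions chosen purely for illustration.

```python
from dataclasses import dataclass

@dataclass
class SensorReading:
    sensor_id: str
    temperature_c: float

def validate(readings):
    """Return only records that pass basic integrity checks."""
    clean, rejected = [], []
    for r in readings:
        # Reject physically implausible values or malformed IDs that could
        # poison training if ingested unchecked.
        if not (-50.0 <= r.temperature_c <= 150.0) or not r.sensor_id.startswith("S-"):
            rejected.append(r)
        else:
            clean.append(r)
    print(f"accepted {len(clean)}, rejected {len(rejected)} suspicious records")
    return clean

data = [SensorReading("S-01", 21.5), SensorReading("S-02", 980.0), SensorReading("X", 20.0)]
training_set = validate(data)  # -> accepted 1, rejected 2 suspicious records
```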

Are new types of threats emerging with AI?
Alain Bouillé: “One emerging trend that deserves our attention is the use of AI by attackers. Currently, we cannot tell whether the cyberattacks we face, or those targeting businesses, are the product of AI or not. However, it has been well established for forty years that attackers are quick to exploit the latest innovations, sometimes in a more organized and innovative manner than defenders. It would be problematic if AI began generating attacks of types we cannot yet imagine. Another concern is generative AI, which is now capable of producing code. To simplify, imagine asking a program like ChatGPT to create an application, and it does so brilliantly. In traditional development, however, developers follow rules designed to keep code secure and prevent vulnerabilities; when flaws are discovered in an application, hackers can exploit them for malicious purposes. Today, the challenge lies in how AI generates this code. If a program generated by AI is incorporated into an application without its quality being checked, it can introduce many vulnerabilities, since the AI may not have been programmed to eliminate code flaws. I'm not saying we need to stop using tools like ChatGPT to generate code, but it is equally crucial to have tools that verify the quality of the generated code. This practice is already common when human programmers are in charge, and it should apply just as much when AIs generate the code.”
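A minimal sketch of the quality gate Bouillé calls for: running AI-generated code through a static security analyzer before it is merged. Bandit is a real, widely used Python security linter (installable with pip install bandit); the generated snippet, the temporary-file handling, and the accept/reject policy are invented for the example.

```python
import subprocess
import tempfile

# A deliberately flawed snippet standing in for AI-generated code.
generated_code = '''
import subprocess

def run(cmd):
    # shell=True on untrusted input is a classic injection flaw Bandit flags.
    subprocess.call(cmd, shell=True)
'''

# Write the snippet to a temporary file so the linter can scan it.
with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
    f.write(generated_code)
    path = f.name

# Bandit exits with a non-zero code when it finds issues, so a CI pipeline
# can refuse the snippet until the findings have been reviewed.
result = subprocess.run(["bandit", path], capture_output=True, text=True)
print(result.stdout)
print("gate:", "REJECT" if result.returncode != 0 else "ACCEPT")
```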

How do you envision the future evolution of the relationship between AI and cybersecurity?
Alain Bouillé: “I firmly believe that we cannot do without AI. Even before its advent, what characterized cybersecurity was the massive volume of data to be processed: upstream analysis to detect attackers and the monitoring of online activity both generate considerable amounts of data, and security monitoring itself produces a gigantic quantity. When dealing with data on that scale, AI becomes indispensable. Another area where AI is crucial is identity management: ensuring that each user, whether an employee or a contractor, accesses the right applications and the right data at the right time. Identity management also generates vast amounts of data, because a typical user interacts with many applications every day. Ultimately, AI offers undeniable benefits for managing this volume of data, whether to meet regulatory requirements, conduct ad hoc investigations, or improve security in general, so we won't be able to do without it anytime soon. However, AI cannot be left in just anyone's hands for just any use, and regulation is therefore necessary. Currently, the NIS 2 directive is at the center of cybersecurity discussions, but AI is a rapidly evolving field still in the early stages of its development, and it is essential to consider regulations that cover a much broader range of applications.”
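To ground the identity-management point above, here is a toy sketch of the kind of check involved: comparing each user's actual entitlements against a role baseline and flagging the drift that a periodic, AI-assisted access review would surface at scale. The roles, users, and applications are invented for the example.

```python
# Baseline: which applications each role is supposed to access.
ROLE_BASELINE = {
    "accountant": {"erp", "expense-tool"},
    "engineer": {"git", "ci", "wiki"},
}

users = [
    {"name": "alice", "role": "engineer", "access": {"git", "ci", "wiki"}},
    {"name": "bob", "role": "accountant", "access": {"erp", "expense-tool", "git"}},
]

# Flag entitlements that exceed the user's role, i.e. access-rights drift.
for u in users:
    excess = u["access"] - ROLE_BASELINE[u["role"]]
    if excess:
        print(f"review needed: {u['name']} has unexpected access to {sorted(excess)}")
```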