Innovation and danger:
How AI is shaping the future

While the AI Act sets a global precedent, it also highlights a paradox: regulating AI to foster innovation – without empowering cyber threats or losing our values.

A column by Jacques de La Rivière


As the EU Parliament has just voted on the AI Act, the use of artificial intelligence remains a delicate balance between technological progress and the preservation of fundamental values.

This is the first law on artificial intelligence (AI), and the 27 EU member states “unanimously confirmed it” on February 2. Back on December 8, Thierry Breton, European Commissioner, boldly announced the news: “Historic! The European Union becomes the first continent to establish clear rules for the use of AI. The AI Act is much more than a set of rules – it’s a launchpad for EU startups and researchers to lead the global race for AI. The best is yet to come!” In an era defined by disruptive technologies, Europe and the United States are moving toward strict regulation to contain the impact of those technologies. This approach raises critical issues such as innovation and performance, cybersecurity, and the broader structure of our societies. Meanwhile, sovereignty and international trade lie at the heart of the debate, prompting reflection on how to strike the right balance between technological advancement and safeguarding our core values.

 

Technological boom: Skillfully anticipating the rise in attacks

In its recent risk barometer, Allianz – one of France’s top general insurers – highlights the significant growth in cybersecurity threats over the past few years; they now top the list of corporate concerns, ahead of business disruptions, market volatility, and environmental worries. According to a report from the U.S.-based company Splunk, published in mid-October 2023, 70% of CISOs surveyed believe AI benefits cybercriminals more than cybersecurity defenders, and 36% believe AI accelerates and intensifies attacks. 2024 is expected to follow the same upward trend, with the Olympic Games, political stakes, and legal controversies looming on the horizon.

Faced with this invisible war that can ruin businesses, it is crucial to adopt preventive measures. Attacks are multiplying, and AI is making them easier. Take a concrete example: data theft used to be a labor-intensive task. Today, AI is a powerful ally for attackers. An AI can be tasked with targeting a specific decision-maker online and automatically compiling all publicly available data about them. That data is then used to craft a personalized email – in this example, a spontaneous job application prompting the victim to click a link that leads to a fake LinkedIn page. Once the person enters their email address and password, those credentials are harvested by the attacker. If this is what can happen with a single password, imagine the potential fallout when AI has access to a vast array of connected devices.
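To make the defensive side of this scenario concrete, here is a minimal sketch – not from the original column – of a lookalike-domain check that flags links whose domain imitates a trusted brand, as the fake LinkedIn page above does. The trusted-domain list, the similarity measure (Python’s difflib), and the 0.8 threshold are illustrative assumptions, not a production filter.

```python
# Hypothetical illustration: flag links whose domain is a near-miss of a
# trusted brand, as in the fake LinkedIn page described above.
from difflib import SequenceMatcher
from urllib.parse import urlparse

TRUSTED_DOMAINS = {"linkedin.com", "google.com", "microsoft.com"}  # assumption

def lookalike_score(domain: str, trusted: str) -> float:
    """Similarity ratio between two domains (1.0 = identical)."""
    return SequenceMatcher(None, domain, trusted).ratio()

def is_suspicious(url: str, threshold: float = 0.8) -> bool:
    """True if the URL's domain imitates a trusted domain without matching it."""
    domain = urlparse(url).hostname or ""
    # Strip a leading "www." so "www.linkedin.com" matches the trusted entry.
    domain = domain.removeprefix("www.")
    if domain in TRUSTED_DOMAINS:
        return False  # exact match: the genuine site
    return any(lookalike_score(domain, t) >= threshold for t in TRUSTED_DOMAINS)

print(is_suspicious("https://www.linkedin.com/jobs"))  # False: genuine
print(is_suspicious("https://linkedln.com/login"))     # True: typosquat
```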

Hyperconnectivity, meant to simplify our lives, paradoxically increases our vulnerability. Similarly, generative AI could easily enhance more sophisticated attacks such as DDoS: it could identify weaknesses in target systems and adjust attack patterns in real time to bypass defenses – posing a significant threat.

In this cat-and-mouse game, defenders are forced to constantly adapt their techniques to attackers who continuously change their strategies. Every new connection becomes a new potential attack point.

 

Technological revolution: A new Tower of Babel for SOC teams?

A dual challenge is emerging – declining numbers of SOC (Security Operations Center) analysts and the growing frequency of cyberattacks – pushing companies to proactively integrate AI into their security strategies.

What makes AI unique is its ability to rapidly process large volumes of data – crucial in cybersecurity, where there is a major gap between the speed of attacks and the often slow pace of human detection. Automated detection of both strong and weak signals, early and accurate, is now essential to raise alerts quickly about potential threats or attack profiles.
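As one illustration of what detecting strong and weak signals can mean in practice, here is a minimal sketch – entirely an assumption, not any vendor’s method – that scores each new event count against a rolling baseline, labeling modest drifts as weak signals and sharp spikes as strong ones. The window size and z-score thresholds are invented for the example.

```python
# Hypothetical sketch: raise an early alert when an event count drifts away
# from its recent baseline, before it becomes an obvious spike.
from collections import deque
from statistics import mean, stdev

class SignalDetector:
    """Rolling z-score over a sliding window of per-interval event counts."""

    def __init__(self, window: int = 60, weak: float = 2.0, strong: float = 4.0):
        self.history = deque(maxlen=window)
        self.weak = weak      # z-score for a weak-signal alert (assumption)
        self.strong = strong  # z-score for a strong-signal alert (assumption)

    def observe(self, count: int) -> str:
        """Classify the latest count against the window, then record it."""
        label = "normal"
        if len(self.history) >= 10:  # need a baseline before alerting
            mu, sigma = mean(self.history), stdev(self.history)
            z = (count - mu) / sigma if sigma > 0 else 0.0
            if z >= self.strong:
                label = "strong signal"
            elif z >= self.weak:
                label = "weak signal"
        self.history.append(count)
        return label

detector = SignalDetector()
for c in [100, 102, 98, 101, 99, 103, 97, 100, 102, 99, 101, 100]:
    detector.observe(c)  # build the baseline
print(detector.observe(105))  # modest drift -> "weak signal"
print(detector.observe(150))  # sharp spike  -> "strong signal"
```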

Generative AI, beyond its offensive capabilities, can also play a defensive role by addressing the shortage of SOC analysts. It acts as an interpreter between human language and machine operations, offering accurate incident insights and suggesting solutions. By pooling skills in this way, AI is revolutionizing access to complex systems.
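Here is a minimal sketch of that interpreter role, assuming the OpenAI Python SDK purely for illustration – the column endorses no particular model or vendor, and the alert format below is invented.

```python
# Hypothetical sketch of generative AI as an "interpreter" for SOC analysts:
# turn a raw machine-generated alert into a plain-language summary with
# suggested next steps. Assumes the OpenAI Python SDK; any LLM would do.
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

raw_alert = {  # invented example alert, not a real product format
    "rule": "SMB lateral movement",
    "src": "10.0.4.17",
    "dst": "10.0.4.88",
    "count": 342,
    "window_s": 60,
}

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumption; substitute any available model
    messages=[
        {"role": "system",
         "content": "You are a SOC assistant. Explain alerts in plain "
                    "language and suggest two concrete triage steps."},
        {"role": "user", "content": json.dumps(raw_alert)},
    ],
)
print(response.choices[0].message.content)
```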

That’s one of the reasons investment strategies are evolving. Until now, investments were focused on basic security, i.e., blocking known attacks. Today, AI makes it possible to detect and prevent unknown threats, blocking any abnormal behavior on the network.
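The column does not name a technique, but one common way to flag abnormal behavior without a signature for any known attack is unsupervised anomaly detection. Below is a minimal sketch using scikit-learn’s IsolationForest on invented per-flow features; the features, contamination rate, and training data are assumptions for illustration.

```python
# Hypothetical sketch: learn "normal" network behavior from past flow
# records, then flag deviations -- no signature for any known attack needed.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Invented features per flow: [bytes sent, duration (s), distinct dest ports].
normal_flows = np.column_stack([
    rng.normal(5_000, 1_000, 1_000),  # typical transfer size
    rng.normal(2.0, 0.5, 1_000),      # typical duration
    rng.integers(1, 4, 1_000),        # few destination ports per flow
])

model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal_flows)  # learn the envelope of normal behavior

# A flow touching 500 ports in a second looks like a scan, not a download.
print(model.predict([[800, 1.0, 500]]))   # [-1] -> anomaly
print(model.predict([[5_200, 2.1, 2]]))   # [ 1] -> looks normal
```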

 

AI regulation: Necessary safeguard or obstacle to innovation? 

After long months of tough negotiations, EU member state ambassadors have pulled back the curtain on a historic series of new regulations, making the EU the first global power to govern AI use. But it’s no small challenge, and a debate looms: bureaucracy vs. self-regulation, responsibility vs. innovation. France and Germany – originally focused on protecting their national AI champions, Mistral AI and Aleph Alpha – resisted pressure from European lawmakers pushing for strict limits on dominant tech players. Still, managing risks doesn’t mean guarding against every possible threat. Just weeks earlier, on a global scale, 18 countries – including France, the U.S., the U.K., and Japan – signed an agreement on AI safety following the first international summit on the subject. The agreement urges companies to develop models that are “secure by design.” Notably, no Chinese organizations signed this agreement.

Now adopted, the AI Act categorizes AI systems into four risk levels. It places particular emphasis on generative AI and copyright protection, while strengthening the role of human oversight – humans remain the final decision-makers for high-risk systems. The regulation mandates strict transparency, requiring that users be able to understand how AI systems work. It extends beyond the technical, covering data governance, storage, usage, and collection. It also grants national authorities the power to enforce compliance and issue fines for violations. Beyond its technical scope, the law serves as a protective shield, aiming to defend citizens from authoritarian misuse and to ensure the long-term health of democracy.

 

But isn’t the real risk trying to regulate everything? Innovation, by nature, is unpredictable, and it’s unrealistic to believe we can foresee every possible scenario. The future of artificial intelligence lies in the dance between predictable regulation and bold innovation.

 

This article is a translation of an opinion piece by Jacques de La Rivière, CEO and co-founder of Gatewatcher, originally published in French on Journal Du Net.