The Italian government is pushing for tougher penalties on crimes committed with artificial intelligence, while Europe moves ahead with the world's first AI law.
Artificial intelligence has become a central issue in political and legislative debates worldwide. In Italy, the government is preparing to present a 25-article bill that would, for the first time, regulate the use of the technology in the country.
The proposal outlines general principles for research, development, and application of AI, while also toughening penalties for crimes committed with these digital tools.
The Italian Council of Ministers expects to approve the bill by the end of April. After that, it will move to Parliament for review, with the aim of being passed into law before the end of the year. According to government sources, the initiative seeks to provide a concrete response to the impact of AI on fundamental rights, as well as the economic and social risks stemming from its rapid expansion.
Although the draft is still subject to changes, some key points are already outlined. It sets guidelines for the use of AI in sensitive sectors such as healthcare and the justice system, with a strong focus on how automation affects working conditions. It also lays the groundwork for a national strategy to develop artificial intelligence, ensuring that Italy remains competitive in the global landscape.
One of the most significant aspects of the proposal is the toughening of criminal and financial penalties. It introduces harsher sentences for market manipulation through algorithms, considers AI use in money laundering an aggravating factor, and includes fines for copyright violations. Additionally, it imposes up to three years in prison for using AI tools to impersonate others through “deepfakes” or similar digital creations.
Beyond Italy: what is happening at the European level?
This Italian initiative is unfolding alongside broader developments in Europe. In March, the European Parliament approved the world’s first “AI Act,” a sweeping regulation that classifies AI systems according to the level of risk they pose to society. Of the 618 lawmakers present, 523 voted in favor, 46 against, and 49 abstained.
The regulation establishes four levels of risk, ranging from minimal applications to “high-risk” systems, such as those used in elections, education, hiring processes, or healthcare. These high-risk systems will face strict requirements, including regular risk assessments, measures to eliminate bias, strong data governance, and mandatory human oversight.
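To make the tiering concrete, here is a minimal sketch, in Python, of how an organization might map its own systems onto categories like those described above. The tier names, the example domains, and the obligation strings are illustrative assumptions based on this article's summary, not the regulation's legal taxonomy or wording.

```python
# Minimal sketch of the risk-tier idea described above. Tier names, example
# domains, and obligation strings are illustrative assumptions, not the
# AI Act's legal definitions.
from enum import Enum


class RiskTier(Enum):
    MINIMAL = "minimal"
    LIMITED = "limited"
    HIGH = "high"
    UNACCEPTABLE = "unacceptable"  # practices that are prohibited outright


# Hypothetical mapping of application domains to tiers, following the
# examples of "high-risk" areas cited in the article.
DOMAIN_TIERS = {
    "spam_filter": RiskTier.MINIMAL,
    "customer_chatbot": RiskTier.LIMITED,
    "election_tools": RiskTier.HIGH,
    "hiring": RiskTier.HIGH,
    "healthcare": RiskTier.HIGH,
}

# Obligations attached to high-risk systems, paraphrasing the article.
HIGH_RISK_OBLIGATIONS = [
    "regular risk assessments",
    "measures against bias",
    "strong data governance",
    "mandatory human oversight",
]


def obligations_for(domain: str) -> list[str]:
    """Return the illustrative obligations for a given application domain."""
    tier = DOMAIN_TIERS.get(domain, RiskTier.MINIMAL)
    return HIGH_RISK_OBLIGATIONS if tier is RiskTier.HIGH else []


print(obligations_for("hiring"))       # high-risk: full obligation list
print(obligations_for("spam_filter"))  # minimal risk: no extra obligations
```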
The general provisions will take effect in May 2025, while the stricter rules for high-risk systems will come into force three years later. Each EU member state will need to establish national oversight agencies, while overall coordination will fall under a dedicated AI office within the European Commission.
The law also sets strict prohibitions. These include predictive policing based solely on personal profiles, mass collection of images for facial recognition without authorization, biometric categorization to infer sensitive data such as political opinions or sexual orientation, and emotion recognition in workplaces or schools.
When it comes to penalties, the text is equally tough. Companies providing false or misleading information to regulators could face fines of up to €7.5 million or 1% of global turnover. For the most serious violations — such as engaging in prohibited practices — penalties could reach €35 million or 7% of worldwide revenue, whichever is higher.
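As a rough illustration of the "whichever is higher" rule behind these ceilings, the sketch below computes the applicable maximum fine from the figures cited above. The function name, the sample turnover, and the decision to apply the same "higher of the two" reading to both tiers are assumptions made for the example, not wording taken from the regulation.

```python
# Illustrative sketch of the fine ceilings cited above. Tier labels and the
# sample turnover are hypothetical; applying "whichever is higher" to both
# tiers is an assumption for this example.

def max_fine(tier: str, global_turnover_eur: float) -> float:
    """Return the higher of the fixed cap and the turnover-based cap."""
    tiers = {
        "misleading_information": (7_500_000, 0.01),  # up to €7.5M or 1% of turnover
        "prohibited_practice": (35_000_000, 0.07),    # up to €35M or 7% of turnover
    }
    fixed_cap, turnover_share = tiers[tier]
    return max(fixed_cap, turnover_share * global_turnover_eur)


# For a hypothetical company with €2 billion in worldwide revenue:
print(max_fine("misleading_information", 2_000_000_000))  # 20,000,000.0 (1% exceeds €7.5M)
print(max_fine("prohibited_practice", 2_000_000_000))      # 140,000,000.0 (7% exceeds €35M)
```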
For Italian lawmaker Brando Benifei, one of the architects of the European legislation, this marks “a clear path toward safe, human-centered AI development.” The challenge now, he emphasized, is ensuring implementation and compliance — something Italy will also have to face as it rolls out its own national framework.
Taken together, these measures highlight a clear trend: at both national and continental levels, the debate around artificial intelligence is no longer just about innovation. It is also about setting limits and protecting society from the potential risks of a technology that is reshaping the world.