
Italy | Publication | October 2025
On September 23, 2025, Italy adopted Law no. 132/2025 on Artificial Intelligence (AI). The law will enter into force on October 10, 2025 and aims, inter alia, to complement Regulation (EU) 2024/1689 (the EU AI Act).
Law 132/2025 impacts multiple areas, establishing general regulatory frameworks applicable to each. Below, we present a summary of the main provisions.
Law 132/2025 mandates that the use of AI in the workplace must adhere to principles of safety, reliability, transparency and respect for human dignity and data confidentiality. Employers are required to inform employees whenever AI systems are deployed in work processes.
Transparency obligations are significantly expanded: employers must communicate the logic and purpose of AI systems, the nature of the data and parameters used, metrics for accuracy, robustness and cybersecurity, mechanisms for human oversight, impact assessments, updating protocols and engagement with unions and regulatory authorities. To support strategic oversight, Law 132/2025 establishes a National Observatory on AI in the workplace, tasked with monitoring its effects and promoting training initiatives.
AI-driven data processing for research in areas such as prevention, diagnostics, therapeutic interventions, medical devices and public health is recognized as serving a substantial public interest under Article 9 of the GDPR.
Law 132/2025 permits the secondary use of pseudonymized personal data for research purposes without requiring renewed consent, provided transparency safeguards are in place.
It explicitly authorizes the use of anonymization, pseudonymization and synthetic data techniques. Before initiating such processing, prior notification must be submitted to the Italian Data Protection Authority (Garante), together with details of the security measures, the data protection impact assessment (DPIA) and the data processors involved. Processing may commence 30 days after notification unless the Garante issues a blocking measure.
AI technologies may be employed to assist in prevention, diagnosis, treatment and therapeutic decision-making; however, ultimate responsibility for medical decisions remains with healthcare professionals.
Law 132/2025 prohibits the use of AI to discriminate in access to healthcare services and affirms the right of patients to be informed when AI tools are utilized in their care. Healthcare-related AI systems must meet standards of reliability, undergo periodic verification and updates and be designed to minimize risks.
Law 132/2025 introduces a new aggravating circumstance for crimes committed with the support of AI. A new criminal offence is established for the unlawful dissemination of AI-generated or altered content (e.g., deepfakes), punishable by imprisonment ranging from one to five years. Market manipulation offences committed through AI are subject to increased penalties under the Consolidated Law on Finance (TUF) and the Italian Civil Code.
Law 132/2025 empowers the Italian Government to issue legislative decrees within 12 months to define the legal framework governing the use of data, algorithms and mathematical models for training AI systems. These decrees will establish protective and remedial measures, as well as a system of penalties.
The market surveillance authority will be granted powers to supervise and enforce compliance, including the ability to request information and conduct inspections. These provisions are particularly relevant to financial and insurance institutions, as they will shape the governance, auditing and testing of datasets and algorithms used in AI training, complementing obligations under the EU AI Act.
In this context, we also note that on August 6, 2025, the European Insurance and Occupational Pensions Authority (EIOPA) issued a comprehensive opinion on AI Governance and Risk Management, which provides a principle-based, risk-sensitive framework for the insurance sector. The framework emphasizes proportional, risk-based governance, requiring undertakings to assess AI use by reference to data sensitivity, customer exposure (including vulnerable groups), business continuity and financial materiality, so that governance measures are tailored to the specific risk profile of each AI application.
Furthermore, the legislative decrees will grant national authorities powers to supervise, carry out inspections and impose sanctions, including oversight of real-world testing for high-risk AI systems. Amendments to the relevant banking, financial, insurance and payment services laws are expected to ensure full compliance with the EU AI Act, reflecting also EIOPA's expectations for robust data governance, transparency and human oversight in AI systems. This includes ensuring data completeness, accuracy and appropriateness throughout the AI lifecycle, implementing bias detection and mitigation (including in proxy variables) and maintaining clear documentation and audit trails.
The Bank of Italy, CONSOB and IVASS will continue to serve as market surveillance authorities under Article 74(6) of the EU AI Act and will be actively involved in strategic coordination. In line with EIOPA's opinion, these authorities will oversee the implementation of AI governance standards, including the delineation of roles across compliance, actuarial, data protection, ICT and senior management functions and the maintenance of audit trails and clear decision accountability.
With the adoption of a national law on artificial intelligence, Italy confirms its commitment to regulating AI and demonstrates its proactive approach to shaping the future of AI governance in the country.
© Norton Rose Fulbright LLP 2025