
Topic: Artificial intelligence and digitalisation


UK Finance publishes report on the impact of AI in financial services: Opportunities, risks and policy considerations

December 06, 2023

On 23 November 2023, UK Finance published a report on the impact of artificial intelligence (AI) in financial services, including opportunities, risks and policy considerations.

Managing AI risks and legal implications, maintaining effective cybersecurity, and ensuring privacy and the integrity of organizational records

December 06, 2023

In a world where generative AI is driving innovation and technology is outpacing legislation, there’s a lot for companies to consider to maintain operational effectiveness and minimize risk. To help provide some guidance, Norton Rose Fulbright Canada hosted its 2023 technology, privacy and cybersecurity virtual summit. Our leading lawyers were joined by prominent industry leaders to discuss and explore the latest developments, challenges and opportunities in the technology, privacy, and cybersecurity landscape.

California proposes rules for automated decision-making

December 06, 2023

On November 27, 2023, the California Privacy Protection Agency (“CPPA”) released a first draft of rules for automated decision-making technologies under California’s privacy law. The proposed rules revolve around providing notice of the technology’s use, allowing consumers to opt out, and giving consumers access to information about how businesses use the technology.

Artificial Intelligence (Regulation) Bill: UK Private Members’ Bill underscores widespread regulatory concerns

December 06, 2023

A Private Members’ Bill, the Artificial Intelligence (Regulation) Bill (the Bill), has been introduced into the House of Lords (the upper House of the UK Parliament) and is currently at its second Parliamentary stage.

PART II: Legislative advances in the world of artificial intelligence, Canada – minister releases proposed amendments to AIDA

November 23, 2023

On October 5, the Minister of Innovation, Science and Industry (ISED) wrote a letter to the Standing Committee on Industry and Technology proposing amendments to the Artificial Intelligence and Data Act (AIDA), which was introduced as part of Bill C-27 in June 2022. Further information on AIDA can be found in our previous update.

Informative preparatory papers on the state of frontier AI and potential safety risks published ahead of the UK AI Safety Summit

November 23, 2023

On 26 October, the UK prime minister gave a speech on the AI Safety Summit to be hosted in the UK on 1 and 2 November. The summit will bring together representatives from leading AI labs, world governments and civil society to discuss safety risks arising from frontier AI. Frontier AI is defined as “highly capable general purpose AI models that can perform a wide variety of tasks that match or exceed the capabilities present in today’s most advanced models” (i.e. cutting-edge foundation models), and the safety risks are cast quite widely, ranging from artificial general intelligence threatening our existence to deepfakes degrading trust in digital information sources.

Advances in artificial intelligence legislation in Canada (Part I)

November 23, 2023

On September 27, the Minister of Innovation, Science and Industry released a voluntary code of conduct specific to generative AI. This GenAI code follows the proposed Artificial Intelligence and Data Act (AIDA), which was introduced as part of Bill C-27 in June 2022 but is not likely to be in force until 2025.

Generative AI: Q&A with Professor Peter McBurney, Professor of Computer Science, Department of Informatics, King’s College London

November 23, 2023

Professor Peter McBurney specialises in artificial intelligence (AI) and provides AI consultancy services to Norton Rose Fulbright LLP and our clients. In this article, corporate technology lawyer Sarah Esprit discusses generative AI with Professor McBurney.

International agreement on AI safety: Imran Ahmad comments on Bletchley Declaration

November 23, 2023

Artificial intelligence (AI) has the potential to fundamentally change business processes and to have a significant impact on human well-being. However, its fast-paced development and continuous transformation are fuelling widespread concerns, forcing governments to create legislative standards and policies to ensure that AI is developed safely and urging companies to adopt a common framework. This was at the heart of the conversation at the AI Safety Summit at Bletchley Park in the UK, where the Bletchley Declaration was officially signed with the aim of reinforcing the safe design, development, deployment, and use of AI. Imran Ahmad, our Head of Technology and Co-head of Information Governance, Privacy and Cybersecurity, echoed the importance of the declaration: "It allows countries to have a framework for their respective jurisdictions, but, more importantly, allows businesses to have more certainty around what standards they should be trying to meet from a best practices standpoint."

President Biden issues sweeping artificial intelligence directives targeting safety, security and trust

November 09, 2023

On October 30, 2023, after recognizing that artificial intelligence (AI) is the most consequential technology of our time and anticipating that it will accelerate more technological change in the next five to ten years than witnessed in the past fifty, President Biden issued an Executive Order directing actions to establish new AI standards. These directives, which the White House presents as constituting the most significant action any government has taken to address AI safety, security and trust, cover a variety of issues for private and public entities domestically and internationally. Such issues include safety and security, privacy, equity and civil rights, healthcare, employment and education, as well as promoting innovation, developing international standards and ensuring responsible government AI use.