AI and the law | S2 EP5

I think, therefore I am…? Artificial Intelligence is changing our legal environment. But as machines continue to inform decision making and human interaction, what does this mean for long-established principles such as causation, standard of care, and procedural fairness? What ethical issues arise when we rely on machines to make a judgement call? And if a machine can learn on its own, can it be an inventor with IP rights? Breaking down this brave new world are IP partner Maya Medeiros and associate Jesse Beatson, whose practices cover the intersection of technology and law.

CPD credits: This episode qualifies for 0.5 hours of Substantive credit in Ontario and 0.5 hours of Substantive credit in British Columbia.

Transcript:
Maya Medeiros  00:00
And we might actually see increased transparency, because sometimes it's hard to figure out what's going on in a human's brain when they make a-- a decision as well. We've seen flaws in human decision making throughout history, and so we might have an improved system if we do delegate to-- to machines in some instances.

Ailsa Bloomer  00:25
Hello, you're listening to Disputed, a Norton Rose Fulbright podcast. In this episode, we are talking about Artificial Intelligence and how it's changing our legal environment. Specifically, we'll be looking at machine learning, which is the ability for a machine to consume large amounts of data, identify patterns, make decisions without human intervention, and learn automatically as it goes along.

Andrew McCoomb  00:50
Uses of machine learning include predicting the outcome of litigation, calculating accident risk assessments, making administrative decisions affecting individual rights, and even inventing new technologies, music and works of art.

Ailsa Bloomer  01:03
But how does the use of AI fit with established legal principles such as causation and a standard of care? What ethical and procedural fairness issues arise when we rely on machines to inform our decision making? And if a machine can learn on its own, can it be an inventor? And if so, who owns AI generated IP?

Andrew McCoomb  01:25
Across the globe, courts, industries, legislatures and businesses are grappling with how to deal with these novel questions. To help us break down this brave new world, we spoke with Maya Medeiros and Jesse Beatson. Maya is an IP partner with a split practice across our Vancouver and Toronto offices, advising on IP strategy, domestic and international IP registrations and international portfolio management. Jesse is an associate in our Toronto office who studies the intersection of technology and law.

Ailsa Bloomer  01:56
Both Maya and Jesse have spoken and written extensively on this topic. They participated in a recent Law Commission of Ontario panel, and they have coauthored several articles and books, including Litigating Artificial Intelligence and Leading Legal Disruption: Artificial Intelligence and a Toolkit for Lawyers and the Law. Links to these resources are all in this episode's description.

Andrew McCoomb  02:30
So guys, let's start from the top. Can you set the scene for us? What are we talking about when we talk about Artificial Intelligence and what does that mean to you guys in your legal practice?

Jesse Beatson  02:39
So we're really starting with a million dollar question here. You know, what is AI? My short answer there is that an intelligent machine is one that can do things that it was once thought only a human being could. And so, you know, there are countless examples. There's a machine that can play chess better than the top Grandmaster – that's AI. There's one that can play Texas Hold'em and deceive its opponents better than a, you know, pro on PokerStars. There are also things that are more socially useful, like AIs that read radiology charts and provide medical diagnoses, or predict the outcome of court cases. So, an umbrella definition is that AI is a technology that exhibits high cognitive functioning. Okay, high cognitive functioning, but how do you get there? If it's by programming it with a bunch of human experts’ knowledge, then that's what's called an expert system. But if it's the kind of AI that's not programmed with knowledge in advance, but learns on its own by observing patterns, that's machine learning. And that's where a lot of the excitement and funding is right now, but also the risk, because that's where AI is able to interpret, reason and make independent decisions. And this gives rise to a whole host of new legal, ethical and governance issues. In terms of the practice of law, well, I just want to back up for a quick second, because I see AI as kind of this freight train – or, probably a better metaphor, a bullet train – bearing down on-- on all professions, really. A Canadian study from the-- the Public Policy Forum in 2019 surveyed about 2,000 Canadians, and more than half said that they definitely didn't fear losing their job to AI or automation within the next five years. However, when they were asked what they thought that would look like in 25 years, only a quarter of them were equally sure. In 25 years, it's really hard to know what society is going to look like, let alone the legal system or the legal job market. I think we're gonna see augmentation in the short term, and more and more automation in the long term. And, you know, it kind of comes down to a question of what kinds of work and what kinds of decisions a human brain should be involved in. As AI gets better, faster, stronger, smarter, etc., our answer to that question may continue to evolve, and hopefully, for our sakes really, AI won't be replacing lawyers but replacing old ways of working. We are seeing the rise of online courts and online dispute resolution, and so maybe physical courts will be less of a thing going forward. So, there are various trends, but I think it's about changing the ways that we work rather than just replacing lawyers with robots.
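
To make the expert system versus machine learning distinction concrete, here is a minimal sketch in Python. The rules, data and scikit-learn model are illustrative assumptions, not anything discussed in the episode:

```python
# Illustrative sketch: an "expert system" follows rules a human wrote down,
# while a machine learning model infers its own rules from example data.
from sklearn.tree import DecisionTreeClassifier

# Expert system: a human expert's knowledge, codified in advance.
def expert_system_triage(income: float, debts: float) -> str:
    # Hypothetical thresholds supplied by a human expert.
    if income > 50_000 and debts < 10_000:
        return "approve"
    return "refer to human"

# Machine learning: no rules are given; the model learns patterns from examples.
X = [[60_000, 5_000], [20_000, 15_000], [80_000, 2_000], [30_000, 12_000]]
y = ["approve", "refer to human", "approve", "refer to human"]
model = DecisionTreeClassifier().fit(X, y)

print(expert_system_triage(55_000, 4_000))   # applies the written rule
print(model.predict([[55_000, 4_000]])[0])   # applies whatever pattern it learned
```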

Ailsa Bloomer  05:16
You mentioned online dispute resolution, or ODR. And just to explain, this is essentially an online equivalent of alternative dispute resolution – so, the use of virtual technology to help resolve disputes. And-- and this technology is being used in-- in BC, isn't it, with the BC Civil Resolution Tribunal? I think that's the first of its kind in Canada. So can you just-- can you tell us a bit more about the motivations behind the system?

Jesse Beatson  05:42
Yeah, absolutely. This was just an innovative idea out in BC to, you know, bring together various minds – legal minds, regulators, data scientists – and sort of craft this digital alternative to physical dispute resolution. And the idea there, I mean, there are plenty of reasons motivating it, but one is access to justice and the accessibility of dispute resolution processes. But I just wanted to make a quick clarification that, you know, we're talking about different things when we talk about online dispute resolution on its own versus online dispute resolution with AI decision making. Online dispute resolution might just be human adjudicators in an online or digital forum, whereas there are different points at which you can introduce AI. You can introduce AI as a kind of screening tool to triage cases, for instance, all the way up to having the final arbiter or decision maker rely on an AI tool to render a verdict or a decision.

Ailsa Bloomer  06:49
Of course, that's a-- that's an important distinction to make, you know, the difference between having a virtual hearing, which we're obviously very accustomed to now, versus the use of AI in helping determine those hearings, and perhaps also the use of AI as a preliminary screening tool in the litigation process. For example, this makes me think of a couple of episodes that we recorded recently – one on class actions, where we were talking about certification motions, and one on litigation financing, where we talked about how litigation funders assess the prima facie merits of a case before deciding whether to invest. In both of those cases, you can see how AI decision making capabilities would be a useful screening tool. But I think this leads us on to a second point, which is, what ethical issues come along with that? It can't be as simple as just turning over the administrative decision making to a computer system without risk.

Maya Medeiros  07:45
Yes, so AI also raises a number of ethical issues. Take a simple example where an AI algorithm determines credit limits for credit cards, and it may be inherently biased. There is an example where the system gave a husband a credit limit 20 times higher than his wife's, which ultimately resulted in, you know, a regulatory investigation and a discrimination complaint. But I think it highlights some of the ethical issues. Bias, for example – whether that's bias in the data itself, or bias in the algorithmic process or the factors it weighs to determine that credit limit. It also raises an issue around transparency: how was that credit limit determined by the system? I'm not suggesting that there's full transparency in human decision making currently – you know, maybe we can do better with AI systems on that transparency, or explainability. How was that decision made? Why was that decision made? So these are some of the big ethical issues that are being posed by Artificial Intelligence decision making.
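
As a minimal sketch of the mechanism (the data, model and numbers below are hypothetical, not drawn from the credit card case Maya describes), a model fitted to biased historical decisions reproduces the bias along with everything else:

```python
# Illustrative sketch: a model trained on biased past decisions learns the bias.
from sklearn.linear_model import LinearRegression

# Hypothetical history: columns are [income_in_thousands, gender_flag], and the
# labels are past human-set credit limits that disadvantaged gender_flag=1.
X = [[80, 0], [80, 1], [60, 0], [60, 1], [90, 0], [90, 1]]
y = [40, 5, 30, 4, 45, 6]

model = LinearRegression().fit(X, y)

# Two applicants with identical incomes get very different predicted limits,
# because the model has learned the historical pattern, not creditworthiness.
print(model.predict([[70, 0], [70, 1]]))
```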

Ailsa Bloomer  08:42
Yeah, it's-- it's interesting because it's almost-- the AI is almost holding a mirror up to the programmers, right, and reflecting their own subconscious or internal biases. And so, on the issue of AI systems having a role in administrative decision making, there was an interesting recent case in Houston, Texas, involving the use of AI to assess teacher performance and make decisions on teachers' pay and termination. Jesse, can you talk us through that case and how it's illustrative of some of the broader issues in the use of AI in decision making?

Jesse Beatson  09:14
Yeah, absolutely. I mean, the whole idea of, you know, a fair decision making process is that those who are subject to the decision making have some access to the reasoning that went into it, and can then, you know, challenge that reasoning if there are merits there. But with AI decision making, especially with machine learning, in many cases the way this functions isn't entirely known even to the programmer – the-- the AI is kind of left to its own devices to do this pattern recognition. And so, that can pose issues if that same black box, as it's often called, is responsible for decisions that have major impacts on people and their rights. So here's a case in-- in Houston, where the school board decided that they would implement a more automated system for evaluating teacher competency. And so between 2011 and 2015, teachers in Houston had their job performance evaluated by this data driven appraisal algorithm called the Educational Value Added Assessment System, or EVAAS. And this algorithm informed the school board's evaluations of teachers in terms of who to give bonuses to, who to sanction, and even who to fire – and that's obviously where it becomes the most controversial. And of course, an employer's impulse to quantify employee performance is not new or inherently objectionable, and the goal behind the program was noble enough too: the idea was to have an effective teacher in every Houston classroom. But the situation was hardly fair to teachers. They couldn't challenge the decisions or receive an adequate explanation for them. The source code and information underlying the algorithm were trade secrets owned by the third-party vendor. And so, you know, when teachers were sanctioned or even fired, in some cases they had great difficulty challenging these decisions or even knowing what the basis of the decision was. And, as a result, a group of teachers got together, the teachers’ union coordinated it, and they launched a civil lawsuit in Houston.
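
To illustrate what "black box" means in practice (the metrics and model below are hypothetical, not the actual EVAAS system), a trained model can output a verdict and a score without any case-level reasons the affected person could inspect or challenge:

```python
# Illustrative sketch: the model returns an answer, but its "reasoning" is
# spread across hundreds of trees and thousands of learned split rules.
from sklearn.ensemble import RandomForestClassifier

# Hypothetical appraisal metrics per teacher, e.g. test score growth measures.
X = [[3.1, 0.4], [2.9, 0.8], [1.2, 0.9], [1.0, 0.2]]
y = [1, 1, 0, 0]  # 1 = rated "effective", 0 = rated "ineffective"

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

print(model.predict([[2.0, 0.5]]))        # a verdict for a new teacher...
print(model.predict_proba([[2.0, 0.5]]))  # ...and a confidence score, but no
# individualized explanation that the person affected could contest.
```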

Maya Medeiros  11:20
I think Jesse touches on a very important tension in this transparency or explainability, which is that a lot of these systems are complex, but they're also protected as trade secrets or confidential information. While aspects may be patented, generally the code is-- is a trade secret. So, how do you meaningfully disclose that without jeopardizing your trade secret protection? There will need to be some mechanism in place, some solution, to enable that disclosure while protecting those important assets.

Ailsa Bloomer  11:48
So the-- the Houston case was ultimately settled, but it's interesting that, in litigating the claim, one of the difficulties the plaintiffs encountered was how to get adequate discovery. In particular, the vendor who provided the algorithm was a third party to the litigation. They refused to disclose the source code for their model, i.e. how these teacher performance ratings were generated. And to your point, Maya, the-- the vendor was concerned with protecting its proprietary confidential information, its trade secrets. There was also nothing in the procurement contract that permitted the school board to have access to any of this data or the source code. So the board faced this procedural unfairness claim over its decision making process when the board itself did not know how those teacher evaluation scores were arrived at. I think the case highlights another issue in claims involving the use of AI in decision making, which is that with machine learning, even if you have the source code, that's only half the picture, right? You can't question the machine on how it arrived at a decision, or what weight it gave to certain variables and why. And so, ultimately, how do you explain these systems to a judge? How does an expert explain these systems to the court? I think in the Houston case, the plaintiffs' expert was allowed to go and look at the code on a computer screen, but they couldn't interpret it – it was just letters on a screen. So, the Houston case shows that there are some really basic evidentiary issues presented when companies are using these AI systems to make decisions that could end up being the subject of litigation.

Jesse Beatson  13:28
Absolutely, Ailsa. You know, from our own case law, you've got the White Burgess case, for instance – the courts are inherently cautious about the introduction of expert opinion evidence in litigation. But here we are in, let's say, a trial involving complex AI technology, and-- and you see a real need for expert advice just to bring everyone's technical literacy up to the appropriate level where they can work through some of these things. So, I think that's going to be a major tension in these cases.

Andrew McCoomb  13:59
Okay, so carrying that forward and thinking about commercial and administrative applications of AI systems and the risks involved: if you're in the business of selling or procuring machine learning technology, what kind of claims do you think we might see, and how are stakeholders in this industry managing their risks?

Maya Medeiros  14:17
Before anticipating those claims, I think contracts are evolving as a-- as a first matter. For these commercial products, often multiple stakeholders are working together, collaborating, so making sure that there's a clear contractual framework allocating and mitigating some of these risks – liability risks, indemnity risks, but also just who can do what with the technology, how is this going to be built, who owns what – and trying to get that as clear as possible at the outset in the contracts, to try to avoid some of these uncertainties in the law and these disputes, will be really important. Also consider novel issues: if you're developing or co-developing a machine with somebody else's data, that data owner might want to ensure that there are ethical safeguards. So, you know, elevating the legal standard to ethical standards by incorporating ethical guidelines into the contracts to govern that development process, to make sure that there are some safeguards around bias in their data, and making sure whatever is being built with their data has some protection. Also, we're seeing some-- some contracts addressing restrictions on the use of AI. So, if you build a-- a system that automates aspects of human conversation, for example, you want to ensure the actions of that autonomous machine can't skirt the law – if something would be illegal for a human to do, the machine shouldn't be able to do it. So even trying to codify some of these issues in contracts will help mitigate some of them before they happen, or at least ensure the parties have turned their minds to where the fault should lie. So we're seeing a lot on the-- on the contract side, trying to address some of those issues, some of those litigation risks, early on. I think we'll also see some interesting issues come up in tort law, consumer protection, product liability, as well as privacy and data rights, and intellectual property on the issues that AI systems raise.

Jesse Beatson  16:04
The tort area is one that I've been thinking about a bit, just because of how fascinating it is and the extent of the challenges too. I think, you know, one practical application of AI that we're going to be seeing a lot more of is self-driving vehicles. And some of the-- the legal questions that arise in this area are: how do we establish causation, what is the standard of care, and how do we assess a breach? The biggest safety advantage of an autonomous vehicle is probably that it is not human. According to the US Department of Transportation, nearly 94% of fatal crashes are due to human error. Self-driving vehicles don't get bored, they don't get tired or agitated, they don't have road rage, they don't get intoxicated. Now, the-- the current state of the art isn't necessarily at a point where it can react to uncertain and ambiguous situations with the same anticipation or skill as an attentive human driver. But how often is that level of skill actually in place on the roads? As the technology improves, I think human drivers will become more and more the thing that the law is concerned with limiting, as opposed to self-driving cars. But from a tort law perspective, an advantage of human driving is that we at least know how to handle the harms that result from it. There's case law that guides courts on when, where and to whom to assign responsibility, and the quantum of damages.

Ailsa Bloomer  17:35
Autonomous vehicles are a good example of kind of the-- the novel tort and causation questions that courts may be dealing with as AI becomes more integrated, I think, into our lives. And just thinking about what Maya said about ensuring machines aren’t programmed to do anything illegal, or anything a human isn't allowed to do. As you mentioned, a big motivating factor behind autonomous vehicles is safety and, in particular, the safety of the passenger or the user. So, suppose the software used in the vehicle is programmed to preserve user life first, with avoiding harm to external persons or property coming second, and then in an accident the software functions correctly but damage is still caused to third-party property. How is liability apportioned, given that negligence statutes talk about fault? How would fault be defined in this context? What would be the appropriate standard of care? And what would be reasonably foreseeable for the manufacturer?

Jesse Beatson  18:33
Yeah, a couple of really interesting questions there, Ailsa. I'll start with a scenario. Let's say you're on a highway, you’re in a self-driving car, and, as you say, it's-- it's programmed for self-preservation to a certain extent. A vehicle suddenly stops in front of you, and-- and there's a vehicle that's also tailing you close behind, but there's a motorcycle driver in the lane next to you. So what does the self-driving car do in this scenario? Does it take into account whether or not the motorcycle driver is wearing a helmet? Does it simply, you know, swerve out of the way to protect you or other occupants of the car, and not take that driver into account at all? Does it take into account the internal safety features within the car and sort of make a quick assessment that, you know, not a lot of harm will be caused to you, but a lot of harm would be caused to that motorcycle driver? So that's-- that's a really tricky question from a programming perspective, but it's also tricky in terms of the assignment of legal liability.

Maya Medeiros  19:29
I think the assignment of legal liability, just to touch on Jesse's comment there, is very interesting, particularly because these are very complicated systems. You're looking at an autonomous vehicle – you know, how many hundreds of parts are included in that system? And they could each be provided by an independent developer. If you have a-- a generic tool like an object detector, for example, all of a sudden is that company now on the hook if the autonomous vehicle makes a-- an error, when it was actually just a generic tool for any type of object detection, not necessarily for specific use in a vehicle? It also raises duty of care issues: does that generic component provider now owe, you know, pedestrians a special duty that they didn't otherwise owe? So I think the complexity of these systems – although, you know, vehicles generally are quite complicated – and the autonomous nature of some of these decisions, the sort of offloading from the human decision maker, is going to trigger a lot of interesting liability issues. Contracts are going to come in too, of course; there are going to be contracts governing all of these development processes. But tort law around duty of care, standard of care and causation will have to adapt, and potentially different liability frameworks will emerge to-- to rectify those harmed by these-- these new technologies.

Andrew McCoomb  20:46
Yeah, Maya, I think that's-- I think that's bang on, when I think about what this sort of self-driving car claim looks like when it hits the courts – say, someone's self-driving car strikes another vehicle. You're gonna have an action from the person who's been injured by a car that's being driven by a self-driving AI system, and they're going to sue the driver who was piloting that car, and then maybe that driver is going to turn around and sue the car maker and/or the AI system's maker. But the AI system's maker, maybe that's not just one entity – I mean, you’ve got multiple software developers who are building components for a really complex system. And so, maybe there are multiple parties there, all of whom likely have some kind of contract between them, you know, mitigating risk, displacing risk, limitations of liability, limitations on the types of claims that can be brought. I'm sure you guys have drafted many, many, many of those agreements over the years, and they're going to come into play affecting all of these different relationships. And, you know, if our insurance listeners out there feel their ears burning, I can see why. It just-- it seems like for some extended period of time – obviously, through the inflection point where self-driving cars eventually become the norm, where they become the legally and ethically defensible way to get from point A to point B on the ground – insurance is going to play a massive role in-- in really distributing the risk between these-- these various parties, these various participants in this whole space. It’s going to be hugely important for insurance to play that role.

Maya Medeiros  22:31
Absolutely, and I think we'll see some other interesting cases come up in the-- in the medical space as well, given the-- the use of digital health and AI powered digital health tools. There, medical professionals are involved – or even lawyers, right? I'm not going to let a tool say, oh, here's the decision on the case, and just blindly follow that decision; I'm going to use it as one piece of feedback in my research process. So I think keeping a human in the loop and using the AI as a support tool, as opposed to the ultimate decision maker, will be a good sort of transition strategy as well – just making sure there is that human vetting the decision, trying to find some explainability, poking there, at least doing some diligence so you have a defensible position in the event something goes wrong.
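
As a minimal sketch of that human in the loop pattern (the names, threshold and workflow are hypothetical, not a system discussed in the episode), the AI output is treated as one auditable input to a human decision rather than as the decision itself:

```python
# Illustrative sketch: an AI recommendation is logged for diligence and either
# escalated or presented to a human for sign-off, never acted on alone.
from dataclasses import dataclass

@dataclass
class Recommendation:
    label: str          # the tool's suggested outcome
    confidence: float   # the tool's self-reported confidence
    rationale: str      # whatever explanation the tool can surface

def human_in_the_loop(rec: Recommendation, threshold: float = 0.9) -> str:
    # Every recommendation is recorded so the decision is auditable later.
    print(f"audit: tool suggested {rec.label!r} ({rec.confidence:.0%}): {rec.rationale}")
    if rec.confidence < threshold:
        return "escalate: full human review required"
    return "present to human decision maker for sign-off"

print(human_in_the_loop(Recommendation("approve", 0.95, "matches precedent set A")))
print(human_in_the_loop(Recommendation("deny", 0.55, "sparse similar cases")))
```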

Andrew McCoomb  23:21
Those types of discussions and that human involvement, I'm sure, are going to inform the standard of care – kind of what is a reasonable use of AI – in the future. Maya, to draw on some of your expertise in the IP field, can you tell us about some of the intellectual property issues that flow from the use of AI more broadly?

Maya Medeiros  23:40
Absolutely. So, you know, on the litigation discussion, I think we're gonna see some interesting IP litigation around AI technology. For example, there's an increasing number of patent filings directed to AI inventions worldwide. And so, as those market competitors battle it out, there is going to be a lot of patent-related litigation enforcing those patents or trying to monetize that value some other way. Similarly, we're going to see a lot of challenges to traditional IP frameworks by AI-generated works. For example, AI systems are inventing, as well as creating artistic works and literary works. So, are those works protectable by copyright? Can a patent application have an AI inventor, and who owns that AI-generated IP? I think these are challenging questions that traditional IP frameworks worldwide are struggling to adapt to. For example, there is a patent application filed globally name-- naming an AI system as the inventor, and countries have to decide whether to award that patent. So far, we've seen the South African Patent Office say yes to that patent, but notably there is not a substantive examination process in South Africa, so that's-- that's just one thing to-- to keep in mind. The Australian Patent Office said no, but then the Federal Court of Australia overturned that decision and said yes, we actually can have an AI inventor; it's not necessary to have a human inventor. The European Patent Office refused the application, and the courts in-- in the UK, as well as the US, similarly refused that application and said that an AI machine cannot be an inventor. The Canadian Office is currently examining this patent application. They've issued a notice of non-compliance based on the non-human inventor being listed; however, in that notice, they suggested that the applicant, Dr. Thaler, a human, submit a statement that he is the legal representative of the inventor. So it sort of leaves open whether that will rectify the situation. And of course, even if the Canadian IP Office does decide to grant it, it can be challenged in the courts too. So this is ongoing in Canada, and something I'm very interested in – you know, in the patent world, this is exciting stuff for us. Similarly, as I mentioned, the same machine actually creates artwork as well. Dr. Thaler filed a copyright application in the US, and the-- the United States Copyright Office denied that application, again on the notion that you cannot have a non-human author – you need a human being to create the work. However, in Canada, for a different work – a piece of art that was created by both a human and an Artificial Intelligence painting application – the Canadian Copyright Office granted the registration. So if there's a human author, maybe that changes things. We have yet to see whether an AI can-- can generate a protectable work alone in Canada as a registered copyright, but we'll-- we'll see how that evolves. And, you know, even if a system can be a patent inventor or copyright author, it's still unclear how the applicant – the human applicant or corporate applicant – is entitled to own these creations. How can a machine transfer IP rights to a human? Is the machine's creator, by default, its legal representative? What if the owner is different from the creator? Or what if a user of the machine that was involved in the creation process is different from the creator and the owner? And so, I think all of these issues are going to have to be decided – they'll come up eventually in different interesting cases – and we'll see how, you know, traditional IP frameworks will adapt to respond to these novel questions.

Andrew McCoomb  27:43
So it's a renaissance machine. Going then from the IP framework to regulation around AI more generally, what can you guys tell us about the regulatory framework for the use of AI systems in Canada?

Jesse Beatson  27:58
Yeah, this is a really interesting area, Andrew. Law and the regulation of activities tend to take some time to catch up to new technologies, and we're certainly seeing that where AI is concerned. There's currently no federal AI-specific regulation in Canada, nor in the US, applicable to the private sector specifically. However, I also want to make the point that it's not a complete Wild West: there has been proposed legislation that's been considered, and there are laws and frameworks that are-- that are still relevant to AI, even if they're not specifically directed at it. So, Canada's existing laws may indirectly regulate the use of AI, big data and algorithmic decision making. For example, we have privacy statutes that restrict the use of AI in connection with personal data, and we have anti-discrimination or human rights laws that will address issues around algorithmic decision making where that crops up. An interesting example is in Québec, where we had Bill 64 on data protection, which includes new provisions around automated decision making that require organizations to inform individuals if a decision about them is based on automated processing. So, we've talked about, you know, the transparency issues around automated decision making, or AI informed decision making, but notice is another big piece too. For instance, in the States, people have been subject to these kinds of government automated decision making systems and didn't even know it. So this bill would ensure that some notice is-- is provided in those contexts.

Maya Medeiros  29:39
And we also have a federal directive targeting government decision making, so that might be an interesting thing to watch. That's at the-- the federal level, so there are of course gaps, given provinces also handle some very important government decisions – education, for example – but it's interesting to see. You're also seeing procurement requirements come in. The Canadian government has a list of AI suppliers, and they have a procurement process to establish those that have responsible and effective AI services, solutions and products. So I think the procurement space will be an interesting one – a sort of soft law – to see how that evolves: what requirements are going to be imposed, whether that's by the government, or maybe, you know, general corporate entities will also adopt similar procurement requirements, requiring these companies to disclose, to have fair systems, to provide some notice, that sort of thing, and to work with them together. We also see a lot in the United States around procurement and government contracts, and-- and how the US government is adapting and putting more strict requirements on software vendors in the AI space.

Andrew McCoomb  30:45
For Canadian clients in the AI space who may be looking to sell products or services abroad, are there any major legal developments in other jurisdictions that you would flag as being notable for Canadian businesses in this space?

Maya Medeiros  30:59
Yeah, absolutely. It's no surprise that Europe's a legislative heavyweight: Europe has an Artificial Intelligence Act, or proposed Act – I think they're going through various reviews and revisions to that Act. But it's a foreign law that can actually apply to Canadian businesses through its extraterritorial reach. It applies to systems that are in the EU, that are used on EU subjects, or that produce outputs that are used in the EU. And it has a broad, sort of expansive definition of AI systems – so, you know, back to the first question, what is AI? Legal definitions of AI are going to be very important, and hopefully there will be some harmony across jurisdictions, for our global entities, to make sure that they can try to comply globally with these standards. And so, it's a broad horizontal framework that has obligations to provide documentation and diligence, as well as notice of decisions. There are straight bans on specific uses of the-- of AI systems. So if you're going to be exploiting vulnerable groups, for example, that use is banned. If you're going to be using subliminal techniques to manipulate people's behaviour – banned. And there's a high-risk category, and there's a duty to inform, or duty to explain, if you're in that high-risk category. So there's a sliding risk scale to determine those obligations – those reporting and documentation obligations.

Jesse Beatson  32:24
I think Canadian entities, you know, both public and private, should definitely take note of the recent reforms in the EU, and I imagine we're going to be seeing similar pieces of legislation being proposed here in Canada and in the US. And, you know, there's going to be a-- a much more complex regulatory set of provisions and rules to track going forward. Ultimately, as I mentioned before about machine learning technology, that's where some of the greatest excitement is, and also some of the greatest risk, because it means the technology is inherently not so predictable. And that, you know, has the potential to cause great harm. And so, with regulation, we want to ensure that we have appropriate guardrails in place. I mean, Maya talked about the medical sphere before – there are some really exciting developments in AI technology in terms of medical diagnosis, and in the legal system, of course, as well. Anywhere that we, you know, raise the standard of professionalism through augmentation with-- with AI technology, we just need to ensure that the appropriate guardrails are in place.

Maya Medeiros  33:36
So yeah, while there is no AI-specific Canadian legislation, I think the European legislation provides some really great guidelines. Even if it doesn't apply directly to your activities, looking at that framework and trying to adapt your internal governance processes to align with it would be prudent, to ensure you have a defensible position in the event something unintended happens or a harm is caused by your deployment of AI systems. So even if it doesn't apply to you, try to see how you can adapt your governance processes to accommodate those reporting and monitoring requirements.

Ailsa Bloomer  34:09
We hope you enjoyed this episode of Disputed. If you'd like to find out more about this topic, or how to contact our guests, please visit nortonrosefulbright.com/disputed. Also, if you have any questions, feedback, or topics that you'd like us to cover in a future episode, please do email us at disputed@nortonrosefulbright.com. And if you would like to hear more, please subscribe to Disputed on Apple Podcasts, Spotify or wherever you get your podcasts. 

Norton Rose Fulbright Canada LLP is providing this podcast as a purely educational service. While it may contain legal information, it should not be construed as legal advice, a legal opinion or recommendation, or a statement of process or policy of Norton Rose Fulbright Canada LLP. The information, views and opinions expressed by guest speakers are entirely their own and their appearance on the podcast does not express or imply an endorsement by Norton Rose Fulbright Canada LLP of the information, views or opinions expressed by any guests, or of any entities they represent. Norton Rose Fulbright Canada LLP expressly disclaims any and all liability or responsibility for any direct, indirect, incidental or any other form of damages arising out of any individual’s or organization’s use of, reference to, reliance on, or inability to use this podcast or the information presented in this podcast.
