10 March 2020


Who is liable when AI fails?

In the 1980s, the Therac-25 radiation therapy machine delivered at least six fatal or near-fatal overdoses of radiation to cancer patients. The accidents were caused in part by defects in the machine's software, compounded by flaws in the overall system design, and further problems emerged when the system was upgraded at various hospitals. The correct apportionment of liability is still debated today.

So far, Australia's legislative focus has been on regulating liability for limited and specific forms of AI, such as autonomous cars and drones. There is no legislation dealing with liability for damage caused by AI generally.

Our experts look at regulating liability for different forms of AI in Australia.
