UC Law Science and Technology Journal

Authors

Stefan Heiss

Abstract

Are the E.U.’s proposals on artificial intelligence (AI) a major breakthrough or merely a token of an initial liability regime? Several initiatives were released in 2020 to take Europe’s digital future to the next level, while U.S. leadership remains hesitant to regulate AI. The recent E.U. proposals, which introduce strict liability and a certification procedure, are a first approximation of what is needed rather than an adoptable bill. Drawing on lessons learned from the E.U., this contribution outlines a liability scheme that strengthens the trajectory of AI’s long-term development only when that development is socially desirable. AI is characterized by self-learning, opacity, and autonomy, and its increasing ubiquity will put greater strain on the liability system. This contribution therefore considers the impact of AI on the major U.S. liability regimes, analyzes the effects of their application, and develops a flexible system for risky AI systems. Overall, it examines a fundamental challenge that AI raises for tort law: whether the applicable U.S. tort law doctrines are capable of setting proper incentives for the socially beneficial use of AI. The influence of AI on liability rules will be felt along two margins. First, existing rules must be adjusted to avoid application difficulties; otherwise, legal uncertainty will grow. Second, no single existing liability regime is capable of governing AI in a socially optimal manner. This contribution shows that, through their current liability rules and new proposals, both the U.S. and the E.U. neglect important opportunities to reduce the risks of AI and to promote AI innovation. The U.S. has already recognized that the global AI race is underway; this article outlines a first roadmap toward a leading position.
