AI in aviation: Innovation and legal complexities

Failure in the aviation industry, though rare, is often catastrophic. As AI adoption increases across industries, it is inevitable that the aviation industry will embed AI into more of its operations – from maintenance and crew scheduling to air-traffic management and even flight controls.

While AI could enhance the efficiency and accuracy of aviation operations, its use presents thorny legal questions for the industry. How can the industry pursue safe AI-powered innovation while navigating these legal challenges responsibly?

Responsibility vs. Liability

The core question in almost all AI-related failures is: Who is at fault? In other words, who should bear responsibility for the use of AI? The developers? The owner of the platform on which the AI is used? The end users themselves?

There is an argument to be made for each:

  • The developers of the AI should be responsible for ensuring that the AI is trained to be both accurate and reliable, and that it is trained on sufficient data to make it usable for all anticipated uses.
  • The owner of the platform on which the AI is published should be responsible for monitoring its use and mitigating any risks caused by the use of the AI through their platform.
  • The end users of the AI, given appropriate warning of its risks (e.g., via a user agreement), should be responsible for using the AI responsibly and in line with the developer’s intended use.

However, responsibility and legal liability are often independent. As far as legal liability is concerned, the concept of shifting liability (i.e., culpability moving from a first party to a second party) merits consideration here.

When determining the respective liability of the developers of the AI and the owner of the platform on which the AI is published, the primary legal issue is AI’s “personhood.”[1]

  • If AI has personhood, it can be treated as an independent legal entity (think: a corporation or similar entity), and unique forms of liability can be applied to the developers themselves. See, for example, products liability[2] and strict liability[3].
  • If not, AI is simply considered an “agent” of the owner of the platform on which the AI is published, opening the legal door to vicarious liability[4], where the “employer” (i.e., the platform owner) is liable for the actions of its agents.

To avoid these liability risks, it is legally advantageous for AI developers and platform owners to transfer the legal liability to the end user. Because AI systems are “black box” algorithms with a known layer of unpredictability, legal causation is difficult to establish. A court could therefore find that a “reasonable” user of AI would understand the risks of use and is thus individually liable for any failure resulting from their knowing use of the AI.

Will Regulation Preempt Legal Liability Questions?

Before we get an overarching court ruling on legal liability related to the use of AI, there is another consideration: direct regulation of the use of AI.

The Federal Aviation Administration (FAA)[5], which is responsible for the safety of civil aviation, is one of the most active regulators among U.S. government agencies. Should the FAA adopt its own AI regulation specific to its jurisdiction, it could preempt some of these questions around legal liability and AI.

Alternatively, another governmental agency — or a new agency dedicated to the adoption of AI across multiple sectors — could be called upon to draft these regulations.

We will have to wait and see which moves faster in the aviation sector — litigation or regulation.

In the second and final installment of this series, we will cover the legal liability of patent owners.

[1] As a parallel aside: In the United States, the Copyright Office has taken the position that AI systems cannot be considered authors and, therefore, cannot own copyright. Under this view, works created solely by AI lack the human authorship required for copyright protection, which in turn indicates that AI likely does not have “personhood” in the eyes of the Copyright Office. Will courts in other areas follow suit? https://www.reuters.com/world/us/us-appeals-court-rejects-copyrights-ai-generated-art-lacking-human-creator-2025-03-18/

[2] https://www.law.cornell.edu/wex/product_liability

[3] https://www.law.cornell.edu/wex/strict_liability

[4] https://www.law.cornell.edu/wex/vicarious_liability

[5] https://www.faa.gov/about/mission/activities