There is no doubt that we are entering the “autonomous vehicle era.” Just nine months ago, the National Highway Traffic Safety Administration (“NHTSA”) issued a “Federal Automated Vehicles Policy” (“the NHTSA Policy”), which provides guidance on the safe design and development of “Highly Automated Vehicles”[i] (“HAVs”). The NHTSA Policy reflects the Department of Transportation’s view that “automated vehicles hold enormous potential benefits for safety, mobility and sustainability.” As we enter the autonomous vehicle era, inevitable questions arise as to how these driverless vehicles will impact motor vehicle litigation.
Much of the future landscape in motor vehicle litigation will depend upon the regulatory framework adopted by the federal and state governments. The NHTSA Policy sets forth guiding “reasonable practices and procedures” that manufacturers and suppliers should follow in developing HAVs. These standards and procedures will likely become rules and regulations in the years to come. Additionally, the NHTSA Policy encourages states to regulate HAV “drivers”[ii] for the limited purpose of enforcing traffic laws and to consider allocating liability among HAV owners, operators, passengers, manufacturers, and others when a crash occurs. The NHTSA Policy makes clear that regulations on the “performance” of the HAVs are exclusively within the province of the federal government[iii] while states should “examine its laws and regulations in the areas including insurance and liability and enforcement of traffic laws and regulations.” Therefore, it is anticipated that the federal government will issue unified safety standards for HAVs while individual states will update their traffic, liability, and insurance laws to regulate these vehicles.
This article sets forth the current legal framework in motor vehicle litigation in New York involving fully autonomous vehicles (“AVs”). In early April of this year, New York lawmakers approved a state budget bill that includes a new measure allowing AVs on New York highways for the limited purpose of testing or demonstration. Just last month, the Department of Motor Vehicles began accepting applications for autonomous vehicle testing. What happens if an AV is involved in an accident? For purposes of illustration, assume an AV is involved in a collision with another vehicle driven by a human (“non-AV”) and the non-AV driver is injured as a result. The injured non-AV driver may potentially sue (1) the owner of the AV (if different than the manufacturer); (2) the human driver of the AV (if human driving was involved); and (3) the manufacturer of the AV. Set forth below is an analysis of possible claims against each of these parties in this fact pattern.
- The Owner of the AV
In New York, in order to recover from the owner of a vehicle in a car accident, an injured plaintiff typically needs to prove that the owner was negligent and that such negligence caused his or her injuries. Negligence is defined as “lack of ordinary care,” which is the “failure to use that degree of care that a reasonably prudent person would have used under the same circumstances.” PJI[iv] 2:10. Additionally, New York Vehicle and Traffic Law (“VTL”) establishes “rules of the road” and violation of a VTL section constitutes negligence. PJI 2:26; Deleon v. N.Y.C. Sanitation Dept., 14 N.Y.S.3d 280 (2015).
In a typical motor vehicle case, even if the owner of the vehicle was not involved in the operation of the vehicle, the owner may nevertheless be found liable if he or she failed to properly maintain the vehicle and such failure resulted in the plaintiff’s injuries. Additionally, the owner may be implicated pursuant to VTL §388 which imposes liability on the owner of a vehicle for the negligence of a driver if the owner had given permission to the driver to operate the vehicle.
In our fact pattern, if the accident occurred as a result of the malfunctioning of the AV due to the failure to maintain the vehicle, including the software, then liability will likely attach to the owner. For example, if the owner failed to update the AV software as required by the manufacturer, or if the owner modified the software, then the owner will likely be found liable. However, unlike in a typical case, VTL §388 will likely not apply to the owner of an AV even though the owner technically gave permission to the AV software “driver” to operate the vehicle. This is because the statute, as it is currently written, imputes liability on the owner only for the negligent operation of the vehicle by a “person.” Specifically, VTL §388 provides that:
Every owner of a vehicle used or operated in this state shall be liable and responsible for death or injuries to person or property resulting from negligence in the use or operation of such vehicle, in the business of such owner or otherwise, by any person using or operating the same with the permission, express or implied, of such owner (emphasis added).
VTL defines a “person” as a “natural person, firm, partnership, association, or corporation.” As such, it is unlikely that the AV software would qualify as a “person” for purposes of VTL §388. Therefore, if the vehicle was in fully autonomous mode and its software “driver” simply made an incorrect prediction or decision, then the owner of the AV will not be implicated by the operation of VTL §388 since the AV software is not a “person.” Liability may attach, however, if human control of the AV was involved, such as when an occupant of the AV took over the control of the vehicle. If the AV was being operated by a “person,” then the owner of the AV will be liable for the negligence of the driver if the owner had given permission to the driver to operate the vehicle.
It should be noted, however, that the state legislature may choose to revise VTL §388 to impute liability on the owner for the decisions and actions of the AV, depending upon the state’s policy involving AVs. While clearly stating that allocating liability and regulating traffic rules remain the responsibility of the individual states, the NHTSA Policy does recommend that the term “driver” in state traffic laws be redefined to accommodate new scenarios which may be presented by a self-driving car. Specifically, the NHTSA Policy recommends that an HAV system that conducts the driving task and monitors the driving environment (generally SAE Levels 3-5) be considered the “driver” of the vehicle. For vehicles and circumstances in which a human is primarily responsible for monitoring the driving environment (generally SAE Levels 1-2), NHTSA recommends the state consider the human to be the driver for purposes of traffic laws and enforcement.
At this time, the New York Vehicle and Traffic Law defines a “driver” as “[e]very person who operates or drives or is in actual physical control of a vehicle.” VTL §113. As indicated above, VTL §131 defines a “person” as “[e]very natural person, firm, partnership, association, or corporation.” Therefore, it does not appear that a software “driver” would be considered a “driver” for purposes of the New York traffic law. However, based upon the recommendations in the NHTSA Policy, the New York State Legislature will likely change the definition of “driver” to include both a “person” and an HAV system. It is, therefore, possible that the Legislature may also revise VTL §388 to impute liability on the owner of a vehicle for the negligent operation of the vehicle by either a person or an HAV system.
- The Driver of the AV
Similar to seeking recovery from the owner of a vehicle, an injured person suing the driver of a vehicle must prove that the driver was negligent in the operation of the vehicle and that such negligence caused the injuries. The inquiry is generally whether the driver used reasonable care in the operation of the vehicle. Additionally, the VTL establishes rules of the road that a driver must abide by, and a violation of the VTL is prima facie evidence of negligence.
In our fact pattern, therefore, if human operation of the AV was involved, then the liability of the human driver would be determined according to the “reasonable person” standard mentioned above. If a human driver was forced to take control of the AV because of issues arising out of the software, and the accident nevertheless occurred, then the human driver’s liability will depend upon whether his or her conduct was reasonable under the circumstances.
- The Manufacturer of the AV
In addition to suing the owner and driver of the AV, an injured person may also make a claim against the manufacturer on products liability grounds. If an injured plaintiff alleges that the software “driver” did not act properly and caused the accident, then a design defect claim may be implicated. For example, a plaintiff may bring a design defect claim if the AV incorrectly predicted the movement of another vehicle or made a questionable driving decision. An injured person claiming a design defect may allege causes of action in negligence and strict products liability. Under the strict liability theory, a manufacturer is liable if the injury was caused by a defective product that was used for its intended or reasonably foreseeable purpose. Under the negligence theory, in addition to proving a defective product, the plaintiff also needs to prove that the manufacturer knew, or in the exercise of reasonable care should have known, that the product was defective.[v]
Under both strict liability and negligence theories, a product is “defective” if it is not “reasonably safe.” PJI 2:120. A product is not reasonably safe if a reasonable person who knew or should have known of the product’s potential for causing injury and of any feasible alternative design would have concluded that the product should not have been marketed in that condition. In deciding whether a product was defective, the jury is required to balance the risks involved in using the product against (1) the product’s usefulness and its costs, and (2) the risks, usefulness and costs of the alternative design as compared to the product in question. PJI 2:120, 2:126. To prove his or her case, a plaintiff is “under an obligation to present evidence that the product, as designed, was not reasonably safe because there was a substantial likelihood of harm and it was feasible to design the product in a safer manner.” Voss v. Black & Decker Mfg. Co., 59 N.Y.2d 102, 108 (1983). The defendant manufacturer, on the other hand, may present evidence showing that “the product is a safe product–that is, one whose utility outweighs its risks when the product has been designed so that the risks are reduced to the greatest extent possible while retaining the product’s inherent usefulness at an acceptable cost.” Id.
Additionally, a product is, as a matter of law, “not reasonably safe” if a Federal Safety statute is violated. See Feldman v. CSX Transp., Inc., 31 A.D.3d 698, 703 (2d Dept. 2006). The Federal Motor Vehicle Safety Standards (FMVSS) are regulations setting forth minimum safety performance requirements for motor vehicles or items of motor vehicle equipment. If such a safety standard is violated, then the product is not “reasonably safe.” However, compliance with a Federal Safety Standard constitutes “some evidence” of due care but does not by itself preclude the imposition of liability. See Lugo v. LJN Toys, Ltd., 146 A.D.2d 168 (1st Dept. 1989).[vi]
In applying these principles, the injured non-AV driver will have to prove that the AV software, as designed, was substantially likely to cause harm and that there was a safer alternative which is not cost-prohibitive. In deciding whether the AV software was “substantially likely to cause harm,” a jury will necessarily have to first determine whether the AV’s behavior in the accident was improper. If the AV had acted properly, then the AV software, as designed, was clearly not likely to cause harm. The “substantially likely” standard also suggests that the jury will need to consider the likelihood of a specific accident fact pattern occurring. Additionally, the non-AV driver would need to present expert evidence of an alternative safer design that is not cost-prohibitive. Such a “safer alternative design” will likely take the form of better machine learning algorithms, a rule-based algorithm, or increased data input (training) to enable the AV software to make better decisions.
However, this standard may be difficult to apply in cases involving a self-driving car’s software, as an inquiry into the propriety of an AV’s decision or behavior involves a value judgment that could differ from individual to individual. A jury in one case may find an AV’s decision or behavior improper while a different jury may return a different result. Should an AV be found “defective” just because it made a decision that five people on the jury disagree with? Should a manufacturer face liability each time a jury questions a decision made by an AV? Additionally, what standard should an AV’s behavior be held to? Should an AV be held to a “reasonable person” standard as in a standard motor vehicle case? Since most AVs are programmed to drive more conservatively and marketed as safer than a human driver, should they be held to a higher standard of behavior, such as a “reasonable machine” standard?
Additionally, a recent study[vii] by the London School of Economics found that some drivers intend to “bully” AVs when they hit the road, driving aggressively around them on the assumption that the AVs will have to stop and let the bully through. Such behavior may create a higher risk of accidents for AVs. Should an AV be programmed to predict such bully behavior? Furthermore, in the case of an imminent crash, should the vehicle prioritize the well-being of passengers or pedestrians? This is yet another example of a value judgment that may differ from one person to another.
As illustrated above, the advent of the autonomous vehicle era necessarily creates the need for change in the law. Such changes will likely be made by both the federal and state legislatures, with the courts filling the gaps. A new legal landscape will inevitably emerge as self-driving cars enter the marketplace.
[i] The NHTSA has adopted the SAE International (“SAE”) definitions for levels of automation in vehicles. HAVs are SAE Levels 3-5 vehicles, i.e., vehicles with the ability to monitor the driving environment.
[ii] The Policy, in various places, refers to the automated vehicle system as the “HAV’s computer ‘driver’” and suggests that states should update references to a human driver as appropriate when evaluating their laws and regulations.
[iii] The Vehicle Safety Act expressly preempts states from issuing any standard that regulates performance if that standard is not identical to an existing Federal Motor Vehicle Safety Standard (“FMVSS”) regulating that same aspect of performance.
[iv] The New York Pattern Jury Instructions (PJI) are used by judges throughout New York State to instruct juries in civil cases.
[v] It should be noted, however, that the Court of Appeals has stated in dictum that causes of action for negligent design and defective design are “essentially identical” and that separate jury questions on each theory were “redundant.” It is currently unclear whether the Court of Appeals intended to eradicate all distinctions between negligent design and defective design claims.
[vi] However, liability may not be imposed upon a manufacturer on a theory that has been pre-empted by federal law, that is, if the theory directly conflicts with a Federal Safety Standard or stands as an obstacle to the accomplishment of a Federal Motor Vehicle Safety Standard. See Geier v. Am. Honda Motor Co., 529 U.S. 861 (2000).
[vii] Think Good Mobility Survey 2016, http://media.wix.com/ugd/efc875_d98af657dce04c72a4c167a9efd93757.pdf