
Can AI-powered Robots Be Sued for Medical Malpractice?

February 14, 2020 Julie Clements 0 Comments

Just as in other business sectors, research and testing of AI-powered devices are ongoing in the medical field. Machines can analyze huge amounts of medical data during medical record review and find details that a doctor or lawyer may miss. Artificial intelligence (AI) tools are considered especially useful with electronic health records. Studies show that EHRs can be transformed into efficient doctors’ aides that provide high-quality, clinically relevant medical information in real time. Similarly, surgical robots are being used to assist surgeons and expand their capabilities, helping to reduce the stress and fatigue surgeons experience. Because the robotic arms are steady, they are used in surgeries such as hysterectomies, cancer operations, and gallbladder removal. However, there are serious concerns associated with the use of artificial intelligence and machine learning (ML) in the medical field.


Robotic devices used for surgical procedures can malfunction and cause adverse patient outcomes. Surgical robots have been in use for about two decades, and according to the FDA, there were 1,391 patient injuries and 144 patient deaths during the first decade of their use. These injuries and deaths were mostly due to device malfunctions and technical difficulties. Grave errors occurred most often during highly complicated procedures such as cardiothoracic surgery.

When a robotic device malfunctions, the question arises as to who is to blame. AI is currently used in medicine as “decision aids,” which legally means these tools are meant to complement, not substitute for, the judgment of a specialist. Therefore, the physician or provider using the AI device could be held liable for anything that goes wrong.

A robotic device may make a diagnosis and prescribe a drug to which the patient is allergic if the allergy information has not been entered in the medical record. In such an instance, the device acted within the medical standard of care because it could not have been aware of the allergy. Defective programming or mechanical error in AI devices can also lead to adverse patient outcomes. In those instances, the programmer of the AI system could be held liable for negligent or improper programming, and the person tasked with monitoring and maintaining the AI device could be found liable for failing to recognize or correct the issues.

If a legal case arises following an adverse patient event, the question is whether it is a malpractice case or a product liability case. Litigation connected with robotic surgery may combine medical malpractice law and product liability law, even though the two are very different from each other.

  • In medical malpractice litigation, the rule applied is that of negligence: the doctor’s failure to provide the accepted standard of care, based on customary practice, resulted in injury or harm to the patient. If the doctor can prove that he/she acted as a reasonably prudent person and performed a procedure or technique accepted within the medical community, then there may be no medical malpractice involved. However, robotic surgery is still evolving, and it is difficult to determine whether a particular method is accepted in the medical community and validated in clinical practice.
  • When using a robotic device, the physician operating it has the responsibility to provide the accepted standard of care by using the instrument correctly. If the doctor operates the device incorrectly, the patient has to prove that the likelihood of the device malfunctioning would have been much lower had the procedure been performed at another healthcare facility or by a different surgeon. Since every robotic device carries a risk of malfunctioning, this may be difficult to prove.

The FDA insists that doctors who perform robotic surgery must have special training, experience, and high-quality skills assessment. In addition, at least one other surgeon who is equally trained in operating the device must be present at the operating table. The FDA also requires manufacturers to provide hands-on training courses to surgeons and to adequately warn them about the hazards involved. The complexity of cases involving AI use in medicine also stems from the lack of consensus on how much training is adequate to accredit a surgeon to operate a robotic device; at present, each healthcare organization has its own accreditation process. This is why it is challenging to identify the specific duties and responsibilities of the various parties involved.

A product malfunction doesn’t place the blame on the doctor providing the treatment. However, doctors have to inform patients about the treatment, its risks, and the action to be taken in case of any malfunction. Manufacturers, for their part, must provide adequate warning to consumers regarding the risks of their products, and doctors must pass these details on to their patients. A manufacturer who has provided adequate educational instructions and warnings to the physician will not be held liable. If a doctor fails to adequately warn the patient about the risks and limitations of the device, he/she could be found liable for medical malpractice. In robotic surgery litigation, this overlap between the surgeon’s duty and the manufacturer’s duty creates considerable complexity.

As a company assisting medical malpractice and personal injury attorneys with medical review services, we understand that healthcare providers must ensure they use robots and other AI-powered devices in a legal and ethical manner, in keeping with the standards of care.
