
Is AI in Medicine a Big N-O?

  • Deekshita Gorrepati
  • Jan 30, 2021
  • 4 min read

This century has been a unique one, marked by our dependence - maybe even overdependence - on Artificial Intelligence (AI). The world around us is rapidly integrating AI into everything, from social media to shopping to speech processing, all with the intent of making our lives that much easier.

Recently, there has been increased attention on how companies use AI to exploit us just to fill their pockets with cash. However, the use of AI in healthcare has, without question, been for the greater good of patients and the global community. Physicians are still expected to uphold the 2,000-year-old Hippocratic Oath: to treat the ill to the best of one's ability and to value patient privacy without being directed by self-interest. These healthcare workers go through their days with the sole purpose of doing everything they can to help their patients. Scientists and engineers have used AI to develop software that performs exceptional tasks even our complex human brains cannot, including processing huge amounts of data, cross-analyzing different cases to identify diseases or potential treatments, spotting patterns, and countless others (Myers). With such great potential, the medical field is also rapidly being dominated by AI.

Specifically, AI applications in healthcare can start with regular yearly checkups, in which primary care physicians can use AI to “take their notes, analyze their discussions with patients, and enter required information directly into Electronic Medical Records systems” (Amisha et al.). Even simple changes, from small clinics to big hospitals, can make a huge difference, especially since this patient data can be collected and analyzed to detect any drastic changes and prompt the necessary medical attention. Even the most educated physicians cannot always remember every patient’s history by heart or deduce every condition. With AI, however, a new concept of “precision medicine” has been introduced that allows physicians to look into their patients’ genes and proteins to diagnose, plan treatment, or even prevent disease (Amisha et al.). Case studies have shown that AI systems have surpassed dermatologists in detecting skin lesions because of their ability to cross-analyze the many cases loaded into their systems (Amisha et al.).

Though doctors strongly advise against self-diagnosing, new applications have combined human pathology with AI, allowing patients to track their health throughout their daily lives. Cardiologist, researcher, and author Eric Topol of Scripps Research claims that this is the age of the “medical selfie,” in which ordinary people will be able to screen for skin cancers with a simple selfie (Myers). Now, how convenient is that! At the moment, AI + healthcare definitely does not seem horrendous considering all its benefits. However, the speed at which this transition is happening may pose unintended consequences that need to be assessed.

Like humans, machines aren’t perfect. Nothing is. They are bound to malfunction, leading to errors, injuries, and, god forbid, deaths. Though AI is coded with extensive algorithms that maximize success rates, mistakes are still plausible. There is the possibility that “an AI system recommends the wrong drug for a patient, fails to notice a tumor on a radiological scan,” or incorrectly concludes that one patient needs more attention than another (Price II). These may all be honest mistakes, but a person’s life is at stake. Is it worth it? Then again, can’t physicians make the same mistakes? According to a study, more than one in three physicians has faced a medical liability lawsuit (O’Reilly). Physicians are clearly susceptible to the same kinds of mistakes, so why should it be any different with AI?

On top of being prone to mistakes, AI technology in healthcare has also raised new concerns, such as patient confidentiality and the possibility of discrimination. For AI systems to properly detect and identify different diseases and treatment methods, they require large datasets drawn from a large population sample. The problem is that many patients see this as an invasion of their privacy, especially if there is a possibility of this information being released to third parties such as banks or life insurance companies (Price II). As for the discrimination problem, AI algorithms are greatly influenced by the underrepresentation of certain racial and gender groups. Google technical program manager Donald Martin has described how the datasets these algorithms are based on are “drawn from largely white, Northern European populations” (Myers). Such bias in AI systems has unintentionally led to an unfavorable discrimination problem in healthcare. For instance, a wrist-worn device that monitors heart activity using green-light technology works poorly, or not at all, for African Americans (Myers). An invention created to help patients cannot be successful if it is skewed in whom it can help.

However, this is not to say that there are no ways to address these flaws. Martin recently developed “Community-Based Systems Analysis,” a process that identifies and removes biases in data gathering and analysis that influence decision-making (Myers). If the medical field keeps up with AI without sacrificing patients’ safety or diversity, we can head in the right direction. The medical field is one of the few areas that has not been completely tainted by selfish intentions, and let’s keep it that way. Let’s not let technology dominate our lives; instead, let’s use it wisely for the greater good. Let’s say Y-E-S to a more inclusive and successful field!

__________________________________________________________________________________

Citations

O'Reilly, Kevin B. “1 in 3 Physicians Has Been Sued; by Age 55, 1 in 2 Hit with Suit.” American Medical Association, 26 Jan. 2018, www.ama-assn.org/practice-management/sustainability/1-3-physicians-has-been-sued-age-55-1-2-hit-suit.

Price II, W. Nicholson. “Risks and Remedies for Artificial Intelligence in Health Care.” Brookings, 6 May 2020, www.brookings.edu/research/risks-and-remedies-for-artificial-intelligence-in-health-care/.

Myers, Andrew. “The Future of Artificial Intelligence in Medicine and Imaging.” Stanford HAI, 1 Apr. 2020, hai.stanford.edu/blog/future-artificial-intelligence-medicine-and-imaging.

Amisha, et al. “Overview of Artificial Intelligence in Medicine.” Journal of Family Medicine and Primary Care, vol. 8, no. 7, 2019, pp. 2328-2331, doi:10.4103/jfmpc.jfmpc_440_19.

© 2020 by Deekshita Gorrepati.