"Machine learning is only as good as the information provided to train the machine. Models trained on partial datasets can skew toward demographics that often turned up in the data—for example, Caucasians or men over 60. There is concern that...
"Machine learning is only as good as the information provided to train the machine. Models trained on partial datasets can skew toward demographics that often turned up in the data—for example, Caucasians or men over 60. There is concern that “analyses based on faulty or biased algorithms could exacerbate existing racial gaps and other disparities in health care.” Already during the pandemic’s first waves, multiple AI systems used to classify x-rays have been found to show racial, gender, and socioeconomic biases.
Such bias creates a high potential for poor recommendations, including false positives and false negatives. It is critical that system builders be able to explain and qualify their training data, and that those who best understand AI-related system risks are the ones who shape health care systems or modify applications to mitigate AI-related harms."
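The disparities described in the quote can be checked directly. Below is a minimal illustrative sketch in Python—not from the article or any named system—using hypothetical labels and demographic group tags. It measures how a training set is distributed across groups and compares false-positive and false-negative rates per group, the two failure modes the quote warns about.

```python
from collections import Counter

def composition(groups):
    """Share of the dataset belonging to each demographic group."""
    counts = Counter(groups)
    total = len(groups)
    return {g: n / total for g, n in counts.items()}

def error_rates(y_true, y_pred, groups):
    """Per-group false-positive and false-negative rates."""
    rates = {}
    for g in set(groups):
        idx = [i for i, grp in enumerate(groups) if grp == g]
        neg = [i for i in idx if y_true[i] == 0]  # true negatives + false positives
        pos = [i for i in idx if y_true[i] == 1]  # true positives + false negatives
        fpr = sum(y_pred[i] == 1 for i in neg) / len(neg) if neg else float("nan")
        fnr = sum(y_pred[i] == 0 for i in pos) / len(pos) if pos else float("nan")
        rates[g] = {"false_positive_rate": fpr, "false_negative_rate": fnr}
    return rates

# Hypothetical example: group "A" is overrepresented in the data, and the
# model's errors concentrate in the underrepresented group "B".
y_true = [1, 0, 1, 0, 1, 0, 1, 0]
y_pred = [1, 0, 1, 0, 0, 1, 0, 1]
groups = ["A", "A", "A", "A", "A", "B", "B", "B"]

print(composition(groups))                    # {"A": 0.625, "B": 0.375}
print(error_rates(y_true, y_pred, groups))    # group B's error rates are far higher
```

In this toy run, group B contributes only 37.5 percent of the records and absorbs all of the model's errors, the pattern of overrepresentation-driven skew the quote describes. Real audits would use the same per-group comparison over clinical datasets.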
Richard E. Anderson is chairman and chief executive officer of The Doctors Company and leader of the TDC Group of companies.
He shares his story and discusses his KevinMD article, "Artificial intelligence, COVID-19, and the future of pandemics." (https://www.kevinmd.com/blog/2020/11/artificial-intelligence-covid-19-and-the-future-of-pandemics.html)