“Fairness, reliability/safety, privacy, inclusiveness, transparency, and accountability.”

These six are Microsoft’s ethical principles for developing artificial intelligence (AI) that serves humans. The U.S. multinational tech firm’s ethics committee halts any project that runs against them.


Jarom Britton, regional attorney in the Health, Education and Public Sector in Asia at Microsoft, spoke on “Ethics in the Age of AI: Implications for Medical Research and Technology” during the “Dementia & Technology” conference at Yonsei Cancer Center, Seoul, Monday.

“Some say that AI has a bright future, while others say it is threatening human beings,” Britton said. “Microsoft’s goal is to make technologies useful to humans.”

Fairness, one of Microsoft’s six principles, concerns bias, he explained. “An AI system recognized a white man’s face correctly but failed to recognize a black woman’s face. The training data was biased, and that skewed the results. If we had applied that software in the medical sector, it would have caused significant errors,” he said.

Britton went on to stress how important the six principles are. “Microsoft has an ethics committee, just like hospitals do. The committee reviews ethical issues, not profits,” he said.

“The committee has discontinued some projects because they breached Microsoft’s values. Humans should be at the center of all AI development.”

Britton emphasized that AI and humans need to collaborate, saying the success of healthcare services will hinge on the outcome of that collaboration.

“Wearable devices will check individuals’ health status 24 hours a day. As AI provides infinite possibilities, we need to utilize it responsibly,” he added.

Copyright © KBR. Unauthorized reproduction and redistribution prohibited.