Ah, ethics. Those pesky thought exercises that so interfere with business dealings. What’s next, morals? Well, in this article’s case it involves artificial intelligence (AI) and how it is programmed. AI can certainly do many things better than us humans, BUT someone has to come up with the algorithms and programs. Are those people ethically “challenged”? What are their biases? So, an ethical panel to oversee and be the watchdog of AI programs? Hmmmm, WHO appoints the panels? So many thoughts. No wonder Elon Musk is worried about AI and where it will take us.
“About a week ago, Stanford University researchers posted online a study on the latest dystopian AI: They’d made a machine learning algorithm that essentially works as gaydar. After training it with tens of thousands of photographs from dating sites, the algorithm could perform better than a human judge in specific instances. For example, when given photographs of a gay white man and a straight white man taken from dating sites, the algorithm could guess which one was gay more accurately than actual people participating in the study. The researchers’ motives? They wanted to protect gay people. “[Our] findings expose a threat to the privacy and safety of gay men and women,” wrote Michal Kosinski and Yilun Wang in the paper. They built the bomb so they could alert the public about its dangers.
Alas, their good intentions fell on deaf ears. In a joint statement, LGBT advocacy groups Human Rights Campaign and GLAAD condemned the work, writing that the researchers had built a tool based on “junk science” that governments could use to identify and persecute gay people. AI expert Kate Crawford of Microsoft Research called it “AI phrenology” on Twitter. The American Psychological Association, whose journal was readying their work for publication, now says the study is under “ethical review.” Kosinski has received e-mail death threats…”
Full Story at Wired