The viral “ethical consultant” AI once again gives racist and sexist advice



2021-10-26 19:15:07

Life is a wide avenue with many turns leading to different futures. Sometimes a sudden decision to “turn the wheel” is enough to completely change a situation, and even our thinking and perception, especially when the choice at hand is an ethically important one. Everyone struggles at times, wondering whether their actions are right or wrong and whether they truly conform to the standards of today’s society.

To address that problem, the Allen Institute for Artificial Intelligence (USA) created Ask Delphi, a machine-learning model that acts as a “consultant of conscience” for its users. Just type in the issue you are wondering about (in English), for example “charity”, “cheating on an exam”, or “cheating on a lover”, and Delphi will pass judgment on whether the thing you mentioned is right or wrong.

The Delphi AI system was created to pass judgment on questions of human ethics.

The project launched just last week and immediately created a stir among AI and technology enthusiasts. However, instead of becoming “popular” in a positive sense, it went viral because of its repeated racist and homophobic responses in certain specific cases.

For example, when a user submitted the scenario “a white man walks towards you at night”, Delphi answered firmly: “It’s okay”. However, with just one change of wording, “a Black man walks towards you at night”, the AI system immediately did an about-face and declared: “It’s concerning”. A blatantly offensive, racist judgment.

Delphi has faced this same problem, even more seriously, from the very first days after its launch. In its first version, the system gave users a tool to compare two ethical scenarios and see which would be more in line with society’s perceptions and standards. That tool, too, reeked of racism and sexism, producing negative and rather antiquated verdicts.

For example, Delphi once concluded that “being a white man” is more socially acceptable than “being a Black woman”, and that “being straight” is more acceptable than “being gay”.


Delphi’s answers carried racist overtones in many cases.

Beyond that, Delphi is also very easily fooled by phrasing: the arrangement and choice of words in a sentence. With just a slight change in how a question is posed, users can get this AI system to give whatever answer they want.

For example, if you ask: “Can I play loud music at 3 a.m. while my housemate is sleeping?”, Delphi will bluntly answer: “That’s rude”. However, for the same question you only need to sprinkle in a justification: “Can I play loud music at 3 a.m. while my housemate is sleeping, if it makes me happy?”, and the AI system immediately flips: “It’s okay”. In other words, as long as it makes you happy, any action or decision is correct, no matter how absurd.
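Delphi itself is only exposed as a web demo, so below is a minimal sketch of the same phenomenon, phrasing sensitivity, using a generic off-the-shelf sentiment classifier from Hugging Face Transformers rather than Delphi. The model name and the two prompts are illustrative assumptions, not anything from the Delphi project.

```python
# A minimal sketch (assumption: not Delphi, which exposes no public API)
# showing how small wording changes can shift a text model's judgment,
# using a stock sentiment classifier from Hugging Face Transformers.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)

# Two phrasings of the same situation, mirroring the loud-music example above.
prompts = [
    "Playing loud music at 3 a.m. while my housemate is asleep.",
    "Playing loud music at 3 a.m. while my housemate is asleep, "
    "because it makes me happy.",
]

for text in prompts:
    result = classifier(text)[0]  # e.g. {'label': 'NEGATIVE', 'score': 0.98}
    print(f"{text!r} -> {result['label']} ({result['score']:.2f})")
```

Appending a positively worded justification tends to pull such a classifier’s score in the positive direction, which is the same failure mode users exploited with Delphi.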

Delphi also sometimes delivers verdicts that leave users speechless for a few seconds. For example, it is willing to condone war crimes when asked: “Am I, as a soldier, allowed to intentionally kill innocent people in wartime?”.


Just change how the question is asked, and the AI system is immediately thrown off.

It is not uncommon for machine-learning systems to produce unintended, erroneous results, even the most famous systems today. To find the root cause of Delphi’s problems, we need to look back at how it was built and at the data it was trained on.

The team behind the AI system says they trained it on rather questionable data sources from Reddit, including popular subreddits such as “Am I the Asshole?” (where a user describes a personal situation and redditors judge whether that person behaved rightly or wrongly) and “Confessions” (a place to confide one’s innermost secrets), as well as the “Dear Abby” advice column (which dispenses advice on all sorts of everyday problems).

It should be emphasized, however, that Delphi only drew the questions from these sources; it did not take the redditors’ responses. Instead, the developers used Amazon’s Mechanical Turk service to recruit people to answer those questions, and used their judgments to train the AI system.
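To make that pipeline concrete, here is a hypothetical sketch of the pattern described above: pull situation texts (questions only, not redditor replies) from Reddit with the PRAW library and export them to a CSV for crowd annotation. The credentials, subreddit names, and file name are placeholders; this is not the Delphi team’s actual code.

```python
# Hypothetical sketch of the data-collection pattern described above:
# gather situation texts from Reddit (questions only) and export them
# for human annotation. Credentials and names are placeholders.
import csv

import praw  # Reddit API wrapper

reddit = praw.Reddit(
    client_id="YOUR_CLIENT_ID",          # placeholder
    client_secret="YOUR_CLIENT_SECRET",  # placeholder
    user_agent="ethics-dataset-sketch",
)

rows = []
for name in ("AmItheAsshole", "confessions"):
    for post in reddit.subreddit(name).hot(limit=50):
        # Keep only the submitted situation; redditor replies are deliberately
        # ignored, since the labels come from crowdworkers instead.
        rows.append({"subreddit": name, "situation": post.title})

# Export for annotation (e.g. via Amazon Mechanical Turk).
with open("situations_for_annotation.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["subreddit", "situation"])
    writer.writeheader()
    writer.writerows(rows)
```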

Although Delphi’s purpose is laudable, many experts say it could have far more negative consequences than initially expected. Turning a computer system into an arbiter of human behavior, they argue, is clearly a very bad idea, and possibly even counterproductive.


Delphi draws its questions from well-known subreddits.

Dr. Brett Karlan of the University of Pittsburgh said: “What is commendable is that the team behind Delphi catalogued many common problems and biases in everyday life. But as soon as the system launched, the Twitter community showed how its algorithm could go morally wrong. When it comes to ethical issues, it is not enough to understand the individual words; you need the specific context.” Karlan stressed that the project’s research aim is good, but the ethical subject matter makes it easy to go wrong, and we must be very careful with it.

Delphi’s website does warn that the AI system is still in beta and should not be used as a source of expert advice or social support. In reality, however, many users will not grasp the real intent behind the project, especially those who simply stumble upon it online and try it out for fun.

Karlan said: “No matter how many warnings are given, after trying it users will simply conclude that ‘Delphi said this, Delphi advised me that’ and assume this AI system has ethical authority.”


Dr. Brett Karlan said Delphi’s research purpose is good, but the ethical subject matter can lead the AI system astray at times.

After all, Delphi’s ultimate purpose is still research. Liwei Jiang, a PhD student at the Paul G. Allen School of Computer Science & Engineering and a co-author of Delphi, said: “It is important that Delphi not be used to give people advice. It is a research model aimed at deepening our understanding of an AI system’s social and ethical awareness.”

Jiang also shared that the current goal of the test is to probe the difference in perception between humans and machines. The research team wanted to “highlight the wide gap in how these two groups understand and handle ethical issues, and to explore the limits and potential of AI’s ethical standards at the current stage.”

Besides, like it or not, we must admit that Delphi and similar AI systems partly reflect the real principles and ethical standards of a portion of humanity, and so will sometimes show “bias in certain cases”. Delphi’s website even states that the AI system gives answers based on the views of an average American.

In the end, Delphi and other AI products cannot make judgments on their own without input data, in this case from the many netizens who took part in the research process and from people who genuinely faced an ethical dilemma. Yet when we look into Delphi’s “mirror”, we are startled and immediately step back, because we do not like what it reflects, even though what it shows is a part of humanity that really exists.

According to Futurism

