
Artificial Intelligence and Marketing


Week 16 Questions.

Auditing Algorithms for Bias

Questions:

  • Artificial intelligence rests entirely on mathematics and exact calculation, whereas philosophy is an abstract discipline full of vague theories that can be interpreted in different ways. How should AI handle situations with no specific numeric variables and no obvious, unambiguous conditions, where an objective view is ruled out a priori?
  • Can moral dilemmas be entrusted to a machine to solve?

The Ethics of Influencer Marketing

Questions:

  • It is a fact that popular streamers, video bloggers, Instagram models and other media figures carry real weight in society, especially among the younger generation. Can they be counted as mass-media representatives on a par with TV reporters, radio broadcasters and newspapers?
  • If so, should they also be subject to specific legal regulation, government censorship and ethical restrictions?

How Companies Can Identify Racial and Gender Bias in Their Customer Service

Statement:

Racism is a set of ideological views built on the supposed inequality of human races and on the decisive influence of racial differences on history and culture. The same goes for discrimination by gender or sexual orientation. All of these prejudices, however, rest on emotional impulses. It turns out that the whole problem lies with people guided by a worldview based on emotion alone, which a priori leaves no room for logic and sanity.

Nowadays many companies practice excessive corporate tolerance out of fear of public condemnation by various human-rights movements (feminist, homosexual and other minority movements). This trend pushes large corporations to hire employees for their membership in a minority rather than for their skills and work experience. For example, the Canadian software development studio EA BioWare opened a new branch staffed entirely with minority representatives to improve its image, and received public approval. However, the studio failed commercially, since its employees were not qualified enough for the job.

Question:

  • How, then, can one find the golden mean between tolerance and discrimination?
  • How can a company avoid the absurdity of being publicly accused of racism for hiring a white candidate simply because he or she is more experienced than an Asian or black one?
  • Could the optimal solution be to replace an overly emotional human employee with an unbiased machine that is free of such biological "flaws"?

Response:

By nature, people instinctively feel wary of others whose patterns of behavior differ strikingly from their own. Striking physiological differences, such as skin color, provoke even more distrust and other subconsciously negative reactions. Machines have none of these human weaknesses. The conclusion suggests itself: replace the weak link in the chain of business processes wherever customer service is at high risk from unstable, irrational human nature.

Let us test this theory on a real example of artificial intelligence doing a job that is normally performed by a human operator. Can artificial intelligence become a successful, unbiased HR manager?

The decisions of experienced HR managers are largely based on stereotypes that vary with the company's field. For example, here is what a stereotypically good programmer looks like in the eyes of a typical HR lady: a probable beard, glasses as a must-have accessory and a polo shirt. A woman is associated with this profession far less often.

Wouldn't it be nice if artificial intelligence hired you instead of these prejudiced "ladies"?

First, experts teach the artificial intelligence, and those experts are the very same "ladies" with well-trained stereotypes. They do not even have to explain anything to anyone. They just label the data: the bearded man in the picture probably gets a higher chance; the one in heels, no chance at all.
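To make this concrete, here is a minimal sketch in Python, using synthetic data and a purely hypothetical "has_beard" feature (nothing here comes from a real hiring system). The classifier is never told to prefer beards; it simply reproduces the pattern baked into the human labels it was trained on.

    # Minimal sketch: biased labels produce a biased model.
    # All data is synthetic; "has_beard" is a hypothetical feature.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 1000
    skill = rng.normal(size=n)              # the trait we actually care about
    has_beard = rng.integers(0, 2, size=n)  # an irrelevant stereotype trait

    # The human labeler "just labels it": the beard boosts the hire
    # decision regardless of skill.
    hired = (0.5 * skill + 1.5 * has_beard + rng.normal(size=n) > 1).astype(int)

    X = np.column_stack([skill, has_beard])
    model = LogisticRegression().fit(X, hired)
    print(model.coef_)  # the beard weight dominates: the bias has been learned

Nothing in the training code mentions beards as a criterion; the prejudice arrives entirely through the labels.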

Meanwhile, the artificial intelligence learns and draws conclusions of its own, inhuman kind, and then turns out to be even more biased than all the HR ladies put together. If you doubt that artificial intelligence can discriminate, watch the TED talk "How I'm fighting bias in algorithms", in which a black woman complains that facial recognition systems simply do not see her.

That is, a person may be a racist: he may dislike people with a different skin color or eye shape, and may not much like his own kind either. But only a machine can go to the extreme of not seeing them at all.

So AI may well discard the resumes of women of childbearing age even faster than any biased HR manager. After all, a human in that position might still decide that you resemble her career-woman relative and hire you for that reason alone; artificial intelligence is immune to such sentiments.

But what about the bearded guys: do they have an advantage? They, too, run a higher risk of being eliminated by an AI than by a human manager. The reason is that such systems do not merely parse a candidate's resume, a task simple enough for the most basic scripts. The AI digs much deeper: it can study your social networks, find your drunken photos (far more efficiently than an HR person on a Monday morning) and draw conclusions that are unexpected even to its developers. There is already an algorithm that predicts the likelihood of depression several months before the real symptoms hit, based on social networks: posted messages, pictures, videos, viewed and liked pages, and so on.

This sounds much less fun in the context of a job application. For instance, you send a resume; the system looks through your Facebook, Instagram and Twitter, decides after its analysis that you are on the verge of depression, and simply discards your resume, because no company needs such an employee. Yes, this is machine learning, and it works like a black box: a resume goes in, a decision comes out. You cannot simply open the algorithm and cross out intolerant variables like "high risk of depression" or "biological clock ticking".
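To illustrate why crossing out a variable would not help even if you could, here is another minimal sketch (again Python with synthetic data and hypothetical features): the sensitive attribute is removed from the training set entirely, yet a correlated proxy feature lets the model reconstruct the old biased decisions anyway.

    # Minimal sketch: deleting a sensitive variable does not delete the bias
    # when a correlated proxy remains. All data is synthetic.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(1)
    n = 5000
    sensitive = rng.integers(0, 2, size=n)             # a protected attribute
    proxy = sensitive + rng.normal(scale=0.3, size=n)  # correlated signal, e.g.
                                                       # posts, likes, CV gaps
    skill = rng.normal(size=n)

    # Historical hiring decisions were biased against the sensitive group.
    label = (skill - 2.0 * sensitive + rng.normal(size=n) > 0).astype(int)

    # Train WITHOUT the sensitive column: only skill and the proxy.
    model = LogisticRegression().fit(np.column_stack([skill, proxy]), label)
    print(model.coef_)  # the proxy carries a strong negative weight:
                        # the bias survived the deletion of the variable

The model never sees the sensitive attribute, yet it rebuilds the discrimination from whatever correlates with it; this is exactly why the black box cannot be fixed by striking out a single input.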

Thus, deploying AI as a remedy against discrimination is an idea doomed to fail, at least at the current level of technology.
