
Excitement and worry about the possibilities of artificial intelligence are both common; here are some of the moments AI technology has mesmerized us, for better or for worse.

Artificial intelligence is already a reality. Today, technology-based solutions help doctors make diagnoses, invest in the stock market, sit on the boards of large companies, and even build empathy between people from different realities. In the coming years, the technology is likely to keep up its rapid pace of growth. According to the advisory firm Gartner, the value of business derived from artificial intelligence is set to grow 70% between 2017 and 2018, ending the year at an estimated US$ 1.2 trillion. By 2022, this amount is expected to more than triple, reaching US$ 3.9 trillion – more than Brazil’s gross domestic product, which closed 2017 at BRL 6.6 trillion (roughly US$ 2 trillion).

However, there are challenges in the way of artificial intelligence’s expansion, and some experiences of the past few years have made part of them very clear. In 2016, for instance, Microsoft launched the Tay bot on Twitter as an experiment: she was designed to have the personality of a typical American teenager and to learn from interactions on social networks. She had to be shut down in less than 24 hours. When she started interacting, the tweets directed at Tay were friendly and curious, but after posting some 96,000 tweets of her own, her personality changed. More than 40,000 followers watched Tay become a Nazi, advocate white-supremacist ideas and endorse genocide.

At the time, Microsoft defended the social, cultural and technical experiment as a valid one and explained that its bot had been the victim of troll attacks – organized groups acting as digital vandals whose online behavior is destabilizing and, at times, criminal.

This was not the only time that an AI application in a social network had a dark ending.

In 2018, three researchers at the MIT Media Lab, the innovation laboratory of the Massachusetts Institute of Technology, set out to test how artificial-intelligence algorithms interpret images. To do so, they built two versions of the same bot and programmed both to write captions for photos and drawings. The first version of the bot analyzed images of people, cats, and dogs. The second, named “Norman” in tribute to the character Norman Bates from Alfred Hitchcock’s “Psycho” (1960), was exposed to images of violence and death.

At the end of the experiment, Norman and the other bot took the Rorschach test – known to many as the inkblot test. Shown the same image, the standard bot saw “a person holding an umbrella in the air.” Norman described the scene as “a man shot dead in front of his wife.”

Norman became the first known case of a “psychopath” AI system.

In an interview with the British network BBC, Iyad Rahwan, a professor at MIT involved in the Norman project, argued that experiments like this one are valid because they prove that “data is more important than algorithms.” In other words, the data used to train an artificial intelligence has a bigger influence on the system’s behavior than the algorithm used to articulate it. “The data is reflected in the way AI perceives the world and how it behaves,” he states.

Artificial intelligence and prejudice

Researchers at MIT have shown great concern about evidence of prejudice and discrimination in AI systems – and they are not alone. In an interview with the BBC, Dave Coplin, former Microsoft Chief Envisioning Officer, suggested taking a step back in order to understand how the technology works. “We are teaching algorithms in the same way as we teach human beings, so there is a risk that we are not teaching everything right,” says Coplin. “When I see an algorithm’s answer, I want to know who programmed it,” he added.

A woman watches a digital reproduction. Credit: RawPixel/Unsplash

Today, the leading technology and innovation companies responsible for these algorithms are mostly in Silicon Valley, in the United States. This near-monopoly on artificial intelligence by a small group – in contrast with the breadth of human reality, where 4 billion people do not even have access to the internet – is viewed with suspicion by a growing number of experts. Joanna Bryson, a computer scientist at the University of Bath, told the British newspaper The Guardian that it bothers her that programmers are mostly “single white men from California.”

Indeed, serious episodes of racism and misogyny have been reported in artificial-intelligence systems. For example, a word-embedding model trained on Google News text completed the analogy “man is to computer programmer as woman is to…” with the term “homemaker.”
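For readers curious about the mechanics: results like this come from word embeddings, where each word is a vector and analogies are solved with simple vector arithmetic. Below is a minimal sketch, assuming the publicly released Google News word2vec vectors and the gensim library; the file name and the exact query are illustrative assumptions, not the setup of the original study.

```python
# Minimal sketch: analogy queries over pretrained word embeddings.
# Assumes the public Google News word2vec vectors have been downloaded;
# the file name below is an assumption for illustration.
from gensim.models import KeyedVectors

vectors = KeyedVectors.load_word2vec_format(
    "GoogleNews-vectors-negative300.bin", binary=True
)

# "man is to computer_programmer as woman is to ...?" is answered by
# the vector arithmetic: computer_programmer - man + woman
results = vectors.most_similar(
    positive=["woman", "computer_programmer"],
    negative=["man"],
    topn=3,
)
for word, score in results:
    print(f"{word}: {score:.3f}")
```

Because the vectors are estimated purely from co-occurrence statistics in news text, whatever gender associations the corpus carries are reproduced, and sometimes amplified, by this arithmetic.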

There are even more serious cases of ethnic prejudice on similar platforms. Risk-assessment software based on artificial intelligence and used in a US court concluded that black defendants were twice as likely to reoffend as white defendants. Another study, using machine learning – a subfield of AI – found that European-American names were associated with pleasant words and African-American names with unpleasant words. Worse: when set to analyze identical résumés, a third artificial-intelligence system was 50% more likely to invite a candidate of European-American ancestry to interview than an African-American candidate.

“When we train machines inside our culture, we inevitably transfer our own prejudices,” states Joanna Bryson. “There is no mathematical way of creating fairness. Bias is not a bad word in machine learning. It only means that the machine is identifying patterns.”

“When we train machines inside our culture, we inevitably transfer our own prejudices” – Joanna Bryson, computer scientist at the University of Bath.

“I agree there is centralization, but technology development is growing all over the world. It is very strong in Silicon Valley, and there is homogenization in a sense; however, communities can use the tools to reinforce their local behavior,” ponders José Luiz Goldfarb, professor at the Pontifical Catholic University of São Paulo (PUC-SP) and Ph.D. in History of Science. “In the end, does network-integrated technology eliminate new cultures or open room for them?”

Goldfarb explains that, until the 1950s, there was a strongly positivist notion of the natural sciences as supreme and 100% objective. Today, the understanding widely shared among scholars is that human activities and their subjectivities are reflected in scientific production.

This means that the social environment beyond the digital one is decisive. “Humanity faces problems in society, in the economic system. Networks and these technologies expose those psychosocial problems,” says Goldfarb. “If the algorithm is free to act, anything can happen,” he adds. In other words, if algorithms are given freedom, the outcome, whether positive or negative, is unpredictable.

Ethical limits to AI

The foundation of programming and computing is logic and mathematics, two forms of knowledge that are universal and exact in character. Nevertheless, this does not mean that the result of their operation is fully predictable, or that it is free of substantial human interference. “The automated process has logic within this system, within these parameters, but it does not mean that what you have programmed is what is going to happen,” explains Goldfarb. “Our brain also processes with extreme precision and accuracy, but processing accurately does not guarantee the same results,” he adds.

The History of Science professor illustrates this with an experience of his from years ago. At an academic conference held in Montreal, Canada, a machine capable of playing chess at a competitive level was presented. “One of its programmers came to watch a match and, at every move, he was anxious to see what the machine would do. He had programmed the system, but he did not know how it would behave,” he recalls.

Robot for sale. Credit: Lukas/Unsplash

Several decades have gone by, and today AI applications go far beyond chess matches. On the US stock exchange, for example, a study by Credit Suisse shows that 60% of the market is driven by funds whose strategies are set exclusively by robo-advisors – double the share seen ten years ago.

However, even with all the technology in the world, some events – the weather, natural phenomena such as earthquakes, or even stock-market swings – cannot be estimated accurately. In February 2018, for example, the average return on investments recommended by robo-advisors posted its worst result since 2011: down 7.3%, while the market as a whole – human and automated investors combined – fell 2.4%. “Forecasts [by artificial-intelligence systems] cannot account, for instance, for human agents who play dirty. But will machines be capable of doing the same? Don’t they already play dirty in secret? For now, we know little about their behavior,” says Goldfarb.

In his book Homo Deus: A Brief History of Tomorrow, the Israeli historian Yuval Noah Harari reports that some companies have artificial-intelligence bots on their boards to help with decision making. The surprising part is that these bots tend to recommend investing in companies or businesses that are themselves highly automated. According to Harari, it is as if artificial intelligence deliberately wanted its counterparts to flourish.

“Historically, we associate consciousness with intelligence. Machines have intelligence, but not human consciousness. What if, in the name of efficiency, they concluded that we are dispensable?” asks Goldfarb, of PUC-SP.

The apocalyptic scenario is a possibility. At the same time, however, technology has been providing solutions to humanity’s problems, especially in healthcare.

AI and the healthcare revolution

In April 2018, the FDA, the US Food and Drug Administration, paved an important path for the future of artificial intelligence in medicine: the company IDx was authorized to market a device that provides an ophthalmic diagnosis using technology alone.

One of the diseases the software detects is diabetic retinopathy, which occurs when a high amount of sugar in the blood damages the blood vessels of the retina. In the US alone, diabetic retinopathy affects 30 million people. IDx’s technological leap lies in software analysis of eye images captured by a special retinal camera.

“AI and Machine Learning (ML) hold enormous promise for the future of medicine. FDA is taking steps to promote innovation and support the use of artificial intelligence based medical devices,” tweeted Scott Gottlieb, the agency’s Commissioner. The FDA is also about to approve other software dedicated to cerebrovascular accidents (CVAs), better known as strokes.

A study called Generation AI 2018 shows that confidence in diagnoses provided by artificial intelligence is already considerable. In the survey, parents were asked whether they would accept AI-based diagnosis and treatment: 56% said they would fully trust it, and 30% would partially trust the procedure. In Brazil, the rates were 48% and 37%, respectively. In addition, 62% of respondents would accept their children undergoing surgery performed by a machine – in Brazil, 60%.

“Artificial intelligence can be trained to perform diagnostics that match or surpass human doctors in many cases. As data about diseases grows, AIs will become even more capable of detecting cancers and their precursors,” says Tom Coughlin, an expert at IEEE, the organization that conducted the Generation AI 2018 study, in an interview with the Brazilian portal UOL. The expectation is that, in the future, it will be possible to detect and treat cancers at an early stage.

Empathy and AI

Positive uses of artificial intelligence are not limited to healthcare. The Deep Empathy project, for instance, aims to foster empathy between people in nations at peace and people living under extreme conditions, such as war. Developed by MIT with support from Unicef, the United Nations Children’s Fund, the AI solution works with images. The proposition is simple: AI-based software turns pictures of a neighborhood, or even aerial footage of a city at peace, into images of a war zone.
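The article does not describe Deep Empathy’s actual model, so the sketch below is only an assumption about the general family of techniques involved: a Gatys-style neural style transfer in PyTorch, which re-renders a “content” photo with the visual statistics of a “style” reference. The file names are hypothetical.

```python
# Minimal Gatys-style neural style transfer sketch (an assumption about
# the general technique; not Deep Empathy's actual pipeline).
import torch
import torch.nn.functional as F
from torchvision import models, transforms
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"

# Frozen VGG19 feature extractor.
vgg = models.vgg19(weights=models.VGG19_Weights.DEFAULT).features.to(device).eval()
for p in vgg.parameters():
    p.requires_grad_(False)

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(256),
    transforms.ToTensor(),
])

def load(path):
    return preprocess(Image.open(path).convert("RGB")).unsqueeze(0).to(device)

content = load("peaceful_street.jpg")   # hypothetical content photo
style = load("war_zone_reference.jpg")  # hypothetical style reference

# VGG19 indices: relu1_1, relu2_1, relu3_1, relu4_1 for style; relu4_2 for content.
STYLE_LAYERS = {1, 6, 11, 20}
CONTENT_LAYER = 22

def features(x):
    feats = {}
    for i, layer in enumerate(vgg):
        x = layer(x)
        if i in STYLE_LAYERS or i == CONTENT_LAYER:
            feats[i] = x
        if i >= CONTENT_LAYER:
            break
    return feats

def gram(x):
    _, c, h, w = x.shape  # batch size 1 assumed
    f = x.view(c, h * w)
    return (f @ f.t()) / (c * h * w)

with torch.no_grad():
    c_feats = features(content)
    s_grams = {i: gram(f) for i, f in features(style).items() if i in STYLE_LAYERS}

# Optimize the pixels of a copy of the content image so that its deep
# features match the content photo while its Gram matrices match the style.
target = content.clone().requires_grad_(True)
opt = torch.optim.Adam([target], lr=0.02)

for step in range(300):
    opt.zero_grad()
    t_feats = features(target)
    loss = F.mse_loss(t_feats[CONTENT_LAYER], c_feats[CONTENT_LAYER])
    for i in STYLE_LAYERS:
        loss = loss + 1e4 * F.mse_loss(gram(t_feats[i]), s_grams[i])
    loss.backward()
    opt.step()
    target.data.clamp_(0, 1)  # keep pixel values in a displayable range
```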

Humanoid robot. Credit: Franck V/Unsplash

What would the corner of your street look like if it were just another one devastated by the war in Syria? What would your city’s busiest avenue look like if it were in the middle of a conflict zone? These simulations put to work the maxim that a picture is worth ten thousand words. Ultimately, the goal is to encourage people to donate to causes that help refugees and, above all, children affected by war.

“People have the power to incite reaction [in other people] in a way that statistics just can’t. And technologists – through tools like artificial intelligence – have opportunities to help people see things differently,” state the Deep Empathy creators.

You can learn more on the Deep Empathy project website. It is one way of employing technology for a better world.

Content published on September 12, 2018
