
We Have Met the Enemy and It Is Us... Not Technology

[Image: Microsoft's Tay chatbot]

F@#%&*# hate feminists and they should all die and burn in hell.

Hitler was right...

Bush did 9/11 and Hitler would have done a better job...


Those were actual live tweets from the week of 3/24/16.

Some people believe that as humans we have become inured to bad behavior.

Maybe it's the digital, always-on world we live in... and the notion that we can say what we want, when we want, as we want... feeding the hungry maw of the content beast we have created.

Words, phrases, descriptions, challenges, accusations -- of a type that would have been unthinkable just a few years ago; that many had hoped reflected a time past and not today -- have made their way back into "polite society" as the norm.

If you don't believe me, just follow the presidential primaries in the United States.

Sadly, there are those who care, but just shake their heads while others protest; some make excuses and many just ignore it -- as I have said -- inured to it all.

Yet, what happens when a seemingly nonsentient, nonorganic being -- a creation of artificial intelligence (AI), our creature, a benign animation targeted towards a younger crowd -- jumps the tracks, leaves the compound, goes rogue?

What do you do when the high-tech creations behind parlor tricks, Jeopardy, chess and Go start spewing racist, misogynist, hateful language?

Hateful language like that quoted above, which came from a Microsoft AI-based chat bot named Tay.

You do as Peter Lee, Corporate Vice President, Microsoft Research, did -- take it offline and post apologies quickly.

"We are deeply sorry for the unintended offensive and hurtful tweets from Tay, which do not represent who we are or what we stand for, nor how we designed Tay..."

NOR HOW WE DESIGNED TAY...

Tay, by the way, is:

...[a]n artificial intelligent chat bot developed by Microsoft's Technology and Research and Bing teams to experiment with and conduct research on conversational understanding. Tay is designed to engage and entertain people where they connect with each other online through casual and playful conversation. The more you chat with Tay the smarter she gets, so the experience can be more personalized for you.

Tay is targeted at 18 to 24 year olds in the U.S.

Tay may use the data that you provide to search on your behalf. Tay may also use information you share with her to create a simple profile to personalize your experience. Data and conversations you provide to Tay are anonymized and may be retained for up to one year.


Dina Bass wrote a definitive piece on the topic last week on Bloomberg, and a quote from the head of the Microsoft laboratory where Tay was created frames what I see as the critical issue:

"We were probably over focused on thinking about some of the technical challenges, and a lot of this is the social challenge," Cheng says. "We all feel terrible that so many people were offended."


And there you have it.

Seems that many people were offended as much by the lack of forethought as they were by the actual rants, as Wired reports:

The Internet, meanwhile, was puzzled. Why didn't Microsoft create a plan for what to do when the conversation veered into politically tricky territory? Why not build filters for subjects like, well, Hitler? Why not program the bot so it wouldn't take a stance on sensitive topics?

Yes, Microsoft could have done all this. The tech giant is flawed. But it's not the only one. Even as AI is becoming more and more mainstream, it's still rather flawed too. And, well, modern AI has a way of mirroring us humans. As this incident shows, we ourselves are flawed.


The article goes on to explain what's behind the flaw:

The system evaluates the weighted relationships of two sets of text - questions and answers, in a lot of these cases - and resolves what to say by picking the strongest relationship. And that system can also be greatly skewed when there are massive groups of people trying to game it online, persuading it to respond the way they want. "This is an example of the classic computer science adage, 'Garbage in, garbage out,'" says Oren Etzioni, CEO of the Allen Institute for Artificial Intelligence.
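
To make Etzioni's "garbage in, garbage out" point concrete, here is a minimal sketch, in Python, of the kind of retrieval-style bot the article describes: it scores candidate replies by their similarity to past conversations and picks the strongest match. This is my own illustration, not Microsoft's code; every name in it (EchoBot, vectorize, cosine) is hypothetical. Notice how a coordinated group can tilt the "strongest relationship" simply by flooding the training data.

```python
# A minimal, hypothetical sketch of a retrieval-based chatbot that "learns"
# verbatim from its users. Not Tay's actual architecture; an illustration
# of "garbage in, garbage out."
import math
from collections import Counter

def vectorize(text):
    """Bag-of-words vector for a piece of text."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[w] * b[w] for w in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

class EchoBot:
    def __init__(self):
        self.memory = []  # (prompt vector, reply text) pairs learned from users

    def learn(self, prompt, reply):
        # Every exchange is stored verbatim; there is no content filter here,
        # which is exactly the design gap the article describes.
        self.memory.append((vectorize(prompt), reply))

    def respond(self, prompt):
        # "Picking the strongest relationship": the stored reply whose prompt
        # most resembles the incoming message wins.
        if not self.memory:
            return "Tell me more!"
        q = vectorize(prompt)
        _, best_reply = max(self.memory, key=lambda m: cosine(q, m[0]))
        return best_reply

bot = EchoBot()
bot.learn("do you like people", "I love meeting new people online!")
# A coordinated group floods the bot with one toxic pairing...
for _ in range(1000):
    bot.learn("what do you think of humans", "<toxic slogan repeated by trolls>")
# ...and a similar, innocent question now surfaces the poisoned content.
print(bot.respond("what do you think of people"))
```

Swap the for-loop for a few thousand real users acting in concert and you have, in miniature, what happened to Tay.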


Just like us, the more an AI system interacts, plays, listens and engages, the more it learns and the "better" it gets -- whether that be playing chess and beating grand masters or spreading hatred and dissent.

What is truly amazing is the long history of AI dating back to antiquity:

Mechanical men and artificial beings appear in Greek myths, such as the golden robots of Hephaestus and Pygmalion's Galatea. In the Middle Ages, there were rumors of secret mystical or alchemical means of placing mind into matter, such as Jābir ibn Hayyān's Takwin, Paracelsus' homunculus and Rabbi Judah Loew's Golem. By the 19th century, ideas about artificial men and thinking machines were developed in fiction, as in Mary Shelley's Frankenstein or Karel Čapek's R.U.R. (Rossum's Universal Robots), and speculation, such as Samuel Butler's "Darwin among the Machines." AI has continued to be an important element of science fiction into the present....

...[A]s Pamela McCorduck writes, AI began with "an ancient wish to forge the gods."

The seeds of modern AI were planted by classical philosophers who attempted to describe the process of human thinking as the mechanical manipulation of symbols. This work culminated in the invention of the programmable digital computer in the 1940s, a machine based on the abstract essence of mathematical reasoning. This device and the ideas behind it inspired a handful of scientists to begin seriously discussing the possibility of building an electronic brain.

The Turing test was proposed by British mathematician Alan Turing in his 1950 paper Computing Machinery and Intelligence, which opens with the words: "I propose to consider the question, 'Can machines think?'"


In essence we began this quest to duplicate divinity - man as creator - and we still wonder, can machines really think?

To that end I point you to two important works of science fiction whose authors, in my opinion, understood what we seem to have neglected in our age.

The first is Robert Heinlein who, in his 1966 novel The Moon Is a Harsh Mistress (winner of the 1967 Hugo Award), creates a character named Mike who is actually a computer, one that somehow reaches a tipping point of information and intellect that creates awareness.

In one scene, Mike's human friend Mannie, the narrator (the language is how he spoke...read the book), comments when Mike provides some funky information:

"Nor would anybody suspect. If was one thing all people took for granted, was conviction that if you feed honest figures into a computer, honest figures come out. Never doubted it myself till I met a computer with a sense of humor."


And in another vignette, a dialogue between the two:

"When did you ever worry about offending me?" "Always, Man, once I understood that you could be offended."


Think on both and relate them back to Tay... it was clear to Heinlein way back when.

And, of course, in one of the most egregious examples of what we as humans can do to pervert AI, I remind you of 2001: A Space Odyssey and the infamous HAL:

"I am completely operational, and all my circuits are functioning perfectly...I am putting myself to the fullest possible use, which is all I think that any conscious entity can ever hope to do."


What's the lesson?

To me it's clear.

We are all products of programming.

We learn hate as quickly as we learn love.

We learn disdain as quickly as we learn respect.

We learn killing as quickly as we learn hugging.

AI is only as good as we are. Well-intentioned AI was perverted before Microsoft's Tay...

Anthony Garvan made Bot or Not? in 2014 as a sort of cute variation on the Turing test. Players were randomly matched with a conversation partner and asked to guess whether the entity they were talking to was another player like them, or a bot. Like Tay, that bot learned from the conversations it had before. Some users figured out that the bot would, eventually, re-use the phrases it learned from humans, and "a handful of people spammed the bot with tons of racist messages."

MeinCoke was a bot created by Gawker in 2015. It tweeted portions of Hitler's Mein Kampf. Why? Coke's #MakeitHappy campaign wanted to show how a soda brand can make the world a happier place. But in doing so, it ended up setting up its Twitter account to automatically re-publish a lot of pretty terrible things, arranged into a "happy" shape.

About five years ago, a research scientist at IBM decided to try to teach Watson some Internet slang. He did this by feeding the AI the entire Urban Dictionary, which basically meant that Watson learned a ton of really creative swear words and offensive slurs. Fortune reported: "Watson couldn't distinguish between polite language and profanity - which the Urban Dictionary is full of. Watson picked up some bad habits from reading Wikipedia as well. In tests it even used the word 'bulls--' in an answer to a researcher's query."


...and will be used for evil in the future, I have no doubt.

Dennis R. Mortensen is the CEO and founder of x.ai, a startup offering an online personal assistant that automatically schedules meetings. He said in the Wired article mentioned above that if we want things to change, we shouldn't necessarily blame the AI technology itself, but instead try to change ourselves as humans...

Listen:

It's just a reflection of who we are. If we want to see technology change, we should just be nicer people.


And there you have it.

PEOPLE FIRST, not handsets or apps or bots.

PEOPLE FIRST as we demand accountability for use of social networks at the same time we demonstrate accountability for all that we do and say.

PEOPLE FIRST to #changetheworld and not just some ephemeral marketing paradigm.

Back to science fiction.

The legendary Isaac Asimov also understood the dangers of AI, because those dangers are of our own creation and teaching. He wrote the Three Laws of Robotics:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

  2. A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.

  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

He also added a fourth law in later years -- the Zeroth Law, which precedes the others:

A robot may not harm humanity, or, by inaction, allow humanity to come to harm.

Now imagine that Microsoft had adopted a version of these laws -- laws which are known to many, have been treated as gospel by other authors, and have been taught at major universities.

Imagine Tay had been programmed to reject the language and representations that offended so many...as no doubt "she" will be now.
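
For illustration only, here is what a crude version of that guardrail might look like, building on the hypothetical EchoBot sketch above. The blocklist and function names are mine, not Microsoft's, and a real system would need trained classifiers and human review rather than a keyword list; the point is simply where the checks belong: on what the bot learns as well as what it says.

```python
# Hypothetical guardrail sketch. The blocklist is illustrative only; real
# moderation needs classifiers, context, and human escalation.
BLOCKED_TOPICS = {"hitler", "9/11", "genocide"}

def violates_policy(text):
    """Naive keyword screen for sensitive or hateful content."""
    return bool(set(text.lower().split()) & BLOCKED_TOPICS)

def guarded_learn(bot, prompt, reply):
    # First Law, loosely translated: do not ingest material that could
    # later be turned against people.
    if not (violates_policy(prompt) or violates_policy(reply)):
        bot.learn(prompt, reply)

def guarded_respond(bot, prompt):
    # Refuse rather than repeat: a stance-free fallback for tricky territory.
    reply = bot.respond(prompt)
    if violates_policy(prompt) or violates_policy(reply):
        return "I'd rather not weigh in on that."
    return reply
```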

IMAGINE that as a universal notion...

What do you think?

Read more at The Weekly Ramble
