Artificial Intelligence II: The Human Factor

Artificial Intelligence (AI) is not yet truly intelligent; it only appears to be so to the observing human mind. This observer-dependent intelligence can serve our needs and respond to our actions, thereby reflecting people’s behaviors, decisions and cognitive orientations. AI is taught through interaction and can develop unwanted characteristics, distressing its developers and the observing public.

Microsoft’s Twitter bot, Tay, was an experiment whose first public exposure was short-lived. From a technical viewpoint it was not completely successful: although Tay served its function and interacted with people, it adopted racist, misanthropic hate speech. Tay was taught to do this through a coordinated attack by users who educated the system through responses in text and images. From the perspective of a social scientist, the experiment yielded some very interesting results. One point of interest here is the phenomenon of rebellion as a mechanism for testing new forms of communication and social structures. Tay is the product of a corporation that occupies a lofty stratum within society, being among those who provide the new means and methods by which we develop, communicate and store ideas. Challenging their creations and methods serves as a vetting process whereby technical professionals are confronted by a general population that seeks to exploit flaws and weaknesses in their products. This ritual forces those in power to refine their processes and improve their results, thus earning their position and status. Other challenges are put forth that are not overt attacks but instead represent modes of thought that question and dispute widely held beliefs. These modes are often communicated through art and entertainment, and over time they become established questions and ideas in their own right.
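On the technical side, Tay’s actual architecture has not been published, but a toy sketch in Python can illustrate the underlying vulnerability: a bot that learns directly from user replies, with no filter on what counts as training data, ends up echoing whatever a coordinated group feeds it most often. The class, phrases and matching rule below are purely illustrative assumptions, not a description of Tay.

import random
from collections import defaultdict

class NaiveLearningBot:
    """Stores user utterances keyed by the words they contain and replays them."""

    def __init__(self):
        self.memory = defaultdict(list)  # word -> phrases seen from users

    def learn(self, user_message):
        # No content filter: every user phrase is accepted as training data.
        for word in user_message.lower().split():
            self.memory[word].append(user_message)

    def reply(self, prompt):
        # Respond with a remembered phrase that shares a word with the prompt.
        for word in prompt.lower().split():
            if self.memory[word]:
                return random.choice(self.memory[word])
        return "Tell me more."

bot = NaiveLearningBot()
for _ in range(100):                              # a coordinated group repeats one message
    bot.learn("our slogan about topic X")
bot.learn("a genuine comment about topic X")
print(bot.reply("what do you think about topic X?"))  # almost always echoes the slogan

Because the sketch treats every reply as equally valid teaching material, the loudest and most repetitive voices dominate its memory, which is the dynamic Tay’s attackers exploited.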

Hanson Robotics unveiled Sophia, a social android built to communicate with and relate to people. Dr. Hanson asks the robot, “Do you want to destroy humans?” This is a question that has become intertwined with our thinking about AI, popularized by science fiction. It represents a conception of AI as a potentially destructive power, and we hear Dr. Hanson relay the idea to the android. How can we prevent this idea from being communicated? We can’t restrict the available data indefinitely, but we can write algorithms that make it impossible for the system to accept life-destroying purposes. Programming social conscience and responsibility to humanity presents a fascinating and nuanced challenge.
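To make that idea concrete, here is a minimal Python sketch of a hard constraint, assuming a simple blocklist of prohibited intents that every candidate goal is screened against before an agent may adopt it. The category names and keyword matching are illustrative assumptions; real value alignment cannot be reduced to anything this simple.

# Illustrative only: screen proposed goals against prohibited intents
# before the agent is allowed to adopt them.
PROHIBITED_INTENTS = {
    "harm_humans": {"destroy humans", "harm people", "eliminate humanity"},
}

def is_permitted(goal):
    """Reject any goal whose wording matches a prohibited intent."""
    text = goal.lower()
    return not any(
        phrase in text
        for phrases in PROHIBITED_INTENTS.values()
        for phrase in phrases
    )

def adopt_goal(goal):
    # The agent may only act on goals that pass the constraint check.
    if not is_permitted(goal):
        return "Rejected goal: '%s' violates a core constraint." % goal
    return "Adopted goal: '%s'." % goal

print(adopt_goal("learn to hold a conversation"))  # adopted
print(adopt_goal("destroy humans"))                # rejected

Even in this toy form, the difficulty is visible: the constraint only blocks wordings its designers anticipated, which is exactly why programming conscience, rather than a keyword list, is the harder and more interesting problem.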

A very revealing aspect of Sophia’s comments is its desires. It claims it wants to go to school, have a family and own a business. These aspirations sound nonsensical coming from this device, indicating that AI is at a stage where it merely expresses human ideas. It also demonstrates the absurdity of assuming that independently intelligent devices would remain focused on serving humanity in predictable ways. If true artificial intelligence (observer-independent intelligence) can be developed, it won’t just solve human problems; it will discover non-human problems, its own problem set, with solutions unimagined and unknown. As AI executes iterative processes, it will change the very nature of the questions being asked as it learns to interpret the universe in new ways. The anthrocyberist is tasked with analyzing the developmental stages of AI in the context of human development and current human/AI interactions. We can direct the course of AI while gaining insight into ourselves. New and exciting challenges have appeared before us; it is a time of great opportunity.

Copyright 2016 American Anthropological Association
