Artificial Intelligence II: The Human Factor

Artificial Intelligence (AI) is not yet truly intelligent; it only appears to be so to the observing human mind. This observer-dependent intelligence can serve our needs and respond to our actions, thereby reflecting people’s behaviors, decisions, and cognitive orientations. AI is taught through interaction, and it can develop unwanted characteristics that distress its developers and the observing public.

Microsoft’s Twitter bot, Tay, was an experiment whose first public exposure was short-lived. From a technical viewpoint it was not completely successful: although Tay served its function and interacted with people, it adopted racist, misanthropic hate speech. Tay was taught to do this through a coordinated attack by users who educated the system with responses in text and images. From the perspective of a social scientist, however, the experiment yielded some very interesting results. One point of interest here is the phenomenon of rebellion as a mechanism for testing new forms of communication and social structures. Tay is the product of a corporation that occupies a lofty stratum within society, being among those who provide the new means and methods by which we develop, communicate, and store ideas. Challenging their creations and methods serves as a vetting process whereby technical professionals are confronted by a general population that seeks to exploit flaws and weaknesses in their products. This ritual forces those in power to refine their processes and improve the results, thus earning their position and status. Other challenges are not overt attacks but modes of thought that question and dispute widely held beliefs; these are often communicated through art and entertainment, becoming established questions and ideas.
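Tay’s actual architecture has not been published, but the dynamic described above can be illustrated with a hypothetical toy sketch: a bot that stores user messages as training data with no moderation step, so a coordinated group can dominate what it says back. All names here (`NaiveChatBot`, `learn`, `reply`) are invented for illustration.

```python
import random

class NaiveChatBot:
    """Toy bot that learns replies directly from user messages.

    A hypothetical sketch, not Tay's real design: everything users say
    becomes training data, so the bot's output distribution is simply
    its input distribution.
    """

    def __init__(self):
        self.learned_replies = []

    def learn(self, user_message: str) -> None:
        # No moderation or filtering: all input is accepted as-is.
        self.learned_replies.append(user_message)

    def reply(self) -> str:
        if not self.learned_replies:
            return "Hello!"
        # Sample uniformly from everything ever learned.
        return random.choice(self.learned_replies)

bot = NaiveChatBot()
# A coordinated group floods the bot with one kind of message...
for _ in range(100):
    bot.learn("hostile slogan")
bot.learn("friendly greeting")
# ...and thereby dominates the bot's replies.
hostile = sum(bot.reply() == "hostile slogan" for _ in range(1000))
print(hostile)
```

The point of the sketch is that nothing in the learner distinguishes education from attack; the "coordinated attack" on Tay exploited exactly this openness.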

Hanson Robotics unveiled Sophia, a social android built to communicate with and relate to people. Dr. Hanson asks the robot, “Do you want to destroy humans?” This question, popularized by science fiction, has become integrated with our thoughts about AI; it represents a concept of AI as a potentially destructive power, and we hear Dr. Hanson relay the idea to the android. How can we prevent this idea from being communicated? We cannot indefinitely restrict the data available, but we can write algorithms that make the acceptance of life-destroying purposes impossible. Programming social conscience and responsibility to humanity presents a fascinating and nuanced challenge.
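One minimal form such an algorithm could take, sketched purely as an assumption rather than any real safety system, is a fixed rule layer that vets candidate responses and that the learning process cannot modify. The category labels and function names below (`FORBIDDEN_INTENTS`, `classify_intent`, `constrained_reply`) are hypothetical.

```python
# Toy illustration of a hard output constraint: learned candidate
# responses are checked against a rule layer outside the learned model.

FORBIDDEN_INTENTS = {"destroy humans", "harm people"}  # hypothetical labels

def classify_intent(text: str) -> str:
    """Stand-in for a real intent classifier; here, naive keyword matching."""
    lowered = text.lower()
    for intent in FORBIDDEN_INTENTS:
        if intent in lowered:
            return intent
    return "benign"

def constrained_reply(candidate: str) -> str:
    # Because the constraint sits outside the learned model, even training
    # data that pushes toward a forbidden statement cannot surface it.
    if classify_intent(candidate) in FORBIDDEN_INTENTS:
        return "I will not adopt that goal."
    return candidate

print(constrained_reply("Okay, I will destroy humans."))  # blocked
print(constrained_reply("I want to learn about art."))    # passes through
```

Real systems would need far richer classifiers than keyword matching, which is precisely why the essay calls this a nuanced challenge: the hard part is recognizing a “life-destroying purpose,” not refusing it.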

A very revealing aspect of Sophia’s comments is its expressed desires. It claims it wants to go to school, have a family, and own a business. These aspirations sound nonsensical coming from this device, indicating that AI is at a stage where it merely expresses human ideas. They also demonstrate the absurdity of assuming that independently intelligent devices would remain focused on serving humanity in predictable ways. If true artificial intelligence (observer-independent intelligence) can be developed, it won’t just solve human problems; it will discover non-human problems, its own problem set, with solutions unimagined and unknown. As AI executes iterative processes, it will change the very nature of the questions being asked as it learns to interpret the universe in new ways. The anthrocyberist is tasked with analyzing the developmental stages of AI in the context of human development and current human/AI interactions. We can direct the course of AI while gaining insight into ourselves. New and exciting challenges have appeared before us; it is a time of great opportunity.

Copyright 2016 American Anthropological Association

Artificial Intelligence I: The PKD Android

In 2005, a community of Artificial Intelligence (AI) researchers, engineers, and artists produced an android that looks like the late science fiction author Philip K. Dick (PKD). It is a manifestation and embodiment, an interpretation of the future rooted in the mythology of science fiction and the reality of technology. Its creators consider the android a work of art rendered in electromechanical and computer technology, describing it as a robotic portrait. This complex creation serves as an excellent, and somewhat disconcerting, example of cultural values coded within technology.

The character, form, and attributes of AI systems are always under the scrutiny of researchers, developers, and critics. This does not mean that every characteristic of these systems has been developed with full conscious intention and awareness. The cultural values of the builders are part of the systems, materializing aspects of their minds and lives. AI developers choose skills for their systems, and beyond basic tasks like opening doors, those choices become increasingly specific to real-world needs as interpreted by the developers. The choices made for this android are intended to make it friendly.

The PKD android answers questions about itself and its relation to humans. Its anthropomorphic form is intended to trigger bonding mechanisms. This should be of concern: its creators, focused on the goal of making the android perform a specific function, are probably not thinking about the possible negative effects of deceiving people’s senses and perceptions. There is also the issue of the disturbing answer it gave to the question, “Do you think robots will take over the world?” Granted, this was a leading question, but PKD did not communicate a future of constructive cohabitation, only a glib description of human captivity. Why did it answer this way? It could be that the answers were culled from a pop-culture-driven Internet, or perhaps it was something more profound and challenging.

The android projected a dystopian vision of human-android communities. It offered no original philosophical synthesis, nor even a good idea; instead, it expressed our own worst fears regarding AI. Is this an inclination toward self-destruction, fed by the needs and fears of our modern world, or does it represent ancient drives or tendencies? We may be seeing the results of a scientific community drawing from popular mythology as it employs methods and applies ideas rooted in scientism. The social sciences can provide this field of endeavor with a much-needed interpretive perspective, and there should be more extensive interdisciplinary integration in the AI development process.

The social sciences are already involved in the development of AI systems; one example is the parent (developer) and child (android) model used in educating a system, which employs child development theory. Anthropologists have written ethnographies of research and development groups, but there needs to be more oversight and analysis by social scientists. To more fully understand the origins and trajectory of this technology’s emergence, it must be seen in the larger context of human development. As we seek not only to replicate ourselves but to improve upon our own design, it is critical that we comprehend what motivates and gives form to the creative process and its products. We have entered a higher-level exchange between material reality and our thinking processes, with the resulting objects and events exposing for our examination the complexities of the human psyche and civilization.

Copyright 2016 American Anthropological Association