As algorithms have come to mediate our interactions, from the cultural and social to the political and economic, computer scientists have responded to growing demands for interpretability by developing technical methods to understand these systems' behavior.
But a group of academic and industry researchers now argues that we do not need to open these black boxes in order to understand, and ultimately control, their impact on our lives. After all, these are hardly the first impenetrable black boxes we have encountered.
“We have developed scientific methods for studying black boxes for hundreds of years, but these methods have so far been applied to living things,” says Nick Obradovich, an MIT Media Lab researcher and co-author of a new paper published last week in Nature. “We can use many of the same tools to study the new black-box artificial intelligence systems.”
The authors of the paper, a diverse group of researchers from industry and academia, propose a new discipline they call “machine behavior”: the study of AI systems in the same way we have long studied animals and humans, through empirical observation and experimentation.
Although the discipline would be distinct from artificial intelligence research, machine behaviorists would still need to work closely with AI researchers. As the former discover how AI systems behave and influence the world, the latter can draw on that knowledge to improve their designs. The more each discipline benefits from the other’s expertise, the more likely it is that artificial systems will benefit people rather than harm them.
“We are witnessing the emergence of machines with agency, machines that make decisions and act autonomously,” said Iyad Rahwan, another Media Lab researcher and the paper’s lead author, in a blog post accompanying the publication.