Well-Intentioned Uses of Technology Can Go Wrong

Joi Ito is the director of the MIT Media Lab, a research laboratory devoted to the integration of technology, art and design. He is on Twitter (@joi).

The bulk of today’s artificial intelligence research focuses on machine learning, where engineers “train” machines to augment the collective intelligence of our governments, markets and society. This “extended intelligence,” or E.I., will likely become the dominant form of A.I.

Here’s the rub: The algorithms that create E.I. are trained by humans and can propagate the same biases that plague society, perpetuating them under the guise of “smart machines.” Take, for instance, predictive policing algorithms used to determine which neighborhoods should be more heavily patrolled for criminal activity, or who should be classified as a terrorist. Unless we embed ethical and moral grounding, technology meant to advance our well-being could, in fact, end up amplifying the worst aspects of our society.
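The predictive-policing feedback loop described above can be made concrete with a toy simulation. This is a hypothetical sketch, not any deployed system: all numbers, neighborhood names, and the scoring rule are invented for illustration. Two neighborhoods have identical true crime rates, but one is patrolled more heavily, so more of its crimes enter the historical record; a naive model that scores "risk" from recorded arrests then inherits the patrol bias.

```python
# Toy sketch (all data hypothetical): neighborhoods "A" and "B" have the
# same underlying crime rate, but A is patrolled 3x as heavily, so its
# crimes are recorded 3x as often. A "predictive" model that equates
# risk with historical arrest counts reproduces the patrol bias.
import random

random.seed(0)
TRUE_CRIME_RATE = 0.05          # identical in both neighborhoods
PATROL = {"A": 3.0, "B": 1.0}   # A receives 3x the patrol presence

def recorded_arrests(neighborhood, population=10_000):
    """A crime is only recorded if a patrol happens to observe it."""
    detection_prob = min(1.0, 0.1 * PATROL[neighborhood])
    return sum(
        1
        for _ in range(population)
        if random.random() < TRUE_CRIME_RATE       # a crime occurs...
        and random.random() < detection_prob       # ...and is recorded
    )

# Naive model: predicted risk = historical arrests per capita.
risk = {n: recorded_arrests(n) / 10_000 for n in PATROL}
print(risk)  # A's "risk" comes out roughly 3x B's, despite equal crime rates
```

The model is not malicious; it faithfully learns from its data. But because the data reflect where police looked rather than where crime occurred, its "smart" predictions justify patrolling A even more heavily, which generates still more biased records.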

Well-intentioned uses of developing technologies can go wrong. In 2003 I co-authored a paper that predicted that an open internet would play a significant role in democratizing society and fostering peace. Later, in the early days of the Arab Spring, it felt as though the internet had indeed helped spark the uprising. But as the internet has increasingly become a place for bigotry and malicious trolling as well as a platform for organizations like ISIS to advance a wave of hatred, I wonder, “What hath the internet wrought?” I have similar concerns about the development and deployment of E.I.

It’s absolutely essential for us to develop a framework for how our ethics, government, educational system and media evolve in the age of machine intelligence. We must initiate a broader, in-depth discussion about how society will co-evolve with this technology, and we must build a new kind of computer science that creates technologies that are not only “smart,” but are also socially responsible. If we allow E.I. to develop without thoughtfully managing how it integrates with, and affects, society, it could be used to amplify dangerous biases and entities. And we may not notice until it’s too late.

