Summer Curation


This video deals with the Area of Knowledge of Ethics, specifically the ethics of artificial intelligence. This topic has been discussed many times in the past and has led to much debate, both because of its complexity and because it seems entirely unlike any ethical question raised before. This video, however, argues that it is not completely unlike knowledge issues that came before. It specifically compares artificial intelligence to other groundbreaking inventions of the past that changed the way our society functioned. It brings up the printing press, which forever changed communication and helped spark social and cultural change. When I searched my own knowledge of history for a similar technological step forward with unforeseen societal consequences, I thought of the cotton gin, a machine that had a large impact on the agriculturally focused antebellum South of the United States, not only allowing for greater yields but also entrenching the practice of slavery. In the video, the atomic bomb is mentioned as an example of a technological advance with horrific results. However, the video also cites the philosopher Alain Badiou, who argues that these technological events, these breakthroughs, are necessary for “progress”; by this reasoning, not attempting to create such moments (such as the creation of artificial intelligence), and thereby not creating progress, is unethical.

The topic the video handles is obviously complex. It acknowledges that progress is not always good (harkening back to the atomic bomb example, since countries are now trying to regress, in a way, by reducing the number of weapons of mass destruction they possess). However, we can hardly know which inventions will change society forever in a net positive way and which in a net negative way. Does this mean we should fear technological innovation and progress? Does this mean it is ever ethical to avoid progress, even though it may lead to better lives for future generations?

Though the video does speak broadly about technology and progress, it is primarily about the issue of artificial intelligence. It brings up the concern of robots eventually taking jobs from humans: not just the menial jobs necessary for survival, but more intellectual work as well, such as that of teachers, lawyers, and doctors. Though he doesn’t focus much on a world in which this is true, I can’t help but question what that world would be like. What would be left for the humans? If we are no longer needed to provide services to the world, do we no longer have jobs? How is money handled if we cannot work? Are the only jobs available “creative” ones that artificial intelligence cannot fill, such as philosopher or artist? Is this world, where everyone is unburdened by work or simply unnecessary, a utopia or perhaps a dystopia?