Teaching: Opportunity in the Age of Intelligent Machines

Opportunity in the Age of Intelligent Machines, a class exploring the effects of AI.

Client
University of Michigan
Project Type
Teaching
Date
Sep 2019
 - 
Jan 2021
Services
Teaching, Course Design

SI 260: Opportunity in the Age of Intelligent Machines

For three of the four semesters of my master's degree at the University of Michigan School of Information, I worked as a Graduate Student Instructor for a class called Opportunity in the Age of Intelligent Machines. The concept of the course was to explore the effects of automation and artificial intelligence on society at large. The class was helmed by Walt Borland, a prolific entrepreneur who understood the business and economic implications of AI; he hired me as a GSI because I could bring my own technical background in the field to the class. The class, which had the tagline "what will you do when machines do everything," was a forward-looking survey course that covered the seismic changes that robotics, automation, and even basic labor-saving technology have brought about. The course was designed to equip students with a compass for navigating their future careers and a framework for exploring the nuances of algorithmic justice.

While teaching this course, I had a broad suite of responsibilities: teaching weekly seminars to 50 students, giving extensive feedback on student work, and, perhaps most importantly, helping establish what narratives about the future would be appropriate for a course like this. Our students came from a broad range of disciplinary backgrounds, which meant they entered our class with dozens of separate mental models of what automation is, what it is capable of with current technology, what it is being used for, and what it should be used for. Our mission as teachers was to teach students the facts about AI in order to empower them to forge their own ideas and opinions about what society ought to do about the challenges presented by widespread automation.

Pedagogy

Because of SI 260's nature as a survey course, it was designed using the Gameful Learning course model. In a typical college course, all assignments are mandatory, which means that students have the psychological experience of starting at 100% in the class and gradually degrading their score as they make mistakes. In Gameful Learning, students select from a broad range of point-scoring opportunities in pursuit of a set point total for each letter grade. While both models require a certain number of points to reach a desired letter grade, the user experience of Gameful Learning differs in a few meaningful ways.

The biggest difference in the user experience of a Gameful class is that every point scored is a net-positive event. In a classical college course, getting a 30% on a quiz is disastrous; in a Gameful Learning class, those 3 points out of 10 still contribute to the final total, and the missing 7 points can be earned later via an optional assignment that makes up for the poor quiz performance. This distinction had a real effect on student behavior: when learners knew that a low score was not the end of the road, they learned to take more risks, to try tasks that they might not be as good at, and to experiment more with different types of assignments. Additionally, the flexibility offered by Gameful Learning allowed our students to focus on the aspects of our subject matter that excited them the most.
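The contrast between the two scoring models can be made concrete with a small sketch. This is purely illustrative (not the course's actual grading software), and the specific point thresholds are invented for the example:

```python
def classical_grade(scores: dict[str, tuple[float, float]]) -> float:
    """Classical model: every assignment is mandatory, so the grade is
    the percentage earned across all required points. A single low quiz
    permanently drags the percentage down."""
    earned = sum(e for e, _ in scores.values())
    possible = sum(p for _, p in scores.values())
    return 100 * earned / possible


def gameful_grade(points_earned: float, thresholds: dict[str, float]) -> str:
    """Gameful model: students accumulate points toward fixed letter-grade
    thresholds, so every point scored is a net-positive event."""
    # Check cutoffs from highest to lowest and return the first one reached.
    for letter, cutoff in sorted(thresholds.items(), key=lambda kv: -kv[1]):
        if points_earned >= cutoff:
            return letter
    return "F"


# A 3/10 quiz drags a classical percentage down...
quizzes = {"quiz1": (3, 10), "quiz2": (9, 10)}
print(classical_grade(quizzes))  # 60.0

# ...but in a Gameful course those 3 points simply join the running total,
# and optional assignments can supply the missing points later.
thresholds = {"A": 900, "B": 800, "C": 700}  # hypothetical cutoffs
print(gameful_grade(3 + 9 + 790, thresholds))  # B
```

The key design difference the sketch captures is directionality: in the classical model a bad quiz lowers a ratio that can never fully recover, while in the Gameful model points only ever accumulate toward the next threshold.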

This pedagogy, and the hard work of our teaching team to make a nonlinear scoring system understandable and up-to-date, was a big success with students. In the fall of 2019, SI 260 received the strongest student evaluation scores of all the courses at the University of Michigan School of Information.

Teaching about the future

It is a challenge to teach a class that is primarily about the future. Nobody knows what the future holds, and speculation about future technologies can never carry the authority of settled fact. Walt Borland, Chris Demundo (the other GSI), and I delivered a course that informed students about the present and past, then encouraged them to speculate about the kind of future they would want to help bring about.

For example, we spent a meaningful number of class hours on the future of employment (and whether robots really are going to take all our jobs), but we conducted that investigation by reaching into the past. Automation threatening employment is not a new story! We explored the history of the Luddites, who destroyed textile machinery in England, and of the hundreds of professional "calculators" who performed intricate math before electronic computing became available.

Analysis of the present also played a large role in the course design. We examined the financial details of the "Big Four" companies in Silicon Valley: Apple, Amazon, Facebook, and Google, and discussed what nonlinear effects their investments in automation might have on their business models. We discussed privacy, antitrust concerns, data's treatment as a raw resource, international differences in AI policy, and the limits of what today's cutting-edge technology is really capable of accomplishing. Our students were brought up to speed on AI from business, technological, and governmental perspectives.

Armed with all this information, our students were invited to speculate about what futures they believed were possible. These speculative futures sparked engaging discussions and helped students to examine their deeply-held cultural priorities. We discussed the definition and purposes of labor, the concept of social safety nets, income inequality, and the existential terror of being outsmarted by a computer.

As a proponent of algorithmic justice (and justice in general), I was able to bring that perspective to the course design. I ensured that my discussion section had an antiracist focus. We discussed the ways in which automated decision-making enforces existing inequities, and the ways in which AI is often used to launder the racist value judgments of its creators into something that looks like "objectivity." I also centered themes of humanism and pluralism to undermine the expectation that AI can ever produce an answer that is best for everyone. This justice work is my largest contribution to the course design, and it's also the part I'm most proud of.