We’re about to dive into Generative AI for a large part of this conversation. What’s your simplest explanation of what Generative AI does, and how does it differ from Classic AI as we’ve known it?
Chris: Let me illustrate this with a comparison. People were excited when search engines came out. People said, wow, now we’re not going to need libraries anymore; the search engine will rapidly give us what we need to know because search engines are based on AI, magically powerful. And of course, that isn’t what happened. Instead of struggling to find resources in a library, people struggled to filter resources in a search engine. The search engine returned some useful things and many, many things that were not useful because it didn’t understand what it was doing in the way that a librarian helps you find something by understanding what you want to do. So, search engines are certainly a technology that has improved productivity through AI, but not in a way that is magical or without its own problems.
Generative AI goes a step beyond the search engine. Generative AI says, I’m not going to provide you a bunch of resources so that you can look at them and decide which, if any, might be useful, or which two or three you would combine to get what you are seeking. I’m just going to give you the answer. Now, that’s an overly simplified description, but it illustrates the comparison: classic AI doesn’t attempt to give you the answer in many situations; it helps you with your search for resources. Generative AI attempts to give you the answer, but the weaknesses of AI limit its effectiveness in this expanded role.
It is a little like the relationship between Captain Picard in Star Trek: The Next Generation and Data the Android, who looks like a person but who is an AI. Captain Picard uses Data to do things that he can't easily do. But Data is not in charge of the starship because Picard has a whole set of human wisdom that Data can't possibly have because he's an AI.
This relationship between humans and AI is a theme in science fiction that goes way back. For example, long ago when I was an undergraduate student I watched the movie 2001: A Space Odyssey, in which the AI goes crazy and starts killing astronauts because it doesn’t understand what the human beings are attempting to do. So, it’s a mistake to think that generative AI is a step toward becoming more human, and that it can therefore do more and more things like a human being. AI is an “alien” intelligence. As an alien intelligence, it has strengths that are different from human intelligence, and those can be very useful, as long as we don’t think that it’s somehow comparable to human intelligence in its strengths.
In its Future of Jobs 2020 Report, the World Economic Forum found that COVID-19 has accelerated the arrival of the future of work, especially automation and the adoption of new technologies. Given your background in AI and workforce development, could you help set the stage for us on the workforce landscape as it stands today?
Chris: Many people are reskilling and upskilling because AI is eliminating some jobs and changing the nature of many others. I have many students who go for job interviews. Currently, interviewers don’t just look at your transcript, examine your recommendation letters, and chat with you about what college life was like; they want you to do a performance task of some kind. Somebody might go in as a marketing major, and they’d say in the interview, you’ve got half an hour to produce a marketing plan, and here’s what we want the marketing plan to cover. Now, the punchline is that, at the end of that time, they’re going to give the same task to something like ChatGPT, and if your marketing plan isn’t any better than what ChatGPT can produce, you’re not going to get the job. The employer is going to say, well, why should I hire you when I can get this for free just by typing it into an AI?
As we understand the complementarity between human strengths and the strengths of AI, what is ideally going to happen in the workforce is a partnership between humans and AI called IA, Intelligence Augmentation. In this relationship, the person does what he or she does well, the AI does what it does well, and the whole is more than the sum of the parts (as with Picard and Data). The partnership can accomplish more than either side can on its own. However, to achieve IA, humans have to upskill in ways that are different from what AI does.
A challenge is that, if you look at what we’re now training students to do and how we measure quality, we’re using assessments like psychometric high-stakes tests to determine effectiveness. If students get high scores on high-stakes tests, we believe they’re ready for work. Unfortunately, that’s now the AI part of work: AI does well on psychometric high-stakes tests. We’re preparing students, in a sense, to lose to AI, as opposed to preparing them to do the human things that AI can’t do. Our education system, as it stands today, is educating people as if it were still the Middle Ages, and we’re concerned that AI might disrupt our medieval education. That’s the wrong way to think about it: the fundamental issue is not how we keep education as it is and integrate AI; it’s how we understand the ways AI now enables, and even forces, us to transform what people are learning.
With Generative AI having captured the attention of educators, what implications do you foresee for curriculum and assessment design in schools around the world?
Chris: Performance assessments are where we need to go. Psychometric tests with multiple choice and short answers are often used as proxies for achievement but are not good indicators of applied knowledge. For instance, asking students to list the seven steps of scientific inquiry and put them in the correct order teaches scientific inquiry only as a recipe. But first, scientific inquiry is not a recipe; it’s much more complex than that. And second, somebody can get a perfect score on tests like that and not have any idea of how to do scientific inquiry. All that they know is how to memorize descriptions of it. Performance assessments, in contrast, require you to do something in order to demonstrate applied knowledge.
AI enables us to create very rich kinds of performance assessments about human capabilities that AI itself does not have. One of the advances in education I develop and study is called digital puppeteering. In many ways, this is like a flight simulator for human skills. Pilots now learn their job in part by going through simulations in dummy aircraft, which let them learn skills that would be dangerous to practice in a real aircraft; they practice the dangerous things until they’re fluent, and then they can fly the real aircraft to learn things not possible in the simulator. Digital puppeteering lets human beings practice psychosocial skills important in work and life without making high-stakes mistakes with real people.
My colleagues and I have studied using virtual students to prepare teachers to lead equitable discussions in classrooms. We’ve also used these simulators to conduct research on how people learn negotiation such that, whether you’re negotiating with a younger woman, an older man, or a person of color, you understand how to keep your biases from getting in the way. So here, what AI is doing is not substituting for a person or deskilling them; it’s empowering a person by making a simulator more authentic and sophisticated.
What are some interesting applications of AI or Generative AI that you’re already noticing and what are some that you’re excited about in the near future?
Chris: I am Associate Director for Research in one of the National AI Institutes that the National Science Foundation is funding; its focus is Adult Learning and Online Education.
A different national institute is examining how AI can help neurodiverse students. Yet another is studying the role of narrative in engaging students: how can AI enable motivation personalized to a learner’s culture and history, rather than one-size-fits-all? Still another institute is studying the role of AI as a cognitive learning partner because, when you teach something to an AI, you’re also learning it yourself; the best way to learn something is to teach it. So there’s a lot of research going on into how AI can complement human teachers and students with useful kinds of support.
There are three kinds of potential benefits of generative AI that I don’t see discussed much, but that for me are hallmarks of where we might be if we do things right.
Generative AI has the potential to help people understand the consequences of global issues in the context of their own lives by modeling and predicting the effects of these issues on specific locations. For example, if California is experiencing climate-related problems, someone in Iowa may not realize how it impacts them. Generative AI can use a combination of databases to model a study on the climate change effects in Iowa in 2050, even if such a study has not yet been conducted. By doing this, people can better understand what they have at stake and the preventive actions they can take. Evidence-based modeling for local decision-making using generative AI has the potential to make a significant difference.
I would expect to see more evocative art, more influential writing, and more wonderful music if human beings upskill so that they’re not just learning what generative AI now does in the arts but moving beyond that to enhance their human creativity. Let’s say you’re painting and you’ve made a wonderful little flower. Now, you want a border of that flower all around the edges, and instead of performing the formulaic task of painting that same flower over and over, you can have AI make those flowers. This frees up your time to work more deeply on the non-formulaic parts of the art, which is creative, which is human.
I believe digital puppeteering (described earlier) could be used to help people with social perspective-taking, seeing things from another person’s point of view. Too often, human beings work against each other instead of working together because people don’t understand conflict resolution or are frightened by people who are different from them instead of excited by people who could complement them. With psychosocial simulators, people may become better at working with and feeling empathy for others, particularly people who are different from them.
How do you see the education ecosystem - primarily educators, policymakers, and entrepreneurs - shaping the kind of reinvention we need with AI and Generative AI in the mix?
Chris: Well, first, teachers, and that’s the easiest one for me because I am a teacher: AI is not going to replace the important parts of what we do. So understanding more deeply the human parts of teaching, and how AI can help with the routine parts, is the most important thing human teachers can do from a process perspective. From a content perspective, human teachers need to look at what kids are learning and ask, “Is this part of what they’ll be doing in the workplace, either as a foundation for knowledge or as actual knowledge and skills that are useful in the real world? Or is this just part of doing well on a high-stakes test, something that in the workplace will be part of the AI, part of the solution?”
That brings us to policymakers because ultimately policymakers want to measure how effective education is, and appropriately so. But what measures you use determine what you get. If you use psychometric tests, and other things that AI does well on, then you’re going to get a generation of graduates who are effectively unemployable, because the only things they’ll know how to do are things that AI is already doing better. So it is crucial that policymakers understand the new division of labor between people and machines, and change how we measure education, with a greater focus on performance assessments that look at the human side of what’s attained.
Finally, ed-tech entrepreneurs. The first thing to do is to deeply understand what problem you’re trying to solve and what business you’re in. I live 90 miles north of Newport, Rhode Island, which has wonderful historic mansions that were built over a century ago in what Mark Twain called the Gilded Age. This was the age of the big banks, the railroads, and people making enormous amounts of money in those enterprises, particularly in the railroads. They lived like kings and queens in Newport, but within 20 years those fortunes had collapsed. The mansions were deserted; many of them were torn down, and a few are left as museums. The reason those fortunes disappeared so rapidly is that people didn’t understand what business they were in. They thought they were in the railroad business, but they were really in the transportation business, and the automobile and the airplane hammered the railroads.
What I often tell ed-tech entrepreneurs is, we’re not in the education business anymore. We’re not in the classroom business, the campus business, the high-stakes test business, or even the degree business. We’re in the learning business, the lifelong learning business, and that is a different business than the education business.
And if you can solve a problem with lifelong learning, you’ve got a very exciting entrepreneurial opportunity.
Finally, what do you think you should keep in mind to ensure that all the design practices are in the best interest of the students and the broader education community?
Chris: AI doesn’t understand equity or diversity or inclusion - those are human understandings. My colleagues and I published a book chapter and an article discussing the cyclical ethical challenges of AI, because there are at least five places in the development of AI where bias can be, and often is, introduced, resulting in inherently biased decisions that worsen diversity, equity, and inclusion. Here is where the human-AI partnership becomes crucial in ensuring that the datasets used to train AI are fully representative of the population to be served. Because it’s not that AI introduces a new kind of bias that’s never been there before. AI serves as an amplifier for the human biases we’ve already got, and, if we’re not careful, we just have a faster way of being biased, a more efficient way of being biased, instead of a more effective way of avoiding bias. Generative AI is a mirror; if we don’t like what it shows, we need to fix the real world rather than blaming the mirror for what it reflects.
About Chris Dede
Chris Dede is a Senior Research Fellow at the Harvard Graduate School of Education and was for 22 years its Timothy E. Wirth Professor in Learning Technologies. His fields of scholarship include emerging technologies, policy, and leadership. From 2001-2004, he was Chair of the HGSE Department of Teaching and Learning. In 2007, he was honored by Harvard University as an outstanding teacher, and in 2011 he was named a Fellow of the American Educational Research Association.
In 2020 Chris co-founded the Silver Lining for Learning initiative (https://silverliningforlearning.org). He is currently a Member of the OECD 2030 Scientific Committee and an Advisor to the Alliance for the Future of Digital Learning. Chris is also a Co-Principal Investigator and Associate Director for Research of the NSF-funded National Artificial Intelligence Institute in Adult Learning and Online Education.