AI is Unable to Outpace Higher Education

Lumina podcast episode 48, Full episode transcript


0:00:11.4 Dakota Pawlicki: Hello and welcome to Today's Students, Tomorrow's Talent, the show about work and learning after high school, brought to you by Lumina Foundation. I'm your host, Dakota Pawlicki, coming to you live from the podcast lounge at South by Southwest. We're so glad to have you with us for one of three live shows addressing a range of topics: climate change, AI, and American prosperity. Artificial intelligence is seemingly everywhere, certainly here at South by Southwest. It seems that every other session has some reference to AI. It might be every session, actually, but that's not hard to understand. AI adoption has been faster than that of many other technological advances, rivaled only by the Internet and smartphones. Investment in AI has also been rapid and substantial. An August 2024 Goldman Sachs report projected a total of $1 trillion of capital expenditures for AI across the public and private sectors in the coming years. And anyone seeking tech financing will tell you that it's nearly impossible these days unless you have some kind of AI feature.

0:01:11.1 Dakota Pawlicki: And it all gets me thinking. As I've shared on the show many times, in the end, I'm a classically trained tuba player. I've had a lot of twists and turns in my career, and all of those twists and turns have had to do with improving the human condition. And while AI certainly has many applications to help us make life better, is that pace entirely dependent on the humans behind it? Can AI outpace higher education? I found some really smart folks, notably not tuba players, at least I don't think so, who can help me sort this issue out. With me today is Alisa Miller, Chief AI Officer, CEO and co-founder of Pluralytics, Athena Marketing and Media, and also Lumina Foundation's Board Chair. We also have John McDonald, Head of Strategic Initiatives at the Tulsa Innovation Labs, and Julie Schell, Assistant Vice Provost of Academic Technology and the Director of the Office of Academic Technology at the University of Texas at Austin. Thank you all so much for joining me today.

0:02:09.0 John McDonald: Thank you.

0:02:09.9 Alisa Miller: Thank you.

0:02:09.9 Julie Schell: Thanks for having us.

0:02:10.8 Dakota Pawlicki: Yeah, Alisa, I'm going to start with you. And basically, my first question is, am I an idiot? Perhaps it was rather bold of me to propose a session at South by Southwest that says AI can't outpace higher ed. It's obviously evolving at an unprecedented pace. But I guess my argument suggests that higher ed has a pretty important role of shaping it. Am I wrong? Am I an idiot? Should I just quit my job now? How are you feeling?

0:02:38.7 Alisa Miller: I think it's a both/and, not that you're an idiot. [laughter]

0:02:41.1 Dakota Pawlicki: Or I should quit. Okay, good. [laughter]

0:02:41.9 Alisa Miller: No, no, no. No, I'm talking about higher ed.

0:02:44.0 Dakota Pawlicki: Ah, okay.

0:02:44.2 Alisa Miller: So, I think that it's up to higher ed as to what extent it's able to keep up with AI or not. I think it's a leadership question. Because I think that certainly humans are going to be important for successful AI integration and use in the future. I'm very much an AI-augmentation advocate when it comes to people and interactions and the future of work. At the same time, though, sometimes higher ed is recalcitrant, right? Sometimes higher ed committees things to death, right? Meanwhile, AI is going to be adopted everywhere. And so, the question is, how is higher ed going to not only keep up, but lead in its adoption? And if it does lead in its adoption, I believe that higher ed will be more successful in preparing the future student, the future teacher, the future workforce for relevance as AI becomes integrated. And I'll just leave you with one analogy, since we were talking about analogies earlier. It's not exactly the same, but think about higher ed before the computer and after the computer. The computer had huge implications for higher ed. I think AI has similar aspects; there are differences, clearly, but it's comparable in its transformative potential. And so, in my mind it's a leadership question of what higher ed and educators and institutions will do as students and business move forward with AI.

0:04:52.9 Dakota Pawlicki: Yeah, Julie, you're rooted in as much higher ed as one could possibly be. Alisa's suggesting that it's partly a leadership thing, but I guess, same question: do you think higher education can keep pace when it comes to AI?

0:05:10.0 Julie Schell: I thought a lot about this question, and it took me back a couple of years to when I was walking around the campus of the University of Bologna. Has anyone here been to the University of Bologna?

0:05:23.0 Dakota Pawlicki: Some folks in the audience, nice.

0:05:24.0 Julie Schell: So, it was founded in 1088, and it's the oldest university in continuous operation. And it's withstood so much, right? Over those thousand-plus years: wars, the Industrial Revolution, many, many technological disruptions, the advent of the Internet. It's really survived all of that. And you have to think about what ingredients make higher ed so resilient. And I think one of those things, at least for me, and what we try to cultivate at UT Austin, is that it's not just about knowing, but about the relationship that cultivates the knowing.

0:06:15.2 Julie Schell: And I think back to my time in college as an undergrad, when I had a physics professor who cared a lot about me and about my development as a human being. And he would sit with me in his office hours and try to explain Newtonian mechanics over and over again. And it was really that relationship that set me on the path to my career and my life. And I don't think that can be replicated by AI. And I know that most people have a story about a teacher who became a significant, influential other in their life. I don't think that can be matched by AI. But I do think there are some things in higher education that should be outpaced by AI, and some of that is the more transactional education of pure lecture and passive learning that happens in a lot of our classrooms. I would welcome AI outpacing us in that respect.

0:07:24.9 Dakota Pawlicki: Yeah, I know later on we're going to talk about AI as a teaching tool and how it might actually be a transformative agent. But you do talk about relationships, and certainly there's the individual relationship between a faculty member and a student, for example, that can really catalyze all this work. But there is another set of relationships. John, I know you've been at the intersection of a lot of these things. You've been in private industry; you've worked a lot in workforce development and innovation. How do you see higher ed's role within the proliferation of AI going on right now?

0:07:58.0 John McDonald: Yeah, I appreciate the question. I was thinking quite a bit about this earlier this weekend and last week, just trying to ponder the longer-term implications from a workforce perspective. In your last episode, you touched on that a little bit. I think one important thing we should consider is the role that AI is taking over from what we used to call new workers or early-career employees. A lot of the jobs it is affecting most today are what we would have called junior-employee work in the past, right? Like graphic design, or doing data analysis for superiors to review, as in the legal field, or perhaps doing research on a topic to prepare decision makers in a business to take a direction one way or another. That's largely being replaced today by AI. And one of the concerns I have is that in the process, we are potentially eliminating a lot of early-career job roles, the very jobs that people are, frankly, going to school and earning bachelor's degrees to get. That's the value proposition that has been sold for quite some time.

0:09:33.1 John McDonald: And so, if the payoff of an early-career or starter job at a company isn't there anymore because the job has been taken by an AI tool, what then is the impact on the universities that have a substantial amount of their operation focused on those undergraduate degrees? I think it's definitely non-zero, and it potentially has huge implications for the structure of universities and for what, if you will, they make their money on, given the proliferation of degree programs and undergraduate offerings, if the payoff isn't there anymore. So, I think a lot about that problem from a workforce-readiness perspective.

0:10:20.2 Dakota Pawlicki: Yeah. Julie, I think I see a smirk on your face.

[laughter]

0:10:24.0 Julie Schell: Maybe. I'm thinking about the ways in which we are working to educate our students that the degree is not the end goal itself; it's the durable skills that come from a traditional undergraduate education. For example, critical thinking and analysis, the ability to evaluate misinformation and disinformation, the ability to solve problems. And I was sharing with Alisa that I think there are some signals about what the future of work is going to look like in our current workplaces right now. I can't think of a day in the past week when I've solved a problem where I actually knew the answer. Every day, the problems that come to my desk are problems with no clear direction; it's totally a gray area, and we have to make decisions that are in the best interest of the university and our students when we're faced with that. And I think that is replicated out in the world as well. And so, what we're hoping for is not to produce a bachelor's degree, but to produce human beings who are able to do critical analysis, to make ethical and moral decisions in the face of ambiguity, to discern what is misinformation, disinformation, or accurate information, and also to create.

0:11:58.8 Julie Schell: So, we're also developing creatives and thinkers who produce human works that help us understand the world and each other. And the last thing is that we're trying to help create people who are engaged in interdisciplinary thinking. There aren't a lot of problems in the world that you can just go solve with a statistics degree. You have to have that understanding in the context of a larger environment, with multiple perspectives coming into one. And I think that's at least what we're hoping to do at the University of Texas at Austin: not to produce degrees, but to produce thinkers and problem solvers.

0:12:46.9 Alisa Miller: Can I speak to the interdisciplinary aspect? I think this point you're making is really important. Because when we think about AI in the future, I think one of the things that's underestimated is the interdisciplinary nature of AI itself. Right? Think of the entertainment industry combined with deepfakes, combined with regulation, and all of these things interacting with each other. Right? When we look at the future and the human element in regard to AI, it's also someone who can see, or people who have the skill set to see, across these places where AI is taking shape and where we're using it as a tool, and who can make those connections across these sorts of "focus buckets," for lack of a better word. And much of the conversation when we talk and read about AI and innovation goes, "Here it is in this bucket, here it is in that bucket. And then over here we're talking about ethical AI, and then over here, here's what's going on with the entertainment industry in relationship to it." And it's actually a both/and in that case as well.

0:13:56.9 Alisa Miller: And so, we should be preparing people and leaders to be able to weave and take that interdisciplinary look. And I know we're going to talk more about the structure of education and what we are actually teaching people in the future, but I think there are core aspects to a bachelor's, for example, that really focus on this idea of being broadly knowledgeable and skilled, like in the last conversation that you led, but also specifically skilled. Right? And so, I think that preparing for this interdisciplinary nature of AI itself is part of what higher ed can really help move forward.

0:14:39.8 Dakota Pawlicki: You know, higher ed has never been accused of being fast. And yet, talking about the interdisciplinary nature, and knowing how much curriculum design work would have to go into transforming all sorts of different programs to embed this kind of learning, let alone trying to keep up with jobs and other things, to John's broader point: what are some ways… And listen, we can stick with you on this. What are some ways that higher ed, or even workforce training programs, it doesn't have to be traditional four-year institutions, can speed up that process, so that we can equip people today with the competencies, skills, knowledge, and abilities to enter a workforce that is AI-enabled?

0:15:24.0 Alisa Miller: So, this might sound a bit contrarian.

0:15:26.4 Dakota Pawlicki: Love it.

0:15:27.8 Alisa Miller: I actually think liberal arts is really important.

0:15:31.3 Dakota Pawlicki: You're speaking to my heart. I'm a tuba player. This is great.

0:15:33.5 Alisa Miller: Well, and in fact, I'm a very early example of it, and I'm not pretending that my experience is normal. But I have an AI patent, and I have no idea how to code. I don't know Python. But what I thought about was a problem I was trying to solve. Right? And then I leveraged AI, and also people who can code, to solve that problem. Right? And so, it's really about problem solving. That doesn't mean the specifically-skilled part doesn't matter; we're talking about prompting today, and three years from now we may not be. But it does mean understanding some of these things and using these technologies in the classroom, because you see how they hallucinate. You see where they're good. You see where you're better than they are. You see that they can call on things you can't call on. That interaction with the technology builds a fluency. By the way, all the students are using it anyway, but at least do it within a classroom setting. Right?

0:16:44.0 Alisa Miller: But it goes back to the problem solving. That's really important. So, in a weird way, I think it's back to basics. Right? I think about this with my own children, who are teenagers, and I'm like, "Okay, what do I hope they major in? What are they actually preparing for at some point?" And I keep coming back to leadership, critical thinking, creativity, and having fluency in dealing with human beings, which, as we know, can sometimes be the hardest part of the job, and how you put all of those pieces together. And that is what higher ed or credentialed education can help you do. It's the human interaction pieces of it.

0:17:33.0 Dakota Pawlicki: Yeah. John, I know you've been working with a lot of different partners in Tulsa. You have Build Back Better, the Good Jobs Challenge, a Tech Hub designation, and so much of it, if I'm thinking about it correctly, correct me, is around autonomous flight. So, from my perspective, it's a pretty new field. How are you working with training providers in higher ed to get them to speed up the process of embedding some of these new skills into their work?

0:17:57.9 John McDonald: Well, the easy answer, and maybe the ironic one, is by talking to the employers. What we have found is that there's been a significant disconnect, at least in Northeast Oklahoma, between what the "buyers" of the workforce product, as we'll call them, are looking for and what our local providers of that talent have been building to. Despite all kinds of efforts to bridge that distance, it really wasn't being bridged very effectively. So, we set up two structures, one called a "workforce intermediary" and another called a "labor market observatory," which are just really big words for gathering together the employers and asking them: what do you think you're going to need from a workforce perspective to boldly enter this area of what we like to call "flight transition" or "aerospace transition"? And then we use data analytics to verify or disprove that input, that's the labor market observatory, and share it with our workforce providers to see if there's a way to bridge the gaps it reveals. For instance, you talk to an employer and say, "Well, how many welders are you going to need?" And they say, "Well, maybe 50."

0:19:25.3 John McDonald: Then you go to the labor market observatory and ask, "Well, is he right?" And then, click, click, click, click: well, they'll probably need more like 500. And then you can go to your workforce providers and say, "It says here you're going to produce five." Right? So, what we need to do is shift the situation to meet the validated needs of the employers. We had to put a structure in place to bridge the distance between what the employers are seeing from a skills perspective, post-high-school, and what our providers, even in our local community, were providing, and line those two things up.

0:20:05.0 Dakota Pawlicki: It does make me wonder, though: are we devaluing the traditional degree? I'm trying to connect the liberal arts conversation to this very hard-skills conversation of needing this many people doing these kinds of things as these products are adopted.

0:20:22.0 Alisa Miller: Well, if I think about a community college or a four-year institution and the opportunities, right? If you do argue that AI is most successful when it has an interdisciplinary point of view, but also has interdisciplinary viewpoints as part of its creation or the interaction with it, I think higher ed is in a really interesting and potentially future-forward position, because of all the different kinds of degrees and all the different kinds of people who attend these institutions. And when you think about AI, AI is smartest with the highest diversity of data possible, which helps decrease bias. Right? When you think about quality data and quality inputs and the people who can be a part of it, higher ed is actually in a much better position to be a part of creating the future of AI, and to create leaders who understand those interdisciplinary inputs.

0:21:37.0 Alisa Miller: We're not that far away from people being able to say, "I'm really interested in creating this piece of software that solves this particular problem." And you put it into a prompt and you say, "I want to create this piece of software that solves this particular problem. It kind of looks like this piece of software that I see out there. It kind of looks like that piece of software that's out there. I would like you to create a V1 of that. Thank you." And it creates it for you. When you think about the implications of that, of solving problems and putting people in a position to think about how they can take advantage of this power to help solve problems, I think: what will higher ed do with that? How will it prepare leaders to think about that? You don't need to know Python to do that. What I did over four and a half years with Pluralytics, if I were to create that company today, I could do much faster based on where the technology is now. And it's going quicker and quicker.

0:22:46.8 Alisa Miller: And so, it really is about having people have a comfort with it, but also knowing whether the output they get is right or wrong, or where it needs to be tweaked. Right? And that, to me, is just the iterative process of learning. And where do you get that most? I think this notion of what it means to be a computer science major versus somebody in business versus someone concerned with design, a lot of this is actually collapsing from an interdisciplinary standpoint. And so, how do we think about that from a degrees and higher ed standpoint? Actually, I think universities and community colleges are in a great position. I know they're resource-constrained, et cetera, all the challenges, but everybody's kind of sitting right there. They could be a part of creating that future.

0:23:38.8 Dakota Pawlicki: Julie, are you hearing this from students? There's the traditionalist mindset of saying, "Well, you're in the office of technology; I'm going to work in technology, so that's the career path I'm going down." But are you getting students from design, from the arts? We were talking about philosophy majors on the last episode. Are you starting to see that interdisciplinary interest come from students themselves?

0:24:00.6 Julie Schell: Oh, definitely, we're seeing the interest from students. But it's hard for me to imagine that we're that close to that future. I do think there is a future where we will be able to do things quickly. But right now, I think calling it AI is a marketing tool. I don't actually think it's that intelligent when it comes to creativity, when it comes to solving problems. And part of what we need to train our students to do is see that, particularly in the arts, there's something missing from what it produces. Part of the issue with AI is that it is not using new information. It's being trained on information and knowledge that already exist. And what we're doing in higher education is trying to prepare for a world that we don't know. The future, not the past, right?

0:25:09.2 Julie Schell: And so, what we're trying to help students understand is that when you first start using AI, and probably folks have had this experience, right? When you first start using generative AI in particular, it sort of blows you away. In the first hour or two, you're like, "Oh my gosh, this is going to change the world. I'm going to be able to do everything with this." At about hour 15, you're like, "Oh, it's not better than me," right? And I think we need to help students understand that while it can speed you up, it can also make you look sloppy. While it can make you more creative, because it's drawing on information that already exists, past and old information, it makes you sound like everybody else. When you look at an image produced with something like Midjourney, there's something sort of dead about it. When an artist takes that image and then transforms it and infuses their own DNA into it, that's when it's compelling. And so, really, what we want to be thinking about is how to help our students partner with generative AI to transform rather than to transact with it. That's kind of where I sit on that.

0:26:29.0 Dakota Pawlicki: So, I want to shift a little bit. Higher ed plays a lot of different roles; obviously a main one is training and preparing people for life and jobs and things like that. But universities in particular have also played really critical roles through their research function and their advocacy in certain ways. I know when I was still a strategy officer at Lumina, blockchain had hit the halls of Lumina, which was a really funny conversation, by the way: a bunch of education policy experts suddenly talking about blockchain. There were like four of us in a room, not knowing anything at all to talk about with each other, but a high-quality conversation. But we started talking about the need for a regulatory framework: shouldn't we produce a framework, and these kinds of things. And obviously Lumina is a foundation, not a university. But there are folks who are concerned about the lack of regulation when it comes to AI tools and the AI space. And oftentimes universities do play a role here, putting out thought pieces, frameworks, and other work. Should universities be doing that more? Are we seeing good spaces where that's happening? Or should that really be left to industry, since it's happening so fast, or to another body, for that matter?

0:27:44.0 Julie Schell: I can take that on the teaching and learning front. I definitely think universities should be developing and influencing the responsible adoption of AI, particularly for teaching and learning. And in fact, we are doing that. We are working on defining what the responsible adoption of AI looks like for teaching and learning, and then what key principles should guide the decision making on a campus. Because it's not a one or a zero; there are no situations that are clear-cut. When it's a clear-cut situation, you already know what to do. But some of the principles we're thinking about are ones that folks are already familiar with: balance, transparency, pedagogy, those kinds of things that we think are really important. And we need to be teaching our students and our faculty how to engage responsibly. I definitely think we need to be influencing in that space. When it comes to regulation in science and research, I'm less in that space, but I'm a firm believer in what we're doing in the teaching and learning space.

0:29:10.1 John McDonald: I think that's right. And if I could just double down on that with a quick anecdote. This past weekend I was able to spend some time with my middle son, Zachary, who's a poli sci and history major right now at Purdue. And he related this experience where he was trying to understand a particular topic. What he did was go to one of the AI tools and say, "Explain this to me as if I were an elementary student." And it complied; he took that in, and he said, "Now explain it to me as if I were a middle school student." And then he kind of understood it, and then again as a high school student, as a high school graduate, as a junior in college. What it was able to do was hyper-explain at a much faster rate than if he had gone out and done the research on his own, or even gone to a class on that particular topic.
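To make that pattern concrete, here is a minimal sketch of the escalating-audience prompting loop John describes, written in Python. The openai client, the model name, and the sample topic are illustrative assumptions, not anything endorsed in the episode; any chat-style LLM API would support the same loop.

```python
# A minimal sketch of the escalating-audience prompting pattern described
# above: ask about the same topic repeatedly, raising the assumed audience
# level each time so each explanation builds on the one before it.
# Assumes the openai package (>=1.0) and an OPENAI_API_KEY in the
# environment; the model name is an illustrative choice.
from openai import OpenAI

client = OpenAI()

AUDIENCES = [
    "an elementary school student",
    "a middle school student",
    "a high school student",
    "a high school graduate",
    "a junior in college",
]

def escalating_explanations(topic: str) -> list[str]:
    """Return one explanation of `topic` per audience level, in order."""
    messages = []  # keep the running history so each answer builds on the last
    answers = []
    for audience in AUDIENCES:
        messages.append({
            "role": "user",
            "content": f"Explain {topic} to me as if I were {audience}.",
        })
        reply = client.chat.completions.create(
            model="gpt-4o-mini",  # illustrative model choice
            messages=messages,
        )
        answer = reply.choices[0].message.content
        messages.append({"role": "assistant", "content": answer})
        answers.append(answer)
    return answers

if __name__ == "__main__":
    for level, text in zip(AUDIENCES, escalating_explanations("federalism")):
        print(f"--- As {level} ---\n{text}\n")
```

Keeping the full message history in the loop is what lets each explanation build on the previous one rather than starting from scratch, which is the scaffolding effect described in the anecdote.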

0:30:05.4 John McDonald: And so, I think that AI is not going to replace the research professor, the intellectual, the subject-matter expert. Because I think what these tools are missing, you were talking earlier about generative AI for art and so on, what they're missing is a soul, quite frankly. Right? We always talk about how great creative work speaks to the soul, and this is missing a soul. Right? And so, we're not going to lose that, I think. But I think what AI does replace is crappy teaching. Right? If you're running a course where you're standing up there reading the textbook or putting it on PowerPoints, and that's all you've got, that's all you're contributing to the situation, I don't think that's long for this world. People will not sign up for your class, and people will not want to learn that way. It doesn't replace the research professional, the research scientist, the professor-intellectual. I just think it replaces crappy teaching.

0:31:14.9 Alisa Miller: So, building on that in kind of three parts. I think higher ed being a part of the responsible AI conversation and leadership is critical, because this is moving very quickly. Certainly there is government action on that front; how effective it'll be at staying in front of this, big question mark. And the companies involved, it's not that they're not trying to be ethical, but it's maybe not the first item on their list. And so, I think having people who are thoughtfully engaged in it, being a voice, a very loud voice, in the conversation is really important. Because we cannot ignore this, recognizing there are power dynamics at play and all kinds of things, but I think that's a role. But I also think it's really important for higher ed to actually be engaged and working with the technology, because it's really hard to be an effective critic if you're just looking at it like it's an idea out there somewhere, versus really being involved with it.

0:32:24.1 Alisa Miller: On the flip side, one of the things I worry about with the soul part, right, is the issue of "good enough." I have a lot of friends who are creatives, composers and others, and they're very concerned with the "good enough." The machine is good enough. And so, where in the teaching process are we? Is the AI that doesn't have a soul still better than the boring professor? Right? So, where "good enough" is, where the AI is, and where the human is, in all of this. How can we leverage the fact that AI can be better than a crappy instructor, but not a great one? Maybe there's a reason that a particular instructor is not great, and maybe we can help that instructor be better, because there are things they don't have to do anymore since the AI is assisting on those parts, right? Thinking about how we can use this to level up the experience on behalf of the student, I think, is a real opportunity.

0:33:44.0 Julie Schell: I think that leveling-up concept is really exciting, and I think it's both leveling up for the student and leveling up for the instructor. You asked earlier what our students are saying, and it's true that AI is everywhere. It's ubiquitous. Students are using it. But they're not all using it in the way we assume. Using it to actually help someone learn something, as in that example, is a good use. They're not all using it to cheat or to write their papers for them. In fact, most of the students we talk to are concerned about this concept of cognitive offloading. Cognitive offloading is when you delegate a task to a tool. The danger is if you don't replace that task with a more cognitively demanding exercise. Our students actually want those more challenging and cognitively demanding exercises. And so, I think there's a real opportunity there. We're most excited about using AI to help solve enduring, unyielding teaching and learning problems.

0:35:00.4 Julie Schell: For example, gaps in prior knowledge. Okay? One of the highest correlations with how you're going to do in a course is what you bring into that course, your prior knowledge. The model of higher education currently is that you have to teach to a general audience, so you can't customize. Well, there are students with big knowledge gaps over here, and students with no knowledge gaps over there; those students are more advanced. You actually have to teach to the middle of that distribution. With generative AI tools, we can offload some of that more transactional knowledge acquisition, some of that prior knowledge, to before class. The faculty member doesn't have to do that "explain it to me as if I'm an elementary student" scaffolding, which frees up time for the instructor to do more challenging, hands-on active learning tasks in class, tasks that really help develop meaning and what I call "3D knowledge." Right? If you've seen the Mona Lisa in person, you have 3D knowledge. The way your neurons are actually structured from that experience is different than if you've only seen the portrait in a book. Right?

0:36:19.5 Julie Schell: And so, by offloading some of that more transactional, passive learning and knowledge acquisition to before class, we can free up the in-class time for the faculty member to spend on those more hands-on activities. And I'll close this comment with… You mentioned that you don't know how to code but have an AI patent. I also don't know how to code, but I had a vision for an AI tutor that would do exactly what we're talking about, where we would be able to scale the expertise of our faculty across the university, across the system. And so, we've developed a tutor called "UT Sage." Sage is available 24/7 to a student. They can go and ask Sage the same kind of question that you just posed, without shame, right? You're probably not going to go to a UT Austin faculty member and ask them to explain something to you like you're a fifth grader, right? UT Sage can do that.

0:37:35.0 Julie Schell: But the other thing UT Sage does is help the faculty member. You can know nothing about pedagogy, and if you use UT Sage to build your tutor for a particular topic, you can be assured that you're using effective principles of learning science, because we've baked that in. I don't know how to code, but the person who coded it is back there, and our engineer is back there. And so, I also think it's about coming together with visions and expertise in that interdisciplinary way that you mentioned, Alisa, to really customize the learning experience and offload what we don't need to do ourselves, so that we can spend the precious, relational time that we have with students to transform their learning experiences.

0:38:23.9 John McDonald: Dakota, if I could just point out what I think is a related workforce issue that comes of that. Just as I think there's going to be decreased demand for early-stage employees as we've historically known them, because of functions being replaced by AI in certain industries, there's also, therefore, a decreased demand for, we'll call it, the 500-person, 100-level lecture. Right? I can think back to my own Management 201 cost accounting class and how horrible that was in a 1,000-person lecture hall. I learned practically nothing. But those lectures are often also the places where the early-stage employees in academia work, right? Think about the teaching assistants and graduate assistants and other sorts of toeholds into academia, ways to start your career in that space. There's decreasing demand for that. And so, I worry as well about the future sustainability of academia if, because of AI, we're also eliminating a lot of the places where early-stage employees in that industry got a toehold. There are significant early-stage workforce implications of AI, I think, not just in the general workforce, but also in academia.

0:39:46.9 Dakota Pawlicki: I'm getting stuck on something, though, because I guess I just don't know who to trust anymore. As we've been saying, there is a proliferation of AI products. It's going to be easier for us to develop new tools and new ways to help us all achieve our goals. At the same time, as you point out, Alisa, some of our corporate leaders are the giant tech monopolies that we have in this country, for example. We can't necessarily… I won't say "we." I don't necessarily put my trust in them to have the human condition first in mind, or ethical practice, for that matter. But at the same time, we're in a spot where universities also have a lack of public trust. And I guess I'm still a little bit curious: as we start to see AI seep more into our training and into our teaching and pedagogy, how does someone trust that what they're actually receiving is something of high quality and use? When Tulsa has to create intermediary agencies to make all this work, there's an argument that that's also a trust-building exercise, getting higher ed, private investors, and employers to trust each other. Because we couldn't trust each other alone; we had to create a separate structure in order to create that trust environment. So, can anyone make me sleep a little better at night? How do I trust anything when I don't know who should really be leading on this?

0:41:14.2 Alisa Miller: Well, I don't think you should trust… I mean, on one level, I think trust is earned. Right? With that said, I was thinking about this notion from the last question of the metacognitive impacts, right? That's a perfect thing for higher ed to actually study, provide research on, to help people understand. So, the assumptions that you've made in creating Sage: what were those, why did you make those choices, and why did that happen? That's the same kind of knowledge that a huge technology company might incorporate or benefit from. Right? So, because higher ed actually cares about these questions, there's a real opportunity for higher ed to be a part of defining what trust could look like. Under what circumstances is something trustworthy? And to what extent have we actually done the research to understand the impacts of Door A versus Door B?

0:42:25.9 Alisa Miller: I was doing some research on this idea of metacognitive impacts, this idea of laziness versus alertness and how it can shift. And I was trying to find studies: "Who's studying this? Who's involved with this?" And there wasn't as much coming up as I would like to see, right? And I thought, "Well, there's a big opportunity," right? Because it's not only good for the intellectual space, it's not only good for education; it could potentially impact how some of these big companies, who do have billions and billions of dollars they're investing in these things, build them. That kind of research could have an impact, so that they're actually creating something we could feel is more trustworthy.

0:43:12.2 Alisa Miller: I think one of the things I worry about is that there is actually an enormous amount of trust in the output today. It could be wrong, it may not be accurate, but it's an amazing thing to watch people just kind of agree, right? Without the critical thinking skills to know whether it was right or wrong. So, you can see in that a whole sea of opportunity for credentials beyond high school to help people think about this, which could be helpful in a workforce environment, but also in how this is all developed and how higher ed could play a part.

0:43:57.0 John McDonald: You know, this trust gap that you're bringing up, I think, is very important to consider. The way I explain it sometimes is: let's say you're ordering a book on Amazon, and you get to the last screen where you're putting in your credit card information, and you think, "I think it's safe. I'm pretty sure it's safe." You get that feeling in the pit of your stomach, but then maybe you realize the feeling is actually that you're hungry. So, you go to a restaurant and order a meal, and at the end of the meal you hand the waiter your credit card, and he walks away with it for like five minutes, and you don't think anything of it. So, what's the difference between these two experiences? The answer is that, even though it's fleeting and superficial, you are developing an interpersonal human relationship with the waiter, so that if you get an errant charge on your credit card, your mind immediately goes, "It was the waiter." Obviously, the Amazon experience is way more secure. But we as humans are, frankly, only wired to trust other humans. We don't trust computers or machines, at least not yet.

0:45:15.0 John McDonald: Or, said more succinctly: if IBM's Watson figured out that you had cancer, would you believe it, or would you want to talk to your doctor? Right? And so, there's this sort of inner human fail-safe that we've not been able to overcome yet, it seems, and it has been accentuated in young folks today as they've been immersed in this ocean of fake: fake friends, fake influencers, whatever. They are ever more attuned to seeking out authenticity in things. This is how I account for the rise of vinyl records again, right? Which was like, "Wait a minute, I had vinyl records and I got rid of them all." But it's because you can hold it, you can feel it, you can see the grooves, you can look at the liner notes. It feels authentic in a way that streaming music doesn't. And so, there's this reaction, I think, to social media, where people are seeking out more authentic things. And we've tuned up our senses around whether or not we should trust these computers, frankly, at just the right moment, when we shouldn't. Right? We have not overcome this trust gap of humans only trusting other humans yet. And that provides me a little calmness at night when I go to bed about all this AI stuff, on the trust front.

0:46:33.7 Julie Schell: I want to plus-one our earlier conversation about the importance of the relationship and having the human in the loop in our AI engagement. So, always wanting to have a human partner versus just offloading the transaction. One of the things, just to your point, that we've heard about Sage: you could use ChatGPT or another tool, like your son may have, or you could use Sage, to help you learn a particular topic. Let's say you want to learn logistic regression, which is about predicting the probability of an outcome. Why would you go to Sage versus going to ChatGPT? What we hear from our students is that they trust Sage more because their faculty have trained it. Right? And so there is this human connection: knowing the person who trained that particular tutor creates an element of trust and connection. So, I do think that having that partnership and having that human in the fray is really important to building trust. When you're thinking about "Who do I trust?", thinking about who the people involved in that relationship are, I think, is really important.
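As an aside, for readers who want the example topic Julie mentions made concrete: logistic regression models the probability of a binary outcome by passing a weighted sum of inputs through the sigmoid function. Here is a minimal sketch; the toy data and the scikit-learn library are purely illustrative assumptions, not anything from the episode.

```python
# A minimal sketch of the logistic regression topic mentioned above:
# predicting the probability of a binary outcome (pass/fail) from a
# single feature (hours studied). Data and library choice are
# illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

hours = np.array([[1], [2], [3], [4], [5], [6], [7], [8]])  # feature
passed = np.array([0, 0, 0, 1, 0, 1, 1, 1])                 # outcome

model = LogisticRegression().fit(hours, passed)

# The fit finds weights b0, b1 such that
#   P(pass | hours) = 1 / (1 + exp(-(b0 + b1 * hours)))
print(model.predict_proba([[4.5]])[0, 1])  # estimated P(pass) at 4.5 hours
```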

0:48:03.0 John McDonald: The more higher ed can step away from the mills of 500-person lectures taught by teaching assistants reading off of PowerPoints, and move toward interaction between faculty and students through unique learning experiences, the better, the more life that will have in it, and the more relevant it will be. But if we rest on past learnings, the ones now being amalgamated and reproduced through GPT tools, as if we've got nothing better than that from a learning perspective, that's a big problem, right?

0:48:42.3 Dakota Pawlicki: As we're getting close to ending here, it makes me think about the various trade-offs that have to happen. Trust, as we've been discussing, happens at a hyperlocal level. In the research I'm familiar with on human trust, your trust generally only extends to about seven, plus or minus two, people, depending on who you are as an individual. At the same time, pretty much every one of the systems we're talking about here is designed for scale. The higher ed business model is designed for scale: at a lot of traditional higher education institutions, you have to have that lecture hall in order to generate the revenue to cover some of the smaller seminar courses. If I'm creating a tool that I want to bring to market, my mission is not necessarily to serve 30 people; it's to serve 30 million people, and maybe more. So, as we're wrapping up: how do we best rectify the trade-offs between how our current system is built, with its predilection to scale, and the very real need for authenticity, which might only be achieved through one-to-one relationships? Anyone got an answer for that one?

0:49:54.0 Alisa Miller: That is a hard question.

[laughter]

0:49:56.1 Dakota Pawlicki: I was kind of looking at you, Alisa. I know you were formerly the CEO of PRI as well. So, as you think about the communications landscape we're in, Public Radio International versus this hyperlocal thing, I thought maybe you had some ideas. Maybe.

0:50:11.0 Alisa Miller: Well, I'm not sure if this answers your question, but I've been thinking a lot about the importance of curiosity in this time. And I think higher ed should run toward piloting, experimenting with, and understanding AI. The more that can happen, the more there's a level of comfort, and projects that can turn into something like Sage, so that institutions begin to flex more and more of a muscle around it. And then, recognizing that human relationships are often some of the most powerful teaching methods, how can this technology perhaps transform part of the model? Right? If we're talking about the 500-person, whatever, the 200-person lecture hall, how can this transform that part and still be part of the economic reality of running a university in a way that provides the best possible outcomes for students? AI can be a part of that. It should be a part of it. It's a huge opportunity. It needs to be done in a way that's quote-unquote "responsible." Mistakes will be made. Right? And how, as institutions, can we allow for mistakes? Because that's going to be really critical.

0:51:48.8 Alisa Miller: None of these big companies… And I'm not suggesting that the University of Texas should be Google. But mistakes are made along the way as you're trying to innovate and try new things, and you need the space to allow that to happen so the learning can come from it. So, I don't know if that really answers your question, other than: I think it's a huge opportunity to move as quickly as possible, because higher ed, with the values it has, can have a unique role in how AI itself evolves. And the only way to do that is to run as fast as you can toward it, in a way that allows you to pilot it and use it, so you can learn it and incorporate it as quickly as possible.

0:52:38.3 Julie Schell: Yeah, definitely a plus-one on that. And just picking up on something you said earlier: I think the biggest challenge our students face right now is not AI. There have been so many different technologies that have come into the world. I mean, yes, this is an unprecedented technology that is going to be ubiquitous and that is going to change our lives. But personally, I think the challenge of our students' lifetime is learning how to deal with ambiguity. And the best thing we can do in higher education is help our students become the architects of their own ethical frameworks, whether it comes to AI or otherwise; to be able to make decisions in community with other people and to problem-solve so that we can address what's next. Because even though it feels like we know what's next, we don't. We need people who can deal with chaos, make decisions in the face of chaos, make decisions amid complete and total masses of misinformation and disinformation, critically analyze what's fake and what's not, figure out how we make decisions in those spaces, and make decisions while thinking about other people and walking around in their shoes. And I think that's where the traditional liberal arts come into play, and we plus-one that.

0:54:14.1 John McDonald: Yeah, Dakota, I'm just thinking about your question, too: scaling up, and scaling up to what? Right? Because the promise of education beyond high school has always been about trying to get a better job, get ahead, earn more income, and the like. And yes, while it's true that lots of technologies have come and gone with respect to higher ed, AI in my mind is quite different, because it's the first one that really gets at the core value propositions of higher ed, which are teaching and research and analytics. Right? And so, I think the rise of interest in things like certificates and apprenticeships paints a picture of a future that may look a lot more like the past, in the sense that we may have much more of what I'll call an almost guild-like system of teaching you on the job what it is that you need, with higher education institutions becoming partners of employers in that skill development.

0:55:17.1 John McDonald: And then the traditional universities almost revert to the thing they were originally designed to do, which is the deep thinking and the research and the liberal arts education. Which may be for a far smaller share of the population than they once served when they were originally designed, because the vast majority of skill-building is now potentially being done on the job, perhaps in partnership with those same institutions, as it has always been. So, the future may look a lot more like the past, as AI replaces some of the things higher ed has built up over the years as its domain, and they get reverted back to something we saw before.

0:56:00.8 Dakota Pawlicki: So, why don't we close out with a bit of a call to action. Julie, I'm going to start with you as the higher ed insider on this panel. If you could change one thing about how higher ed approaches AI, what would you do?

0:56:12.0 Julie Schell: If I could change one thing about how higher ed approaches AI, I would probably change the fear and skepticism around students cheating with AI. It's just everywhere, and there are a lot of sticks; people are really interested in sticks when it comes to academic integrity and AI. And like I said, we really want to foster environments where students are able to develop their own blueprints for how they adopt and how they don't adopt. And so, I would radically change these blanket bans on the use of generative AI and these sorts of police states around AI use, and take a more forward, transformative approach.

0:57:06.0 Dakota Pawlicki: Sounds like a good next episode. So, thank you for that idea. John?

0:57:10.1 John McDonald: Could not possibly agree more. When I was a professor over at Purdue, everything was all lit up about how we have to develop these counter-tools to figure out who's cheating. When in reality, if your course is about regurgitating a textbook and then asking people to write a paper about what you said in the textbook, dude, drop it. Right? You're going to have to level up what you're doing in the classroom and from an assignment perspective, and embrace this, right? In many ways, I don't want to call out any names or even an entire profession, but that's just, frankly, kind of a lazy approach to the impact of AI. And I think it's just a call to level up what you're doing from a teaching, learning, and research perspective, more than anything.

0:57:56.8 Julie Schell: And I know we're close to out of time, but it's the resistance to that change that worries me.

0:58:03.3 John McDonald: Yes. Yes.

0:58:03.9 Julie Schell: It's not so much AI; it's the people who just refuse to stop the passive learning transactions that are the biggest issue, I think.

0:58:14.8 John McDonald: And you can appreciate how people end up doing this. They compartmentalize. Professor is a busy job: I've got a lot of things I've got to cover off on. I've got to do research, I've got lab assistants, and then I have to teach. And so, if I can make that part really easy and simple and almost brain-dead, then I don't have to put a lot of effort into it. Right? So, I can understand the resistance. Right? But too late. That ship has sailed. And so, sorry, you're going to have to go back and take a look at how you're doing that.

0:58:40.8 Dakota Pawlicki: Yeah. Alisa?

0:58:42.0 Alisa Miller: Okay. So, I'm going to triple down on this [laughter] because I just think the focus on cheating is completely misplaced, and we can argue by analogy. Right? We were talking about this earlier: the card catalog is not being used anymore; it's Google Search. But remember when that first happened? It was, "Oh, Google Search." Or using spellcheck: is that cheating? It sounds silly now, but it wasn't silly at one point. And I actually think, and I don't know the answer to this, so this is perhaps another podcast at some point, that the idea of what original work is will completely change. The idea of plagiarizing anything, or even what "original" is, in my opinion, is going to completely evolve. And we will look back on this 10 years from now and be like, "Wow, that was silly. That was some really silly talk."

0:59:39.3 Alisa Miller: Now, I don't know what that means in terms of IP and all those other things. Right? But the fact is, it's going to be like spinning music and using beats from someone else. It's going to become a kind of quilt, an intersection between what was written, what will be written, and how it's going to be written. And that's not going to be the place to draw the line. The question is going to be: how did the world benefit from that happening? How many people used it to make something amazing happen? Not: was that cheating or not? It's completely misplaced. And most of the detection technology, as we know, is not really good at identifying cheating anyway…

1:00:26.9 Julie Schell: It's dangerous, actually.

1:00:27.9 Alisa Miller: It's dangerous. There are kids and people being affected by it, being expelled and all kinds of stuff, and it's crazy.

1:00:36.9 John McDonald: The thoughts in my head didn't come to me at birth. They were built on the inputs I got from all of my teachers along the way. Has there ever really been such a thing as an original thought? Right? And so now that process is just being hyper-accelerated, right? In bringing those thoughts into our heads to process.

1:00:56.8 Dakota Pawlicki: Well, I've got to say, this conversation went in really interesting directions, and you've given me a lot more job security, because there are a lot of other things we definitely need to dive into. I want to thank our guests today, Alisa Miller, John McDonald, and Julie Schell, for joining me. This special edition of Today's Students, Tomorrow's Talent is produced by Amy Bartner, Domy Raymond, and me, Dakota Pawlicki, with support from Matthew Jenkins and the team at Site Strategics; engineering support provided by the audio professionals here at South by Southwest. Debra Humphreys and Kevin Corcoran provide leadership for Lumina's strategic engagement efforts. Be sure to check out our other live recordings from the South by Southwest stage, and subscribe to our show wherever you get your podcasts. Thanks for listening. We'll see you next time.

1:01:37.0 John McDonald: Thank you.

1:01:37.8 Alisa Miller: Thank you.

1:01:37.8 Julie Schell: Thank you.

[music]