Episode 88 - Education and AI featuring Priten Shah
Scott (00:02.522)
Hey everybody, welcome back to another fantastic episode of your fabulous learning nerds. I'm Scott Schuette, your host, and with me, has he been a good boy? I don't know. Let's check it out. Dan Coonrod.
Scott (00:20.342)
Oh, Dan, you've been good this year. That's amazing.
daniel (00:24.482)
darn tootin'. What's up Scott, how you doin'?
Scott (00:27.494)
I'm doing all right, I'm doing all right. So, whew, we got through that, we got through the holiday mostly, right? And now we've got the new year, so I've got a date this week with my Xbox, and the couch is what I've got this week. Absolutely, how about you? How are you doing, sir?
daniel (00:36.363)
Mostly.
daniel (00:39.79)
Ooh! Haha! What are you playing?
daniel (00:48.846)
Oh, I'm fair to middling. And so I ask, but I'm super curious, what are you playing? Ah, you got me! You got me!
Scott (00:56.314)
You were gonna ask me not to play it? Are you feeling okay, sir? Are you feeling all right?
daniel (00:57.074)
Meh. No, I know better than that. I mean, you know, just saying, it's the holidays. What are you playing? You said you're getting on your couch with your Xbox.
Scott (01:09.546)
Oh, oh, I have, I'm a Diablo nerd. So I've got the new Diablo. Yeah, yeah, and I've got the, I was playing through two and then I got the X and then I'm like, let's see what this did and like, oh my gosh. Yeah, you know, I love that. And then I also have, and my wife is watching, there's a horror adventure called the...
daniel (01:14.814)
No! Yeah, yeah, yeah
Scott (01:38.47)
The Quarry. You heard of this? Oh, it's awesome. It's like an 80s slasher video game that you play along with the camp counselors at a camp where there's a murderer running around. It's quite great. And.
daniel (01:41.343)
Yeah? No.
daniel (01:52.663)
I really want to play this.
Do you know, I have been playing a game with friends of the show, Joey Ackland and Ruben Corbett, called Lethal Company. And it's this little tiny, I don't wanna say little tiny game, I feel like that's derogatory, but it's a game where you play a bunch of workers, and you have to go off into incredibly dangerous situations and find scrap metal and scrap stuff and bring it back. And it's kind of
dark comedy survival horror. Zeta can speak to it. She's played with us a couple of times as well. And it is a hoot. And I definitely think that just in between the breaks, I'm gonna be playing some more of that.
Scott (02:41.166)
Awesome, that sounds great. I wanna know more about this game, so let's go ahead and waste no more time and bring in the Duchess of Design. You love her, Z-Girl's here.
Scott (03:03.794)
Seeker all!
Zeta (03:05.395)
Scott, how you doing?
Scott (03:07.162)
I'm doing okay. My transition could have been better, but that's all right. So you're playing this game?
daniel (03:10.862)
Yeah.
Zeta (03:11.359)
Oh, it was fine. Yes, yes, I got talked into it kind of against my will because I don't like scary survival horror kind of games. I just don't have the nerve for it. But yeah, they're like, yeah, this is gonna be fun. You should play. And I'm like, yeah, okay. It's only $10 on Steam. I'll give it a go. And it's terrifying. It's like low poly enough to like...
daniel (03:32.862)
Ha ha ha!
Zeta (03:35.483)
make it kind of scary, and all the different kinds of monsters that you come across have their own unique ways of seeing you and tracking you, and it's fun. It's really fun. I'm glad I jumped in.
Scott (03:47.662)
Okay, we'll have to check that out. Okay, so I'm gonna...
daniel (03:49.862)
Yeah, yeah, I want to drop one thing before we transition over. I'm going to drop it and we can talk about it on another show. I just want this to work its way in. I think one of the best trainings ever made is the first level of Super Mario Brothers.
Zeta (03:57.105)
Mm-hmm.
daniel (04:07.586)
We'll talk more about that, but I just want to drop that now and let it percolate. No!
Scott (04:08.962)
Not Oregon Trail. You're comparing this to Oregon Trail, which was fantastic.
daniel (04:13.802)
That's not. Oregon Trail is great, but not even close. Best example of good training. First level Super Mario Brothers.
Zeta (04:15.041)
Oh
Zeta (04:22.54)
Mm-hmm. Interesting.
Scott (04:23.674)
Well, interesting. So I'm going to need to figure out what our fabulous guest, who did not come here to talk about video games, by the way, has to say about all this. And we're going to learn all about our fabulous guest in our very own segment that we call What's Your Deal?
Scott (04:46.246)
Priten!
Priten Shah (06:05.156)
Hey Scott!
Scott (06:06.522)
Hey, what's your deal, my friend?
Priten Shah (06:09.628)
So my deal right now is to help teachers navigate the age of AI and we're doing a lot of different things to help them do that. We offer PD, I have a book, we offer courses, and then we have really important conversations about what the large-scale reforms that will be necessary will look like. And hopefully we can have some of those conversations today.
daniel (06:27.246)
Can I jump in real quick and just make sure that like, I understand you said PD and I'm acronym dumb. What do you mean when you say PD?
Priten Shah (06:34.496)
Yeah. Professional development.
daniel (06:37.73)
Awesome!
Scott (06:40.078)
Yeah, that's cool. So how did you get into this? I think that's the important part. So I get that we're going to have a great conversation about all this stuff, but what led you here? Where did you get your start? How did you get here?
Priten Shah (06:53.404)
Yeah, my path's kind of really weird, so this is gonna take a bit. But I started my first ed tech venture in high school. I was a high school sophomore. And we tried to develop an online tutoring platform to help students practice for the New York State Regents exam. And we kind of worked on that for four years. We got some cool grants from Macy's and Walmart to get that going. And then kind of used that initial experience to work on some different interventions in education. I spent most of college thinking about
Scott (06:56.102)
That's fine.
daniel (06:56.654)
That's awesome.
Priten Shah (07:22.168)
education philosophy as a philosophy major. Took some time to work on projects on mastery learning, building custom solutions for folks. And then during the pandemic, we did a lot of work with institutions navigating the COVID-19 pandemic, transitioning folks to online learning, trying to figure out innovative ways to preserve extracurricular activities, while I also got my master's in education management and policy. So a lot of various exposures. Throughout this time, I also taught in Korea for a little bit, taught in a special ed classroom in New York.
I host a summer camp every summer. And so all of this kind of culminates in trying to think about, what does education need to look like long run to best serve our students? And that's kind of the theme amongst all the projects. And AI has always been, in the back of my head, this really cool thing that I was really excited about. In college, one of my computer science projects was working on a language learning tool, and I first looked into the large language models then. And then, you know, I'd be with one of my clients and I'm like, oh,
wait until we have AI to do this part, and we can so much better serve our students in this way and this way. And then when the technology became more accessible about a year and a half ago, a little bit less than that, we were really excited. We were like, okay, a lot of things that we've been dreaming about for the last few years can actually be done overnight. We can actually implement these things into the platforms we're already building. And at the same time, we started hearing folks freaking out.
You know, some of our clients from the COVID-19 pandemic were like, oh, what are y'all doing in terms of thinking about this crisis? And I was like, crisis? What, where's the crisis? And that's kind of when we started to really dig deep into, okay, folks are nowhere close to as excited as I am right now, and we kind of need to help folks figure out how to deal with the short-term problems this is creating, which are real for a lot of folks, before we can talk about more of the fun and cool long-term potential.
Scott (09:12.734)
Great experience, love it. And I'm super excited about our topic today because everybody's talking about it, right? So how does AI affect what I do? And for those of us that have kids, and my kid is grown up, but everybody around is like, oh my gosh, how am I gonna talk about this with my kid? How is it gonna affect how they learn? So this is super awesome. I'm super excited to get into it.
So without further ado, everybody, let's dive into our topic of the week.
Scott (09:52.134)
This week we're discussing the most important conversations to have with our students in the age of artificial intelligence, in the age of AI. Wow, that's a big topic, pretty wide topic. You know, if we're going to start anywhere, I feel like the best place to start is with the myths, right? So what's the most pervasive myth
surrounding AI and education. So if we think about it, how can we help people feel better about it? What's the biggest myth that's out there and what's the truth behind it?
Priten Shah (10:34.072)
I don't know if this is going to make a lot of people feel better about it, but I think the biggest myth that we've been encountering lately is that AI detectors work. They don't. There is no way to figure out if a student has used AI in their work. Should I... There's like a massive ambulance there. Massive ambulance, loud ambulance, but...
daniel (10:51.575)
Yeah.
daniel (10:54.882)
Hahaha
Scott (10:55.723)
Let's just do a solid pickup on three. Ready? Three, two, one, go ahead.
Priten Shah (11:00.388)
There's no way, really, to figure out if your students are using AI in their assignments. Lots of folks are trying to come up with band-aid solutions or paying for various detectors, even though a lot of universities are throwing lots of research out there showing that they don't work, or don't work universally or very well. We've talked to some teachers who are finding ways to record a student's screen while they're writing an essay, all sorts of interventions that seem really band-aid-y. And the minute one new solution comes out, we see a TikTok video of some student explaining to the rest of their fellow students how to get around it. And so I think that chasing this AI detection...
is definitely chasing the biggest myth, that there will be some way to detect AI. And I think this is where the first conversation you need to have with your students comes in: we need to rethink what cheating means in and outside of the classroom, and we're going to have to start having conversations about academic integrity and honesty, the value of those things, and why we care.
daniel (11:51.926)
You know, as ChatGPT and some of the other large language models have risen up and become more and more a part of our daily lives, I've heard the same thing. Lots of people talking about, oh, here's this way to detect AI. And I've seen even more the rise of, oh, none of this was written by AI, none of this was generated by AI. And, I mean, that's the goal. That's something the people behind AI are working towards:
making this tool nearly, if not totally, indistinguishable from human-generated content. I've always found it kind of perplexing when people are like, oh yeah, here's this AI tool to detect if people have used AI to write. How? Are you looking for just bad writing? Because I know plenty of people who are bad writers. I've heard a couple of teachers say that the thing they look for is a jump in grammar skills, like they'll have students who go from, you know, oops, I forgot punctuation, oops, I forgot capitalization, to all of a sudden being brilliant grammarists overnight. I don't think that's a word, but I'm gonna run with it. And then they figure, oh, well, they're probably using AI. But that's the only detection method I've heard that sounds like it makes any sense.
Priten Shah (13:18.348)
Yeah, and even that. It's really fascinating to see the amount of information that spreads on TikTok among students, especially when it comes to cheating. Folks are learning how to give ChatGPT a sample of something they've written already and ask it to mimic the same writing style and make the same mistakes. And it's just such a terrible game to be playing on both ends: educators trying to play this game of how we're going to catch our students, and students trying to figure out ways to get around it.
It really starts to make the entire educational enterprise seem a little ridiculous at that level.
Zeta (13:50.479)
Yeah, because like at the oh, sorry, go ahead, Dan. Okay, if I can just jump in real quick. When it comes down to it, the reason why these students are writing is to show their skill and develop their skill and become better at it, right? And in a way using AI, I would argue that it can be like a tool to help them know what good looks like. So they can have like a model to be able to emulate. So could AI kind of help in that sector?
daniel (13:51.09)
I... Go ahead, go ahead.
Zeta (14:19.847)
like help students become better writers. Is that a?
Priten Shah (14:24.448)
Yeah, and the recent data supports that. There are good surveys coming out after this fall that show that most students are not using it to cheat. Most of those cheating rates have not dramatically gone up in the last year. There's a lot of fear about it, and the capabilities and the potential to cheat are really high, but students aren't using it that way. And what we're hearing from students directly is that they just wanna know how they can use it productively. And there are productive uses, right, we all know that. And so I think they wanna hear from their teachers: oh, can I use it to brainstorm the topic for my essay?
Can I use Perplexity to do some research on some good sources? Can I feed in an argument I'm making and ask it to point out the logical fallacies in there, or point out a counterargument I can make? And I think that's when we can start to have amazing conversations with students about, here's how you can use this stuff to take your thinking and your writing to the next level. But I think right now, everybody's too scared to even talk about any of that. Folks are like, oh, either you can't use it at all, or we're just not gonna talk about it. Those seem to be the two most pervasive approaches.
But hopefully it gets closer to that, right? Because I think it can really help students do better.
Zeta (15:27.431)
Yeah, I agree, totally. It could be used as a tool.
Scott (15:32.798)
In my career, I had to make a decision as to whether or not I wanted to be a good learning designer and educator or a cop when it comes to learning. I developed learning for a major retailer, right? And the big thing that was going on was, well, if they're taking the quiz, how do we know that there are not five other people right behind them also taking the quiz?
And I piped up and I said, well, isn't there learning going on in that situation? Isn't there some cross-functional, social learning happening? And one of the things that we had to come up with, now this was a long time ago, right? But one of the things we had to come up with is, hey, we understand that that's going to exist. But the process and the understanding and the spirit of what we're trying to do was far more important
than trying to police everybody and ensure that no one was cheating. So I find that really interesting because you're right. The first thing that I heard about when we started talking about AI and ChatGPT was, oh my God, kids are going to cheat, kids are going to cheat, kids are going to cheat. And then I also heard that fallacy of, ah, no, teachers have got this tool that's going to tell them whether they're cheating. And I think it's just a matter of, and maybe you can help us understand, where is that focal point, from a conversation perspective with our students, about,
Priten Shah (16:42.64)
All right.
Scott (16:59.206)
Hey, we live in an AI world now, because we do. This is how we're going to approach learning moving forward, and it's going to be better.
Priten Shah (17:09.231)
Yeah, and I think this comes down to answering the question that I think we all have heard or asked ourselves in the classroom, which is, why are we learning this? I think that fundamentally, getting to the point of answering that question for our students, and helping them answer it for themselves even, will work way better than trying to figure out how to record a student's screen while they write the essay. I think we need to do a better job of explaining to students exactly why these things we think are important actually matter. And, you know, I'm still a big fan of teaching writing.
I think the process of learning how to write can dramatically change how someone thinks and their ability to articulate their thoughts to another human out loud. But students right now, all they see is they're up at 3 a.m., they have an essay due the next morning, and they're worried about their grade. And I think some of these conversations are going to cause us to have to shift those kinds of pressures we're putting on our students and get them to see, no, this essay I'm writing is so that I can better think through this process, come up with my own arguments, and have a conversation with my peers the next day or with my teacher the next day. And I'm hoping, you know, this is...
This is all idealistic, like if everything goes the way it needs to go, that is what I would hope the conversations in our classrooms look like. There are a million and two reasons why that's going to take a while and why that's difficult. But I think that really will be the only long-term solution to getting students to see, what's going on here? Why are we still doing this? Why am I even bothering? And to get them to buy into the entire idea of learning these things.
daniel (18:36.711)
I think we're at such a point where the education system, as we know it, has to, and I think this is what you're saying, it basically has to change, has to adapt. These tools are coming. I'm old enough to have been told not to bring a calculator into math class because it's not like I'm going to walk around with a calculator all the time. And, well, oops.
We all do now. It's not a fair question, but let me ask you: what do you think the education system looks like in 10 years? Just a piece of it. Do you think that there's still this bulwark of anti-AI sentiment, or do you see it integrating more? And if you do, how are we measuring learning when so many of the common ways that we measure learning are now rendered moot?
Priten Shah (19:33.896)
There's a lot of different questions there, and so I'll try to go and see if I can cover some of those. But no, but this is the fun stuff, right? I think this is the conversation that we need to be having. So I hope that in 10 years, the anti-AI stance, I don't think it's going to last much more than a year. I think the pervasiveness of it across all tools, all industries, you're not going to get around it. It's like anyone taking an anti-internet stance these days. It's not really feasible or practical in any sense of the word.
daniel (19:35.946)
Yeah, I'm so sorry. I got excited. Pick one.
Priten Shah (20:02.928)
except maybe in some niche, isolated context. And so I do think more schools will have embraced AI in some way or another. I think we'll see really problematic ways of embracing it, so we'll probably see some surveillance of students, we'll probably see more problematic punitive, disciplinary uses of AI. We'll also see some really cool ways to help students take whatever learning journey they wanna take. And so one of the fun things that I hope happens is that students get to explore their interests along with curricular goals. And so...
If you really have to learn about the Supreme Court, maybe the AI system can show you a Supreme Court case that's relevant to your interests and still help you learn how the Supreme Court works, and not everybody has to learn the exact same cases for that exact curricular goal. But I also think that there's going to be a difference in how we even think about what we're teaching and why we're teaching it. And I don't know if that's going to take 10 years. I think this is going to have to happen much earlier, and I hope it happens much earlier, because otherwise I think we're doing a disservice to our students.
But I think we're gonna have to start thinking about, what is the end goal? We have never been able to fully predict what the world that our students will inherit looks like, but there's been something in place, right? We've had some idea about what the world would look like, you know, four years from now, eight years from now, depending on what level you're teaching. But I think this is the least predictable future ahead of us right now. I think that we have no clue what careers will exist. We have no clue
what kind of industries will survive, and which ones will become really popular. And given all that unknown, I think framing education as a career preparation phase of life is gonna start to fade away. I'm hoping that we can start thinking about education as, how do we help students learn how to be happy, and learn how to talk to other humans, and interact, and make their lives robust, no matter what they end up doing? How can we make them lifelong learners, and not necessarily, you know, hone in on, you need to build this skill set so that you can graduate and get a job. And yeah.
Zeta (21:57.275)
Yeah, no, I love that. Yeah, using AI to help shape the level of learning like, hey, this is how you learn. This is the first steps you take. This is how you get from point A to point B, right? Like, I love that. And the way that our education system can then, rather than teaching a skillset, teach you how to get there. I love that. I hope maybe in the future, future educators will see that and use AI as a tool.
daniel (21:57.998)
I love that. I love that. Yeah.
Zeta (22:27.535)
and be able to tailor experiences to our learners and help them succeed rather than, oh, let's get them out in the job force.
Scott (22:37.482)
One of the things that I think is pretty cool, as I just start to really absorb everything that you're bringing to the table, is that there's more questions than there are answers, which is super fun. Learning nerds like me get really excited about that. That's really cool. But the possibilities are pretty endless as far as how our kids will learn moving forward. One of the things that I find really exciting about AI tools is that they've given me
the ability to do things that I could never have done before. I'm not an artist, but in a matter of moments, and granted, DALL-E 3's got a way to go, nine times out of 10 the output is nowhere even close to what I asked it to do, but I can still create things, and then the creative part of me gets really, really excited. So this opportunity of, hey, use this tool to create something, and you think about...
You think about social studies, things like that. They use these tools to create a video of Abraham Lincoln talking about this or that, and the next thing you know, the learning and the process of utilizing those cool tools, in my humble opinion, it reinforces the learning but also teaches them valuable skills that they can use moving forward. You know, and a lot of us, I know for me myself, I'm self-taught in almost everything.
I didn't go to school and learn how to do PowerPoint. All of a sudden it was there and I had to figure it out, right? So those kinds of things, I think, are super opportunities that we can talk about not only with our educators, but with our students as well.
Priten Shah (24:15.896)
Yeah, there's also a really cool thing there. Let's think about how we can interact with the world differently. I was, you know, walking through some pyramids outside of Mexico City, and I had ChatGPT, the voice version, in my AirPods. And I was able to have a conversation and ask questions like, oh, I'm looking at this, what might this be? Or how did they do this? And I was like, this is an amazing thing. Normally if I wanted to look these things up, I would have gone back home and checked them on the internet.
If I was lucky enough to have something accessible, I'm still on my phone reading, looking down at it, having to put in the search terms, looking at it while I'm not fully present. But here I got to just look around, walk around with these little AirPods in my ears with ChatGPT talking to me. And I learned so much. I learned way more than I ever have at a historical site before. And I hope we can show students that there are really good reasons to learn these things, that learning can be enjoyable.
And it's so much more accessible. So if we can show them how to use these technologies for things like that, where they get to explore their interests, maybe they're not interested in Mayan ruins, but whatever their interests might be, I think there are really cool ways to start thinking about, how might I use this to learn about the things I want to learn about, do the things I want to do, that I might not have the skillset for? I think those are all really interesting conversations to start having with students.
Scott (25:33.874)
So you had your own Jarvis. You had your own personal Jarvis. That's amazing. Okay, I'm sorry.
Priten Shah (25:36.789)
Yeah.
daniel (25:38.146)
Hehehehe
daniel (25:43.942)
You know, for years I've been in corporate L&D, and I've always heard that leaders and L&D professionals get excited when you talk about on-demand learning. That's been a buzzword, and I've seen lots of attempts at it, but I'll be honest, I've never seen a successful attempt. I've always seen, hey, here's this library, and we've partnered with this company, or this company produced like
1,100 courses, and students can just go and get what they need, and then students don't. You know, it's like, oh, I can go click on this and do it, but one, I need time, I need the drive. You know, I need the will, skill, and knowledge, and I don't have the will, so I'm not gonna worry about it. It's not being given directly to me, it's not a directive, so I don't know what I don't know. But I feel like AI entering into the educational space
gives us that ability to produce like real, true, adaptive learning curricula and agendas where like I can come in and be like, it can be like, hey, what do you know about this? And I can type up, I know this and I know that, I know this. And it's like, okay, great. I'm going to send you these, I need you to read these articles and I want you to like come back and talk to me about these things. And we can have like truly adaptive, individualized learning plans.
And I hope the same thing happens in the public education space. Um, I was terrible at math, just the worst. And, you know, I had lots of great and amazing teachers try over and over and over again, but it wasn't until I got out of school and started looking at like the things that math does, the things that you can do with math, like the weird stuff that I was just like, Oh, math is cool, you know, and you know.
But how could my teachers at the time have known that would have been the thing that would have gotten me excited about math? Well, they wouldn't have known because I didn't know. But I feel like an AI would have been able to suss out, like, hey, you should look at these things. And if you want to get there, here's the path you have to take to get there.
Priten Shah (27:55.116)
Yeah, you know, I wonder sometimes when I'm sitting here thinking about it, from the perspective of, oh, can we get students to think about learning in that way, right? Even something like math. Oftentimes when someone asks, why am I learning this, the conversation ends up being, oh, here are the careers that it opens up, here's what classes you can take in the future. But if you start telling them, you might see the world differently, right? After you learn a certain level of math, you might walk around seeing numbers on a screen differently, seeing
shapes in the environment differently. And I think that's true for almost everything we learn. I think that there are intrinsic reasons to learn these things that can change how we interact with the world, that have nothing to do with tangible career goals necessarily. And even if AI can do all of these things, there are still really good reasons for us to learn how to do them, so that we can feel that joy, that excited joy that you're talking about, after having learned and being able to recall those math concepts. That to me is, again, the ideal environment in which we'd be teaching students.
daniel (28:53.126)
I have a follow-up that's maybe an unfair question, but it's a fear and a worry. And maybe this belonged when we were talking about pervasive myths, but I just read a study, and I'll find the link to it so we can put it in the show notes, where basically they talked about students who had, this doesn't sound terrible, who had
maybe not the best facilitator, but a facilitator who was like pushing them, not answering questions, not offering up material, but making them go dig for answers versus students who had like amazing facilitator, an amazing facilitator who was like giving them information, handing it to them. When they asked the students like, hey, how did you feel about like these two classes, the students who had the rude professor, the
maybe-not-the-best-facilitator professor, were like, ugh, I didn't like it. I didn't have a great time. I don't feel like I learned anything. And the students who had the facilitator who was there and was a font of knowledge and was able to open up and have answers on hand said, I loved it, I wish all my classes were like this, this is great. But when they were tested, the tests showed that the students who had to struggle
had better retention, they learned more, and the students who had the professor who was just offering up answers and was there to support them learned less. Do you think that's a danger in the AI space, where we will have a bunch of ready-to-go, ready-at-hand knowledge, and because of that, we'll devote our brains to other things?
Priten Shah (30:37.908)
Yeah, this is a real concern, I think, and we can already see some signs of it. There's also really good evidence about our spatial reasoning post Google Maps. We definitely have way worse spatial reasoning than our ancestors and even our grandparents did. And just thinking it through, as we rely more and more on technology to do certain things for us, our own skill sets in those things do get weaker. And I think this is where we have to be really deliberate about when we introduce these technologies and to what extent we introduce them.
daniel (30:47.606)
Yeah.
Priten Shah (31:09.229)
The calculator is a great example here. My go-to example is that kindergartners aren't given calculators to do basic arithmetic, but you wouldn't stop a 12th grade calculus student from using a calculator to do basic arithmetic, because they've already had the chance to develop that skill set, at least initially. But I think what you're talking about, and Google Maps is that example, right, is that this isn't just about the schooling system. It's beyond that. It's about us as humans, whether within a school context or not,
our ability to do these things, and what that means for us. And I'm hoping that there are some benefits from this too. So, for example, you have to ask really good questions to get really good output, and that might be part of the learning process for a lot of folks; asking questions is an important part of that process. And I'm hoping that there are other skillsets we're developing that might compensate for the skillsets that get weaker. And so we might become even more robust thinkers, we might become even more robust using language,
if we can have some aids and tools that kind of get us there. And then, rightly, there's the question of which of these things are necessary and which are not, like spatial reasoning. Sometimes I'm still walking around the city and I'm like, okay, maybe I kind of wish I hadn't used Google Maps the last time I was here, because then I could probably navigate a little bit more easily. And so then I might go back and not use it. But I think those kinds of things are going to be decisions that we have to empower our students to be making. So they need to know themselves well enough to know when they shouldn't use it as a tool,
because it's affecting them in some negative way. And all of this requires such high levels of introspection and metacognitive skills, but I'm hoping we can focus on developing those, because I think that's what our school systems can maybe do for us.
daniel (32:44.526)
I love that. You know, I just recently had to take a trip to Pennsylvania, and I hadn't been in more than a decade. And it was funny, because I remembered the route to get there. I hadn't driven it in a decade, and I hadn't used Google Maps to get there the last time I drove there. And so it was fuzzy, but, okay, I think I take 514 when I get to Maryland. Yeah, okay. All right, cool. And I was able to navigate to my relative's house
almost with no use from Google. But just today, I had to get across town to get to a restaurant, and I've been there four or five times. And I'm like, man, how do I get there? I'm just going to turn on Google Maps just to be safe. You know? It's a great example.
Scott (33:27.782)
Yeah. Although, you know, Julia Phelan would remind us that learning needs to be hard, Daniel.
Scott (33:37.498)
There needs to be some challenge in order to make learning stick. So I hear you right there. I think that that's always been true no matter what our technology is, right? The easier it is for me to get to competency and mastery, I feel like there's some detriment in that, to be honest with you. So I think that's cool. One of the things that I wanna chat a little bit about is this idea of ethics, right? So...
daniel (33:38.34)
I... yeah.
daniel (33:41.74)
Yeah.
Scott (34:06.29)
We're walking into this really unknown space. And in that space, I think ethics is an important part of the conversation. How do you frame up ethics to both sides of the audience when we're talking about AI?
Priten Shah (34:22.464)
Yeah. This is where my philosophy background comes out really strong. I think there's so many different angles to think about ethics and AI right now. And I think even the plagiarism cheating angle is an ethical issue. There's integrity issues there. Thinking about copyright and intellectual property is another interesting conversation here. But also there's ethics of the biases that these algorithms are perpetuating, the data sets that go into them.
And all these conversations need to happen, right? These students need to know what kinds of things they're exposing themselves to, how to be better critical consumers and eventual producers of these technologies. These students will be the ones shaping the future of technology, and so we need to show them what the harms of perpetuating societal biases within these systems are. We need to think about the ethics of the environmental costs of these technologies, right? The amount of resources they use is enormous. And so as we're facing a massive
climate crisis, we need to navigate what decisions we're making about what kinds of costs we're willing to take on to make this technology more accessible and widespread. But there are various ethics angles here that also relate to each industry. And so there might be questions about facial recognition, or mortgage loans, or...
for diagnosing a patient, or how much AI should be in a classroom. All these ethical questions will come up: do we implement UBI or not, right? I think there are so many important questions that AI forces us to ask. And I think this is where we need to just start having these conversations with our students so they learn how to have these conversations. It's not a matter of, oh, do they know the ethical issues relevant to facial recognition,
But it's like, do they have enough practice thinking about these things critically enough? Do they know which forms of knowledge to call on? Do they have the critical reasoning skills to be able to think about this the next time a new ethical issue comes up? And I think unfortunately the answer is that we don't do that at a robust level right now, at the K-12 level at least. And hopefully we can start sitting down and thinking through, okay, our students really need to be really critical thinkers when it comes to ethical issues.
Priten Shah (36:33.104)
Because some of the things that will always be in human territory will be the decision-making that we do as a society, right? The chances that we, in the near future, start offloading major ethical decisions to AI algorithms are probably low. We'll probably rely more heavily on humans for that. And so we need to make sure that our students have the skills to do that and can engage in those conversations.
Zeta (36:55.263)
Definitely, definitely. And one of the other things about ethics and AI is, when we do the machine learning, we have humans rating the outputs, and they bring their own biases. So having a larger sample size of whoever's doing that and making sure that you cover all the bases, I think, is very important too.
Priten Shah (37:15.804)
It's also about the labor that goes into it. Who are those folks? Are they being compensated fairly? All those things are also really important ethical issues that these AI algorithms bring up.
daniel (37:19.724)
Yeah.
Zeta (37:24.979)
Definitely.
Scott (37:27.322)
Well, we're getting to that time where we're going to have to start thinking about wrapping up, but before we do, Priten, I wanted to give you the opportunity: is there anything that's really, really important when it comes to this discussion that we haven't had an opportunity to talk about that you wanted to share with our audience?
Priten Shah (37:45.768)
I think that maybe the summation of all this might be that taking some time for all of us to think about why learning can be fun and useful and intrinsically good is going to be the key to answering a lot of these questions for us in the future. And so, I know I gave that example of using AI in my ear to navigate the ruins, but I didn't use it for Spanish, right? I tried to use the level of Spanish I had from college
to navigate, because there was a certain joy felt by being able to talk naturally with another human, being able to have that conversation, maybe in broken language, but not have it mediated by AI. And I think these are the kinds of things we need to start having really open conversations about. Sure, AI can write a poem, but when you read it, versus a poem you read by a Holocaust survivor, right, they cause a different emotional response in you. And so
I think those kinds of conversations, about why learning and consuming this kind of information and knowledge can be fun, why it can be intrinsically good, I hope that's where we're headed and that's what we can start doing.
Scott (38:56.082)
Great discussion, timely discussion, super stuff. Thank you so much. Priten, could you do me a favor? Could you let our audience know how they can connect with you?
Priten Shah (39:06.168)
Yeah, we would love everybody to follow our Instagram at Pedagogy Cloud. We post lots of updates there: news updates, prompt tips, everything you need to know about how AI influences education. And then definitely check out the book. It gives you a wide overview of all the different things that you can start thinking about, some of which we talked about today. It's called AI and the Future of Education: Teaching in the Age of Artificial Intelligence.