Charles Fadel: Future Proofing Education
How can we effectively integrate AI into education to prepare students for the future?
What key competencies and character traits must education systems prioritize in a rapidly evolving technological landscape?
This episode features Charles Fadel, founder of the Nonprofit Center for Curriculum Redesign and author of "Education for the Age of AI." Charles has taught at Harvard, MIT, Wharton, and the University of Pennsylvania, and has held executive roles at Cisco Systems. He brings a unique perspective from his extensive experience in both technology and education.
Charles discusses the parallels between AI advancements and the early days of the internet, and the current state and future potential of AI in education. He provides an overview of his four-dimensional education model, which emphasizes knowledge, skills, character, and meta-learning, and underscores the importance of purpose, autonomy, and identity in motivating students.
Charles also explores the necessity of balancing knowledge with skills and character development, highlighting the role of project-based learning in making education more relevant. He addresses the impact of university entrance exams on K-12 education and the need for modernized assessments, as well as the cultural and contextual differences in education systems globally. Furthermore, Charles delves into the importance of enabling students to apply knowledge across different contexts and disciplines, and how AI can aid in recognizing patterns and facilitating knowledge transfer.
He discusses the evolution of traditional disciplines and the need for interconnected learning approaches, balancing didactic education with project-based learning for holistic development. Finally, Charles emphasizes the need for urgency in education reform and encourages educators and leaders to drive local change with a mindset of incremental improvements. Join us as we explore the future of education with Charles Fadel, delving into the integration of AI and the importance of a holistic, human-centric approach to learning.
Get in touch at hello@thelearningfuture.com; and find the transcript at our website www.thelearningfuture.com.
[TRANSCRIPT AUTO GENERATED]
Louka Parry (00:40.43)
Hello everybody and welcome to The Learning Future Podcast. I'm your host, Louka Parry. And today we're very fortunate to have just a fantastic thought leader in the global education space. I have been following his work for quite a number of years now, and I think it is world-class, especially as we look at this moment with technology kind of converging with what we do in schools, in universities, in companies and in the workforce. It's my delight to have Charles Fadel here,
who is the founder of the Nonprofit Center for Curriculum Redesign in Boston. Its framework, which we will talk a lot about today, is available in 23 languages. And he is the lead author of the recent book, Education for the Age of AI. A little bit more about Charles. He has taught at Harvard's Graduate School of Education for seven years, and also down the road in Cambridge at MIT, and also at Wharton and the University of Pennsylvania.
He was a senior executive in technology companies for 25 years and founder of Neurodyne AI, formerly a global education lead at Cisco Systems and an angel investor with Beacon Angels. He's been awarded a BSEE, an MBA and seven different patents. Charles, what I love about you is the perspective you bring to education from someone that has really been working in technology and 21st century skills now.
for a couple of decades. In fact, I don't think it's too bold to say you were one of the pioneers of the 21st century skills articulation through your work with the OECD. But thank you so much for joining us for the Learning Future podcast.
Charles Fadel (02:27.163)
Thank you so much, Louka, for hosting me. And you're right, the work on 21st Century Skills goes back to the mid-2000s, so 2005 through 2009, and the book I wrote at the time, called 21st Century Skills, well, that has become a moniker used worldwide, so that was very rewarding. What is less impressive is to what extent all these recommendations have been put into practice, and that's what we'll be discussing today.
Louka Parry (02:56.878)
Brilliant. I'm very curious about where you see us in our work in global education systems and what might be needed. My first question, though, is always a personal question about the beautiful intrinsic act of learning itself. What's something that you personally have been learning recently? What's something that's been on your mind and in your field of awareness?
Charles Fadel (03:23.993)
Well...
The analogy between AI's ramp up and the early days of the internet, at least from a consumer perspective, I'm not talking about ARPANET, where every day there was a new discovery. One day you could stream audio, the next day you could stream video. It was just mind blowing. And I constantly felt out of breath. It was just happening so fast. And I have the same impression. I've had the same impression since November 30, 2022.
Louka Parry (03:42.222)
Mmm.
Charles Fadel (03:54.943)
which is, as everybody knows, when ChatGPT came to the fore. And so we've seen this beautiful acceleration, but like all exponentials, they come to a saturation point, and I think we're starting to see a saturation point if you look at ChatGPT-4, well, GPT-4 Omni, where
Louka Parry (04:20.302)
Yes.
Charles Fadel (04:21.443)
not GPT-5. They've gone laterally, meaning they've added lateral features like multimodality and so on, but they haven't grown to another trillion parameters or tokens or whatever. So that's where we're starting to see a saturation of the traditional S-curve that we find in technology. And I'm, in a sense, glad to see it because now we're going to be transitioning from the science phase into the engineering phase. And we can talk about that.
Louka Parry (04:28.364)
Mmm.
Louka Parry (04:39.95)
Mm -hmm.
Louka Parry (04:49.058)
It's wonderful, you know, it's wonderful to have your reflections on the internet. And I think it's a parallel that's been referenced a lot, you know. I can just remember, you know, dial-up modems and kind of using a chatbot on the internet, you know, as a young person at school. I'm pretty old, man. I just look young.
Charles Fadel (05:09.851)
Come on, you're not that old, Louka
Louka Parry (05:15.406)
Now, it's like this incredible moment of explosion that of course is kind of too big to know, Charles, if you use that framing. And so, when you try to keep up with all of it, it can be pretty exhausting. You know, I probably already follow too many newsletters of AI experts sharing the next tool that comes out weekly. But there is this incredible energy and I think a real sense of optimism,
but also in some cases a real caution as to where this technology will take us. Because this is moving very quickly, of course, there aren't any guardrails. It's kind of, in some ways, uncharted terrain. I would...
Charles Fadel (05:53.403)
Well, I'm hoping that we have learned from the debacle with social media. And we're going to be a bit more guarded this time around, although it's still time to regulate social media. I'm just not sure why that's not happening. Just saying.
Louka Parry (05:57.742)
Yes.
Louka Parry (06:06.67)
Yeah, yeah, I mean, The Social Dilemma and our friends at the Center for Humane Technology, I think they've done such a wonderful job at articulating, you know, what are the benefits and actually what are the negative impacts. And I think Jonathan Haidt also wrote a book, The Anxious Generation, which came out only about six weeks ago as we record now. And again, it has some really powerful insights about the great rewiring that social media in particular
has done to our young people, and lots of discussion and debate around that too. It's so interesting. So what I'd love you to do, because the people listening to this are educators, largely, you know, they might be leaders in schools, they may be classroom teachers, they might be innovators working in the ecosystem in some way, people working in universities, I'd love you to kind of do two things. Number one is give us a sense of the framework, which I'm holding up if you're watching it on YouTube,
on the front of the book Education for the Age of AI, this four-dimensional education model, which has so inspired me in my work, Charles, as I was saying before we started recording. So give us a sense of the framework, but then also why it becomes central in this age of AI. Why is it no longer sufficient that academic achievement be the main kind of contestation in a competitive system, a ladder of knowledge?
Why do we need to add skills and character to that, and the meta-learning construct as well to wrap it all together? And then the idea of the motivation sciences even: purpose, autonomy, identity. Who are we? What do I want to do? Two big questions, but give us a sense of the central framework, because I feel it just holds everything together, Charles, in this rapidly moving world.
Charles Fadel (07:54.299)
Well, thank you. So yes, first of all, the framework was created with an eye towards a changing future, not specifically AI, although my center has been involved with AI since 2014. And I personally have been involved with it since, I will admit my age, 1989, at a time when we could only compute three layers of neurons, and now we can compute thousands of layers. So the processing power here has improved by a factor of 100
Louka Parry (08:13.806)
Fantastic.
Charles Fadel (08:24.205)
million in 30 years. And that's why we can do all these amazing things like instantaneous translation and on and on and on. That has allowed for massive databases, you know, because of cost of memory having dropped by a hundred million fold. So now we can have enormous databases that we can mine for statistical analysis, which is what the large language models do.
Louka Parry (08:40.11)
Mm.
Charles Fadel (08:49.275)
So that has been an enormous change over all these years. But regardless of this, the need for teaching more broadly, more deeply has been known since Socrates and Montaigne and Dewey. There's nothing particularly new about it. What we did was to really rationalize it all by looking at more than 111 different frameworks and 861 papers to
Louka Parry (09:05.07)
Yeah.
Charles Fadel (09:19.301)
to concatenate it all, to summarize it, to synthesize it into an ensemble that was actionable. And by actionable, I don't mean making it happen every once in a while ad hoc, but I mean being very deliberate, comprehensive, systematic, and demonstrable. These are our high bars.
Louka Parry (09:43.982)
Beautiful.
Charles Fadel (09:45.019)
So when you start thinking about it and you look at this concatenation, you realize, okay, well, of course there's knowledge. And, you know, knowledge is important; it's not going to disappear.
It's as naive as saying Google knows everything, therefore, you know, why would you learn anything? Because you can search for anything. Come on. Are you really going to stop at every other word and search for its meaning, or pick up your calculator? Automation is necessary. Your mind's training is necessary, et cetera. So same thing here. The naive view is, well, why would I learn anything? AI is going to be there and do everything for me like a genie out of a lamp. Come on.
Louka Parry (10:15.822)
Mmm.
Louka Parry (10:25.666)
Yeah.
Charles Fadel (10:26.029)
Even if we had that genie, you still have to be explicit with what you want. And how would you be explicit if you're not already educated enough to know what you should want and have a sense of purpose? All this to say that even in the most...
wild of scenarios, of jobs all disappearing and we're just hedonists at the beach all the time, you still would need to scaffold that 10-year-old and make sure that they learn. Okay, that means that the pressure on the second part of the framework, which is related to personalization, becomes all the more critical. If you have...
tools that can do a number of things for you, why would you waste, quote unquote, the brain power on doing so? Our brains are lazy by good evolutionary design. Our brains are about twenty percent of our power consumption for about two percent of the body mass. We are wired by evolution to be lazy because laziness allows you to save brain power. That's why Kahneman talked about System 1 and System 2. We revert to the emotional,
Louka Parry (11:21.036)
Mm -hmm.
Charles Fadel (11:38.793)
rapid system as often as we can to save energy. We judge people on their appearance, whatever. We do these things.
Every once in a while, we realize, my God, I need to spend a few more cycles on analyzing what I've been told. And that's the tragedy of social media: people do not spend that brain energy. The same can happen with AI, where why would I waste my energy analyzing what's been said? I'll just rely on AI. So whereas people are afraid of doomsday scenarios, my fear is the simple doomsday scenario of human nature wanting to save energy
Louka Parry (11:59.598)
Yeah.
Charles Fadel (12:18.253)
and therefore overly relying on AI. That's my biggest fear and it's one of those seemingly small but probably highly impactful fears, very much like in social media. Anyway, going back to the framework that implies that you want to pay attention to the motivation of the student.
Louka Parry (12:21.55)
Yes.
Charles Fadel (12:40.571)
in a world where things seem to come more easily. And that means developing their sense of identity and belonging, developing their sense of agency and growth mindset, but also finally developing their sense of purpose and passion. These are the positive motivators, the vehicles that will drive them forward, even if they have genies out of a lamp. Of course, we're not going to get genies out of a lamp,
at least not anytime soon.
So that means that we still have to develop a full human being. And that means not just knowledge, as has been done for decades, if not centuries, actually centuries, okay? But also what has been asked of education for centuries, which is development of skills, meaning critical thinking, creativity, et cetera; development of character, you know, courage and ethics and so on; and finally, the development of meta-learning, meaning your metacognition, your meta-
Louka Parry (13:36.002)
Hmm.
Charles Fadel (13:40.557)
emotion, aka mindfulness, et cetera. All of these dimensions are important, but the past few decades have seen a narrowing down to mostly knowledge, and typically declarative knowledge, not much procedural knowledge unless it's performing arts. And so I think...
AI is shining a spotlight on our glaring deficiencies, having in essence moved too much into memorization and rote learning. And it's great that it's exposing this, because it's showing, you know, everything should be open book. Everything should be, you know, having to really understand what you're doing. So sure, first phase, you learn to write your essay because you need to learn how to write an essay. But that's not the only thing you should be
Louka Parry (14:20.27)
Mm -hmm.
Charles Fadel (14:34.221)
doing; you should be learning how to use AI to write a better essay, and justify the prompts that you have written, and justify AI's response and analyze it and see if it has hallucinated or not. So, see, a mixture of both worlds, not just one or the other. Anyway, all of this to say, it's about a complete whole-child, whole-world education:
Louka Parry (14:46.862)
Mmm.
Louka Parry (14:59.788)
Mmm.
Charles Fadel (15:00.133)
knowledge, skills, character, meta-learning, coupled with their motivation: identity, agency, and purpose.
Louka Parry (15:08.526)
It's brilliant, Charles. In your book here as well, Education for the Age of AI, and I'm sure in the other ones that talk to the model, you know, it's the idea that wisdom is the enduring goal of education, not knowledge, not information, you know, not just skill, but this idea of wisdom, which seems to me like such a human construct. When you think about what does it mean to be useful? What does it mean to live well? You know, you talked about Socrates and some of the old, you know, like,
You could look at all the different kind of traditions, be they Eastern or Western, kind of converging at this point in time. You know, like, what's worth learning is a great question.
Charles Fadel (15:39.419)
Yep. Correct. Correct. Hold on.
You're spot on, but you know, I don't think it's anywhere near brilliantly novel to say that. I think it's been known throughout human history that wisdom is the goal. It's better expressed by some religions like Buddhism, but it's really been known, you know, since the Greeks, since Confucius and Buddha and so on and so forth. It's been known. You always hear about wise kings being good for their
people and all of that. So it's not a modern concept; it's just that it's been turned into more of a philosophical, mushy view than something actionable. And so we spent a lot of time in the book showing that it's actually not at all a mushy view: it can be decomposed into its elements, and guess what? They correspond to knowledge, skills, character, meta-learning and purpose and all of that,
Louka Parry (16:37.614)
Mm -hmm.
Charles Fadel (16:44.859)
with one extra complexity, the addition of time, which is hard to compress. I know and do things at my age that I didn't do in my 20s. And one of the key challenges will be how do we compress more experiences for a young person? Now mind you,
Louka Parry (17:03.532)
Mmm.
Charles Fadel (17:07.419)
Education already does that. Education doesn't take you step by step from Mesopotamia all the way to modern times. It compresses, you know, all the disciplines. That's what education does. And so we have to figure out even better ways to compress what is essential, including experiences, so that these experiences stick with you. That's why we talk about project-based learning, and in the right proportion with the rest of the pedagogical techniques.
Louka Parry (17:30.222)
Mmm.
Louka Parry (17:37.55)
I'm really curious, Charles, because you're absolutely right. You know, these aren't new ideas. I would love your assessment, as someone that's been doing this work for many years, on the distinction, it's something that I speak to often in my work, kind of the distinction between education, educere, you know, or educare, and schooling. You know, and I think they're conflated often, you know.
Charles Fadel (18:04.185)
Mm -hmm.
Louka Parry (18:07.15)
And I think learning is also confused in that as well. Whereas I see them as three different constructs with three distinct definitions. My issue doesn't seem to be education. My issue seems to be the kind of mental models that continue to drive traditional schooling, which aren't based on the latest learning sciences. They certainly seem to be far more oriented towards training than
education, than the more holistic development. That's a distinction you call out in the book really beautifully, the difference between training and education. So can you just give us a bit of a perspective on that, the difference between education and school, and then maybe education and training, you know, now that we're in this moment.
Charles Fadel (18:50.459)
Yeah, well, so education is a broad learning experience. Training is a narrow one related to specific jobs. Education is meant to provide you with a baseline for life. That baseline has to also be mindful that you're going to need a job at some point, but it's not the only purpose of it, right? It's life.
So really it's a meeting, as you saw at the beginning of the book, of all three vectors that are needed: the psychological, the social, and the economic. Of course, these three are always considered as antithetical to each other, but that's, you know, that's obsolete thinking. All three are needed. It's not one or the other. Which, by the way, if I may quickly weave in one of my pet peeves: we, in education, we have too many OR conversations
rather than AND conversations. I think it comes from an academic mindset of fighting for one's concept or one's idea that makes one famous. But quite honestly, a lot of these constructs that made someone famous are actually partial. There's pretty much no one that has the absolute truth about absolutely everything. And with a more humble mindset, with an AND mindset, we can say, okay, well, I'll borrow this from this theory and this from that theory, and I'll
assemble them in a way that makes sense, and that's much better than either one of them. Again, not an OR conversation, an AND conversation.
Louka Parry (20:25.07)
Great reflection. I often think about that, Charles: you know, thesis, antithesis, we get stuck in the debate, and we love a good debate, instead of thinking of the synthesis, which is, well, actually, what might be the both-and? What can also be true?
Charles Fadel (20:33.563)
Right. Yeah.
Exactly.
Charles Fadel (20:42.293)
Correct, correct. And that's perhaps where the engineering mindset we have in my organization comes into play. That's why we call it...
education engineering, the same way that there is civil engineering, mechanical engineering. We call it education engineering because we're trying to be precise and specific and converge on something actionable. I, for example, do not like how often people ask open-ended questions and don't try to resolve them. You go to a number of conferences and so on, and everybody says, how do we fill in the blank? And I think, come on, we're way beyond that by now.
It should be: this is what we need to do, how are we going to actually do it, not an open question about how do we even think of something. We're way past that. So this is where my engineering intensity comes in handy, because the idea is 1%, excuse me, the sweat is the other 99%, and it's high time we actually get to the sweat level.
Louka Parry (21:30.51)
Mmm.
Louka Parry (21:46.702)
I love it. And so back to the kind of training versus education divide, because, and maybe it's of course a false divide, you know, with AI as it currently stands, and as you see it impacting kind of the workplace and now our education systems, how do you see that? I mean, clearly you have not just a flag in the sand for what needs to happen, but also a way to move forward towards it.
It's something that you lay out in this book. Because you do ask questions like, do students always know best? And then you answer the questions. What do you think are the next steps for people working in schools or in education systems as they think about integrating AI? Or think about this idea of trying to re-centralize the ideal of education instead of potentially training that has crept into the way that...
Why do I say crept into? I think it's just been the way school systems were initially designed, but that's, for now, a completely past paradigm in the mass education setting.
Charles Fadel (22:51.003)
Yeah, well, actually, this issue of training, I think, is very context- and culture-dependent. Around the world, it really depends. Some cultures are much more of a training and economic mindset, and I would generally put the Anglo-Saxon culture in that category. Others are on the opposite side, all about helping a human flourish with no regard for the economic aspect, and everything in between. So I think it's culturally dependent
Louka Parry (23:17.87)
Mm. Mm.
Charles Fadel (23:20.909)
here, context-dependent. That said, again, the reality is correct for both, meaning you want someone to be, let's say, educated for life, and at some point, starting in high school and obviously further into tertiary, you want them to be trained for the economic output that they need. So that's the thing I would keep in mind here.
Louka Parry (23:40.558)
Yeah.
Louka Parry (23:47.15)
Yeah. It's interesting to see, down here in Australia, a few of the credentialing authorities, Victoria comes to mind, that are kind of now blending the idea of education and training, and so bringing more of the vocational education setting into the school experience, which hasn't been our history here in Australia. Even our university sector versus our vocational training sector are very distinct, and there's only a few universities, I'll point to Swinburne,
as some innovative universities that are dual-sector universities that do both. And I wonder...
Charles Fadel (24:22.171)
I think that's a good initiative, but it seems a little bit heavy -handed in the sense that you can achieve a lot of that.
cohesion and relationship to the real world, which they're trying to achieve via training, while not making it about training. So that means, you know, doing more projects early on and structuring things in a way that's more real-world centric. I think that's the ultimate goal: to make it more real-world centric, more relevant, which is our tagline, making education more relevant. But because it's so hard to revamp the curriculum at large, people try to
Louka Parry (24:43.084)
Interesting.
Charles Fadel (25:04.253)
find patches, and at first glance this sounds to me more like a patch than a deep reshaping, because deep reshapings are hard to pull off politically.
Louka Parry (25:18.414)
Yeah, yeah, that's a really interesting point. One thing that really influenced me as a teacher, Charles, that I came across from your work a long time ago, I can't remember precisely what the concept is called, but it's around curriculum design or pedagogical design, is kind of starting with the real-world context first, which is what things like good project-based learning do too. And it just really struck me that
Charles Fadel (25:35.865)
Mm -hmm.
Charles Fadel (25:42.811)
sure, yeah.
Louka Parry (25:47.662)
kind of the paradigm that I was, I don't know if you want to use the word trained, but educated in, in my initial teacher education, which I really enjoyed. But for a lot of it, it was: here's the knowledge that's needed. Okay, here's some skills you add to it. And then here's maybe a competency or capability, a character trait. And then at the end, there is some authentic assessment.
Charles Fadel (26:12.347)
Hmm? Hmm?
Louka Parry (26:13.006)
And it always struck me when I came across your model, and you may explain it more crisply than I am doing here, but you know: why on earth would we start with kind of the abstract and then end at the real world? Especially if we're thinking about motivating young people, which today is all about the real world, instead of starting at the real world and coming backwards, and still having high-quality content.
Charles Fadel (26:24.699)
Well, here.
Charles Fadel (26:31.355)
Well, that's a...
That's a result of expertise, strangely enough, meaning humans learn by going from the concrete to the abstract. Just look at early child development. That's how we learn. We bump around, and then we learn, and we go, and then we eventually abstract. So we go from five apples to the digit five to x to f of x to et cetera. We move on in layers of abstraction. That's how we always do it. However, once you've become an expert, it's really hard
Louka Parry (26:55.566)
Mm.
Charles Fadel (27:02.589)
to deconstruct this expertise. And who becomes teachers or professors? Experts. And so for them, they are so comfortable with the abstract that that's where they start. Well, whatever, 30, 40 years ago there was this big movement to reshape mathematics by starting with set theory, so you'd start with ensembles and bijections and transitivity and things of that nature, which,
oof, that's really formal mathematics. But for a mathematician, that's beauty, because now it's all cohesive, it makes sense. But it's so abstract. Everybody hates it, of course, in class.
Louka Parry (27:34.03)
Sounds like it.
Charles Fadel (27:45.403)
So that's the thing: where you get to in terms of expertise biases you to think that that's the way people should start and love your discipline, rather than starting much more prosaically and bringing them little by little to at least like your discipline, not necessarily love it, but as a minimum, appreciate it. So that's a bias that's induced, amazingly, by expertise itself.
Louka Parry (27:53.934)
Mm.
Louka Parry (28:06.892)
Yeah.
Louka Parry (28:12.206)
Wow, I've never heard it explained that way. That makes a lot of sense. You brought up something else, around disciplines, that I'd love you to speak to, because I'm really curious about where the disciplines go. And again, I think, using the same model, if we're coming from the abstract, well, we can have very clear disciplines, ladders of expertise that,
you know, often as teachers, we become quite attached to and we love, because we develop that expertise in this particular area of knowledge, which is a wonderful thing. But then of course you go to the real world and there are no disciplines. I'll reframe: there are disciplines, but none of them are distinct or separate. You know, you look at living systems or systems thinking or ecology. So where do you think the future of disciplinary knowledge is going?
Charles Fadel (28:54.907)
Yeah.
Sure.
Charles Fadel (29:04.379)
Well, I still think it's actually a useful construct. I don't want to dismiss centuries of, how can I say, convergence in education. But at the same time, clearly, they need to be more porous, because a real-world problem doesn't only involve a single discipline. And that's where projects come in very handy, because they force this interdisciplinarity. Obviously, you could do so with even inquiry-based learning; you don't need projects to do that,
Louka Parry (29:13.806)
Yes.
Charles Fadel (29:34.357)
but projects make it all the more obviously necessary.
But as a quick parenthesis, though, let's keep in mind that project-based learning projects are also very time-consuming. Even though they're sticky and motivating, they're also time-consuming. So you have to have the right balance between didactic education and project-based learning. Again, an AND proposition, and the right ratios between the two, rather than all one way or the other.
Louka Parry (30:05.102)
Mmm.
Charles Fadel (30:05.851)
So we're talking about porosity of systems, you know, between disciplines, and pedagogies being more fluid in how they interact with each other. Now people think, my God, that's so mushy, how is that going to happen? Excuse me, it's just a question of design. As you're designing your course on, let's say, exponentials, you bring in the biology of diseases. You bring in the environmental aspects of global warming,
as you bring in social media virality, like, say, Gangnam Style going to billions of users, and you show how your exponential corresponds to reality or not, and you do projects around that.
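[Editor's illustration: a minimal sketch of the single exponential model Charles describes transferring across contexts — here, disease spread and social-media virality. The function name and parameter values are illustrative assumptions, not figures from the episode.]

```python
def exponential_growth(initial: float, rate: float, steps: int) -> float:
    """Same model, any context: each step multiplies the total by (1 + rate)."""
    return initial * (1 + rate) ** steps

# Disease spread: 10 cases, 40% growth per day, over 30 days
cases = exponential_growth(10, 0.4, 30)

# Viral video: 1,000 views, 25% growth per day, over 90 days
views = exponential_growth(1000, 0.25, 90)
```

The point of the classroom exercise is exactly this reuse: one abstraction, many concrete instantiations, so students learn to recognize the pattern wherever it appears.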
Louka Parry (30:43.116)
Mm. Mm.
Charles Fadel (30:51.547)
That's how you bring it all together. It's not one or the other; it's perfectly doable. And this is the right way of doing it, because then it's very natural, very concrete, and you can move into abstraction and transfer from one disease to another, and from diseases to another phenomenon like global warming.
Louka Parry (31:12.718)
Fantastic. Charles, I would love to be in a class that you taught somewhere at the grad school. You've brought up this concept of transfer, and I want you to talk more to it, because again, I'm really curious. I think great educators know this and do it well, but you need to be tuned in to the fact that it's expertise and transfer that we're trying to cultivate here. Tell us a bit more about what you mean by that.
Charles Fadel (31:37.711)
Yeah.
Well, the strange thing is that transfer is the natural goal of an education, meaning we teach you all these things when you're young so that by the time you're in your 50s or 80s, magically, you can draw on them and apply them to a new situation. That's transfer. It has always been one of the marks of a successful education: that you can actually transfer and adapt to new situations.
But we also made the naive, or lazy, jump into thinking that just because I give you expertise, you're necessarily going to be able to transfer it. And that's not true. That's not how it works. Expertise can remain very siloed. Even if you see an exponential growing in, let's say, a new disease, you don't realize that it's an exponential in other situations as well. You haven't been trained to see it.
Louka Parry (32:34.414)
Hmm.
Charles Fadel (32:42.875)
By the way, that was the class I was teaching. How do you see these patterns across a bunch of disciplines? So it was about transfer. Now, if you allow me, let's move into AI as a comparison to all of this.
Louka Parry (32:42.926)
Hmm.
Louka Parry (32:54.894)
Yes, I'd love that.
Charles Fadel (32:56.379)
So AI in the earlier days of machine learning, meaning only last decade, was extremely good at solving a bounded problem, whether it was Go or Stratego or protein folding or drug discovery or whatever. You give it one set of rules, a narrow set of rules, with a giant data set.
Louka Parry (33:25.262)
Yeah. Yeah.
Charles Fadel (33:26.753)
And it can do it fine. Even if the computational space is gigantic, you know, permutations on the order of 10 to the 535th power, it doesn't matter. It can deal with it. So it can do expertise extremely narrowly, but extremely deep. It's like a chess champion, a Go champion, a protein-folding champion, right? It can do amazing things that way. What has happened with language models, though, is that all of a sudden,
the training data is much larger. It's this enormous corpus of words and all of their interconnections, and basically the autoregressive, statistical nature of how words appear in a given sentence: if you've seen this word, what's the probability of the next one being that one, and the following, and the following? That's how it does it.
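[Editor's illustration: a minimal sketch of the autoregressive idea Charles describes. Real language models use neural networks over huge token corpora, but simple bigram counts show the same "given this word, what's the probability of the next one?" mechanic. The tiny corpus and function names are illustrative assumptions.]

```python
from collections import Counter, defaultdict

def train_bigrams(corpus: list[str]) -> dict:
    """Count which word follows which across the corpus."""
    follows = defaultdict(Counter)
    words = " ".join(corpus).split()
    for a, b in zip(words, words[1:]):
        follows[a][b] += 1
    return follows

def next_word_probs(follows: dict, word: str) -> dict[str, float]:
    """P(next | word): normalized counts of observed successors."""
    counts = follows[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

model = train_bigrams(["the cat sat", "the cat ran", "the dog sat"])
probs = next_word_probs(model, "cat")  # {'sat': 0.5, 'ran': 0.5}
```

Scaling this statistical game up to an enormous corpus, with context far longer than one word, is what lets the models generalize in the way Charles describes next.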
Louka Parry (34:08.398)
Mm.
Charles Fadel (34:25.595)
And so magically, all of a sudden, because we explode the narrow boundedness, now these things can actually transfer. They can see analogies, extrapolate things that we may not be able to see because we don't have this gigantic corpus of knowledge. And so all of a sudden, now these systems can not just be better experts than us, but they can transfer better than us. At the macro level.
Louka Parry (34:33.87)
Hmm.
Louka Parry (34:53.87)
Mm.
Charles Fadel (34:54.427)
At the refined level, and this is where the conversations rarely go, it's a different story. We have all the caveats that we described in the book, and you also see them around you: when they start hallucinating, when they draw pictures where the elephant is supposed to be invisible and the elephant is smack in the picture, and on and on. We've seen these examples. So it can be a giant helper, but it's very much trust but verify, as we would
Louka Parry (35:08.332)
Mm -hmm.
Charles Fadel (35:24.381)
do for nuclear proliferation, trust but verify. So that's how it goes with transfer there and the capabilities it brings.
Louka Parry (35:27.692)
Mmm.
Louka Parry (35:36.558)
Well, as you were explaining that, I remembered a really helpful graph in the book, which talked about artificial narrow intelligence, then artificial capable intelligence, and then ultimately AGI, artificial general intelligence. A lot of the clickbait stories are on the general intelligence side, but I think understanding each phase is really interesting.
Charles Fadel (35:46.009)
Yep.
Charles Fadel (36:02.331)
again.
Louka Parry (36:06.446)
So I guess that takes me to the question. If we're having, when we're having this conversation, Charles, I hope in the year 2030, let's give it a six-year, I mean, five-and-a-half-year time horizon: where do you think we will be in terms of the technology? And then, of course, where do you hope we will be in terms of the way our education systems have been, let's say, re-engineered, perhaps as part of its influence?
Charles Fadel (36:32.807)
Okay, so two separate questions, and let's talk about where technology will be first. Technology progresses by a series of steps, right? It's how evolution progresses as well. Stephen Jay Gould has talked about it in such terms, the wording escapes me, but whatever, a series of steps.
I think we're plateauing now with language models. There's only so much you can wring out of this statistical process I talked about earlier. So to get to the next acceleration, we're going to need different types of algorithms working with the neural-network, Markovian types that we're using here. And so...
We cannot forecast when that will happen. We're talking about neuro -symbolic AI. We're talking about all sorts of different types of algorithms. So we cannot forecast when breakthroughs occur. We can just have a vague probability of whether or not they're feasible in a given timeframe. The emerging consensus, I would say, after the first scare that...
came after language models is: okay, well, there's no GPT-5, it's plateauing. There's only so much electricity you can suck in, only so much data you can suck in, only so much these algorithms can do. We're very much in a situation of diminishing returns.
Louka Parry (37:54.188)
Mm.
Charles Fadel (38:08.571)
So, okay, we cannot say what's going to be different in 2030 from that perspective. I can tell you that, at least in my opinion, and as described in the book, we are nevertheless in a very, very interesting phase. This capable phase, which is an engineering phase. Now we're bringing all sorts of things together to work better. For instance, GPT-4 Omni has now gone into multimodality. Now it can read graphs. Ooh, that's cool.
Louka Parry (38:37.006)
Amazing.
Charles Fadel (38:38.477)
So you see, that's where the improvements may come, much more interestingly than just more words aligned better.
Louka Parry (38:47.534)
Mmm.
Charles Fadel (38:48.091)
And there's also more rationalization: okay, no, these are not reasoning systems. Unfortunately, the AI world has used words that are very anthropomorphic, like reasoning, like intelligence, like emergence. I would much rather call it thresholding. That's much more neutral. It's not an emergence, it's a threshold. It passes a threshold.
Louka Parry (39:09.902)
Interesting.
Louka Parry (39:13.806)
Yeah. Yeah, so.
Charles Fadel (39:16.635)
I wouldn't call it reasoning; I would call it reckoning, or something like that, or restating statistically. And because we use these words, it takes us into an anthropomorphic view of these things. And therefore, we like to...
Louka Parry (39:25.006)
yeah, never thought of that. Yeah. Yeah.
Charles Fadel (39:37.179)
think of them as quote-unquote intelligent, and you have to resist this anthropomorphization. For instance, last week I was in Madrid at a conference, and they had this chatbot, a female chatbot with, of course, the typical perfect body that no one has, and all of that, right? And I'll refer to it as "it", because you have to fight the anthropomorphization every step of the way. You're not going to talk to Emily; you're going to talk to it.
Louka Parry (39:42.606)
Hmm.
Louka Parry (39:53.87)
Yeah. Yeah.
Charles Fadel (40:05.051)
If you let yourself slide, then you're going to start ascribing meaning when there is none. It's just words aligned statistically one after the other. You have to keep that in mind at all times.
Louka Parry (40:07.212)
Mm.
Louka Parry (40:12.654)
fascinating.
Well, that's just such a beautiful segue, because we describe it as "it", and yet the human desire is to call it "her". And I use that deliberately because of the movie with Joaquin Phoenix, which is really science fiction turned science-fact-esque. But please continue. It is so interesting: even with robots, we give them names, like pets. We immediately ascribe our humanness to them in a really...
Charles Fadel (40:27.611)
Yeah.
Exactly.
Louka Parry (40:46.958)
interesting and, in some ways, acute, but definitely problematic way, because, as you say, it places meaning and intentionality where it isn't necessarily there.
Charles Fadel (40:56.123)
Yeah, right. These systems have no agency, no purpose. They may have a sort of identity, by the way, but they have no agency, no purpose. We're still in the driver's seat, and we shouldn't surrender that. Now, above and beyond that, they do have a certain form of identity based on their training data, right? The corpora they've been trained on bias them in one way or another, if it's only an Anglo-Saxon corpus or whatever.
Louka Parry (41:07.532)
Mmm. Mmm.
Charles Fadel (41:21.595)
Their algorithms also give them some form of identity. And you have the user interfaces and the agentic aspect of them that give them a form of identity. So they can respond as a funny buddy or as a nasty, annoying buddy, depending on what flavor of UX you put on them. So they have a form of identity.
Louka Parry (41:31.502)
Mmm.
Charles Fadel (41:46.811)
But that doesn't mean that they are conscious by any stretch.
Louka Parry (41:47.918)
It's a bathroom.
Louka Parry (41:52.59)
I feel we could have a very deep conversation about consciousness at this point. I'm going to have to tag it for our next chat, because I'm so interested. I mean, what does it mean to be human in this moment? Yeah.
Charles Fadel (42:04.539)
Well, I'll direct you to something that we probably should have put in the book. In the appendix, we have an entire chapter on the evolution of competencies, like skills, character, and meta-learning. And if you look all the way down to insects and all the way up to mammals, you'll see that it's full of gray zones. A lot of these capabilities emerged through evolution. And so you had to have a certain form of courage. You had
Louka Parry (42:31.212)
Mmm.
Charles Fadel (42:34.493)
to have a certain form of ethics, et cetera, to survive in solo versus social environments, for all of these animals. So without going into the consciousness aspect, I'm sure we're going to find out that they have gray zones too. My dogs are conscious that they exist. They may not have the duration of consciousness that we have, or the purpose of consciousness that we have, but they're certainly conscious of their existence. They remind me every day.
Louka Parry (42:57.102)
Yeah. Yeah.
Yeah, I'm sure they do. Interesting.
Charles Fadel (43:05.179)
Right, so that's the sort of thing we have to pay attention to. We are a continuum; we're not the apex. We're just one of the branches, with plenty of different leaves, on this evolutionary tree.
Louka Parry (43:13.196)
Yes.
Louka Parry (43:22.284)
That's beautiful. Charles, if you were to look to the hope that you have for education, and this is a mission you've been on as a futurist, inventor, and author, doing this work at the Center for Curriculum Redesign, what would you hope can shift? What is the re-engineering that needs to be done?
Charles Fadel (43:45.787)
Well, I'm going to talk about my fear first, then the hope. My fear has always been that these things are accelerating faster than education systems can adapt, and so we get into a black-hole sort of situation where we reach the event horizon, after which education will never be able to catch up.
I'm hoping this is not the right analogy. I'm hoping, of course, that with a number of us, you and I included, working on changing the systems, people are going to start, begrudgingly perhaps, but faster and faster, accepting that the change is needed. And...
I'm going to finish by highlighting one key villain in the system that's very frequently under the radar, and that's university entrance examinations. They're the ones that bias everything we learn in K-12.
Louka Parry (44:38.702)
stress.
Charles Fadel (44:45.179)
And because of that bias, we are not allowed to pay attention to modernized disciplines, say, data science rather than trigonometry. We're not allowed to add modern disciplines like engineering, or entrepreneurship, or social science. They're not valued. All we value is what's on the final entrance test, whether it's the SAT or, what do you call it in Australia again?
Louka Parry (44:51.342)
Hmm.
Louka Parry (45:06.51)
Mm.
Louka Parry (45:10.318)
The... the ATAR. Yes.
Charles Fadel (45:12.483)
So these are the big impediments to the system. I know that a lot of people in Australia recognize that, but as usual, there's a bunch of people who fight to keep it in place. Same for the SATs, same for the baccalauréat, same for the Abitur, same for the Gaokao, you name it. These are the things that immobilize the system. And until they're opened, and I don't mean removed,
Louka Parry (45:33.646)
Hmm.
Charles Fadel (45:42.397)
we'll always need a sorting mechanism, but until they're made more intelligent, to allow for modern disciplines, modernized disciplines, competencies, until these systems allow for more, we're kind of stuck. That's really the Gordian knot that needs to be cut. And it's shocking, in a sense, that all of the complexities we deal with eventually converge on this one choke point.
Louka Parry (45:46.798)
Mmm.
Charles Fadel (46:09.339)
If they were to allow us that freedom, then we could teach the what and the how a lot better. But they don't, and they're not even aware of the immobilizing power they have. They're just going on inertia. So that's really the one key element that we all need to untangle.
Louka Parry (46:23.118)
Yeah.
Louka Parry (46:32.366)
Brilliant. Charles, thank you. And you know, we're doing some work in Australia around that, with the Learning Creator team and others, and some wonderful leadership at the system level as well. I have a final question for you, which is: from your beautifully unique vantage point, with all of the expertise that you have and the way you transfer, what is your take-home message for someone listening to this conversation, in the work that they do daily, leading a school,
Charles Fadel (46:38.395)
Exactly. Exactly.
Charles Fadel (46:45.499)
Sure.
Louka Parry (47:01.006)
leading an organization, educating young people? What's the take-home message you want to resonate in their mind?
Charles Fadel (47:08.471)
Perhaps I would want to give them a sense of urgency and a sense of...
being more ambitiously incremental. I'm not saying radical; I'm saying ambitiously incremental. I know the world doesn't change all at once, all radically, but certainly with some ambition we can drive things forward. So: driving change themselves, at their local level, with whatever they have at their disposal. That's, you know, the typical "think globally, act locally."
Louka Parry (47:41.806)
Charles, thank you so much for spending time with me on The Learning Future Podcast. Thank you, too, for the work that you continue to do at the Center for Curriculum Redesign with the wonderful team there, and for putting out this wonderful book that I am devouring and going back over, because it is so much more than just a philosophy. It shows you quite tangibly; I mean, the engineering perspective that you bring is really clearly put forward in here.
So thank you very much and I look forward to our next conversation.
Charles Fadel (48:15.579)
Thank you so much, Louka, it's been a pleasure. Take good care.