Chais Pre-Conference Keynote: Prof. Roger Azevedo, University of Central Florida 2026
Good evening everyone, and good morning to Professor Roger Azevedo, who is joining us from Florida. I'm delighted to welcome you all to the opening session of the 2026 CHAIS pre-conference. My name is Dr. Noa Brundle. I'm a post-doctoral fellow at the Open University's research center for innovation in learning technologies, and I serve as head of the CHAIS conference organizing committee.
We are honored to have Professor Azevedo here with us as our first keynote speaker. Although we had planned to meet in person, the current circumstances have required us to connect via Zoom instead. While we regret not being able to welcome you to Israel at this time, we're grateful for the opportunity to have you join us virtually, and we very much look forward to meeting you in person in the near future.
So without further ado, it is my great pleasure to introduce Professor Roger Azevedo from the School of Modeling, Simulation, and Training at the University of Central Florida. Professor Azevedo is a world-renowned expert in the field of self-regulated learning, and his interdisciplinary research examines the cognitive, metacognitive, emotional, motivational, and social processes involved in learning with advanced learning technologies. He has authored over 300 peer-reviewed publications and currently serves as co-editor-in-chief of the British Journal of Educational Psychology, as well as on the editorial boards of several leading journals in the learning and cognitive sciences.
Professor Azevedo will now deliver his keynote lecture, entitled Measuring and Supporting Self-Regulated Learning and Metacognition in Digital Learning Environments. Before we begin, a brief note for the audience: if you have any questions during the talk, you're welcome to submit them via the chat, and they will be addressed at the end of the session, where you can also ask your question live. Roger, the floor is yours.
>> Well, thank you so much, Dr. Brundle, for a wonderful introduction. So happy to be here. Thank you. And also Dr. Blau for the invitation. And yes, I'm sorry I'm not able to be with you in person, but like you said, hopefully in the future. Thank you for the opportunity to present today. So we changed it from a workshop to a lecture, so maybe I'll spend about 45 minutes talking and then we'll open it up to questions, if that's okay.
Yes, so today we're going to be focusing on measuring and supporting self-regulated learning and metacognition in digital learning environments. We also call them advanced learning technologies, and if you look in the literature, people call them technology-rich environments, etc. So, really focusing on measuring and supporting. I'd also like to acknowledge all the funding agencies, because this work is not possible without some major funding agencies from the federal government and from the military, and also some very specialized centers. Some of our work is also influenced by work from when I was still at McGill University, and we're also funded by, among others, the Jacobs Foundation, and we'll talk about some of those projects. So what I really want to focus on is the following, if we go to slide number two.
I'll kind of start off with a little bit of an overview, in terms of the fact that there is a science of learning with advanced learning technologies. You'll find a lot of this work comes from educational psychology, cognitive science, the learning sciences, and also the field of artificial intelligence in education. You know, some people have kind of just discovered GenAI; we see this a lot in the literature and in some of these public forums. It's almost like, oh my gosh, there's this thing called AI in education. You know, there's an entire field that goes back way before ChatGPT came out, etc. I kind of want to give everybody a global perspective: when we talk about metacognition and self-regulation, right, we're really talking about, at least from our perspective today, the human's ability to have an awareness. Are you monitoring? Are you regulating? Are you evaluating? Reflection, adaptivity. Those tend to be the very high-level, macro-level processes that we think learners, any type of learner, will experience and go through when using a technology. But again, it depends on who the learner is, what the task is, and the learning technology.
And really, what we have been doing for close to 30 years is using technologies to do a whole bunch of things. We're trying to induce those processes, like, you know, trying to trigger metacognitive awareness. Or we're trying to detect, because if you want the system to be intelligent, to provide individualized scaffolding, for example, it needs to detect. And today we're going to be focused on the detection and modeling, and then tracking, supporting, and fostering. Once we know, or have a good idea of, what the student needs, how can we support and foster those self-regulatory processes, right? And that could be related to cognition, emotions, motivation, right, and affect.
And then what we're going to focus on today is really not just the typical self-report measures, which have basically inundated our literature for decades, almost centuries, right? It's the use of multimodal data. So, it's good to know that students have self-perceptions about these metacognitive, self-regulatory processes; what we have been doing is asking how they're actually enacting those processes and how they're engaging in those processes in real time, right? And to do that, we use multimodal data, right? That means log files, so any kind of interaction between the learner and the technology is captured, typically at the millisecond level. Sometimes we have them go through concurrent think-alouds. So a student who may be reading something about biology will say something like, 'Oh, I don't understand this.' That's beautiful, because we can segment that; it means they're monitoring their emerging understanding. Then the question becomes: what does a human do? How do you regulate that? And if you can't, then can we use a technology to regulate it? That could be with a pedagogical agent, right, or any other kind of environment, and we'll talk a little bit about that. Eye movements, facial expressions, physiological sensors, and then we also have audio and video recordings that give a complete, bird's-eye view of the learner, the task, the context, and all their interactions. Right? We're focusing specifically on CAMM processes, that is, the cognitive, affective, metacognitive, motivational, and social processes.
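To make the multimodal idea concrete, here is a minimal Python sketch of one basic step in this kind of pipeline: aligning millisecond log-file events with 250 Hz gaze samples on a shared timeline. The file names, column names, and tolerance are hypothetical, for illustration only, not the lab's actual pipeline.

```python
# Minimal sketch (hypothetical file/column names): aligning two multimodal
# channels -- millisecond log-file events and 250 Hz gaze samples -- so each
# log event carries the nearest preceding gaze sample.
import pandas as pd

logs = pd.read_csv("log_events.csv")    # columns: timestamp_ms, event_type
gaze = pd.read_csv("gaze_samples.csv")  # columns: timestamp_ms, x, y, pupil

logs = logs.sort_values("timestamp_ms")
gaze = gaze.sort_values("timestamp_ms")

# merge_asof matches each log event with the latest gaze sample at or before
# it, within a 50 ms tolerance (roughly one or two 250 Hz samples).
aligned = pd.merge_asof(
    logs, gaze, on="timestamp_ms",
    direction="backward", tolerance=50,
)
print(aligned.head())
```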
And related to this talk: once we have a good model and understanding of what the student, the learner, needs, then the question becomes, how do we make the system adaptive? A lot of the literature is a little bit confusing, because some people talk about adaptivity, some people talk about personalization, and then, if you look at the game-based learning literature, they talk about gamifying a game. So what does that really mean? Okay. And I'm going to talk a little bit, focusing on assessment and support, by showing you some of our current projects, and we'll have an opportunity to talk about that. Then hopefully on Wednesday we'll continue with this, talking about, you know, natural language processing, how we use LLMs, what kinds of GenAI tools and artificial agents there are. And on Wednesday I really want to focus on this future of AI, which is the use of simulated learners and human digital twins, which are supposed to be replicas of who we are. Okay, so that's a little bit of an overview. So we ask the fundamental questions, right, as psychologists, right? About self-regulation across ALTs, or advanced learning technologies. So we ask fundamental questions.
What is self-regulation? Right? What is it? When, where, and how is it occurring? How are students deploying it? How do we measure these processes? Right? What kinds of methods, techniques, and approaches exist that we can use? How do we analyze the data? Yes, we use traditional, you know, statistical methods, but we also use mixed methods. Sometimes we do qualitative analysis, but we have also been using machine learning techniques, right? And now we're getting into computational modeling, because there is a whole other area in the AI field that talks about artificial metacognition, which is different from human metacognition. But what can we borrow from that field to design more intelligent systems? And then there's been a new push, with people's dissatisfaction with how we can measure self-regulation as it temporally unfolds, toward the use of complex dynamical systems, which, you know, philosophers of science have been using, computational modelers have been using, the biological sciences, etc.
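As a hedged illustration of what analyzing this "temporal unfolding" can look like in its simplest form, a first-order transition matrix over a coded sequence of SRL events shows which process tends to follow which. The event codes in this Python sketch are invented for illustration:

```python
# Minimal sketch: a first-order transition matrix over coded SRL events.
# The event codes below are invented, not an actual coding scheme.
from collections import Counter
from itertools import pairwise  # Python 3.10+

events = ["PLAN", "READ", "MONITOR", "READ", "MONITOR", "STRATEGY", "EVALUATE"]
states = sorted(set(events))

# Count adjacent pairs, then normalize each row into transition probabilities.
counts = Counter(pairwise(events))
for src in states:
    total = sum(counts[(src, dst)] for dst in states)
    if total == 0:
        continue
    row = {dst: counts[(src, dst)] / total
           for dst in states if counts[(src, dst)]}
    print(src, "->", row)
```

Richer dynamical-systems analyses build on the same starting point: an event sequence coded from the multimodal record.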
And because I'm in the School of Modeling and Simulation, we also take it to the next level. Well, if we collect all this data on these humans, right, how do we model self-regulation? Is it an avatar that basically says, 'Well, if you want to increase your metacognitive awareness, you should do X, Y, and Z'? Okay, that could be one way. But can we use intelligent data visualization? So, imagine a teacher in the classroom who has a dashboard, right? What information could we provide to that teacher, right? And then simulation: how do we simulate these processes in different environments and with different artificial agents, right, to maybe model some new approach to providing scaffolding? So those are some of the fundamental questions that really drive our research.
And here I have a little snippet of some of the environments that we've built. Right? So the top right is our typical student, right, who is instrumented. We're collecting screen capture, emotions, log files, eye tracking, keyboard, physiology, mouse, think-alouds, etc. But you know, that's okay in the lab, where we can basically try to control as much as possible. But we also do work in the classroom. Well, it's not feasible to have that kind of setup for 30 or 40 students. So the question is, what research strategy do you develop? When we do 30 to 40 students, let's say K-12, right, we typically collect log-file data, right, and also pre- and post-tests of the content knowledge, right, and also some self-report measures. So we're missing all that multimodal data; all we have is log files. So what inferences can we make about these processes? Sometimes, the top left here is a high school student, right? Sometimes schools don't even have labs where we can actually put our setups. So, for example, here's a student learning about synthesizing compounds in chemistry, and the data is actually being collected in a wet lab in a high school. Or the bottom right, where we have two clinicians, clinical students basically. One is using facial EMG; you can see the sensors on her cheek and on her head, and she's wearing a portable eye tracker. And here we're testing some new technologies for nurse and healthcare training. Okay. So those are some of the big questions.
Conceptualization. I'm not going to get too much into this; I just want to say that, related to measurement and support, there are many, many different conceptualization issues that we need to discuss, present, and worry about, right? One of them, for example, is the fourth bullet: what are the boundaries between these metacognitive processes? When a human goes from awareness to monitoring to regulation, when is it happening? Why is it happening? Right? And then the question is, if it's becoming maladaptive, then how can a learning technology or digital learning environment intervene? And how does it intervene, right? And unfortunately, some of our theories in educational psychology and the learning sciences, right, are too abstract, right; they're not prescriptive enough, they're more descriptive, right? And the question is, well, that doesn't really help those of us who also design intelligent systems, right? So there's a lot of work to be done. There are developmental differences that occur, and individual differences, right? And then, from an assessment and measurement perspective, this is all assuming that a human can externalize: that we're making all these processes overt (covert, excuse me, overt), so that everybody can see them. But what happens when you are an expert at engaging in metacognition, and you no longer talk about it, and you have the skills and knowledge? It then becomes, you know, invisible, if you will; we can't see it happening. Okay, so we have a couple of these issues.
But what I also want to talk about is that, you know, all our work is theoretically driven. Here's just a sample of some of the models, theories, and frameworks of self-regulation and metacognition that we use, and these are very applicable to our area, right? In some of our studies we go back to the top left; that is the basic one, right? Nelson and Narens' model of metacognition, if you will. A very simple model. It can be used for reading; it can be used for really anything. Okay. Then over here on the right, next to it, in orange and blue, is a computational model of metacognition. The third one here is one that we have used extensively: that's the Winne and Hadwin model, an information processing theory, right? It also makes assumptions about the phases, the sequences, the operations that a human goes through as they're learning about really any kind of topic with a learning technology, whether it's a game, an intelligent tutoring system, etc. We also see a lot of work by Dunlosky that differentiates the monitoring component from the control, or regulation, aspect. Right here, just to show you: according to some theories or models like this one, there is an expected time during learning or performance or reasoning or problem solving when a human should be deploying some of these processes, right? But then some of our work also has to do with more than cognition and metacognition; sometimes our students get frustrated because the content is too hard, right? Or they need scaffolding and they're not getting scaffolding. So we use the D'Mello and Graesser model of affect. Okay. And then on the bottom here you have two different models of emotions that come from other disciplines: we have Klaus Scherer, and then we have James Gross, right? One thing that we would like to continue doing with this model of emotion regulation is: can we embed emotion regulation in our digital learning environments? Right? So that the students can actually attempt it, or see a model of, like: how do I stop ruminating? How do I engage in cognitive reappraisal? Do I know that's better than rumination? Right? And then, finishing off here on the bottom left, is work by our dear colleagues Sanna Järvelä and Allyson Hadwin and their colleagues on socially shared regulation. Right? In some cases we have two humans interacting with, let's say, a game environment; sometimes it could be a human and an artificial agent. So the question becomes, how do we extend models of self-regulation into socially shared regulation, external regulation, co-regulation, right? A lot of this work has been done, and it is now actually more important, because as people start using GenAI, the question becomes: what is GenAI? Okay. Just to give you a snippet before we talk about more data, here's a snippet of some of the work that we do, just to show you that we cut across different, and sorry, I don't mean to be disrespectful to ourselves as humans, across humans, okay, tasks, and content.
So, the top left: we've been doing tons of work on Crystal Island, which is a game-based learning environment. Here, students have to self-regulate because they're learning about microbiology and they're learning how to engage in the scientific reasoning process. The bottom left is again an example of multimodal data during complex problem solving; a student who's using a game-based learning environment, this is typically what they would look like, a fully instrumented student. The top right: we also do a lot of work where the digital learning environment is not a typical technology; it's a high-fidelity mannequin. Okay, a clinical healthcare mannequin. In this case, for example, we work with one of our children's hospitals, where we actually model and simulate, using a mannequin of a child, a resuscitation process, and we know that if a child is not resuscitated, right, then they die. So here we look at team performance, right, with nurses and pediatricians who are residents, and sometimes emergency medicine, and they have seven minutes to save the child. And this child will communicate, or try to communicate, with the team. Okay. So in this case it's a high-stress environment, where you're not just learning about cell structure; here you are actually saving lives. Okay. Our new work here is on simulated learners; I'll talk more about this on Wednesday during the keynote. Imagine a high school student who's having trouble with algebra problem solving, right? She gets to actually teach her simulated learner how to solve the problem, and also about the self-regulatory skills that she would use. And then she basically gets to see her agent, her simulated learner, model all those processes. But what the student doesn't know is that sometimes the simulated learner will deviate, make more errors, use different strategies, because we want that to be the trigger for the student to say, 'Hey, I didn't ask you to do that. That's not a strategy. Is that better?' So it's one way to accumulate and learn metacognitive knowledge and problem solving. In some cases, we talk about system beliefs. So, for example, here's an open learner model; this comes out of Judy Kay's work, and she's at the University of Sydney. A lot of our AI systems make decisions, for example, about adaptivity and scaffolding, but we never really tell the students, or show them, why we're scaffolding them in a particular way. So an open learner model is a data visualization tool that basically takes the beliefs of the AI system and shows them to the learner, so they can understand and potentially negotiate, right? It's like, 'Oh, you think I'm a really poor self-regulator. Well, why is that? You know, I don't think that is true. So let me, basically, through sliders, use the data visualizations to improve the system.' Sometimes we try to accelerate their learning by showing them their heat map, which is their attention allocation. Okay.
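An attention heat map like the one just described can be computed directly from fixation coordinates. The following Python sketch uses synthetic fixations and an assumed 1920x1080 screen; it illustrates the general technique, not the lab's actual tooling:

```python
# Minimal sketch: turning fixation coordinates into an attention heat map.
# Screen size and fixation data are synthetic, for illustration only.
import numpy as np
from scipy.ndimage import gaussian_filter
import matplotlib.pyplot as plt

width, height = 1920, 1080
rng = np.random.default_rng(0)
fx = rng.normal(960, 200, size=500)   # fixation x-coordinates (pixels)
fy = rng.normal(540, 120, size=500)   # fixation y-coordinates (pixels)

# Bin fixations into a coarse grid, then smooth to get the familiar heat map.
grid, _, _ = np.histogram2d(fy, fx, bins=(height // 20, width // 20),
                            range=[[0, height], [0, width]])
heat = gaussian_filter(grid, sigma=3)

plt.imshow(heat, cmap="hot", extent=[0, width, height, 0])
plt.title("Gaze heat map (synthetic data)")
plt.colorbar(label="smoothed fixation density")
plt.show()
```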
We also do a lot of work in VR. And then the top one here, which I'll also talk more about tomorrow: here's Megan. Megan is one of my postdocs. And we have created a digital twin of Megan, who lives in a box. And we're using NLP and AI so that, basically, you can imagine talking to your future self. Megan can be a teacher, right? So it's like, 'Hey, I'm a new teacher. I'm having a really hard time, you know, engaging emotion regulation strategies in my middle school kids. Can you show me, right, how to do that?' Megan could be a student. Megan could be a teacher. Megan could be a parent. In some of our work in biomedical science, Megan can also be a clinician or a patient. Okay? So imagine being diagnosed with pre-diabetes, right? And we have all her genome data, okay? And then the question becomes, well, show me what's going to happen to me in 5 months, 6 months, a year. Will I actually become diabetic? Okay? Or, if I don't continue to make any kind of changes, what's going to happen to me five or ten years from now? Will I lose a limb? Will I be blind? Okay? So, can we do disease progression?
So that's some of the work that we're doing. So yes, we're dealing with a lot of theoretical issues, and obviously, you know, I'm trying to be mindful of time; there are quite a number of them. One of them is these theories: do we have theories? The new handbook of AI and education came out in 2023, and there's another one coming out this year, where we kind of lay out how we embed these theories of self-regulation into these different types of digital learning environments, right? Not just traditional ones, but, like I was mentioning, in mannequins also. Okay.
And so if we look at the literature, in terms of the learning technologies that have been developed, right, which ones have focused on SRL and have used SRL? The top right is MetaTutor, which I'll come back to on the next slide. This is one that we developed over 15 years ago. It's about learning the human circulatory system, and we have different agents, where each agent is responsible for something: one is for cognition, one is for metacognition. And it's a very, kind of, you know, multimedia, hypermedia environment, a very traditional intelligent tutoring system, where we can detect everything that the student is doing. The bottom one, BioWorld, is from my former adviser, Susanne Lajoie from McGill University, who actually just retired two years ago. This is self-regulation also, but drawing more on cognitive load theory, and it's for medical students, where they learn how to solve cases. Here's Crystal Island. Completely different approach here: there is no AI in the system. Okay, it's a game-based learning environment; it's purely constructivist, open-ended. The students basically have to solve the mystery in order to end the game. Okay. Our colleagues at Vanderbilt, Gautam Biswas and his group, take a completely different approach with Betty's Brain. This one is learning by teaching. So Betty's brain is this white box here, right? It starts off blank. The students are learning about ecology, and then they're teaching Betty by populating her brain, if you will, okay, with concepts and relationships. And then they get another student to go and ask Betty about particulars, you know, how are phosphates and nitrates related to water quality? And they can ask those questions, but guess what? Betty can only answer correctly if you taught her about that topic. So the kids go into these cycles of teaching and learning. Phil Winne, one of our long-term collaborators, who also retired two years ago, developed gStudy, right, which is a hypermedia, hypertext environment, where the assumption that Phil makes is that when you highlight, right, you're actually engaging metacognitive processes. Okay. And then one of our colleagues at NC State has developed SimStudent. This is for algebra, and here the students are actually teaching, almost like a sim learner: the students are teaching the simulated students how to solve these problems. So again, I just want to show you that there are ALTs, or advanced learning technologies, or digital learning environments, that have been using self-regulation and different models and theories, and the question becomes: where is the AI in some of these systems?
Okay. And then one that I wanted to emphasize is MetaTutor. This is one that we developed over 15 years ago, and, well, four years ago now we actually synthesized all the literature we produced in terms of cognition, metacognition, emotions, and motivation, with all these co-authors. These co-authors were at one point in time either my postdocs or grad students, and they are now faculty at different universities across the world; they led some of these projects. Right? But as you can tell, it's got agents. It's got a self-regulation palette. It's got a whole bunch of things: a timer, a table of contents, etc., because we wanted to be able to track everything. And that work with the original pedagogical agents has given rise to new environments, such as different versions of MetaTutor where now we actually hire actresses, for example, whom we videotape and photograph, and we literally, if you will, peel off their faces to make the agent much more lifelike, so that when the student is interacting, for example, with Emma (that's the agent's name), Emma actually gets to respond to her, so that there is some kind of affective connection in terms of engagement, and we can also use facial expressions to trigger some metacognitive processes. Like when Emma kind of looks like this, like I'm doing now, it's like: you're probably spending too much time on irrelevant task content. Okay, so using facial expressions and verbalizations to do that.
And then, what I wanted to show you is really at the crux of all this. We also have an NSF-funded project with five universities, one in Europe and three in the US, and it's about parallel programming, which is a very complex topic for undergrad computer science students. So I wanted to show you the kinds of data, as we start talking about what we are measuring; I want to show you different aspects of this. So here is a video screen recording, okay, of an undergraduate student learning about semaphores and parallel programming in this game called Parallel. And so, okay, here it's better than nothing, because now I have a screen recording; I can see what you're doing, okay. But think about it from our perspective as researchers and designers of learning technologies: what happens if now I have the screen recording plus the student's eye tracking in real time? This is creating a whole CSV file, which looks like an Excel file, and because it's a 250 Hz eye tracker, I'm getting 250 data points per second as to what the student is doing.
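Raw 250 Hz samples are usually reduced to fixations before any metrics are computed. Below is a minimal Python sketch of the classic dispersion-based (I-DT-style) approach; the thresholds are illustrative defaults, not tuned values from this project:

```python
# Minimal sketch: dispersion-based (I-DT-style) fixation detection on 250 Hz
# gaze samples. Thresholds here are illustrative, not project values.
def detect_fixations(samples, max_dispersion=35.0, min_duration_ms=100.0,
                     hz=250):
    """samples: list of (x, y) gaze points; returns (start, end) index pairs."""
    min_len = int(min_duration_ms / 1000 * hz)  # samples in a minimum fixation
    fixations, start = [], 0
    while start + min_len <= len(samples):
        window = samples[start:start + min_len]
        xs, ys = zip(*window)
        dispersion = (max(xs) - min(xs)) + (max(ys) - min(ys))
        if dispersion <= max_dispersion:
            # Grow the window while dispersion stays under the threshold.
            end = start + min_len
            while end < len(samples):
                xs = xs + (samples[end][0],)
                ys = ys + (samples[end][1],)
                if (max(xs) - min(xs)) + (max(ys) - min(ys)) > max_dispersion:
                    break
                end += 1
            fixations.append((start, end))
            start = end
        else:
            start += 1  # slide forward one sample and try again
    return fixations
```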
Okay, so that's a little bit better. Well, here's what I want to show you: now we've got the screen recording happening here, and we've got the eye tracking of the student. Okay. And what you're seeing moving really, really slowly here, which looks like a whole bunch of EKGs, is the facial detection of affective states like curiosity, okay, anger, etc. Okay. But I turned off his voice on purpose, because we're trying to figure out, the question is, what happens with more data. And here is the holy grail.
>> [Video audio] You've got everything you had before. He's going to talk soon.
>> Yeah, you can toggle on the links to display all the time
>> on your build, your solution. To turn it off, simply click this button in the sidebar.
>> Can you hear him?
>> 'Display all the time.' Okay. I'm not sure about that.
>> Yes.
>> Thank you. Thank you. Okay. Just to show you: one issue for measurement is that each of these data channels contributes to our understanding of particular self-regulatory and metacognitive processes. But if you notice, the language is the one that basically contextualizes everything. Okay. And so these are all dynamic versions of different types of data, right, that we then use to understand why this student solved the problem correctly or was having challenges. And then we also have non-dynamic representations. For example, after they do one level of the game, here's a heat map, right, that shows us, from the eye movements, where they allocated their attention more than anywhere else. Okay. So the question becomes: we have a lot of data, multiple data channels, and we can make inferences from each of these, right? And then we also have summary data, right? So, with all this data, we tend to use it as researchers to make decisions, to make things more adaptive, but also to understand self-regulatory processes using this game. The question, though, from a teaching, supporting, and training perspective, is: why can't we take these snippets of data, especially if they're meaningful, okay, and give them back to the students, so that now they could understand? Because otherwise, it's what we typically do: 'Oh, you didn't do very well because you didn't do this.' It's always verbal. Can we provide verbalizations along with either static or dynamic representations of this type of information? Okay.
>> Okay.
>> Okay. So here's another example; that was one student doing problem solving in one context. So here, just to throw you to the other side, here's an example, okay. The video I'm going to show you is taken from the perspective, as you're watching it, of the bottom of the table, okay; at the bed is the lead clinician, and these are the three team members. So what we're going to show you here is the eye-tracking data. Okay, these balls that you see jumping around here: this is the eye-tracking data of the resident who is in charge of saving this child, who was just born and is on the verge of dying. Okay, this is a very precarious delivery, for mom and baby. Okay. And the question is, we want to be able to see that. And I did the same thing on purpose: I turned off the audio. Okay. So this is what it looks like. And this, you're going to hear, this is the guy whose eye tracking you're seeing, from a different angle. So here now, let's hear the whole thing.
>> [Video audio] ...part two.
>> Okay. So it's looking good.
>> So he's giving the team members... you can see what he's looking at. We've actually isolated whether the next step is correct. I mean...
>> How long should we... how long...
>> And the baby, actually the mannequin, since that's a focal point, has multiple areas of interest. Okay. So we're trying to identify medical errors, and then ask how this resident, who has very limited clinical experience, basically manages themselves while also monitoring and regulating their entire team in order to save the child. Okay. So here we're looking more at performance and medical errors, okay, but also still at self-regulation.
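Scoring areas of interest such as the mannequin typically means mapping each fixation to a named region and accumulating dwell time. A minimal Python sketch, with invented AOI rectangles and synthetic fixations:

```python
# Minimal sketch: mapping fixations to named areas of interest (AOIs) and
# computing dwell-time proportions. AOI rectangles are invented examples.
AOIS = {
    "mannequin_head":  (400, 200, 700, 400),   # (x_min, y_min, x_max, y_max)
    "mannequin_chest": (400, 400, 700, 650),
    "monitor":         (900, 100, 1200, 350),
}

def aoi_of(x, y):
    for name, (x0, y0, x1, y1) in AOIS.items():
        if x0 <= x <= x1 and y0 <= y <= y1:
            return name
    return "elsewhere"

# fixations: (x, y, duration_ms) triples -- synthetic examples here
fixations = [(550, 300, 420), (1000, 200, 180), (560, 500, 600), (50, 50, 90)]

dwell = {}
for x, y, dur in fixations:
    name = aoi_of(x, y)
    dwell[name] = dwell.get(name, 0) + dur

total = sum(dwell.values())
for name, ms in sorted(dwell.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {ms} ms ({ms / total:.0%})")
```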
And here are some other examples of the work we do. In some cases we have game-based simulations where there's a pandemic. This used to be our favorite game-based learning environment, okay, and I'll tell you in a second why; I'll leave it there until we go to the next slide. We also do work with the military, so we have a very low-fidelity simulation of a tank commander, okay. And here are some more kids using VR. And then I will show you... yeah, we'll come back to that. Methodological issues: we have a ton. In most of our research there's always at least one instrumented learner; sometimes we have two, sometimes three. There's always some research component, and I really want to emphasize, as exemplified by the research that we do, that our experiments tend to be, you know, longer; they're not just 30 minutes, they tend to be multi-day. Okay. And then the question becomes: where do we administer our learning outcomes? When and where do we administer our self-report measures, etc.? Something that we have not done yet, but are planning to start doing this year, is retrospective analysis: giving the participant (and the human could be a teacher, a student, really anyone) a break, and then basically showing them their multimodal data back.
Now, to come back to what used to be our favorite environment: it's called Outbreak Simulator, built working with colleagues in data science and, actually, epidemiology. Unfortunately the video doesn't work; our university security system just kicked in or something. But basically, students can create a virus, and they can then deploy the virus anywhere in the United States. So let's say they create a virus and put it in New York City, and compare New York City versus, let's say, Orlando; it's a population dynamics simulation. They get to see how many people are incubating, how many people are dying, how many people are not affected, etc. And then the idea is to have these metahumans be part of the scaffolders, if you will. So what I wanted to show you is an example of what it looks like: here's an undergrad student from our Burnett School of Biomedical Sciences. They need to know something about viruses, and so, you know, here's an example, at this point in time, of how many per day, on average, could be delivered for this simulation.
>> Oh, he's fully instrumented. So he's learning how to create a virus. He gets to a point, after he creates the virus, where it generates a whole bunch of data visualizations, and we're trying to see if he's making the right inferences. Okay.
So we ask the global question: yes, this student is actively monitoring and regulating their learning, right? But what are they learning, right? How do we know? That's the basic, fundamental question. So this, as I was mentioning, was funded two and a half years ago, under our previous administration. And on April 1st of last year, almost 10 months ago, given the new administration, they figured that we should not be teaching anyone about pandemics or preparing for pandemics. So this was an NIH-funded project that was terminated by the federal government without any notice. So we are trying to figure out ways to still be able to teach students, right, K-12 and even college students, about pandemics, because obviously it is an extremely important topic, without being, if you will, too confrontational with our regime. So this is what it looks like: we collect a whole bunch of data, right? And then, what we are not going to be able to talk about today, because it would take hours, is how we decide which features of each of these data types we collect, how we use them for prediction, and then how we use them to make our systems much more intelligent, right? And these are some of the citations, by us and some of our collaborators.
So, all of this is to say that what you haven't seen under the hood is that this produces so many different types of signals. We have physiological signals, we have behavioral signals, affective and motivational indicators, cognitive and metacognitive indicators, and contextual information. And the question for us, and this is usually the bottleneck, is: how do we take all this data? How do we temporally align the data? Which data channels do we focus on, you know? What inferences are we making? And is it just for pure publication, or is it also for design, when we work with our computer science, AI, and even game-based learning environment colleagues to design new environments? And again, not to scare you, but to take it to the next level: if you think just about the work that we do in VR, it produces a lot of processes, metrics, and inferences, and the red line is basically an example: if you were just to collect log-file data, right, these are examples of the kinds of metrics that you'd be able to extract from it, okay, and those are some of the inferences that you're able to make. Okay? And so some data channels contribute very idiosyncratic, right, or very specific processes. But sometimes we also have overlap across data channels, where you're able to capture the same data, so then you can at least converge the data. Okay? And this is just a snippet; this is obviously not the entire set of data that can be generated.
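As a concrete, hypothetical example of the log-file row in such a table: even a simple event log supports SRL-relevant metrics like note-taking rate and page revisits. The event names and the log below are invented for illustration:

```python
# Minimal sketch: extracting simple SRL-relevant metrics from an event log.
# Event names and the log itself are invented for illustration.
log = [
    {"t_ms":      0, "event": "open_page", "page": "heart_anatomy"},
    {"t_ms":  42000, "event": "take_note"},
    {"t_ms":  61000, "event": "open_page", "page": "blood_flow"},
    {"t_ms":  95000, "event": "self_quiz"},
    {"t_ms": 130000, "event": "open_page", "page": "heart_anatomy"},
]

notes = sum(1 for e in log if e["event"] == "take_note")
quizzes = sum(1 for e in log if e["event"] == "self_quiz")
pages = [e for e in log if e["event"] == "open_page"]
revisits = len(pages) - len({p["page"] for p in pages})  # repeated pages

session_min = (log[-1]["t_ms"] - log[0]["t_ms"]) / 60000
print(f"notes/min: {notes / session_min:.2f}")
print(f"self-quizzes: {quizzes}, page revisits: {revisits}")
```

Metrics like revisit counts are then treated as behavioral indicators of monitoring and strategy use, which is the inference step the slide refers to.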
So, for multimodal data, you know, it's not perfect. Not many people want to do this, because it's expensive, it's time-consuming, and it takes a lot of training, right? And then here are some of the data issues that we deal with, right? We have privacy issues, of course, and ethical issues, right? So, for example, say you're collecting data in a school. Well, we have to ask parents. We actually provide them a list of all the data, and they indicate which data they would allow us to collect on their children. A lot of parents, of course, don't want their children's facial expression data to be collected, right? We have some counties that will not allow us to collect physiological data on the children. Okay? So we work with the parents in terms of being respectful, right? Ethics is extremely important. But sometimes it's also very incomplete data, right? The data is messy, and there's a lot of volume. Okay, so there are a lot of issues to deal with. And, going all the way back to 2015, in one of our chapters in the Handbook of Cognition and Education, edited by Dunlosky and Rawson, we actually tried to help the community understand that, depending on whether you're looking at quality or quantity, these are, for example, you know, the issues that you can deal with, and these are the sample data that you would collect. And then part of this chapter is to provide researchers with the types of research questions and hypotheses that you can ask from these different types of data, okay, and the analyses. So yes, not to go into this; I'm trying to be mindful of time. Where are we at now?
12:07. Yeah. So there are major accomplishments that have been achieved. Obviously, generative AI is really pushing the envelope, allowing us to collect a lot more data and to have systems that are much more adaptive, even though, of course, they hallucinate; you know, they're not perfect. One thing I wanted to mention, when it comes to measurement and support, is that we as a science, right, are still in a kind of very descriptive mode; we're describing stuff, and that's fine. But the question becomes, and you see this in the learning analytics community: when can we get to prediction? Have I collected enough eye-tracking data of, let's say, a middle school student learning about biology and examining a very complex cell? Have we collected enough data, maybe after five or 10 minutes, to be able to predict that they will show a good understanding, or an excellent understanding, if we give them an assessment? Okay, that's where you want to go. And then, if we can do that, when are we going to get to the point of explanatory challenges? When will the system have enough data on the student, let's say, to be able to explain: 'You've been using the system for algebra for three weeks now, and the trends that we've seen are X, Y, and Z, and if you want to improve your metacognitive judgments of learning, then this is what you need to do'? These two last pillars are really where we would like to go. Okay.
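Framed as a supervised learning task, the prediction question is: do features from the first few minutes of gaze data predict post-test understanding? A minimal scikit-learn sketch on synthetic data, shown only to illustrate the shape of the task, not any actual result:

```python
# Minimal sketch: predicting post-test understanding from early gaze features.
# Features and labels are synthetic; this shows the task setup, not a finding.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
n = 120  # hypothetical students

# Per-student features from the first 5 minutes: mean fixation duration (ms),
# proportion of dwell time on the diagram, and number of regressions.
X = np.column_stack([
    rng.normal(220, 40, n),
    rng.uniform(0.1, 0.7, n),
    rng.poisson(12, n),
])
y = (X[:, 1] + rng.normal(0, 0.15, n) > 0.4).astype(int)  # 1 = good post-test

clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")
print(f"5-fold AUC: {scores.mean():.2f} +/- {scores.std():.2f}")
```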
So yes, and here are some challenges for adaptivity and personalization, and we'll come back to this one tomorrow. Right? And of course, for those of you who are interested, right, there's Michail Giannakos's 2022 multimodal learning analytics handbook. This is actually free; you don't even have to buy it from Springer; it's available online. Okay. And then Mohammed Saqr, who is at the University of Eastern Finland: this one is also free, okay. This is a practical guide to using R as a statistical environment for analyzing this kind of multimodal, multi-channel data; it's really good. And then Suzanne Demoji, who is one of our postdocs on our SELLA project; she's at Radboud University. Okay, in 2025 she synthesized 42 articles based on MetaTutor and the FLoRA engine, which is an engine that has been developed by Dragan Gašević, our colleague, Maria Bannert, Sanna Järvelä, and Inge Molenaar.
So, if you're interested, those are available. And then we look at what the future looks like. Okay. We continue to work with teachers and administrators, looking at, you know, whether we can increase and enhance instructional decision-making if students were instrumented, in classrooms that were instrumented. One of our projects asks: what is the classroom of the future going to look like? Okay. Or, when we talk about, for example, open learner models: let's give access to the humans and explain to them how we're making these decisions and understandings about their metacognition, right? Another one, part of the SELLA project funded by the Jacobs Foundation: the content now is that kids, these are 12- to 15-year-olds, are learning about AI, for a future of AI, with AI tools, okay. So we're doing this massive, 30-plus-country international study to compare adolescents across AI learning. And then we also use immersive virtual environments, okay; we keep working on these. These tools are environments for us to be able to teach about self-regulation, okay, using different types of agents, okay, and they're also data collection devices. And then, for example, at least in the US, right, there is the issue of identity and minority students, right? So imagine you have a young African-American high school student who wants to go into health science as a career, okay, but has minimal role models and opportunities in her environment. She can develop an avatar of her future self, okay, where she sees herself as a clinician, and she gets to practice clinical skills, okay. So, we made it somewhat very COVID-related, respiratory skills, but don't forget, she's instrumented, right? So, as she is learning and practicing her clinical skills, she's leaving residue. So, here's, okay, a heat map of the way she was trying to intubate the patient. Normally you'd have to have the teacher there, looking and doing it in real time. Well, she's leaving residue. Now the question is, can we use multimodal data to assess the quality of her clinical skills? Okay, so that's from an assessment perspective; but then, can we use it also to support the development of those intubation skills, if you will? And this here is just to show you the kinds of data that we collect prior to learning, prior and post, in the second and third columns. This is the multimodal data that we would collect while she's learning and using clinical skills. And this also has to do with career choice and interest in the health sciences. So there are plenty of other retention and engagement data that we can collect.
And now we're moving into the human digital twins, and I'll talk more about this tomorrow. Our university is funding the hiring of 38 new faculty members across disciplines. There's a lot of digital twin work that comes from engineering; so, you know, engineers of course have a digital twin of a tank, a digital twin of a plane, a digital twin of an engine. And the question is, well, I'm a psychologist; I study humans. Can we have a human digital twin? Okay. So that we can potentially embody some of these processes and then have it operate, learn, solve a problem. Okay.
So what does that look like? So, you know, these are some of the projects we're engaging in. Whether it's a high school student learning with her digital twin about, you know, eukaryotic cells. When we talk to some of our school administrators, they say, 'Well, Dr. Azevedo, what we'd love is to have a digital twin of a high school math teacher, because that is the hardest thing to teach.' Okay, so we're working with them, and the question is, well, imagine you're teaching your students and you have a human digital twin of yourself. What role does that digital twin play? Right? What does it do to support the human teacher? Okay. We also have some military work; so, you know, a digital twin of a commander in a three-crew tank, okay. What is a digital twin there? Well, there's not going to be any room for a physical entity; it's going to be more of a voice with a brain, if you will. And then in the health sciences: some of our work is also on patient education. So we deal, for example, with patients with breast cancer. So, imagine here, right? Unfortunately, our medical system could be better. You're lucky if you spend 5 or 10 minutes with your doctor when you go see your doctor. But if you're a patient and you've been diagnosed with, for example, stage four breast cancer, well, the last thing I want is to be rushed, okay, in terms of trying to understand what I have and how that's going to affect me and my family and my health, etc. What if we had a digital twin who could spend time with the patient and explain all the different options, etc., or help a novice clinician, okay, or in a critical care setting? So, we're doing a lot of this work in K-12, as you can see, in the health sciences, and in the military. Okay.
And here is also some of the work we're doing. So, here's another example. You've already seen Megan. So, for example, here is Dr. Tucker. She's a physical therapy faculty member in our school of health professional sciences. And what you're seeing is two walls; it's actually a projection wall. Okay, it's a projection wall, and we're basically simulating undergraduate students in her physical therapy class who are learning to physically manipulate premature babies. These babies are actually mannequins, okay. And the question is, how can we help that faculty member assess whether those undergrad students are actually doing the manipulation properly? Right? So, not only have they brought the incubator in (okay, we can bring a whole bunch of incubators in here), but we're also projecting the stressors of being in this medical environment, where you have babies crying, and you may have parents intervening and interrupting the medical staff, and other medical staff, etc. The same goes for some of the work that we do with nurses, okay, nurse practitioners; we'll talk a lot about that tomorrow.
Simulated learners. One of our colleagues here, there's a group that has developed TeachLivE, okay; one of them has retired, and one has left. And the question is, can we model the simulated learners, if you will, after real students, okay? And can we put teachers in front of these simulated learners? Okay, we've submitted some grants. Most of our colleagues here are focusing on students with learning disabilities, okay, and behavioral management in the class. My question, and what we've been trying to do, is: well, instead of that (of course, those are important issues), can we build different self-regulatory profiles into those students? So a teacher gets to learn what it is like to deal with the most unmotivated student, who's got low metacognitive monitoring skills, very few learning strategies, and, you know, basically doesn't want to be in class. Okay. How do we deal with that, and can we model that?
Trying to be mindful of time, we'll talk about some of those issues; let's kind of wrap up, if you will, with instructional and assessment issues in AI. Okay, there are quite a number of issues. Two of the most important ones are the ones here that are elaborated. So, what are the rules for adaptive instruction? You know, you go to a conference like AERA or EARLI, and people are like, 'Oh yeah, well, you know, adaptivity is based on Vygotsky's zone of proximal development.' I'm like, oh yeah, sure, that's great. But how do you know what those ranges are for the zone of proximal development? Or, 'Yeah, we use Piaget.' That's great; use Piaget's theory of cognitive equilibrium. But how do you know, as a teacher, when assimilation and accommodation are happening, right? Those are very challenging questions. Theoretically, we can talk about them in a graduate seminar, etc., but the question is, what are those rules for adapting, especially if you have multimodal data? What do we adapt? To what, when, how, and by whom or what? Is it the teacher? Is it the agent? You know. And then the other thing that we're experiencing, at least here in the US context, is that our teachers, I mean, you know, with all due respect, they lack training in self-regulation, right? If we look at the literature, teachers are not very comfortable engaging students' metacognition, or even assessing metacognition. Why is that? Why are they not being taught that? That would be like me saying I want to become a surgeon, but I'm not going to take anatomy. I mean, it doesn't make any sense, right? And then the other issue is this lack of AI tools that empower teachers. Now, I know there's a lot of focus on dashboards, right? But what about beyond dashboards?
So, for example, if you look back at this picture here, right, with simulated learners: let's say I'm a teacher, and tomorrow we're changing topics in chemistry. So here's my immersive environment; it's a chemistry environment, okay. Here are my six worst students (sorry for saying that; I don't mean to disrespect students), okay. And it's like: I want to introduce a new topic, and I want to see, for these six students, what challenges they're going to experience tomorrow in class, or next week in class, so I can be better prepared, as opposed to just walking in and, hey, I'm going to hope for the best. Okay. But this kind of environment could also be for students. I know I'm going to have a problem learning how to, let's say, balance chemical equations, and these are the top students in class; because I know I'm going to have challenges, I want to see how they're going to solve the problem, and can they show me the strategies they're using, so that when I get to class I'm much better prepared. Okay, so, being mindful of time, I'm going to come back to a few of these on Wednesday. So there are plenty of educational issues to talk about, you know, conceptual and theoretical issues, methodological ones. And always, at the core of our research, is the question: what's the role of humans, right? Humans being teachers, parents, peers, and artificial agents of different kinds, right? And yes, this work is not possible, obviously, without, you know, our wonderful lab; we have 20 students and postdocs, all the way from undergrads to postdocs, in our lab, and, you know, international collaborators whom we really cherish and love working with. Okay, and I think we'll stop there, to give people at least some time to talk.
>> Yes. Thank you.
>> Thank you so much, Roger, for a fascinating talk. We will now move to the Q&A session. So, if anyone in the audience would like to ask a question, please raise your hand, your virtual hand, and we'll call on you. We also already have one question, I think, in the chat, so we can start with that. Or, if people... ah, Guy. Guy is already here, so, Guy.
>> Yeah, so first of all, thanks a lot, Professor; it's a real honor to hear you today. And thank you also to the Open University for this opportunity; your research is very valuable for us. I have two questions that might be related.
>> The first one is that I feel there is an ongoing debate, all the time, on how trainable these SRL skills are; usually it's connected to brain mechanisms, and to how limited we are, how limited students are, in developing these skills. And it is related to the second question. So, first of all, what is your perspective on this issue? And the second one is: how advanced is SRL research in terms of neuroscience approaches? Because we know about EEG and EDA, but is there anything more advanced?
>> Yeah. No, no, thank you, Guy, so much for your questions. I'll answer the second one first, if that's okay. Yeah, I think that's a big one: we need to connect more of, you know, the kind of cognitive neuroscience research that we know, right, especially in cognition, metacognition, and even emotions, right, to the work on self-regulated learning, right? If we look at our models, we don't include that level of abstraction or description, and, you know, we could do better there. So, for example, in our lab we're about to purchase an fNIRS, to combine fNIRS and EEG, to start looking at things like cognitive load, right? Because only Anique de Bruin has proposed a couple of things in terms of cognitive load; it was actually in Educational Psychology Review, just published last year, right? We need to start including these processes, and, you know, fNIRS and EEG would start, um, providing additional evidence. And to your first question, which now I forget... I'm trying to retrieve it... I should have gone there first. Trainable? How trainable?
>> Yes. Yeah, so, absolutely; and that's something that we haven't done, right. I know, for example, Tova Michalsky and Bracha Kramarski, right, and even, as long ago, I guess, Zemira Mevarech, right, did some of the work on IMPROVE, I think it was for math, right? And that's for teachers. So the question is, you know, can we move beyond stand-up delivery, right? Also, Yves Karlen and Charlotte Dignath, right, in Switzerland and Germany, have been doing a lot, and you guys, your team, have been doing some of this work as well. The question is, how do we get to designing, potentially, AI-based technologies for the actual teaching and training of those self-regulatory skills, right? Because a lot of the stuff we see in the literature is a one-size-fits-all approach to teaching: oh, first we teach medical knowledge, and then we teach medical procedures, and then we teach the most difficult one, which is the conditional knowledge, right? And so, we actually have an NSF grant that is under review, and the question there is: can we stop building these digital learning environments, or advanced learning technologies, to focus on a particular type of student, the typical content, etc., and, more importantly, can we develop a system that teaches self-regulation across tasks, right? If that makes sense, right? And I think that's one area that we need to... Did that make sense? Right. Yeah, I think so.
>> So, Tuvi also has a question in the chat. So maybe you've already...
>> Yes. I can just rephrase it; it's okay. So basically, regarding facial gestures and eye tracking: one thing I think we know, from anyone who's given talks or taught classes in different cultures, is that this is very culture-dependent. So, for example, in Israel, we wear our heart on our sleeve. We would, you know, snort; we would feel like we're in the family room and express everything. Americans might be in the middle. And Asians, for example, in many cultures, you can give a whole talk and get no visual feedback.
>> How do you handle that?
>> Yeah, no, absolutely, that's a great question. So, from a measurement perspective, we always get a baseline. So, talking about emotions, right: there's also different expressivity, right? A lot of this is culturally based, but also developmentally based, right? We know that children, you know, are more willing to let emotion bleed through their face, right, to show that there's discontent, etc. Versus an adult, right? If I'm mad at, you know, the director of my school, right, I'd better do my best facial expression retention, so that I don't get in trouble, right? So, as an adult... So we always collect data on basic emotions, and you are being compared to yourself, not to others, right? So, for example, when it comes to, let's say, facial expressions, there are databases that we use: there's a database for children, and there's a database for African-Americans also, because when we started this, let's say 20 years ago, when I was at the University of Memphis, the majority of the students were African-American students, and because of the contrast in their faces, the algorithms had a terrible time trying to detect even confusion, the AU4. So we actually have an illumination system. Right, there are differences between, like you said, genders also, and then the context, right? We have some contexts where we're basically trying to get the clinician to be empathetic, so the virtual patient actually accentuates the crying on purpose, right? So yes, absolutely; we try not to generalize. Of course, with something like confusion, you know, any culture will show confusion. But the problem that we have also seen in the literature, when it comes to confusion, what they call action unit 4, which is this furrowed brow, is that there have been a lot of generalizations, and we're like, no: I mean, you know, I could look confused, but I'm actually not confused; if I'm solving a math problem, I may actually be in deep concentration, right? So yeah, we have to be very careful and very considerate of cultural differences and developmental differences, right? And even, for example, some of my colleagues do work, I don't know about you, on students on the spectrum, right? And this also goes back to Guy's question, right? I mean, if I have not established theory of mind, am I even going to engage with an avatar, regardless? I mean, I can't even make an inference about your cognitive state, right? So, yeah, absolutely very important questions.
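The "compared to yourself, not to others" point corresponds to per-participant baseline normalization of affect signals. A minimal Python sketch with synthetic action-unit intensities; the sampling rate, durations, and threshold are assumptions for illustration:

```python
# Minimal sketch: per-participant baseline normalization of a facial
# action-unit signal (e.g., AU4 intensity), so each learner is compared to
# their own neutral baseline rather than a population norm. Synthetic data.
import numpy as np

rng = np.random.default_rng(1)
baseline = rng.normal(0.30, 0.05, 300)   # 30 s neutral-task recording (10 Hz)
session = rng.normal(0.45, 0.10, 3000)   # 5 min learning-task recording

mu, sigma = baseline.mean(), baseline.std()
z = (session - mu) / sigma               # within-person z-scores

# Flag moments that are unusually intense *for this participant*.
flagged = np.flatnonzero(z > 2.0)
print(f"{len(flagged)} of {len(session)} samples exceed the personal "
      f"baseline by more than 2 SD")
```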
Any other questions?
Okay, maybe just one more. So, I think some people might say that they object to the idea of measuring understanding based on signals, rather than on performance on a test. For example, I remember, you know, being in advanced math classes. Some people would be like chess players: they would gaze at the screen (actually, it was a whiteboard), and they would look to the side, and anyone would think they're, you know, actually not in the scene, but their mind was racing ahead. They saw what they needed, and they were now processing. And others, who normally would look, you know, very normal, when they were thinking very hard looked like they were on the spectrum, if you didn't know them.
>> All kinds of very strange behavior. And...
>> And still, they would have said... I guess if I had said, you know, 'It looks like you're not engaged,' or 'It looks like you didn't understand,' they would have said, 'Actually, I'm the best student in class. Look at my test results.' How do you react to that?
>> Yeah. No, no, absolutely. I think that... oh...
>> Sorry. He's... he's 18 pounds. I can't fight with him. Sorry.
>> That's funny. Yeah. No, absolutely. So the whole issue of individual differences is important to look at, right? And I think what you raised also is this issue of expertise development, right? We're also making the assumption that humans can express these things, right? So we collect data on process, right, because we're also interested in how these processes temporally unfold in real time, and that's more for understanding, right? But then we also converge, right? Most of the literature, not just our group but many groups around the world, right, asks: how do we compare, and which one is more predictive? Is it the fact that I've collected, let's say, 5 or 10 minutes of your eye tracking, and from that I can see how well you're going to perform, let's say, on a math test, right? Or is it just the math test? Because we're also interested in the process: which process, which data channel, is the most predictive of different outcomes? And what's interesting is that we find so much variability. For example, in MetaTutor, right, we have students, from pre- to post-test, who were in the control group, who had agents, but the agents provided no scaffolding, and who still performed extremely well from pre to post. But by contrast, we have students who were in the full-agency, full-adaptive-scaffolding condition, who got the best support that we can provide, and who still, you know, let's say, started with 50% and ended with 50%. And we're like: but you had the agents there to scaffold, and you still didn't learn, right? So we're interested in that variability, right, and in the question of why, right? And how do we support these students? And I think on Wednesday, when we have the keynote, we want to talk a little bit more about this: now let's have the student be able to interact and talk naturally with, let's say, a GenAI-driven pedagogical agent, right? It's like: 'I think I'm still not understanding this very well. Can you generate a new diagram, or give me a new problem that is less complex?' Like, can we take it to that next level? Right. So, I hope that answers your question.
>> Any last question?
Okay. So, Shi?
>> Thank you very much. That's it.
>> Yeah.
>> So, it was mind-blowing.
>> I met you a few years ago. I'm a student of Dr. Billy Alam, Professor Billy Alam, and we met many years ago. So it is mind-blowing where it's going. Thank you very much.
>> Oh, thank you. Thank you so much.
>> Yeah.
So, let us thank our keynote speaker once again. Thank you, Roger, for an engaging talk and a stimulating discussion, and again for agreeing to join us virtually.
>> Thanks, and we hope to see you on Wednesday.
>> On Wednesday, yeah. Thank you so much, Noa and Nina. Yeah, thank you, everyone, for attending and for your wonderful questions.
>> Thank you.
>> Thank you.
>> We hope to see you in Israel.
>> Yes. Yes.
>> Yes. I'm so sad I couldn't go.
>> Thank you. All right. Thanks a lot.
So, we will now take a 30-minute break, give or take, and then return for the second and final session of the pre-conference, featuring our second keynote speaker, Professor Richard Mayer. Those of you who registered should have received the Zoom link by now; I'll post it in the chat for anyone who didn't receive it, or who didn't register and can now join.
Wait, I'll do that.
>> Okay. Um, please join us again at 8, using the link you were sent or the one that is now posted in the chat; it's the same one. Hoping to see you all there, and enjoy the break.