Chais Pre-Conference Keynote: Prof. Roger Azevedo, University of Central Florida 2026

Transcript

0:05

Good evening everyone and good morning

0:07

to Professor Roger Azevedo, who is joining

0:09

us from Florida. I'm delighted to

0:12

welcome you all to the opening session

0:14

of the 2026 Chais pre-conference.

0:17

My name is Dr. Na Brundle. I'm a

0:19

post-doctoral fellow at the Open

0:21

University's Research Center for

0:22

Innovation in Learning

0:24

Technologies, and I serve as head of the

0:26

Chais conference organizing committee.

0:29

Um, we are honored to have Professor

0:31

Azevedo here with us as our first keynote

0:33

speaker. Although we had planned to meet

0:35

in person, uh, the current circumstances

0:38

have required us to connect via Zoom

0:40

instead. While we regret not being able

0:42

to welcome you to Israel at this time.

0:45

We're grateful for the opportunity to

0:46

have you join us virtually and we very

0:48

much look forward to meeting you in

0:50

person in the near future.

0:52

So without further ado, it is my great

0:55

pleasure to introduce Professor

0:57

Roger Azevedo from the School of

0:58

Modeling, Simulation, and Training at the

1:00

University of Central Florida. Professor

1:03

Azevedo is a world-renowned expert in

1:05

the field of self-regulated learning and

1:07

his interdisciplinary research examines

1:09

the cognitive, metacognitive, emotional,

1:12

motivational, and social processes

1:14

involved in learning with advanced

1:16

learning technologies. He has authored

1:18

over 300 peer-reviewed publications and

1:20

currently serves as co-editor-in-chief

1:22

of the British Journal of Educational

1:24

Psychology as well as on the editorial

1:26

boards of several leading journals in

1:28

learning and cognitive sciences.

1:30

Professor Azevedo will now deliver his

1:32

keynote lecture entitled Measuring and

1:35

Supporting Self-regulated Learning and

1:37

Metacognition in Digital Learning

1:39

Environments. Before we begin, a brief

1:42

note for the audience. If you have any

1:43

questions during the talk, you're

1:45

welcome to submit them via the chat and

1:47

they will be addressed at the end of the

1:49

session where you can also ask your

1:50

question live. Roger, the floor is

1:54

yours.

1:55

>> Well, thank you so much uh Dr. Bandal.

1:57

Thank you so much for a wonderful um

1:59

introduction. So happy to be here. Thank

2:01

you. And also Dr. Blau for the

2:03

invitation. And yes, I'm sorry I won't

2:05

be able to be with you in person, but

2:08

like you said, hopefully in the future.

2:10

Thank you for the opportunity to present

2:11

today. So we changed it from a

2:13

workshop to a lecture. So maybe I'll spend

2:15

about 45 minutes talking and then we'll

2:17

open it up to questions if that's okay.

2:19

Uh yes and so today we're going to be

2:21

focusing on measuring and supporting

2:23

self-regulated learning and

2:24

metacognition uh in digital learning

2:26

environments. We also call them advanced

2:27

learning technologies and if you look in

2:29

the literature people call them

2:31

technology-rich environments, etc.

2:35

So really focusing on measuring

2:36

supporting, and then I'd also like

2:37

to acknowledge all the funding agencies

2:39

uh because this work is also not

2:41

possible without some uh major uh

2:43

funding agencies from the federal

2:45

government uh from the military also and

2:48

then from various specialized

2:50

centers and also some of our work is

2:52

also influenced by some of the work when

2:54

I was still at McGill University uh and

2:56

then we're also funded by the early and

2:58

the Jacobs Foundation and we'll talk

3:00

about some of those uh projects. So

3:02

What I really want to focus on is the

3:05

following, if we go to slide number two.

3:07

So kind of start off with a little bit

3:09

of an overview in terms of the fact

3:11

that there is a science of learning with

3:12

advanced learning technologies. You'll

3:14

find a lot of this work that comes from

3:16

educational psychology, cognitive

3:18

science, learning sciences and also the

3:20

field of artificial intelligence and

3:22

education. You know some people uh that

3:24

kind of have discovered Gen AI. We see

3:26

this a lot in the literature and some of

3:27

these public forums. It's almost like oh

3:29

my gosh there's this thing called AI and

3:31

education. You know, there's an entire

3:32

field uh that goes back way before you

3:35

know ChatGPT came out, etc. I kind of

3:39

want to give everybody kind of a

3:41

global perspective on when we talk about

3:42

metacognition and self-regulation,

3:44

right? We're really talking about at

3:46

least from our perspective today is the

3:48

human's ability to have an awareness.

3:50

Are you monitoring? Are you regulating?

3:52

Are you evaluating? Reflection,

3:53

adaptivity. Those tend to be the very

3:55

high level macrolevel processes that we

3:58

think that learners, any type of learner

4:00

when using a technology will experience

4:02

those and go through those processes.

4:04

But again it depends who the learner is,

4:06

what the task is and the learning

4:07

technology

4:09

and really what we have been doing for

4:11

the last over close to 30 years is

4:13

really to use technologies to do a whole

4:15

bunch of things. We're trying to

4:17

induce those processes like you know

4:19

trying to trigger metacognitive

4:21

awareness or we're trying to detect

4:22

because if you want the system to be

4:24

intelligent to provide individualized

4:26

scaffolding, for example, it needs to

4:29

detect and today we're going to be

4:30

focused on the detection and modeling

4:32

and then tracking and supporting and

4:34

foster like once we know or have a good

4:36

idea what the student needs then how can

4:38

we support and foster those

4:40

self-regulatory processes right and that

4:42

could be related to cognition emotions

4:45

motivation

4:46

right, and affect.

4:48

And then what we're going to focus on

4:50

today really is not just the typical

4:52

self-report measures which have

4:54

basically uh inundated our literature

4:56

for decades, almost centuries, right?

4:59

Instead, we use multimodal data. So it's good

5:01

to know that students have

5:03

self-perceptions about these uh

5:05

metacognitive self-regulatory processes.

5:08

What we have been doing is using

5:10

basically, we want to know how

5:12

they're doing those processes and how

5:14

they're engaging in those processes in

5:16

real time right and to do that we use

5:18

the multimodal data right which is log

5:20

files so any kind of interaction between

5:23

the learner and the technology is

5:25

captured typically at the millisecond

5:26

level. Sometimes we have them go through

5:29

concurrent think-alouds. So a student

5:30

who may be reading something about

5:31

biology and then they'll say something

5:33

like, oh, I don't understand this. That's

5:35

beautiful because we can segment that

5:37

that means they're monitoring their emerging

5:38

understanding. Then the question becomes

5:41

what does a human do? How do you

5:42

regulate that? And if you can't then can

5:45

we use a technology to regulate that and

5:47

that could be with a pedagogical agent

5:49

right or any other kind of environment

5:51

and we'll talk a little bit about that.
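To make that concrete, here is a minimal sketch, in Python, of how a think-aloud utterance like "I don't understand this" could be segmented and coded into macro-level SRL categories. The keyword rules and category labels are illustrative assumptions for exposition, not the actual coding scheme used in this research program.

```python
# Minimal sketch: keyword-based coding of concurrent think-aloud utterances
# into illustrative SRL macro-level categories. The rules and labels below
# are hypothetical stand-ins, not the lab's actual coding scheme.
RULES = {
    "monitoring":   ["i don't understand", "doesn't make sense", "am i right"],
    "planning":     ["my goal is", "first i will", "i need to"],
    "strategy_use": ["let me reread", "i'll summarize", "take notes"],
    "evaluation":   ["that worked", "i was wrong", "now i get it"],
}

def code_utterance(utterance: str) -> str:
    """Return the first SRL category whose keywords appear in the utterance."""
    text = utterance.lower()
    for category, keywords in RULES.items():
        if any(k in text for k in keywords):
            return category
    return "uncoded"  # in practice, human coders resolve these segments

print(code_utterance("Oh, I don't understand this part"))  # -> 'monitoring'
```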

5:53

Eye movements, facial expressions,

5:54

physiological sensors and then we also

5:57

have audio and video recordings of the

6:00

complete, kind of bird's-eye view of the

6:02

learner, the task, the context, and all

6:04

their interactions. Right? We're

6:05

focusing specifically on CAMMS

6:07

processes, that is, the cognitive,

6:09

affective, metacognitive, motivational, and

6:11

social processes. And related to this

6:14

talk is once we have a good model

6:15

understanding of what the student the

6:17

learner needs then the question becomes

6:19

how do we make it adaptive. So a lot of

6:21

the literature is a little bit confusing

6:22

because some people talk about

6:24

adaptivity, some people talk about

6:26

personalization and then if you look in

6:28

the game based learning environment they

6:29

talk about gamifying a

6:31

game. So what does that really mean?

6:34

Okay. And I'm going to talk a little bit

6:37

focusing on the assessment and support

6:39

uh by showing you some of our current uh

6:41

projects and we'll have an opportunity

6:43

to talk about that and then hopefully uh

6:45

Wednesday we'll continue with this which

6:47

is talking about you know the natural

6:49

language processing how do we use LLMs

6:51

what kind of genai tools artificial

6:53

agents and then really want to focus on

6:55

on Wednesdays to talk about kind of this

6:58

future of AI which is the use of

6:59

simulated learners and human digital

7:02

twins, which are supposed to be replicas of

7:04

who we are. Okay, so that's a little

7:07

bit. So we asked the fundamental

7:08

questions, right, as a psychologist,

7:10

right? Is self-regulation across ALTS or

7:12

advanced learning technologies,

7:15

right? So we ask fundamental questions.

7:16

What is self-regulation? Right? What is

7:18

it? When, where, and how is it

7:20

occurring? How are students deploying them?

7:23

How do we measure these processes?

7:25

Right? What kinds of methods and

7:27

techniques and approaches exist that we

7:29

use? Uh how do we analyze? Yes, we use

7:32

traditional, you know, statistical

7:34

methods, but we use also mixed methods.

7:36

Sometimes we do qualitative analysis,

7:38

but what we've also been doing is

7:40

using machine learning techniques,

7:43

right? And now we're getting into the

7:44

computational modeling because there is

7:47

a whole other area in the AI field

7:50

that talks about artificial metacognition

7:52

which is different than human

7:54

metacognition. But what can we borrow

7:56

from that field for us to basically

7:58

design uh more intelligent systems? And

8:01

then there's been a new push uh with

8:03

people's dissatisfaction with how we can

8:06

measure self-regulation as it temporally

8:08

unfolds is the use of complex dynamical

8:11

systems which you know philosophers of

8:13

science have been using, computational

8:15

modelers have been using, biological

8:17

sciences, etc. And then, because I'm in

8:20

the school of modeling and simulation we

8:21

also take it to the next level. Well, if

8:23

we collect all this data on these

8:24

humans, right? Well, how do we model

8:27

self-regulation? Is it an avatar that

8:29

basically says, well, if you want to

8:31

increase your metacognitive awareness,

8:32

you should do X, Y, and Z. Okay, that

8:35

could be one. But can we use intelligent

8:37

data visualization? So, imagine a

8:38

teacher in the classroom that has a

8:40

dashboard, right? What information could

8:42

we provide to that teacher, right? And

8:45

then simulation. Uh, so how do we

8:47

simulate these in different environments

8:49

and different artificial agents, right?

8:51

to maybe model some new approach to

8:53

providing scaffolding. So those are some

8:55

of the fundamental questions that really

8:57

drive our research. And here I have a

8:59

little snippet of some of the

9:00

environments that we've built. Right? So

9:02

the top right is our typical student,

9:04

right? Who is instrumented? We're

9:06

collecting screen capture, emotions, log

9:09

files, eyetracking, keyboard, uh

9:11

physiology, mouse, think-alouds, etc.

9:15

But you know, that's okay in the

9:17

lab if we can just basically try to

9:19

control as much as possible. But you

9:20

know we also do work in the classroom.

9:23

Well it's not feasible to have that kind

9:24

of setup for 30 or 40 students. So the

9:27

question is what research strategy do

9:29

you develop? So when we do 30 to 40

9:32

students, let's say K-12, right, we

9:34

typically collect log file data right uh

9:37

and also pre- and post-tests of the

9:40

content knowledge right and also some

9:42

self-report measures. So we're missing

9:43

all that multimodal data. All we have is

9:45

log file. So what inferences can we make

9:47

about these processes? Sometimes the top

9:50

left here's a high school student,

9:52

right? Sometimes schools don't even have

9:54

labs where we can actually put you know

9:56

our setups. So for example, here's a

9:58

student learning about uh synthesizing

10:00

compounds in chemistry and the data is

10:02

actually being collected in a wet lab in

10:04

a high school or the bottom right where

10:06

we have two clinicians, clinical

10:08

students basically. Uh one is using a

10:11

facial EMG. You can see the sensors on

10:13

her cheek and on her head and she's

10:15

wearing a portable eye tracker. And here

10:17

we're testing some new technologies when

10:19

we're dealing with nursing and

10:21

healthcare training. Okay. So, uh those

10:24

are some of the big questions.

10:26

Conceptualization. I'm not going to get

10:27

too much into this. Just want to say

10:28

that related to measurement and support,

10:31

there are many many different

10:32

conceptualization issues that we need to

10:35

uh discuss, present, worry about, right?

10:39

Uh so some of them for example is the

10:41

fourth bullet. What is the boundary between

10:43

these metacognitive processes? When a

10:45

human goes from awareness to monitoring to

10:47

regulation, when is it happening? Why is

10:49

it happening? Right? And then the

10:51

question is if it's becoming

10:53

maladaptive, then how can a learning

10:55

technology or digital learning

10:57

environment intervene? And how does

10:58

it intervene, right? And unfortunately,

11:01

some of our theories in educational

11:02

psychology and learning sciences, right,

11:05

are either too abstract, right, and

11:08

they're not prescriptive enough, they're

11:10

more descriptive, right? And the

11:12

question is well that doesn't really

11:14

help those of us who also design

11:16

intelligent systems right so there's a

11:19

lot of work to be done uh there's

11:20

developmental differences that occur

11:22

individual differences right and then

11:25

from an assessment and measurement

11:26

perspective, this is all assuming that

11:29

a human can externalize. We're making all

11:31

these processes overt,

11:34

so that everybody can see them. But

11:36

what happens when you are an expert at

11:39

engaging in metacognition

11:41

and you no longer talk about it and you

11:45

have skills and knowledge and so the

11:46

question then becomes, it's invisible,

11:48

if you will; we can't see it happening.

11:51

okay so we have a couple of these issues

11:54

but what I want to talk about also is

11:55

that you know all our work is

11:56

theoretically driven. Here's just a

11:58

sample of some of the models and

12:00

theories and frameworks of

12:01

self-regulation metacognition that we

12:03

use and these are very applicable to our

12:05

area right because some of our studies

12:07

we go back to the top left that is the

12:09

basic one, right? Nelson and Narens' model

12:12

of metacognition, if you

12:14

will. A very simple model. It can be

12:16

used for reading. It can be used for

12:17

really anything. Okay. Then over here on

12:20

the right-hand side, next to it, in the

12:22

orange and blue, is a computational model

12:24

of metacognition.

12:27

The third one here is one that we have

12:29

used extensively. That's the Winne and

12:30

Hadwin model of information processing

12:32

theory. Right? That also makes

12:34

assumptions about the phases, the

12:35

sequences, the operations that a human

12:38

goes through as they're learning about

12:39

really any kind of topic with a learning

12:42

technology, whether it's a game, an

12:44

intelligent tutoring system, etc. We

12:46

also see a lot of work by Dunlosky that

12:48

differentiates the monitoring component

12:50

from the control or the regulation

12:52

aspect. Right here, just to show you that,

12:54

according to some theories or models

12:56

like this one, there is an expected time

13:00

during learning or performance or

13:03

reasoning or problem solving when a when

13:04

a human should be deploying some of

13:06

these processes, right? But then also

13:09

some of our work has to do with not just

13:11

cognition, metacognition, but sometimes

13:13

our students get frustrated because the

13:15

content is too hard, right? Or

13:17

they need scaffolding and they're not

13:18

getting scaffolding. So we use the

13:20

D'Mello and Graesser model of affect. Okay.

13:23

And then on the bottom here you have two

13:25

different models of emotions that come

13:27

from other disciplines. So we have Klaus

13:29

Scherer's and then we have James Gross's.

13:32

Right. One thing that

13:33

we would like to continue doing, since this

13:35

is a model of emotion regulation, is: can

13:38

we embed emotional regulation in our

13:40

digital learning environments. Right? So

13:42

the students can actually attempt to or

13:44

see a model like how do I stop

13:46

ruminating? How do I engage in cognitive

13:48

reappraisal? Do I know that's better

13:50

than rumination? Right? And then uh on

13:53

finishing off here on the bottom left is

13:56

work by our dear colleagues Sanna

13:58

Järvelä and Allyson Hadwin and their

14:01

colleagues on socially shared

14:03

regulation. Right? In some cases we have

14:05

two humans interacting with let's say a

14:07

game environment. Sometimes it could be

14:09

a human and an artificial agent. So the

14:12

question becomes how do we extend models

14:14

of self regulation into socially shared

14:16

regulation, external regulation,

14:19

co-regulation right so a lot of this

14:20

work has been done and this work is now

14:22

actually more important because as

14:24

people start using GenAI, the question

14:26

becomes, what is GenAI? Okay,

14:30

just to give you a snippet uh before we

14:32

talk about more data here's a snippet of

14:35

some of the work that we do just to show

14:37

you that we cut across different, I'll

14:39

just say, sorry, not to be

14:41

disrespectful to ourselves as humans,

14:43

across humans, okay, tasks and content.

14:47

Uh, so the top left is we've been doing

14:49

tons of work on Crystal Island, which is

14:51

a game-based learning environment. Here,

14:52

students have to self-regulate because

14:54

they're learning about microbiology and

14:56

they're learning how to engage in the

14:58

scientific reasoning process. Uh, bottom

15:00

left is again an example of multimodal data

15:03

during complex problem solving. So a

15:04

student who's using a game-based learning

15:06

environment this is typically what they

15:08

would look like a fully instrumented

15:10

student. In the top right here, we do

15:13

a lot of work also here the digital

15:15

learning environment is not a typical

15:17

technology, it's a high-fidelity

15:19

mannequin. Okay it's a clinical

15:22

healthcare mannequin. So in this case

15:23

for example we work with one of our

15:25

children's hospitals, where we actually

15:27

model and simulate using a mannequin of

15:30

a child, a resuscitation process, which we

15:33

know that if a child is not resuscitated

15:35

right then they die. So here we look at

15:38

team performance right with nurses and

15:40

pediatricians who are residents and

15:42

sometimes emergency medicine and they

15:44

have seven minutes to save the child. So

15:46

the idea is that this child will

15:49

communicate or try to communicate with

15:51

the team. Okay. So here, in this

15:53

case, it's a high-stress environment

15:55

where you're not just learning about the

15:56

cell structure. Here you are actually

15:59

saving lives. Okay. Uh our new work here

16:02

is on simulated learners. I'll talk

16:04

more about this on Wednesday during the

16:06

keynote. Imagine a student, a high

16:08

school student who's having trouble with

16:09

algebra problem solving, right? She gets to

16:11

actually teach her simulated learner how

16:14

to solve the problem and also about her

16:16

self-regulatory skills that she would

16:18

use. And then she basically would get to

16:20

see her agent, her simulated learner

16:23

model all these processes. But what the

16:25

student doesn't know is that sometimes

16:27

the simulated learner will deviate, make

16:29

more errors, use different strategies

16:31

because we want that to be the trigger

16:33

for the student to say, "Hey, I didn't I

16:35

didn't ask you to do that. That's not a

16:36

strategy. Is that better?" So in one way

16:38

to accumulate and learn metacognitive

16:40

knowledge and problem solving. In some

16:43

cases, we talk about uh system beliefs.

16:46

So for example, here's an open learner

16:48

model. This comes out of Judy Kay's work.

16:50

She's at the University of Sydney. A

16:52

lot of our AI systems make decisions,

16:55

for example, about adaptivity and

16:56

scaffolding, but we never really tell

16:58

the students or show them why we're

17:00

scaffolding them in a

17:02

particular way. So an open learner model

17:04

is a data visualization tool that

17:06

basically takes the beliefs of the AI

17:08

system and shows them to the learner so

17:11

they can understand and they can

17:13

potentially negotiate, right? It's like,

17:15

oh, you think I'm a really poor

17:17

self-regulator. Well, why is that? You

17:20

know, I don't think that is true. So,

17:21

let me basically through sliders use the

17:23

data visualizations to improve the

17:25

system. Sometimes we try to accelerate

17:28

their learning by showing them some of

17:29

their heat map, which is their attention

17:31

allocation. Okay.
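As a concrete illustration of the idea, here is a minimal sketch of a negotiable open learner model. The fields, evidence strings, and averaging rule are hypothetical; open learner models in the style of Judy Kay's work are far richer and typically require justification or new evidence before the belief moves.

```python
from dataclasses import dataclass, field

# Minimal sketch of a negotiable open learner model: the system exposes
# its belief and evidence, and the learner proposes a new value (the
# "slider"). The compromise rule below is an illustrative assumption.
@dataclass
class OpenLearnerModel:
    skill: str
    system_belief: float                    # 0..1 estimate of the skill
    evidence: list = field(default_factory=list)

    def negotiate(self, learner_value: float) -> float:
        self.system_belief = (self.system_belief + learner_value) / 2
        return self.system_belief

olm = OpenLearnerModel("metacognitive monitoring", 0.3,
                       ["skipped 4 self-checks", "2 overconfident judgments"])
print(olm.negotiate(0.7))  # learner disputes the low estimate -> 0.5
```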

17:33

We also do a lot of work in VR. And then

17:35

the top one here, which I'll also talk

17:37

more about tomorrow, here's Megan. Megan

17:39

is one of my postdocs. And we have

17:41

created a digital twin of Megan who

17:43

lives in a box. And we're using

17:46

NLP and AI. Basically, imagine you

17:48

talking to your future self. Megan can

17:51

be a teacher, right? So it's like, hey,

17:53

I'm a new teacher. I'm having a really

17:55

hard time, you know, uh engaging

17:57

emotional regulation strategies in my

17:59

middle school kids. Can you show me,

18:01

right, how to do that? Megan could be a

18:04

student. Megan could be a teacher. Megan

18:06

could be a parent. Some of our work in

18:07

biomedical science, Megan can also be a

18:09

clinician or a patient. Okay? So imagine

18:12

being diagnosed with pre-diabetes,

18:15

right? And we have all her genome data,

18:18

okay? And then the question becomes,

18:19

well, show me what's going to happen to

18:21

me in 5 months, 6 months, a year, will I

18:24

actually become diabetic? Okay? Or if I

18:27

don't continue to make any kind of

18:29

changes, what's going to happen to me in

18:30

five or 10 years from now? Will I lose a

18:32

limb? Will I be blind? Okay? So, can we

18:35

do disease progression?

18:37

Uh, so that's some of the work uh that

18:40

we're doing. So yes, we're dealing with

18:41

a lot of theoretical uh issues which

18:44

obviously you know trying to be mindful

18:46

of time. Uh there's quite a number of

18:48

them. One of them is these theories. Do

18:49

we have theories? So their new handbook

18:51

of AI and education came out in 2023.

18:53

There's another one coming out this year

18:56

where we kind of list a lot of like how

18:58

do we embed these theories of

18:59

self-regulation into these different

19:01

types of digital learning environments,

19:03

right? Not just traditional but like I

19:05

was mentioning in mannequins also. Okay.

19:09

And so if we look at the literature in

19:11

terms of the learning technologies that

19:14

have been developed right which ones

19:16

have focused on SRL and have used SRL,

19:19

right? So the top right is MetaTutor,

19:21

which I'll come back to in the next

19:22

slide. This is one that we developed over

19:25

15 years ago. It's about learning the human

19:27

circulatory system and we have different

19:28

agents, and each agent is responsible: one

19:30

is for cognition

19:32

one is for metacognition and it's a very

19:35

kind of you know multimedia hypermedia

19:37

environment very traditional intelligent

19:38

tutoring system where we can detect

19:40

everything that the student is doing.

19:42

The bottom one, BioWorld, is from my

19:44

former adviser, who actually just

19:47

retired two years ago, Susanne

19:49

Lajoie, from McGill University. This is

19:52

self-regulation also by using more

19:53

cognitive load theory and this is for

19:55

medical students where they learn how to

19:56

solve cases. Here's Crystal Island.

19:59

Completely different approach here.

20:01

There is no AI in the system. Okay, it's

20:04

a game-based learning environment. It's

20:05

purely constructivist, open-ended. The

20:07

students basically have to

20:09

solve the mystery in order to end the

20:11

game. Okay. Uh our colleagues at

20:15

Vanderbilt, Gautam Biswas, in terms of

20:17

Betty's Brain, a completely different

20:19

approach. This one is learning by

20:20

teaching. So Betty's Brain is this white

20:22

box here, right? It starts off as blank.

20:25

So the students are learning about

20:26

ecology and then they're teaching Betty

20:28

by populating her brain if you will,

20:30

okay, with concepts and relationships.

20:32

And then they get another student to go

20:34

and ask Betty about, you

20:37

know, how phosphates and nitrates are

20:38

related to water quality.

20:40

And they can ask those questions, but

20:42

guess what? Betty could only answer

20:44

correctly if you taught her about that

20:46

topic. So the kids go into these cycles

20:48

of teaching and learning. Phil Winne,

20:51

one of our long-term

20:53

collaborators who also retired two years

20:55

ago, developed gStudy, right, which is a

20:57

very hypermedia, hypertext-based

21:00

environment where the assumption that

21:01

Phil makes is that because when you

21:04

highlight right you're actually engaging

21:06

metacognitive processes okay and then

21:08

one of our colleagues at NC State who's

21:11

developed SimStudent. This is for

21:12

algebra and here the students are

21:15

actually teaching, almost like a sim

21:17

learner; that is, the students

21:18

are teaching the simulated students how

21:22

to solve these problems. So

21:24

again just want to show you that there

21:26

are ALTs, or advanced learning

21:27

technologies or digital learning

21:29

environments, that have been using

21:31

self-regulation different models and

21:32

theories and the question becomes where

21:34

is the AI in some of these systems.

21:36

Okay. And then one that I wanted to

21:38

emphasize was MetaTutor. This is one

21:40

that we developed over 15 years ago and

21:44

uh well now four years ago we actually

21:47

synthesized all the literature that we

21:49

did in terms of cognition,

21:50

metacognition, emotions and motivation

21:53

by all these co-authors. These

21:55

co-authors were at one point in time

21:57

either my postdocs or grad students who

21:59

are now faculty at different

22:00

universities across the world

22:03

uh who led some of these projects.

22:05

Right? But as you can tell, it's got

22:06

agents. It's got a self-regulation

22:08

palette. It's got a whole bunch of things: a

22:10

timer, table of contents, etc. Because

22:12

we wanted to be able to track

22:13

everything. And that work with the

22:15

original pedagogical agents has given uh

22:18

rise to new environments such as

22:20

different versions of MetaTutor, where

22:22

now we actually hire actresses for

22:24

example where we videotape them and take

22:26

pictures and we literally if you will

22:28

peel off their face to make it much more
lifelike,

22:31

so that when the student is interacting

22:33

for example, her name is Emma, when

22:35

she's interacting with Emma, Emma

22:36

actually gets to respond to her so that

22:38

there is some kind of affective

22:40

connection in terms of engagement and

22:42

also using facial expressions to trigger

22:45

some metacognitive processes. Like when

22:47

Emma kind of looks like this, like I'm

22:48

doing now, it's like you probably are

22:50

spending too much time on irrelevant task

22:53

content. Okay, so we're using facial

22:57

expressions and verbalizations to do

22:59

that. And then what I wanted to show you

23:01

is really at the crux of all this. So we

23:03

also have an NSF-funded project

23:06

with five universities, one in Europe,

23:08

three in the US and it's about parallel

23:11

programming which is a very complex

23:14

topic for undergrad computer science

23:16

students. So what I wanted to show you is the

23:18

kinds of data as we start talking about

23:19

how and what we are measuring. So what I

23:21

want to show you is different aspects of

23:22

this. So here is a video screen

23:25

recording okay of a student an

23:27

undergraduate student learning about

23:29

semaphores and parallel programming in

23:31

this game called Parallel, and so the

23:34

question is okay so here it's better

23:36

than nothing because now I have screen

23:37

recording I can see what you're doing

23:39

okay but think about it from our

23:42

perspective as researchers and designers

23:45

of learning technologies. Well, what happens

23:46

now if I have screen recording

23:50

plus the student's eye tracking in

23:51

real time. This is creating a

23:54

CSV file, which looks like an

23:56

Excel file, with 250 rows per second, because it's a 250

24:01

Hz eye tracker. I'm getting 250 data

24:03

points per second as to what the student

24:05

is doing. Okay, so that's a little bit better.
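To give a feel for the scale: 250 Hz is 250 rows per second, i.e., 15,000 rows per minute of recording. Below is a minimal pandas sketch of reducing such an export to per-second gaze averages. The file and the column names (timestamp_ms, gaze_x, gaze_y) are assumptions for illustration; vendors' CSV layouts differ.

```python
import pandas as pd

# Minimal sketch: summarize a 250 Hz gaze export (~250 rows/second) into
# per-second averages. Column names are assumed, not a vendor's real schema.
gaze = pd.read_csv("gaze_export.csv")          # hypothetical export file
gaze["second"] = gaze["timestamp_ms"] // 1000  # bucket samples by second
per_second = gaze.groupby("second")[["gaze_x", "gaze_y"]].mean()
print(per_second.head())
```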

24:08

Well, what I want to show you is that

24:10

now we got screen recording happening

24:13

here. We got the eye tracking of the

24:15

student. Okay. And what you're seeing

24:17

moving really really slowly here that

24:19

looks like a whole bunch of EKGs is the

24:22

facial detection of affective

24:24

states like curiosity. Okay. Anger etc.

24:29

Okay. But I turned off his voice on

24:31

purpose because we're trying to figure

24:32

out the question of what we gain with more

24:35

data. And here is the holy grail.

24:41

>> Got everything you had before.

24:45

He's going to talk soon.

24:46

>> Yeah, you can toggle on the links to

24:48

display all the time

24:50

>> on your build your solution. To turn it

24:52

off, simply click this button in the

24:53

sidebar.

24:55

>> Can you hear him?

24:56

>> Display all the time. Okay. I'm not sure

24:58

that.

24:59

>> Yes.

24:59

>> Thank you. Thank you. Okay. Just to show

25:02

you that one issue for

25:04

measurement is each of these data

25:06

channels contributes to our

25:08

understanding of particular

25:09

self-regulatory metacognitive processes.

25:12

But if you notice the language is the

25:13

one that basically contextualizes

25:15

everything.

25:17

Okay. And so these are all dynamic

25:20

versions of different types of data,

25:22

right? That we then use to understand

25:24

why this student solved the problem

25:26

correctly or was having challenges. And

25:28

then we also have uh non-dynamic

25:30

representations. So for example, after

25:32

they do one level of the game,

25:34

here's a heat map, right, that shows us

25:36

the eye movement to basically where they

25:38

allocated their attention more

25:41

than anywhere else. Okay.
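Such a heat map can be approximated by binning gaze samples over the screen grid and smoothing; a minimal sketch, with made-up screen size, smoothing width, and stand-in gaze samples:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Minimal sketch of an attention heat map: bin gaze samples into screen
# pixels, then smooth. All parameters here are illustrative assumptions.
def heat_map(xs, ys, width=1920, height=1080, sigma=40):
    grid, _, _ = np.histogram2d(ys, xs, bins=(height, width),
                                range=[[0, height], [0, width]])
    return gaussian_filter(grid, sigma=sigma)  # hotter = more attention

rng = np.random.default_rng(0)                 # stand-in gaze samples
hm = heat_map(rng.uniform(0, 1920, 5000), rng.uniform(0, 1080, 5000))
print(hm.shape)  # (1080, 1920)
```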

25:43

So the question becomes, we have a lot of

25:45

data channels, multimodal data channels, and we

25:47

can make inferences from each of these

25:49

right and then the question is we also

25:50

have summary data right so the question

25:53

is with all this data we tend to use it

25:55

as researchers to make decisions more

25:56

adaptive but also to understand

25:59

self-regulatory processes uh using this

26:02

game. The question though is from a

26:04

teaching and supporting perspective and

26:06

training, is: why can't we take these

26:09

snippets of data especially if they're

26:12

meaningful, okay, and give them back to

26:14

the students so that now they could

26:16

understand because otherwise it's what

26:18

we typically do. Oh, you didn't do very

26:20

well because you didn't do this. It's

26:22

always verbal. Can we provide

26:24

verbalizations with either static or

26:26

dynamic representations of this type of

26:28

information? Okay.

26:31

>> Okay.

26:32

>> Okay. So here's another example in a

26:34

different setting. That was one

26:35

student doing problem

26:37

solving in one context. So here, just to

26:39

throw you to the other side, here's an

26:42

example. Okay, the

26:44

video I'm going to show you is taken

26:45

from the perspective as you're watching

26:47

the video, from the bottom of the table,

26:49

by the bed is the lead clinician,

26:54

and these are the three team members. So

26:56

what we're going to show you here is the

26:57

eye-tracking data. Okay, these balls that

27:00

you see here, jumping around, this is

27:02

the eyetracking data of the resident who

27:05

is in charge of saving this child who

27:07

was just born and is on the verge of dying. Okay,

27:10

this is a very uh precarious delivery of

27:13

this child for mom and baby. Okay. And

27:15

the question is, we want to be able

27:18

to see that. Now, I did the same thing

27:20

on purpose. I turned off the audio.

27:22

Okay. So this is what it looks like. So

27:24

this is, you're going to hear, the

27:26

guy whose eye tracking

27:28

you're seeing from a different angle. So

27:30

here now let's hear the whole

27:31

>> part

27:33

two.

27:36

>> Okay. So it's looking

27:42

good.

27:44

>> So he's giving the team members. You can

27:46

see what he's looking at. We've actually

27:48

isolated the next step in correct. I

27:51

mean

27:52

>> how long should we how long

27:54

>> And the baby, actually the mannequin,

27:56

since that's a focal point, has multiple

27:58

areas of interest. Okay.
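Relating gaze to areas of interest is, at bottom, hit-testing gaze coordinates against labeled regions of the scene; a minimal sketch with made-up rectangles (real AOIs would be annotated on the scene video):

```python
# Minimal sketch: classify gaze points against rectangular areas of
# interest (AOIs). Coordinates are invented for illustration only.
AOIS = {
    "airway":  (200, 100, 320, 180),   # (x1, y1, x2, y2) in pixels
    "chest":   (180, 200, 360, 340),
    "monitor": (600,  50, 780, 220),
}

def aoi_hit(x: float, y: float) -> str:
    """Return the first AOI containing the gaze point, else 'elsewhere'."""
    for name, (x1, y1, x2, y2) in AOIS.items():
        if x1 <= x <= x2 and y1 <= y <= y2:
            return name
    return "elsewhere"

print(aoi_hit(250, 150))  # -> 'airway'
```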

28:00

So we're trying to figure out medical errors, and then

28:02

how does this resident who has very

28:04

limited clinical experience, basically

28:07

manage themselves and also monitor

28:10

and regulate their entire team in

28:12

order to save the child. Okay. So

28:14

looking here, we're looking more at

28:16

performance and medical errors. Okay.

28:18

But also still self-regulation.

28:20

And here's some other examples of the

28:22

work we do. In some cases we have game-

28:23

based simulations where there's a

28:25

pandemic. Um, this used to be our

28:28

favorite uh game-based learning

28:30

environment. Okay. And I'll tell you in

28:32

a second why. I'll

28:34

leave it there for a second until we go

28:36

to the next slide. Uh, we also do work

28:38

in the military. So we have very low

28:40

fidelity simulation of a tank commander.

28:43

Okay. So basically looking So here's

28:46

some more kids using VR. Um

28:50

and then I will show you uh

28:54

yeah so we'll come back to that

28:57

Methodological issues, we have a ton. So we

28:59

typically in in most of our research

29:01

there's always at least one instrumented

29:03

learner sometimes we have two sometimes

29:05

have three there's always some research

29:07

component and I really want to emphasize

29:09

that, as exemplified by the research

29:11

that we do is our experiments tend to be

29:13

you know, longer; they're not just 30

29:16

minutes, they tend to be multi-day.

29:19

Okay. And then the question becomes

29:21

where do we administer our learning

29:22

outcomes? Where do we when and where do

29:24

we administer our self-report measures

29:26

etc. Uh something that we have not done

29:29

yet that we're planning on doing

29:30

starting this year is also doing

29:31

retrospective analysis of giving the

29:34

participant, let's say, or the human, who could

29:36

be a teacher, could be a student, really

29:38

anyone, a break, and then basically

29:42

show them their multimodal data back.

29:44

Now to come back to this used to be our

29:47

favorite uh

29:49

um environment. It's called Outbreak

29:52

Simulator, where, working with colleagues

29:54

uh in data science and actually

29:55

epidemiology

29:57

is that they mapped unfortunately the

29:59

video doesn't work. Our university

30:00

security system just kicked in something

30:03

but basically uh students can see uh

30:07

what happens they can create a virus

30:09

they can then deploy the virus on

30:11

anywhere in the United States. So let's

30:13

say they create a virus, they put

30:14

it in New York City and compare New York

30:16

City versus let's say Orlando and it's a

30:19

population dynamics. They get to see how

30:21

many people are incubating, how many

30:23

people are dying, how many people are

30:24

not affected, etc.
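The population dynamics on display are essentially a compartmental epidemic model; a minimal SEIR-style sketch in Python (all rates and sizes are illustrative, not the simulator's actual parameters):

```python
# Minimal SEIR-style sketch of the population dynamics the simulator
# visualizes: E = incubating, I = infectious, R = removed. Illustrative only.
def seir(days, n=8_000_000, beta=0.4, incubation=5.0, recovery=7.0):
    s, e, i, r = n - 1.0, 0.0, 1.0, 0.0
    for _ in range(days):
        new_e = beta * s * i / n   # new exposures per day
        new_i = e / incubation     # exposed becoming infectious
        new_r = i / recovery       # infectious recovering or dying
        s, e, i, r = s - new_e, e + new_e - new_i, i + new_i - new_r, r + new_r
    return round(e), round(i), round(r)

print("incubating, infectious, removed after 60 days:", seir(60))
```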

30:27

And then the idea is to have these metahumans be part of

30:30

the scaffolders, if you will. So what I

30:32

wanted to show you is um here's an

30:35

example of what it looks like when

30:37

here's an undergrad student from our

30:39

Burnett School of Biomedical Sciences.

30:40

So they need to know something about

30:42

viruses and and so you know here's an

30:44

example uh at this point in time how

30:48

many per day average could be delivered

30:51

for this simulation.

30:53

>> Oh he's fully instrumented. So he's

30:55

learning about how to create a virus. So

30:58

he gets to a point, after he creates

31:05

the virus and it generates a whole bunch

31:08

of data visualizations, and we're trying to see if

31:10

he's making the right inferences.

31:12

Okay.

31:14

So we ask the global question: is this

31:15

student actively monitoring and regulating their

31:17

learning, right? But what are they

31:18

learning, right? How do we know? That's the

31:20

basic, fundamental question.

31:22

So this, as I mentioned, this was

31:24

funded two and a

31:26

half years ago under our previous

31:28

administration

31:30

and on April 1st of last year almost uh

31:34

10 months ago uh given the new

31:36

administration, they figured that we

31:39

should not be teaching anyone about

31:41

pandemics or preparing for pandemics. So

31:44

this was an NIH funded project that was

31:46

terminated by the federal government uh

31:48

without any notice. So uh we are trying

31:50

to figure out ways to still be able to

31:53

teach students right K12 and even

31:56

college students uh about pandemics

31:58

because obviously it is extremely

31:59

important topic uh without being too

32:02

confrontational if you will with our

32:06

regime

32:08

vaccine. Uh so this is what it looks

32:10

like. So we collect a whole bunch of

32:11

data right and then what we are not

32:13

going to be able to talk about today

32:14

because this will take hours

32:17

is then how do we you know which

32:18

features of each of these data types do

32:20

we collect how do we use that for

32:22

prediction and then how do we use it to

32:24

make our systems much more intelligent

32:26

right. And these are

32:28

some of the citations by us and some

32:30

of our collaborators

32:33

So yeah, all this to say that what you

32:36

haven't seen under the hood is that

32:37

this produces so many different types of

32:39

signals. We have physiological

32:41

signals, we have behavioral signals,

32:42

affective and motivational indicators,

32:45

cognitive, metacognitive indicators and

32:46

contextual information. And the question

32:48

becomes for us is this usually the

32:50

bottleneck is how do we take all this

32:52

data? How do we temporally align the data?
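Temporal alignment usually means joining channels sampled at different rates onto one timeline; a minimal pandas sketch using nearest-earlier timestamp matching, with hypothetical channels and column names:

```python
import pandas as pd

# Minimal sketch: align a sparse log-file channel with a denser
# facial-affect channel by nearest-earlier timestamp. Hypothetical data.
log = pd.DataFrame({"t_ms": [0, 1200, 2400],
                    "event": ["open_page", "take_note", "quiz_answer"]})
affect = pd.DataFrame({"t_ms": list(range(0, 3000, 100)),
                       "confusion": 0.1})  # e.g., 10 Hz affect scores

aligned = pd.merge_asof(log, affect, on="t_ms", direction="backward")
print(aligned)  # each log event now carries the latest affect estimate
```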

32:53

Which data channels do we focus on, you

32:56

know and uh what inferences are we

32:58

making and is it just for pure

32:59

publication or is it then also for a

33:02

design when we work with our computer

33:04

science and AI and even game based

33:05

learning environment

33:07

colleagues to design new environments

33:09

and again not just scary so take it to

33:11

the next level is if you think about

33:14

just even the work that we do in VR,

33:17

it produces a lot of processes, metrics,

33:19

and inferences and the red line is

33:22

basically so for example If you were

33:23

just to collect log file data, right,

33:26

these are examples of the kinds of

33:28

metrics that you'd be able to extract

33:30

from those. Okay? And also those are

33:33

some of the inferences that you're able

33:34

to make. Okay? And so some data channels

33:37

contribute very idiosyncratically, right, or

33:41

very specific processes. But sometimes

33:43

we also have uh across data channels

33:46

where you're able to capture the same

33:48

data. So then you can at least converge

33:50

data. Okay? And this is just a snippet.

33:52

This is obviously not all of the

33:55

data that can be generated. So for

33:58

multimodal data, you know, it's not

33:59

perfect. Uh not many people want to do

34:01

this because it's expensive. It's

34:03

time consuming. It takes a lot of

34:05

training, right? And then here are some

34:07

of the data issues that we deal with,

34:08

right? We have privacy issues, of

34:10

course, ethical issues, right? Uh so for

34:13

example, you're collecting data in a

34:14

school. Well, we have to ask parents. We

34:16

actually provide them a list of all the

34:18

data and they indicate which data they

34:21

would allow us to collect on their

34:22

children. So a lot of parents of course

34:23

they don't want their children's facial

34:25

expression data to be collected. Right?

34:27

We have some counties that will not

34:29

allow us to collect physiological data

34:30

on the children. Okay? So we work with

34:32

the parents in terms of being

34:34

respectful. Right? Uh so that's

34:36

extremely important ethics and but

34:38

sometimes it's also very incomplete

34:39

data, right? The data is messy. It's um

34:43

there's lots of volume. Okay. So,

34:46

there's a lot of issues to deal with.

34:48

Um, and actually, going

34:51

back all the way to 2015, in one of

34:53

our chapters in the Handbook of

34:55

Cognition and Education of Dunlosky and

34:57

Rawson, we actually try to help the

35:00

community understand that depending on

35:02

if you're looking at quality of quality

35:04

and quantity. These are for example, you

35:06

know, the issues that you can deal with,

35:09

these are the sample data that you would

35:10

collect. And then part of this chapter

35:12

is to provide researchers with here are

35:15

the types of questions research

35:17

questions and hypotheses that you can

35:19

ask from these different types of

35:21

data. Okay. And analyses. So yes. So not

35:25

to go into this. I'm trying to be

35:26

mindful of time. Uh where are we at now?

35:28

12:07. Yeah. So there are major

35:32

accomplishments that have

35:34

been done. Uh obviously generative AI is

35:37

really pushing the envelope and allowing

35:38

us to collect a lot more data and

35:40

and to have systems that are much

35:42

more adaptive even though of course they

35:43

hallucinate but you know they're not

35:45

perfect.

35:47

Um,

35:50

one thing I wanted to mention also is

35:51

when it comes to measurement and support

35:53

is that we as a science, right, we're

35:57

still in this kind of very descriptive

35:58

phase; we're describing stuff, and that's fine.

36:02

But the question becomes, and you see

36:04

this in the learning analytics community

36:06

is when can we get to predict? Have I

36:08

collected enough eyetracking data of

36:10

let's say a middle school student

36:12

learning about biology and examining a

36:14

very complex cell? Have we collected

36:18

enough data maybe after five minutes or

36:19

10 minutes to be able to predict

36:23

that they have a good understanding or

36:25

an excellent understanding if we give

36:28

them an assessment. Okay.
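In machine-learning terms this is early prediction from partial traces: extract features from the first few minutes of data, then fit a classifier against the later assessment. A minimal sketch with synthetic data and invented feature names:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Minimal sketch: predict assessment outcome from early gaze features.
# Feature columns (fixation count, mean fixation duration, time on the
# cell diagram) and all data are synthetic, for illustration only.
rng = np.random.default_rng(0)
X = rng.random((60, 3))          # one row per student: first 5 minutes
y = (X[:, 2] > 0.5).astype(int)  # 1 = "good understanding" on post-test

clf = LogisticRegression().fit(X, y)
print(clf.predict_proba(X[:1]))  # early-warning style probability
```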

36:30

That's where we want to go. And then if we can do

36:32

that, then when are we going to get to

36:33

the point of explanatory challenges?

36:35

When will the system have enough data on

36:37

on this student, let's say, and be

36:40

able to explain like you've been using

36:42

the system for algebra for like 3 weeks

36:45

now, and the trends that we've seen

36:47

are X, Y, and Z, and if you want to

36:50

improve your metacognitive judgments of

36:53

learning then this is what you need to

36:56

do. That's kind of it; these two

36:58

last pillars are really where we would

36:59

like to go. Okay. So yes and here are

37:02

some challenges for adaptivity and

37:04

personalization. And we'll come back to

37:05

this one tomorrow. Right? And of course

37:07

for those of you who are interested,

37:09

right? There's Michail Giannakos's, this

37:12

is from 2022, a multimodal learning

37:14

handbook. This is actually free. You

37:16

don't even have to buy it for Springer.

37:17

It's available online. Okay. And then

37:19

Mohammed Saqr, who is at the University of Eastern

37:22

Finland, actually also, this

37:25

one is also free. Okay. This is a

37:27

practical guide to using R as a

37:30

statistical technique for analyzing

37:32

this kind of multimodal multi-

37:33

channel data. It's really good.

37:35

And then Suzanne Demoji, who's one of our

37:37

postdocs on our Cellar project. She's at

37:39

Radboud University. Okay. In 2025, she

37:42

synthesized 42 articles based

37:45

on MetaTutor and the FLoRA engine, which

37:47

is an engine that has been developed by

37:49

Dragan Gašević, our colleague Maria

37:52

Bannert, and Sanna Järvelä and

37:55

Inge Molenaar.

37:57

So if you're interested in those and

37:59

then we ask, what does the

38:00

future look like? Okay. So we continue

38:03

to work with teachers and

38:04

administrators on looking at you know

38:06

can we enhance instructional

38:09

decision-making if we had students

38:11

instrumented in classrooms that were

38:13

instrumented. One of our projects is,

38:15

like what is the classroom of the future

38:16

going to look like okay or when we talk

38:19

about for example open learner models

38:21

let's give access to the humans and

38:24

explain to them how we're making these

38:26

decisions and understandings about their

38:28

metacognition

38:29

right. Another one, part of the Cellar

38:31

project that's funded by the Jacobs

38:33

Foundation: here, the

38:36

kids, these are 12- to 15-

38:38

year-olds, are learning about AI, for a

38:40

future of AI, with AI tools. Okay, so we're

38:43

doing this massive, 30-plus-country

38:47

uh international um study to compare uh

38:50

adolescents across AI learning, and then

38:53

we also use immersive virtual

38:55

environments okay we keep working on

38:57

this. These are tools, environments,

38:59

for us to be able to teach, okay, about

39:01

self-regulation using different types of

39:03

agents. Okay. And they're also data

39:07

collection devices and then for example

39:09

at least in the US, right, there is the

39:13

issue of identity and minority students,

39:16

right? So imagine you have a young high

39:18

school African-American student who

39:20

wants to go into health science as a

39:21

career. Okay. But she has minimal role models and

39:25

opportunities. In this environment, she

39:28

can develop an avatar of her future

39:30

self, okay? Where she sees herself as a

39:32

clinician and she gets to practice

39:35

clinical skills, okay? So, we made it

39:38

somewhat COVID-related, with

39:40

respiratory skills. But don't forget,

39:42

she's instrumented, right? So, as she is

39:45

learning and practicing her clinical

39:47

skills, she's leaving residue. So,

39:49

here's okay, a heat map of the way she

39:53

was trying to intubate the patient.

39:56

Normally you'd have to have the teacher

39:57

there watching and doing it in real time.

39:59

Well, she's leaving residue. Now the

40:01

question is, can we use multimodal data

40:03

to assess the quality of her clinical

40:05

skills? Okay, so from an assessment

40:08

perspective, but then can we use it also

40:10

to support the development of those

40:13

intubation skills, if you will. And this

40:15

here is just to show you the kinds of

40:17

data that we collect prior to learning,

40:19

prior and post, the second and the third

40:22

column. This is the multimodal data that

40:24

we would collect while she's learning

40:25

and using clinical skills. And this also

40:28

has to do with career choice and

40:30

interest in health sciences. So there

40:31

are plenty of other uh retention

40:34

engagement um

40:36

uh data that we can collect and now

40:39

we're moving into kind of the human

40:41

digital twins and I'll talk more about

40:42

this tomorrow. So, our

40:44

university is funding the hiring of

40:47

38 new faculty members across

40:49

disciplines. There's a lot of digital

40:51

twin work that comes from engineering.

40:52

So, you know, engineers of course have a

40:54

a digital twin of a tank, a digital twin

40:56

of a plane, a digital twin of an

40:59

engine. And the question is, well, I'm a

41:01

psychologist. I study humans. Can we

41:03

have a human digital twin? Okay. So that

41:06

we can potentially embody some of these

41:08

processes and then have it operate,

41:10

learn, solve a problem. Okay. Um, so

41:14

what does that look like? So, you know,

41:16

these are some of the projects that

41:17

we're engaging in. Whether it's a high

41:18

school student learning with her digital

41:20

twin about, you know, eukaryotic cells.

41:23

When we talk to some of our school

41:25

administrators, they say, "Well, Dr.

41:26

Azevedo, what we'd love is to have a digital

41:28

twin of a math high school teacher

41:30

because that is the hardest thing to

41:32

teach." Okay, so the question is, well,

41:35

so we're working with them and the

41:36

question is, well, imagine you teaching

41:38

your students and you have a human

41:39

digital twin of yourself. What role does

41:43

that digital twin play? Right? What

41:46

does it do to support the human teacher?

41:48

Okay. Uh we also have some military. So,

41:52

you know, digital twin of a commander in

41:54

a three-crew tank. Okay. What is a

41:57

digital twin? Well, there's not going to

41:58

be any room for a physical entity there.

42:00

It's going to be more of a voice with a

42:02

brain, if you will. And then in health

42:04

sciences, everything from some of our

42:05

work is also on patient education. So,

42:07

we deal, for example, with patients with breast

42:10

cancer. So, imagine here, right?

42:13

Unfortunately, our medical system

42:17

could be better. You're lucky if you

42:18

spend 5-10 minutes with your doctor when

42:19

you go see your doctor. But if you're a

42:21

patient and you've been diagnosed with

42:22

stage four, for example, breast cancer,

42:24

well, the last thing I want to do is be

42:26

rushed, okay, in terms of trying to

42:28

understand what I have and how that's

42:30

going to affect me and my family and my

42:32

health, etc. If we had a digital twin

42:34

who could spend time with them and explain

42:35

all the different options, etc., to the

42:38

patient or to help a novice clinician,

42:42

okay? or in a critical care setting. So,

42:44

we're doing a lot of this work in in

42:46

K12, as you can see, uh, health sciences

42:48

and,

42:50

uh, the military. Okay. And here is also

42:54

some of the work we're doing. So, um,

42:58

here's another example. So, you've

42:59

already seen Megan. So, for example,

43:01

here is Dr. Tucker. She's a physical

43:03

therapy faculty member in our school of

43:06

Health Professions and Sciences. And what

43:08

you're seeing is two walls. It's

43:09

actually a projection wall. Okay, it's a

43:12

projection wall and basically we're

43:14

basically simulating undergraduate

43:17

students in physical in her physical

43:18

therapy class who are learning to

43:20

physically manipulate premature babies.

43:23

These babies are actually mannequins.

43:25

Okay. And the question is how can we

43:27

help that faculty member be able to

43:28

assess whether those

43:31

undergrad students are actually doing

43:33

the manipulation properly. Right? So,

43:36

not only have they brought the incubator

43:38

with them, okay, we can bring a whole bunch

43:40

of incubators in here, but we're also

43:42

projecting the stressors of being in

43:44

this medical environment where you have

43:46

babies crying. You may have parents

43:47

intervening and interrupting the medical

43:49

staff and other medical staff, etc. Same

43:52

thing with some of the work

43:54

that we do with nurses, okay? Nurse

43:55

practitioners, we'll talk a lot about

43:57

tomorrow. Uh, simulated learners. Um one

44:01

of our colleagues here there's a group

44:03

that has developed TeachLivE, okay, one

44:06

of them has retired, the other has left, and

44:08

the question is, can

44:09

we model, right, the simulated learners, if

44:12

you will, after real students, okay, and

44:16

the question is can we put teachers in

44:19

front of these simulated learners okay

44:20

we've submitted some grants most of our

44:22

colleagues here are focusing on students

44:24

with learning disabilities okay and

44:26

behavioral management in the class my

44:28

question is, and what we've been trying to do

44:30

is well instead of that of course those

44:32

are important issues is can we build

44:34

different self-regulatory profiles in

44:36

those students. So a teacher gets to

44:38

learn what is it like to deal with the

44:40

most unmotivated student who's got low

44:43

metacognitive monitoring skills very few

44:46

learning strategies and you know

44:50

basically doesn't want to be in class.

44:51

Okay. How do we deal with that

44:53

and can we model that? Uh trying to be

44:55

mindful. We'll talk about some of those

44:57

issues and let's kind of wrap up if you

45:00

will with instructional assessment

45:01

issues in AI. Okay, there's quite a

45:03

number of issues. Two

45:05

of the most important ones are

45:07

the ones here that are elaborated. So

45:09

what are the rules for adaptive

45:10

instruction? You know, you go to a

45:12

conference like AERA or EARLI and people

45:14

are like oh yeah well you know

45:15

adaptivity is based on Vygotsky's zone

45:16

of proximal development. I'm like oh yeah

45:19

sure that's great. Uh but how do you

45:23

know what those rings are for the zone

45:25

of proximal development or yeah or we

45:27

use Piaget,

45:29

uh that's great, use Piaget's theory of

45:31

cognitive equilibrium but how do you

45:33

know as a teacher when assimilation and

45:36

accommodation are happening right those

45:39

are very challenging questions so

45:40

theoretically we can talk about them in

45:41

a graduate seminar etc but the question

45:44

is what are those rules for adapting

45:46

especially if you have multimodal data: what

45:47

do we adapt to, when, how, and by whom

45:52

or what? Is it the teacher, is it the

45:54

agent, you know? Um, and then the other

45:56

thing that we're experiencing here at

45:58

least in the US context is that our teachers,

46:01

I mean you know with all due respect

46:04

they lack training on

46:06

self-regulation, right? If we look at the

46:08

literature teachers are not very

46:09

comfortable engaging students'

46:11

metacognition or even assessing

46:12

metacognition. Why is that? Why are they

46:14

not being taught that? That would be like

46:17

me saying I want to become a surgeon but I'm

46:20

not going to take anatomy

46:21

I mean it doesn't make any sense right

46:24

it's uh you know and then the other

46:26

question is like this lack of AI tools

46:28

that empower teachers now I know there's

46:30

a lot of focus on uh dashboards right

46:33

but what about beyond

46:35

dashboards so for example if you look

46:37

back at this picture here, right, with

46:41

simulated learners let's say I'm a

46:43

teacher and tomorrow we're changing

46:44

topics in chemistry. So here's

46:47

my immersive environment, it's a

46:48

chemistry environment okay here are my

46:50

six worst students. Sorry for saying

46:52

that. Don't mean to disrespect students.

46:54

Okay. And it's like I want to introduce

46:57

a new topic. I want to see how these six

47:00

students, what are the challenges they're

47:02

going to experience tomorrow in class or

47:04

next week in class so I can be better

47:06

prepared as opposed to just walking in

47:09

and hey, I'm going to hope for the best.

47:11

Okay. But this kind of environment could

47:13

also be for students.
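
To make the rehearsal idea concrete, here is a minimal sketch of running simulated learner profiles against the demands of an upcoming topic, so a teacher can see the likely struggles in advance. The profiles, skill names, and threshold rule are invented purely for illustration:

```python
# Hypothetical sketch: dry-run simulated learners on a new topic and
# list the skills each is likely to struggle with. The rule is invented.

def predicted_challenges(profile: dict, topic_demands: dict) -> list:
    """Skills where the topic demands more than the learner brings."""
    return [skill for skill, demand in topic_demands.items()
            if demand > profile.get(skill, 0.0)]

# Two of the "six students", as hypothetical skill scores in [0, 1]
students = {
    "disengaged": {"motivation": 0.1, "monitoring": 0.2, "strategies": 0.1},
    "strategic":  {"motivation": 0.9, "monitoring": 0.8, "strategies": 0.7},
}
# Assumed demands of tomorrow's topic, e.g., balancing chemical equations
balancing_equations = {"monitoring": 0.6, "strategies": 0.5}

for name, profile in students.items():
    print(name, "->", predicted_challenges(profile, balancing_equations))
```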

47:17

I know I'm going to have a problem

47:18

learning how to let's say balance

47:20

chemical equations and these are the top

47:23

students in class and because I know I'm

47:26

going to have challenges I want to see

47:28

how they're going to solve the problem

47:29

and can they show me the strategy

47:30

they're using so when I get to class I'm

47:33

much better prepared. Okay, so being

47:36

mindful of time, I'm going to come back to a few of these on

47:38

Wednesday. So there's

47:40

plenty of educational issues to talk

47:42

about, you know: conceptual, theoretical

47:45

issues, methodological. And always at

47:47

the core of our research is:

47:49

what's the role of humans, right? Humans

47:51

being teachers, parents, peers, artificial

47:54

agents of different kinds, right? And yes,

47:57

and this work is not possible obviously

47:59

without, you know, a wonderful uh lab

48:03

of ours. We have 20 students and postdocs, all

48:05

the way from undergrads to postdocs, in

48:07

our lab and you know international

48:09

collaborators that uh we really

48:12

cherish and love we're working with okay

48:14

and I think we'll stop there to give

48:16

people at least some time to talk.

48:21

>> Yes. Thank you.

48:23

>> Thank you so much, Roger, for a

48:25

fascinating talk. Um we will now um move

48:28

to the Q&A session. So if anyone in the

48:31

audience uh would like to ask a

48:33

question, please raise your hand, your

48:34

virtual hand, and we'll call on you. Uh

48:37

we also already have one question I

48:40

think in the chat. So we can

48:43

start with that. Ah, Guy, you've

48:46

raised your hand already. So, Guy,

48:51

>> yeah so first of all thanks a lot

48:54

professor, it's a real honor to hear

48:57

you today and also thank you to the open

49:00

university for this opportunity your

49:03

research is very valuable for us I have

49:06

two questions that might be related

49:10

>> the first one is that I feel that

49:12

there is, all the time, an

49:15

ongoing debate on how

49:19

trainable these SRL skills are, and usually

49:22

it's connected to the brain

49:24

mechanisms and how limited

49:28

students are in developing these

49:30

skills, and it is related to the

49:32

second question. So first of all, what

49:35

is your perspective on this issue and

49:36

the second one is

49:39

how advanced is the SRL research in terms of

49:42

neuroscience approaches? Because we know

49:45

about EEG and EDA, but is there anything

49:51

more advanced?

49:52

>> Yeah. No, no, thank you, Guy, so much for

49:54

your questions. Yeah, I'll answer the

49:55

second one first if that's okay. Yeah, I

49:58

think that's a big one. Um, we need to connect

50:00

more of you know the kind of cognitive

50:02

neuroscience research that we know right

50:04

especially in cognition metacognition

50:06

and even emotions right uh to the work

50:10

on self-regulated learning right if we

50:12

look at our models we don't

50:13

include that level of abstraction or

50:15

description which you know we could do

50:18

better so for example in our lab we're

50:19

about to purchase an fNIRS, combining fNIRS

50:23

and EEG, to start looking at things also

50:25

like cognitive load right because uh

50:27

only Anique de Bruin uh has proposed a

50:30

couple of things in terms of cognitive

50:31

load. It was actually in Educational Psychology Review

50:33

that was just published last year,

50:35

right? We need to start including like

50:37

these processes and you know would fNIRS

50:39

and EEG start, um,

50:42

uh you know providing additional

50:44

evidence and the question also is

50:48

to your first question, um, which now I

50:52

forget; I'm trying to retrieve it. I

50:54

should have gone there

50:55

first. >> Trainable, how trainable they are.

50:58

>> yes yeah so absolutely and that's

51:01

something that we haven't done right

51:02

that I know, for example, uh, Tova

51:06

Michalsky and Bracha Kramarski, right, and even

51:09

as long ago, I guess, Zemira Mevarech, right,

51:11

did some of the work on IMPROVE, I think it

51:12

was for math, right, and

51:14

that's for teachers. Uh, so the

51:17

question is you know can we move from

51:19

stand-up delivery, right? Also, like,

51:23

Yves Karlen and Charlotte Dignath, right,

51:25

in Switzerland and Germany,

51:27

have been doing a lot and you guys have

51:28

been doing some of this work your team

51:30

also. The question is, how do we go to

51:32

designing potentially AI-based

51:34

technologies for the actual teaching and

51:38

training of those self-regulatory skills

51:40

right because a lot of the stuff that we

51:42

see in the literature also is a

51:43

one-size-fits-all, right, approach to

51:46

teaching oh first we teach medical

51:47

knowledge and then we teach medical

51:49

procedures and then we teach the most

51:51

difficult one which is the conditional

51:53

knowledge, right? And so the question is,

51:56

uh, we actually have an NSF grant that

51:58

is under review, and the

52:01

question is

52:04

can we stop designing these, you

52:07

know digital learning environments or

52:09

advanced learning technologies to focus

52:10

on a particular type of student the

52:12

typical content etc and more importantly

52:16

can we develop a system that teaches

52:18

self-regulation across tasks right if

52:21

that makes sense right and I think

52:23

that's one area that we need to address.

52:25

Did that make sense? >> Yeah, I

52:28

think so.
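
To give the "teaching self-regulation across tasks" idea one concrete shape, here is a minimal sketch of a scaffolding rule driven by metacognitive calibration rather than domain content. It assumes we can observe a learner's judgment of learning (JOL) and their actual score; the threshold and prompts are placeholders, not a system described in the talk:

```python
from typing import Optional

def scaffold(jol: float, score: float, threshold: float = 0.25) -> Optional[str]:
    """Prompt when monitoring is miscalibrated, whatever the task domain.

    jol and score are both assumed to be on a 0-to-1 scale.
    """
    calibration_error = jol - score   # positive means overconfident
    if calibration_error > threshold:
        return "Re-check your understanding before moving on."
    if calibration_error < -threshold:
        return "Your answers were better than you think; keep going."
    return None                       # well calibrated: no scaffold needed

print(scaffold(jol=0.9, score=0.4))   # overconfident learner gets a prompt
print(scaffold(jol=0.5, score=0.55))  # calibrated learner gets None
```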

52:34

>> Uh, so Tuvi also has a question uh in

52:37

the chat. So maybe you'll already

52:40

>> Yes. So I can just rephrase it. It's

52:43

okay. So basically regarding facial

52:46

gestures and eye tracking, one thing I

52:50

think we know from anyone who's given

52:52

you know, talks or taught classes

52:55

in different cultures is that it is very

52:58

culture dependent. So for example in

53:00

Israel we wear our heart on our

53:03

sleeve. We would you know snort. We

53:04

would be like you know uh feeling like

53:06

we're in the family room and express

53:08

everything. Americans might be in the

53:11

middle and Asians for example in many

53:13

cultures you can give a whole talk and

53:15

you get no visual feedback.

53:18

>> Uh how do you handle that?

53:20

>> Yeah. No no absolutely that's a great

53:22

question. Yeah. Uh so from a measurement

53:25

perspective uh we always get baseline.

53:27

So, let's talk about emotions, right? And

53:29

there's also different

53:31

expressivity, right? A lot of this is

53:33

culturally based, but also

53:34

developmentally based, right? We know

53:36

that children, you know, are more willing

53:37

to let it bleed through their

53:39

face, right? To show that there's

53:41

discontent, etc. Versus an adult, right?

53:43

If I'm mad at the, you know, the

53:45

director of my school, right? I

53:48

better do my best facial

53:49

expression suppression so that I don't get

53:52

in trouble, right? So, as an adult,

53:55

uh so we always collect data on basic

53:56

emotions, uh, so that you're being

53:59

compared to yourself not to others right

54:03

so for example when it comes to let's

54:05

say facial expressions there are

54:06

databases that we use:

54:08

there's a database for children there's

54:10

a database for um African-Americans also

54:13

because when we started this let's say

54:14

20 years ago when I was at University of

54:16

Memphis the majority of the students

54:17

were African-American students and

54:19

because of the contrast in their faces

54:21

the algorithms had a terrible time

54:24

trying to detect even confusion, the

54:26

AU4. So we actually have an illumination

54:28

system, right? There are differences between,

54:30

like you said, genders also, and then the

54:32

context, right? We have some contexts where

54:34

it's basically trying to get the

54:37

clinician to be empathetic so the

54:39

virtual patient actually accentuates the

54:42

crying on purpose, right? So yes,

54:44

absolutely, so we

54:47

try not to generalize but of course like

54:49

confusion you know any culture would

54:51

show confusion but the problem that we

54:54

have also seen in the literature is when

54:55

it comes to confusion what they call the

54:56

action unit 4, which is this furrowed brow,

54:58

there've been a lot of

54:59

generalizations and we're like no I mean

55:02

you know, I could look confused,

55:04

but, well, I'm not confused. Actually,

55:06

if I'm solving a math problem I may

55:08

be actually in deep concentration right

55:11

so yeah, we have to be

55:13

very careful and very considerate of

55:15

cultural differences, developmental differences,

55:18

right and even for example some of my

55:20

colleagues who do work I don't know

55:21

about you, but uh on students on the

55:24

spectrum right and this also goes back

55:26

to Guy's question, right? I mean, if I

55:28

have not established theory of mind, am

55:30

I even going to engage with an avatar

55:33

regardless? I mean, I can't even make an

55:34

inference about your cognitive state,

55:36

right? So, yeah, absolutely very

55:39

important questions.
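
To make the baselining idea concrete, here is a minimal sketch of comparing a learner's signal to their own resting baseline rather than to a population norm. The AU4 intensities are made-up numbers for illustration:

```python
# Minimal sketch: z-score a session signal against the same person's
# resting baseline, so everyone is compared to themselves.
import numpy as np

def baseline_normalize(session: np.ndarray, baseline: np.ndarray) -> np.ndarray:
    """Express the session signal in units of this person's own baseline."""
    mu, sigma = baseline.mean(), baseline.std()
    return (session - mu) / (sigma + 1e-9)  # guard against zero variance

# e.g., AU4 (brow lowerer) intensity during rest vs. during learning;
# the values here are fabricated
rest = np.array([0.10, 0.12, 0.08, 0.11, 0.09])
learning = np.array([0.30, 0.45, 0.28, 0.50])
print(baseline_normalize(learning, rest))  # elevated relative to *this* person
```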

55:46

Any other questions?

55:54

Okay, maybe just one more. So, uh I

55:57

think uh some people might say that they

56:00

object to the idea of measuring your

56:03

understanding based on signals rather

56:07

than on your performance in a test. Uh

56:10

for example, I remember you know being

56:11

in advanced math classes. Some people

56:14

they would be like the chess players.

56:15

They would gaze at the screen that

56:18

actually it was a whiteboard and they

56:20

would look to the side and anyone would

56:21

think they're you know they're actually

56:23

not in the scene but their mind was

56:25

racing ahead. They saw what they needed

56:27

and they were now processing. And others

56:30

who normally would look, you know, very

56:32

normal when they were thinking very

56:33

hard, looked like they were on the

56:34

spectrum if you didn't know them.

56:37

>> All kinds of very strange behavior. And

56:40

>> And still they would have said, I

56:41

guess if I had said, you know, it

56:43

looks like you're not engaged or it

56:45

looks like you didn't understand. They

56:46

would have said, actually, I'm the best

56:48

student in class. Uh uh look at my test

56:51

results. How do you react to that?

56:54

>> Yeah. No, no, absolutely. Uh I think

56:55

that Oh,

56:58

>> sorry. He's 18 pounds. I can't

57:01

fight with him. Sorry.

57:04

>> That's funny. Uh yeah. No, absolutely.

57:06

So the whole issue of individual

57:08

differences is important to look

57:10

at, right? And then I think what you

57:12

raised also is this issue of expertise

57:14

development, right? We're also making

57:15

the assumption that humans can express,

57:18

right? So

57:20

we collect data on process, right?

57:22

because we're also interested in how

57:24

these processes temporally unfold in

57:26

real time right and that's more from an

57:28

understanding perspective, right. But then we

57:30

also converge, right? Most of the

57:31

literature not just our group but many

57:33

groups around the world right is how do

57:36

we compare: which one is more predictive?

57:38

Is it the fact that I've collected, let's

57:40

say 5-10 minutes of your eye tracking and

57:42

I can see how well you're going to

57:43

perform let's say on a math test right

57:46

uh, or is it just the math test? Because

57:48

we're also interested in the process

57:49

which process which data channel is the

57:51

most predictive of different outcomes.

57:53

And what's interesting is we find so

57:55

much variability. For example, in

57:57

MetaTutor, right? We have students from

58:00

pre to post who are in the control

58:01

group, who had agents, but the agents provided

58:03

no scaffolding, who still performed

58:06

extremely well from pre to post but by

58:09

contrast we have students who were in

58:11

the full agency full adaptive

58:13

scaffolding condition who got the best

58:15

support that we can provide and still

58:17

you know let's say started with 50% and

58:19

then ended at 50%, and we're like, but you had

58:22

the agents there to scaffold and you

58:24

still didn't learn right so we're

58:25

interested in that variability, right?

58:27

and the question of why, right? And

58:30

how do we support these students? And I

58:31

think on Wednesday, when we have the

58:33

keynote, we want to talk a little

58:35

bit more about: now let's have the student

58:37

be able to interact and talk naturally

58:40

to, let's say, a GenAI-driven pedagogical

58:43

agent. Right? It's like I think I'm

58:45

still not understanding this very well.

58:48

Can you generate a new diagram or give

58:51

me a new problem that is less

58:53

complex? Like can we take it to that

58:55

next level? Right. Um, so I hope that

59:00

answers your question.
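
To make the channel-comparison question concrete, here is a minimal sketch that asks which data source tracks a posttest score more closely. The data are synthetic and the analysis is deliberately simplistic; the actual studies use far richer multimodal models:

```python
# Minimal sketch: compare how strongly two channels (a pretest and an
# eye-tracking feature) correlate with a posttest. All data fabricated.
import numpy as np

rng = np.random.default_rng(0)
n = 100
posttest = rng.uniform(0, 1, n)
pretest = 0.6 * posttest + rng.normal(0, 0.2, n)           # synthetic channel
gaze_on_diagram = 0.3 * posttest + rng.normal(0, 0.3, n)   # synthetic channel

for name, channel in [("pretest", pretest), ("eye tracking", gaze_on_diagram)]:
    r = np.corrcoef(channel, posttest)[0, 1]
    print(f"{name}: r = {r:.2f}, variance explained = {r ** 2:.2f}")
```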

59:01

>> Any last question?

59:05

Okay. So,

59:07

Shi,

59:09

>> thank you very much. That's it.

59:13

>> Yeah.

59:14

>> So, uh, it was mind-blowing.

59:18

>> I met you a few years ago. I'm a student

59:22

of Dr. Billie Eilam, Professor Billie Eilam.

59:26

and we met many years ago. So it is

59:30

mind-blowing where it's going. Thank you

59:33

very much.

59:34

>> Oh, thank you. Thank you so much.

59:36

>> Yeah.

59:38

So, let us thank our keynote speaker

59:41

once again. Thank you, Roger, for an

59:43

engaging talk and a stimulating

59:45

discussion, and again for agreeing to join us

59:48

virtually.

59:50

Um,

59:52

>> thanks, and we hope to see you uh on

59:55

Wednesday.

59:56

>> Wednesday. Yeah. Thank you so much, Noah

59:58

and Nina. Yeah. Thank you and everyone

60:00

for attending and your wonderful

60:01

questions.

60:02

>> Thank you.

60:03

>> Thank you.

60:04

>> We hope to see you in Israel.

60:06

>> Yes. Yes.

60:08

>> Yes. I'm so sad I couldn't go.

60:13

>> Thank you. All right. Thanks a lot.

60:17

So we will now take uh a 30-minute break

60:21

uh give or take and then return for the

60:23

second and final session of the

60:25

pre-conference featuring our second

60:26

keynote speaker, Professor Richard

60:28

Mayer. Those of you who registered

60:30

should have received the Zoom link by

60:32

now. I'll post it in the chat for anyone

60:34

who didn't receive it or didn't register

60:36

and uh can now uh join.

60:40

Wait, I'll do that.

60:42

>> Okay. Um,

60:45

and uh, please join us again at 8 using

60:48

the link you were sent or the one that

60:49

is now posted in the chat. It's the same

60:51

one. Hoping to see you all there and uh,

60:54

enjoy the break.

Interactive Summary

Professor Roger Azevedo's keynote lecture explores the science of measuring and supporting self-regulated learning (SRL) and metacognition in digital learning environments. He highlights the evolution from self-report measures to multimodal data collection, utilizing eye tracking, facial expressions, and physiological sensors to model and support learners in real time. Azevedo discusses various innovative projects, including simulated learners and human digital twins, while addressing significant challenges such as cultural variability in emotional expression and the need for better teacher training in AI-driven educational tools.
