Chow Lectures 2025 by Nima Arkani-Hamed: Geometry & Combinatorics of Scattering Amplitudes Part III

Transcript

0:15

All right, guys. Sorry, I'm a little

0:17

late. So today we're going to start

0:21

where we left off yesterday.

0:25

Remember, in the middle of yesterday

0:27

we saw

0:30

there were two interesting facts that

0:35

were exposed by thinking about

0:38

the normal fan of

0:41

the associahedron. One of them was that,

0:46

of course the normal fan of any polytope

0:48

gives you a bunch of cones that tile all

0:50

of space and the cones oh sorry mic's

0:52

not on.

0:55

Is that okay? All right. Of course, a

0:57

normal fan of any polytope gives you a

0:58

bunch of cones that tile all of space

1:00

and that you can think of each cone as

1:02

associated with a vertex of the

1:03

polytope.

1:06

But that sort of presupposes that the

1:08

association was handed to you, via the

1:11

construction in Carolina's lectures, for

1:13

example. So the first comment that we

1:16

got through yesterday is how to get at

1:18

least the rays of those cones from

1:21

nowhere, right? Starting from a picture

1:24

of the surface, just recording the data

1:27

of every curve on the surface by these

1:29

words with left and right turns, and

1:31

extracting the g-vectors for the

1:34

curves from those words. And that

1:39

just plunks down

1:41

a bunch of rays, which turn

1:43

out to bound cones that can be

1:44

interpreted in terms of triangulations

1:46

and which collectively cover all of

1:49

space. The second, maybe much more

1:55

striking observation was that you

1:57

could generate all of these cones,

1:59

the rays and the cones;

2:02

they're associated with these polynomials

2:03

that we talked about, and their

2:06

tropicalizations gave you a bunch of

2:07

piecewise-linear functions, a

2:10

few of them, and when you put them all

2:12

on top of each other, the domains of

2:14

linearity of these piecewise-linear

2:15

functions broke the space up into all

2:17

these cones, each one of which

2:19

corresponds to a diagram, and which we

2:21

can think of as being put together

2:24

into the integrals that we wrote down as

2:26

giving a sort of global version of

2:28

Schwinger parameterization, some kind of

2:30

one single integral that, as you moved

2:34

around in the whole space, morphed,

2:36

piecewise-linearly, region by region, cone

2:39

by cone, into the Schwinger

2:41

parameterization for the corresponding

2:44

diagrams; and the pre-tropicalized

2:48

version, with just the polynomials

2:51

that we talked about sitting there, was

2:52

instead string amplitudes. So this is

2:54

the sort of connection. So by

2:58

the end of yesterday

3:01

we talked about where the g-vectors come

3:03

from, and what I want to start with today

3:05

is telling you where those u variables

3:07

come from. Because of the time, I'm

3:10

not going to be able to prove to you

3:12

that these variables do what they're

3:13

supposed to do. But I'm going to

3:15

describe at least one motivation

3:18

for where they come from. This

3:19

motivation will still maybe seem

3:21

slightly alien if you haven't seen

3:22

anything like this before, but at least

3:24

you'll see a very simple sort of

3:26

combinatorial problem associated with

3:27

these words naturally leads you to think

3:29

about these u variables. I could just

3:33

give you a formula; as you'll see in 15

3:34

minutes, it will be a formula taking a

3:36

product of a bunch of 2x2 matrices

3:38

that's going to produce these u

3:40

variables. But I want to at least

3:42

give the counting-problem

3:44

motivation for this. Not just because

3:47

it's cool, but because if you go away

3:49

and think more about the counting

3:50

problem, it'll make it clearer

3:52

why these objects do what they're

3:55

supposed to do.

3:57

Okay. So you can really just

4:00

think of this as a recipe for producing

4:03

these new variables. Okay. So,

4:06

we're going to start with

4:09

getting the u variables,

4:16

and again, remember, all of our life, our

4:19

universe, is specified by this

4:23

fact. Okay.

4:26

So for the process you draw one

4:28

representative diagram that represents

4:30

the sort of flow of color, and then

4:32

everything else lives in this world.

4:35

And if we talk about a curve, we decided

4:38

that a curve was one of these tracks

4:41

through the fat graph. And so this

4:43

curve, 24, would have the word that

4:47

we described: 23 makes a left

4:50

turn onto 13, makes a right turn onto 14,

4:54

makes a left turn onto 45. And

4:57

we also discussed how we could draw it

4:59

as a mountainscape: 23 goes up to

5:02

13, goes down to 14, goes up to 45.

5:04

Okay, these are two different ways of

5:06

representing exactly the same word.

5:11

Okay, so now I'm going to define a

5:13

counting problem associated with any

5:15

word. Forget about all this; we're

5:16

going to define counting problems

5:23

that are associated with words,

5:26

and this is what we're going to do. Let's

5:27

say we have a word that looks like a, b, c.

5:31

I'm going to tell you what the

5:33

counting problem is. You want to

5:36

choose

5:39

sort of

5:43

every object

5:45

that you see in this word. So for

5:49

example, I want to

5:51

pick every letter

5:54

that I see in this word, in every

5:56

combination. So I could

5:58

choose to pick nothing, and I would write

6:00

down 1, as if making a

6:01

generating function that goes with that.

6:03

Okay. So I could choose to do nothing, I

6:04

write down 1. I could choose

6:06

a by itself. I can choose c by

6:09

itself. I can choose a and c.

6:14

But if I choose b, I have to choose

6:17

everyone who's downhill from b. Right?

6:22

So I used to call this the "relationships

6:24

with baggage" generating function, because

6:25

it's like dating: when

6:27

you date someone, it's like you've dated everyone

6:29

that they've dated before. Okay. So

6:33

if you choose b, you have to choose

6:36

everyone in the sort of past light cone

6:39

of b. So you have to add + abc.

6:43

All right. Is that clear? So if

6:45

we do another example, let's say it's

6:48

a, down to b, up to c. What is this?

6:53

>> This would be: again, I could choose

6:55

nothing. I could choose b.

6:59

I could choose a, but if I choose a I

7:01

have to choose b as well. If I choose c,

7:03

I have to choose b as well. I can choose

7:07

a and c together; of course, then I

7:09

still have to choose b, right?

7:12

Is that clear?

7:14

All right. Now,

7:17

now it's sort of obvious from high

7:20

school how you can compute this. I mean, you

7:23

don't have to do it this way, but it's

7:24

possible to compute this

7:28

generating function in an

7:31

obvious way, recursively. Okay. So in

7:34

other words, let's say you give me a

7:35

word. I can kind of start from the end

7:38

and get the generating

7:40

function related to the one-shorter word,

7:42

where I peel off a letter. Now, how does

7:45

that work? Well, let's say that,

7:50

so, I'm going to call

7:52

these objects F, associated with

7:56

the word. They're not quite going to be

7:58

the F-polynomials that we'll talk

7:59

about later, but anyway, let me just

8:02

call them F.

8:05

So let's say I have a, and it goes up,

8:08

it goes up to some b, and then it's going to

8:10

do something else; there's sort of the rest

8:12

of the word there. Okay.

8:15

What we're going to do is divide

8:17

this generating function into two

8:19

pieces. I'm going to call them F_yes and

8:21

F_no. Okay. So F_yes is "it chooses a,"

8:26

and F_no is "it does not choose a."

8:29

All right. So there's F_yes and F_no,

8:31

according to whether it does or doesn't

8:33

choose a. So, what is F_yes?

8:37

Okay. Well, F_yes

8:40

chooses a. Okay. So F_yes,

8:45

because it chooses a, is going to

8:48

have a factor of a

8:50

there, right?

8:53

Now,

8:56

if I choose,

9:02

if I choose a, then here the

9:05

question is: do I have to choose b or

9:08

not, right?

9:11

Well, I can choose b or not choose b;

9:13

it doesn't matter, right? So I'm going

9:16

to say the rest of the word here is

9:19

going to have its own little f. So it

9:21

has its own little f_yes, f_no, where

9:23

this yes or no refers to b, right?

9:26

Call this b, right?

9:30

So this is a times (little f_yes + little

9:34

f_no). It doesn't care either way. Okay,

9:38

is that clear?

9:40

On the other hand, if I don't choose a,

9:43

well, I'm not going to have anything

9:45

here, right? I'm not going to have a.

9:47

But I definitely could not have chosen b,

9:50

because if I choose b, I have to have a,

9:53

right? So this is just equal to little f_no.

10:00

>> And so we immediately learn

10:02

something:

10:09

(F_yes, F_no)

10:13

is this little 2x2 matrix, [[a, a], [0, 1]],

10:17

times (little f_yes, little f_no).

10:22

So when I turn up, I get that little

10:24

2x2 matrix. Okay. So I'm going to

10:27

call this M_left

10:31

of a.

10:33

Right?

10:36

Well, similarly, if I go down; let's

10:38

just do it quickly here. If I go from a

10:40

down to b and then the rest,

10:44

then F_yes

10:46

is equal to a times little f_yes: since b is

10:49

downhill of a, I must have b here. So

10:52

this is a·f_yes. And meanwhile, F_no is the

10:56

thing for which again I could take either

10:58

one; it could be little f_yes + little

11:02

f_no.

11:04

So that was turning left

11:07

from a,

11:09

and this is the matrix for

11:11

turning right from a. Right? So we have:

11:13

(F_yes, F_no), for turning right

11:16

from a,

11:18

is this other matrix,

11:21

[[a, 0], [1, 1]],

11:23

times (little f_yes, little f_no).

11:28

So we'll call this matrix M_right of a.

11:34

Okay.

11:35

And so this tells us how to build this

11:37

F: I just take the product of these

11:40

matrices, right? I take the product

11:42

of the matrices as I go down the word,

11:45

and then at the very end I have F_yes

11:47

and F_no; I can add them if I want to

11:49

get the full F.

11:52

Okay.
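
Since this is a transcript, here is a small sketch (my own code, not from the lecture) of the recursion just described, under an assumed encoding: a word is a list of letter values with a '+' (up, a left turn) or '-' (down, a right turn) between consecutive letters. It evaluates the "relationships with baggage" generating function both by brute force over allowed subsets and via the 2x2 matrix product, peeling one letter at a time.

```python
from itertools import combinations

def m_left(a):   # matrix for going up (a left turn) at a letter a
    return [[a, a], [0, 1]]

def m_right(a):  # matrix for going down (a right turn) at a letter a
    return [[a, 0], [1, 1]]

def f_by_matrices(letters, turns):
    """(F_yes, F_no) with respect to the first letter; base vector (a_n, 1)."""
    vec = [letters[-1], 1]
    for a, t in reversed(list(zip(letters[:-1], turns))):
        M = m_left(a) if t == '+' else m_right(a)
        vec = [M[0][0] * vec[0] + M[0][1] * vec[1],
               M[1][0] * vec[0] + M[1][1] * vec[1]]
    return vec

def f_brute_force(letters, turns):
    """Sum over subsets: choosing a letter forces everyone downhill from it."""
    n, h = len(letters), [0]
    for t in turns:                      # heights of the mountainscape
        h.append(h[-1] + (1 if t == '+' else -1))
    total = 0
    for r in range(n + 1):
        for S in map(set, combinations(range(n), r)):
            # valid iff every chosen letter's downhill neighbors are also chosen
            ok = all(j in S
                     for i in S for j in range(n)
                     if abs(i - j) == 1 and h[j] < h[i])
            if ok:
                p = 1
                for i in S:
                    p *= letters[i]
                total += p
    return total
```

For the word a up b down c from the blackboard, with a = 2, b = 3, c = 5, both routes give 1 + a + c + ac + abc = 48.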

11:55

Now, if you remember our words, coming

11:58

back to our context: there are open

12:01

curves, and so they're kind of special in

12:03

that they start and stop somewhere,

12:07

at some boundaries.

12:11

And then I leave it as a very small

12:12

exercise. It's very, very obvious, but

12:14

I'm going to leave it as a small

12:16

exercise to show

12:19

that if I have a word that's like: alpha

12:23

as a boundary, and it goes turn (a

12:26

turn is left or right, left or

12:28

right), okay: alpha, turn, then some

12:31

road a1, turn, a2, turn. Okay. And it's

12:35

going to end finally with some turn and

12:38

beta. So alpha and beta are the ends.

12:42

Okay.

12:45

Um

12:48

Okay. Then I can

12:53

look at the sort of F that I would get

12:54

from this big word. But in

12:57

this big word, it's also natural to look

12:58

at which part of F includes both alpha

13:02

and beta, which part includes only

13:04

alpha, which part includes only beta,

13:06

and which part includes none of them.

13:08

All right.

13:09

So I'm going to group those things. Let

13:11

me call them: F with a yes on alpha,

13:14

yes on beta; F, no on alpha, yes on beta;

13:19

F, yes on alpha, no on beta; and F,

13:24

no on both.

13:26

Okay. So this is a 2x2

13:29

matrix.

13:31

And it's a trivial consequence of

13:33

what I just told you that this matrix is

13:36

in fact the product of all of these

13:41

matrices. Okay. So this is the product

13:43

of the matrix for the turn at

13:48

one,

13:50

the turn at two,

13:53

okay, up to whatever it was that we had.

13:56

Okay.

13:58

Let me be a little more precise.

14:01

So if I take this matrix: for

14:06

this first matrix, if

14:09

I turned left or right, I'd have to put the

14:11

M for alpha, right? But here you have

14:14

to put alpha equals 1, and you imagine

14:17

that this is sort of beta equals 1 as

14:19

well; of course beta doesn't show up,

14:20

there's no term that depends on beta,

14:22

beta is the last thing. Okay, so if

14:26

you look at this matrix and set alpha to

14:27

one (of course beta is implicitly one), then

14:31

you get an entry here, where, to

14:34

get the real word

14:36

for this, the one that would have an alpha and

14:38

beta, you just multiply this by alpha

14:39

beta. Okay, so these are the

14:42

sort of coefficients: of the alpha-beta

14:44

term, of the beta term,

14:46

of the alpha term, and of

14:47

none of them, coming simply

14:50

from taking the product of these

14:51

matrices. Okay, so this is a matrix that's

14:56

attached to any word.
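
As a sketch (my own code, with an assumed encoding: each step of the word is a (turn, value) pair, where 'L' means up/left and 'R' means down/right, and the starting boundary letter alpha enters with its value set to 1), the product of the turn matrices packages the four coefficient groups just described.

```python
def m_left(a):   # up / left turn
    return [[a, a], [0, 1]]

def m_right(a):  # down / right turn
    return [[a, 0], [1, 1]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def word_matrix(steps):
    """steps: one (turn, value) pair per letter except the final boundary beta;
    the first entry is the starting boundary alpha, with its value set to 1."""
    M = [[1, 0], [0, 1]]
    for turn, a in steps:
        M = matmul(M, m_left(a) if turn == 'L' else m_right(a))
    return M

# Toy word: alpha, up, a, down, beta, with the interior variable a = 7.
M = word_matrix([('L', 1), ('R', 7)])
# Entries are the coefficient groups of F = C1 + alpha*Ca + beta*Cb + alpha*beta*Cab:
(C_ab, C_a), (C_b, C_1) = M
```

For this toy word the generating function is F = 1 + alpha + beta + (1 + a)·alpha·beta, so the entries come out as C_ab = 8 and C_a = C_b = C_1 = 1.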

15:08

Now,

15:18

back on screen.


15:28

Okay. Now, if we stare at these

15:30

matrices (sorry, I partially

15:33

erased it), notice that each one of these

15:36

matrices has determinant just

15:39

given by a, right? So the determinant of

15:42

this matrix is a, the determinant of that

15:43

matrix is a.

15:47

And so one thing which is obvious is

15:49

that the determinant

15:51

of the matrix associated with any

15:53

word, if we imagine that those variables,

15:57

the a's, b's, c's, are just positive, the

16:00

determinant is positive. Okay. So the

16:02

determinant of this matrix associated

16:06

with the word w is just the product over

16:10

all of what I called there the

16:13

little a's, all the a_k, and this

16:15

is positive if the a_k are positive,

16:18

which we'll be assuming. So

16:22

if I look at this matrix, a

16:25

matrix that has M11, M12,

16:30

M21, M22 entries,

16:34

then I know that M11·M22

16:37

minus M12·M21, the determinant, a

16:40

product of all these a's, is positive,

16:45

and therefore the ratio u associated

16:49

with this word, which is (M12·M21) / (M11·

16:55

M22), is less than one. And of course,

17:00

again, if all of these variables are

17:01

positive, clearly, as I multiply these

17:03

matrices, there are all plus signs

17:04

everywhere in these simple matrices. Okay,

17:07

so this u is also going to be

17:09

greater than or equal to zero, and less than

17:11

or equal to one. That's the u variable

17:15

attached to a curve.

17:19

So if you give me any word for any curve

17:21

on any surface,

17:24

all you do is: you build the word, you

17:26

multiply the 2x2 matrices associated

17:29

with the entries in the word,

17:33

and you take this ratio, the 12·21 off-

17:36

diagonal over the diagonal, and that gives

17:38

you the u variable associated with the

17:40

curve. So I'll leave it as an exercise

17:43

for you to do this for the little

17:46

five-point problem. You'll have to

17:48

multiply at most two 2x2 matrices,

17:53

and take the ratios, and discover those u

17:55

variables that we wrote down

17:59

yesterday. All right.
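
Here is a sketch of that exercise (my own code, not from the lecture; the word is encoded as a list of (turn, variable) pairs, 'L' for left/up and 'R' for right/down, with boundary letters carrying the value 1). Exact rational arithmetic makes the determinant and the 0 < u < 1 bounds easy to check; the example is the curve 24, whose word starts at a boundary with a left turn, then turns right at 13 and left at 14.

```python
from fractions import Fraction

def m_left(a):   # up / left-turn matrix
    return [[a, a], [0, 1]]

def m_right(a):  # down / right-turn matrix
    return [[a, 0], [1, 1]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def u_variable(word):
    """word: (turn, variable) pairs along the curve, turn in {'L', 'R'};
    boundary letters carry the value 1. Returns (u, determinant)."""
    M = [[1, 0], [0, 1]]
    for turn, a in word:
        M = matmul(M, m_left(a) if turn == 'L' else m_right(a))
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return Fraction(M[0][1] * M[1][0], M[0][0] * M[1][1]), det

# Curve 24 of the five-point example: boundary (left), right at 13, left at 14.
y13, y14 = Fraction(2), Fraction(3)
u, det = u_variable([('L', 1), ('R', y13), ('L', y14)])
```

With these sample values, det = y13·y14 = 6 > 0 and u = 5/6, which indeed lies between 0 and 1 and matches the closed form (1 + y14 + y14·y13) / ((1 + y14)(1 + y13)) written down for u24.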

18:02

Okay.

18:06

Now, that's all I'm going to

18:09

say about it. Maybe one point to

18:13

make is that this is sort of presented

18:17

as manifestly a ratio of

18:19

polynomials, right? All of these things are

18:21

sort of ratios.

18:22

>> Could you say one more time what the

18:23

variables are?

18:24

>> Oh, the variables are what we

18:26

call the y's. So let's just

18:29

write down

18:31

this example. Okay, so back to this

18:34

example: you would now go and label

18:37

the internal roads not just with the

18:39

name but also with a variable y. So

18:41

there'll be a y13 that goes with this

18:44

road,

18:46

and a variable y14.

18:48

Okay? And then you again take your

18:50

favorite curve, like 24,

18:52

for which the word would now be: 12

18:56

up to 13, down to 14, up to 45.

19:00

Okay.

19:02

And now the matrix associated

19:05

with this word, the matrix associated

19:07

with the word, M24:

19:10

it's up at 12, but remember,

19:14

this one I'm supposed to set to one.

19:17

So the up matrix

19:19

was [[a, a], [0, 1]], but a is one here for this

19:22

one, okay, since I'm starting at the

19:24

boundary. So this is [[1, 1], [0, 1]].

19:28

Then at 13 I turned right. So here I

19:31

put the matrix

19:34

[[y13, 0], [1, 1]].

19:37

At 14 I turn left. So I put the matrix

19:39

[[y14, y14], [0, 1]].

19:43

Okay.

19:46

So this gives me a 2x2 matrix,

19:54

and the u for 24 is going to be yz over

19:58

xw (calling the matrix entries x, y, z, w),

20:01

which, if you did it in this case, you

20:02

would discover is (1 + y14 + y14·y13)

20:09

over (1 + y14)

20:12

times (1 + y13).

20:14

Okay, you can see how these things are

20:16

built up from products of these 2x2

20:19

matrices. See, these things look like

20:21

"relationships with baggage" generating

20:23

functions. Okay.

20:25

>> Very quick question. Is this 25 or 24?

20:27

>> Sorry.

20:28

>> Is this path 25 or 24?

20:30

>> This path is 24 cuz it's 23,

20:34

left on 13. Right.

20:37

>> Then I turn... then I turn...

20:40

started at 12.

20:43

>> Oh, I'm sorry. I just wrote this wrong

20:46

entirely. I'm sorry.

20:47

>> So, I'm starting at 23. Sorry about

20:49

that. At 23: left on 13, right on

20:54

14, and then left on 45. Sorry

20:57

about that.

20:58

>> Is that okay?

21:00

>> No.

21:04

>> I have a question.

21:05

>> Yes. Two questions in fact.

21:06

>> Sorry.

21:06

>> Two questions. Yes.

21:07

>> Can I ask them?

21:08

>> Yes, you can ask them.

21:11

>> But you can't ask three questions.

21:13

>> There's a sharp upper bound.

21:16

>> Okay. So the first question is:

21:19

what kind of geometrical

21:22

resolution or precision do we obtain on the

21:25

two-dimensional trajectories within the

21:27

formalism used?

21:28

>> Oh, there's no geometry here at all. This

21:30

is pure, this is purely combinatorial;

21:36

there's nothing about what this curve

21:37

looks like at all. This is just a

21:39

labeling for the curve.

21:41

So it has nothing to do with that.

21:42

>> Yeah. Thank you for your answer. And

21:44

question number two is directly

21:48

related to the observation you just

21:51

made. Where

21:55

does this formalism first arise from? I

21:57

mean, physics or maths?

21:59

>> both.

22:00

I mean, although I should say that

22:02

these variables

22:05

are closely related to many other

22:07

variables that have been discussed.

22:09

They're very closely related to

22:11

X-cluster variables

22:13

for surfaces, of Fock and Goncharov;

22:16

they're related to cluster variables,

22:19

X-cluster variables, but they are very

22:21

special sets of them, and this

22:24

particular way of thinking about them

22:26

and focusing on them is

22:31

new, and was very much motivated by

22:33

physics. Okay, the thing is that you

22:36

can talk about cluster-anything you like;

22:38

it's these variables that turn into the

22:41

Schwinger parameterization of all

22:43

diagrams and string amplitudes and so on.

22:45

So these things that go from 0 to 1,

22:47

they satisfy this remarkable "u plus a

22:51

product of u's equals 1" formula; they

22:55

realize, in this binary way, the

22:58

combinatorics of compatibility,

23:02

whether curves cross or not. Okay. So

23:04

that was very much

23:06

motivated by physics.

23:08

>> Actually, I'll just add, since the

23:10

question was partially historical:

23:14

these variables, literally these

23:16

variables,

23:18

in the context of studying string

23:20

amplitudes at tree level, were discovered

23:24

in string theory before the standard

23:26

formulas for doing string calculations,

23:28

okay. There's a famous formula called

23:30

the Koba-Nielsen formula, and these u

23:32

variables were described by Koba

23:35

and Nielsen in 1968. Okay. And

23:39

they were very excited about

23:41

them exactly because they made

23:42

factorization manifest. Okay. It's

23:45

only later that Koba and Nielsen

23:47

themselves realized that you could

23:50

formulate the problem instead in terms

23:52

of sort of cross-ratios of points

23:55

on a disk, and the picture of a string

23:57

worldsheet began to emerge; and then the

24:00

string worldsheet picture kind of

24:04

took over, because you could make it

24:05

systematic. You could do it for any

24:07

surface, think about loop corrections.

24:08

So lots of things came from that picture,

24:10

and this more algebraic way,

24:13

the way that's sort of

24:15

centered on curves on surfaces, was

24:16

entirely forgotten. And what's

24:20

happened now is two things. First of all

24:22

we understand that they also make

24:24

sense for all surfaces. Okay. So there

24:26

is no loss in going back and forth. So

24:29

all those physical things that Koba and

24:31

Nielsen were excited about are actually

24:33

there, and present, and

24:36

useful, not just at tree level but for

24:38

all surfaces. But maybe more importantly,

24:41

Koba and Nielsen found the u-equations,

24:45

and they even found how to solve them,

24:47

but they only found how to solve them in

24:48

this way that I mentioned yesterday: the

24:50

most direct way, solving for the u's in

24:52

terms of the u's, which you can only do

24:54

in some cases, and it's very artisanal,

24:57

and definitely you couldn't imagine

24:58

doing it for more complicated surfaces.

25:00

Okay. So the second novelty here is the

25:03

solution of the equations: we're

25:05

telling you how to solve them by

25:06

associating these counting problems

25:09

with the curves on the surface, and the

25:10

product of these matrices, and so on.

25:12

Okay. So it's both the fact that

25:14

they exist and that they're concretely

25:15

available.

25:17

And what these

25:20

u variables do is they

25:23

give a sort of completely algebraic way

25:26

of thinking about not just

25:30

the moduli space, really the

25:32

Teichmüller space, of these two-dimensional

25:35

surfaces, but also the

25:36

compactification of these spaces, you

25:38

know, how you add back in all the

25:40

boundaries. And again, what's novel

25:42

about it is it's done in an entirely

25:44

algebraic way: you just write down all

25:46

these equations, and you just declare the

25:47

u's to be positive, and you're done. The usual

25:50

way of thinking about the

25:50

compactification of these spaces, even in

25:53

sort of a clustery way of thinking about

25:55

things, or the Fock-Goncharov way of thinking

25:57

about things, is more local. It's

25:59

synthetic: you go to the neighborhood of

26:01

a boundary and you figure out how to

26:02

sort of complete things in the

26:03

neighborhood of that boundary. So, patch

26:05

by patch. This is not patch by patch.

26:07

And that's a very practical

26:09

advantage. It's conceptually

26:11

interesting, but also practically

26:12

important. That's what allows us to

26:14

write down one integral. Done. Right.

26:16

And the one integral blows up all the

26:18

singularities and tells us how to

26:21

just... go ahead.

26:22

>> Can you say a word about finiteness? When

26:24

you... the examples are always this

26:26

five-point example, but as soon as we do

26:28

something more interesting, everything

26:30

will be infinite. Can you comment on

26:32

that? It bothers me.

26:33

>> Can you comment on finite versus...

26:36

>> Absolutely, absolutely, yes. So what

26:37

Bernd is referring to is that the

26:41

moment we get to more interesting

26:44

surfaces

26:47

(and actually, it

26:49

really starts with the annulus), okay. So

26:51

the problem has nothing to do with these

26:53

variables per se; the problem has to

26:54

do with curves on surfaces.

26:54

So if I think about this picture of an

26:59

annulus, I could draw a

27:01

triangulation of the annulus. This is a

27:04

slightly degenerate triangulation, but I

27:06

hope you can see this is a triangle,

27:08

right? This is a triangle, and this is a

27:10

little triangle here, right? They all

27:13

have three sides. That's the

27:13

important thing: the triangles have to

27:14

have three sides. They don't have to

27:16

have three vertices; they have to

27:17

have three sides in them. And

27:19

that's what lets them go along with

27:21

cubic graphs. So, well, what cubic graph

27:24

would go along with this picture? If

27:25

this is 1 and 2, the cubic graph

27:27

that goes along with it would be this

27:29

little picture of an annulus. Okay, so

27:32

this line 1 is what's going around

27:35

here. Okay, so that would be sort of

27:37

region 1 or line 1. This is region

27:39

2, or 12. Okay.

27:44

Okay.

27:45

And now, what Bernd is talking about is

27:49

the following. You see, at this level

27:51

this triangulation is this diagram, and

27:54

there's one diagram here; there's

27:57

a single diagram. But here, I take

28:00

this picture and I could just twist

28:02

everything. Okay, I could just twist

28:00

everything once. And clearly, it's going

28:02

to look a little more wound around. I'm

28:04

not even going to try drawing it. I

28:05

could twist it any number of times.

28:07

Okay, this way, that way. So, on this

28:10

side, there's an infinity: there are

28:13

infinitely many

28:15

triangulations

28:17

that just differ by winding, or

28:21

Dehn twist;

28:26

this is the action of the mapping

28:28

class group on the surface. But if

28:31

you haven't heard these words, it's just

28:32

the obvious picture of winding

28:35

here.

28:36

And that's completely absent on this

28:39

side, apparently. Okay, so it looks like...

28:42

And the point is, of course, that

28:44

precisely because all of these

28:47

things are related to each other by

28:49

winding, they're combinatorially exactly

28:51

the same; they're all this one

28:52

triangulation. But thought of in

28:55

terms of curves on surfaces, there's

28:57

infinitely many of them. Okay,

29:00

so this seems

29:05

problematic at first. It's actually,

29:09

it's actually a

29:11

blessing in disguise.

29:13

>> One of the reasons that it's a blessing

29:14

in disguise is that, remember, I

29:17

mentioned early on that, as physicists,

29:21

we would like to imagine putting all the

29:23

diagrams together, even at loop level,

29:26

putting them under a common loop

29:28

integration sign, to sort of identify

29:30

what we mean by loop momenta from

29:32

diagram to diagram.

29:35

And I said that when it's planar,

29:37

there's a sort of way of doing it. But

29:39

when it's not planar, there

29:42

isn't, naively, a consistent way of

29:43

labeling loop momenta that allows you to

29:45

combine all diagrams together. But in

29:47

fact, there is, and the curves on

29:49

surfaces exactly tell you how to do it.

29:52

For any diagram (any diagram is

29:55

some triangulation, right?),

29:59

you draw the curves on the surface,

30:01

and then for any curve you read off the

30:04

momentum associated with that curve by

30:07

homology. Okay. So we saw it already;

30:10

we didn't say it in this slightly

30:12

fancy language, but already here, when

30:15

I draw, even in the disk, when

30:18

I draw something like that, I draw this

30:20

curve, and I say the momentum

30:22

associated with that curve, which I

30:24

square to get the x associated with it,

30:26

is p1 + p2 + p3. But you can sort

30:29

of think that I assigned momenta to

30:30

these boundary components, and the

30:32

momentum of this guy is the same as the

30:33

sum of those guys, because this curve is

30:35

homologous to these; I can

30:38

continuously deform it to the

30:40

boundary. Okay. So in that way you can

30:42

take any surface. Now, in a case like

30:45

this, you see, this curve, for example,

30:47

can't be

30:51

continuously deformed to the

30:54

boundary. So you have to give that one a

30:55

name; we have to come up with a new

30:57

element. So, for this

30:59

curve, I'll give this momentum

31:01

a name: I'll call it L. But then this

31:04

curve is going to be L plus the

31:09

momentum associated with particle 2.

31:11

Okay. So this

31:12

curve would have momentum L plus the

31:15

momentum of particle 2, call it Q. If it

31:17

winds around twice, it would be L + 2Q,

31:20

and so on. Okay.
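
The bookkeeping just described is one line of code (my own illustration; the names L and Q are the lecture's, the four-vectors are made-up numbers): a curve winding n times around the annulus carries momentum L + n·Q.

```python
def curve_momentum(L, Q, winding):
    # homology assigns the curve L plus `winding` copies of the boundary momentum Q
    return tuple(l + winding * q for l, q in zip(L, Q))

L = (10, 0, 0, 5)   # made-up loop-momentum four-vector
Q = (2, 1, 0, 0)    # made-up momentum of particle 2
```

A mapping-class-group (Dehn-twist) action shifts the winding by 1, i.e. L goes to L + Q; since L is integrated over, the amplitude is unchanged, as the lecture notes next.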

31:23

And so that actually gives you a

31:24

completely consistent way of labeling

31:26

what you mean by momenta, including loop

31:28

momenta, for any diagram. Right?

31:32

What's the catch? The catch is that

31:33

there are now infinitely many diagrams.

31:35

Okay, there are now infinitely many

31:37

diagrams. So when you do the action

31:39

of the mapping class group, what it does

31:42

is, it doesn't leave the loop momentum

31:44

invariant, right? It'll shift the loop

31:45

momentum. For example, if I wind here

31:47

once, the loop momentum will shift: L

31:50

goes to L + Q. Now, of course, since

31:52

I'm integrating over the loop momentum,

31:54

the amplitude doesn't change. That's the

31:57

point. But we sort of found a way to

31:59

solve this essential kinematic

32:01

problem of how you label momenta

32:03

directly from homology of curves on

32:05

surfaces, at the expense of having

32:07

infinitely many diagrams. Okay. But

32:09

that's a sense in which it's a good

32:10

thing, right? At least it's a first

32:11

step to letting us put everything under

32:13

the same integral sign. So now all we have to do,

32:16

in quotes, is mod out by

32:19

the mapping class group when we're done.

32:21

And that's what we actually do. We take

32:23

this whole formulas for any surface we

32:25

have the product of u to the x um

32:28

including when there's loops it's the

32:30

product of u to the x. When there's

32:33

loops, the x's will contain loop momenta

32:34

in them and this u to the x will have

32:36

things that are quadratic in the loop

32:37

momenta upstairs. Again, this looks

32:39

exactly like Schwinger parametrization

32:42

if you're familiar with it either in

32:43

physics or math. So we can do the loop

32:46

momenta; they're Gaussian integrals. That's

32:48

exactly the Schwinger parametrization

32:50

point so you're still left with an

32:52

integral over the y's. Now you're done

32:53

integrating the loop momentum. So

32:55

whatever you're left with and you're

32:56

integrating over all these y's is

32:58

mapping class group invariant because

33:00

you've uh integrated over the loop

33:02

momentum. Now it's totally mapping class

33:03

group invariant. So all you have to do

33:05

is now mod out by the mapping class

33:07

group. Okay. And uh you might think that

33:10

modding out by the mapping class group

33:12

would uh would uh uh involve you know

33:15

maybe identifying some fundamental

33:16

domain that then uh that then sort of

33:19

covers uh the the entire space. uh after

33:22

you do the uh action mapping class group

33:25

if you wanted to do that it would not be

33:27

such a fun thing to do but there's much

33:29

simpler ways of modding out by the

33:31

mapping class group so what a physicist

33:32

would call the Faddeev-Popov trick, what

33:34

mathematicians would call the Mirzakhani

33:37

kernel okay are very straightforward

33:40

ways of just modding out by the mapping

33:42

class group to get the answer. So

33:44

that's the answer. So at tree level we

33:46

don't run into this phenomenon even at

33:47

one loop we don't run into this

33:49

phenomenon, but starting at the annulus and

33:51

beyond

33:52

this thing that Bernd mentioned about

33:54

the infinity is ubiquitous but the

33:57

infinity is a good thing somehow across

33:59

the board that if we didn't have

34:00

infinitely many curves these U equations

34:02

would not make sense we really need

34:04

every curve on the surface for these

34:05

equations to make sense not only the

34:07

ones that wind infinitely much even the

34:09

ones that intersect themselves I mean

34:11

every conceivable curve on the surface

34:14

uh shows up when you write down these uh

34:16

>> so no way for me to look at the first 25

34:19

in a meaningful way

34:20

>> ah let's say I want to look at this 25

34:23

variable and 25.

34:25

How should I list them?

34:26

>> Yeah. Yeah, that's right. Yeah. So, um

34:28

so uh one of the I mean that this is one

34:31

of the things that makes this very very

34:32

practical. Um there are infinitely many

34:34

curves and so on. It's true. But the way

34:37

it's presented makes it clear you have

34:38

these ratios of polynomials, and

34:42

there's a sort of concrete sense in

34:43

which you start with your parent graph,

34:46

right? You draw the simple curves.

34:48

The simplest curves are the ones

34:49

that occur in the triangulation and then

34:51

the ones that you know where by simple I

34:53

mean literally the length of the word

34:55

you know how long does the word get okay

34:57

then as the words get longer and longer

35:01

the u variables associated with them get

35:03

closer and closer to one okay remember

35:05

they're stuck between zero and one okay

35:08

and so if the y's are so for example if

35:10

the y is just to say a super concrete

35:13

statement if you think about the u

35:14

variables as a Taylor expansion in the

35:16

y's, okay, the ratios of the polynomials.

35:18

You can certainly Taylor expand them in

35:20

y's. Of course, the y's go from 0 to

35:21

infinity, but uh somehow the the y is

35:24

close to zero means that you're in the

35:26

neighborhood of the uh of the surface uh

35:30

degenerating to the triangulation that

35:32

you started with. Okay. So, if you just

35:33

look at the Taylor expansion in y: at any

35:36

fixed order in the Taylor expansion, a

35:38

finite number of curves have u not equal

35:40

to one. Okay. So first of all in the

35:42

sort of super precise sense you need a

35:44

finite number of curves at any order in

35:46

the Taylor expansion. Secondly, even

35:50

if you let the y's be any

35:52

fixed numbers, you know, you have 10 y's and

35:55

the y's are 17, 23, I just fix some

35:58

fixed numbers for them. As the words get

36:01

longer and longer for those fixed y's

36:03

the u's for longer and longer words

36:05

approach one exponentially quickly.

36:08

Okay, so these U's are sort of very

36:11

they're very practical. If you want to

36:13

sort of numerically evaluate these

36:14

integrals, you don't need infinitely

36:15

many of them remotely. You really need a

36:17

small handful of them because of how

36:19

exponentially quickly they go to one.

36:21

Okay.

36:24

>> Yes.

36:25

>> So these are like some type spaces.

36:27

These spaces have some dimension. Do I

36:29

know how many u's I need? Are

36:31

>> there infinitely many u's? In principle

36:33

there's infinitely many u's. Okay. Uh

36:35

there in principle there are infinitely

36:37

many U. If you want to write down a

36:38

string amplitude, okay,

36:42

it's your choice which u's to put into

36:45

these formulas, okay? Because,

36:48


36:56

first of all, there's

36:58

infinitely many curves,

37:00

partially because even given any one

37:02

curve when the surface is interesting

37:04

enough it can go around and

37:06

self-intersect lots and lots of times.

37:08

So there are all the sort of

37:09

self-intersecting friends of a given curve,

37:13

right? They're there; they have u variables

37:15

associated with them. But notice if a

37:16

curve intersects itself, its u can

37:19

never go to zero because in the u

37:21

equations it occurs in both terms,

37:23

right? So you have u plus a product of

37:25

all the curves that intersect

37:27

x is equal to one. If a curve

37:29

intersects itself, it shows up in both

37:31

terms of the equation. So you cannot set

37:32

it to zero. So that means that all of

37:35

the self- intersecting curves are

37:36

irrelevant for the field theory limit.

37:38

Okay? If you want to take these curves

37:40

and see what they look like in the field

37:41

theory limit, you can just throw all of

37:42

them out. That's the beginning of seeing

37:45

the sense in which we have a bigger

37:46

world of objects that uh kind of

37:48

connects to string theory in some limit

37:50

but is not necessarily string theory

37:52

because it turns out that string

37:53

amplitudes require you to put in every

37:55

single curve on the surface including

37:56

all the self-intersecting ones. Okay.

37:59

And that's why you know if someone

38:01

handed you a one loop string amplitude

38:02

and said please take the field limit

38:04

you'd be a little terrified of doing it

38:06

because of these horrible Jacobi theta

38:07

functions everywhere. What are you

38:08

supposed to do? Where are those coming

38:10

from? They're coming from the product of

38:11

these infinitely many self-intersecting

38:12

curves. They're literally irrelevant

38:14

baggage as far as the field theory limit

38:17

is concerned and you do not need them.

38:18

Okay. So when you ask the question what

38:21

you include it's a little bit up to you

38:23

what you want to do. Okay. Um so um but

38:27

but if I'm strictly talking about the field theory

38:30

limit the answer is a little bit more

38:31

interesting and it's related to

38:32

something I just want to say about these

38:34

U variables. Now

38:37

uh and then I'm just going to answer

38:38

this question, but it dovetails exactly

38:40

with what I was going to say.

38:42

Um,

38:46

so

38:49

so I have these u variables that depend

38:51

on the y's

38:54

for any x. For

38:59

some yk, let's once again put yk equal to

39:02

e to the minus tk, just so I can talk

39:05

about tropicalization. So again,

39:09

we're interested now what happens when

39:10

when when the sort of t uh goes to

39:13

infinity. You know what's going on

39:14

when the t's go to infinity. And so u is

39:18

going to go to e to the something, e

39:20

to the, we'll call it alpha, times the

39:23

sort of tropicalization of ux. Okay,

39:26

that's,

39:29

sorry, e to the Trop of ux, and

39:34

this I'm going to call e to the alpha x.

39:37

Okay. So now alpha x is going to be some

39:39

piecewise linear function on this t-space.

39:43

Now the properties of the u's guarantee

39:46

the following.

39:54

So sort of key point about

39:57

uh tropicalization and these alpha

40:00

variables is that these alphas are the

40:02

global Schwinger parameters.

40:06

And so the the key point is that if I

40:08

take alpha for a curve x, it's a

40:11

function on this space where

40:14

all these cones live right in this t

40:16

space. If I take alpha for curve x and I

40:19

evaluate it on the g vector for the

40:21

curve y,

40:23

it's equal to zero if y is not equal to

40:27

x

40:28

and otherwise one if y equals x. Okay.

40:31

So, so the tropicalization of the U

40:34

variables lights up the G vector okay

40:36

for its own variable and a zero on

40:38

everybody else. Okay.
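A minimal check of this delta-function property, assuming the simplest one-dimensional (four-point) case, where the two curves have tropical functions max(t, 0) and max(-t, 0) and g-vectors +1 and -1 (an assumption made only for illustration):

```python
# Minimal check of alpha_x(g_y) = delta_{xy} in the simplest
# one-dimensional case (assumption: the two curves have tropical
# functions max(t, 0) and max(-t, 0), with g-vectors +1 and -1).
alphas = {
    "x1": lambda t: max(t, 0.0),
    "x2": lambda t: max(-t, 0.0),
}
g_vectors = {"x1": 1.0, "x2": -1.0}

for x, alpha in alphas.items():
    for y, g in g_vectors.items():
        value = alpha(g)                     # alpha_x evaluated on g_y
        expected = 1.0 if x == y else 0.0    # lights up only its own curve
        assert value == expected
print("alpha_x(g_y) = delta_xy checked")
```

Each tropicalized u lights up the g-vector of its own curve and vanishes on every other, exactly as stated in the lecture.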

40:42

And that's why if you then just write

40:44

down this integral integral you know uh

40:47

d and t whatever the dimensionality is

40:50

uh e to the minus the sum over all the

40:52

curves on the surface of the x. I'm sorry,

40:56

maybe I should say the sum

40:58

over all curves c of the x associated with

41:00

c. Remember, with every curve we just

41:02

figured out how to associate a momentum. I

41:04

can square that momentum to get uh an x

41:07

or I could just say that there's

41:09

variables xc associated with every curve

41:11

that's my kinematics. My kinematic basis is

41:13

some variables XC associated with every

41:15

curve multiplied by the alpha associated

41:19

with that with that curve.

41:22

Okay, this is something which looks like

41:26

Schwinger parametrization in every cone.

41:28

Okay. And this is something which is

41:31

equal to the sum over all the cones

41:34

the product of one over the

41:38

x little c's for the c's

41:42

belonging to the cones. Okay,

41:46

which is what we call the amplitude,

41:53

right?
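A minimal numerical check of this cone-by-cone Schwinger structure, again assuming the simplest one-dimensional case with piecewise linear functions max(t, 0) and max(-t, 0): the t integral reproduces the sum over the two cones, 1/X1 + 1/X2:

```python
import numpy as np

# Sketch of the tropical (curve-integral) formula in the simplest
# one-dimensional case: integral dt e^{-(X1*max(t,0) + X2*max(-t,0))}
# should equal 1/X1 + 1/X2, one term per cone of the fan.
X1, X2 = 2.0, 5.0
t = np.linspace(-50.0, 50.0, 400001)
integrand = np.exp(-(X1 * np.maximum(t, 0.0) + X2 * np.maximum(-t, 0.0)))
amp = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(t))  # trapezoid rule

print(amp, 1.0 / X1 + 1.0 / X2)   # both come out at about 0.7
```

In each cone the integrand is a pure exponential, which is the Schwinger representation of one propagator; summing the cones gives the sum over channels.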

41:53

So now this brings me back to your to

41:55

your question. Okay. So, um uh if if

41:59

we're if we're doing this uh before

42:01

doing loop integration, we're doing this

42:04

before we're doing loop integration, all

42:05

of these x's have the loop variables in

42:07

them. We would have all these infinitely

42:09

many winding guys, right? And that's

42:12

fine. Okay, I do the loop integration.

42:16

This is in Schwinger form. So, I know

42:18

what I get. I get uh I get the uh

42:21

Symanzik polynomials, except these you

42:23

would call surface Symanzik polynomials.

42:25

So they're not Symanzik polynomials for

42:26

a fixed graph. They're the Symanzik

42:28

polynomials associated with the entire

42:29

surface. Okay. So there's an analog

42:31

of Symanzik polynomials where we put all the

42:32

curves on the surface together. They're

42:34

the same old polynomials, very similar to

42:36

the polynomials that you had before,

42:37

except with these alpha C's showing up.

42:39

Okay.

42:41

Um and now you have something which is

42:44

mapping class group invariant. So you

42:46

have to mod out by the mapping class group.

42:47

You mod out by the mapping class group by

42:49

putting a little extra factor of a

42:50

kernel in this measure that uh does

42:54

again, to a physicist, the Faddeev-Popov trick

42:56

for modding out by

42:58

symmetries. Okay, there's a very

43:00

standard and simple way of producing

43:01

this kernel. You can do it for all

43:04

surfaces at once. Uh it's algorithmic.

43:07

You can put it on the computer and do it

43:08

to 10 loops. It's it's all it's all

43:10

fine. Okay. So So these formulas are

43:14

absolutely concrete. There's nothing

43:15

formal about them. you can put them on

43:17

the computer and integrate uh okay so

43:20

Giulio Salvatori has put them on computers

43:22

and integrated them up to 10 loops, okay,

43:24

so there's no issue

43:27

you know, say there's an epsilon here, a

43:29

rotation there. Of course, if the

43:32

kinematics is positive, Euclidean, etc., etc.

43:35

once the kinematics gets interesting all

43:37

the usual physical issues about

43:39

thresholds and so on, those of course don't

43:42

disappear but I just want to stress this

43:44

is not some sort of formal object in in

43:46

the regime where the integral looks well

43:48

defined. It's 100% well

43:50

defined. But now coming back to your

43:52

question you see the purpose so what

43:55

what I'm uh uh supposed to do in general

43:59

after I loop integrate is put this

44:01

kernel there. The purpose of the kernel

44:03

is to sort of kill the faraway

44:06

windings. Okay. So that's effectively

44:08

what it does.

44:09

>> Can you say a little bit more about the

44:10

kernel?

44:10

>> Yes.

44:11

>> You don't wake up every morning and

44:13

think about me.

44:14

>> No, I know. I'm sorry. Yeah. So let me

44:15

instead of saying uh let me let me say

44:17

it in a simpler let me say what it is in

44:20

a simpler example.

44:23

So um

44:26

and this is really all that's going on

44:28

but we can see it already in this

44:30

example. Let's say you have a a function

44:32

of of just one variable. Let's say have

44:34

an f ofx which is translationally

44:36

invariant by some amount a. So it's

44:38

equal to f of x plus a.

44:41

Okay.

44:43

But what you want to do is uh make sense

44:46

of integral minus infinity to infinity

44:48

dx f ofx.

44:50

Okay, this is of course infinite exactly

44:53

because it's translationally invariant. Okay,

44:56

so there are these uh translations t

44:59

uh so this is equal to infinity.

45:04

And so what's our usual attitude about

45:06

this? Our usual attitude if you're a uh

45:08

you know if you're in high school

45:10

student or undergrad or something is to

45:12

find a fundamental domain right so in

45:14

this case fundamental domain would be a

45:16

little interval that starts at b, any

45:18

old b and goes to b plus a okay and so

45:22

you say really what I should do is I

45:24

want to make sense of integral dx fx mod

45:30

translations whatever this means mod

45:32

translations okay but what this should

45:34

mean is just the integral from b to b

45:37

plus a dx of f ofx. Okay. So this is the

45:41

fundamental domain idea.

45:46

All right.

45:48

And you can try to do that for surfaces

45:50

too. Just a mapping class gets more and

45:52

more complicated. This is not a fun

45:54

problem to try to identify a fundamental

45:55

domain. You can do it for a torus, you

45:58

know, but things get uh I mean already

46:00

for a torus SL(2,Z) is not, you know, a

46:03

walk in the park. It's not that hard,

46:04

but it's not the most trivial thing you

46:06

do. And then it gets more and more

46:07

complicated from there. Okay.

46:09

All right. But the sort of Faddeev-Popov,

46:13

or but physicists run into this issue

46:15

all the time, right? When you define any

46:18

path integral in the gauge theory, it's

46:19

infinite because of gauge invariance

46:21

and the volume of the gauge group is

46:22

infinite. Um so we have to figure out

46:25

how to mod out by these uh by these kind

46:27

of symmetries all the time. And when we

46:30

do the path integral of gauge theory, we

46:31

absolutely do not do this. You don't

46:33

find the analog of the fundamental

46:35

domain which is insanely complicated.

46:37

That's a sort of space of all possible

46:38

gauge-invariant states. Ridiculously

46:41

complicated. You would never think of

46:42

doing that. Instead you uh judiciously

46:46

insert one. Okay. So what you do is you

46:50

say um I'm going to just pick any old

46:54

function. This case I'm going to pick

46:56

any old function g of x. Okay.

47:01

Let me pick some g of x. I'm gonna draw

47:04

g of x. Here's g of x. And g of x is

47:08

going to look like this. Any old random

47:10

g of x you like. Okay.

47:15

Okay. But I'm now going to insert one is

47:18

equal to the sum over all k of g of x +

47:23

k a

47:25

just translating g divided by the sum

47:29

over all k g of x + k a. So clearly I

47:33

haven't done anything. Okay. And I'm

47:34

just taking the function and all of its

47:36

translates. Okay.

47:40

So far so good, right?

47:45

So I'm going to insert that into my

47:46

integral. My integral is integral minus

47:48

infinity to infinity dx to begin with

47:50

one. But I want to somehow mod out by

47:52

the translations. Okay. 1 * f ofx. But

47:56

the one I'm going to write as a sum over

47:58

k g of x + k a over I'm going to call

48:03

this big sum g. I'm call this big sum

48:06

capital g. Okay, capital g. The only

48:10

thing I need is that capital G does not

48:12

vanish anywhere. Okay, so I want this

48:15

formula to be meaningful. So capital G

48:18

better not vanish anywhere. So G of X

48:19

had better be sufficiently nonzero

48:22

so that when I translate it, the sum never vanishes. It

48:25

can itself even vanish in a lot of

48:27

places. Like, g of x could even look like

48:28

this. And G of X could look like this.

48:31

So long as this length is bigger than A.

48:34

Okay. So that when you translate it, the

48:36

translates sum together never go to zero

48:38

anywhere. Okay, so I just want this

48:44

denominator to never vanish. Multiplied,

48:46

sorry, times f of x,

48:48

and then it's sort of obvious what's

48:48

happening because precisely by

48:50

translational invariance in every term

48:52

of this sum I could translate back to k

48:56

equals zero if you like, okay, so modding out

49:00

by the so if there was a fundamental

49:02

domain even without finding it I know

49:05

that uh that that this uh is going to be

49:08

the same as the integral minus infinity

49:11

to infinity just one term here for

49:13

example just g of x / capital g * fx

49:20

okay

49:23

if you like all the other ones are just

49:25

copies of this one by the action of the

49:27

mapping class group, and modding out precisely

49:29

throws them out now this is very cool of

49:33

course if you make a choice for g of x

49:34

to be a step, right, with length

49:39

a. This goes back to the fundamental

49:42

domain picture. Okay, you can put g

49:45

anywhere you want and this goes back to

49:47

the fundamental domain picture because

49:48

the sum of the g's is just equal to one.

49:51

But you don't have to be smart to do

49:52

that. Just choose a random g, any g that

49:54

you like. Um uh it's a little fun to do

49:57

it, you know, with a Gaussian or

49:58

something. It's slightly surprising that

50:00

this integral over everywhere without

50:02

thinking gives you the right answer, but

50:04

of course it's designed to do that.
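The insertion of one can be checked numerically; the specific f and g below are toy assumptions, chosen only so that f is a-periodic and the sum of translates of g never vanishes:

```python
import numpy as np

a = 1.0                                              # translation period
f = lambda x: 2.0 + np.cos(2.0 * np.pi * x / a)      # f(x) = f(x + a) (toy choice)
g = lambda x: np.exp(-x ** 2)                        # any old g(x) (toy choice)

def G(x, K=60):
    """Capital G: sum of all translates g(x + k*a), truncated at |k| <= K."""
    return sum(g(x + k * a) for k in range(-K, K + 1))

def trapezoid(vals, xs):
    return float(np.sum(0.5 * (vals[1:] + vals[:-1]) * np.diff(xs)))

# "Insert one" and keep a single term: integrate (g/G)*f over the whole line.
xs = np.linspace(-25.0, 25.0, 200001)
I_kernel = trapezoid(g(xs) / G(xs) * f(xs), xs)

# Fundamental-domain answer: integrate f over any one period [b, b + a].
b = 0.3
ys = np.linspace(b, b + a, 20001)
I_domain = trapezoid(f(ys), ys)

print(I_kernel, I_domain)   # both come out at about 2.0
```

Without ever identifying a fundamental domain, the kernel g/G reproduces the quotient integral, which is exactly the point of the trick.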

50:05

Okay. So, this is what

50:07

physicists call the Faddeev-Popov trick.

50:08

Perhaps it shouldn't be dignified with

50:10

that. I'm sure somebody did it in the

50:13

1700s.

50:13

>> This is your way of taking a quotient.

50:15

>> This is a way of taking a quotient. This

50:16

is a practical way of taking a quotient

50:18

without manifestly identifying a

50:19

fundamental domain. Okay, that's the

50:21

sort of key key point.

50:23

>> Can I follow up? >> Absolutely, but one more

50:25

only. One, not two.

50:27

>> Okay. So, I'm going to I'm going to ask

50:29

the what's in it for me question and

50:31

it's as follows. So in this institute

50:33

there's a geometry group.

50:35

>> Yes.

50:35

>> And there's an algebra group.

50:36

>> Yes.

50:37

>> So now you moved you know everything

50:39

into the geometric sense of Teichmüller space

50:42

and the algebra.

50:44

Yeah.

50:45

>> So I would like to take something back

50:47

to the study of algebraic curves.

50:50

>> So the moduli space Mg or Mgn

50:54

>> and I'm interested in algebraic curves.

50:56

>> Yes. Yes.

50:57

>> Not in Riemann surfaces. So how

50:59

can I make use of this technology to

51:02

study algebraic curves?

51:04

>> Uh, I don't know. What

51:09

I would say

51:13

is that what makes it useful for us

51:16

>> is in fact its distinctly

51:20

algebraic flavor. I mean that's the

51:21

point. You just have these algebraic

51:23

equations you solve as ratios of

51:25

polinomials and this does something. it

51:28

uh it gives a gives an algebraic way of

51:30

characterizing

51:31

>> but the algebra is on the on the

51:32

geometric analytic side right

51:34

>> sorry

51:34

>> it's we have to somehow cross the

51:36

transcendental divide I mean you spoke

51:38

about ugly Jacobi theta functions; they

51:40

seem nicer

51:42

>> oh yes well

51:43

>> we have to somehow go from the from the

51:46

analytic side to the algebraic side

51:49

maybe this I'm willing to do numerical

51:51

computations even with Giulio.

51:53

>> right well um um

51:57

Maybe one one one interesting point uh

52:01

uh to make is also related to your to

52:03

your earlier questions about the uh

52:04

infinitely many use. So you can ask I

52:07

mean there are we know that uh we're

52:10

talking about string theory at loop

52:12

level. There are Jacobi theta functions

52:14

everywhere. Where do the Jacobi theta

52:15

functions come from in this world?

52:16

>> For example,

52:17

>> The Jacobi theta functions come from

52:19

the product over infinitely many curves.

52:21

Okay. So that's so that of course you

52:23

know the product representation for

52:25

Dedekind eta or Jacobi theta is like a

52:27

product of 1 - y to the k, right? What are

52:30

those products of 1 - y to the k? In this

52:32

language, the 1 - y to the k is coming from

52:35

these F-polynomials, and there's

52:37

infinitely many terms of the product

52:38

because we have infinitely many

52:40

self-intersecting curves so that's uh uh
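A quick numerical illustration of why this infinite product is harmless: in the truncated product over k of (1 - y^k), the building block of the Dedekind eta and Jacobi theta product representations, the factors approach 1 exponentially fast, so a handful of "short-word" factors already pins down the answer (y = 0.3 below is an arbitrary value in (0, 1)):

```python
# Truncated Euler-type product prod_{k=1}^{K} (1 - y^k): the factors
# approach 1 exponentially fast in k, so only a handful of short-word
# terms matter; the rest are exponentially close to 1.
def partial_product(y, K):
    p = 1.0
    for k in range(1, K + 1):
        p *= 1.0 - y ** k
    return p

y = 0.3   # arbitrary illustrative value in (0, 1)
for K in (3, 5, 10, 50):
    print(K, partial_product(y, K))
```

The K = 10 and K = 50 truncations already agree to many decimal places, which is the sense in which the infinite product comes "pre-regulated" in this language.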

52:43

and uh maybe at least one thing that I

52:47

find uh interesting is that a lot of

52:50

this story uh can be abstracted away

52:52

from curves on surfaces. Uh there's even

52:54

more general settings where you can have

52:57

uh associated both with cluster algebra

52:59

and quiver representation theory and so

53:01

on and so forth. There's more general

53:02

settings where you can associate all of

53:04

these things. There's sets of variables

53:06

and u variables that go along with them.

53:07

They satisfy U equations. There's

53:10

infinitely many of them and so on. And

53:12

so it's natural to wonder whether

53:14

whether the things like jacobi theta

53:15

functions that are associated with

53:17

surfaces have similar generalizations in

53:19

these bigger sets that um uh it's

53:22

somewhat for example uh many just to say

53:25

a negative: if I give you a Jacobi

53:27

theta function the most exciting thing

53:29

about it is its modular properties, and so

53:31

you can ask are modular properties

53:32

obvious in this way of writing things

53:34

absolutely not as far as I can see you

53:36

know what this tells you is that it's

53:38

good to write the the formula product of

53:40

1 - y to the k that comes out of this

53:43

sort of way of thinking

53:44

>> but we should see this and then using

53:45

the u-equation formalism we should see

53:48

Riemann theta functions and all that

53:50

>> yes absolutely absolutely yes

53:52

>> and it could be a tool it could be a

53:53

tool to study and evaluate

53:55

>> it could be and I think one really a key

53:58

thing which is uh which has come up in

54:00

the question I also haven't uh answered

54:02

yet is it gives a natural regularization

54:04

of these things okay so that's kind of

54:06

kind of the point um if you're a if

54:09

you're a string theorist talking about

54:11

uh calculations at high loop order.

54:13

There's some Green's function on a

54:14

surface that you have to compute. You

54:15

compute it roughly by the method of

54:17

images. It involves infinite products

54:18

that have to be regulated. They're very

54:20

subtle things. When they come out in the

54:22

u-form, they don't have to be regulated,

54:23

because they're manifestly finite

54:25

products of things that are bounded

54:26

between zero and one. So uh uh and it

54:29

and it gives you a picture of what's

54:30

regulating it. The sort of degree of

54:32

complexity of the curves that you're

54:33

talking about is what is regulating it.

54:35

So I think the that connection what what

54:38

the probably the most uh the most useful

54:40

thing about it is a way to sneak up on

54:42

these very infinite objects in a

54:44

well-defined finite way that has geometric

54:46

meaning right in terms of putting these

54:48

cutoffs on the complexity of the word

54:50

and on uh on how complicated the curve

54:53

looks on the surface. And finally coming

54:55

back to your question um you see this

54:58

now means so what was the purpose of the

55:00

g of x the purpose is kind of mostly to

55:03

shut off a little bit away from a right

55:06

so now of course uh when we when we do

55:09

this in our setting with the mapping

55:11

class we don't choose random g of x's we

55:13

choose the g of x's to be made out

55:15

exactly out of these alphas right so

55:17

it's very easy to build you know sums of

55:20

alphas whose translates uh are never

55:23

zero anywhere that you can put in the

55:25

denominator, right? But that means that

55:28

in this formula in the numerator, you're

55:29

just going to have the alphas for some

55:31

finite set of curves because there's a

55:33

finite set of curves upstairs. It's

55:36

there's literally a finite region in

55:38

this fan in which this kernel is non

55:40

zero. Okay? And so you don't need to

55:42

keep infinitely many u's.

55:47

Okay? So that makes it uh uh that makes

55:50

it that's that's what really allows it

55:51

to be completely practical because uh uh

55:54

I told Bernd that these u variables

55:56

mostly go to one or if you tropicalize

55:58

them only in very narrow cones far away

56:02

uh do they matter but in a precise sense

56:05

you can throw them out if you're doing

56:07

uh the field theory uh computations when

56:09

you mod by the mapping class because

56:11

they are literally you don't go there

56:13

okay it's just that a fundamental domain

56:16

is a low brow way of saying it is

56:18

choosing one representative set of Feynman

56:20

diagrams. So if you choose one

56:21

representative set of diagrams to cover

56:23

the space. If you do that that's going

56:24

to look very ugly. That means you have a

56:26

weird collection of cones uh that you

56:29

say okay this collection and their and

56:32

their translates are going to cover the

56:34

space. But you have to artisanally choose

56:36

the collection of cones. Instead of

56:38

doing that you just sort of light up

56:40

some region by this method. Okay. say

56:42

here's some sort of wedge and that's

56:44

what I shove upstairs and then I'm done

56:47

by by this uh by this uh by this idea.

56:50

So long as I just do the integral over

56:51

that cone that's going to it's it's

56:53

going to pick a third of a Feynman

56:55

diagram from this cone and 2/3 of

56:57

one from the other cone I'm going to add

56:59

them up in some nice way but without

57:00

thinking I cover the uh entire space and

57:04

then it's finite then it's really a

57:05

finite number. Yes,

57:06

>> I know you stated the third commandment

57:08

that is forbidden to ask the third

57:10

question. Yes,

57:11

>> I will cross this forbidden.

57:14

>> Yes,

57:14

>> things. Uh, okay. So, um it seems that

57:18

you are trying to um go back and forth

57:21

um in this crosspollination between

57:23

mathematics and physics to this problem.

57:25

Um, it's normal you get the kicks

57:29

from abstraction.

57:29

Um yeah um one natural question um uh

57:34

yeah which arises I think um are there

57:37

any toy models in three dimensions with

57:40

the same formalism uh of course without

57:43

the monstrosity focus on the right

57:45

topic. Um

57:47

>> I I will stop you and say this is

57:50

already a toy model. It's already

57:53

it's already a toy model uh that that

57:56

actually uh uh is dimension agnostic. It

57:59

could be in three dimensions if you

58:00

wanted. That's uh I I never said what

58:02

the dimensionality was in the story.

58:04

>> Yeah. So you're you're saying that

58:05

there's a natural generalization between

58:07

>> generalization or specialization, you

58:09

know, that's uh there's a there's a

58:11

there's the number D. It doesn't have to

58:12

be an integer. It could be complex, you

58:14

know, it could be anything you want. Um

58:16

uh but um that's one of the nice things

58:19

about working with the the scalers. As I

58:23

mentioned yesterday, although I won't

58:24

have uh uh I won't have time to explain

58:27

it in detail, it is surprising that

58:29

starting with this formula just for

58:30

scalers secretly knows about gluons as

58:33

well. Okay, so there's a way in which it

58:35

knows about gluons and pions also in any

58:37

number of dimensions. So that adds a lot

58:39

of physical excitement and novelty about

58:43

these ideas that uh that you're saying

58:45

but uh but nowhere here do we say

58:47

anything about the number of spacetime dimensions.

58:50

>> Professor, you're saying that the

58:52

generalization is completely obvious and

58:54

we have only 3x3 matrices, for

58:55

example, for three dimensions.

58:56

>> Oh no, no, no, that three is nothing.

58:58

In any number of spacetime

59:01

dimensions you'd always have the same

59:02

2x2 matrices. If you're asking if there

59:04

it's a generalization where you have

59:06

instead of words I don't know some

59:08

threedimensional words and you'd

59:10

multiply 3x3 matrices and 2x2 matrices I

59:13

have no idea probably there is because

59:14

mathematicians are very interesting

59:16

people who invent all kinds of things

59:18

but uh but uh anyway uh I think we have

59:21

to stop now for the break right for the

59:24

tea break anyway so but so what what

59:26

what I what I can do in the time that I

59:28

have is give you what I mentioned in the

59:31

beginning is the sort of a very new

59:34

application of uh these uh of the ideas

59:39

that we've just been talking about. Um

59:43

uh

59:44

that gives us access to a very

59:46

interesting region of uh uh physical

59:49

processes involving uh scattering with

59:52

asymptotically large number of particles.

59:56

Um so

60:00

so um so we're going to again talk about

60:02

this trace phi-cubed theory, but I want

60:05

to imagine I have amplitudes

60:08

um in Tr φ³

60:12

um but where I have sort of n particles

60:18

um and n goes to infinity. Okay.

60:24

And I want to say something about what

60:26

these uh uh amplitudes look like. Now

60:29

again, uh I'll just say this uh uh I

60:33

said it already before as part of the

60:35

motivation. Um what I like about this

60:37

this question is it forces you to think

60:40

uh it makes it very clear what part of

60:42

what we're doing before is continuously

60:45

connected to standard ways of doing

60:46

physics and what could be really new

60:49

because none of the ways you think about

60:52

computing uh amplitudes whether we

60:54

interpret amplitudes as canonical

60:56

forms or BCFW or other kinds of

60:59

recursion relations every one of these

61:01

pictures has something recursive built

61:03

into it right a canonical form is an

61:05

object

61:06

that recurses to a canonical form,

61:08

right? Um so uh you build up higher

61:11

point amplitudes by gluing together

61:13

lower-point ones, right? So it's all a

61:14

picture that the simplicity is in few

61:16

particles and complexity is in many.

61:19

Okay? And so we're looking for something

61:21

essentially new where the simplicity

61:23

when the number of particles is

61:25

huge. So that's what what we're looking

61:27

for. Okay.

61:28

And um the clue that that such a thing

61:32

is possible is from the tropical

61:34

representation of the Tr φ³

61:35

amplitudes that I now want to write on

61:38

this board. Okay. So if I look at the

61:41

tree amplitudes for Tr φ³ for n

61:44

then I can write it as this integral: the

61:47

integral of d^(n-3)t

61:49

e to the minus S,

61:52

okay that depends on t

61:56

and s of t uh this is now associated

61:58

with this picture of the mesh okay

62:08

so S(t) is the sum of x1j t1j.

62:15

Okay. So these T's are associated with

62:17

this bottom uh boundary. So this is this

62:20

is the mesh.

62:22

Um we mentioned it I mentioned it before

62:25

but that's associated with this kind of

62:27

uh this kind of triangulation where

62:29

that's one 2 3 4. Okay. So um

62:35

um and

62:38

uh so again this is just a picture uh as

62:40

a mnemonic for thinking about all

62:42

the variables in the problem that makes

62:43

it easy to write this expression. Um and

62:47

then you have, for every internal c, a

62:50

term. So you have a sum over these

62:53

internal c's, where each one is c_ij

62:59

times something: a max of zero and

63:02

a bunch of things, right? And um so just

63:06

let me just give it concretely in this

63:08

example. Um just I I hate it when people

63:11

write formulas with bounds on indices

63:13

that you can barely read on the side.

63:15

Okay? So I'm not going to do that. So

63:17

I'll just sort of show what it looks

63:19

like uh in a big enough example where

63:21

you can see how it generalizes. Okay. So

63:24

if I have this example, that's 13, 14, 15.

63:28

Okay. So my action

63:37

is uh t13 x13 plus t14 x14

63:43

plus t15 x15.

63:46

Okay. And then uh associated with this

63:48

C. So this is C13. I have plus C13 max

63:53

of zero and minus t13.

63:56

Okay. So in Carolina's language remember

63:59

this was associated with this little

64:02

that little uh curve in that direction.

64:05

This guy's the other direction. This

64:07

guy's uh uh uh the other direction. So

64:10

there's C13. This would be 2 4 5 2 36 35

64:15

26 and 46. Okay. So then I'd have plus

64:18

uh so I'm going up this way also because

64:22

organized by how complicated these uh

64:25

these things are. So plus c24 max of 0

64:28

and minus t14 plus c35 max

64:33

0 and minus uh t15

64:38

and then uh here and here I have plus uh

64:41

c14

64:42

max of zero and you see I've gone up to

64:46

14 here. Okay, so it's max of zero, minus t14,

64:50

and minus t14 minus t13.

64:53

Okay, so I go as far up as I go and I go

64:57

down from there to see it's a max of uh

64:59

all of those guys. Okay, so this guy

65:02

instead would be plus uh C25

65:06

max of zero, minus t15, and minus t15

65:11

minus t14.

65:14

Okay. And finally this one would be plus

65:16

C15

65:18

max of 0, minus t15,

65:23

minus t15 minus t14, and minus t15 minus t14 minus t13.

65:33

By the way, I apologize if these things

65:35

have minus signs on them. You might have

65:36

thought it would be convenient to uh uh

65:40

you know just reverse all sign

65:41

conventions and call all these things uh

65:43

plus signs. If you thought that you

65:45

would agree with all my collaborators uh

65:48

and so in in the papers you'll find it

65:50

in that uh officially intelligent way

65:53

but I have my own personal reasons for

65:55

preferring it this way. So screw you

65:56

all.

66:00

All right. Um, and that's a technical

66:03

term for my collaborators.

66:06

Um, okay. So anyway, so what again the

66:10

the and so I hope that the the pattern

66:12

for any n is is is clear, right? You're

66:14

getting you got a max of zero and these

66:16

strings of sums of uh of uh of negative

66:19

t's. Okay.

66:21

So once again in this example already we

66:23

see we have you know uh 1 2 3 4 5 6 we

66:27

have six little uh tropical functions

66:31

that are going to turn this

66:33

three-dimensional space into 14 cones.

66:36

Right? So here we have 14 diagrams at

66:39

six points; as Catalan numbers, we have 14

66:41

diagrams and so we're starting to see

66:43

the point right that as you build a

66:45

large n there's n squared roughly of

66:47

these uh of these maxes they have order

66:50

n terms each and magically they they

66:52

they uh their domains of linearity turn

66:55

into all the roughly four to the n

66:57

diagrams at large n. Okay.
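This counting of cones can be checked directly. A minimal sketch (my own illustration, not from the lecture): encode the six max-terms for n = 6 as lists of linear forms in (t13, t14, t15), pick generic positive c's, and count the distinct gradients of the piecewise-linear part over random sample points, since each maximal cone of the fan carries its own constant gradient. Variable names and sampling parameters here are illustrative choices.

```python
import random

# The six tropical max-terms of the n = 6 action, as lists of linear
# forms in (t13, t14, t15). The linear x·t piece shifts every gradient
# by the same constant, so it is dropped for counting purposes.
TERMS = [
    [(0, 0, 0), (-1, 0, 0)],                             # c13 max(0, -t13)
    [(0, 0, 0), (0, -1, 0)],                             # c24 max(0, -t14)
    [(0, 0, 0), (0, 0, -1)],                             # c35 max(0, -t15)
    [(0, 0, 0), (0, -1, 0), (-1, -1, 0)],                # c14 max(0, -t14, -t14-t13)
    [(0, 0, 0), (0, 0, -1), (0, -1, -1)],                # c25 max(0, -t15, -t15-t14)
    [(0, 0, 0), (0, 0, -1), (0, -1, -1), (-1, -1, -1)],  # c15 (four arguments)
]

random.seed(1)
C = [random.uniform(1.0, 2.0) for _ in TERMS]  # generic positive weights

def gradient(t):
    """Gradient of sum_a C_a * max_a at t; constant on each cone of linearity."""
    g = [0.0, 0.0, 0.0]
    for w, forms in zip(C, TERMS):
        best = max(forms, key=lambda f: f[0] * t[0] + f[1] * t[1] + f[2] * t[2])
        for i in range(3):
            g[i] += w * best[i]
    return tuple(round(x, 9) for x in g)

cones = {gradient([random.uniform(-5, 5) for _ in range(3)])
         for _ in range(100_000)}
print(len(cones))  # one gradient per maximal cone; the lecture says 14
```

Sampling uniformly in a box around the origin is enough here because the cones are full-dimensional and meet at the origin.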

67:00

So that's why there are some sort of

67:03

vague hope that uh maybe this integral

67:06

it's a single integral. I'm not talking

67:08

about summing diagrams. It's a single

67:09

integral. It's a single integral made

67:11

out of like simple objects that if you

67:13

sort of squint even maybe have some kind

67:15

of seemingly nice continuum limit like

67:18

these sums of consecutive t's maybe look

67:20

like an integral. You go to large n you

67:22

replace sums with integrals or that

67:24

integral kind of turns into a path

67:26

integral. There's all sorts of sort of

67:28

vague words that you could say about why

67:30

you expect something nice to happen here

67:33

in the large n limit. Okay, that's

67:36

exactly what what we're going to see.

67:37

Um, but I first want to uh uh I want to

67:41

do two things here. Um uh first I want

67:44

to be a little bit more precise about um

67:48

what a large n limit could mean

67:51

because you take a limit you always have

67:52

to say what you're holding fixed. Mhm.

67:54

>> Okay. So, we have to hold something

67:56

fixed as we go to infinity.

67:59

And then I want to do the super simplest

68:01

case of the large n limit just to give

68:03

us an idea of what what we're looking

68:04

for. Okay.

68:06

So, the first comment is uh what are we

68:09

holding fixed in the large n limit.

68:11

Okay. Uh what's fixed? So large-n

68:15

kinematics, essentially, is what I want to talk

68:16

about

68:18

and if you're a physicist we could

68:21

actually spend a fair amount of time on

68:23

this topic. I think it's actually uh uh

68:26

interesting to talk about but let me

68:28

sort of give you a quick impression.

68:30

I'll make a very very precise statement.

68:32

Um so if you're a physicist you'll care

68:34

about the physical implications of all

68:35

these things but we can just sort of

68:36

make the precise statement quickly. So

68:38

remember we said that uh we started with

68:41

this picture where we imagined you know

68:43

we have momentum of particles that we

68:46

put uh end to end uh to make this

68:49

momentum polygon to give us uh this

68:51

picture of a momentum conservation.

68:53

Okay, so this picture immediately so as

68:55

n goes to infinity, you have to somehow

68:57

give me a large number of momenta,

68:59

right? And but in some way that

69:00

something is held fixed. So the obvious

69:02

thing to do here is to say that what's

69:05

held fixed is just this curve. I draw

69:07

some curve. Okay, I draw some curve C

69:13

and then for any finite end, well I

69:16

just, you know, plonk down n points on

69:17

this curve.

69:20

Of course, maybe I plonked them down

69:22

with some density. You know, there's

69:24

more details we could talk about about

69:25

how I plunk them down, but at zero

69:27

order, I'm just going to plunk down a

69:29

bunch of points on this curve, and I'm

69:30

going to use that to define my momentum

69:32

polygon. Okay, so this defines my

69:35

momentum polygon

69:37

for any finite n.

69:40

Okay, so at least here we've defined

69:42

something, right? Here we've uh here

69:43

we've defined uh uh here we've defined

69:46

something uh that is going to uh stay

69:49

fixed as n goes to infinity. And so

69:52

that's what we want to know. We're going

69:53

to have the amplitude is going to depend

69:54

on these n momenta. But the hope is

69:57

again this is sort of vague that at

69:58

large n the amplitude will only depend

70:00

on this curve c and it won't depend on

70:04

the particular way that you put uh

70:06

points on it. Okay, that's the that's

70:09

what that's what uh we want to see if

70:11

it's true. something like that is true.

70:15

Now that picture

70:17

immediately translates to a kind of

70:19

obvious statement in the language of this mesh.

70:22

If we think about this mesh as defining

70:24

our kinematics space, don't worry, I've

70:27

got to write down a simpler version of

70:28

this equation again if you're missing

70:30

it.

70:32

um is that uh so another way of

70:36

talking about this kinematic limit, a

70:38

slightly more general

70:40

way of talking about it, is: draw this

70:42

mesh

70:43

okay

70:45

but now just imagine that the x so so my

70:48

kinematic variables are some x_ij's in

70:50

here right so and I'm just going to

70:54

imagine this is a very fine mesh okay so

70:57

I'm going to imagine that the x_ij's on

70:59

the inside are really secretly some

71:03

smooth function I'll call it x of u and

71:05

v, where for example u is i/n and v is

71:09

j over n.

71:11

okay so it's like here's u it goes from

71:15

0 to 1 here's v it goes from 0 to one

71:20

the other way I don't know right so I

71:22

have a smooth function of x of u and v

71:24

inside this triangle and I'm just

71:26

discretizing I'm just plotting down a

71:28

mesh and reading off the x_ij's from the

71:30

value of that smooth function evaluated

71:33

on uh the discrete points. It doesn't

71:35

have to be a uniform mesh; we could do it in

71:37

uh in in many ways.

71:40

So this is the this is the sort of

71:42

slightly more general picture. I mean

71:44

obviously everything that looks like

71:45

that will give me something smooth in

71:47

here. Okay, but now we're just going to

71:49

say it in here. I'm just going to have a

71:51

smooth X inside this uh have a smooth X

71:55

inside this triangle.

71:58

Okay, is that clear? So, and so once

72:00

again, the hope now is that the

72:02

amplitude as n goes to infinity is now

72:04

just going to become a function of this

72:07

smooth x of u and v.

72:11

Okay.

72:14

Okay. Now, now let's get a

72:16

>> can you say one more time how I compute

72:18

the x from the

72:19

>> I'm sorry.

72:23

No, but suppose I have a continuous

72:24

curve

72:25

>> and I want to compute the continuous x.

72:28

willing to do a continuous computation.

72:30

>> Yes, a continuous computation.

72:32

You know, I pick uh you know, a point

72:34

here and call uh uh zero. Okay. So, my

72:37

my my curve is labeled by some little x

72:42

that depends on a parameter, let's call

72:44

it u that goes from 0 to one. So, here

72:47

is u equals 0, u equals a half,

72:50

>> u comes back to one here. Okay. Mhm.

72:53

Uh and so my x of u and v the sort of

72:56

first guess for x of u and v would be

72:59

little x of u minus little x of v, squared.

73:04

Okay. And if the particle has a mass

73:06

plus m squared,

73:08

okay

73:10

so that's what we're that's what we're

73:12

talking like in the smooth curve or

73:15

>> Yeah. Yeah. That's that's that's how I'm

73:18

defining the X of u and v. Okay.
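As a sketch of the discretization just described, with a hypothetical planar curve standing in for the lecture's momentum-space curve, one can plonk n points on a fixed closed curve and read off X_ij = (x(u_i) - x(u_j))² + m²; at fixed (u, v) = (i/n, j/n) the answer is set by the curve alone, not by n. The circle and the parameter values below are illustrative choices, not the lecture's.

```python
import math

def point(u):
    # hypothetical choice of curve: the unit circle, parametrized by u in [0, 1)
    return (math.cos(2 * math.pi * u), math.sin(2 * math.pi * u))

def X(n, i, j, m_sq=1.0):
    # kinematic variable read off from the discretized curve:
    # squared distance between the i-th and j-th points, plus m^2
    (a, b), (c, d) = point(i / n), point(j / n)
    return (a - c) ** 2 + (b - d) ** 2 + m_sq

# at fixed (u, v) = (1/4, 3/4), X does not depend on the discretization
for n in (8, 80, 800):
    print(n, X(n, n // 4, 3 * n // 4))
```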

73:22

Is there some notion of on-shellness

73:24

in the curve, like the E squared?

73:27

>> You could you could you could ask for

73:28

that or not. Okay. Uh and actually so

73:31

this this gets into the longer physics

73:33

uh uh discussion. You could ask for that

73:35

or not. The simplest thing to do is just

73:38

say these are whatever they are. You

73:40

would call this an offshell correlator.

73:42

Uh the formulas are exactly the same.

73:44

You could interpret as an onshell

73:46

amplitude with a little bit of

73:47

scaffolding on top of these guys if you

73:49

wanted. Okay. So um and and that that's

73:52

where there's a there's about a 45-minute

73:54

discussion of all the different ways we

73:56

could draw this curve that could be

73:57

space-like. They could be timelike. They

73:59

have an interpretation of a bomb going

74:00

off. They have the interpretation of

74:01

lots of particles going in, lots of

74:03

particles going out. So there's lots of

74:04

different sort of physics words.

74:06

>> But then there was also the C. So the C

74:08

is the H. You know that

74:09

>> uh that that's we're going to come to

74:11

the C's in a moment. That's going to be

74:12

the sort of key thing that makes that

74:14

that that that gives some hope that

74:16

something simple is just going on. Okay.

74:19

Okay. But in fact, so so so before uh

74:22

before before getting there, I want to

74:23

do the very simplest version of a smooth

74:26

X, which is all x's equal. Okay, so

74:30

that's the very simplest thing you can

74:31

do. What what if all the x's are equal?

74:34

Put it here.

74:38

So if all x's are equal,

74:42

then the amplitude is very simple. What

74:43

is the amplitude? Well, every single

74:45

Feynman diagram, every diagram is the

74:47

same. It's like 1 /x to the power of the

74:49

number of propagators, right? So all the

74:51

x's are the same. So the amplitude is

74:54

just 1/x to the n minus 3 multiplied by the

74:57

number of diagrams.

74:59

And the number of diagrams are the Catalan

75:01

numbers.

75:04

And it's very easy to see that the

75:06

Catalan numbers grow like four to the n

75:08

at large n. So we said that

75:10

already. So already here we see that

75:12

this goes like 4 over x

75:15

to the n at large n.

75:18

Okay.
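The 4^n growth is easy to check numerically. A small sketch using the closed form C_m = binom(2m, m)/(m + 1); the choice of sample values of m is arbitrary:

```python
from math import comb

def catalan(m):
    # m-th Catalan number: C_m = (2m choose m) / (m + 1)
    return comb(2 * m, m) // (m + 1)

# The number of planar cubic diagrams at n points is C_{n-2}; the ratio
# of successive Catalan numbers is 2(2m+1)/(m+2), which tends to 4, so
# C_{n-2} grows like 4^n up to slowly varying corrections.
for m in (10, 100, 1000):
    print(m, catalan(m + 1) / catalan(m))
```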

75:20

So that's our sort of first more precise

75:22

hint for what we're looking for. And if

75:23

you're a physicist, you're very excited

75:25

to see that n in the exponent. Okay,

75:27

because the amplitude looks like

75:40

the amplitude looks like e to the n

75:43

times some little some function a of x.

75:47

In this case, a of x is just uh you know

75:50

log of 4 over x.

75:55

Okay.

75:58

or I put a minus sign there and it would

75:59

be log of x over 4

76:02

either way

76:05

but it's very common in physics when you

76:08

have a small coupling in a theory that

76:11

the leading amplitude or partition

76:13

function or whatever looks like e to the

76:15

minus some coupling normally we call it

76:18

1 / g ^2 times the thing that sits here

76:22

is controlled by some classical

76:26

physics

76:29

that's the leading thing this is called

76:31

the leading classical behavior and then

76:33

you do a semiclass expansion around it

76:35

and so on and so forth but the sort for

76:37

example in the path integral uh in the

76:39

Feynman path integral we have e to the 1

76:41

over h-bar (it's Planck's constant) that goes

76:43

downstairs in the action. Okay so

76:45

>> so in some sense this x equals 1 is a

76:47

complicated case because the

76:49

dimensionality of your curve also goes

76:51

up

76:52

>> should I think about your polygon

76:53

staying in a so your curve being a fixed

76:55

dimension.

76:56

>> You can think of it as staying in a fixed dimension.

76:58

>> This is not a good model.

76:59

>> That's right. This is not a good model.

77:00

Exactly. It's a bad model for that.

77:02

Exactly. Exactly. So, this is the

77:03

slightly longer uh uh discussion about

77:05

exactly what does or doesn't work for

77:07

the story that I'm telling you. Just to

77:09

to to skip to the end, there's a the

77:11

story I'm telling you is valid for a

77:13

fixed curve and fixed dimension. But

77:15

you're right, x equals 1 is not a model

77:17

for that because we cannot realize x

77:19

equals 1 from any curve in a fixed

77:20

number of dimensions. Absolutely. Okay.

77:22

But somehow uh I mean uh it's kind of

77:26

interesting in this entire story.

77:28

Everything is about the x's in our

77:30

stories. Everything is about the x's.

77:32

That's the whole point. The moment are

77:33

somehow far away somewhere, right?

77:35

Everything is about the ex. So it's

77:36

natural to ask this question in our

77:38

story just to get some sort of first

77:40

handle on what's going on. But I want to

77:43

stress that this is now looking exciting

77:45

because it looks like at large n we

77:48

should be looking for some kind of

77:50

theory whose weak coupling is 1 / n.

77:53

Exactly. Right. One over g^2 to be n. So

77:55

n is 1 over g^2. So it looks like

77:57

there's a dual theory at large n. Okay.

78:00

So if you're a physicist those were in

78:02

the beginning of any talk on the subject

78:04

you know you would say this is evidence

78:06

for some kind of dual theory at large n.

78:08

We're going to be finding this dual

78:10

theory at large n. That's exactly what

78:12

we're looking for.

78:15

Okay. So, but of course, as Bernd was

78:18

saying, that's a very very special case.

78:21

Um,

78:23

let's imagine more general x of u and v.

78:27

And now comes a second uh uh interesting

78:31

point. Maybe before we get into this uh

78:34

a second point, um let's go back to this

78:36

picture. This is actually essentially

78:39

repeating uh slightly more slowly uh

78:43

something uh uh Carolina was saying at the

78:45

end of her talk yesterday. Let's say we

78:47

have this uh this example at six points.

78:49

Again, sorry for the bad drawing. Uh

78:57

and it's just kind of interest. So the

78:59

whole associahedron, the whole

79:00

amplitude language involves turning on

79:02

all of these C's. Okay, but let's just

79:05

see what happens if we turn some of them

79:06

off. Okay, Carolina mentioned that there

79:09

are some limits where we turn so many

79:11

off that this associahedron collapses in

79:12

dimension, so that the amplitude gives us a

79:14

zero. But let's just back up a little

79:15

and see what some other sort of simple

79:17

things look like. One simple thing you

79:19

can do, remember these guys were

79:20

intervals, right? Whatever direction

79:24

that was, right? So, one kind of obvious

79:26

thing to do is just turn off all of

79:27

these guys and just have these

79:29

intervals.

79:31

Then the object is really simple. It's a

79:33

hypercube, just a cube in this case,

79:35

right?

79:37

And also the amplitude is extremely

79:39

simple in that limit. Okay, so if you

79:40

remember I erased the action but let me

79:42

write it again. You know these

79:45

contributions. So my action was uh t13

79:49

x13 plus t14 x14 plus t15 x15.

79:55

Remember I'm integrating the amplitude

79:57

is integral uh dt e minus s. So in this

80:01

example, this is S has these linear

80:03

pieces and then these guys were the

80:05

simple uh things whose whose maxes uh

80:09

only involve one variable at a time. So

80:11

C13 max of 0 and - X13 plus uh C14 max

80:18

of 0 - X14 sorry C24 and C35

80:24

max of 0 and - X15

80:28

T. Thank you.

80:34

Okay. Um, so these integrals are really

80:37

simple too. It's just, you know, a

80:38

single integral, right? Uh, so what is

80:41

that that integral? What is this action?

80:44

This s like when t is positive, uh, when

80:47

t13 is positive. Uh, it's equal to t13

80:51

x13.

80:53

And when t13 is negative, uh, that's the

80:56

max. So here it's equal to this slope uh

81:00

so this is c13 minus x13

81:04

uh sorry, this times minus t13.

81:09

okay so just as a slope of different

81:12

slopes I made them look the same but

81:14

different slopes on this side and that

81:15

side and so when I do the integral from

81:17

that I get 1 /x13

81:20

from the integral from 0 to infinity and

81:22

from this side I get + 1 / c13 - x13

81:28

Okay.

81:30

And so I just get a product of those

81:32

factors for each one of these things. So

81:34

times 1/x14 plus 1/(c24 - x14), times 1 over, uh,

81:42

x14, sorry, x15, plus 1/(c35 - x15). So I just

81:47

get a product of these factors. And

81:50

that's just each one of these is the is

81:52

the canonical form for that little

81:54

interval or looks like a four particle

81:56

amplitude, just a little local four-

81:59

particle amplitude. Okay.
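That one-variable factor can be sanity-checked numerically. A sketch (my own, assuming 0 < x < c so both tails of the integrand decay; the grid parameters are arbitrary):

```python
import math

def factor(x, c, tmax=60.0, steps=300_000):
    """Brute-force trapezoid for the integral of e^{-S(t)} over the real
    line, with the one-variable action S(t) = t*x + c*max(0, -t).
    The exact answer, for 0 < x < c, is 1/x + 1/(c - x)."""
    h = 2 * tmax / steps
    total = 0.0
    for k in range(steps + 1):
        t = -tmax + k * h
        s = t * x + c * max(0.0, -t)
        w = 0.5 if k in (0, steps) else 1.0  # trapezoid end weights
        total += w * math.exp(-s)
    return total * h

x, c = 0.7, 1.9
print(factor(x, c), 1 / x + 1 / (c - x))
```

The two slopes of S, x for t > 0 and x - c for t < 0, give the two terms 1/x and 1/(c - x) of the split four-point factor.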

82:02

So this is a this is a limit

82:05

uh uh this is something uh uh Nick loves

82:09

um um they're uh they're called the

82:11

maximally split limit. Okay, where

82:13

the amplitudes are maximally split. Uh

82:15

Bernd loves because this is a kinematics

82:17

for which uh the saddle point equations

82:20

uh the scattering equations have precisely

82:22

one solution. All sorts of nice things

82:24

about this limit. So we call them

82:26

maximal splits and the associahedron is

82:28

just a cube in this case. Right? Of

82:30

course the associahedron in general is much

82:31

more complicated than a cube. Uh however

82:34

the essential story you're going to see

82:36

is that in the large n limit the

82:37

associahedron essentially turns into a

82:39

cube in some particular specific sense

82:42

uh of that of that statement. Okay. So

82:46

um all right but anyway let's uh let's h

82:48

keep going here. There's there's another

82:52

obvious thing that we could do. We could

82:54

turn everyone off here and turn this guy

82:56

on in the corner. Right? That's also a

83:00

limit where we get a non-zero amplitude.

83:02

That's as simple as possible. And I

83:03

won't write down the formula, but it's

83:04

just the, you know, it's just the

83:06

canonical form of that little triangle. Okay,

83:09

it's a single term again. So if I turn

83:12

on all of these guys, I get something

83:14

simple. If I turn on that guy, I get

83:15

something simple. Okay, I add up

83:18

everything, I get the full associahedron.

83:21

It's actually amusing that if you if you

83:23

take this simple guy and that simple

83:26

guy. So this gives you a cube, that

83:28

gives you a, that just gives you a top-

83:31

dimensional simplex already. Just

83:33

summing those guys is a very complicated

83:35

object. Okay, it's almost as complicated

83:38

as the full associahedron in this case. It just

83:40

shrinks two of the edges of the associahedron.

83:43

Okay. So you have uh uh so you have 19

83:46

edges instead of 21 for the uh the the

83:49

three-dimensional associahedron; it still has

83:51

all the same uh facets it had before.

83:54

Okay. Um so it's already a pretty rich

83:57

object where you sum just the little

83:59

intervals on one side and the big top

84:02

dimensional simplex on the other side.

84:04

As you go to large n this object is uh

84:07

has you know roughly 2 to the n facets.

84:10

That's a lot of facets. Okay. quite

84:12

complicated object. Sorry, has a lot of

84:14

vertices. It's two to the n uh uh

84:16

>> but the dimension always goes up, right?

84:18

>> Dimension always goes up. That's right.

84:20

Exactly.

84:20

>> There's no scenario where the dimension

84:22

stays the same.

84:23

>> No. No. Exactly. So that's that's that's

84:25

so I I gave you the like reasons to

84:26

expect a large n limit

84:29

at the very beginning. But the reason to

84:31

be worried is that the integrals are not

84:32

staying in fixed dimensions. So the

84:33

integrals are sort of getting bigger and

84:35

bigger and so it's not obvious from that

84:37

point of view that a large n limit

84:39

should exist. Okay. But uh but here we

84:41

are not talking about large n. I'm

84:42

just saying something in general. If you

84:44

take a big mesh, there's two limits

84:46

where it's extremely simple. One where

84:48

you just turn on the sum ends on the

84:50

strip on the left. One where you turn on

84:52

the sum end in the corner. In one case,

84:54

you get a big cube. In the other case,

84:56

you get a simplex. Already when you just

84:58

sum these two guys, you get something

85:00

really complicated. Okay, just summing

85:02

these two guys something pretty

85:03

complicated. Of course, the whole

85:04

association is summing everything else.

85:06

Okay, so so already these guys have

85:07

roughly two to the n vertices. If you

85:09

put everything else in there, you grow

85:11

to roughly all the four to the n

85:12

vertices of the full association. All

85:14

right.

85:16

Now, let's return to large n and to

85:18

burn's question about what the c's look

85:20

like cuz that's the sort of key uh

85:22

that's one uh key thing.

85:26

So, this is a pretty large n. Uh, and

85:29

here's the uh cool point. So, I'm

85:32

imagining that my x's are, you know,

85:34

somewhere in the neighborhood of this

85:36

all the x's equals constant. They're not

85:38

equal. They're constant, but you know,

85:39

they're all, you know, they're varying.

85:41

They may be varying a lot, but they're

85:43

they're staying away from zero. Okay?

85:45

They're sort of staying away from zero,

85:46

but they're varying a lot inside this uh

85:49

inside this triangle.

85:52

But that means something really cool.

85:54

Let's look at the c, the typical c on the

85:56

inside.

85:59

The typical c on the inside

86:03

is this: c_ij = x_{i+1,j+1} - x_{i+1,

86:07

j} - x_{i,j+1} + x_{i,j};

86:11

this is roughly the derivative in i the

86:14

derivative in j of x the discrete

86:16

derivative in i and the discrete derivative in j of x,

86:19

and therefore this is of order 1 n^2

86:23

okay, all right, this is of order 1/n...

86:29

I shouldn't have said that; I mean it's really

86:32

1/n times d with respect to u, times 1/n times d with

86:34

respect to v. Okay, so this is order

86:37

this is something of order 1 n square.

86:40

So all the c's on the inside are small.

86:43

Of course, there's many of them. Okay,

86:45

so this doesn't mean that you can just

86:46

throw them out. Okay, but it kind of

86:48

means that the C's on the inside are

86:50

tiny. Meanwhile, what are the C's on

86:53

these corners?

86:55

These are X plus X - X, but this one is

86:58

absent for them. Okay,

87:01

so the C's on these corners are of order

87:04

one. These guys are of all of order one.

87:08

And this guy on this corner is also of

87:11

order one. Okay. So in this continuum

87:15

limit when the x is inside are taken to

87:17

be smooth.

87:19

Then the c's on the outside here and

87:21

this one here are of order one

87:24

and all the ones on the inside are of

87:27

order one n square.

87:28

>> Sorry.

87:29

>> Yes.

87:31

>> This I don't understand why they're

87:33

order one. So the other one you

87:35

interpreted as some derivative. Okay,

87:36

here one of them is missing.

87:38

>> Yes,

87:38

>> that I understand. You can't write it as

87:40

a double derivative anymore. But what

87:42

can you write it as?

87:43

>> Oh, you can't. It's just x + x - x. All

87:47

these x's are of order one. So this is

87:49

of order one.

87:50

>> It's like a single derivative plus

87:52

something.

87:53

>> Well, it's like a single derivative plus

87:55

plus an x. Yes,

87:56

>> if you like. It says x - x plus an

87:59

x. This is this is a more than one. I

88:01

want to stress something to you. So if

88:02

you if you thought that the x should go

88:05

to zero on the boundaries then this

88:08

would not be true. Okay. So this is sort

88:09

of important that uh in this story uh

88:12

again this is the the half hour if

88:14

you're a physicist that we added this

88:16

mass squar. Okay. We add this uh we add

88:18

this m squ. So even on the boundary is

88:21

not going to zero. It's going to m okay

88:23

that's what makes it uh that's what

88:25

makes it happen. But on the inside all

88:26

the m squares cancel. On the outside

88:28

they uh they they they don't. Okay. So

88:31

how

88:32

>> just to be more specific one x of order

88:34

one and one derivative of order one.

88:36

>> That's right. So the whole thing is is

88:37

of order one. I'm sorry, this is order

88:39

one plus order 1/n, but you know you

88:41

can put together like this. Yeah.
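The scaling just discussed is easy to see numerically. A sketch with a hypothetical smooth test function x(u, v) (any smooth choice works; this one is mine, not the lecture's): the interior c, a discrete mixed second difference, shrinks like 1/n², while n² times c tends to the mixed derivative at that point.

```python
import math

def x(u, v):
    # hypothetical smooth profile on the triangle/mesh
    return 2.0 + u * v + math.sin(u) * math.cos(v)

def c_interior(n, i, j):
    # interior c_ij = x_{i+1,j+1} - x_{i+1,j} - x_{i,j+1} + x_{i,j}
    # on a uniform mesh with spacing h = 1/n
    u, v, h = i / n, j / n, 1.0 / n
    return x(u + h, v + h) - x(u + h, v) - x(u, v + h) + x(u, v)

# n^2 * c approaches d^2 x / du dv evaluated near (1/2, 1/3)
for n in (10, 100, 1000):
    print(n, n * n * c_interior(n, n // 2, n // 3))
```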

88:44

Okay. So

88:48

uh so that's already some some

88:50

indication. So let's for the moment

88:52

forget about all the internal c's.

88:54

Okay. As I said we can't we won't

88:57

actually forget about them. We're going

88:58

to put them all back in uh in a bit. Um,

89:00

but this at least suggests that it's an

89:02

okay idea to start thinking by throwing

89:04

out the C's. And it's interesting that

89:06

that exactly lands us on this sort of

89:08

first complicated associahedron. Okay, that

89:11

it could be super simple when it's just

89:13

these guys on one end. It's super simple

89:15

when you just turn on guys on the other

89:17

end. You add the two of them, you get

89:18

something complicated at all. But

89:19

anyway, somehow that first complicated

89:21

thing is our uh first object of study

89:24

here for for a for a natural reason.

89:27

All right. And that that lets me write

89:29

the integral down. I can now just write

89:30

a single integral down that we can all

89:32

stare at. Um

89:39

>> so I'm just going to introduce some uh

89:41

>> so Nima since you mentioned the

89:43

scattering equation should we solve them

89:45

for this too at the inside and outside?

89:47

Yeah, as you see, this large-n

89:50

story suggests a dramatic simplification

89:52

in solving the scattering equations at

89:54

large

89:54

>> for this situation

89:56

>> for all situations for all situations in

89:58

this world even when we turn back on the

90:00

seas. Okay, so that's so all these

90:02

things go together. I'm actually talking

90:04

about but sort of physically more

90:05

interesting perhaps, about what the

90:07

field theory amplitudes look like. There were

90:09

a lot more tropical words and it's

90:10

actually much more interesting and

90:12

involved analysis. Uh uh but the same

90:15

statements end up being true for string

90:16

amplitudes in the limit where the where

90:18

the energies are huge um where you're

90:21

just solving the scattering equations

90:22

and the arguments are oneline arguments

90:24

for what the large n results look

90:26

like. So they're they're much simpler uh

90:28

at large n there's a specific sense in

90:30

which you isolate a single solution.

90:32

it's rational and you follow it and it

90:35

becomes simpler and simpler as you go to

90:36

larger and larger n.

90:38

>> and you'll see there's sort of one

90:39

simple idea uh which uh we probably

90:42

won't get to it from the strings but

90:43

once you see what it is uh here we'll

90:45

immediately you'll immediately see what

90:47

you have to do in the scattering

90:49

equation context okay

90:51

>> yes

90:51

>> so is what is the statement about the

90:53

middle one since there are many of them

90:55

yes is it that they individually

90:57

contribute in different regions or

90:59

>> no I mean at the moment you have no

91:00

reason to ignore them Okay, at the

91:02

moment each is of order one over n squared, but

91:03

there are n squared of them. So you can't just shrug

91:05

them. Okay, you can at least say they're

91:07

definitely not more important somehow.

91:08

They're not going to like dominate

91:10

everything. So uh and as we'll see we

91:13

might see if we Yeah, I think we we we

91:15

might even see uh uh you can really

91:17

include their effects. Sometimes they're

91:18

not important, sometimes they're

91:19

important, but uh but there's a there's

91:21

going to be a formula at large n in all

91:24

regions of kinematic space that you care

91:26

about where you can see um all these

91:29

things happen. All right. So

91:33

okay. So um so what is but so let's

91:36

let's turn off the so turn off

91:41

um the internal C's.

91:48

And so our amplitude is then just the

91:51

integral dt_{13} up to dt_{1,n-1},

91:56

minus infinity to infinity, of e^{-S}, where

92:00

S is the sum over j of t_{1j}

92:04

X_{1j},

92:06

plus the sum over j of c_{j-2,j},

92:10

these c_{j-2,j}'s, times the

92:13

max of zero and minus t_{1j},

92:18

and then it has one complicated term which

92:20

is this corner c, c_{1,n-1}.

92:24

So these are the c_{j-2,j}'s, and

92:29

this one c here is c_{1,n-1}. Okay,

92:33

c_{1,n-1}, and this one has the max

92:35

of everybody: the max of 0, -t_{1,n-1},

92:40

-t_{1,n-1} - t_{1,n-2},

92:44

and so on. Okay.

92:46

So we have to do this single integral.

92:49

I'm going to introduce a little bit of

92:50

uh uh notation just so I don't drag

92:53

machines around all the time. Um so

92:57

remember

92:59

the the kind of obviously they're the

93:01

simplest part of this action this these

93:04

first two terms those are the ones that

93:07

gave us the like hyper cube. Those are

93:09

those are the ones that were sort of

93:10

trivial. I'm going to call these I'm

93:11

going to call this S0. For the moment

93:13

I'm going to call this uh S1

93:18

and furthermore in here remember what

93:20

what it looked like in here when T1J was

93:24

positive uh it had a slope that was X1J

93:28

uh when it was negative it had a slope

93:30

that was CJ minus 2 J minus X1J

93:34

so I'm just going to call this one AJ

93:38

and this one BJ.

93:40

Okay, they just stand for those

93:42

variables. And I'm going to call C1 N

93:45

minus one. I'm going to call C. Okay, so

93:47

I don't just keep dragging these indices

93:49

around. So my amplitude depends

93:52

on the AJ, the BJ, and the C. Okay,

93:58

that's what the uh uh amplitude depends

94:01

on.

94:03

Okay, and so I can write this as an

94:05

integral. Let me write it in this uh way

94:07

as an integral, again dt_{13} up to

94:11

t1 n minus one and I'll write as a

94:13

product over j the sort of independent

94:16

things. So I'm going to write this as

94:18

product over J. Um uh slightly

94:22

suggestively I'll write them as P hat of

94:25

TJ.

94:28

Um and then uh an E to the minus S1.

94:33

Okay, where phat_j of

94:39

t_{1j}

94:43

is e to the minus a jt when t is

94:47

positive and e to the bjt when t is

94:50

negative right that's just exactly the

94:52

thing that we were talking about.

94:55

okay so that's our integral

95:00

okay so once again in the limit where

95:03

the C is zero. If I set C to zero, I

95:06

know what this is. It's just the product

95:07

of 1 over AJ plus one over BJ, right? So

95:10

in that limit, the answer is

95:12

simple.
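This c → 0 statement is easy to check numerically. Here is a small sketch with just two t-variables and made-up a's and b's (the function and the values below are illustrative, not the lecture's actual kinematics), doing the integral on a grid:

```python
import numpy as np

def amplitude_2var(a, b, c, L=40.0, N=1601):
    """Grid estimate of  integral dt1 dt2 phat1(t1) phat2(t2) e^{-c max(0, -t1, -t1-t2)},
    where phat_j(t) = e^{-a_j t} for t > 0 and e^{+b_j t} for t < 0."""
    t = np.linspace(-L, L, N)
    dt = t[1] - t[0]
    P1 = np.where(t >= 0, np.exp(-a[0] * t), np.exp(b[0] * t))
    P2 = np.where(t >= 0, np.exp(-a[1] * t), np.exp(b[1] * t))
    T1, T2 = np.meshgrid(t, t, indexing="ij")
    S1 = np.maximum(0.0, np.maximum(-T1, -T1 - T2))
    return float((P1[:, None] * P2[None, :] * np.exp(-c * S1)).sum() * dt * dt)

a, b = (1.0, 1.5), (2.0, 1.0)                  # hypothetical stand-in kinematics
A_KN = (1/a[0] + 1/b[0]) * (1/a[1] + 1/b[1])   # the c -> 0 product formula
A0 = amplitude_2var(a, b, 0.0)                 # should track A_KN
A1 = amplitude_2var(a, b, 1.0)                 # turning on c can only lower the answer
```

At c = 0 the grid integral reproduces the product of (1/a_j + 1/b_j), and since the max term only adds positive action, any c > 0 decreases the result.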

95:14

C is this complicated term, right? C is

95:16

the is the complicated term where all

95:18

the variables uh talk to each other.

95:20

Everything else is decoupled.

95:24

All right?

95:26

So um and again if we officially do this

95:30

integral the way to do it is to find the

95:34

regions where it's piecewise linear and

95:35

then we're back to doing the diagrams

95:37

right that's just back to where where it

95:39

came from. So we have to figure out some

95:40

way of thinking about this integral

95:42

which is not that. Yeah.

95:46

So um

95:49

one key idea this is it's not I mean

95:52

it's just psychology

95:55

is to interpret this as saying well okay

95:57

well at least there's a limit where the

95:59

answer looks simple. So I'm going to

96:00

define A_KN to be the product over j of

96:04

1/a_j + 1/b_j. This is exactly what I

96:08

would get. This is what the amplitude

96:10

would be if the c was zero. Okay this is

96:12

aj and bj as I send c to zero. Okay.

96:18

So, it's sort of reasonable to look at

96:19

what A is relative to A_KN. Okay. So,

96:22

if I look at A / A_KN,

96:26

well, A / A_KN has a very nice

96:28

interpretation.

96:29

A / A_KN has the interpretation of an

96:32

expectation value of the quantity e^{-S1}

96:37

in the probability distribution given by

96:39

e^{-S0}.

96:42

Right?

96:44

where now what I mean by by the

96:46

probability is there's an independent

96:48

probability for every t they're all

96:49

independent and p of t not p hat of t p

96:53

of t is just this p hat normalized to

96:56

one. Okay, so this is just exactly p hat

96:59

of t divided by 1/a plus 1/b,

97:04

phat_j(t_{1j}) divided by 1/a_j plus one

97:08

over b_j. Okay, so I've done nothing here,

97:12

right I've just multiplied and divided

97:13

divided by A_KN, but that sort of lets

97:16

me interpret the amplitude is the

97:18

expectation value of e to the minus s1

97:20

in the probability distribution given by

97:22

e^{-S0}. So this is the extremely

97:24

important point. So is this is this

97:26

clear or uh totally clear? Okay, trivial

97:29

point but extremely important for how

97:30

we're going to think about things.

97:34

Okay, now this now lets us sort of think

97:37

probabilistically and that's the that's

97:40

the sort of key to making uh progress

97:43

here.
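The probabilistic reading can be made concrete with a little Monte Carlo sketch (all numbers below are made-up stand-ins, not the lecture's data): draw each t independently from its two-sided exponential p(t), and average e^{-S1} to estimate A / A_KN.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_t(a, b, size):
    # p(t) is proportional to e^{-a t} for t > 0 and e^{+b t} for t < 0:
    # pick a side with probability proportional to its area (1/a vs 1/b),
    # then draw an exponential on that side.
    pos = rng.random(size) < (1 / a) / (1 / a + 1 / b)
    return np.where(pos, rng.exponential(1 / a, size), -rng.exponential(1 / b, size))

n, c = 30, 1.0                                  # hypothetical values
a = np.full(n, 1.0)
b = np.full(n, 2.0)                             # a_k < b_k: the forward-drift region
t = np.stack([sample_t(a[k], b[k], 100_000) for k in range(n)])

# S1 = c * max(0, -t_1, -t_1 - t_2, ...): the running sums of the sampled t's
S1 = c * np.maximum(0.0, (-np.cumsum(t, axis=0)).max(axis=0))
ratio = np.exp(-S1).mean()                      # estimates A / A_KN
A = ratio * np.prod(1 / a + 1 / b)              # estimates the amplitude itself
```

Since S1 is nonnegative, the estimated ratio is automatically at most one, which is the upper bound discussed below; in this drift-forward region it also stays order one rather than exponentially small.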

97:49

But before proceeding um I should have

97:52

done this uh a second ago. I want to

97:54

give you an example of what the answers

97:55

look like. Okay, just so you have an

97:57

idea of of of the kind of answer that

97:59

that we're going to get because it's at

98:01

least to us it was somewhat shocking.

98:04

um

98:06

get this chart.

98:09

So here's here's the claim.

98:13

As you'll see, the interesting thing is

98:14

that there's not one uniform formula.

98:17

Okay? Instead, there are different

98:19

regions in kinematic space. There's

98:21

different regions in this kinematic

98:23

space where there's a different simple

98:25

formula.

98:27

But let me say what it looks like

98:28

already in this limit where I've turned

98:29

off the rest of the C's. I can tell you

98:31

what it looks like when you turn them

98:32

back on as well. But just just for for

98:35

uh I mean in fact the formulas don't

98:37

change largely don't don't don't change

98:39

but for instance if all the AJ in fact

98:42

let me I'm going to make statements that

98:44

are true even if you turn the C's back

98:46

on. Okay so you can even turn the C's

98:48

back on the internal C's back on and the

98:50

formulas that I'm are writing are are

98:52

going to be true you have to turn them

98:53

back on, but of order one over n squared, okay,

98:55

exactly as we said but that's that's all

98:57

that is needed. So here are at least two

99:00

regions in kinematic space where the

99:01

simple formulas and then I'll I'll tell

99:03

you what the story is if we get there in

99:05

general. So so so uh so so here are some

99:09

examples of what the answer looks like.

99:15

And remember we're supposed to have that

99:17

the amplitude goes like e to the n

99:19

something.

99:21

Um what I'm going to do instead is is

99:23

write the amplitude. It's just really

99:24

more convenient to write it as it's

99:26

equal to a product uh over all J of

99:29

something. Okay, which of course does

99:31

this uh uh as well. I just don't want to

99:34

write the log of this expression. So I'm

99:36

just going to directly write the

99:37

formula. Okay, or if you like I'm going

99:39

to use a notation where a goes to b

99:43

means that log a / n equals log b / n

99:50

plus order 1 / n to some power. Okay. So

99:53

if I say A goes to B at order 1/n to a power,

99:57

what I mean by this notation is that

99:59

log A over n is log B over n up to that

100:01

accuracy. Okay.

100:04

All right. So here are some examples. So

100:06

region one this is the first one that'll

100:08

be most simple to talk about. Suppose

100:10

all the a k are smaller than the bk for

100:13

all k.

100:17

Okay. Then in this limit the amplitude

100:20

goes to precisely that maximum splits

100:23

formula

100:28

plus corrections that are of order one

100:30

over n. So that's the first surprise.

100:34

Okay,

100:36

one term.

100:38

So you turn on the C, the polytope gets way

100:41

more complicated. Doesn't matter. the

100:43

answer is just exactly uh uh the same uh

100:46

thing again.

100:48

Okay, a second limit.

100:51

The AK are bigger than the BK for all K.

100:57

And not only that, this quantity A_k

101:00

minus B_k over 2C

101:04

is less than one but increasing with K

101:12

or non-decreasing. Okay, so I told you the

101:15

regions of kinematic space. Okay, so

101:17

this should look funny. So

101:19

why would we ever care whether AK is

101:21

less than BK or these other things are

101:23

true is not remotely manifest a lot of

101:25

times. We should care about these

101:26

regions and all of kinematic space is

101:28

going to get carved up into regions like

101:30

this. I'm just giving you two examples

101:32

where we can write down the formulas uh

101:35

what uh extremely easily. In this case,

101:38

the amplitude goes

101:40

to the product over all j of 4 over (a_j + b_j).

101:53

So I hope you see in a very uh the sort

101:56

of concrete sense in which simplicity

101:58

emerges at large n. At no finite n are

102:01

the amplitudes simple. You can work them

102:04

out. They're they look like horrendously

102:06

complicated expressions. You go to large

102:08

n and they collapse to these one-line

102:10

expressions. They're as simple as the

102:12

Parke-Taylor formula. Okay. But not at any

102:15

finite n. They become simple at large n.

102:18

>> And they have this non-physical.

102:20

>> Yes. Yeah. So um can I explain something

102:23

here? Is that okay?

102:26

>> There's a C here that's positive that

102:28

should we talk about that one too.

102:31

So um uh yeah. So uh so what what Kelly

102:36

was saying if you look at this formula

102:37

you might think this formula

102:39

cannot possibly be true um because uh

102:42

the amplitude doesn't even have poles

102:44

when the a's and the b's go to zero in

102:46

general as opposed when the x's go to

102:47

zero well a is an x but b is not an x

102:50

right b is c minus x okay so how can

102:54

this possibly formula possibly be true

102:56

it doesn't even have the right poles

102:58

well this formula is only valid in this

102:59

region of kinematic space where a is

103:01

less than b in this region of kinematic

103:03

space you cannot reach the poles

103:04

where b goes to zero uh while while

103:07

keeping uh the x's positive. Okay, so uh

103:11

but this also shows how non-diagram-like

103:15

these formulas are. They don't care

103:17

about the poles. They couldn't give a

103:18

crap about where where the poles are.

103:20

Okay, this formula is even more dramatic

103:22

because none of the terms downstairs are

103:24

poles. Okay, these are not poles

103:26

anywhere. If you write this in terms of

103:28

the underlying physical momenta

103:31

uh this formula is: the amplitude

103:32

goes to the product of 2 over p_j . p_{j+2},

103:37

2 over p_j dot p_{j+2}.

103:40

Okay. So it looks a lot like the Parke-Taylor

103:42

formula, which has consecutive

103:45

things downstairs. But here you

103:47

precisely have the non-planar poles

103:48

downstairs. Okay.

103:52

Oh, and I forgot to tell you in this

103:54

case the correction is order 1 / square

103:58

root of n. Okay,

104:02

which suggests the existence of some

104:04

random walk behind the picture which is

104:06

what we're going to see in a moment.

104:08

Okay, so the physics of large n is

104:10

associated with a picture of a random

104:11

walk in Schwinger parameter space. Okay,

104:13

so that's uh okay. Anyway, that's what

104:16

that that's what the uh formulas look

104:18

like. We are in the midst of learning

104:21

how to do a systematic 1/n

104:23

expansion around these asymptotic limits.

104:26

So we don't have a theory for that yet.

104:28

We have sort of hints of of how to do

104:30

that uh systematic expansion. But what

104:33

we do know how to do is you you give me

104:35

any picture of what the AKs and BKS look

104:37

like. We know how to produce a formula

104:38

that looks like this. And it always

104:39

looks like this. It's always the product

104:42

of some one over AJ with a shift and one

104:44

over BJ with a shift. And we're going to

104:46

explain where that now comes from. Okay.

104:49

Maybe I'll make a final comment. The

104:50

Catalan case is this one. Okay, I could

104:53

have said it before. The Catalan case

104:55

when you set all the X's to one

104:57

corresponds to a situation where all the

104:58

C's on the boundary are one. The corner

105:01

C is one and all the C's in the middle

105:03

are zero. Okay, so the Catalan case

105:05

where you're just counting diagrams is

105:07

this limit is also this limit. Um and uh

105:11

and that ends up being in this case. Uh

105:15

the AJ ends up being one. the BJ ends up

105:17

being zero because it's a C minus X and

105:20

so this is the 4 to the n. Okay, so

105:21

that's where the Catalan case

105:23

is, and this is a sort of specific

105:24

deformation away from Catalan. All right,
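The Catalan cross-check is easy to run on its own: the log of the n-th Catalan number over n tends to log 4, matching the product of 4's that the formula gives in this limit (a sketch using the exact binomial formula for Catalan numbers):

```python
import math

def catalan(m):
    # C_m = (2m choose m) / (m + 1): counts the planar diagrams / triangulations
    return math.comb(2 * m, m) // (m + 1)

m = 5000
rate = math.log(catalan(m)) / m   # tends to log 4 as m grows
```

The subleading pieces of log C_m are of order (log m)/m, consistent with the corrections dropped in the arrow notation above.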

105:29

now I think I have uh 10 minutes. So let

105:32

me give you an idea of where these

105:33

formulas come from. Um

105:38

uh and so we're going to begin by

105:40

looking at this uh at this picture.

105:43

Okay. So, um

105:52

to get a first intuition for what's

105:54

going on. Uh so I'm going to write A

105:57

as A_KN times, writing it again in this

106:00

way, the expectation value of e to the minus

106:02

S1 in the probability distribution given

106:05

by e^{-S0}. The first thing we're going to do

106:07

is is to try to bound this guy. Okay,

106:12

this has an obvious upper bound, right?

106:14

The upper bound is that s1 is positive

106:17

because it's a max of zero and something

106:19

and the c was positive. So s1 is

106:21

manifestally positive. So this is this

106:23

is less than or equal to one. So this is

106:25

less than or equal to a kn. So we have

106:27

an upper bound on the amplitude which is

106:29

that's already kind of cool, absolutely

106:30

free, right? You know, so the amplitude

106:32

is less than or equal to A_KN.

106:35

We also have a cheap lower bound on the

106:37

amplitude. A cheap lower bound is from

106:40

Jensen's inequality. Right? So Jensen's

106:41

inequality says that if you have a

106:42

convex function f

106:45

f of x, it says that the average of f

106:49

of x is bigger

106:52

than f of the average of x. Right?

106:54

That's Jensen's inequality. You draw a

106:56

convex function and it's this sort of

106:58

famous picture. Right? So this is the

107:00

average of f. This is f of the average.

107:02

So if the function is convex uh this is

107:05

all all this is always true. Okay, the

107:08

function e to the minus x is convex.

107:12

Okay. So from here we learn there's a

107:14

lower bound the amplitude is greater

107:16

than or equal to e to the minus the expectation

107:18

value of S1 in the probability

107:20

distribution e^{-S0}. Okay,
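Jensen's inequality for the convex function e^{-x} is easy to see numerically (a generic sketch, not the amplitude itself): for any nonnegative random variable, the average of e^{-x} sits between e^{-average} and 1.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.exponential(1.0, 500_000)      # any nonnegative random variable will do

avg_of_exp = np.exp(-x).mean()         # the average of e^{-x}  (here near 1/2)
exp_of_avg = np.exp(-x.mean())         # e^{-average of x}      (here near 1/e)
```

This is exactly the sandwich being used: the upper bound 1 from S1 being nonnegative, and the Jensen lower bound e^{-<S1>}.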

107:25

right

107:25

>> sorry

107:27

>> uh sorry time z sorry exactly. Okay.

107:34

>> All right. Now let's look at that lower

107:36

bound. Let's try to understand what e to

107:37

the minus s1 is. Well, remember let's

107:40

let's write S1.

107:45

Yes, there was a question.

107:52

So, S1 was uh

107:56

S1 was c times the max

107:58

of 0, -t_{1,n-1}, -t_{1,n-1} - t_{1,n-2}, and

108:04

so on.

108:07

All right. And now let's think about

108:08

what the probability distribution is.

108:10

Remember uh it has a slope that's a k

108:14

for t1k and bk in this other direction.

108:20

They're in general different. And I've

108:21

drawn it now in the limit where a k is

108:23

smaller than the bk. Okay, just to make

108:25

it clear that what does this probability

108:27

distribution want to do? It wants t you

108:30

know t can jump forward or backwards but

108:32

it preferentially wants t to jump to

108:34

positive t. Okay. So if it jumps to

108:37

positive t of order 1 / a, that's not

108:39

suppressed. Of course, t of order 10 over a is

108:41

exponentially suppressed. But negative t

108:44

of order 1 over b is okay of order 10

108:47

over b anyway. So so it it it wants to

108:50

preferentially go uh in the positive

108:52

direction. Is that clear? That's for

108:55

each t individually. But what are the

108:57

things that are showing up here? The

108:59

things that are showing up here are

109:00

precisely what you would sort of think

109:01

of if I plot here t_{1,n-1}, then

109:05

t_{1,n-1} plus t_{1,n-2}, and so on,

109:08

all the way up to t_{13},

109:11

then um uh so sorry if I plot what what

109:15

I see here which is uh sorry let me do

109:17

it the other way so here I'm going to

109:19

plot 3 4 5 up to n right

109:24

um

109:25

then what I'm

109:29

seeing here is: here I see t,

109:33

here I see t plus the next t. Right? So it's exactly

109:37

like the sums of everything before. It's

109:38

like I'm doing a walk, right? A random

109:41

walk from here.

109:44

Uh and what I see at in in every max is

109:47

a sum of all the t's that that came

109:50

before. Okay.

109:52

So if I draw this uh picture um it looks

109:56

like in general I'm doing a biased

109:58

random walk. Okay. If a is if a is uh

110:03

smaller than b, the bias is in the

110:05

positive direction. I want to make the

110:06

sum of these t's bigger and it's

110:08

roughly going to go linearly, right?

110:10

It's roughly going to go linearly as I

110:12

uh go up from here. Okay? If a was

110:15

bigger than b for all of them, then it

110:17

goes the other way.

110:20

If a is equal to b, then it would sort

110:22

of like randomly walk with a rand sort

110:24

of uh fluctuation around it. Okay?

110:28

But let's begin from the simplest case

110:30

where a k is less than bk for all k. If

110:34

I do ak less than bk for all k then

110:37

what's going on? Every like this term

110:39

wants to be t1 n minus one wants to be

110:43

positive. Okay. So this wants to be

110:45

negative. This wants to be more

110:47

negative. Uh so each one of these things

110:50

wants to be more and more negative.

110:52

And so the max is dominated by zero.

110:57

Okay,

110:58

so this is why it matters whether AK is

111:00

bigger than BK or less than BK or so on

111:02

because the sign of the drift of this

111:05

random walk is going to matter. Okay,

111:07

but when the AK is small at all than BK,

111:10

the max is dominated by zero.

111:13

That means that we're talking about at

111:15

order n, the max is going to be order

111:17

zero. It can't be order n, the max will

111:19

be order zero. Yeah.
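The drift story can be simulated directly (a sketch with arbitrary stand-in a, b): draw the two-sided exponential t's, form the running sums of -t, and look at the expected max. With a < b the max stays order one; with a > b it grows linearly in n.

```python
import numpy as np

rng = np.random.default_rng(2)

def expected_max(a, b, n, trials=5_000):
    # Steps are two-sided exponentials: +Exp(1/a) with probability b/(a+b),
    # otherwise -Exp(1/b); the walk below tracks the partial sums of -t.
    pos = rng.random((trials, n)) < (1 / a) / (1 / a + 1 / b)
    t = np.where(pos, rng.exponential(1 / a, (trials, n)),
                 -rng.exponential(1 / b, (trials, n)))
    walk = np.cumsum(-t, axis=1)
    return np.maximum(0.0, walk.max(axis=1)).mean()

n = 400
m_small = expected_max(1.0, 2.0, n)   # a < b: drift pushes the walk down, max is O(1)
m_large = expected_max(2.0, 1.0, n)   # a > b: drift pushes the walk up, max is O(n)
```

The O(1) case is exactly the region-one statement that S1 contributes only an order-one correction to log A.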

111:22

And therefore this lower bound

111:30

this thing is e to the minus order zero.

111:34

Doesn't have any ns in it. E minus order

111:36

n to the zero. Okay.

111:41

Well, that's fantastic. We have the

111:42

amplitude at leading order at large n.

111:45

We've bounded it between two numbers,

111:47

A_KN and a half A_KN, you know, some

111:50

number of order one times A_KN. So that

111:53

tells us that log of A over n

111:59

approaches log of A_KN over n plus

112:04

corrections that look like 1/n. So

112:07

that's our first

112:10

super cheap um uh prediction. Okay, it's

112:15

already very non-trivial from the

112:16

perspective of uh any other normal

112:19

perspective, say Feynman diagrams. Um I should

112:22

say that these predictions have been

112:23

checked. How do we check these

112:24

predictions? Uh this theory is simple

112:27

enough that uh we can use normal

112:29

recursion relations uh to compute the

112:32

tree amplitudes. Okay, you can do it if

112:34

you're not good at computers on uh on

112:37

Mathematica up to n of like 200. If

112:40

you're good at computers, you can do it

112:41

up to n of 5,000. Okay. And we've gone

112:45

up to around 5,000 because we need to.

112:47

These one over root n's are errors that

112:50

uh that we should expect. And one over

112:52

root 100 if the coefficient happens to

112:54

be three is not necessarily a small

112:56

number. Okay? You can get confused. So

112:58

we really needed to go to n of a few

112:59

thousand. Okay? So every formula that

113:02

I'm telling you has been checked up to n

113:04

of a few thousand. Okay? And and and

113:08

they're correct and the errors are

113:09

correct. Okay? as sort of we would we

113:11

would predict from this picture but um

113:14

anyway so that's the most trivial that's

113:17

the most uh that's the most trivial

113:18

example uh maybe just in 5 minutes let

113:21

me give you an idea of how we do things

113:23

in general

113:25

okay so for example let's say ak is

113:27

bigger than bk

113:29

let's say ak is bigger than uh bk for

113:32

all k. I just erased it, but the drift

113:36

is going in the opposite direction right

113:38

that means that the expectation value of

113:40

S1 is of order n, right? That means that

113:44

this bound is absolutely lousy. These

113:46

bounds are totally useless. The

113:48

amplitude is less than one uh is bigger

113:51

than e to the minus n. So utterly

113:54

useless; it just tells us it's

113:55

between zero and one uh in these uh in

113:57

these units. That doesn't tell us

113:58

anything apart from the upper bound.

114:02

But our reaction to this is that in fact

114:05

what's wrong with this is that the upper

114:07

bound was too stupid. Okay, we we did

114:10

too lazy a job with the upper bound.

114:12

We're going to find a much better upper

114:14

bound. Okay, so I'm now going to tell

114:16

you how to find a much better upper

114:18

bound. And it's a little bit of magic

114:20

that this optimal upper bound happens to

114:22

be saturated at large n. Okay, so that's

114:25

some separate arguments, but I'm now

114:26

going to just give you a new way of

114:28

getting an upper bound. And the

114:30

bound holds at any n. But the remarkable

114:33

thing is that this formula for the upper

114:34

bound at any n turns out to be the

114:36

exact amplitude, the leading amplitude

114:38

at large n. Okay, a conceptual reason

114:40

why this happens, we still don't know.

114:42

It feels like some entropic explanation

114:44

should be available, right? Somehow the

114:46

amplitude is filling up some phase

114:48

space, maximizing some phase space, some

114:50

some words like that. We don't have

114:52

one-line conceptual explanations like

114:54

that. We do have relatively simple

114:55

proofs that have very much this

114:57

character of looking at these random

114:59

walks and thinking about what happens

115:00

with these uh uh uh random walks. Okay,

115:03

but anyway, let me tell you what the

115:04

argument is. It's extremely simple and

115:06

this is something I would be astonished

115:08

if it has not come up in many other uh

115:11

places in uh mathematics. I'm sure it

115:14

has. Um and it would be interesting to

115:17

compare uh to the setting that we're

115:18

seeing here. Okay. So I want to come up

115:21

with a better uh upper bound and so you

115:24

see my formula was S1 was the max of a

115:27

bunch of things, you know, C times the max,

115:29

and let me just call them capital A1, A2,

115:32

a3 etc. Right? One of these happened to

115:35

be zero for us. And so we said this is

115:38

like saying max of a1 uh a2 and a3 is

115:43

greater than or equal to a1. Why do we

115:46

choose this one? There's no reason to

115:47

choose that one. Okay? I could put here

115:50

a2 or a3 or anyone else for that matter.

115:53

I could put any weighted average of

115:55

them. Okay? And the max is bigger than

115:57

any weighted average of them. So that's

116:00

what we're going to do. We're going to

116:01

say that S1 is greater than or equal to

116:05

W1 A1 plus W2 A2 plus, you know, Wn An,

116:10

where these are positive weights that

116:11

add up to one.

116:16

Okay.
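The inequality being used here, that a max dominates any convex combination, can be spot-checked with generic numbers (nothing amplitude-specific in this sketch):

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.normal(size=10)                   # any numbers A_1 ... A_m
w = rng.dirichlet(np.ones(10), 1000)      # random positive weights that sum to one
gap = A.max() - (w @ A)                   # max minus each weighted average
```

Every entry of `gap` is nonnegative, which is all the upper-bound argument needs.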

116:18

And so that means that the that means

116:20

that the amplitude is uh for any choice

116:23

of weights is still upper bounded by

116:25

what by what I get if I if I did that.

116:27

Okay. But you see the nice thing that

116:30

happens the moment you replace S1 with

116:32

this weighted average the maxes are gone

116:35

and all the variables are decoupled

116:37

again.

116:39

Okay. So I can do the integral.

116:43

So all the variables are are decoupled.

116:45

Okay. So what do I get? Well, it has

116:48

extremely simple uh interpretation.

116:50

Let's just uh just see what it is here.

116:55

Where did my go?

116:58

Oh, I left it in here. That's great.

117:00

That's good.

117:01

Sorry.

117:03

It's really yellow in there now. That's

117:05

great. So, let me erase this. So,

117:17

leave that to dry

117:21

[Music]

117:23

right down here. Okay.

117:26

So the amplitude is going to be less

117:28

than or equal to again this integral uh

117:30

of the dts uh eus s. But then I'm going

117:34

to have e to the minus c times w_{n-1} t_{1,n-1},

117:40

where I'm labeling the weights

117:43

for the terms

117:45

by the smallest t that occurs. So plus w_{n-2}

117:47

times (t_{1,n-1} + t_{1,n-2}), plus and so

117:55

on, so plus finally w_3 times (t_{1,n-1} + ... +

118:01

t_{13}). Okay. And oh, and there are minus

118:05

signs in front of all these guys. So the

118:06

maxes have the negatives in them.

118:07

Okay. So has a negative t1 n minus one

118:11

uh negative all of these guys plus w3

118:15

negative all these guys. Okay.

118:18

And so here I just replaced uh S1 by

118:21

this uh by this weighted average.

118:24

But now you see very nicely again all

118:26

the T's are decoupled again. So every T

118:29

integral has some effective new AJ and

118:32

BJ. There's effective new AJ and BJ.

118:37

Uh so

118:49

so this is equal to uh integral dt 1j

118:53

the product over all j of some phat of

118:57

some a hat j and bhat j

119:02

where what are these uh a hat js and uh

119:05

bhat js for example

119:07

T13

119:09

only uh the only place anything involves

119:12

T13 is this weight w3. Okay. So we have

119:16

that a3 hat is a3 minus w3 * c.

119:24

Okay.

119:25

This is uh there's a minus sign minus

119:28

sign. So this e to the c t13 but there's

119:31

an e to the minus a t13. So a-hat 3 is

119:35

a3 minus w3 c. b-hat 3 is b3 plus w3 c. But

119:41

then a-hat 4 is a4 minus, and there's two

119:45

w's. There's w3 plus w4

119:48

that touch 4. b-hat 4 is b4 plus c times (w3

119:54

+ w4).

119:56

And in general we can say that a k hat

120:00

is a_k minus c times a quantity sigma_k,

120:03

where sigma k is w3 plus w4 plus all the

120:08

way up to plus wk. Okay. And b-hat k is bk

120:14

plus c sigma k.

120:18

Okay. And therefore uh what we're left

120:20

with is that this new upper bound on the

120:23

amplitude that depends on these weights

120:26

is that the amplitude is less than or

120:28

equal to

120:30

since they're all decoupled again. It's

120:32

just the product over all the k's of 1 /

120:36

a hat plus one over b hat. So I'll write

120:38

it in terms of 1 / a k minus c sigma k

120:42

plus 1 over bk

120:44

plus c sigma k.

120:47

Okay. So for any choice of weights, so

120:50

for any choice of W's, any choice of

120:53

weights,

120:57

this is a true statement.

120:59

Now we can instead of working with the

121:01

weights, we can sort of nicely directly

121:03

work with the sigmas. The weights being

121:05

positive means that the sigmas are

121:08

positive and increasing, right? Because

121:10

the sigma_k are sums of consecutive

121:12

w's. So I need not talk about the W's.

121:15

I can just directly talk about the sigmas

121:16

but say that I have zero less than

121:18

sigma_3 less than sigma_4, up to sigma_{n-1}

121:22

less than one. Okay, so I can give

121:26

sigmas or I can give weights, or I can

121:28

give any choice of sigmas in

121:31

this

121:33

and so now comes the point there's a

121:35

best possible upper bound that I can

121:37

find right the best possible upper bound

121:40

is to just you know minimize this

121:42

function

121:44

over the space of all sigmas that

121:46

satisfy this property. Okay.

121:49

Okay. So let's see uh let's see what

121:53

that means. Just note

121:56

that a-hat k plus b-hat k

121:59

does not depend on the sigmas, right? The

122:00

sigmas cancel between these things. So if

122:04

I just looked at this and said where is

122:06

the global minimum of this function when

122:08

I you know where's the global minimum of

122:09

this function the global minimum occurs

122:12

at some value let's call it sigma k star

122:15

which is equal to a k minus bk over 2c

122:19

that's where the global minimum occurs

122:21

okay
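This minimization is elementary and easy to check numerically (a sketch with made-up a, b, c in the a > b regime): since a-hat + b-hat is fixed, the factor 1/a-hat + 1/b-hat is smallest when the two are equal, which is exactly sigma* = (a - b)/(2c), and the minimum value is 4/(a + b).

```python
import numpy as np

a, b, c = 3.0, 1.0, 0.5                   # hypothetical values with a > b
sigma = np.linspace(0.0, (a / c) * 0.999, 200_001)  # keep a - c*sigma positive
f = 1.0 / (a - c * sigma) + 1.0 / (b + c * sigma)   # the bound for one factor

sigma_star = sigma[np.argmin(f)]          # numerically: (a - b) / (2 c)
f_min = f.min()                           # numerically: 4 / (a + b)
```

Plugging sigma* back in makes the two shifted denominators equal to (a + b)/2 each, which is where the product of 4/(a_k + b_k) comes from.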

122:23

so if you can attain that global minimum

122:25

that's the best upper bound you can find

122:27

okay

122:29

but

122:32

so either you attain the global minimum, or

122:34

if you can't you have to find the

122:35

minimum somewhere on the boundary of

122:37

this space. Okay.

122:39

So let's say we go back to our previous

122:40

problem when the a so and sorry what is

122:43

this doing? This is just a sort of you

122:44

know uh famous fact. You have a 1 over a

122:48

plus one over b right the the sum is is

122:50

constant. So you want

122:52

to tune sigma to make them equal if you

122:54

can. Okay.

122:57

So but let's say we go back to our

122:59

previous example. If ak is less than bk

123:02

then well this sigma would want to be

123:04

negative which we're definitely not

123:06

allowed to do. Okay. So in the case

123:08

where a k is less than b k, the best you

123:10

can do is to do nothing. Okay, because

123:12

anything you can do with these sigmas

123:14

will hurt. So the optimal upper bound is

123:16

to put all the sigmas equal to zero.

123:18

Okay, that's our previous formula. But

123:21

you can ask can we sometimes get the

123:23

optimal just the dead optimal case? Yes,

123:26

you can. If you if this is realized, by

123:28

the way, if this is realized the sort of

123:30

a star absolute optimal would equal the

123:33

product over all k. If you shove this

123:35

in, you get precisely the formula that I

123:37

had before. 4 over a k plus bk.

123:41

Okay,

123:43

But when can that be

123:45

realized? When these sigma stars live in

123:48

this space. So that's what I told you

123:50

before, right? It's when the when a k

123:53

minus bk over 2c is bigger than zero and

123:56

increasing, right? Which exactly means

123:59

that it uh that it lives in in this

124:01

space. Okay. So these are sort of two

124:03

extremes where, uh,

124:08

where we can easily figure out what the

124:09

best upper bound is

124:12

and so we get this best upper bound and

124:13

the claim is that the amplitude is

124:16

saturated by that best upper bound. In

124:18

fact, the way to uh prove it is to

124:20

really return to this picture.

124:23

Um, and instead of saying that I'm

124:25

giving it an upper bound, just say that

124:28

uh I have the the the action is S plus

124:31

S1. But I'm going to rewrite S1 as S1

124:35

minus this sort of approximation to it.

124:38

the sum of the weights times the max of,

124:42

you know, the

124:45

corresponding sums of negative t's,

124:48

plus that same quantity added back

124:52

I'm going to call this whole thing s1

124:55

hat and I'm going to group this I mean

124:58

this is the piece where everyone

124:59

is decoupled, this and S naught.

125:03

I'm going to call S naught hat. These will

125:06

depend on the weights this will depend

125:08

on the choice of weights

125:10

and now I have a new interpretation

125:11

where the amplitude is equal to A n hat

125:16

multiplied by the expectation value of

125:18

e to the minus S1 hat in the probability

125:20

distribution e to the minus S naught hat. Okay. So for

125:24

any choice of weights we can have this

125:26

probabilistic interpretation and what

125:28

you can then show is that when you

125:31

choose the optimal choice of weights

125:34

then this e to the minus S1 hat is always

125:36

of order one. Okay. So, and that

125:39

needs some more argument to sort of

125:40

thinking about drifts and random walks.

125:42

Um, but uh but you can see why there's a

125:45

correlation between optimizing this

125:47

upper bound and uh and getting this to

125:50

be order one. So, it's not an accident

125:51

that the two things uh happen at the

125:54

same time. Okay.
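The add-and-subtract step is the standard reweighting identity: for any split S = S0 hat + S1 hat of the action, the full partition sum equals the S0-hat partition sum times the expectation of e^(-S1 hat) in the probability distribution e^(-S0 hat) (normalized). A one-dimensional toy check on a grid; the actions here are illustrative stand-ins, not the lecture's:

```python
import math

# Toy split of an action S(t) = S0(t) + S1(t) on a 1-D grid.
def S0(t): return t * t          # "decoupled" reference piece
def S1(t): return abs(t)         # leftover piece

ts = [-5.0 + 0.001 * i for i in range(10001)]   # grid on [-5, 5]
dt = 0.001

# Direct partition function of the full action.
Z_full = sum(math.exp(-(S0(t) + S1(t))) for t in ts) * dt

# Reweighted form: Z0 times the expectation of exp(-S1) under exp(-S0)/Z0.
Z0 = sum(math.exp(-S0(t)) for t in ts) * dt
expectation = sum(math.exp(-S1(t)) * math.exp(-S0(t)) for t in ts) * dt / Z0
Z_reweighted = Z0 * expectation

print(Z_full, Z_reweighted)   # the two agree to floating-point accuracy
```

The identity is exact for any split; the physics content is in choosing the weights so that the leftover expectation stays of order one.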

125:56

Okay. So, that's I think all I will say

125:58

about this. I've, uh, gone

126:01

over time. Um, but just to

126:05

say something quickly. This

126:07

problem of optimizing this product in

126:09

this little region is a very very pretty

126:12

little high school problem. Okay. So if

126:14

you have uh just just to give you an

126:16

example um uh let's just plot like I'm

126:20

going to plot what is a k minus bk over

126:22

2c. This is clearly this is this sigma

126:25

star. This is an important quantity. Uh

126:27

I'm going to plot as a function of k. So

126:29

here would be k equals three, but I'm going

126:31

to draw zero to n. Okay. I'm going to

126:32

draw it in a in a continuum. So zero is

126:35

clearly special. one is clearly special

126:37

here. Okay, so, um, if the AK's

126:42

were less than the BK's, you're down

126:45

here somewhere

126:46

and uh the optimal is sigma equals zero,

126:50

right? That's what that's what we said.

126:53


126:55

If a k minus bk is positive, but this

126:57

thing but c is still so small that this

127:00

quantity is up here somewhere, then

127:03

you're in the opposite limit. I mean you

127:04

want to make the A and B equal but it's

127:08

so far to get there that but you know

127:10

you just take a little step in that

127:11

direction and sigma's already maxed out

127:13

at one. Okay. So what you can do is just

127:15

keep sigma at one.

127:18

Okay.

127:20

But uh if on the other hand the AK minus

127:23

BK over 2C if the kinematics looks

127:25

something like this it's increasing in

127:27

some way then that's what you can call

127:29

this tracking solution. Then the optimal

127:32

the pink curves here are what

127:34

sigma looks like the sort of optimal uh

127:36

the sigma ks that optimize the bounds

127:38

will sort of just track it, okay, but

127:41

in general whatever the picture for the

127:43

sigmas is, it has to be some increasing

127:46

picture here that's bounded between zero

127:48

and one. And, you know, as we learned in

127:51

in Europe and junior high school

127:53

probably and in the US in college uh

127:56

when you're minimizing uh a function on

127:58

a, on some, you know, on some compact

128:00

region again either the minimum is at

128:02

the interior or it's on the boundary

128:04

somewhere. Okay. And what does it mean

128:05

to be on the boundaries here on the

128:07

boundaries here if you're drawing a

128:08

picture of the sigmas means the sigmas are

128:10

just going to be constant for a while.

128:12

Okay. So a bunch of sigmas can be

128:13

equal. So they can be constant or they

128:15

can be increasing but they can only be

128:17

increasing if this curve of a k minus b k over

128:20

2c is increasing. So for instance

128:24

if your picture of ak minus bk over 2c

128:27

looks like this. Let's say it's

128:29

decreasing.

128:31

Then the best you can do is just keep

128:33

sigma at a constant somewhere. Okay. So

128:35

that's what what the the boundary looks

128:37

like. Sigma can't go up anywhere because

128:39

that would correspond to being in the

128:41

interior, right? But it can't

128:42

be. It's got to be on the boundaries.

128:44

It's got to stay constant. So the best

128:46

you can do is just choose some sigma

128:48

star and then just, you know, pull out

128:50

that quantity, you know, as a function

128:52

of sigma star until you find the optimum

128:54

and that gives you the formula for the

128:55

amplitude. So it's not like there's an

128:57

analytic formula you can write down

128:58

ahead of time. You have to, you know,

128:59

solve this little optimization problem.
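As a concrete instance of "solve this little optimization problem": in the decreasing-curve case discussed above, the nondecreasing-sigma constraint forces a single constant sigma, and a one-parameter scan suffices. The factor form and the numbers below are my toy reconstruction of the board conventions, not the lecture's data:

```python
# Toy decreasing case: sigma_k^* = (a_k - b_k)/(2c) decreases with k,
# so the nondecreasing-sigma constraint forces a constant sigma.
a = [2.6, 2.0, 1.4]
b = [1.0, 1.0, 1.0]
c = 1.0

def bound(s):
    # Product of reciprocal factors; minimizing it = maximizing the denominators.
    p = 1.0
    for ak, bk in zip(a, b):
        p *= 1.0 / ((ak - c * s) * (bk + c * s))
    return p

# Scan the single parameter sigma over [0, 1] and keep the minimizer.
grid = [i / 1000.0 for i in range(1001)]
s_best = min(grid, key=bound)
print(s_best, bound(s_best))   # interior optimum, strictly between 0 and 1
```

The same scan generalizes: one parameter per hump in the (a k minus b k over 2c) curve, independently of n.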

129:01

But this optimization problem has

129:03

nothing to do with n. You just draw me

129:05

the smooth curve and I just take this

129:07

product and I I minimize it. Okay. So um

129:10

and so if you have a more interesting

129:12

situation, let's say you have a curve

129:14

that looks like um uh I don't know, it

129:17

looks like this.

129:19

That's what ak minus bk looks like. Well

129:21

down here the best you can do as we saw

129:23

before is to keep the sigma zero.

129:26

Now the ak minus bk is increasing

129:28

between zero and one. So you can track

129:29

for a while but you can't track forever.

129:33

So at some point you've got to stop at

129:34

some sigma star. You've got to stop and

129:36

say no now I'm going to go flat. I I can

129:38

go flat until the next time the thing is

129:40

increasing. Then I can try to increase

129:42

again and then here it's going to

129:43

flatten out again. Okay. So this is what

129:46

your ansatz for the sigmas has to look

129:48

like and then you optimize again. So

129:51

again, there's not an analytic formula,

129:53

but the cool thing is that it has

129:55

nothing to do with n. The limit has to

129:57

do with the number of humps in the

129:59

picture for a k minus b k. Okay. The

130:01

more and more humps there are in the

130:02

picture, the more parameters

130:04

there are in your optimization problem.

130:05

But for example, in this case, there's

130:07

one parameter. So you put it on the

130:09

computer, you plot it, you find sigma

130:10

star, you shove it back into the

130:11

formula, and that's the amplitude at

130:13

large n. If it looks like this, there's

130:15

there's this point. Well, okay, there's

130:17

really still only one uh uh uh sigma

130:19

star. So you just sort of you vary the

130:21

sort of point in which this escape could

130:23

happen and you optimize uh you minimize

130:25

this product.

130:27

So that's a sense in which there isn't

130:29

an analytic formula, uh, at large n, but

130:32

kinematic space is broken up into

130:34

regions that correspond to sort of

130:35

qualitatively different ways that this

130:37

optimization problem can be solved.

130:39

>> Is it possible to ask a last question?

130:42

>> Yes. Yeah. That's all I want to

130:45

say. Yes. But what is your

130:46

question? Okay. Um so how universal are

130:49

the gridlike structures? Um moment uh

130:52

how how universal are those gridlike

130:54

structures? Um no uh how universal are

130:58

those, is this formalism, um, string

131:03

theory derived gridlike structures? In

131:05

other words, how does all of this

131:08

translate to a more complicated

131:10

combinatorial object derived from

131:12

physics? I mean,

131:14

>> well, I I I think the answer is going to

131:16

be a little similar to to uh to answers

131:18

I gave to to earlier questions. The

131:20

whole model to begin with, the whole

131:22

model to begin with is a is a is a toy

131:24

toy model. Naively, it's surprising that

131:26

the toy model ends up being connected to

131:28

very physical models in a

131:30

very simple but

131:31

non-trivial way. Uh, one thing I will

131:33

say is that there is something very

131:35

universal about what we're talking

131:36

about. All of these X's are

131:39

singularities. All these X's are poles

131:42

of amplitudes and those poles are there

131:44

in every theory of colored particles. So

131:46

that part the sort of singular part is

131:48

totally universal is there for any

131:50

theory that has a color. Uh

131:53

so gluons pions all kinds of theories

131:55

that have a color will have those those

131:58

poles. That was a big motivation for

132:00

studying these things a long time ago.

132:01

They were super duper simple but they

132:02

had some universality in their pole

132:04

structure. That thought would make you

132:06

think that to get to realistic theories,

132:07

you have to do a lot of work to get the

132:09

intricate structure of the interactions

132:11

that show up in the numerators. And the

132:13

big surprise is that all of that intricate

132:14

structure of the numerators turns into

132:16

exactly the same formula just shifting

132:18

the, uh, variables in some ways.

132:21

Okay.

132:21

>> Yes. So um so just uh just just uh just

132:25

uh to make this a point um I didn't say

132:28

what the I didn't say what the uh

132:30

specific shift was to get to pions, uh

132:34

from uh from from this scalar theory but

132:37

there was a simple shift to get pions and

132:38

it gives rise to formulas that are

132:40

virtually identical to these formulas

132:41

for pion amplitudes. Okay, so that

132:44

>> can I say just

132:45

>> yes

132:45

>> because I have to leave; if I

132:47

get too late I'm going to get expelled

132:49

from this class

132:50

>> that sounds very dangerous. Yes. Right.

132:52

>> Um so um I have a no um I sneak a lot

132:57

into places and sometimes I sneak into

133:01

this AI website um at 3:00 a.m. just to

133:05

have fun with little and see what people

133:06

are doing. So like seven months ago or

133:09

eight months ago there was this

133:11

colloquium about, um, a

133:14

colloquium about, um, combinatorial

133:17

objects. I don't recall the exact title

133:19

of it. It was very recent and um I I I

133:24

remember that I was uh just messing

133:26

around on Quora or Reddit and they had

133:29

>> Can we answer this question? I'm sorry

133:31

because I had a few points and we're

133:33

we're over time already for my talk. Uh

133:35

but there was one more physics point I

133:37

wanted to make and I want to give people

133:38

an opportunity to ask questions about

133:39

the physics. You're not asking questions

133:41

about the physics that I'm talking

133:42

about. We can talk about it later but

133:43

you're not talking about this physics.

133:44

So let's can we can we just put this

133:46

aside for a moment please and just talk

133:47

about it later. Thank you. Yes. Go

133:49

ahead. Yeah. Just a quick quick physics

133:50

question. So do you see any relation of

133:53

this kind of to people in cosmology have

133:54

studied kind of these tails of a PDF

133:57

>> um especially also for kind of from

133:59

>> black hole formation?

134:00

>> Yes. Yes.

134:00

>> Um anything rings a bell there like what

134:05

>> I don't know enough, technically,

134:06

about what they're doing to know if it

134:07

rings a bell for you. Let's

134:08

let's uh let's uh talk about it. Yeah.

134:11

Um, I'll just say uh just just very very

134:13

quickly that um uh yeah that adding the

134:16

C's back in in this picture is, I

134:18

hope it's kind of clear. Uh for example,

134:21

if we did the AK less than the BK

134:23

problem, uh it's it's clear all the

134:26

other C's still involve maxes of 0 and

134:28

negative T. So they're all going to be

134:30

near zero, right? They're all

134:31

irrelevant. So it doesn't matter if you

134:32

add them back in or not. Okay? So long

134:35

as they're of order one over n squared, n

134:37

squared of them doesn't, uh, give you

134:39

something of order n, it gives you something

134:40

of order one. So it does matter that the

134:42

c's are of order one over n squared. If

134:43

they were of order one over n it wouldn't work.

134:45

Okay. But since the c's are of order one over

134:47

n squared, uh, it does work. And um

134:51

and uh just uh since uh Berg asked the

134:56

the version of this idea that

134:58

you use for uh scattering equations for

135:00

string amplitudes is the non-tropical

135:03

version of the formula the max of a1 a2

135:06

a3 is bigger than the weighted average

135:08

of them, is, uh, the arithmetic

135:10

geometric mean inequality. Okay. So um

135:13

so we take all of these formulas and so

135:16

now we have polynomials, a1 plus a2 plus a3

135:20

to some power. Um

135:23

okay and this is what makes it uh

135:25

complicated but but we use the fact that

135:28

a1 plus a2 plus a3

135:32

>> is greater than equal to a1 over w1 to

135:34

the w1, times a2 over w2 to the w2, and so on. Right?

135:39

Times a3 over w3 to the w3. So this is the pre-tropical

135:42

version of a statement and once again

135:45

this decouples all the variables and you

135:48

can do all all the integrals and it

135:50

gives you something new to optimize

135:53

over the weights. What's

135:55

fascinating is that the new thing that

135:56

you optimize over the weights, the

135:59

optimization formula is like another set

136:01

of you know critical point formula but

136:03

they're not the same as the scattering

136:05

equations. However, they have exactly

136:07

the same solution and exactly the same

136:09

value of the

136:12

action at the critical point. Okay, so

136:15

that seems very interesting and this new

136:18

form makes it much easier to analyze the

136:20

large n limit. That's the

136:21

point. So the scattering equations

136:22

themselves look very complicated, but

136:24

these are much simpler. The large

136:26

n limit lets you isolate the solution

136:28

which is relevant and that seems like

136:29

something worth exploring. All right. All right.

136:32

Thank you very much. I I'll stop.
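The weighted AM-GM inequality invoked near the end, a1 + a2 + a3 >= (a1/w1)^w1 (a2/w2)^w2 (a3/w3)^w3 for positive weights summing to one, can be verified numerically; equality holds when each a_i is proportional to w_i. Toy values below, chosen arbitrarily:

```python
import math, random

# Weighted AM-GM: a1 + a2 + a3 >= (a1/w1)^w1 * (a2/w2)^w2 * (a3/w3)^w3
# for positive a_i and positive weights w_i with w1 + w2 + w3 = 1.
def rhs(a, w):
    return math.prod((ai / wi) ** wi for ai, wi in zip(a, w))

random.seed(0)
for _ in range(1000):
    a = [random.uniform(0.1, 10.0) for _ in range(3)]
    raw = [random.uniform(0.1, 1.0) for _ in range(3)]
    total = sum(raw)
    w = [r / total for r in raw]          # normalize so weights sum to 1
    assert sum(a) >= rhs(a, w) - 1e-9     # the inequality holds

# Equality case: a_i proportional to w_i saturates the bound.
w = [0.2, 0.3, 0.5]
S = 7.0
a = [S * wi for wi in w]
print(sum(a), rhs(a, w))   # both come out equal to S
```

This saturation is what makes optimizing over the weights meaningful: the bound is tight exactly when the weights track the relative sizes of the terms.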

Interactive Summary

This lecture delves into advanced concepts connecting combinatorics, geometry, and physics, specifically focusing on the 'u variables' derived from words representing curves on surfaces. The initial part revisits the two key observations from the previous session: how to derive cone rays from surface data and how tropicalizations of polynomials lead to piecewise linear functions that partition space into cones. The core of the lecture then shifts to explaining the origin of the 'u variables,' which are crucial for string theory amplitudes and Schwinger parameterization. A novel 'counting problem' motivation is presented, where choosing elements from a word (with specific baggage rules) leads to generating functions and, subsequently, 2x2 matrices associated with turns (left/right) in the word. The product of these matrices for a given word yields a matrix whose off-diagonal to diagonal ratio defines the 'u variable' for that curve. The discussion extends to open curves with boundaries and how their associated matrices can be decomposed to track the inclusion of boundary elements. A significant portion of the lecture addresses the potential for infinities in calculations involving complex surfaces and how these infinities, particularly from infinitely many curves, are not problematic but a 'blessing in disguise.' This is because they provide a consistent way to label loop momenta for any diagram via homology of curves on surfaces, even though it leads to infinitely many diagrams. The talk also touches upon the historical discovery of these variables in string theory and their relation to cluster variables. A substantial segment explores the large-N limit in scattering amplitudes, where the complexity of calculations dramatically simplifies. The concept of a 'mesh' representing kinematics is introduced, and the behavior of amplitudes as N approaches infinity is analyzed. 
The lecture highlights how simple limits, like when all kinematic variables are equal or when specific variables are turned off, lead to surprisingly simple, yet fundamental, amplitude formulas. Finally, it discusses the connection between these amplitudes and random walks, providing a probabilistic interpretation and a method for obtaining optimal upper bounds on amplitudes, which in turn reveal the leading behavior at large N. The lecture concludes by emphasizing the universality of the singular structures of amplitudes, which appear in any theory with color, and how the intricate numerator structures in realistic theories simplify to shifts in variables in this framework.
