Chow Lectures 2025 by Nima Arkani-Hamed: Geometry & Combinatorics of Scattering Amplitudes Part III
All right, guys, sorry I'm a little late. So today we're going to start where we left off yesterday. Remember, in the middle of yesterday we saw two interesting facts that were exposed by thinking about the normal fan of the associahedron. One of them was that... oh, sorry, the mic's not on.
Is that okay? All right. Of course, the normal fan of any polytope gives you a bunch of cones that tile all of space, and you can think of each cone as associated with a vertex of the polytope.
But that sort of presupposes that the association was handed to you, via the construction in Carolina's lectures, for example. So the first comment that we got through yesterday is how to get at least the rays of that cone from nowhere: starting from a picture of the surface, just recording the data of every curve on the surface by these words with left and right turns, and extracting the g-vectors for the curves from those words. And that just plunks down a bunch of rays, which turn out to bound cones that can be interpreted in terms of triangulations and which collectively cover all of space. The second, maybe much more striking, observation was that you could generate all of these cones, the rays and the cones, from these polynomials that we talked about. Their tropicalizations gave you a few piecewise-linear functions, and when you put them all on top of each other, the domains of linearity of these piecewise-linear functions broke the space up into all these cones, each one of which corresponds to a diagram. And we can think of them as being put together into the integrals that we wrote down, giving a sort of global version of Schwinger parameterization: one single integral that, as you moved around in the whole space, region by region, cone by cone, morphed into the Schwinger parameterization for the corresponding diagram. And the pre-tropicalized version, with just the polynomials that we talked about sitting there, was instead a string amplitude.
So that's the connection. By the end of yesterday we talked about where the g-vectors come from, and what I want to start with today is telling you where those u-variables come from. Because of the time, I'm not going to be able to prove to you that these variables do what they're supposed to do, but I'm going to describe at least one motivation for where they come from. This motivation will still maybe seem slightly alien if you haven't seen anything like this before, but at least you'll see that a very simple combinatorial problem associated with these words naturally leads you to think about these u-variables. I could just give you a formula; as you'll see in 15 minutes, it will be a formula taking a product of a bunch of 2x2 matrices that produces these u-variables. But I want to at least give the counting-problem motivation for it, not just because it's cool, but because if you go away and think more about the counting problem, it'll be clearer why these objects do what they're supposed to do.
Okay. But you can really just think of this as a recipe for producing these new variables. So we're going to start with getting the u-variables, and again, remember, all of our life, our universe, is specified by this fact. For the process, you draw one representative diagram that represents the flow of color, and then everything else lives in this world. And if we talk about a curve, we decided that a curve was one of these treks through the fat graph. So this curve (2,4) would have the word that we described: (2,3) makes a left turn onto (1,3), makes a right turn onto (1,4), makes a left turn onto (4,5). And we also discussed how we could draw it as a mountainscape: (2,3) goes up to (1,3), goes down to (1,4), goes up to (4,5). These are two different ways of representing exactly the same word. So now I'm going to define a counting problem associated with any word. Forget about all this; we're just going to define a counting problem associated with words.
And this is what we're going to do. Let's say we have a word that looks like a, up, b, down, c. I'm going to tell you what the counting problem is. You want to choose every object that you see in this word, in every combination. So, for example, I could choose to pick nothing, and I would write down 1; we're going to make a generating function out of this. I could choose a by itself; I could choose c by itself; I could choose a and c. But if I choose b, I have to choose everyone who's downhill from b, right? I used to call this the relationships-with-baggage generating function, because it's like dating: when you date someone, it's like you've dated everyone they've dated before. So if you choose b, you have to choose everyone in the past light cone of b. So you add the term b·a·c, and the generating function is 1 + a + c + ac + abc.
All right. Is that clear? So if we do another example, let's say it's a, down, b, up, c. What is this? Again, I could choose nothing. I could choose b. I could choose a, but if I choose a, I have to choose b as well. If I choose c, I have to choose b as well. I can choose a and c together, but of course then I still have to choose b. So this one is 1 + b + ab + bc + abc. Is that clear?
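As a quick sanity check on this rule, here is a small brute-force sketch (Python; the encoding of a word as letters plus a list of 'U'/'D' steps between adjacent letters is my own): a subset of letters is allowed exactly when, across every adjacent pair, choosing the uphill letter forces the downhill one.

```python
from itertools import combinations

def allowed_subsets(letters, steps):
    """Enumerate the subsets obeying the baggage rule: across each adjacent
    pair of letters, picking the uphill one forces the downhill one.
    steps[i] is 'U' if letters[i+1] sits uphill of letters[i], else 'D'."""
    out = []
    for r in range(len(letters) + 1):
        for sub in combinations(letters, r):
            s = set(sub)
            ok = True
            for i, step in enumerate(steps):
                lo, hi = ((letters[i], letters[i + 1]) if step == 'U'
                          else (letters[i + 1], letters[i]))
                if hi in s and lo not in s:   # uphill letter drags the downhill one
                    ok = False
            if ok:
                out.append(tuple(sorted(s)))
    return sorted(out)

# a-up-b-down-c gives F = 1 + a + c + ac + abc:
print(allowed_subsets(['a', 'b', 'c'], ['U', 'D']))
# a-down-b-up-c gives F = 1 + b + ab + bc + abc:
print(allowed_subsets(['a', 'b', 'c'], ['D', 'U']))
```

Each printed tuple is one monomial of the generating function; the empty tuple is the constant term 1.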
All right. Now, it's sort of obvious from high school how you can compute this generating function. You don't have to do it this way, but it's possible to compute it in an obvious way, recursively. In other words, say you give me a word: I can start from the end and relate the generating function to the one for the shorter word where I peel off a letter. How does that work? Well, I'm going to call these objects F, associated with the word. They're not quite the F-polynomials that we'll talk about later, but anyway, let me just call them F.
So let's say I have a, and it goes up to some b, and then there's the rest of the word. What we're going to do is divide this generating function into two pieces, which I'll call F_yes and F_no: F_yes collects the terms that choose a, and F_no the terms that don't. So what is F_yes? Well, it chooses a, so it's going to have a factor of a there, right? But if I choose a, the question is: do I have to choose b or not? Well, I can choose b or not choose b; it doesn't matter. The rest of the word is going to have its own little f_yes and f_no, where this yes or no refers to b. So F_yes is a times (little f_yes plus little f_no); it doesn't care either way. Is that clear? On the other hand, if I don't choose a, I'm not going to have the factor of a here. But then I definitely could not have chosen b, because if I choose b, I have to have a. So F_no is just equal to little f_no.
>> And so we immediately learn that (F_yes, F_no) is this little 2x2 matrix, (a, a; 0, 1), times (little f_yes, little f_no). So when I go up, I get that little 2x2 matrix, and I'm going to call it M_L(a). Right?
Well, similarly, if I turn right from a, let's just do it quickly: a goes down to b and the rest. Then F_yes is equal to a times little f_yes, because since b is downhill of a, I must have b here. Meanwhile, F_no is the thing where again I could take either one: it's little f_yes plus little f_no. So (F_yes, F_no) for turning right from a is this other matrix, (a, 0; 1, 1), times (little f_yes, little f_no). So we'll call this matrix M_R(a). Okay.
And so this tells us how to build this F: I just take the product of these matrices as I go down the word, and at the very end I have F_yes and F_no, which I can add if I want the full F. Okay.
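To see the recursion in action, here is a minimal numeric check (Python; the word encoding, a list of letter values plus a list of 'U'/'D' steps, is my own): it evaluates the generating function both ways, once by brute-force summation over the allowed subsets and once by multiplying the 2x2 matrices down the word, starting from the base vector (a_n, 1) for the last letter.

```python
from itertools import combinations

def F_brute(vals, steps):
    """Sum of products of chosen letter values over all allowed subsets."""
    letters = list(range(len(vals)))
    total = 0
    for r in range(len(letters) + 1):
        for sub in combinations(letters, r):
            s = set(sub)
            if all(not ((letters[i + 1] if st == 'U' else letters[i]) in s
                        and (letters[i] if st == 'U' else letters[i + 1]) not in s)
                   for i, st in enumerate(steps)):
                p = 1
                for i in sub:
                    p *= vals[i]
                total += p
    return total

def F_matrix(vals, steps):
    """Same F via the recursion: multiply M_L(a) = [[a, a], [0, 1]] for an
    up-step and M_R(a) = [[a, 0], [1, 1]] for a down-step onto the base
    vector (a_n, 1), then return F_yes + F_no."""
    vec = [vals[-1], 1]                        # base case: word of one letter
    for a, st in zip(reversed(vals[:-1]), reversed(steps)):
        M = [[a, a], [0, 1]] if st == 'U' else [[a, 0], [1, 1]]
        vec = [M[0][0] * vec[0] + M[0][1] * vec[1],
               M[1][0] * vec[0] + M[1][1] * vec[1]]
    return vec[0] + vec[1]

# a-up-b-down-c with a=2, b=3, c=5: 1 + a + c + ac + abc = 1+2+5+10+30
print(F_brute([2, 3, 5], ['U', 'D']), F_matrix([2, 3, 5], ['U', 'D']))  # both give 48
```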
Now, if you remember, coming back to our context, our words come from open curves, so they're kind of special in that they start and stop somewhere, at some boundaries. And then I leave it as a very small exercise (it's very, very obvious, but I'm going to leave it as a very small exercise) to show the following. Say I have a word that starts at a boundary alpha and makes a turn (each turn is left or right): alpha, turn, some road a1, turn, a2, turn, and so on, finally ending with a turn onto beta. So alpha and beta are the ends. Then I can look at the F that I would get from this big word. But in this big word it's also natural to ask which part of F includes both alpha and beta, which part includes only alpha, which part includes only beta, and which part includes neither. All right.
So I'm going to group those terms: F with a yes on alpha and a yes on beta; F with no on alpha, yes on beta; F with yes on alpha, no on beta; and F with no on both. So this is a 2x2 matrix. And it's a trivial consequence of what I just told you that this matrix is in fact the product of all of these turn matrices: the matrix for the turn at a1, times the matrix for the turn at a2, and so on, up to the last one. Okay.
Let me be a little more precise. For the first matrix, whether I turn left or right, I'd have to put in the M for alpha, but here you put alpha equal to 1, and you imagine that beta equals 1 as well. Of course beta doesn't show up: there's no term that depends on beta, since beta is the last thing. So if you look at this matrix with alpha set to one (and beta trivially one), then to get the real word, which would have an alpha and a beta, you just multiply the appropriate entry by alpha beta. So the coefficient of the alpha-beta term, the coefficient of the beta term, the coefficient of the alpha term, and the coefficient with neither come simply from taking the product of these matrices. So this is a matrix that's attached to any word.
Now, back on screen.
Okay. Now, if we stare at these matrices (sorry, I partially erased it), notice that each one of these matrices has determinant just given by a: the determinant of this matrix is a, and the determinant of that matrix is a. And so one thing which is obvious is that the determinant of the matrix associated with any word, if we imagine that those variables, the a's, b's, and c's, are just positive, is positive. The determinant of the matrix associated with the word w is just the product over all of the a_k's, and this is positive if the a_k's are positive, which we'll be assuming. So if I look at this matrix, with entries M11, M12, M21, M22, then I know that M11 M22 minus M12 M21, the determinant, which is the product of all these a's, is positive. And therefore the ratio u associated with this word, which is (M12 M21)/(M11 M22), is less than one. And of course, if all of these variables are positive, then clearly, as I multiply these matrices, there are only plus signs everywhere in these simple matrices, so this u is also going to be greater than or equal to zero: 0 <= u <= 1. That's the u-variable attached to a curve.
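Both facts can be spot-checked mechanically. Here is a Python sketch with exact rational arithmetic (the random-word conventions are mine): the determinant of the product is the product of the a's, so for positive a's the off-diagonal over diagonal ratio is pinned between 0 and 1.

```python
import random
from fractions import Fraction

def M_L(a): return [[a, a], [0, 1]]   # left / up turn
def M_R(a): return [[a, 0], [1, 1]]   # right / down turn

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def word_matrix(turns, a_vals):
    """Product of the turn matrices along a word
    (the variable of the first, boundary turn is set to 1)."""
    M = [[Fraction(1), Fraction(0)], [Fraction(0), Fraction(1)]]
    for turn, a in zip(turns, a_vals):
        M = matmul(M, M_L(a) if turn == 'L' else M_R(a))
    return M

random.seed(1)
for _ in range(200):
    n = random.randint(1, 10)
    turns = [random.choice('LR') for _ in range(n)]
    a_vals = [Fraction(1)] + [Fraction(random.randint(1, 50), random.randint(1, 50))
                              for _ in range(n - 1)]
    M = word_matrix(turns, a_vals)
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    prod = Fraction(1)
    for a in a_vals:
        prod *= a
    assert det == prod                 # det(M) = product of the a's > 0
    u = (M[0][1] * M[1][0]) / (M[0][0] * M[1][1])
    assert 0 <= u < 1                  # u is pinned between 0 and 1
print("all random words pass")
```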
So if you give me any word for any curve on any surface, all you do is build the word, multiply the 2x2 matrices associated with the entries in the word, take this ratio (the 12 and 21 off-diagonal entries over the diagonal ones), and that gives you the u-variable associated with the curve. So I'll leave it as an exercise for you to do this for the little five-point problem: you'll have to multiply at most two 2x2 matrices, take the ratios, and discover those u-variables that we wrote down yesterday. All right. Okay.
Now, that's all I'm going to say about it. Maybe one point to make is that this is presented as manifestly a ratio of polynomials; all of these things are ratios.
>> Could you say one more time what the variables are?
>> Oh, the variables are what we call the y's. Let's just write down this example. So, back to this example: you would now go and label the internal roads not just with the name but also with a variable y. So there'll be a y13 that goes with this road,
and a variable y14. Okay? And then you again take your favorite curve, like (2,4), whose word would now be (1,2), up, (1,3), down, (1,4), up, (4,5). Okay. And now the matrix associated with this word, M24: it goes up at (1,2), but remember, this boundary variable I'm supposed to set to one. So the up matrix was (a, a; 0, 1), but a is one here for this one, since I'm starting at the boundary; so this is (1, 1; 0, 1). Then at (1,3) I turn right, so here I put the matrix (y13, 0; 1, 1). At (1,4) I turn left, so I put the matrix (y14, y14; 0, 1). Okay.
So this gives me a 2x2 matrix, call its entries (X, Y; Z, W), and the u for (2,4) is going to be YZ over XW, which, if you did it in this case, you would discover is (1 + y14 + y14·y13) / ((1 + y14)(1 + y13)). Okay, you can see how these things are built up from products of these 2x2 matrices; these things look like relationships-with-baggage generating functions. Okay.
>> Very quick question: is this path 25 or 24?
>> Sorry?
>> Is this path 25 or 24?
>> This path is 24, because it's at (2,3) that it turns left onto (1,3). Right.
>> But you started at (1,2).
>> Oh, I'm sorry, I just wrote this wrong entirely. I'm sorry. I'm starting at (2,3); sorry about that. At (2,3), left onto (1,3), right onto (1,4), and then left onto (4,5). Sorry about that.
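With the corrected word for curve (2,4) (start at the boundary with variable set to 1, right turn at road 13, left turn at road 14), the whole computation fits in a few lines. A Python sketch with exact rationals, matrix conventions as above:

```python
from fractions import Fraction

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def u_24(y13, y14):
    """u for curve (2,4): word 23 -L-> 13 -R-> 14 -L-> 45."""
    M = matmul(matmul([[1, 1], [0, 1]],       # M_L(1): boundary turn, variable = 1
                      [[y13, 0], [1, 1]]),    # M_R(y13): right turn at road 13
               [[y14, y14], [0, 1]])          # M_L(y14): left turn at road 14
    return (M[0][1] * M[1][0]) / (M[0][0] * M[1][1])

# compare against the closed form quoted in the lecture
for y13, y14 in [(Fraction(1, 2), Fraction(3)), (Fraction(7, 5), Fraction(2, 9))]:
    closed = (1 + y14 + y14 * y13) / ((1 + y14) * (1 + y13))
    assert u_24(y13, y14) == closed and 0 < closed < 1
print("u_24 = (1 + y14 + y14*y13) / ((1 + y14)(1 + y13))")
```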
>> Is that okay?
>> No. I have a question. Two questions, in fact.
>> Yes.
>> Can I ask them?
>> Yes, you can ask them. But you can't ask three questions; there's a sharp upper bound.
>> Okay. So the first question is: what kind of geometrical resolution or precision do we obtain on the two-dimensional trajectories within this formalism?
>> Oh, there's no geometry here at all. This is purely combinatorial; there's nothing about what this curve actually looks like. This is just a labeling of the curve. So it has nothing to do with that.
>> Yeah, thank you for your answer. And question number two is directly related to the observation you just made: where does this formalism first arise from, physics or maths?
>> Both. Although I should say that these variables are closely related to many other variables that have been discussed. They're very closely related to X-cluster variables for surfaces, in the sense of Fock and Goncharov; they're related to cluster variables, X-cluster variables, but they are very special sets of them, and this particular way of thinking about them and focusing on them is new, and was very much motivated by physics. The thing is that you can talk about clusters all you like; it's these variables that turn into the Schwinger parameterization of all diagrams, and string amplitudes, and so on. So these things go from 0 to 1, and they satisfy these remarkable equations, u plus a product of u's equals 1; they realize in this binary way the combinatorics of compatibility, whether curves cross or not. Okay, so that was very much motivated by physics.
>> Actually, I'll just add, since the question was partially historical: these variables, literally these variables, in the context of studying string amplitudes at tree level, were discovered in string theory before the standard formulas for doing string calculations. There's a famous formula called the Koba-Nielsen formula, and these u-variables were described by Koba and Nielsen in 1968. And they were very excited about them exactly because they made factorization manifest. It was only later that Koba and Nielsen themselves realized that you could formulate the problem instead in terms of cross-ratios of points on a disk, and the picture of a string worldsheet began to emerge. Then the string worldsheet picture kind of took over, because you could make it systematic: you could do it for any surface, think about loop corrections. So lots of things came from that picture, and this more algebraic way, centered on curves on surfaces, was entirely forgotten.
What's happened now is two things. First of all, we understand that these variables also make sense for all surfaces. So there is no loss in going back and forth: all those physical things that Koba and Nielsen were excited about are actually there, present, and useful, not just at tree level but for all surfaces. But maybe more importantly, Koba and Nielsen found the u-equations, and they even found how to solve them, but only in the way that I mentioned yesterday: the most direct way, solving for the u's in terms of the u's, which you can only do in some cases. It's very artisanal, and you definitely couldn't imagine doing it for more complicated surfaces. So the second novelty here is the solution of the equations: we're telling you how to solve them by associating these counting problems with the curves on the surface, the product of these matrices, and so on. So it's both the fact that they exist and that they're concretely available.
And what these u-variables do is give a completely algebraic way of thinking about not just the moduli space, really the Teichmüller space, of these two-dimensional surfaces, but also the compactification of these spaces: how you add back in all the boundaries. And again, what's novel about it is that it's done in an entirely algebraic way: you just write down all these equations, you declare the u's positive, and you're done. The usual way of thinking about the compactification of these spaces, even in a sort of clustery way of thinking about things, or the Fock-Goncharov way of thinking about things, is more local; it's synthetic. You go to the neighborhood of a boundary and you figure out how to complete things in the neighborhood of that boundary, patch by patch. This is not patch by patch. And that's a very practical advantage: it's conceptually interesting, but also practically important. That's what allows us to write down one integral, done. And the one integral blows up all the singularities and tells us how to just go ahead.
>> Can you say a word about finiteness? You always use this five-point example, but as soon as we do something more interesting, everything will be infinite. Can you comment on that? It bothers me.
>> Can you comment on finite versus...? Absolutely, absolutely, yes. So what Bernd is referring to is that the moment we get to more interesting surfaces (and it really starts with the annulus), well, the problem has nothing to do with these variables per se; the problem has to do with curves on surfaces.
So if I think about this picture of an annulus, I could draw a triangulation of the annulus. This is a slightly degenerate triangulation, but I hope you can see this is a triangle, right? This is a triangle, and this is a little triangle here. They all have three sides; that's the important thing. The triangles have to have three sides. They don't have to have three vertices; they have to have three sides. And that's what lets them go along with cubic graphs. So, what cubic graph would go along with this picture? If this is one and two, the cubic graph that goes along with it would be this little picture of an annulus. So this line one is what's going around here; that would be region one, or line one, and this is region two, or (1,2). Okay.
And now what Bernd is talking about is the following. You see, at this level, this triangulation is this diagram; there's a single diagram. But I could take this picture and just twist everything. I could just twist everything once, and clearly it's going to look a little more wound around; I'm not even going to try drawing it. I could twist it any number of times, this way, that way. So on this side there's an infinity: there are infinitely many triangulations that just differ by winding, or Dehn twist, or the action of the mapping class group on the surface. But if you haven't heard these words, it's just the obvious picture of winding here. And that's completely absent on this side, apparently. The point is, of course, that precisely because all of these things are related to each other by winding, they're combinatorially exactly the same; they're all this triangulation. But thought of in terms of curves on surfaces, there are infinitely many of them. Okay,
so this seems problematic at first. It's actually a blessing in disguise. One of the reasons that it's a blessing in disguise is that, remember, I mentioned early on that as physicists we would like to imagine putting all the diagrams together, even at loop level, and putting them under a common loop-integration sign, to sort of identify what we mean by loop momenta from diagram to diagram.
And I said that when it's planar, there's a sort of way of doing it. But when it's not planar, there isn't, naively, a consistent way of labeling loop momenta that allows you to combine all diagrams together. But in fact there is, and curves on surfaces tell you exactly how to do it. Any diagram is some triangulation, so you draw the curves on the surface, and then for any curve you read off the momentum associated with that curve by homology. We saw it already; we didn't say it in this slightly fancy language, but already here, in the disk, when I draw something like that, I draw this curve, and I say the momentum associated with that curve, which I square to get the x associated with it, is p1 + p2 + p3. But you can sort of think that I assigned momenta to these boundary components, and the momentum of this guy is the same as the sum of those guys because this curve is homologous to them: I can continuously deform it to the boundary. Okay. So in that way you can take any surface. Now, in a case like this, you see this curve, for example, cannot be continuously deformed to the boundary. So you have to give that one a name; you have to come up with a new element of homology. So for this curve I'll give the momentum a name: I'll call it L. But then this other curve is going to be L plus the momentum associated with particle 2: this curve would have momentum L + Q, with Q the momentum of particle 2. If it winds around twice, it would be L + 2Q, and so on. Okay.
And so that actually gives you a completely consistent way of labeling what you mean by momenta, including loop momenta, for any diagram. Right? What's the catch? The catch is that there are now infinitely many diagrams. So when you do the action of the mapping class group, it doesn't leave the loop momentum invariant; it shifts the loop momentum. For example, if I wind here once, the loop momentum will shift by L goes to L + Q. Now, of course, since I'm integrating over the loop momentum, the amplitude doesn't change; that's the point. But we sort of found a way to solve this essential kinematic problem of how you label momenta, directly from the homology of curves on surfaces, at the expense of having infinitely many diagrams. So that's a sense in which it's a good thing: at least it's a first step to letting us put everything under the same sign. So now, quote, all we have to do is mod out by the mapping class group when we're done.
And that's what we actually do. We take this whole formula for any surface; we have the product of u to the x, and, including when there are loops, the x's will contain loop momenta in them, so this product of u to the x will have things that are quadratic in the loop momenta upstairs. Again, this looks exactly like Schwinger parameterization, if you're familiar with it either in physics or math, so we can do the loop momenta; they're Gaussian integrals. That's exactly the Schwinger-parameterization point. So you're still left with an integral over the y's. Now you're done integrating the loop momenta, so whatever you're left with, integrating over all these y's, is mapping-class-group invariant, because you've integrated over the loop momenta. So all you have to do now is mod out by the mapping class group. Okay. And you might think that modding out by the mapping class group would involve, you know, maybe identifying some fundamental domain that then covers the entire space under the action of the mapping class group. If you wanted to do that, it would not be such a fun thing to do. But there are much simpler ways of modding out by the mapping class group: what a physicist would call the Faddeev-Popov trick, and what mathematicians would call the Mirzakhani kernel, are very straightforward ways of just modding out by the mapping class group to get the answer.
So it's at tree level that we don't run into this phenomenon; starting with the annulus and beyond, this thing that Bernd mentioned about the infinity is ubiquitous. But the infinity is somehow a good thing across the board: if we didn't have infinitely many curves, these u-equations would not make sense. We really need every curve on the surface for these equations to make sense; not only the ones that wind infinitely much, but even the ones that intersect themselves. I mean, every conceivable curve on the surface shows up when you write down these equations.
>> So there's no way for me to look at the first 25 in a meaningful way? Say I want to list the first 25 of these variables. How should I list them?
>> Yeah, that's right. So this is one of the things that makes this very, very practical. There are infinitely many curves and so on, it's true. But the way it's presented makes it clear: you have these ratios of polynomials, and there's a sort of concrete sense in which you start with your parent graph. You draw the simple curves first; the simplest curves are the ones that occur in the triangulation, and then you order the rest by, literally, the length of the word: how long does the word get. As the words get longer and longer, the u-variables associated with them get closer and closer to one (remember, they're stuck between zero and one). So, for example, to make a super-concrete statement: think about the u-variables as a Taylor expansion in the y's. They're ratios of polynomials, so you can certainly Taylor-expand them in the y's. Of course, the y's go from 0 to infinity, but the y's close to zero means that you're in the neighborhood of the surface degenerating to the triangulation that you started with. So if you just look, as a Taylor expansion in y, at any fixed order in the expansion, only a finite number of curves have u not equal to one. So first of all, in this super-precise sense, you need a finite number of curves at any order in the Taylor expansion. Secondly, even if you let the y's be any fixed numbers (you have 10 y's, and the y's are 17, 23, whatever; I just fix some numbers for them), then as the words get longer and longer, for those fixed y's, the u's for longer and longer words approach one exponentially quickly. Okay, so these u's are very practical. If you want to numerically evaluate these integrals, you don't need infinitely many of them, remotely. You really need a small handful of them, because of how exponentially quickly they go to one. Okay.
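This exponential approach to 1 is easy to see directly from the matrices. A Python sketch (the specific family of long words, alternating left and right turns with every variable fixed at y = 1/2, is my own choice for illustration):

```python
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def u_of_word(n, y=0.5):
    """u for a word with n alternating left/right turns, all variables = y."""
    ML, MR = [[y, y], [0, 1]], [[y, 0], [1, 1]]
    M = [[1, 0], [0, 1]]
    for k in range(n):
        M = matmul(M, ML if k % 2 == 0 else MR)
    return (M[0][1] * M[1][0]) / (M[0][0] * M[1][1])

prev = None
for n in (2, 4, 6, 8, 10, 12):
    gap = 1 - u_of_word(n)          # distance of u from 1
    note = "" if prev is None else "  gap ratio ~ {:.3f}".format(gap / prev)
    print("word length {:2d}: 1 - u = {:.3e}{}".format(n, gap, note))
    prev = gap
```

The gap 1 - u equals det(M)/(M11 M22) = y^n/(M11 M22), and since the diagonal entries grow with each extra matrix, the gap shrinks geometrically with the word length.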
>> Yes.
>> So these are like some spaces; these spaces have some dimension. Do I know how many u's I need? Are there infinitely many u's?
>> In principle, there are infinitely many u's. If you want to write down a string amplitude, it's your choice which u's to put into these formulas. Because, first of all, there are infinitely many curves, partially because, even given any one curve, when the surface is interesting enough, it can go around and self-intersect lots and lots of times. So there are all the self-intersecting friends of a given curve; they're there, and they have u-variables associated with them. But notice: if a curve intersects itself, its u can never go to zero, because in the u-equations it occurs in both terms. Right? You have u plus a product over the u's of all the curves that cross it equal to one; if a curve intersects itself, it shows up in both terms of the equation, so you cannot set it to zero. So that means that all of the self-intersecting curves are irrelevant for the field-theory limit. If you want to take these curves and see what they look like in the field-theory limit, you can just throw all of them out. That's the beginning of seeing the sense in which we have a bigger world of objects that connects to string theory in some limit but is not necessarily string theory, because it turns out that string amplitudes require you to put in every single curve on the surface, including all the self-intersecting ones. Okay.
And that's why you know if someone
handed you a one loop string amplitude
and said please take the field limit
you'd be a little terrified of doing it
because of these horrible Jacobian data
functions everywhere. What are you
supposed to do? Where are those coming
from? They're coming from the product of
these infinitely many self-insected
curves. They're are literally irrelevant
baggage as far as the field theory limit
is concerned and you do not need them.
Okay. So when you ask the question of what to include, it's a little bit up to you what you want to do. Okay. But if I'm strictly talking about the field theory limit, the answer is a little bit more interesting, and it's related to something I just want to say about these u variables. Now I'm going to answer this question, but it dovetails exactly with what I was going to say.
So I have these u variables that depend on the y's, for any curve X. Let's once again put y_k = e^(-t_k), just so I can talk about tropicalization. So again, we're interested in what happens when the t's go to infinity. And then u_X is going to go like e to the tropicalization of u_X, which I'm going to call e^(-alpha_X). Okay? So alpha_X is going to be some piecewise linear function on this t-space.
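A toy numerical illustration of what this tropicalization means (the polynomial F below is a made-up subtraction-free example, not one of the actual u's): substituting y_k = e^(-alpha t_k), the quantity -log(F)/alpha approaches a piecewise linear function of the t's, the min over the monomials of the corresponding linear forms, as alpha grows.

```python
import math

def F(y1, y2):
    # a toy subtraction-free polynomial (NOT one of the actual u's)
    return 1.0 + y1 + y1 * y2

def trop_F(t1, t2):
    # its tropicalization: each monomial y1^a * y2^b becomes the linear
    # form a*t1 + b*t2, and the sum becomes a min over those forms
    return min(0.0, t1, t1 + t2)

# with y_k = exp(-alpha * t_k), -log(F)/alpha approaches trop_F as alpha grows
alpha = 60.0
for (t1, t2) in [(1.0, -0.5), (-1.0, 2.0), (0.5, 0.5)]:
    y1, y2 = math.exp(-alpha * t1), math.exp(-alpha * t2)
    print(-math.log(F(y1, y2)) / alpha, trop_F(t1, t2))
```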
Now, the properties of the u's guarantee the following. The key point about tropicalization and these alpha variables is that these alphas are the global Schwinger parameters.
And so the key point is that alpha for a curve X is a function on this space where all these cones live, this t-space, and if I take alpha for curve X and evaluate it on the g-vector for a curve Y, it's equal to zero if Y is not equal to X, and one if Y equals X. Okay. So the tropicalization of a u variable lights up on the g-vector of its own curve, and gives zero on everybody else's. Okay.
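This delta property can be checked explicitly in the simplest nontrivial case. For the pentagon (n = 5), the piecewise-linear functions below are my own reconstruction of the five headlight functions, worked out from the pentagon g-vector fan in the basis where curves 13 and 14 carry the unit g-vectors; the conventions are illustrative and need not match the lecture's.

```python
# Pentagon (n = 5): the curves are the diagonals 13, 14, 24, 25, 35.
# In the basis where g(13) = (1,0) and g(14) = (0,1), the remaining
# g-vectors are g(24) = (-1,0), g(25) = (-1,-1), g(35) = (0,-1), and
# candidate piecewise-linear headlight functions alpha_C are:
alpha = {
    "13": lambda t1, t2: max(t1, 0.0),
    "14": lambda t1, t2: max(t2, 0.0),
    "24": lambda t1, t2: max(0.0, -t1 + min(0.0, t2)),
    "35": lambda t1, t2: max(0.0, -t2 + min(0.0, t1)),
    "25": lambda t1, t2: max(0.0, min(-t1, -t2)),
}
g = {"13": (1, 0), "14": (0, 1), "24": (-1, 0), "25": (-1, -1), "35": (0, -1)}

# alpha_X evaluated on the g-vector of curve Y gives 1 if X == Y, else 0:
for X in alpha:
    for Y in g:
        assert alpha[X](*g[Y]) == (1 if X == Y else 0)
print("delta property checked")
```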
And that's why, if you then just write down this integral: the integral of d t, in whatever the dimensionality is, of e to the minus the sum over all curves c on the surface of X_c times alpha_c. Remember, with every curve we just figured out how to associate a momentum, and I can square that momentum to get an X; or I can just say that there are variables X_c associated with every curve, and that's my kinematics, my kinematic basis: some variables X_c associated with every curve, each multiplied by the alpha associated with that curve. Okay, this is something which looks like a Schwinger parametrization in every cone. Okay. And this is equal to the sum over all the cones of the product of 1/X_c over the c's belonging to the cone, which is what we call the amplitude, right.
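As a numerical sketch of this statement in the simplest nontrivial case, the pentagon (n = 5): integrating e^(-sum_c X_c alpha_c) over the two-dimensional t-space should reproduce the sum over the five cones (triangulations) of products of inverse propagators. The piecewise-linear alpha's below are my own reconstruction in one particular basis, and the X values are arbitrary.

```python
import math

# Candidate headlight functions for the pentagon, in the basis where
# g(13) = (1,0) and g(14) = (0,1) (my reconstruction; a sketch):
def S(t1, t2, X):
    a13 = max(t1, 0.0)
    a14 = max(t2, 0.0)
    a24 = max(0.0, -t1 + min(0.0, t2))
    a35 = max(0.0, -t2 + min(0.0, t1))
    a25 = max(0.0, min(-t1, -t2))
    return (X["13"] * a13 + X["14"] * a14 + X["24"] * a24
            + X["35"] * a35 + X["25"] * a25)

X = {"13": 1.3, "14": 1.1, "24": 1.7, "25": 1.9, "35": 2.3}  # arbitrary

# Midpoint-rule integral of exp(-S) over a box big enough that the
# integrand has decayed away:
h, L = 0.1, 25.0
n = int(2 * L / h)
total = 0.0
for i in range(n):
    t1 = -L + (i + 0.5) * h
    for j in range(n):
        t2 = -L + (j + 0.5) * h
        total += math.exp(-S(t1, t2, X))
total *= h * h

# Sum over the five cones (= triangulations of the pentagon, pairs of
# non-crossing diagonals), each a product of inverse propagators:
cones = [("13", "14"), ("14", "24"), ("24", "25"), ("25", "35"), ("35", "13")]
exact = sum(1.0 / (X[a] * X[b]) for a, b in cones)
print(total, exact)
```

The two printed numbers should agree to the accuracy of the grid: the single Schwinger-like integral knows about all five diagrams at once.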
So now this brings me back to your question. Okay: if we're doing this before doing the loop integration, all of these X's have the loop variables in them. We would have all these infinitely many winding guys, right? And that's fine. Okay, I do the loop integration. This is in Schwinger form, so I know what I get: I get the Symanzik polynomials, except you would call these surface Symanzik polynomials. They're not the Symanzik polynomials of a fixed graph; they're the Symanzik polynomials associated with the entire surface. Okay. So there's an analog of the Symanzik polynomials where we put all the curves on the surface together. They're the same old polynomials, very similar to the polynomials you had before, except with these alpha_c's showing up. Okay.
And now you have something which is mapping class group invariant, so you have to mod out by the mapping class group. You mod out by the mapping class group by putting a little extra factor of a kernel in this measure, which does what is, to a physicist, the Faddeev-Popov trick for modding out by symmetries. Okay, there's a very standard and simple way of producing this kernel. You can do it for all surfaces at once; it's algorithmic; you can put it on the computer and do it at ten loops. It's all fine. Okay. So these formulas are absolutely concrete; there's nothing formal about them. You can put them on the computer and integrate. Okay, so Giulio Salvatori has put them on computers and integrated them at ten loops, so there's no issue.
You know, there's an i-epsilon here, a contour rotation there; of course, if the kinematics is positive, Euclidean, etc., everything is fine, and once the kinematics gets interesting, all the usual physical issues about thresholds and i-epsilon prescriptions of course don't disappear. But I just want to stress that this is not some sort of formal object: in the regime where the integral looks well defined, it's 100% well defined. But now, coming back to your question: what I'm supposed to do in general, after I loop integrate, is put this kernel there. The purpose of the kernel is to kill the far-away windings. Okay, so that's effectively what it does.
>> Can you say a little bit more about the
kernel?
>> Yes.
>> You don't wake up every morning and
think about me.
>> No, I know. I'm sorry. Yeah. So let me say what it is in a simpler example. And this is really all that's going on, but we can see it already in this example. Let's say you have a function of just one variable, say an f(x), which is translationally invariant by some amount a, so it's equal to f(x + a). Okay. But what you want to do is make sense of the integral from minus infinity to infinity of dx f(x).
Okay, this is of course infinite, exactly because of the translation invariance; there are these translations T, so this is equal to infinity. And so what's our usual attitude about this? Our usual attitude, if you're a high school student or an undergrad or something, is to find a fundamental domain. In this case a fundamental domain would be a little interval that starts at any old b and goes to b + a. And so you say: really, what I should do is make sense of the integral of dx f(x) mod translations, whatever that means; and what it should mean is just the integral from b to b + a of dx f(x). Okay. So this is the fundamental domain idea. All right.
And you can try to do that for surfaces too; it's just that the mapping class group gets more and more complicated. This is not a fun problem, trying to identify a fundamental domain. You can do it for a torus, you know, but already for a torus SL(2,Z) is not exactly a walk in the park. It's not that hard, but it's not the most trivial thing you do. And then it gets more and more complicated from there. Okay.
All right. But physicists run into this issue all the time, right? When you define any path integral in a gauge theory, it's infinite, because of gauge invariance: the volume of the gauge group is infinite. So we have to figure out how to mod out by these kinds of symmetries all the time. And when we do the path integral of a gauge theory, we absolutely do not do this. You don't find the analog of the fundamental domain, which is insanely complicated; that's the space of all gauge-inequivalent configurations, ridiculously complicated. You would never think of doing that. Instead, you judiciously
insert one. Okay, so what you do is you say: I'm going to just pick any old function, in this case any old function g(x). Okay, let me pick some g(x); I'm going to draw g(x). Here's g(x), and it's going to look like this: any old random g(x) you like. Okay. But I'm now going to insert one, written as the sum over all k of g(x + ka), divided by the sum over all k of g(x + ka). So clearly I haven't done anything; I'm just taking the function and all of its translates. Okay. So far so good, right?
So I'm going to insert that into my integral. My integral is the integral from minus infinity to infinity of dx, to begin with, of one times f(x); but I want to somehow mod out by the translations. Okay: the one I'm going to write as the sum over k of g(x + ka) over what I'm going to call this big sum, capital G. The only thing I need is that capital G does not vanish anywhere. Okay, I want this formula to be meaningful, so capital G had better not vanish anywhere. So g(x) had better be non-zero in a big enough region. It can itself even vanish in a lot of places; g(x) could even look like this, so long as this length is bigger than a, so that when you translate it, the translates summed together never go to zero anywhere. Okay, so I just want this denominator to never vanish,
and then, times f(x), it's sort of obvious what's happening, because precisely by translational invariance, in every term of this sum I can translate back to k = 0, if you like. Okay, so, modding out by the translations: if there was a fundamental domain, even without finding it, I know that this is going to be the same as the integral from minus infinity to infinity with just one term here, for example g(x)/G(x) times f(x). If you like, all the other terms are just copies of this one under the action of the group, and modding out precisely throws them out. Now, this is very cool. Of course, if you make the choice of g(x) to be a step of width a, this goes back to the fundamental domain picture; you can put g anywhere you want, and it goes back to the fundamental domain picture because the sum of the g's is just equal to one. But you don't have to be smart to do that. Just choose a random g, any g that you like. It's a little fun to do it, you know, with a Gaussian or something. It's slightly surprising that this integral over everything, without thinking, gives you the right answer, but of course it's designed to do that.
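Here is a minimal numerical version of this insertion-of-one trick; the cosine for f, the Gaussian for g, and all the cutoffs are arbitrary illustrative choices.

```python
import math

a = 1.0  # the translation period

def f(x):
    # a translation-invariant function: f(x) = f(x + a)
    return 2.0 + math.cos(2.0 * math.pi * x / a)

def g(x):
    # any old function; a Gaussian here, nothing special about it
    return math.exp(-x * x)

def G(x):
    # the sum of all translates of g (truncated: the Gaussian dies fast)
    return sum(g(x + k * a) for k in range(-15, 16))

# Integrate (g/G) * f over the whole line by a simple midpoint rule.
# Inserting 1 = sum_k g(x + k*a) / G and using translation invariance
# means this equals the integral of f over ONE fundamental domain:
h, L = 0.002, 8.0
n = int(2 * L / h)
total = h * sum(g(x) / G(x) * f(x)
                for x in (-L + (i + 0.5) * h for i in range(n)))
print(total)
```

The printed value should land on 2, the integral of f over a single period, even though g was chosen with no cleverness at all.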
Okay. So this is what physicists call the Faddeev-Popov trick. Perhaps it doesn't deserve to be dignified with that name; I'm sure somebody did it in the 1700s.
>> This is your way of taking a quotient.
>> This is a way of taking a quotient. This is a practical way of taking a quotient without explicitly identifying a fundamental domain. Okay, that's the key point.
>> Can I follow up?
>> Absolutely, but one more only; one, not two.
>> Okay. So, I'm going to I'm going to ask
the what's in it for me question and
it's as follows. So in this institute
there's a geometry group.
>> Yes.
>> And there's an algebra group.
>> Yes.
>> So now you've moved, you know, everything into the geometric side, Teichmüller space, and the algebra.
Yeah.
>> So I would like to take something back
to the study of algebraic curves.
>> So the moduli space M_g or M_{g,n}
>> and I'm interested in algebraic curves.
>> Yes. Yes.
>> Not in Riemann surfaces. So how can I make use of this technology to study algebraic curves?
>> I don't know. What I would say is that what makes it useful for us is, in fact, its distinctly algebraic flavor. I mean, that's the point: you just have these algebraic equations that you solve as ratios of polynomials, and this does something; it gives an algebraic way of characterizing things.
>> But the algebra is on the geometric, analytic side, right?
>> Sorry?
>> We have to somehow cross the transcendental divide. I mean, you spoke about ugly Jacobi theta functions; they seem nicer.
>> Oh yes, well.
>> We have to somehow go from the analytic side to the algebraic side. Maybe this... I'm willing to do numerical computations, even with Giulio.
>> Right. Well.
>> right well um um
Maybe one one one interesting point uh
uh to make is also related to your to
your earlier questions about the uh
infinitely many use. So you can ask I
mean there are we know that uh we're
talking about string theory at loop
level. There are jacobi data functions
everywhere. Where do the jacobi data
functions come from in this world?
>> For example,
>> the jacobi data come functions come from
the product over infinitely many curves.
Okay. So that's so that of course you
know the product representation for
dedicant theta or jacobi theta is like
product of 1 - y the k right what are
those product 1 - y the k in this
language the 1 - y the k is coming from
these f polinomials and there's
infinitely many terms of the product
because we have infinitely many
self-intersecting curves so that's uh uh
And maybe at least one thing that I find interesting is that a lot of this story can be abstracted away from curves on surfaces. There are even more general settings, associated with cluster algebras and quiver representation theory and so on and so forth, where you can associate all of these things: there are sets of variables, with u variables that go along with them; they satisfy u-equations; there are infinitely many of them; and so on. And so it's natural to wonder whether things like the Jacobi theta functions that are associated with surfaces have similar generalizations in these bigger settings. For example, just to say a negative: if I give you a Jacobi theta function, the most exciting thing about it is its modular properties, and you can ask whether the modular properties are obvious in this way of writing things. Absolutely not, as far as I can see. What this tells you is that it's good to write the formula, the product of (1 - y^k) factors, that comes out of this way of thinking.
>> But we should see this, and then, using the u-equation formalism, we should see Riemann theta functions and all that.
>> Yes, absolutely. Absolutely, yes.
>> And it could be a tool, a tool to study and evaluate them.
>> It could be. And I think one really key thing, which has come up in a question I also haven't answered yet, is that it gives a natural regularization of these things. Okay, so that's kind of the point. If you're a string theorist talking about calculations at high loop order, there are Green's functions on a surface that you have to compute. You compute them roughly by the method of images; that involves infinite products that have to be regulated, and they're very subtle things. When they come out in the u form, they don't have to be regulated, because they're manifestly convergent products of things that are bounded between zero and one. And it gives you a picture of what's regulating them: the degree of complexity of the curves that you're talking about is what's regulating them. So I think what is probably the most useful thing about that connection is that it's a way to sneak up on these very infinite objects in a well-defined, finite way that has geometric meaning, in terms of putting cutoffs on the complexity of the word and on how complicated the curve looks on the surface. And finally, coming back to your question: you see what the purpose of the g(x) was; the purpose is mostly to shut things off a little distance away. So now, of course, when we do this in our setting with the mapping class group, we don't choose random g(x)'s; we choose the g(x)'s to be made exactly out of these alphas, right? It's very easy to build sums of alphas whose translates are never zero anywhere, which you can put in the denominator. But that means that in this formula, in the numerator, you're just going to have the alphas for some finite set of curves, because there's a finite set of curves upstairs: there's literally a finite region in this fan in which this kernel is non-zero. Okay? And so you don't need to keep infinitely many curves.
Okay? So that's what really allows it to be completely practical. I told Bernd that these u variables mostly go to one, or, if you tropicalize them, that they only matter in very narrow cones far away; but in a precise sense you can throw them out if you're doing the field theory computations, when you mod out by the mapping class group, because you literally don't go there. Okay. A fundamental domain, to say it in a lowbrow way, is choosing one representative set of Feynman diagrams: you choose one representative set of diagrams to cover the space. If you do that, it's going to look very ugly. It means you have some weird collection of cones, and you say, okay, this collection and their translates are going to cover the space; but you have to artfully choose that collection of cones. Instead of doing that, you just light up some region by this method. Okay, say here's some sort of wedge: that's what I shove upstairs, and then I'm done, by this idea. So long as I do the integral over that region, it's going to pick up a third of a Feynman diagram from this cone and two-thirds of one from the other cone, and add them up in some nice way; but without thinking, I cover the entire space, and then it's finite. Then it's really a finite number. Yes?
>> I know you stated the commandment that it is forbidden to ask a third question.
>> Yes.
>> I will cross this forbidden line. Okay. So, it seems that you are going back and forth, in this cross-pollination between mathematics and physics, on this problem, and it's normal to get your kicks from abstraction. One natural question which arises, I think: are there any toy models in three dimensions with the same formalism, of course without the monstrosity, to focus on the right topic?
>> I will stop you and say that this is already a toy model; it's already a toy model that is actually dimension agnostic. It could be in three dimensions if you wanted. I never said what the dimensionality was in this story.
>> Yeah. So you're saying that there's a natural generalization between...
>> Generalization or specialization, you know. There's the number D; it doesn't have to be an integer, it could be complex, it could be anything you want. But that's one of the nice things about working with the scalars. As I mentioned yesterday, although I won't have time to explain it in detail, it is surprising that this formula, written just for scalars, secretly knows about gluons as well. Okay, so there's a way in which it knows about gluons and pions too, in any number of dimensions. So that adds a lot of physical excitement and novelty to these ideas. But nowhere here do we say anything about the number of spacetime dimensions.
>> Professor, you're saying that the generalization is completely obvious; would we then have, for example, 3x3 matrices for three dimensions?
>> Oh no, no: there's nothing new about three. In any number of spacetime dimensions you'd always have the same 2x2 matrices. If you're asking whether there's a generalization where, instead of these words, you have some three-dimensional words and you'd multiply 3x3 matrices instead of 2x2 matrices: I have no idea. Probably there is, because mathematicians are very interesting people who invent all kinds of things. But anyway, I think we have to stop now for the tea break. So what I can do in the time that I have is give you what I mentioned in the beginning: a very new application of the ideas that we've just been talking about, one that gives us access to a very interesting class of physical processes, scattering with an asymptotically large number of particles.
So we're going to again talk about this tr phi^3 theory, and this is a nice thing to try in tr phi^3: I want to imagine I have amplitudes with n particles, where n goes to infinity. Okay. And I want to say something about what these amplitudes look like. Now again, I said it already before as part of the motivation: what I like about this question is that it makes very clear which part of what we were doing before is continuously connected to standard ways of doing physics, and what could be really new. Because none of the ways you think about computing amplitudes escapes recursion: whether we interpret amplitudes as canonical forms, or use BCFW or other kinds of recursion relations, every one of these pictures has something recursive built into it, right? A canonical form is an object that recurses to a canonical form, right? You build up higher-point amplitudes by gluing together lower-point ones. So it's all a picture where the simplicity is in few particles and the complexity is in many. Okay? And so we're looking for something essentially new, where there's simplicity when the number of particles is huge. That's what we're looking for. Okay.
And the clue that such a thing is possible comes from the tropical representation of the tr phi^3 amplitudes that I now want to write on this board. Okay. So if I look at the tree amplitude of tr phi^3 for n points, I can write it as this integral: d^(n-3)t of e^(-S), where S depends on the t's. And S(t) is associated with this picture of the mesh: S(t) contains the sum of the x_1j t_1j, where these t's are associated with this bottom boundary. So this is the mesh. I mentioned it before; it's associated with this kind of triangulation, where that's 1, 2, 3, 4. Okay.
And again, this is just a picture, a mnemonic for thinking about all the variables in the problem that makes it easy to write this expression. And then, for every internal c, you have a term: the internal c_ij times something; it's a max of zero and a bunch of things, right? So let me just give it concretely in this example. I hate it when people write formulas with bounds on indices that you can barely read on the side, so I'm not going to do that. I'll just show what it looks like in a big enough example where you can see how it generalizes. Okay.
So take this example, with t13, t14, t15. My action is

S = t13 x13 + t14 x14 + t15 x15
    + c13 max(0, -t13)
    + c24 max(0, -t14)
    + c35 max(0, -t15)
    + c14 max(0, -t14, -t14 - t13)
    + c25 max(0, -t15, -t15 - t14)
    + c15 max(0, -t15, -t15 - t14, -t15 - t14 - t13).

So, associated with this c13, I have the term c13 max(0, -t13); in Carolina's language, remember, this was associated with those little arrows, this one in that direction, this guy the other direction, and so on for the other c's on the picture. Then I go up this way, organized by how complicated these things are: c24 and c35 each come with a max over zero and a single negative t. For c14, you see I've gone up a level here, so it's a max over zero, -t14, and -t14 - t13: I go as far up as I go, and I go down from there, taking the max of all of those guys. This guy c25 is instead a max over zero, -t15, and -t15 - t14. And finally, c15 is a max over zero, -t15, -t15 - t14, and -t15 - t14 - t13.
By the way, I apologize that these things have minus signs on them. You might have thought it would be convenient to just reverse all the sign conventions and call all these things plus signs. If you thought that, you would agree with all my collaborators, and in the papers you'll find it done in that officially intelligent way; but I have my own personal reasons for preferring it this way. So screw you all. All right. And that's a technical term, for my collaborators.
Um, okay. So anyway, I hope that the pattern for any n is clear, right? You get a max of zero and these strings of sums of negative t's. Okay. So once again, already in this example we see we have 1, 2, 3, 4, 5, 6: six little tropical functions that are going to break this three-dimensional space into 14 cones. Right? So here at six points we have 14 diagrams; the Catalan number is 14. And so we're starting to see the point, right: as you go to large n, there are roughly n^2 of these maxes, they have order-n terms each, and magically their domains of linearity turn into all of the roughly 4^n diagrams at large n. Okay.
So that's why there's some sort of vague hope that maybe this integral (it's a single integral; I'm not talking about summing diagrams, it's a single integral made out of simple objects) even has, if you squint, some kind of nice continuum limit: these sums of consecutive t's maybe look like an integral; you go to large n and replace sums with integrals; that integral kind of turns into a path integral. There are all sorts of vague words you could say about why you'd expect something nice to happen here in the large-n limit. Okay, and that's exactly what we're going to see.
But first I want to do two things here. First, I want to be a little bit more precise about what a large-n limit could mean, because when you take a limit you always have to say what you're holding fixed.
>> Mhm.
>> Okay. So we have to hold something fixed as n goes to infinity. And then I want to do the super simplest case of the large-n limit, just to give us an idea of what we're looking for. Okay.
So, the first comment: what are we holding fixed in the large-n limit? What's fixed? It's essentially large-n kinematics that I want to talk about. If you're a physicist, we could actually spend a fair amount of time on this topic; I think it's actually interesting to talk about. But let me give you a quick impression, and I'll make a very precise statement. If you're a physicist you'll care about the physical implications of all these things, but we can just make the precise statement quickly. So
remember, we said we started with this picture where we imagine we have the momenta of the particles, which we put end to end to make this momentum polygon, giving us this picture of momentum conservation. Okay, so as n goes to infinity, you have to somehow give me a large number of momenta, right? But in some way where something is held fixed. So the obvious thing to do here is to say that what's held fixed is just a curve: I draw some curve C, and then for any finite n, well, I just plonk down n points on this curve. Of course, maybe I plonk them down with some density; there are more details we could talk about, about how I plonk them down. But at zeroth order, I'm just going to plonk down a bunch of points on this curve, and I'm going to use that to define my momentum polygon for any finite n.
Okay, so at least here we've defined something, right? Here we've defined something that is going to stay fixed as n goes to infinity. And that's what we want to know: the amplitude is going to depend on these n momenta, but the hope (again, this is sort of vague) is that at large n the amplitude will only depend on this curve C, and it won't depend on the particular way you put the points on it. Okay, that's what we want to see, whether something like that is true.
Now, that picture immediately translates into the language of this mesh. If we think about this mesh as defining our kinematic space (don't worry, I've got to write down a simpler version of this equation again if you're missing it), then another, slightly more general way of talking about the kinematic limit is this: draw the mesh, but now imagine that the x's, my kinematic variables, these x_ij's in here, come from a very fine mesh. That is, I'm going to imagine that the x_ij's on the inside are really secretly some smooth function, call it x(u, v), evaluated at u = i/n and v = j/n. Okay, so here's u, going from 0 to 1, and here's v, going from 0 to 1 the other way. So I have a smooth function x(u, v) inside this triangle, and I'm just discretizing: plonking down a mesh and reading off the x_ij's from the value of that smooth function evaluated at the discrete points. It doesn't have to be a uniform mesh; we could do it in many ways. So this is the slightly more general picture. I mean, obviously everything that comes from the curve picture will give me something smooth in here; but now we're just going to say I have a smooth X inside this triangle. Okay, is that clear? So once again, the hope now is that the amplitude, as n goes to infinity, just becomes a function of this smooth x(u, v). Okay.
Okay.
Okay. Now, let's get a...
>> Can you say one more time how I compute the x from the curve?
>> I'm sorry?
>> No, but suppose I have a continuous curve, and I want to compute the continuous x. I'm willing to do a continuous computation.
>> Yes, a continuous computation. You know, I pick a point here and call it zero. So my curve is labeled by some little x that depends on a parameter, let's call it u, that goes from 0 to 1. So here is u = 0, here u = 1/2, and u comes back to 1 here. Okay.
>> Mhm.
>> And so the first guess for X(u, v) would be (x(u) - x(v))^2; and if the particle has a mass, plus m^2. Okay, so that's how I'm defining it.
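A small sketch of this prescription (the circle below is an arbitrary stand-in for the fixed curve, and I use a Euclidean square for illustration rather than a Lorentzian one): sample the curve at u_i = i/n and build X_ij = (x(u_i) - x(u_j))^2 + m^2.

```python
import math

def xcurve(u):
    # a fixed closed curve, here an arbitrary circle in a 2-plane of a
    # 4-component momentum space (an illustrative stand-in only)
    return (math.cos(2 * math.pi * u), math.sin(2 * math.pi * u), 0.0, 0.0)

def build_X(n, m2=0.0):
    # plonk n points on the curve at u_i = i/n and form
    # X_ij = (x(u_i) - x(u_j))^2 + m^2  (Euclidean square, for illustration)
    pts = [xcurve(i / n) for i in range(n)]
    sq = lambda p, q: sum((pi - qi) ** 2 for pi, qi in zip(p, q))
    return [[sq(pts[i], pts[j]) + m2 for j in range(n)] for i in range(n)]

n = 12
Xij = build_X(n)
print(Xij[0][6])  # antipodal points on the unit circle: distance^2 = 4
```

Refining n while keeping the curve fixed is exactly the large-n limit being described: the discrete X_ij's are just samples of the one smooth function.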
>> Is there some notion of on-shellness in the curve, like p^2 = m^2?
>> You could ask for that or not. Okay, and actually this gets into the longer physics discussion: you could ask for that or not. The simplest thing to do is just say these are whatever they are; you would call this an off-shell correlator. The formulas are exactly the same. You could interpret it as an on-shell amplitude with a little bit of scaffolding on top of these guys if you wanted. Okay. And that's where there's about a 45-minute discussion of all the different ways we could draw this curve: it could be space-like, it could be time-like; it can have the interpretation of a bomb going off, or of lots of particles going in and lots of particles going out. So there are lots of different physics words for it.
>> But then there was also the c. The c is the...
>> We're going to come to the c's in a moment. That's going to be the key thing that gives some hope that something simple is going on. Okay.
Okay. But in fact, so so so before uh
before before getting there, I want to
do the very simplest version of a smooth
X, which is all x's equal. Okay, so
that's the very simplest thing you can
do. What what if all the x's are equal?
Put it here.
So if all x's are equal,
then the amplitude is very simple. What
is the amplitude? Well, every single
Feynman diagram, every diagram is the
same. It's like 1 over x to the power of the
number of propagators, right? So all the
x's are the same. So the amplitude is
just 1 over x to the n minus 3, multiplied by the
number of diagrams.
And the number of diagrams are the Catalan
numbers.
And it's very easy to see that the
Catalan numbers grow like four to the n
at large n. So we said that
already. So already here we see that
this goes like 4 over x
to the n at large n.
Okay.
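(A quick numerical aside, not from the lecture: the Catalan count and its four-to-the-n growth can be checked in a few lines; the function names here are mine.)

```python
from math import comb

def catalan(m: int) -> int:
    # C_m = binom(2m, m) / (m + 1): counts the triangulations of an
    # (m + 2)-gon, i.e. the planar diagrams in this counting problem.
    return comb(2 * m, m) // (m + 1)

def growth_ratio(m: int) -> float:
    # C_{m+1} / C_m, kept as exact integers first to avoid float overflow
    return catalan(m + 1) * 10**6 // catalan(m) / 10**6

print([catalan(m) for m in range(6)])   # [1, 1, 2, 5, 14, 42]
for m in (10, 100, 1000):
    print(m, growth_ratio(m))           # approaches 4, so C_m ~ 4^m
```

The exact ratio is (4m + 2)/(m + 3), which makes the approach to 4 manifest.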
So that's our sort of first more precise
hint for what we're looking for. And if
you're a physicist, you're very excited
to see that adding the exponent. Okay,
because the amplitude looks like
the amplitude looks like e to the n
times some little some function a of x.
In this case, a of x is just uh you know
log of 4 over x.
Okay,
or I put a minus sign there and it would
be log of x over 4,
either way
but it's very common in physics when you
have a small coupling in a theory that
the leading amplitude or partition
function or whatever looks like e to the
minus some coupling normally we call it
1 / g ^2 times the thing that sits here
is controlled by some classical
physics
that's the leading thing this is called
the leading classical behavior and then
you do a semiclass expansion around it
and so on and so forth. So for
example in the path integral, uh in the
Feynman path integral, we have e to the 1
over h bar — it's Planck's constant that goes
downstairs in the action. Okay. So
>> so in some sense this x equals one is a
complicated case, because the
dimensionality of your curve also goes
up
>> should I think about your polygon
staying in a so your curve being a fixed
dimension.
>> You can think of it as saying in a fix.
>> This is not a good model.
>> That's right. This is not a good model.
Exactly. It's a bad model for that.
Exactly. Exactly. So, this is the
slightly longer uh uh discussion about
exactly what does or doesn't work for
the story that I'm telling you. Just to
to to skip to the end, there's a the
story I'm telling you is valid for a
fixed curve and fixed dimension. But
you're right, x equals 1 is not a model
for that because we cannot realize x
equals 1 from any curve in a fixed
number of dimensions. Absolutely. Okay.
But somehow uh I mean uh it's kind of
interesting in this entire story.
Everything is about the x's in our
stories. Everything is about the x's.
That's the whole point. The momenta are
somehow far away somewhere, right?
Everything is about the x's. So it's
natural to ask this question in our
story just to get some sort of first
handle on what's going on. But I want to
stress that this is now looking exciting
because it looks like at large n we
should be looking for some kind of
theory whose weak coupling is 1 / n.
Exactly. Right. One over g^2 to be n. So
n is 1 over g^2. So it looks like
there's a dual theory at large n. Okay.
So if you're a physicist those were in
the beginning of any talk on the subject
you know you would say this is evidence
for some kind of dual theory at large n.
We're going to be finding this dual
theory at large n. That's exactly what
we're looking for.
Okay. So, but of course, as Bernd was
saying, that's a very very special case.
Um,
let's imagine a more general X of u and v.
And now comes a second uh uh interesting
point. Maybe before we get into this uh
a second point, um let's go back to this
picture. This is actually essentially
repeating uh slightly more slowly uh
something uh uh Carolina was saying at the
end of her talk yesterday. Let's say we
have this uh this example at six points.
Again, sorry for the bad drawing. Uh
and it's just kind of interesting. So the
whole associahedron, the whole
amplitude language, involves turning on
all of these C's. Okay, but let's just
see what happens if we turn some of them
off. Okay, Carolina mentioned that there
are some limits where we turn so many
off that the associahedron collapses in
dimension, that the amplitude gives us a
zero. But let's just back up a little
and see what some other sort of simple
things look like. One simple thing you
can do, remember these guys were
intervals, right? Whatever direction
that was, right? So, one kind of obvious
thing to do is just turn off all of
these guys and just have these
intervals.
Then the object is really simple. It's a
hyper cube just a cube in this case,
right?
And also the amplitude is extremely
simple in that limit. Okay, so if you
remember I erased the action but let me
write it again. You know these
contributions. So my action was uh t13
x13 plus t14 x14 plus t15 x15.
Remember I'm integrating the amplitude
is integral uh dt e minus s. So in this
example, this S has these linear
pieces, and then these guys were the
simple uh things whose maxes uh
only involve one variable at a time. So
c13 max of 0 and minus t13, plus uh c24 max
of 0 and minus t14, and c35
max of 0 and minus t15.
— t's, yes. Thank you.
Okay. Um, so these integrals are really
simple too. It's just, you know, a
single integral, right? Uh, so what is
that that integral? What is this action?
This s like when t is positive, uh, when
t13 is positive. Uh, it's equal to t13
x13.
And when t13 is negative, uh, that's the
max. So here it's equal to this slope — uh,
so this is c13 minus x13,
uh sorry, this is that times minus t13.
Okay, so it's just a slope — different
slopes; I made them look the same, but
different slopes on this side and that
side. And so when I do the integral from
this side I get 1 over x13,
from the integral from 0 to infinity, and
from this side I get plus 1 over c13 minus x13.
Okay.
And so I just get a product of those
factors for each one of these things. So
times 1 over x14 plus 1 over c24 minus x14,
times 1 over x15 plus 1 over c35 minus x15. So I just
get a product of these factors. And
each one of these is the
canonical form for that little
interval, or looks like a four-particle
amplitude — just a little local four-particle
amplitude. Okay.
So this is a this is a limit —
uh this is something uh uh Nick loves,
um um they're uh they're called the
maximally split limits. Okay, where
the amplitudes are maximally split. Uh
Bernd loves it, because this is a kinematics
for which uh the saddle point equations,
uh the scattering equations, have precisely
one solution. All sorts of nice things
about this limit. So we call them
maximal splits, and the associahedron is
just a cube in this case. Right? Of
course the associahedron in general is much
more complicated than a cube. Uh however,
the essential story you're going to see
is that in the large n limit the
associahedron essentially turns into a
cube, in some particular specific sense
uh of that of that statement. Okay. So
um all right but anyway let's uh let's
keep going here. There's there's another
obvious thing that we could do. We could
turn everyone off here and turn this guy
on in the corner. Right? That's also a
limit where we get a non-zero amplitude.
That's as simple as possible. And I
won't write down the formula, but it's
just the, you know, it's just the
canonical form of that little simplex. Okay,
it's a single term again. So if I turn
on all of these guys, I get something
simple. If I turn on that guy, I get
something simple. Okay, I add up
everything, I get the full associahedron.
It's actually amusing that if you
take this simple guy and that simple
guy — so this gives you a cube, that
gives you a — that just gives you a top-
dimensional simplex — already just
summing those guys is a very complicated
object. Okay, it's almost as complicated
as the full associahedron in this case. It just
shrinks two of the edges of the associahedron.
Okay. So you have uh uh so you have 19
edges instead of 21 for the uh the the
three-dimensional associahedron; it still has
all the same uh facets it had before.
Okay. Um so it's already a pretty rich
object, where you sum just the little
intervals on one side and the big top-
dimensional simplex on the other side.
As you go to large n this object uh
has, you know, roughly 2 to the n — sorry,
not facets — vertices. That's a lot of
vertices. Okay, quite a
complicated object.
>> but the dimension always goes up, right?
>> Dimension always goes up. That's right.
Exactly.
>> There's no scenario where the dimension
stays the same.
>> No. No. Exactly. So that's that's that's —
so I gave you the like reasons to
expect a large n limit
at the very beginning. The reason to
be worried is that the integrals are not
staying in fixed dimensions. So the
integrals are sort of getting bigger and
bigger, and so it's not obvious from that
point of view that a large n limit
should exist. Okay. But uh but here we
are not talking about large n. I'm
just saying something in general. If you
take a big mesh, there's two limits
where it's extremely simple. One where
you just turn on the c's on the
strip on the left. One where you turn on
the c in the corner. In one case,
you get a big cube. In the other case,
you get a simplex. Already when you just
sum these two guys, you get something
really complicated. Okay, just summing
these two guys something pretty
complicated. Of course, the whole
associahedron is summing everything else.
Okay, so so already these guys have
roughly two to the n vertices. If you
put everything else in there, you grow
to roughly all the four to the n
vertices of the full associahedron. All
right.
Now, let's return to large n and to
Bernd's question about what the c's look
like, because that's the sort of key uh
that's one uh key thing.
So, this is a pretty large n. Uh, and
here's the uh cool point. So, I'm
imagining that my x's are, you know,
somewhere in the neighborhood of this
all the x's equals constant. They're not
equal. They're constant, but you know,
they're all, you know, they're varying.
They may be varying a lot, but they're
they're staying away from zero. Okay?
They're sort of staying away from zero,
but they're varying a lot inside this uh
inside this triangle.
But that means something really cool.
Let's look at the c — the typical c on the
inside.
As I said, the typical c on the inside
is this: c i j = x i j + x i+1 j+1 − x i+1 j − x i j+1.
This is roughly the derivative in i, the
derivative in j, of x — the discrete
derivative in i, discrete derivative in j, of x —
and therefore this is of order 1 over n squared.
Okay — I shouldn't have just said that: I mean it's really
1 over n times the derivative with respect to u, times
1 over n times the derivative with respect to v. Okay, so
this is something of order 1 over n squared.
So all the c's on the inside are small.
Of course, there's many of them. Okay,
so this doesn't mean that you can just
throw them out. Okay, but it kind of
means that the C's on the inside are
tiny. Meanwhile, what are the C's on
these corners?
These are x plus x minus x — but this one
term is absent for them. Okay,
so the c's on these corners are of order
one. These guys are all of order one.
And this guy on this corner is also of
order one. Okay. So in this continuum
limit, when the x's inside are taken to
be smooth,
then the c's on the outside here and
this one here are of order one,
and all the ones on the inside are of
order 1 over n squared.
>> Sorry.
>> Yes.
>> This I don't understand why they're
order one. So the other one you
interpreted as some derivative. Okay,
here one of them is missing.
>> Yes,
>> that I understand. You can't write it as
a double derivative anymore. But what
can you write it as?
>> Oh, you can't. It's just x + x - x. All
these x's are of order one. So this is
of order one.
>> It's like a single derivative plus
something.
>> Well, it's like a single derivative plus
plus an x. Yes,
>> if you like. It's x minus x, plus an
x. So this is of order one. I
want to stress something to you. If
you thought that the x should go
to zero on the boundaries, then this
would not be true. Okay. So this is sort
of important — uh in this story, uh
again, this is the half hour if
you're a physicist — that we added this
mass squared. Okay. We add this uh we add
this m squared. So even on the boundary it's
not going to zero; it's going to m squared. Okay,
that's what makes it uh that's what
makes it happen. But on the inside all
the m squareds cancel. On the outside
they uh they they they don't. Okay. So
how —
>> just to be more specific: one x of order
one, and one derivative of order one.
>> That's right. So the whole thing is
of order one. I'm sorry — this is order
one plus order 1 over n, but you know you
can put it together like this. Yeah.
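(A small numerical check of this scaling — my own sketch, with a made-up smooth profile x(u, v) that stays away from zero.)

```python
import numpy as np

def c_grid(x: np.ndarray) -> np.ndarray:
    # The interior combination c_ij = x_{i,j} + x_{i+1,j+1} - x_{i+1,j} - x_{i,j+1}:
    # a discrete mixed second derivative of the grid of x's.
    return x[:-1, :-1] + x[1:, 1:] - x[1:, :-1] - x[:-1, 1:]

def max_interior_c(n: int) -> float:
    u = np.linspace(0.0, 1.0, n)
    # hypothetical smooth profile, order one everywhere
    x = 2.0 + np.sin(np.pi * u)[:, None] * np.cos(np.pi * u)[None, :]
    return float(np.abs(c_grid(x)).max())

for n in (50, 100, 200):
    # n^2 * max|c| settles to a constant: the interior c's are O(1/n^2)
    print(n, max_interior_c(n), n**2 * max_interior_c(n))
```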
Okay. So
uh so that's already some some
indication. So let's for the moment
forget about all the internal c's.
Okay. As I said — we can't, we won't
actually forget about them; we're going
to put them all back in uh in a bit. Um,
but this at least suggests that it's an
okay idea to start thinking by throwing
out the internal c's. And it's interesting that
that exactly lands us on this sort of
first complicated polytope. Okay — that
it could be super simple when it's just
these guys on one end, it's super simple
when you just turn on the guy on the other
end, and you add the two of them, you get
something complicated after all. But
anyway, somehow that first complicated
thing is our uh first object of study
here for for a for a natural reason.
All right. And that that lets me write
the integral down. I can now just write
a single integral down that we can all
stare at. Um
>> so I'm just going to introduce some uh
>> so Nima, since you mentioned the
scattering equations, should we solve them
for this too, at the inside and outside?
>> Yeah, as as you'll see, this large n
story suggests a dramatic simplification
in solving the scattering equations at
large n
>> for this situation
>> for all situations — for all situations in
this world, even when we turn back on the
c's. Okay, so that's so all these
things go together. I'm actually talking
about the sort of physically more
interesting case perhaps, about what the
field theory amplitudes look like. There
are a lot more tropical words, and it's
actually a much more interesting and
involved analysis. Uh uh but the same
statements end up being true for string
amplitudes in the limit where the where
the energies are huge um where you're
just solving the scattering equations,
and the arguments are one-line arguments
for what the large n results look
like. So they're they're much simpler uh
at large n. There's a specific sense in
which you isolate a single solution —
it's rational — and you follow it, and it
becomes simpler and simpler as you go to
larger and larger n.
>> and you'll see there's sort of one
simple idea uh which uh we probably
won't get to it from the strings but
once you see what it is uh here we'll
immediately you'll immediately see what
you have to do in the scattering
equation context okay
>> yes
>> so is what is the statement about the
middle one since there are many of them
yes is it that they individually
contribute in different regions or
>> no, I mean at the moment you have no
reason to ignore them. Okay, at the
moment each one is of order 1 over n squared, but
there's n squared of them. So you can't just drop
them. Okay, you can at least say they're
definitely not more important somehow.
They're not going to like dominate
everything. So uh and as we'll see we
might see if we Yeah, I think we we we
might even see uh uh you can really
include their effects. Sometimes they're
not important, sometimes they're
important, but uh but there's a there's
going to be a formula at large n in all
regions of kinematic space that you care
about where you can see um all these
things happen. All right. So
okay. So um so what is but so let's
let's turn off the so turn off
um the internal c's.
And so our amplitude is then just the
integral dt13 up to dt1,n−1, from
minus infinity to infinity, of e to the minus S, where S
is the sum over j of t1j
x1j,
plus the sum over j of c j−2,j times the
max of zero and minus t1j,
and then it has one complicated term, which
is this corner c, c1,n−1.
So these are the c j−2,j's, and
this one c here is c1,n−1. Okay,
c1,n−1, and this one has the max
of everybody: so max of zero, minus t1,n−1,
minus t1,n−1 minus t1,n−2,
and so on. Okay.
So we have to do this single integral.
I'm going to introduce a little bit of
uh uh notation just so I don't drag
machines around all the time. Um so
remember,
the kind of obviously simplest part of
this action is these
first two terms — those are the ones that
gave us the hypercube. Those are
the ones that were sort of
trivial. I'm going to call those —
I'm going to call that part S naught — and for the moment
I'm going to call this last, complicated term S1.
And furthermore, in here, remember what
it looked like in here: when t1j was
positive uh it had a slope that was x1j,
uh when it was negative it had a slope
that was c j−2,j minus x1j.
So I'm just going to call this one aj
and this one bj.
Okay, they just stand for those
variables. And I'm going to call C1 N
minus one. I'm going to call C. Okay, so
I don't just keep dragging these indices
around. So my amplitude depends
on the AJ, the BJ, and the C. Okay,
that's what the uh uh amplitude depends
on.
Okay, and so I can write this as an
integral. Let me write it in this uh way:
as an integral uh dt13 up to
dt1,n−1, and I'll write as a
product over j the sort of independent
things. So I'm going to write this as a
product over j — um uh slightly
suggestively I'll write them as p hat of
t1j —
um and then uh an e to the minus S1.
Okay, where p hat j of uh
uh t1j
is e to the minus aj t when t is
positive, and e to the plus bj t when t is
negative. Right, that's just exactly the
thing that we were talking about.
okay so that's our integral
Okay, so once again, in the limit where
the c is zero — if I set c to zero, I
know what this is. It's just the product
of 1 over aj plus 1 over bj, right? So
in that limit, the answer is uh is simple.
C is the complicated term, right? c is
the complicated term where all
the variables uh talk to each other.
Everything else, they're uh decoupled.
All right?
So um and again, if we officially do this
integral, the way to do it is to find the
regions where it's piecewise linear, and
then we're back to doing the diagrams,
right — that's just back to where it
came from. So we have to figure out some
way of thinking about this integral
which is not that. Yeah.
So um
one key idea this is it's not I mean
it's just psychology
is to interpret this as saying well okay
well at least there's a limit where the
answer looks simple. So I'm going to
define a kn to be the product over j of
1 a j + 1 bj. This is exactly what I
would get. This is what the amplitude
would be if the c was zero. Okay this is
aj and bj as I send c to zero. Okay.
So, it's sort of reasonable to look at
what a is relative to a KN. Okay. So,
and if I look at a / a KN,
well, a / a kn has a very nice
interpretation.
A / a KN has the interpretation of an
expectation value of the quantity eus s1
in the probability distribution given by
eus s.
Right?
where now, what I mean by the
probability is: there's an independent
probability for every t — they're all
independent — and p of t, not p hat of t: p
of t is just this p hat normalized to
one. Okay, so this is just exactly p hat
j of t divided by 1 over aj plus 1 over bj.
Okay, so I've done nothing here,
right? I've just multiplied and divided
by A naught. But that sort of lets
me interpret the amplitude as the
expectation value of e to the minus S1
in the probability distribution given by
e to the minus S naught. So this is the extremely
important point. So is this is this
clear or uh totally clear? Okay, trivial
point but extremely important for how
we're going to think about things.
Okay, now this now lets us sort of think
probabilistically and that's the that's
the sort of key to making uh progress
here.
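(Here is a minimal Monte Carlo sketch of that reinterpretation — my own toy implementation, assuming the two-sided exponential form of each p_j and taking S1 to be c times the max over zero and the running partial sums, as above.)

```python
import numpy as np

rng = np.random.default_rng(0)

def amplitude_mc(a, b, c, samples=200_000):
    # A = A0 * E[e^{-S1}], with each t_j drawn independently from
    # p_j(t) ~ e^{-a_j t} for t > 0 and e^{+b_j t} for t < 0.
    a, b = np.asarray(a, float), np.asarray(b, float)
    m = len(a)
    A0 = float(np.prod(1.0 / a + 1.0 / b))
    # sign chosen with weight 1/a vs 1/b; magnitude exponential
    pos = rng.random((samples, m)) < (1.0 / a) / (1.0 / a + 1.0 / b)
    t = np.where(pos,
                 rng.exponential(1.0, (samples, m)) / a,
                 -rng.exponential(1.0, (samples, m)) / b)
    # S1 = c * max(0, -t_m, -t_m - t_{m-1}, ...): running sums from the far end
    walk = -np.cumsum(t[:, ::-1], axis=1)
    S1 = c * np.maximum(0.0, walk.max(axis=1))
    return A0 * float(np.mean(np.exp(-S1))), A0

# Region where a_j < b_j for all j: the estimate stays an
# order-one factor below A0.
est, A0 = amplitude_mc(np.full(20, 0.5), np.full(20, 1.5), c=1.0)
print(est / A0)
```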
But before proceeding um I should have
done this uh a second ago. I want to
give you an example of what the answers
look like. Okay, just so you have an
idea of of of the kind of answer that
that we're going to get because it's at
least to us it was somewhat shocking.
um
get this chart.
So here's here's the claim.
As you'll see, the interesting thing is
that there's not one uniform formula.
Okay? Instead, there are different
regions in kinematic space. There's
different regions in this kinematic
space where there's a different simple
formula.
But let me say what it looks like
already in this limit where I've turned
off the rest of the C's. I can tell you
what it looks like when you turn them
back on as well. But just just for for
uh — I mean, in fact, the formulas
largely don't change — in fact,
let me — I'm going to make statements that
are true even if you turn the C's back
on. Okay, so you can even turn the
internal C's back on, and the
formulas that I'm writing are
going to be true; you have to turn them
back on at order 1 over n squared, okay,
exactly as we said, but that's all
that is needed. So here are at least two
regions in kinematic space where the
simple formulas and then I'll I'll tell
you what the story is if we get there in
general. So so so uh so so here are some
examples of what the answer looks like.
And remember we're supposed to have that
the amplitude goes like e to the n
something.
Um what I'm going to do instead is is
write the amplitude. It's just really
more convenient to write it as it's
equal to a product uh over all J of
something. Okay, which of course does
this uh uh as well. I just don't want to
write the log of this expression. So I'm
just going to directly write the
formula. Okay. Or, if you like, I'm going
to use a notation where A goes to B
means that log A over n equals log B over n
plus order 1 over n to some power. Okay. So
if I say A goes to B at order 1 over n to a power,
what I mean by this notation is that
log A over n is log B over n up to that
accuracy. Okay.
All right. So here are some examples. So
region one this is the first one that'll
be most simple to talk about. Suppose
all the a k are smaller than the bk for
all k.
Okay. Then in this limit the amplitude
goes to precisely that maximally split
formula,
plus corrections that are of order one
over n. So that's the first surprise.
Okay,
one term.
So you turn on the C, the polytope gets way
more complicated. Doesn't matter: the
answer is just exactly uh uh the same uh
thing again.
Okay, a second limit.
The AK are bigger than the BK for all K.
And not only that, this quantity — a k
minus bk, over 2c —
is less than one, but increasing with k,
or non-decreasing. Okay, so I've told you the
regions of kinematic space. Okay, so
this should look funny. Why
would we ever care whether ak is
less than bk, or whether these other things are
true? It's not remotely manifest a lot of
the time why we should care about these
regions. And all of kinematic space is
going to get carved up into regions like
this. I'm just giving you two examples
where we can write down the formulas uh
uh extremely easily. In this case,
the amplitude goes
to the product of all j 4 over a j + bj.
So I hope you see, in a very uh
concrete sense, how simplicity
emerges at large n. At no finite n are
the amplitudes simple. You can work them
out — they look like horrendously
complicated expressions. You go to large
n and they collapse to these one-line
expressions. They're as simple as the
Parke-Taylor formula. Okay. But not at any
finite n. They become simple at large n.
>> And they have this non-physical.
>> Yes. Yeah. So um can I explain something
here? Is that okay?
>> There's a C here that's positive that
should we talk about that one too.
So um uh yeah. So uh so what what Kelly
was saying: if you look at this formula,
you might think this formula
cannot possibly be true, um because uh
the amplitude doesn't even have poles
when the a's and the b's go to zero in
general — as opposed to when the x's go to
zero. Well, a is an x, but b is not an x,
right — b is c minus x. Okay, so how can
this formula possibly be true? It
doesn't even have the right poles.
Well, this formula is only valid in this
region of kinematic space where a is
less than b, and in this region of kinematic
space you cannot reach the poles
where b goes to zero uh while while
keeping uh the x's positive. Okay, so uh
but this also shows how non-diagram-like
these formulas are. They don't care
about the poles. They couldn't give a
crap about where where the poles are.
Okay, this formula is even more dramatic
because none of the terms downstairs are
poles. Okay, these are not poles
anywhere. If you write this in terms of
the underlying physical momenta,
uh this this formula is: the amplitude
goes to the product over j of 2 over
pj dot pj+2.
Okay. So it looks a lot like the Parke-Taylor
formula, which has consecutive uh uh
things downstairs. But here you
precisely have the non-planar poles
downstairs. Okay.
Oh, and I forgot to tell you: in this
case the correction is order 1 over square
root of n. Okay,
which suggests the existence of some
random walk behind the picture, which is
what we're going to see in a moment.
Okay, so the physics of large n is
associated with a picture of a random
walk in Schwinger parameter space. Okay,
so that's uh okay. Anyway, that's what
that that's what the uh formulas look
like. We are in the midst of learning
how to do a systematic 1 over n
expansion around these asymptotic limits.
So we don't have a theory for that yet.
We have sort of hints of how to do
that uh systematic expansion. But what
we do know how to do is you you give me
any picture of what the AKs and BKS look
like. We know how to produce a formula
that looks like this. And it always
looks like this. It's always the product
of some one over AJ with a shift and one
over BJ with a shift. And we're going to
explain where that now comes from. Okay.
Maybe I'll make a final comment. The
Catalan case is this one. Okay — I could
have said it before. The Catalan case,
when you set all the x's to one,
corresponds to a situation where all the
c's on the boundary are one, the corner
c is one, and all the c's in the middle
are zero. Okay, so the Catalan case,
where we're just counting diagrams, is
also this limit. Um and uh
that ends up being, in this case — uh
the aj ends up being one, the bj ends up
being zero, because it's c minus x — and
so this is the four to the n. Okay, so
that's where the Catalan
is, and this is a sort of specific
deformation away from Catalan. All right,
now I think I have uh 10 minutes. So let
me give you an idea of where these
formulas come from. Um
uh and so we're going to begin by
looking at this uh at this picture.
Okay. So, um
to get a first intuition for what's
going on. Uh so I'm going to write A
over A naught — I'll write it again in this
way: the expectation value of e to the minus
S1, in the probability distribution given
by e to the minus S naught. The first thing we're going to do
is is to try to bound this guy. Okay,
this has an obvious upper bound, right?
The upper bound is that S1 is positive,
because it's a max of zero and something,
and the c was positive. So S1 is
manifestly positive. So this expectation value is
less than or equal to one. So this is
less than or equal to A naught. So we have
an upper bound on the amplitude, which
is already kind of cool for absolutely
free, right? You know, so the amplitude
is less than or equal to A naught.
We also have a cheap lower bound on the
amplitude. A cheap lower bound is from
Jensen's inequality. Right? So Jensen's
inequality says that if you have a
convex function f,
uh f of x, it says that the average of f of x is bigger
than f of the average of x. Right?
That's the Jensen inequality. You draw a
convex function and it's this sort of
famous picture, right? So this is the
average of f; this is f of the average.
So if the function is convex, uh this is
all — this is always true. Okay. The
function e to the minus x is convex.
Okay. So from here we learn there's a
lower bound: the amplitude is greater
than or equal to e to the minus the expectation
value of S1, in the probability
distribution e to the minus S naught. Okay,
right
>> sorry
>> uh sorry time z sorry exactly. Okay.
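(A toy numerical illustration of these two bounds, not from the lecture: for the convex function e to the minus x, Jensen gives e^(−E[X]) ≤ E[e^(−X)] ≤ 1 for nonnegative X; the exponential stand-in for S1 here is made up.)

```python
import numpy as np

rng = np.random.default_rng(1)

# Jensen for the convex f(x) = e^{-x}: E[e^{-X}] >= e^{-E[X]}.
# With X = S1 >= 0 this sandwiches the amplitude: e^{-E[S1]} A0 <= A <= A0.
X = rng.exponential(2.0, 500_000)    # toy nonnegative stand-in for S1
lower = float(np.exp(-X.mean()))     # e^{-E[X]}, about e^{-2}
avg = float(np.exp(-X).mean())       # E[e^{-X}], about 1/3 for Exp(mean 2)
print(lower, avg)                    # lower <= avg <= 1
```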
>> All right. Now let's look at that lower
bound. Let's try to understand what e to
the minus s1 is. Well, remember let's
let's write S1.
Yes, there was a question.
So, S1 was uh —
S1 was c times the max
of 0, minus t1,n−1, minus t1,n−1 minus t1,n−2, and
so on.
All right. And now let's think about
what the probability distribution is.
Remember, uh it has a slope that's ak
for positive t1k, and bk in this other direction.
They're in general different. And I've
drawn it now in the limit where ak is
smaller than bk. Okay, just to make
it clear: what does this probability
distribution want to do? It wants t — you
know, t can jump forward or backwards, but
it preferentially wants t to jump to
positive t. Okay. So if it jumps to
positive t of order 1 over a, that's not
suppressed; of course, t of order 10 over a is
exponentially suppressed. But negative t
of order 1 over b is okay — of order 10
over b is not. Anyway, so it it it wants to
preferentially go uh in the positive
direction. Is that clear? That's for
each t individually. But what are the
things that are showing up here? The
things that are showing up here are
precisely what you would sort of think
of if I plot here t1,n−1, then
t1,n−1 plus t1,n−2, and so on, all
the way up to t13.
Then um uh — so, sorry, if I plot what
I see here — uh sorry, let me do
it the other way. So here I'm going to
plot 3, 4, 5, up to n, right?
Um,
uh then what I'm
seeing here is: uh uh uh here I see t,
here I see t plus t. Right? So it's exactly
like the sums of everything before. It's
like I'm doing a walk, right? A random
walk from here.
Uh and what I see in every max is
a sum of all the t's that came
before. Okay.
So if I draw this uh picture, um it looks
like in general I'm doing a biased
random walk. Okay. If a is uh
smaller than b, the bias is in the
positive direction — I want to make the
sum of these t's bigger — and it's
roughly going to go linearly, right?
It's roughly going to go linearly as I
uh go up from here. Okay? If a is
bigger than b for all of them, then it
goes the other way.
If a is equal to b, then it would sort
of like randomly walk, with a sort
of uh fluctuation around it. Okay?
But let's begin from the simplest case
where a k is less than bk for all k. If
I do ak less than bk for all k then
what's going on? Every — like, this term:
t1,n−1 wants to be
positive. Okay. So this wants to be
negative, this wants to be more
negative; uh so each one of these things
wants to be more and more negative.
And so the max is dominated by zero.
Okay,
so this is why it matters whether ak is
bigger than bk, or less than bk, or so on —
because the sign of the drift of this
random walk is going to matter. Okay,
but when ak is smaller than bk for all k,
the max is dominated by zero.
That means that the max is not going to be
of order n; the max will
be of order n to the zero,
order one. Yeah.
And therefore this lower bound —
this thing is e to the minus order one.
It doesn't have any n's in it: e to the
minus order n to the zero. Okay.
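(A quick simulation of this drift argument — parameter values hypothetical: with a < b the max of the walk stays order one no matter how large n gets, while with a > b it grows linearly in n.)

```python
import numpy as np

rng = np.random.default_rng(2)

def mean_max(a: float, b: float, n: int, trials: int = 2000) -> float:
    # Average of max(0, -t_1, -t_1 - t_2, ...) over biased random walks
    # whose steps t_j are two-sided exponentials with rates a (t > 0), b (t < 0).
    pos = rng.random((trials, n)) < (1.0 / a) / (1.0 / a + 1.0 / b)
    t = np.where(pos, rng.exponential(1.0 / a, (trials, n)),
                 -rng.exponential(1.0 / b, (trials, n)))
    walk = -np.cumsum(t, axis=1)
    return float(np.maximum(0.0, walk.max(axis=1)).mean())

for n in (100, 400, 1600):
    print(n, mean_max(0.5, 1.5, n), mean_max(1.5, 0.5, n))
    # first column stays O(1); second grows roughly like (1/b - 1/a) * n
```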
Well, that's fantastic. We have the
amplitude at leading order at large n.
We've bounded it between two numbers: A
naught, and, you know, some
number of order one times A naught. So that
tells us that uh log of A over n
approaches log of A naught over n, plus
corrections that look like 1 over n. So
that's our first
super cheap um uh prediction. Okay, it's
already very non-trivial from the
perspective of uh any other normal
point of view. Um I should
say that these predictions have been checked. How do we check these predictions? This theory is simple enough that we can use normal recursion relations to compute the tree amplitudes. You can do it, if you're not good at computers, in Mathematica up to n of around 200. If you're good at computers, you can do it up to n of 5,000. And we've gone up to around 5,000 because we need to: these 1/sqrt(n)'s are errors that we should expect, and 1/sqrt(100), if the coefficient happens to be three, is not necessarily a small number. You can get confused. So we really needed to go to n of a few thousand. So every formula that I'm telling you has been checked up to n of a few thousand, and they're correct — and the errors are correct, as we would predict from this picture. Anyway, so that's the most trivial example. Maybe in just 5 minutes let me give you an idea of how we do things in general.
Okay, so for example, let's say a_k is bigger than b_k for all k — let me just erase this. Then the drift is going in the opposite direction, right? That means that the expectation value of S_1 is of order n, right? And that means that this bound is absolutely lousy; these bounds are totally useless. The amplitude is less than one and bigger than e to the minus n. So utterly useless: it just tells us it's between zero and one in these units. That doesn't tell us anything apart from the upper bound.
But our reaction to this is that in fact what's wrong is that the upper bound was too stupid. We did too lazy a job with the upper bound. We're going to find a much better upper bound. Okay, so I'm now going to tell you how to find a much better upper bound. And it's a little bit of magic that this optimal upper bound happens to be saturated at large n — that needs some separate arguments, but I'm now going to just give you a new way of getting an upper bound, valid at any n. The remarkable thing is that this formula for the upper bound at any n turns out to be the exact amplitude, the leading amplitude, at large n. A conceptual reason why this happens, we still don't know. It feels like some entropic explanation should be available, right? Somehow the amplitude is filling up some phase space, maximizing some phase space — some words like that. We don't have one-line conceptual explanations like that. We do have relatively simple proofs that have very much this character of looking at these random walks and thinking about what happens with these random walks.
But anyway, let me tell you what the argument is. It's extremely simple, and this is something I would be astonished if it has not come up in many other places in mathematics — I'm sure it has — and it would be interesting to compare to the setting that we're seeing here. Okay. So I want to come up with a better upper bound, and you see my formula was: S_1 was c times the max of a bunch of things — let me just call them A_1, A_2, A_3, etc. One of these happened to be zero for us. And so what we said was like saying max(A_1, A_2, A_3) >= A_1. Why did we choose that one? There's no reason to choose that one. I could put here A_2, or A_3, or any other one for that matter. I could put any weighted average of them, because the max is bigger than any weighted average of them. So that's what we're going to do. We're going to say that S_1 is greater than or equal to w_1 A_1 + w_2 A_2 + ... + w_n A_n, where these are positive weights that add up to one.
Okay.
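As a quick numerical check of the inequality just used — the max dominates any weighted average — here is a small, hypothetical Python sketch:

```python
import random

def max_vs_weighted_average(values, weights):
    """Return (max of values, weighted average) for positive weights
    that sum to one."""
    assert abs(sum(weights) - 1.0) < 1e-9
    return max(values), sum(w * a for w, a in zip(weights, values))

random.seed(0)
for _ in range(1000):
    vals = [random.uniform(-5.0, 5.0) for _ in range(4)]
    raw = [random.random() + 1e-9 for _ in range(4)]
    ws = [r / sum(raw) for r in raw]  # random point on the weight simplex
    m, avg = max_vs_weighted_average(vals, ws)
    assert m >= avg  # the max is bigger than any weighted average
```

Putting all the weight on a single entry recovers the lazy bound max >= A_1 used earlier; the weights are the extra freedom being exploited here.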
And so that means that the amplitude, for any choice of weights, is still upper bounded by what I get if I did that. But you see the nice thing that happens: the moment you replace S_1 with this weighted average, the maxes are gone and all the variables are decoupled again. Okay, so I can do the integral.
Okay. So what do I get? Well, it has an extremely simple interpretation. Let's just see what it is here. Where did my marker go? Oh, I left it in here — that's great. Sorry. It's really yellow in there now. So let me erase this, leave that to dry, and write down here. Okay.
So the amplitude is going to be less than or equal to, again, this integral of the dt's of e^{-S}, but with e^{-S_1} replaced by

e^{ -c [ w_{n-1} (-t_{1,n-1}) + w_{n-2} (-t_{1,n-1} - t_{1,n-2}) + ... + w_3 (-t_{1,n-1} - ... - t_{13}) ] },

where I label the weights by the smallest t that occurs in each term. And note the minus signs in front of all these t's: the entries of the max have the negatives in them, so the w_{n-1} term has -t_{1,n-1}, the next has the negatives of both of those, and the w_3 term has the negatives of all of them. Okay.
And so here I just replaced S_1 by this weighted average. But now, very nicely, all the t's are decoupled again, so every t integral has some effective new a_j and b_j. So this is equal to the product over all j of the integral over dt_{1j}: the product over j of p(a-hat_j, b-hat_j). What are these a-hat_j's and b-hat_j's? Take t_{13}, for example: the only place anything involves t_{13} is this weight w_3. So a-hat_3 is a_3 minus w_3 times c — there are minus signs here, so this piece gives e^{+ c w_3 t_{13}}, but there's an e^{- a_3 t_{13}} — so a-hat_3 = a_3 - w_3 c, and b-hat_3 = b_3 + w_3 c. But then a-hat_4 has two w's: it's w_3 + w_4 that touches t_{14}. So a-hat_4 = a_4 - c(w_3 + w_4), and b-hat_4 = b_4 + c(w_3 + w_4). And in general we can say that a-hat_k = a_k - c sigma_k, where sigma_k = w_3 + w_4 + ... + w_k, and b-hat_k = b_k + c sigma_k.
Okay. And therefore what we're left with is this new upper bound on the amplitude, which depends on these weights: the amplitude is less than or equal to — since the variables are all decoupled again — just the product over all k of (1/a-hat_k + 1/b-hat_k). I'll write it in terms of the original variables: the product over k of 1/(a_k - c sigma_k) + 1/(b_k + c sigma_k). Okay. So for any choice of w's — any choice of weights — this is a true statement.
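As a concrete sketch (hypothetical Python; the names are mine), here is the weighted upper bound, checked against the sigma-star formula that comes up just below:

```python
def upper_bound(a, b, c, sigma):
    """Product over k of 1/(a_k - c*sigma_k) + 1/(b_k + c*sigma_k).

    Only valid while every a_k - c*sigma_k stays positive."""
    prod = 1.0
    for ak, bk, sk in zip(a, b, sigma):
        assert ak - c * sk > 0, "need a_k - c*sigma_k > 0"
        prod *= 1.0 / (ak - c * sk) + 1.0 / (bk + c * sk)
    return prod

# Example kinematics with a_k > b_k, chosen so that the unconstrained
# optimum sigma-star = (a_k - b_k)/(2c) is increasing and inside [0, 1].
a, b, c = [2.0, 2.4], [1.0, 1.0], 1.0
sigma_star = [(ak - bk) / (2 * c) for ak, bk in zip(a, b)]  # [0.5, 0.7]
bound_at_star = upper_bound(a, b, c, sigma_star)
# At sigma-star each factor collapses to 4/(a_k + b_k), and the bound
# at sigma = 0 (the lazy choice) is strictly worse.
```

Here `upper_bound` is just a transcription of the product formula above; the two checks in the comment are exactly the statements made in the lecture about the optimal and the lazy choice of weights.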
Now, instead of working with the weights, we can nicely work directly with the sigmas. The weights being positive means that the sigmas are positive and increasing, right? Because the sigma_k are sums of consecutive weights. So I don't have to talk about the w's; I can just directly talk about the sigmas and say that 0 <= sigma_3 <= sigma_4 <= ... <= sigma_{n-1} <= 1. So I can give sigmas, or I can give weights — any choice of sigmas in this region. And so now comes the point: there's a best possible upper bound that I can find, right? The best possible upper bound is to just minimize this function over the space of all sigmas that satisfy this property. Okay.
Okay. So let's see what that means. Note that a-hat_k + b-hat_k does not depend on the sigmas, right? The sigmas cancel between these two things. So if I just looked at this and asked where the global minimum of this function is, the global minimum occurs at some value — call it sigma_k-star — which is equal to (a_k - b_k)/(2c). That's where the global minimum occurs. So if you can attain that global minimum, that's the best upper bound you can find. But either you attain the global minimum, or, if you can't, you have to find the minimum somewhere on the boundary of this space. Okay.
Before we go back to our previous problem — sorry, what is this doing? This is just a famous fact: you have 1/a + 1/b where the sum a + b is constant, so you just want to tune sigma to make the two equal, if you can. Okay.
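Written out, the one-variable minimization behind this remark (with a, b the entries of a single factor):

```latex
\frac{d}{d\sigma}\left[\frac{1}{a-c\sigma}+\frac{1}{b+c\sigma}\right]
=\frac{c}{(a-c\sigma)^{2}}-\frac{c}{(b+c\sigma)^{2}}=0
\;\Longrightarrow\; a-c\sigma=b+c\sigma
\;\Longrightarrow\; \sigma^{*}=\frac{a-b}{2c},
\qquad
\left.\frac{1}{a-c\sigma}+\frac{1}{b+c\sigma}\right|_{\sigma=\sigma^{*}}
=\frac{2}{(a+b)/2}=\frac{4}{a+b}.
```

So the unconstrained minimum of each factor is 4/(a + b), exactly the per-factor value quoted a moment later.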
So let's say we go back to our previous example. If a_k is less than b_k, then this sigma would want to be negative, which we're definitely not allowed to do. So in the case where a_k is less than b_k, the best you can do is to do nothing, because anything you do with these sigmas will hurt. So the optimal upper bound is to put all the sigmas equal to zero — that's our previous formula. But you can ask: can we sometimes get the dead-optimal case? Yes, you can. If this is realized, by the way, the absolute optimum A-star would equal the product over all k — if you shove this in, you get precisely the formula that I had before — of 4/(a_k + b_k).
Okay,
But when can that be realized? When these sigma-stars live in this space. So that's what I told you before, right? It's when (a_k - b_k)/(2c) is bigger than zero and increasing, which exactly means that it lives in this space. Okay. So these are the two extremes where we can easily figure out what the best upper bound is. And so we get this best upper bound, and the claim is that the amplitude is saturated by that best upper bound. In fact, the way to prove it is to really return to this picture.
Instead of saying that I'm finding an upper bound, just say that the action is S plus S_1, but rewrite S_1 as S_1 minus this approximation to it — the weighted sum of the corresponding sums of negative t's — plus the same thing back. I'm going to call the difference S_1-hat, and I'm going to group the added-back piece — the piece where everything is decoupled — with S_0, and call that S_0-hat. These will depend on the choice of weights. And now I have a new interpretation where the amplitude is equal to A_0-hat multiplied by the expectation value of e^{-S_1-hat} in the probability distribution e^{-S_0-hat}. Okay. So for any choice of weights we have this probabilistic interpretation, and what you can then show is that when you choose the optimal weights, this e^{-S_1-hat} is always of order one. That needs some more argument — thinking about drifts and random walks — but you can see why there's a correlation between optimizing this upper bound and getting this expectation value to be order one. So it's not an accident that the two things happen at the same time. Okay.
Okay. So that's, I think, all I will say about this — I've gone over time — but just to say something quickly: this problem of optimizing this product in this little region is a very, very pretty little high-school problem. Just to give you an example, let me plot (a_k - b_k)/(2c) — this is the sigma-star, an important quantity — as a function of k. Here would be k = 3, but I'm going to draw it in a continuum from zero to n; zero is clearly special, one is clearly special here. Okay, so if the a_k's were less than the b_k's, you're down here somewhere, and the optimum is sigma equal to zero, right? That's what we said.
If a_k - b_k is positive, but c is still so small that this quantity is up here somewhere, then you're in the opposite limit. You want to make the a-hat and b-hat equal, but it's so far to get there that you just take a little step in that direction and sigma is already maxed out at one. So what you can do is just keep sigma at one. Okay.
But if, on the other hand, (a_k - b_k)/(2c) — if the kinematics — looks something like this, increasing in some way, then you get what you can call a tracking solution: the pink curves here are what the optimal sigma looks like; the sigma_k's that optimize the bound just track it. But in general, whatever the picture for the sigmas is, it has to be some increasing curve bounded between zero and one. And, as we learned in Europe in junior high school probably, and in the US in college, when you're minimizing a function on some compact region, either the minimum is in the interior or it's on the boundary somewhere. Okay. And what does it mean to be on the boundary here? If you're drawing a picture of the sigmas, it means the sigmas are just going to be constant for a while — a bunch of sigmas can be equal. So they can be constant, or they can be increasing, but they can only be increasing if this curve of (a_k - b_k)/(2c) is increasing. So for instance,
if your picture of (a_k - b_k)/(2c) looks like this — let's say it's decreasing — then the best you can do is just keep sigma at a constant somewhere. That's what the boundary looks like. Sigma can't go up anywhere, because that would correspond to being in the interior, and it can't be: it's got to be on the boundary, it's got to stay constant. So the best you can do is just choose some sigma-star, and then just, you know, work out that product as a function of sigma-star until you find the optimum, and that gives you the formula for the amplitude. So it's not like there's an analytic formula you can write down ahead of time — you have to solve this little optimization problem. But this optimization problem has nothing to do with n: you just draw me the smooth curve, I take this product, and I minimize it. And so if you have a more interesting situation — let's say you have a curve that looks like, I don't know, this —
that's what a_k - b_k looks like. Well, down here the best you can do, as we saw before, is to keep sigma at zero. Then a_k - b_k is increasing between zero and one, so you can track for a while — but you can't track forever. So at some point you've got to stop at some sigma-star; you've got to stop and say, no, now I'm going to go flat. I can go flat until the next time the thing is increasing; then I can try to increase again; and then here it's going to flatten out again. Okay. So this is what your ansatz for the sigmas has to look like, and then you optimize again. So again, there's not an analytic formula, but the cool thing is that it has nothing to do with n. The complexity has to do with the number of humps in the picture for a_k - b_k: the more humps there are in the picture, the more parameters there are in your optimization problem. But, for example, in this case there's one parameter. So you put it on the computer, you plot it, you find sigma-star, you shove it back into the formula, and that's the amplitude at large n. If it looks like this, there's really still only one sigma-star, so you just vary the point at which this escape happens and you minimize this product.
So that's a sense in which there isn't an analytic formula at large n, but kinematic space is broken up into regions that correspond to qualitatively different ways that this optimization problem can be solved.
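For the simplest "flat" case — a decreasing sigma-star curve, where the optimum is a single constant — the optimization is easy to sketch numerically. A hypothetical Python sketch (a brute-force grid search, not the lecture's method; the function names are mine):

```python
def best_constant_sigma(a, b, c, grid=10001):
    """Brute-force search over a single constant sigma in [0, 1]: the 'flat'
    ansatz relevant when (a_k - b_k)/(2c) is a decreasing curve.

    Minimizes the product over k of 1/(a_k - c*s) + 1/(b_k + c*s)."""
    best_s, best_val = 0.0, float("inf")
    for i in range(grid):
        s = i / (grid - 1)
        val, ok = 1.0, True
        for ak, bk in zip(a, b):
            if ak - c * s <= 0:      # bound no longer valid past this s
                ok = False
                break
            val *= 1.0 / (ak - c * s) + 1.0 / (bk + c * s)
        if ok and val < best_val:
            best_s, best_val = s, val
    return best_s, best_val

# Decreasing sigma-star curve: the unconstrained per-factor optima are
# 0.8 and 0.6, so the best constant sigma lands somewhere in between.
s, v = best_constant_sigma([2.6, 2.2], [1.0, 1.0], 1.0)
```

As stressed above, nothing here depends on n: the cost is set by the shape of the curve, not by how many kinematic variables there are.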
>> Is it possible to ask a last question?
>> Yes, yeah — that's about all I wanted to say anyway. Yes. What is your question?
>> Okay. So, how universal are these gridlike structures? How universal is this formalism — these string-theory-derived gridlike structures? In other words, how does all of this translate to a more complicated combinatorial object derived from physics? I mean,
>> Well, I think the answer is going to be a little similar to answers I gave to earlier questions. The whole model to begin with is a toy model. Naively, it's surprising that the toy model ends up being connected to very physical models in a very simple but non-trivial way. One thing I will say is that there is something very universal about what we're talking about. All of these X's are singularities — all these X's are poles of amplitudes — and those poles are there in every theory of colored particles. So that part, the singular part, is totally universal: it's there for any theory that has color. So gluons, pions, all kinds of theories that have color will have those poles. That was a big motivation for studying these things a long time ago: they were super-duper simple, but they had some universality in their pole structure. That thought would make you think that to get to realistic theories, you have to do a lot of work to get the intricate structure of the interactions that shows up in the numerators. And the big surprise is that all of that intricate structure of the numerators turns into exactly the same formula, just shifting the variables in some ways. Okay.
>> Yes. So, just to make this point: I didn't say what the specific shift was to get to pions from this scalar theory, but there is a simple shift to get pions, and it gives rise to formulas that are virtually identical to these formulas for the pion amplitude. Okay, so that —
>> Can I say just —
>> Yes.
>> — because I have to be quick, because if I go on too long I'm going to get expelled from this class.
>> That sounds very dangerous. Yes. Right.
>> Um, so, I sneak a lot into places, and sometimes I sneak into this AI website at 3:00 a.m. just to have fun and see what people are doing. So like seven or eight months ago there was this colloquium about combinatorial objects — I don't recall the exact title of it. It was very recent, and I remember that I was just messing around on Quora or Reddit and they had —
>> Can we pause this question? I'm sorry, because I had a few points and we're already over time for my talk. But there was one more physics point I wanted to make, and I want to give people an opportunity to ask questions about the physics. You're not asking questions about the physics that I'm talking about. We can talk about it later, but you're not talking about this physics. So can we just put this aside for a moment, please, and talk about it later? Thank you. Yes, go ahead.
>> Yeah, just a quick physics question. Do you see any relation of this kind to what people in cosmology have studied — these tails of a PDF,
>> Um —
>> especially also from black hole formation?
>> Yes. Yes.
>> Does anything ring a bell there?
>> I don't know enough, not technically, about what they're doing to know if it rings a bell. Let's talk about it. Yeah.
I'll just say, very quickly, that adding the c's back in in this picture is, I hope, kind of clear. For example, if we did the a_k-less-than-b_k problem, it's clear all the other c terms still involve maxes of zero and negative t's. So they're all going to be near zero, right? They're all irrelevant, so it doesn't matter if you add them back in or not — so long as they're of order 1/n^2. There are order n^2 of them, so that doesn't give you something of order n; it gives you something of order one. So it does matter that the c's are of order 1/n^2: if they were of order 1/n it wouldn't work, but if the c's are of order 1/n^2 it does work. Okay.
And, since Berg asked: the version of this idea that you use for the scattering equations, for string amplitudes, is the non-tropical version of the formula. The statement that the max of A_1, A_2, A_3 is bigger than a weighted average becomes the weighted arithmetic-geometric mean inequality. So we take all of these formulas, and now we have polynomials (a_1 + a_2 + a_3) to some power — and this is what makes it complicated — but we use the fact that

a_1 + a_2 + a_3 >= (a_1/w_1)^{w_1} (a_2/w_2)^{w_2} (a_3/w_3)^{w_3}.

This is the pre-tropical version of the statement, and once again it decouples all the variables, so you can do all the integrals, and it gives you something new to optimize over the weights. What's fascinating is that the optimization over the weights gives another set of critical-point formulas, but they're not the same as the scattering equations. However, they have exactly the same solution, and exactly the same value of the action at the critical point. Okay, so
that seems very interesting, and this new form makes it much easier to analyze the large-n limit — that's the point. The scattering equations themselves look very complicated, but these are much simpler; the large-n limit lets you isolate the solution which is relevant, and that seems like something worth understanding. All right, thank you very much. I'll stop.
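The weighted arithmetic-geometric mean step mentioned in that last answer can be checked numerically. A hypothetical Python sketch (names are mine):

```python
import random

def weighted_am_gm(values, weights):
    """Return (sum of values, product of (a_i/w_i)^{w_i}) for positive
    values and positive weights summing to one; the sum dominates the
    product, with equality when the weights are proportional to the values."""
    assert abs(sum(weights) - 1.0) < 1e-9
    prod = 1.0
    for a, w in zip(values, weights):
        prod *= (a / w) ** w
    return sum(values), prod

random.seed(2)
for _ in range(1000):
    vals = [random.uniform(0.1, 10.0) for _ in range(3)]
    raw = [random.random() + 1e-9 for _ in range(3)]
    ws = [r / sum(raw) for r in raw]
    s, g = weighted_am_gm(vals, ws)
    # a1 + a2 + a3 >= (a1/w1)^w1 * (a2/w2)^w2 * (a3/w3)^w3
    assert s >= g - 1e-9
```

This is the "pre-tropical" analogue of max >= weighted average: taking logs and scaling turns the product bound back into the weighted-average bound on the max.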
This lecture delves into advanced concepts connecting combinatorics, geometry, and physics, specifically focusing on the 'u variables' derived from words representing curves on surfaces. The initial part revisits the two key observations from the previous session: how to derive cone rays from surface data and how tropicalizations of polynomials lead to piecewise linear functions that partition space into cones. The core of the lecture then shifts to explaining the origin of the 'u variables,' which are crucial for string theory amplitudes and Schwinger parameterization. A novel 'counting problem' motivation is presented, where choosing elements from a word (with specific baggage rules) leads to generating functions and, subsequently, 2x2 matrices associated with turns (left/right) in the word. The product of these matrices for a given word yields a matrix whose off-diagonal to diagonal ratio defines the 'u variable' for that curve. The discussion extends to open curves with boundaries and how their associated matrices can be decomposed to track the inclusion of boundary elements. A significant portion of the lecture addresses the potential for infinities in calculations involving complex surfaces and how these infinities, particularly from infinitely many curves, are not problematic but a 'blessing in disguise.' This is because they provide a consistent way to label loop momenta for any diagram via homology of curves on surfaces, even though it leads to infinitely many diagrams. The talk also touches upon the historical discovery of these variables in string theory and their relation to cluster variables. A substantial segment explores the large-N limit in scattering amplitudes, where the complexity of calculations dramatically simplifies. The concept of a 'mesh' representing kinematics is introduced, and the behavior of amplitudes as N approaches infinity is analyzed. 
The lecture highlights how simple limits, like when all kinematic variables are equal or when specific variables are turned off, lead to surprisingly simple, yet fundamental, amplitude formulas. Finally, it discusses the connection between these amplitudes and random walks, providing a probabilistic interpretation and a method for obtaining optimal upper bounds on amplitudes, which in turn reveal the leading behavior at large N. The lecture concludes by emphasizing the universality of the singular structures of amplitudes, which appear in any theory with color, and how the intricate numerator structures in realistic theories simplify to shifts in variables in this framework.