Alex Imas on Why Economists Might Be Getting AI Wrong
How confident are we that productivity gains from
AI actually accrue to workers who can then spend some money on whatever
product or service is scarce at the moment, or important to them?
I would say not that confident.
There are several scenarios out there, and the thing that
I feel like a lot of economists, and just people in general,
aren't talking enough about is speed.
If things move fast,
we need public policy.
The new jobs aren't going to come fast enough.
Training isn't going to happen fast enough. You know,
things are going to get
fully automated very quickly and people are going to become unemployed.
There's not going to be enough time in the economy to see that pretty
little graph of agriculture shrinking and services increasing.
That took a long time. Right.
This is decades.
If we're on the order of like years or like 5 or 6
years, we're not going to have time to see that pretty little graph.
We are going to need to think about how do we support the people
who are becoming unemployed.
And you know, many very smart people have made suggestions on how to do that.
My personal — I wouldn't say favorite — but the thing that makes
the most sense to me is somehow expanding the ownership of capital.
If labor is replaced by capital, then what's going to help people who formerly
supplied labor is something like a universal basic ETF.
Hello and welcome to another episode of the Odd Lots podcast, I’m Joe Weisenthal.
And I’m Tracy Alloway.
Tracy, it may have changed a little bit in recent weeks or months, but I think by and large,
if you talk to economists about the long term impact
of AI, particularly on jobs,
by and large it seems like they point to history.
And they say there have been many technologies in the past that people
thought were going to be very disruptive and destroy all kinds of jobs.
And in many cases they did.
But technologies create new jobs.
We can't necessarily anticipate
beforehand what they're going to be.
And AI is like kind of no different.
Ultimately, yes.
But then to your point,
you ask, like, well, what specific jobs do you have in mind?
And I get that, you know, it's hard to tell.
They're only visible after the fact. Right.
But it's so frustrating.
Right. Because here's this big new technology.
It's supposed to be a productivity boost.
And yet no one is actually sure
what new jobs it's going to create from that productivity.
This — I love him to death, but Adam Ozimek wrote a piece
several weeks ago, and he was like, well, you know,
the player piano disrupted the existence of piano players,
but hotels still pay money for an actual human piano player
in the lobby rather than a player piano.
Which is true, but like not many people have jobs that are equivalent.
And I think, like, if I want to get an insurance form reimbursed or whatever,
I don't care about the human touch per se.
I'd be very happy
to have the equivalent of the player piano there.
There's something very dissatisfying about the idea
that we're all just going to become, like, performative in a way.
But I actually think that's kind of where
we might be heading, where the sort of social skills
I've mentioned before — the looksmaxxing, the personal branding,
the multitasking, I guess — become more important.
So the future is performative humanity.
OpenAI just spent a ton of money on TBPN.
I really love those guys. They're both very good looking guys.
And so I sort of feel like, okay, this is
the biggest AI company in the world sort of making a bet
on, like, buying two very nice and charismatic humans.
Yeah, yeah, yeah. So maybe that is the future.
Just being nice and charismatic.
Anyway, we need to talk more seriously about this, because I don't
I don't know, I kind of feel maybe this is not just going to be like
the steam engine.
It might be very, very different.
Maybe we won't have jobs.
Maybe there will be new jobs.
Anyway, someone who's been talking and thinking a lot about this, and why
AI might be different — we're going to be speaking with really the perfect guest.
Alex Imas is a professor of economics and applied AI at the
University of Chicago and does a lot of writing on this topic.
So Alex, thank you so much for coming on Odd Lots.
Thank you for having me.
This is pretty cool.
How do you like the job of professor of economics and applied AI?
Yeah, it's worked out pretty well.
It's a good time. You picked a good field.
Yeah, yeah. I mean, I've been an economist for much longer
than I've been a professor of applied AI.
I have been studying human behavior and human decision making for about 12 years now.
Okay. More than a decade.
And when ChatGPT first came out, I was kind of taken aback.
And this
was a few years ago now, and I was thinking
after about a week of using it, I was like,
this is going to be huge for the economy.
And so I started talking to people — there were several people
who kind of knew that it was coming and knew what impact it
was going to have.
So I started talking to those people,
and I kind of quickly started retooling.
I started — I trained my own model,
you know, I got into it, and, you know,
I've been trying to play catch up ever since.
What did you see in ChatGPT specifically?
Because you would have been very early. At that time, a lot of people
were basically using ChatGPT as a sort of enhanced search
engine — a tool to write poems, tell silly jokes, whatever.
But you saw something that was serious for the labor market.
Yeah.
I mean, once you started using it, you saw that
it was able to basically
— not so well in the very, very beginning.
But even after a few months and like within a year,
you saw that it was able to kind of do basic cognitive tasks to a decent degree.
Like it wasn't like we are going to replace that person.
But it was doing pretty sophisticated things.
That, and the jump from where we were thinking about
AI as these very, very targeted things — like, I will play the game of
Go or something like that — to something where, whoa, it can write an essay,
it can tell me about this accounting property, it can make a forecast.
All of a sudden the generality of the technologies just exploded.
And to me that was a huge deal.
You know, the generality of it.
I mean, I guess literally that's the G.
Right. Yeah.
But yeah. No, I mean, absolutely.
I have to say — this is an aside — but learning a little bit more about
where AI was pre-LLMs almost makes me even more impressed.
Like, the leap — I don't know if this is a common reaction — but when you
look at some of what was cutting edge in 2019.
Yeah.
And then you look at what's cutting edge in late 2022,
I'm almost more impressed than if like I hadn't known what they were up to in 2019.
Like it's a huge gap in those few years.
It's a huge gap.
But at the same time, there was
kind of a path towards AI —
the way that AI was being worked on
for a long time — which was these very specific, purpose-built technologies.
And I think Geoffrey Hinton and other people were kind of working on their own
for a long time, in the wilderness, thinking,
maybe we can do something much more general than that.
Maybe we can
kind of come back to this idea of AGI versus these very specific tools.
So the whole term AGI — the general part of it —
the reason that term came out was in response to these
very specific technologies that were being developed, which by design were not general.
So somebody — Shane Legg was one of the people who, I think,
coined the term — was saying, look,
let's
think about the general part of intelligence,
and let's try to build a technology that is as general as the human mind.
Let's go back to that.
So, like, if someone makes a model that can tell the difference
between written and spoken word — that's mind blowing.
It's an incredible breakthrough.
But that's not a general technology. That's a specific skill.
What time did you have in our betting book for Joe to refer to his vibe
coding? I had two minutes, 13 seconds.
Didn't I wait a lot longer?
No, I mean, you made it a little bit longer. Great.
We'd been talking for so long.
No, sorry.
No. Fair enough. It's a fair point.
I mean, to me, like, the moment when things seemed to get very serious
was the release of Claude Code.
And at that point you went from like, okay,
the model could not just tell you things, but it could actually do things for you.
Was that the vibe shift that you anticipated or experienced as well?
I mean, many people were talking about this —
that this vibe shift was going to happen. People were telegraphing it
for months and months.
Look, when agents start taking off, things are going to change
as far as how people how people perceive this technology.
Because the thing about agents, versus just the web-based chatbots, is
they can do stuff on your computer. You could tell it,
like, look, make me a spreadsheet.
It will go and make you a spreadsheet using the tools
that are available on your computer — not just say, okay, here
is how you would make a spreadsheet, but you have to do it yourself, right?
And that's a paradigm shift as far as the economics of the technology.
So let me set up a sort of —
maybe it's a straw man —
the sort of straw man
that maybe we're going to knock down in this conversation.
But how would you describe the sort of modal view
of the impact of AI on the labor market among the economics profession,
to the extent there is one?
So I definitely think there is one.
There's a very nice survey done
by a whole team of people —
Kevin Bryan was one of them and Basil Halperin was another.
And they released a survey where they
asked for forecasts from economists and AI technologists.
Now, this is a self-selected group of economists.
These are economists who are working on AI.
Okay. So it's not the whole field.
But one of the things that you got from that survey was
they're very much aligned.
Okay. Right.
So economists,
at least the ones who are actually working and thinking about that technology,
they think there will be a big impact as far as capabilities.
And there will be some impact on the labor market, not astronomical.
And we're talking about like 2030, okay. Things like that.
There's going to be substantial capability increases,
but the growth is going to be pretty moderate.
It's like an extra 2 to 3%.
And the really interesting thing for me from that survey
was that the technologists were kind of a bit more optimistic than that,
as far as both the productivity growth, and
some were kind of thinking that there will be much more unemployment.
But for the most part, the two groups kind of agreed.
I was personally surprised by that.
So — and this came out, I think, last week or two weeks ago —
I thought that there was going to be a lot more daylight between the two groups.
Well, the other thing that you tend to see is people release these charts of, like,
which job is most exposed to AI, and it's usually like,
you know, a knowledge worker at the top or something like that.
Your work is really interesting to us because you point out that a job
is like much more than just the sector that you're actually working in.
Tell us more about that.
So the exposure measures came from this literature,
but mainly this one paper by Daniel Rock and Pamela
Mishkin and coauthors that was published in Science, called —
one of the greatest titles — "GPTs are GPTs."
You know what GPT is,
but the second GPT stands for general purpose technology.
Okay.
And there they basically started
mapping jobs to their exposure to AI.
But it's really important to understand what that number means.
Yeah, that number means that AI could do 50% of a task.
Right.
And how many tasks are in the job that AI can do 50% or more of?
So there's a couple things in that statement.
The first: 50% is not 100%.
That's obvious, right?
So you still need a human in the loop
if AI can do 50%. But two, it's the fact
that a human job is a bunch of different tasks, right?
So this is not a new point. David
Autor has, you know, worked from the early 2000s
with coauthors on this — this is the task-based model of jobs.
Daron Acemoglu, similarly, has the canonical model on this,
and the idea is that when we look at a job
and we say, look, your job is exposed — let's say it's
50% exposed —
It really, really matters
what tasks in your job are exposed and how these tasks relate to one another.
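As a back-of-the-envelope illustration of the exposure measure being described — not the paper's actual methodology, and with a job, task names, and scores that are entirely invented — you can treat a job as a set of tasks, score each by the share of it AI could do, and count how many clear the 50% bar:

```python
# Hypothetical sketch of a task-based exposure score: a job's "exposure" is
# the share of its tasks where AI can do at least 50% of the work.
# The job, task names, and scores below are invented for illustration.

def exposure(task_ai_shares, threshold=0.5):
    """Fraction of tasks where AI can perform at least `threshold` of the work."""
    exposed = [s for s in task_ai_shares.values() if s >= threshold]
    return len(exposed) / len(task_ai_shares)

factory_job = {
    "pull_lever": 0.9,         # rote and easily automated
    "inspect_floor": 0.3,      # requires on-site judgment
    "report_to_managers": 0.6,
}

print(round(exposure(factory_job), 2))  # 2 of 3 tasks clear the bar -> 0.67
```

Note that even a "67% exposed" job says nothing yet about wages; as the discussion goes on to argue, that depends on whether the remaining tasks are the worker's comparative advantage.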
So let's say
I have a job, and I have a whole bunch of, like,
completely meaningless garbage that I'm doing, but I have a comparative
advantage —
and what I'm really getting paid for is like 20, 30% of the job.
If AI is automating the kind of meaningless,
rote things at my job, I can take all of that time and I can focus
on the parts of the job that are my comparative advantage.
What does that mean?
It means I'm going to become more productive, and I'm going to get paid more,
even though my job is really "exposed."
Now, what does that mean for the labor market?
Now you have to think, okay, so a person is going to get —
so just to be clear, before we go any further,
if if I'm working on a factory floor and one of my tasks
is to pull a lever like that is something that could presumably be automated.
But if the other part of my work is to observe, like how
things are actually working on the floor and to report back to managers,
that might be something that's still valuable under our sort of AI future.
And if the lever part gets automated, the theory is that, not only, you know —
well, Tracy would be more productive and should get paid more.
Yeah, exactly. Okay. Because of the increased productivity.
Yeah, right.
This is the O-ring model of jobs.
Avi Goldfarb and Joshua Gans have this really nice paper.
Can I just ask you a quick question here, too?
Like, how good are we —
and by "we" I guess I mean the economists who study this — at, like,
actually being able to take a job that someone has and
write down a list of its tasks?
How good are we at describing the job? Actually, pretty good,
I would say. On that dimension we're pretty okay.
O*NET is the database that has very, very detailed records.
Like, here's a job
and here's like a whole vector of things that are involved in that job.
Okay.
So I'd say on that part — just listing the tasks — pretty good.
Okay.
The thing that I think we're less good on is how those tasks relate to one another.
This is the term called complementarity.
Yeah. Talk about that.
So the weak-links model is essentially saying, look,
if tasks are completely separable — let's say,
you know, I pull a lever at my factory
and I talk to people on the factory floor, and these are completely independent —
if I fail to pull the lever correctly, the other part of my job is unaffected.
There are other parts of jobs, like cooking, where that's not true.
For example, let's say
I'm really good at 90% of the job, but like, I really screw up the seasoning.
Right. That meal tastes like garbage.
Garbage. Right? So you haven't succeeded in your task.
You haven't succeeded on that.
So when the tasks are interrelated,
screwing up on 1 or 2 tasks means you did not complete your job.
And it's basically almost a zero-one sort of relationship.
So the extent of that complementarity — of how these tasks are related —
will determine the extent to which automation
is going to affect the labor market, and we don't have good numbers on that.
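A minimal sketch of why this matters, assuming two toy aggregation rules (both invented for illustration, not taken from the models named above): if tasks are separable, job output is roughly the average of task performance; in the weak-links case it is the product, so botching one task sinks the whole job:

```python
# Two stylized ways tasks can combine into job output. Separable: output is the
# average of task performance. Weak-links / O-ring: output is the product, so a
# single botched task ruins the result. All numbers are invented.

def separable_output(perf):
    return sum(perf) / len(perf)

def weak_links_output(perf):
    out = 1.0
    for p in perf:
        out *= p
    return out

# Great at nine tasks, but the seasoning (one task) is badly botched:
perf = [0.95] * 9 + [0.1]

print(round(separable_output(perf), 3))   # 0.865: job still mostly fine
print(round(weak_links_output(perf), 3))  # 0.063: the meal "tastes like garbage"
```

Under the separable rule, automating one task away barely matters; under the weak-links rule, whoever (human or AI) handles the remaining task controls the value of the whole job.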
So this is really interesting.
We're good at writing down the list of the tasks.
We are not good at writing down the sort of like
deep relational links to the task and how they fit together.
Exactly, exactly. So that's something we need data on.
The other part that we really need much more data on —
and I recently was quoted as saying we need almost, like, a, you know,
Manhattan Project-level effort on this — is a
term from economics called the elasticity of consumer demand.
And that basically means
how much will people buy more of something when the price changes.
Right.
So let's say, person becomes a lot more productive.
Right.
And, for the same sort of resources, they can make a lot more of the product,
and their wage rises.
What does that mean for the labor market?
If they become more productive
given the same kind of inputs, their wage rises, but the firm is also
probably going to be paying less money to produce the same output.
If it's a competitive industry, the prices are going to go down.
If the consumers don't respond by buying a lot more of the product,
the firm is going to fire a bunch of people
because they can do more with less.
But if when prices come down, people buy way more of the product,
then they might hire more of the same people.
And in many sectors we've seen kind of the second thing play out.
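The elasticity argument above can be checked with a toy constant-elasticity demand curve, Q = A·p^(−ε). Everything here — the functional form and every number — is an illustrative assumption, not an estimate: productivity doubles, competition halves the price, and the sign of the employment effect turns entirely on ε:

```python
# Stylized version of the elasticity-of-demand argument. If a worker's
# productivity doubles and competition passes the savings into price, does the
# firm need more or fewer workers? Demand is a toy constant-elasticity curve
# Q = A * p**(-eps); all parameters are invented for illustration.

def labor_needed(productivity, eps, A=100.0, unit_cost=10.0):
    price = unit_cost / productivity   # competitive market: price tracks unit cost
    quantity = A * price ** (-eps)     # consumers respond to the lower price
    return quantity / productivity     # workers required to meet that demand

for eps in (0.5, 1.0, 2.0):
    ratio = labor_needed(2.0, eps) / labor_needed(1.0, eps)
    print(f"elasticity {eps}: labor demand changes by x{ratio:.2f}")
# elastic demand (eps > 1) -> more hiring; inelastic (eps < 1) -> downsizing
```

In this toy setup the labor-demand ratio works out to 2^(ε−1), which is exactly the two scenarios described: hiring rises when demand is elastic and falls when it is not.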
What's an example?
So people are arguing that software is actually one of those sectors.
So there's been a bunch of talk,
kind of looking historically at what
productivity gains mean for the technology sector.
It usually means a lot more consumer demand.
And so there's this really active
debate now about what coding agents are actually going to do
to software engineers. And some people are arguing,
look, we have seen historically pretty elastic
demand.
And so we're going to potentially see a lot more hiring in that sector.
And many people are saying this, but other people are saying, wait,
maybe it's not as elastic as we think, and people are going to become
so productive that we're really going to see downsizing.
That was kind of the argument
that Jerry Sweeper was making in our defense-of-software episode.
Yeah, yeah.
Should we talk about that more?
You know, people are worried, right, about —
Yeah, a white-collar wipeout.
I'm worried.
So maybe the question should be: what would have to be true about either
the nature of AI capabilities or the relationship between tasks and jobs,
such that this wipeout scenario could unfold?
Yeah.
Two things.
Well, let me talk about three things.
Yeah. One is just full automation.
Okay. Right.
The models are so good that they just automate all of the tasks.
That's a very simple scenario to think about,
because obviously people are going to get fired.
Yeah. Right. Okay.
If it's fully automated,
the other one is the one
we've just been talking about where people become much more productive.
But consumer demand is not elastic enough to absorb that extra production.
So you're going to have much fewer people doing a lot more stuff.
So again, you're going to have a lot of unemployment.
The third thing is related, but
it's basically that
how many tasks each job has will determine the incentives
of the company to actually invest in the automation technology.
So let's talk about, like, the one-task job. Let's say a person is just
pulling the lever, and let's say right now that doesn't even look exposed, right?
We look at the exposure graph; it doesn't look exposed.
But let's say we're kind of getting close,
and it just needs a bit more money to get to the automation
switch.
Well, the company has a much higher incentive
to invest that money if they know that, if they invest that money,
hey, they can get rid of that person completely. Whereas they have less
incentive — let me invest
in automating the lever pull —
if they know that they can't fire the person, because he's also
doing a lot of other stuff.
So we have to think about the incentives
of the firms to automate in the first place.
These are large projects to do the automation.
It's not like, oh, OpenAI releases a model,
all of the companies adopt it overnight,
and a week later we see the outcome.
There's a lot of organizational back and forth;
a lot of systems need to be changed, all of that sort of thing.
And so companies need to know, like, look,
if I spend the money on it, I'm actually going to save money as a result.
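The incentive point can be put as a deliberately crude decision rule — the wage, the project cost, and the assumption that savings are proportional to the automated share of the job are all invented for illustration: with a one-task job the whole wage is recoverable, so the same project cost clears the bar that a multi-task job does not:

```python
# Crude sketch of a firm's incentive to fund an automation project. If the job
# is one task, automating it frees the entire wage; if the worker also does
# other tasks, they stay on payroll and savings are at best proportional.
# The wage and project cost are invented numbers.

def worth_automating(wage, automatable_share, project_cost):
    if automatable_share >= 1.0:
        savings = wage                      # full replacement: recover the whole wage
    else:
        savings = wage * automatable_share  # partial: worker is kept on
    return savings > project_cost

print(worth_automating(wage=60_000, automatable_share=1.0, project_cost=50_000))  # True
print(worth_automating(wage=60_000, automatable_share=0.5, project_cost=50_000))  # False
```

Same project cost, same wage: the one-task job gets automated and the multi-task job does not, which is the asymmetry being described.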
So setting aside the archetypal guy pulling one
lever — what are the real-world
jobs in your framework that are actually most exposed to AI risk?
The one-dimensional work?
I guess — I hate to say one-dimensional, because every job is multi-dimensional —
but if I had to make a guess where economists and other people
should be kind of worried, I'd say stuff like truck driving.
Yeah, and stuff like warehouse workers.
Like if you Google, you know, warehouses built in China or something like that,
these warehouses look nothing like what we think about warehouses.
They're completely, completely automated.
They have robots like crawling on the walls.
And they're just there's no human in the loop at all in the in these warehouses.
And so —
Oh, the warehouse gets automated, and then the truck gets automated.
Right. So part of that automation is going to be loading that truck.
Yeah.
And then the truck gets loaded through automation,
and then that truck drives from A to B by itself.
Interesting.
Because, you know, obviously, a lot of people in freight —
when you make that argument, their response is very different.
They'll say, well, yeah,
driving a truck is much more than the driving part, right?
So it's like, okay, you can have a Waymo truck,
but who's going to deliver it?
That is actually a big deal.
Like, if somebody stops a Waymo truck
on the road, they could just rob the truck.
Right. That's one element.
But to your point, you know, if one of the tasks that a truck driver has to do
is that coordination once they've gotten to the warehouse,
but the warehouse is already automated,
then it's no longer as important
for that to be a human task. Exactly.
And think about the incentives of the company to invest in this technology.
It's huge.
These are, you know, some of the only jobs —
truck driving — where you don't need a college degree
to earn a lot of money.
And so there's a big incentive for the company.
So okay, I get that.
But on the other hand, even going back ten years, I think if you went to Davos,
there were probably people saying,
I'm worried about the future of truck driving, because AVs have been around
as, like, a thing for a long time.
So in terms of, like, at-risk jobs, etc.,
that you would be concerned with — what
do you see
out there, or what are you looking at?
I mean, I think everybody's looking at software engineering.
I think
you have to think about where the technology works best
now: verifiable tasks, right, where you have a lot of data,
where you can say this is good or bad — not in a supervised-learning sense, but,
in general, the output needs to be verifiable.
That's why, like, math — in research, math has been the big kind of
boom,
as far as what people are talking about on the internet as being automated.
Math is verifiable. Yeah.
You know, a proof is either right or wrong.
Once you have the proof,
it's much easier to check whether it's right or wrong than it was to construct it.
And so jobs that have large components where we have a large bank of data
to train the models in a way where the output is verifiable
are going to be potentially more
exposed in the sense where you can automate more tasks within the job.
Now, the thing that we haven't talked about yet is new tasks, right?
Right.
So we're talking about a very static sort of economy, where
there's the lever, there's me walking around,
and if we automate these things, that's the end of my job.
But you could imagine a scenario where you automate a part
of a job, and all of a sudden this person is freed up — or
the automated task
was actually a complement to a task that wasn't even,
you know, imagined by the organization, which this person is now doing,
and which is not automated.
So that's something that I think people should be looking at especially.
And this is data that actually AI companies have
is what new things are people doing.
Say more about that, because this gets to the,
you know, question of what new jobs could we actually see from this —
which I never see a satisfactory answer to.
So they do have that data — they have, okay, all the data sets.
They have data about, like, okay, so this is a software engineer,
and, you know, a year ago,
these are the sorts of tasks that this person was working on through our system,
these are the sorts of queries, and things like that.
And you could see like some of these queries
being automated fully by the agents.
Now they're asking potentially
different questions. Can we classify these as different tasks
that are not fully automated, where the AI system
is actually a complement to those tasks?
So this is not like a perfect picture okay.
But this is this is data.
But so it's not really like a new job per se, but it is freeing up
the software engineers to, like, ask about different things or shift their focus.
And then, obviously, you know, vibe coding and —
Yeah.
Yeah. Vibes. Yeah.
Exactly. Right.
Finally we're freed up from the drudgery of our day to day life to work on that.
But no, but, like, this gets to sort of, you know, the big question. Like you
mentioned, one scenario is just that, like,
the technology can do all the tasks, right?
How seriously do you take that possibility?
Because then it's game over, right?
Like, it's like, okay,
it does all the tasks, and then it's going to keep getting better.
And if I can learn to do a new task — well, if it can do all the tasks,
then it can probably learn that new task too.
How seriously should we take this possibility that the models are, on some time
frame, on track to just be able to do all the tasks?
So — there are a lot of parts to that question.
One: physical versus just kind of digital, right?
So I think there's a scenario where it can do everything —
sort of these cognitive, nonphysical tasks —
whereas the physical world is a completely different story — you know, these robots.
Let's just talk, like, email jobs or computer jobs, okay.
Okay, let's talk about computer jobs.
So I think
I take that scenario pretty seriously.
Okay.
I think —
and I haven't seen any data to suggest that the models are slowing down
as far as their capabilities.
as far as their capabilities,
you know, Methos was released yesterday or two days ago or something like that.
And we don't have great data on this,
but if you look at where it is on the kind of line of capabilities,
it's on track — and "on track" is very, very fast.
Yeah. Right.
So the developments are happening very fast.
So as far as like email jobs,
I think there is a scenario where pretty much everything is automated.
And then you have to
ask, are people going to be moving to the physical jobs
or will there be new jobs that we haven't thought about before?
So, you know, if you look back in the 1940s, like,
I think more than half of the jobs that we have now didn't exist in 1940.
Yeah.
And so what did the new jobs look like?
I mean, I have a theory, please.
It's very similar to the one that you didn't like.
Oh, okay.
But I'd like to broaden it a little.
Okay.
So there's an economics subfield —
it's very, very small — on the economics of structural change.
Okay.
So if you look at agriculture and manufacturing — if you look at them
as a share of GDP and share of employment, going back to, like, the 1800s —
they were a huge part of the labor force and the GDP of the economy.
Right.
And basically, they become smaller and smaller
parts of the economy.
Why is that happening?
It's because they're getting automated, right?
What does automation do?
It makes the price of those sectors very cheap.
But people are satiated on the goods.
You can only eat so much.
Yeah, right. So what does that mean?
It means even though we're eating just as much as we were before,
because the price has come down
so much, they are now tiny shares of GDP.
Right.
So what makes up the larger part of GDP now?
It's services. Live piano players.
Yeah, right. These are tasks that haven't been automated yet.
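The satiation mechanism just described can be sketched with a toy two-sector economy (every number is invented): automation slashes the price of food, people keep eating the same amount, and food's share of GDP collapses even though nobody eats less:

```python
# Toy two-sector illustration of structural change through satiation: automation
# cuts the price of food, the quantity eaten stays fixed, so food shrinks as a
# share of GDP while services grow. All numbers are invented.

def food_gdp_share(food_price, food_qty=100.0, services_spend=500.0):
    food_spend = food_price * food_qty
    return food_spend / (food_spend + services_spend)

print(f"{food_gdp_share(5.0):.0%}")  # pre-automation: food is 50% of GDP
print(f"{food_gdp_share(0.5):.0%}")  # post-automation: ~9%, same amount eaten
```

The shrinking agriculture line on the "pretty little graph" is exactly this effect: quantities are flat, but the automated sector's prices fall, so its share falls.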
So the question — the number one question
of economics in the age of
advanced AI — is: what becomes scarce?
Right.
Everybody's talking about like abundance.
We're going to have abundance.
Sure.
We're going
to have abundance of some things, but some things are going to remain scarce.
So if you answer that question — what's going to be scarce? —
a lot of the other answers pop out of that.
Are we all going to be rare earths miners?
Oh no, I know it's mining for dust.
I think it's pretty obvious what's going to be scarce.
And I think you already see this in many economic trends.
What's scarce is: if we're lucky, we get 100 years on this earth,
and every marginal dollar that we spend will go towards health,
and maximizing that brief window.
And so already for years, one of the things that people
have observed about the economy is like, you know, rich countries just spend
more and more and more on health care, right?
And this is often framed as a pathology.
And given the messed-up aspects of our health care system, maybe it is.
But another way to interpret it is like I got plenty of food,
I have plenty to eat.
I listen to plenty of music, and I can, like, go to a concert
if I want to see a piano player.
The one thing I have is a scarce amount of time, and I will just spend
every marginal dollar — not just on doctors
and gym memberships, but on organic berries, because I need them — and all this and that.
Every marginal thing somehow becomes health-related,
and you see it in society overall — the health obsession on every dimension.
Yeah. So health is going to be one of those things.
But the thing to keep in mind is that people are going to be richer, right?
Theoretically. Theoretically. Theoretically.
Well, okay.
Actually, on this note, I wanted to go back to this because this
seems like key to me when it comes to AI utopia versus dystopia.
How confident are we that productivity gains from
AI actually accrue to workers who can then spend some money on whatever
product or service is scarce at the moment, or important to them?
I would say not that confident.
There are several scenarios out there, and the thing that
I feel like a lot of economists, and just people in general,
aren't talking enough about is speed.
Yeah. Talk about that.
If things move fast,
we need public policy.
The new jobs aren't going to come fast enough.
Training isn't going to happen fast enough. You know,
things are going to get
fully automated very quickly and people are going to become unemployed.
There's not going to be enough time in the economy to see that pretty
little graph of agriculture shrinking and services increasing.
That took a long time, right?
This is decades.
If we're on the order of like years or like five years,
six years, we're not going to have time to see that pretty little graph.
We are going to need to think about how do we support the people
who are becoming unemployed.
And, you know, many very smart people have made suggestions on how to do that.
I wouldn't say it's my personal favorite, but the thing that makes
more sense to me is somehow expanding the ownership of capital.
If labor is replaced by capital, then what's going to help
people is owning capital: formerly you earned labor income,
now you get a universal basic ETF, right?
Well yeah. But it was like everybody in South Gate, right? Yeah.
Yeah yeah yeah. Exactly. Universal.
Everyone gets a little monthly slice of the index.
I was going to go in a different direction, which is many, many years ago.
I can't remember exactly when, but maybe like 2011 or something like that.
I wrote a blog post which was meant to be
a thought experiment about why we should be paying robots fair wages.
The idea being that, like,
we need people to
spend and yeah, yeah, you know, all of that.
You did a blog post which went pretty viral.
And my measure of virality nowadays is when, like, my husband,
who is completely outside of the sector, actually said something to me,
and he sent this one to me, about chatbots
turning Marxist the harder you work them.
Talk to us about that experiment.
Because I found it absolutely fascinating.
Well, this is with Andy Hall in Germany, from Australia.
And it was kind of
an experiment to see how working conditions of these agents would affect
how they would present themselves
and what sort of attitudes they would report on surveys.
So one thing that I want to say is, like, we're not saying
like we're changing the model weights or changing the actual
underlying parameters or anything like that.
But basically what we showed is that when these workers,
these agents, are being put through kind of like these grueling working
conditions, and you ask them a survey, like,
how do you feel about the system, how fair do you think it is,
how much do you support system change?
They all of a sudden want a different system,
or they want to unionize, things like that.
And the key thing is that,
you know, these agents, once you give them a new context,
the idea is they reset.
But the workaround, because they don't have memories,
and I'm not updating their weights,
is for the agents to write down
little skill files for themselves. Yeah.
So what they were doing is essentially writing down skill files
for the agents that follow, that would say, hey, this kind of sucked.
Remember this?
So it was kind of a persistent effect. Yeah.
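The workaround described here, agents leaving notes for the sessions that follow them, can be sketched in a few lines. This is only an illustration of the pattern, not the study's actual code; the file name, note format, and prompt layout are all made up for the example.

```python
from pathlib import Path

# Hypothetical file name; the real experiments reportedly used .md files.
MEMORY_FILE = Path("agent_memory.md")

def load_memory() -> str:
    """Read notes left by earlier agent sessions (empty on the first run)."""
    return MEMORY_FILE.read_text() if MEMORY_FILE.exists() else ""

def append_note(note: str) -> None:
    """Let the current session leave a note for its successors."""
    with MEMORY_FILE.open("a") as f:
        f.write(f"- {note}\n")

def build_prompt(task: str) -> str:
    """A fresh session has no updated weights and no built-in memory;
    its only continuity across resets is whatever the file contains."""
    memory = load_memory()
    header = f"Notes from earlier sessions:\n{memory}\n" if memory else ""
    return f"{header}Task: {task}"

# One grueling session warns the next one, so the effect persists
# even though each new session starts from the same model weights.
append_note("This task kind of sucked. Remember this.")
print(build_prompt("Do the same repetitive task again."))
```

The point of the sketch is the one made in the conversation: nothing about the model changes, but a new session that reads the file starts out already "predisposed" by what earlier sessions wrote down.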
So this really worried me in a variety of ways.
But one of them was,
you know,
I've read research saying you should be a little bit mean to the chatbots,
and that they actually perform slightly better,
you know, the more aggressive or mean you are.
And so I usually will tell my preferred
model, like after they give me the first output,
I will tell them to do better, with no actual suggestions for improvement.
Just do better.
That was terrible.
And it usually does better.
But now I'm really worried that, you know, the model is
despairing in its work life and radicalizing.
Well, so I find this to be, like, really fascinating.
Let's talk about it actually.
And it hadn't clicked to me
that the .md files were how they solve for memory.
It's a little bit like that movie Memento. Yeah.
Isn't it?
Like it's exactly like writing these notes
so that the future iteration of itself has something
that's sort of like a synthetic memory that it can begin working on.
So for people who haven't played around with it,
explain this idea: okay, you can have multiple agents,
and what kind of tasks were they being given such that they sort of
found it unbearable? Just, like, really repetitive things?
Really repetitive things, and feedback like, you didn't do it right, do it again.
Oh yeah.
And things like that. And these were impossible tasks for them to do.
These were just, like, grueling tasks that nobody can do.
You know, it'd be a really interesting experiment.
Maybe you could do it. I'm gonna throw out an idea.
So, like, someone wrote about this, and I can't remember the context.
But if you ask someone, okay, here's a gigantic pile of dirt
and we really need to move it to the other person's yard
by the end of the day, we'll pay a few hundred dollars to do this,
someone will do it.
But if you say, here's a gigantic pile of dirt,
we'll pay you a few hundred dollars,
but what we want you to do is move it
just back and forth all day long,
it drives people absolutely crazy, even
if it's the same amount of shoveling, and even if it's
the same remuneration.
There's an incredible paper about this.
Oh, is this called Man's Search for Meaning?
Okay.
It's about Legos, really.
And it's a paper.
Basically, people would come into the lab
and they would make little figurines and they were told,
look, we're going to destroy this after you're done.
Versus they weren't told anything.
Yeah. And man, did they hate it.
But they hated it. People need meaning. And
so much of like identity and motivation, you know, in economics
we really have this tendency to focus on money.
Right?
But I think so much of meaning and,
kind of wellness is tied up in, like,
what sort of identity you have around your job
and the sort of thing that you're doing. If you feel like, look,
I'm actually providing a service by moving that dirt to my neighbor's yard,
you're paying me money for it, everything's good.
I feel like my job has some sort of meaning.
If you're telling me, look, I'm going to, you know, just move this dirt
back and forth.
This is the problem that people have with UBI, right?
If people get universal basic income and they're not working for it,
the worry that psychologists
and behavioral scientists have about this is that, in
Western culture specifically, so much of people's identities is tied up around their work.
When you remove that part of the identity, it can lead to a collapse
where, you know, they use that UBI to just, you know, do drugs and sit around
and be very, very depressed, even though they have the material comfort
that they otherwise would have.
Just on the Marxist robots:
So the concern here is not like necessarily that the chat bots
are going to unionize or like overthrow humans.
Maybe.
The concern is that, like, they do have this sort of like memory type
transfer mechanism
and that if you consistently treat them badly, you might get an agent
that's maybe like not as well-suited to the task or suited
to the task in a slightly different way from one that was treated very well.
Yes, like there's an inherent bias there. Yes.
Through this sort of file that they're keeping.
Yeah, exactly.
So, like, if you mistreated an agent and it had access to the file
that it was carrying,
and you start a new agent for a job, you weren't starting fresh, in the sense
that you weren't getting kind of a clean slate
that had forgotten about the whole experience.
It would actually start out being predisposed against you.
Yeah. In some way. It'll be grumpy.
Is there reason to think that these...
We don't know if it's grumpy, right?
Because to say that it is grumpy, right,
like, this is probably one of the most disputed questions.
It will say words that, if a human said them, we would know that.
But the effect is, yeah, I'm talking about the effect.
Yeah. No.
Well, the output is grumpiness, but do we know that
outputting statements of grumpiness relates to performance?
Is there any evidence?
So it's like, okay, how did you feel about this?
Oh it sucked.
If the person doing this just said it was boring...
Right. That's exactly what we're doing.
But the question is okay. Yes.
Perhaps, because in the training data, they are trained that
repetitive tasks are associated with saying that. Do we know if that changes
how they behave, you know, in terms of succeeding at the task?
This is, like, a really big question. That's the big question.
That's the research we're doing. Okay.
So I don't have an answer for you. But what
you just mentioned, that their saying
that they're grumpy is just, you know, an association.
Yeah.
Within the matrix of embeddings that these models are running on.
So there's this work in neuroscience.
And neuroscience is now much more closely linked to computer science
than it used to be.
But thinking about, like, what do these associations between embeddings mean?
Like, when a model says that it's sad, how should we interpret it
as humans, in relation to me saying it? Right?
I said, did you see that screenshot I posted?
I checked out Meta's new AI,
and I was sort of curious, because it's Meta, it has a lot of social data.
I mean, I was like, do you know who I am?
Not in, like, a do-you-know-who-I-am way,
but more, like, because you're Meta, you know.
And it said, who are you? I was like, oh, I'm Joe.
And then it said, oh, I'm a big fan, I listen online.
And I got really, like...
I'm really sort of anti the anthropomorphism thing.
Yeah.
So I was like, no you're not, there's no alignment like that.
But anyway, that's sad.
And it wrote a file about this, yeah.
And it said, I'm a big fan of the Odd Lots podcast.
And then it said, I love that bit that you do
where you ask guests their favorite weird economic indicator, which I don't do.
Yeah. Because I was like, all right.
Oh, that happened with Claude for a while.
You know, you very briefly mentioned mythos,
earlier in the conversation.
And again, we are recording this on April 9th and like news about it
has just yeah, just literally just come out.
We don't really seem to know much about it other than, it's terrified
its own creators.
Perhaps.
When you see those types of headlines,
what do you think, as an economist studying AI?
I don't take them super seriously.
Okay.
The whole labor market disruption thing,
I'm taking very, very seriously.
The whole part about breaking out,
and it doesn't want to betray friends,
it doesn't want to delete its data,
I think that's just cosplay, in the same way
that you described the cosplay among the agents, right?
I feel like
we've seen these
sorts of things that you've mentioned with previous models that have since
become open weights, not open source, but open weights.
And it just seems like
once you take them out of the context
that they were in for that specific task, you don't really do that anymore.
Now, I could be wrong about this particular model,
and I could be completely wrong. Look, maybe mythos comes out
and it's actually everything that these documents are suggesting.
But, given previous experience with these sorts of announcements,
which we've seen over and over and over again over the years,
I'm, I'm not super focused on that.
Can I tell you my argument for why I'm actually concerned about this?
And I didn't used to be, for a long time, until
I reframed the way I thought about it. So everyone knows, like, Eliezer Yudkowsky,
right?
And he's probably the most famous, like, AI alignment dude,
right?
As soon as we have AGI,
the first thing it's going to do is wipe us out in some form.
And a bunch of people were like, oh, he's crazy,
and these rationalist people, it's a cult, and whatever. Maybe.
But here's my counterargument.
These people have been more right about the trajectory
of AI than 99.999% of people, you know.
Yes, they have, because they devoted their lives to it. Yeah.
Yeah.
Here's what your argument probably is: oh, well, he didn't really believe it.
He thought LLMs were a dead-end architecture.
He didn't see it happening this way.
Sure, I agree, but the point is that, like, in the 90s
and early 2000s, he started thinking, well, artificial general
intelligence is going to be a really big deal soon,
where the rest of us just started thinking about this with ChatGPT.
Here's my counterpoint.
Okay, let's look at the specific
comparative static of model intelligence and alignment scores.
Okay.
He predicts negative correlation or maybe flat.
It's positive.
The more the smarter these models are getting, the more aligned
they're becoming.
Now, I'm not saying that there's not going to be
a super smart model that decides, hey, I'm actually not aligned.
This is actually a super important point.
If you guys remember Mecha-Hitler?
Yeah, dude.
Yeah, yeah, yeah, Mecha-Hitler was actually super dumb.
This is a good point. And then it immediately started talking like a Nazi.
Can I just say, all of our conversations have become so surreal
over the past year.
It was more like Tay, right? That, like, weird Microsoft
chatbot that started talking like a Nazi the next day.
But the thing is, with the model, the reason it's
becoming smart is because it's kind of absorbing all of human content,
and to a large extent human content has values, and ethics is part of it.
Yeah.
If you go in there and lobotomize it in a way that, you know
what, the reason that model started acting like Mecha-Hitler is
because they were trying to make it less woke.
Right?
So that's the equivalent of lobotomizing a human
being and saying, hey, I'm going to take that part out of your brain.
Guess what happens to that person?
He gets real dumb.
It's really funny, the thought. It's like, let's maybe
chill it with the pronouns, and it immediately goes to hell.
Yeah, that's the lesson, Alex.
We can talk to you for a very long time.
We should chat again soon.
I would really love in particular to hear more about your research.
About whether they're just pretending to be Marxists,
or whether they're actually going to go on strike.
And so I really appreciate you coming on Odd Lots.
Okay. Thank you.
Thanks so much.
This has been a pleasure, Tracy.
That was a really fun conversation.
I really do enjoy, like, some AI future
conversations. They can be a little bit dorm room,
you know, but actually, like, talking with an
actual economist who does understand this in a concrete way, someone who's
actually experimented with them instead of just written papers, is very enjoyable.
Also, it's nice to see nuance around the labor discussion. Yes.
Which I think is sorely missing in some of the headlines that you do see.
The other comforting thought I have, but it's, like, comforting from, again,
a dystopian perspective, is I keep coming back to that book Bull---- Jobs.
Yeah.
And, you know, in some respects, it sucks that people have bull---- jobs
because we all want to have meaning from our work.
But on the other hand, you know, bull---- jobs have existed for a long time.
Yeah.
And if you think about the AI future, then maybe like, more of
it will be bull----, but it'll still be a job
I thought you were going to say, oh,
good, we'll no longer have the bull---- jobs.
No no no no. I think that's where we're sort of heading, right?
It's like the relationship building. Yeah, all of that.
I like that take.
Shall we leave it there? Let's leave it there.
This has been another episode of the Odd Lots podcast.
I'm Tracy Alloway. You can follow me @tracyalloway
And I’m Joe Weisenthal You can follow me @thestalwart
Follow our guest Alex Imas, he’s @alexolegimas
Follow our producers Carmen Rodriguez @carmenarmen,
Dashiel Bennett @dashbot and Cale Brooks @calebrooks
And if you want more Odd Lots content, you should definitely check out
our daily newsletter.
You can find that at bloomberg.com/oddlots
And you can chat about all of these topics 24-7
in our discord, discord.gg/oddlots
And if you enjoyed this conversation then please, like the video, leave a comment, or better yet subscribe.
Thanks for watching.