99% of Developers Don't Get Transpilers
Transpilers are a concept that most
developers fail to understand. They
think it's black magic that turns
TypeScript into JavaScript or they treat
tools like Babel and SWC as black boxes.
In the next 10 minutes, we're going to
unpack abstract syntax trees, reverse
engineer how JSX becomes JavaScript, and
finally understand the difference
between a transpiler, a compiler, and an
interpreter. Let's dive in. A transpiler
is a special kind of compiler that
translates source code from one high-level language to another high-level language,
usually while preserving the original
program's intent and structure. The key
idea is that the output is still human-readable source code, not machine code or bytecode. At a conceptual level, a
transpiler exists to bridge a gap
between how developers want to write
code and what a target platform actually
understands. Transpilers allow
developers to use modern or more
expressive language features like those
in TypeScript, Babel for JavaScript, or
CoffeeScript that the target environment
such as an older web browser or specific
JavaScript engine might not natively
support. Developers can write code using
preferred syntax, newer standards,
ECMAScript 2015 plus features, or higher
level abstractions, while the transpiler
ensures the resulting output is
compatible with the required, often older or more constrained, target platform.
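For instance, here is the kind of rewrite this enables. This is a sketch of what a transpiler such as Babel might emit for an ES5 target; real output varies with configuration, and the `double_es5` name is just for illustration:

```javascript
// Modern (ES2015+) source a developer writes:
const double = (n) => n * 2;

// Roughly what a transpiler might emit for an ES5 target
// (illustrative, hand-written equivalent — not actual Babel output):
var double_es5 = function (n) {
  return n * 2;
};

// Both behave identically at runtime:
console.log(double(21), double_es5(21));
```

The point is that the output is still ordinary JavaScript source, just restricted to an older subset of the language.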
Let's talk about transpilers versus
compilers and interpreters. A
traditional compiler typically takes
source code and emits low-level output
like machine code or an intermediate
form such as bytecode. An interpreter
executes programs by directly evaluating
source code or more commonly in modern
systems by executing bytecode produced
by an earlier compilation step. In
practice, most interpreters are hybrid
systems that parse source code into an
intermediate representation and then
interpret or JIT-compile that
representation at runtime. A transpiler
sits in between. Like a compiler, it
performs parsing, semantic analysis, and
structured transformations. But instead
of lowering the program into machine
code or bytecode, it lowers it into
another source language. So the output
is kept at a similar level of
abstraction. The output is intended to
be consumed by a separate compiler,
interpreter, or runtime. For example,
JavaScript emitted by a transpiler is
later parsed, optimized, and often JIT
compiled by a browser's JavaScript
engine. The key distinction is not how
much analysis is done, but what
abstraction level the output targets.
Compilers target execution. Interpreters
target evaluation. Transpilers target
compatibility and language translation.
Modern programming environments
sometimes blur these lines. JIT compilers, for example, combine both approaches: they interpret code initially, then compile hot sections to machine code for speed. There are also hybrid models: languages like Java are compiled to bytecode, a form of compilation, and then interpreted by a virtual machine. Now,
how do transpilers actually work under
the hood? Internally, a transpiler is
best understood as a tree-to-tree
transformation system with compiler
discipline. Everything revolves around
the abstract syntax tree, not text. Once
source code is parsed, formatting and
surface syntax mostly disappear. What
remains is a structural model of the
program. The process begins with parsing
into an abstract syntax tree, or AST, using
the source language grammar. Operator
precedence, associativity, and syntactic
structure are resolved here. At this
point, the transpiler is not thinking
about compatibility or lowering yet. It
is only constructing a faithful
representation of the program. Next
comes scope and binding resolution.
Identifiers are linked to their
declarations. Lexical scopes are
constructed and shadowing is made
explicit. This step is important.
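Here is a concrete sketch of that hazard: a scope-blind rewrite of a block-scoped loop variable changes closure behavior, while a per-iteration wrapper function (in the style of what Babel emits, though simplified here) preserves it:

```javascript
// Source: block-scoped `let` creates a fresh binding per iteration.
var results = [];
for (let i = 0; i < 3; i++) {
  results.push(function () { return i; });
}
// results.map(f => f()) → [0, 1, 2]

// A naive `let` → `var` rewrite breaks this: every closure now
// shares one binding, so each function returns the final value 3.
var naive = [];
for (var j = 0; j < 3; j++) {
  naive.push(function () { return j; });
}
// naive.map(f => f()) → [3, 3, 3]

// A scope-aware transpiler instead introduces a wrapper function
// so each iteration captures its own copy of the loop variable:
var correct = [];
var _loop = function (k) {
  correct.push(function () { return k; });
};
for (var k = 0; k < 3; k++) {
  _loop(k);
}
// correct.map(f => f()) → [0, 1, 2]
```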
Transformations that look trivial at the
syntax level can break program behavior
if scoping rules are misunderstood. For
example, rewriting block scoped
variables requires precise knowledge of
closure boundaries and lifetime
semantics. The core of the transpiler is
a transformation pipeline composed of
multiple passes. Each pass targets a
specific language feature and rewrites
it into more primitive constructs. These
passes pattern-match on AST nodes and
replace them with equivalent sub trees.
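As a sketch of what such a pass looks like, the toy walker below rewrites every arrow function node into an equivalent function-expression subtree. Node shapes loosely follow the ESTree convention that tools like Babel use, but this is an illustration, not a real plugin API:

```javascript
// A toy AST for: const f = (x) => x + 1;
const ast = {
  type: "VariableDeclaration",
  kind: "const",
  declarations: [{
    type: "VariableDeclarator",
    id: { type: "Identifier", name: "f" },
    init: {
      type: "ArrowFunctionExpression",
      params: [{ type: "Identifier", name: "x" }],
      body: { type: "BinaryExpression", operator: "+",
              left: { type: "Identifier", name: "x" },
              right: { type: "Literal", value: 1 } },
    },
  }],
};

// One pass: recurse into children, then pattern-match on node type
// and replace matching nodes with an equivalent subtree.
function lowerArrows(node) {
  if (node === null || typeof node !== "object") return node;
  if (Array.isArray(node)) return node.map(lowerArrows);
  const out = {};
  for (const key of Object.keys(node)) out[key] = lowerArrows(node[key]);
  if (out.type === "ArrowFunctionExpression") {
    return {
      type: "FunctionExpression",
      params: out.params,
      // An arrow's expression body becomes an explicit return statement.
      body: { type: "BlockStatement",
              body: [{ type: "ReturnStatement", argument: out.body }] },
    };
  }
  return out;
}

const lowered = lowerArrows(ast);
```

Real transpilers chain many such passes, each narrowing the program toward the target subset of the language.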
A single feature may go through several
stages. The ordering of these passes
matters because earlier transformations
often introduce structures that later
passes must understand. This is how
modern transpilers like Babel,
TypeScript or SWC function. They use
this pipeline of transformation passes
that traverse and modify the abstract
syntax tree, breaking down complex syntax into smaller, backward-compatible
code. The pipeline ensures that high-level features, such as async/await or ES6 classes, are systematically reduced in a specific order to more fundamental constructs, like generator-based state machines or ES5 functions. Each pass reduces the program
toward a smaller, more restricted subset
of the target language. Mature
transpilers define a lowest common
denominator language level and guarantee
that all inputs eventually converge to
it. When the target language lacks
certain semantics, the transpiler
injects runtime helpers. These are small
support functions that emulate behavior
such as inheritance, iteration
protocols, or private fields. At this
stage, transpilers begin to resemble
runtime systems. Correctness now depends
not only on the transformation logic but
also on the behavior of these helpers.
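For instance, when lowering ES6 classes to ES5 functions, Babel injects a `_classCallCheck` helper so that calling the class without `new` still throws, as the class semantics require. The version below is a simplified sketch of that helper and of the lowered output:

```javascript
// Simplified sketch of an injected runtime helper (modeled on
// Babel's _classCallCheck, abbreviated here for illustration):
function _classCallCheck(instance, Constructor) {
  if (!(instance instanceof Constructor)) {
    throw new TypeError("Cannot call a class as a function");
  }
}

// `class Point { constructor(x) { this.x = x; } }` lowers to roughly:
function Point(x) {
  _classCallCheck(this, Point);
  this.x = x;
}

const p = new Point(3); // ok: p.x === 3
// Point(3) without `new` now throws, preserving class semantics.
```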
Finally, the transformed AST is printed
back into source code. The code
generator must reconstruct valid syntax,
manage parentheses and precedence, and
often balance readability against output
size. Source maps are emitted alongside
the output so runtime errors can be
traced back to the original source code.
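A minimal sketch of that precedence-aware printing step is shown below. The node shapes and the precedence table are assumptions for illustration, covering only binary arithmetic:

```javascript
// A tiny code generator: print a binary-expression AST back to
// source, inserting parentheses only where precedence requires them.
const PRECEDENCE = { "+": 1, "-": 1, "*": 2, "/": 2 }; // assumed table

function print(node, parentPrec = 0) {
  if (node.type === "Literal") return String(node.value);
  const prec = PRECEDENCE[node.operator];
  // The right child gets prec + 1 so left-associativity is preserved.
  const text = `${print(node.left, prec)} ${node.operator} ${print(node.right, prec + 1)}`;
  // Parenthesize when this node binds looser than its context.
  return prec < parentPrec ? `(${text})` : text;
}

// (1 + 2) * 3 — the `+` subtree needs parentheses under `*`:
const ast2 = {
  type: "Binary", operator: "*",
  left: { type: "Binary", operator: "+",
          left: { type: "Literal", value: 1 },
          right: { type: "Literal", value: 2 } },
  right: { type: "Literal", value: 3 },
};
console.log(print(ast2)); // "(1 + 2) * 3"
```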
What makes transpilers uniquely
difficult is that they must preserve
observable behavior across real
runtimes, not just theoretical
semantics. This means modeling
evaluation order, short-circuiting,
hoisting rules, and even long-standing
platform quirks. In practice, a
transpiler is less about syntax
rewriting and more about encoding a deep
executable understanding of language
semantics and compatibility boundaries.
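For example, lowering optional chaining (`obj?.prop`) has to preserve both short-circuiting on null/undefined and single evaluation of the base expression. The sketch below wraps that lowering in a hypothetical helper for illustration; real transpiler output inlines the same logic with temporary variables:

```javascript
// Source:  getObj()?.name
// The lowering must evaluate the base exactly once, then short-circuit.
function lowerOptionalMember(getBase) {
  var _base = getBase(); // evaluated exactly once — evaluation order matters
  return _base === null || _base === undefined ? undefined : _base.name;
}

let calls = 0;
function getObj() { calls++; return { name: "ast" }; }

const a = lowerOptionalMember(getObj);     // "ast"; getObj ran once
const b = lowerOptionalMember(() => null); // undefined; no property access
```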
So, while the transpiler ensures your
code syntax works in older browsers, it
doesn't guarantee your actual features
are working for users. To ensure your
application flows are solid without
spending all day rewriting your test
suite, check out today's sponsor, QA
Tech. QA Tech is an AI testing tool for
QA automation that uses agents to test
your product like a real user: goal-oriented, relying on UI behavior rather than brittle scripts or selectors. The
AI explores your product and generates
new test cases to build your test suite.
It navigates beyond predefined paths,
catching functional and non-functional
regressions that scripted tests often
miss. QA Tech plugs into your existing
infrastructure. Just point an agent at
your environment to run exploratory
tests or validate changes directly in
PRs and CI. Instead of binary pass or
fail results, you get qualitative
feedback and an actionable list of
issues to fix, complete with logs and
screen recordings. The biggest pain with end-to-end testing is usually maintenance, right? With QA Tech, tests adapt as your
product evolves. When the UI or flows
change, the AI updates existing steps or
suggests new coverage to keep critical
user journeys protected. That means no
flaky tests and far less time spent
fixing broken automation. QA Tech is
built to provide your team with
continuous feedback via integrations
with GitHub, GitLab, Slack, and CI/CD
pipelines so you can trigger test runs
when and how you need. Check out QA Tech
using the link in the description to
start automating your tests with AI
today. Now, back to the video. One of
the most well-known transpilers is
Babel. It translates modern JavaScript
and JSX into older JavaScript versions.
Features like arrow functions or let and
const are rewritten into function
expressions and var often with helper
code injected to preserve semantics.
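JSX lowering works the same way: the element syntax is rewritten into ordinary function calls, classically `React.createElement`. In the sketch below, `h` is a hypothetical stand-in factory so the example runs without React:

```javascript
// JSX source:             <div className="box">hello</div>
// Babel's classic output: React.createElement("div", { className: "box" }, "hello")
// `h` is a hypothetical stand-in for React.createElement:
function h(type, props, ...children) {
  return { type, props: props || {}, children };
}

const element = h("div", { className: "box" }, "hello");
// element = { type: "div", props: { className: "box" }, children: ["hello"] }
```

This is why JSX needs no browser support at all: by the time the code ships, the angle brackets are gone and only plain function calls remain.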
TypeScript is another example.
TypeScript adds a static type system and
additional syntax on top of JavaScript.
The TypeScript compiler strips away type
annotations and emits plain JavaScript.
In this sense, it is a transpiler from
TypeScript to JavaScript, even though it is often called a compiler. CoffeeScript provides a concise Ruby- or Python-inspired syntax, using indentation and
implicit returns that compiles to
JavaScript. It focuses on developer
ergonomics by reducing verbosity and
improving readability compared to traditional JavaScript, while still mapping closely to JavaScript rather than ignoring compatibility. Tools like
Emscripten, using LLVM, translate C or C++ into WebAssembly or JavaScript, allowing near-native performance for complex
applications on the web. The line between a transpiler (source-to-source) and a traditional compiler (source-to-machine-code) does blur in these cases
because the output is often highly
optimized low-level code that is not
intended to be read or maintained by
humans, similar to assembly code rather
than just cleaner source code. Why do
transpilers exist? Transpilers exist
because language evolution and platform
deployment operate on different
timelines. Language designers can add
features quickly, but runtimes,
operating systems, embedded
environments, and browsers upgrade
slowly, unevenly, or not at all. A
transpiler decouples these timelines by
allowing new language constructs to be
expressed in terms of older already
deployed capabilities. In large
ecosystems, backward compatibility is
often non-negotiable. Enterprises may be
locked to specific runtime versions.
Embedded systems may never receive
updates, and browsers must support
legacy code indefinitely. Transpilers
allow developers to write modern
expressive code without forcing every
consumer of that code to upgrade their
execution environment. Transpilers also
exist to manage semantic complexity at
scale. As languages grow, features like
async control flow, advanced type
systems, and new module semantics
introduce behavior that is difficult to
reason about directly. By lowering these
features into simpler constructs,
transpilers make the runtime execution
model more explicit and predictable,
even if the source language remains high
level. Another major driver is tooling
and ecosystem leverage. By transpiling into an existing, widely supported language, new languages inherit debuggers, profilers, linters, build
systems, and deployment pipelines for
free. This is why many experimental or
domain specific languages target
JavaScript, the JVM, or existing
scripting languages instead of building
new runtimes. A useful mental model is
to think of a transpiler as a
large-scale desugaring engine. Many
language features exist to improve
expressiveness, not power. A transpiler
removes that sugar and expresses
everything in terms of simpler
constructs. A for...of loop becomes an explicit iterator-protocol loop; async/await becomes promise chains or a state machine. The target language already has the expressive power; the transpiler just makes the control flow explicit. Unfortunately, translation is
not free. Generated code can be harder
to read, debug, and optimize. Source
maps are essential to maintain developer
ergonomics. There are also semantic edge
cases. When the target runtime does not
perfectly match the source language's
behavior, transpilers must compensate
with helpers or polyfills, increasing
complexity and bundle size. Polyfills
are just pieces of code that provide
modern functionality to older JavaScript
environments that do not natively
support those features. Polyfills run at
runtime in the browser, while
transpiling occurs at build time before
deployment. So transpiling converts
modern syntax into older compatible
syntax while polyfilling adds missing
built-in features such as APIs or
methods to older environments. For
example, converting an arrow function in
ES6 into a traditional function
expression would be considered
transpiling, while mimicking missing APIs such as Promise, fetch, Array.prototype.includes, or Math.trunc using existing older JavaScript capabilities is considered
polyfilling. Most importantly,
transpilers cannot transcend fundamental
platform limitations. They work best
when the target language is at least as
expressive as the source. Now, here's
the bigger picture. At a deeper level,
transpilers separate language evolution
from platform adoption. They enable
developers to write code using
future-facing abstractions while still
targeting present-day runtimes. If you
want to begin your journey in becoming a
10x engineer, I highly recommend
checking out CodeCrafters down below. You can learn how to build Docker, Redis, Git, compilers, and other
developer tooling from scratch. They are
hands down the best project-based coding
platform out there. So, do give their
platform a try. As always, thank you
very much for watching this video and
happy coding.