Azure Update 27th March 2026
Hey everyone, welcome to this week's
Azure update. It's the 27th of March. As
always, we have the chapters so you can
jump to any particular update you care
about the most. New videos this week.
So, I dove into, hey, I want to build
an agent. Should I use Agent Builder?
Should I use Copilot Studio? Should I
use Microsoft Foundry? So, I really went
through what are some of the personas
that would use the different options and
what are the capability differences that
would drive me to use one over another.
We also had our 400,000 subscriber ask
me anything session. So, the recording
is now available to see that. And then I
did a video on the new Entra backup and
recovery. So, I get these snapshots
taken daily, and there are five of them, so
I can go back and restore the state of
objects to a previous point in time. If
I then combine that with things like
soft delete and protected actions, I get
that complete ability to mitigate
accidental or malicious changes to my
Entra environment. So I went through that.
Okay. On to what's new on the compute
side. So Azure Kubernetes Service now
has application networking in preview. So
think of this as a new application layer
abstraction
for all of the Kubernetes traffic. So it
enables me to control those service-to-service
communications. It gets
me better observability
without having to inject a sidecar into
each of the various pods. So you can
think of it as providing mesh
capabilities
without all of that mesh overhead and
management I would have to do. And one
of the interesting things it's doing, and
we're starting to hear more about this,
is it's using SPIFFE
as a method to identify the various
parties, and then using that as part of
the network tracking and the network
control.
This is Linux only today, but the great
thing is because of this abstraction
layer, there's no app changes to take
advantage of it.
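As background on that SPIFFE piece: SPIFFE identifies workloads with URIs of the form `spiffe://<trust-domain>/<workload path>`, per the SPIFFE specification. Here's a tiny parser just to show what such an identity carries; the workload path in the example is hypothetical, not something this AKS feature dictates.

```python
from urllib.parse import urlparse

def parse_spiffe_id(spiffe_id: str) -> dict:
    """Split a SPIFFE ID into its trust domain and workload path."""
    u = urlparse(spiffe_id)
    if u.scheme != "spiffe" or not u.netloc:
        raise ValueError(f"not a SPIFFE ID: {spiffe_id!r}")
    return {"trust_domain": u.netloc, "path": u.path}

# Hypothetical workload identity for a pod's service account:
print(parse_spiffe_id("spiffe://cluster.local/ns/default/sa/checkout"))
# {'trust_domain': 'cluster.local', 'path': '/ns/default/sa/checkout'}
```

Because the identity travels with the workload rather than its IP, the network layer can track and control traffic by who is talking, not where they happen to be scheduled.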
Um, AKS mesh-less
app routing is in preview. So basically,
I can now use the Kubernetes gateway
APIs for ingress management again
without having to leverage a sidecar
architecture.
And the AKS network logs have gone GA.
So container network logs provide a
capture of the network flow metadata. So
it's not all of the packet data but it's
the metadata. So IP addresses, ports,
namespace, the pods, the services, the
flow direction, verdicts from
policies, and a bunch of other stuff at
layers 3, 4, and 7. Now this works by
writing to local storage and then
optionally sending to a Log Analytics workspace.
And when I do that, I can filter
it to only specific resources of
interest. And there's also an on-demand
mode where I would use Hubble as part of
that capturing. So it's all about getting
really good insight into the traffic.
There are now managed GPU metrics in
preview. So if I'm using nodes with
GPUs, I can see performance and
utilization data where I'm using
NVIDIA GPUs. And this is going to
hook into managed Prometheus and
Grafana. So things like GPU
utilization, memory utilization,
streaming, multiprocessor efficiency.
There are metrics around temperature,
power, bandwidth, frequency,
reliability. So basically giving me
really good insight into the GPU
characteristics of the nodes.
AKS Fleet Manager now has cross-cluster
networking in preview. So if I have
applications that span AKS clusters,
what it's actually going to provide is a
managed Cilium Cluster Mesh. But
because it's managed, it's going to make
it really easy to configure and
obviously manage the thing. So what it's
going to do is, once I enable this, any
published service from anything within a
particular cluster can then be used by
any connected cluster as if it was local
to the cluster. All I have to do is
mark the services with a global
annotation, but then, hey, I really get this easy
cross-cluster network communication. And I
also get global
observability, because I'm going to get
shared metrics and flow logs across all
those clusters that are part of that
cross-cluster networking.
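As a sketch of that "global annotation" idea: upstream Cilium Cluster Mesh publishes a Service to every connected cluster with the annotation shown below. The exact annotation the managed Fleet Manager feature expects may differ, and the service name is hypothetical, so treat this as illustrative only.

```yaml
# Illustrative only: upstream Cilium Cluster Mesh uses this annotation
# to make a Service reachable from every connected cluster as if local.
apiVersion: v1
kind: Service
metadata:
  name: checkout          # hypothetical service name
  annotations:
    service.cilium.io/global: "true"
spec:
  selector:
    app: checkout
  ports:
    - port: 8080
```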
Um, AKS container network metrics now have
filtering in GA. So the normal metrics
that are part of container network
observability can generate a massive
volume of data. And filtering allows me
to control what data is captured. So I'm
only going to get the signals I really
want, which helps reduce storage, which
helps reduce cost, which also reduces
the noise when I'm trying to actually
understand what's going on.
There is an AKS network AI agent
available in preview. So what it enables
me to do is interact using natural
language. I can give it a problem
description and it will turn it into
diagnostics information from all of the
various data that's captured. So it's
going to make the cluster
troubleshooting much easier.
AKS now has a blue-green agent pool
upgrade option available in preview. So
when I think of safe deployment
practices, one approach is kind of we
have these rings of deployment. We do
kind of a rolling upgrade approach which
is the traditional approach. But what
this lets me now do is have a
parallel node pool that runs the new
configuration. So as I make a change,
that new node pool has the new
configuration, and I can split traffic
to start seeing, hey, is it functioning
as I would anticipate? If all's good,
I move all the traffic over
to it and then obviously I can delete
the old node pool. If there's a problem, I
can roll back to the current node pool.
And I can use this for Kubernetes
upgrades, node image upgrades, and config
changes.
It does mean you've got double the
number of resources during the upgrade.
The two pools don't always exist; when I want
to use the blue-green, it will go and
create the new node pool with the new
config.
During the time of that blue-green, I've
got double the resources, so double the
cost and double the quota use. You need
to make sure you have the quota, but
then obviously the old pool gets deleted. So, it's
only double during the period of the
upgrade.
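The control flow behind a blue-green rollout can be sketched in a few lines. This is not the AKS API, just an illustration of the logic: stand up the green pool, shift traffic in steps while health-checking, and roll everything back to blue on any failure. The step percentages and health-check signal are assumptions for the example.

```python
# Illustrative sketch of blue/green cutover logic (not the AKS API).

def blue_green_upgrade(run_health_check, steps=(10, 50, 100)):
    """Return the final traffic split as (blue_pct, green_pct).

    run_health_check(green_pct) -> bool stands in for whatever signal
    you trust (error rates, readiness probes) at each traffic step.
    """
    green = 0
    for pct in steps:
        green = pct                      # shift more traffic to green
        if not run_health_check(green):
            return (100, 0)              # problem: roll back to blue
    return (0, 100)                      # all good: blue can be deleted

# A healthy rollout ends with everything on green...
print(blue_green_upgrade(lambda pct: True))        # (0, 100)
# ...while a failure mid-rollout rolls back to blue.
print(blue_green_upgrade(lambda pct: pct < 50))    # (100, 0)
```

The cost observation from above falls out of this naturally: between the first step and the final cutover, both pools must exist, which is the window of doubled resources and quota.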
Um, Arc-enabled Kubernetes now has
one-click enablement of recommended
Prometheus alerts based off of community
rules. So that's going to give you
really good coverage of the cluster, the
nodes, the pods, and I had this
previously, but I had to do a template
based deployment. So this makes it much
much easier.
Um, Azure Container Storage now has
Elastic SAN integration in GA. So
remember, Azure Container Storage is all
about providing
very high-quality storage for my AKS
workloads. Previously it was GA for
local node NVMe storage.
Now, in addition to that, I support
Elastic SAN, which gives me more durable
storage and more flexible pools for various
scenarios with different levels
of performance. So now I get a greater
choice.
On to the database side.
So SQL Database now has automatic
index compaction in preview. So this is
SQL DB, SQL MI, and SQL in Fabric. It is
a background automatic index compaction.
So I'm automatically going to reduce the
amount of storage space I use, therefore
the cost and also I'm going to get
improved performance because it's going
to use less CPU, memory, and disk IO. So
this removes the need for me to have
scheduled index jobs and I just enable
it with a single command.
Um, SQL Managed Instance now has change
event streaming in preview. So any row-level
change, so an insert, an update, or a delete, can
now stream to an Event Hub with this
change event streaming, and it's
basically in near real time. Then
obviously from Event Hub I can trigger
various serverless things to work off of
that. So it's going to let me build an
event-driven solution, use real-time
analytics, and more, without having to do
anything specific in my code.
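To make the event-driven idea concrete, here is a minimal sketch of a consumer dispatching on a row-change event. The payload shape below is an assumption for illustration, not the documented change event streaming schema, and the table and column names are hypothetical.

```python
import json

# Hypothetical sketch: a serverless consumer dispatching on row changes
# that arrived via Event Hubs. The payload shape is assumed, not the
# documented change event streaming schema.

def handle_change_event(raw: str) -> str:
    event = json.loads(raw)
    op = event["operation"]       # assumed: "insert" | "update" | "delete"
    table = event["table"]
    if op == "insert":
        return f"new row in {table}: {event['row']}"
    if op == "update":
        return f"row changed in {table}: {event['row']}"
    if op == "delete":
        return f"row removed from {table}"
    raise ValueError(f"unknown operation {op!r}")

sample = json.dumps({"operation": "insert", "table": "dbo.Orders",
                     "row": {"OrderId": 42}})
print(handle_change_event(sample))
```

In practice the consuming side would be an Azure Function or similar reading from the Event Hub; the point is only that the database emits the changes, so the application code needs no triggers or polling.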
Uh, SQL Server has soft delete available
in preview. So, hey, I can set a soft
delete retention, so I can restore
SQL servers in the event of a deletion.
Uh, SQL Hyperscale has some new SKUs
in preview. Remember, Hyperscale
enables me to scale to much higher
performance and capacity because it
separates the compute from the page
servers. So, there are new 160 and 192 vCore
options for premium-series hardware.
So that gives me a much larger compute,
much larger memory configuration where I
have those really really demanding
workloads. So think large-scale
OLTP, HTAP, and analytics-heavy workloads. And
I can use this for both single database
and elastic pool.
There have been some DiskANN enhancements
in preview across SQL Database. So
vector databases are huge today. When we
think of generative AI, it's natural
language interactions. We often want
these vector databases that store
embeddings in these high dimensions that
represent the semantic meaning of data
and then I go and search for, hey, I'm
looking for something; I turn that into
an embedding and I find the closest
match. So DiskANN is a Microsoft
Research-created vector search
capability that is part of SQL Database
and part of SQL database in Fabric, and it's
been improved so that the tables are no
longer read-only after that index
creation. Filters are now applied
during vector searches and not after, so
it's going to be a lot more performant and
use less resource. There are also
improvements in choosing between
DiskANN and the regular K-nearest-neighbor
algorithms, along with
some other optimizations. So basically
just improving those all-up
capabilities related to the vectors.
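The mechanic being described, embed the query and return the stored item with the closest embedding, can be shown in a few lines. DiskANN is an approximate index over exactly this idea; the sketch below does the brute-force version, and the document names and embedding values are toy data, not real model output.

```python
import math

# Minimal vector search: store embeddings, embed the query, return the
# closest match by cosine distance. DiskANN approximates this at scale.

def cosine_distance(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return 1.0 - dot / (na * nb)

def nearest(query, docs):
    """docs: {doc_id: embedding}; return the id with smallest distance."""
    return min(docs, key=lambda d: cosine_distance(query, docs[d]))

# Toy embeddings standing in for model output:
docs = {
    "returns-policy": [0.9, 0.1, 0.0],
    "shipping-times": [0.1, 0.9, 0.2],
}
query = [0.8, 0.2, 0.1]   # pretend this is the embedded question
print(nearest(query, docs))  # returns-policy
```

The "filters during the search" improvement mentioned above matters because filtering after the fact means the index has to over-fetch candidates and then discard most of them; pushing the filter into the traversal avoids that wasted work.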
Uh, Azure Monitor OTLP ingestion is in
preview. So I can bring in OpenTelemetry
data, metrics, logs, and traces
directly into an Azure Monitor workspace
because it has a native OpenTelemetry
Protocol (OTLP) supported endpoint. It uses
Entra for the authentication.
And then PostgreSQL has custom time
zones for the cron scheduled jobs.
So now I can set a time zone to be used
for those scheduled jobs, which is really
useful to ensure jobs happen based on a
desired regional time zone. So, hey, I
want to make sure this doesn't happen
during business hours of the place using
it, instead of trying to work around
what the default is based on the server.
PostgreSQL has migration updates in GA. So
I can now migrate from EDB Postgres
and Google AlloyDB to Azure
managed PostgreSQL, and I can also use
pgoutput now for minimal-downtime
online migrations.
And then Microsoft Fabric now supports
MySQL mirroring. So I have my Azure
Database for MySQL flexible server; it can
then mirror, without me having to create
data pipelines or anything else, into
Fabric's OneLake in basically near real
time. So it makes it immediately
available for any of the Fabric
workloads like analytics, AI, Power BI,
you kind of name it.
And then Fabric Cosmos DB mirroring for
private endpoint-enabled databases has
gone GA. So I have a Cosmos DB database.
It's using private endpoints. I can now
enable the mirroring of it to Microsoft
Fabric. There's some additional
networking I have to add during the
establishment of the mirror. But once
it's established, I can remove it again.
So I'm reducing the connectivity for
my Cosmos DB to only be those private
endpoints.
On to the miscellaneous side.
So Foundry priority processing has gone
GA. So there are certain situations
where the latency for inferencing is
critical to the AI app or agent's
performance. Now, one thing we've had in
the past, and still do, is
provisioned throughput units, PTUs. So, a
guaranteed amount of throughput,
which is a set amount that I provision
and pay for, set in advance,
instead of the regular pay-as-you-go
usage.
Well, priority processing gives high
speed performance on a pay as you go
basis. So maybe I don't know the exact
amount I need or maybe I've got a PTU
but I need some additional at certain
times. So this lets me get higher
priority processing. So lower latency,
higher throughput when I have that time
critical inferencing need, but I'm not
doing that commitment to that amount of
throughput in advance. Now obviously I'm
going to pay a price premium for this.
There is a price premium over the
standard tier pricing. It varies by
model. Um but it is available for the
latest models for global and data zone.
And obviously it's not that I only use one of
them. I could combine this with standard
pay-as-you-go, with PTUs, and
with batch to work out what is
the right solution for what I need.
Oh, I went backwards somehow. Didn't
even notice that. Uh, Entra ID external
MFA has gone GA. So that lets me use an
external MFA solution
that supports OpenID Connect as part
of the Entra ID authentication. So that
includes using Conditional Access, and it
replaces the old custom controls, which
are being deprecated.
And then Entra tenant governance has
gone GA. So this will help with a number of
different things. So one it will help me
as an organization detect what are almost shadow
tenants being used by my company. So
based on patterns of external
identities, multi-tenant apps, even
billing, it will go and find those other
tenants. It will then help create
relationships to help administer those
other tenants. And then I can also
enable a secure tenant creation. So any
new tenants are configured correctly at
creation time. And there is an API
available now. And some of the features,
hey, they're still in preview. But
that is it. As always, I hope this was
useful. Until next video, take care.