Homelab Walkthrough: Kubernetes Cluster on Orange Pi 6 Plus & HP ProBook
Hello everyone. This video has been a long time coming. I wanted to do a quick run-through of my home lab. As you can see here, everything is displayed on Homepage, and even though my hardware is fairly modest, I'm able to run quite a bit without using too much of the available resources. So recently I upgraded from a Raspberry Pi 4 with 8 gigs of RAM to an Orange Pi 6 Plus, which
you can see here. I will include all the
links to the GitHub for my home lab as
well as the various hardware that I'm
using. So when you look here, this is actually a very new piece of hardware. I believe it was released in October or November of 2025, but it is running very well with my whole Kubernetes stack. I'm using Ubuntu 26 on it, and I will have a video going into more detail on that and the experience so far. The other piece of hardware that I have is an HP ProBook 430 G6, which you can get for a fairly reasonable price used on eBay, which is what I did. I think it cost me about 120 bucks, and it has 16 gigs of RAM and an i3 processor.
So yeah, in here on my homepage, you can see I have a lot that I'm testing out regarding AI and LLMs. Open WebUI is my main chat interface, which I use to test various MCP servers that I'm working on, as well as different models, and to test out some free offerings from different providers. I also have Ollama; Qdrant as my vector database; FreshRSS for some RSS feeds; and LibreChat as an alternative to Open WebUI. Also, I'm
experimenting with Jupyter Notebooks and
also with Phoenix. This is Arize Phoenix, which allows you to hook up things like Open WebUI, and you can actually go in here and see the traces, spans, and other data you have, to track the conversations you're having with your LLMs in Open WebUI. I have a pretty standard monitoring and observability setup with Grafana and Prometheus as well. I can see my whole home lab; I can see the database size for my Postgres database (or whatever databases I have), as well as some MCP server monitoring here, which also shows up when you configure it in Arize Phoenix.
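As a sketch of how scraping a service like that is typically wired up, here is a minimal ServiceMonitor, assuming the Prometheus Operator (as in kube-prometheus-stack) is in play, which the video doesn't specify. The names, labels, namespaces, and port are illustrative assumptions, not taken from the actual cluster:

```yaml
# Hypothetical ServiceMonitor telling Prometheus to scrape an MCP server's
# metrics endpoint; names, labels, and port are illustrative assumptions.
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: mcp-server
  namespace: monitoring
spec:
  selector:
    matchLabels:
      app: mcp-server        # must match the Service's labels
  namespaceSelector:
    matchNames:
      - mcp
  endpoints:
    - port: metrics          # named port on the Service
      path: /metrics
      interval: 30s
```

With a setup like this, the scraped series show up in Grafana dashboards the same way the node and Postgres metrics do.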
Here is my Qdrant deployment, still a work in progress; I'm hoping to make a video on that soon. pgAdmin just helps me administer my Postgres database. ContextForge is more or less an MCP gateway with some authentication capabilities, as well as some virtual server capabilities where I can specify the specific tools I want to expose from the MCP servers that I'm running. I also have MCPO, which I believe stands for MCP-to-OpenAPI; it's an open-source proxy for various MCP servers, so I use that for some out-of-the-box MCP servers like Kubernetes and others that I'm working with. I also have a local deployment of AWX, which is an open-source front end for Ansible; I'm doing some testing there. I do also have n8n, Jenkins, Loki, Coder, and various other things in my home lab that I'm using for testing. I have Redis as my quick caching software, to help with some of the performance of Open WebUI and also of ContextForge. And then, regarding how this is all deployed: it's a full Kubernetes stack managed through Flux. So basically, everything that I have is in my code, which I can pull up here.
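To give a concrete picture of the GitOps wiring described here, below is a minimal sketch of the two Flux objects involved: a GitRepository pointing at the repo and a Kustomization that syncs a path from it. The repo URL, branch, path, and intervals are hypothetical, not copied from the actual config:

```yaml
# Hypothetical Flux source + sync objects; URL, branch, and path are assumptions.
apiVersion: source.toolkit.fluxcd.io/v1
kind: GitRepository
metadata:
  name: homelab
  namespace: flux-system
spec:
  interval: 5m
  url: https://github.com/example/homelab   # placeholder repo URL
  ref:
    branch: main
---
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: apps
  namespace: flux-system
spec:
  interval: 10m
  prune: true          # removing config from Git tears the pods down
  sourceRef:
    kind: GitRepository
    name: homelab
  path: ./apps
```

Note `prune: true`, which is what makes deleted config tear down the corresponding pods; running `flux reconcile kustomization apps --with-source` forces an immediate sync instead of waiting for the interval.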
All of this is operating under a GitOps
flow where it uses Flux to sync all the
manifests and the Helm charts and
everything else that I have in my
Kubernetes home lab which you can see
here. I'm not going to go into too much depth today on how each of these is configured, but if I just pull one up, I'm using Kustomize, and it basically says: these are all the YAML configs that I want. Flux will sync that to my cluster and then deploy the pods accordingly. So you can see all the pods are running here. If you're familiar with Flux, you know generally how it's working, but if I run this command, it's going to pull down everything from my GitHub repo and then sync it to my cluster. Whatever is in that code gets pushed as the config for this cluster. So if I remove some config, it's going to tear down those pods, and vice versa: if I add some new code, it's going to spin it up from there. So it's very functional, especially for a home lab. I only have one worker node and one control plane node.
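As an illustration of the Kustomize pattern mentioned above, a kustomization.yaml for one app directory might simply enumerate its manifests; the filenames and namespace here are hypothetical, not taken from the actual repo:

```yaml
# Hypothetical kustomization.yaml for a single app directory.
# Flux applies whatever resources are listed here to the cluster.
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: open-webui        # assumed namespace for this app
resources:
  - namespace.yaml
  - deployment.yaml
  - service.yaml
  - ingress.yaml
```

Adding or removing a file from `resources` is what makes Flux spin pods up or tear them down on the next sync.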
So, it's nothing crazy, just for my own testing, but it does work very well. That being said, you can see the homepage; I just refreshed it because I ran that Flux sync. But yeah, let me know if this was helpful. Let me
know if you have questions about any of
these particular items or if you want me
to make a more in-depth video on any of
these pieces of my home lab. I do have
plans for quite a bit more regarding
MCP. I'll be releasing some videos related to MCP servers that I have running, as well as n8n flows that hook into those MCP servers and various LLM
backends. And yeah, as always, I
appreciate you guys watching. Thanks.
The video provides a tour of the speaker's home lab, showcasing the hardware and software used. The setup features an Orange Pi 6 Plus and an HP ProBook 430 G6, running a modest yet capable system. The lab is primarily used for testing AI and LLMs, utilizing tools like Open WebUI, Ollama, Qdrant, and Arize Phoenix for tracing conversations. Monitoring is handled by Grafana and Prometheus. The entire system is deployed as a Kubernetes stack managed via Flux, following a GitOps approach. The speaker also shares plans for future videos focusing on MCP servers, n8n, and LLM backends.