Homelab Walkthrough: Kubernetes Cluster on Orange Pi 6 Plus & HP ProBook

Transcript

0:01

Hello everyone. This video has been a long time coming. I wanted to do a quick run-through of my home lab. As you can see here, everything is displayed on Homepage, and even though my hardware is fairly modest, I'm able to run quite a bit without using too much of the available resources. Recently I upgraded from a Raspberry Pi 4 with 8 gigs of RAM to an Orange Pi 6 Plus, which you can see here. I will include links to the GitHub repo for my home lab as well as to the various hardware I'm using. This is actually a very new piece of hardware; I believe it was released in October or November of 2025, but it is running very well with my whole Kubernetes stack. I'm using Ubuntu 26 on it, and I will have a video going into more detail on that and the experience so far.

0:57

The other piece of hardware I have is an HP ProBook 430 G6, which you can get used for a fairly reasonable price on eBay, which is what I did. I think it cost me about 120 bucks, and it has 16 gigs of RAM and an i3 processor.

1:16

Here on my Homepage, you can see I have a lot that I'm testing out regarding AI and LLMs. Open WebUI is my main chat interface; I use it to test the various MCP servers I'm working on, as well as different models, and to try out some free offerings from different providers. I also have Ollama; Qdrant as my vector database; FreshRSS for RSS feeds; LibreChat as an alternative to Open WebUI; and I'm experimenting with Jupyter notebooks and with Phoenix. This is Arize Phoenix, which lets you hook up things like Open WebUI so you can go in and see the traces, spans, and other data that track the conversations you're having with your LLMs in Open WebUI.

2:02

I have a fairly standard monitoring and observability setup in Grafana and Prometheus as well. I can see my whole home lab, the database size for my Postgres database, and some MCP server monitoring, which also shows up when you configure it in Arize Phoenix.

2:26

Here is my Qdrant deployment, still a work in progress; I'm hoping to make a video on that soon. There's pgAdmin, which helps me administer my Postgres database, and Context Forge, which is more or less an MCP gateway with some authentication capabilities as well as virtual server capabilities, where I can specify the specific tools I want to expose from the MCP servers that I'm running. I also have MCPO, which I believe stands for MCP-to-OpenAPI; it's an open-source proxy for various MCP servers, so I use it for some out-of-the-box MCP servers, like Kubernetes and others that I'm working with. I also have a local deployment of AWX, which is an open-source front end for Ansible; I'm doing some testing there. I also have n8n, Jenkins, Loki, Coder, and various other things in my home lab that I'm using for testing. And I have Redis as my caching layer, which helps with the performance of Open WebUI and of Context Forge.

3:33

As for how this is all deployed: it's a full Kubernetes stack managed through Flux. So basically, everything lives in my code, which I can pull up here.
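The MCPO proxy mentioned earlier is typically pointed at its backend MCP servers through a small config file. A minimal sketch, assuming MCPO's Claude-style `mcpServers` config format; the server names and commands below are illustrative, not the ones from this lab:

```json
{
  "mcpServers": {
    "time": {
      "command": "uvx",
      "args": ["mcp-server-time"]
    },
    "kubernetes": {
      "command": "npx",
      "args": ["-y", "mcp-server-kubernetes"]
    }
  }
}
```

MCPO then exposes each configured server as OpenAPI-documented REST routes (for example under `/time` and `/kubernetes`), which is what allows a chat frontend like Open WebUI to call them as tools.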

3:47

All of this operates under a GitOps flow, where Flux syncs all the manifests, the Helm charts, and everything else in my Kubernetes home lab, which you can see here. I'm not going to go into too much depth today on how each of these is configured, but if I pull one up, you can see I'm using Kustomize, and it basically says: these are all the YAML configs that I want. Flux syncs that to my cluster and then deploys the pods accordingly. You can see all the pods running here. If you're familiar with Flux, you know generally how it works. But if I run this command, it pulls down everything from my GitHub repo and syncs it to my cluster. Whatever is in that code gets pushed as the config for the cluster: if I remove some config, it tears down those pods, and vice versa, if I add some new code, it spins them up from there. It's very functional, especially for a home lab. I only have one worker node and one control plane node.
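The GitOps flow described above can be sketched in two pieces: a Flux source and `Kustomization` that point the cluster at the repo, and a plain `kustomization.yaml` listing the manifests to apply. A minimal sketch; the repo URL, paths, and app names are assumptions, not the actual ones from this lab:

```yaml
# Flux source: watch a Git repo for changes (URL and branch are placeholders)
apiVersion: source.toolkit.fluxcd.io/v1
kind: GitRepository
metadata:
  name: homelab
  namespace: flux-system
spec:
  interval: 1m
  url: https://github.com/example/homelab
  ref:
    branch: main
---
# Flux Kustomization: reconcile the manifests under ./apps into the cluster
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: apps
  namespace: flux-system
spec:
  interval: 10m
  sourceRef:
    kind: GitRepository
    name: homelab
  path: ./apps
  prune: true   # deleting a manifest from Git tears the workload down
---
# apps/kustomization.yaml (separate file): the Kustomize list of YAML configs
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - open-webui/deployment.yaml
  - qdrant/statefulset.yaml
```

The command run in the video is presumably something like `flux reconcile kustomization apps --with-source`, which forces an immediate fetch-and-apply instead of waiting for the next sync interval; `prune: true` is what makes removed config tear down the corresponding pods.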

4:45

So it's nothing crazy, just for my own testing, but it does work very well. That being said, you can see the homepage; I just refreshed because I ran that Flux sync. Let me know if this was helpful, if you have questions about any of these particular items, or if you want me to make a more in-depth video on any of these pieces of my home lab. I do have plans for quite a bit more regarding MCP: I'll be releasing some videos related to the MCP servers I have running, as well as n8n workflows that hook into those MCP servers and into various LLM backends. And as always, I appreciate you guys watching. Thanks.

Summary

The video provides a tour of the speaker's home lab, showcasing the hardware and software used. The setup features an Orange Pi 6 Plus and an HP ProBook 430 G6, running a modest yet capable system. The lab is primarily used for testing AI and LLMs, utilizing tools like Open WebUI, Ollama, Qdrant, and Arize Phoenix for tracing conversations. Monitoring is handled by Grafana and Prometheus. The entire system is deployed as a Kubernetes stack managed via Flux, following a GitOps approach. The speaker also shares plans for future videos focusing on MCP servers, n8n, and LLM backends.
