New Beginnings
Welcome to my shiny new blog! This is my first blog post of any kind in over 5 years, and the first ever where I try to document a personal Software Development project and the context leading up to it, so I appreciate it will be somewhat disorganised and potentially boring to read. However, I thought it would be nice to come out of the cave and start sharing a bit of what I spent the last 2 years doing in terms of free-time software projects, a bit of what happened in the last month, and why I’m writing this/what’s next.
So, without further ado, here’s my story.
The (boring) context
I had been working on a Backyard Monsters rewrite with the help of a friend, a project that has been around for over 2 years now and has not seen the light of day yet - the first commit on the GitHub repo dates back to August 2nd, 2022. Since late 2024, the one thing that kept flying around my mind while I worked on the code was to (self-)host the server somewhere outside of my PC, just for the sake of hosting it, even though it was far from finished - at present (19th March 2025), only the most basic things are set up if you look at it from a potential player’s perspective:
- There is a website…
- … that kindly asks you to log in/register in order to gain access…
- … then you set up your user account…
- … then you return to the homepage and suddenly you see it load a game screen…
- … and BOOM! You see a very pretty, 2010s-Facebook-game-that-you-played-with-your-grandma’s-account-looking pre-generated base with a few free resources on the screen! And also a menu from which you can add more buildings (if you had enough resources, which you don’t)…
… and that’s about it.
Behind that simple, most basic game client there’s actually a backend that, in theory, can also manage resource production & banking, monster production & research, and moving, upgrading, recycling and repairing buildings. Unfortunately, I don’t think the appetite for playing a game by sending manual REST API requests is very big. If anyone reading this can’t wait to play the game, however, there is a project run by some cool people that’s basically an Adobe Flash launcher for the original game (yes, the original files) with a custom backend. Personally, I have not tried it because… well… I don’t want a Flash executable anywhere near my PC, but that’s just me.
Anyways, I wanted to deploy this thing somewhere. Azure App Service? Too easy, too limited (also, the Azure account I used to have got disabled). A free VM (if I remember correctly, pretty much the only way to host anything that’s free for life)? Too much to worry about. Google? Oracle? I had burned through all the free tiers and now there’s no way for me to create an account.
So I just never got around to deploying it anywhere.
And then I fell victim to the YouTube algorithm and came across a video that mentioned Cloudflare Tunnels as a way to expose services hosted on your local network to the big wild Internet. Around the same time, I happened to need to “renew” the domain of a GoDaddy website I had been tasked with managing 5 years ago (spoiler: I had lost the previous domain and the hosted files related to it, and a random guy had bought it with the sole intention of reselling it - as usual). After a quick search for similar domains, it turned out that Cloudflare offered the same domains as other providers, but for less money - plus free static site hosting with Cloudflare Pages - and with those two things they had my attention.
The domain “renewal” worked fine and was a quick and easy thing to do - it all took 3-4 days in mid-January. But I still wanted to do more with Cloudflare than just chucking a static website in there, and the crazy IT guy part of me, which had been hibernating for a few years (the game project aside), started coming up with ideas.
What if I got a new, custom email address on my own domain? The Gmail address I had had since I was 10 years old was cool, but a bit too funny-looking, let’s say. No sooner said than done: I bought a domain and set it up in an evening.
But why have a domain if you’re not using it for anything other than an email address? It was early February at this point, Valentine’s Day was fast approaching, and I had a Raspberry Pi I was willing to put to some good use.
The “experiment”
So, Valentine’s Day… How do I explain this. The Spanish meme community is just amazing. One year (2014, if I remember correctly), someone posted a meme with a scene from The Lord of the Rings where Sam can be seen walking behind Frodo, with the caption “Sam Va Lentin” (which translates to “Sam is walking slowly”). For the non-Spanish speakers: Valentine’s Day is called Día de San Valentín; I think you can connect the dots pretty easily. Every year, the original meme returns on February 14th (and the week before) to brighten everyone’s day, with new and increasingly surreal variations, to the point where someone compiled a Google Drive folder with over 600 of them.
The night between the 14th and 15th of February I happened to have quite an intense desire to do something silly with those 600 images, and what better idea than to download that folder and serve the memes on a dedicated site? What if, for example, a different meme was served every time the page was reloaded? Well, I got my hands dirty, made use of some fresh .NET skills, and within around 20 minutes I had it working how I wanted. But the implementation wasn’t the challenging part; it was merely an excuse to have a real service with which I could do an actual deployment.
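For the curious, the core of the idea fits in a handful of lines. Below is a minimal sketch of what a “random meme per reload” endpoint looks like in ASP.NET Core - the folder name, route and file handling are illustrative, not the actual implementation:

```csharp
// Program.cs - serve a random meme from a local folder on every page load.
// The "memes" folder name is an illustrative assumption.
var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

// Index the ~600 downloaded memes once, at startup
var memes = Directory.GetFiles("memes")
    .Where(f => f.EndsWith(".jpg") || f.EndsWith(".png"))
    .ToArray();

app.MapGet("/", () =>
{
    // Pick a different meme on every reload
    var pick = memes[Random.Shared.Next(memes.Length)];
    var contentType = pick.EndsWith(".png") ? "image/png" : "image/jpeg";
    return Results.File(Path.GetFullPath(pick), contentType);
});

app.Run();
```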
A week of re-learning Docker…
And so a magical adventure started. Getting the program running in Visual Studio on Windows was one thing; being able to run that same program on a Raspberry Pi (without just installing the runtime on it) was quite another. The day after writing the original code, I worked on setting up a Docker image for it. I had only ever come across Docker once, long ago, and in all honesty I did not understand much about it - I just remember getting a container working on Docker Desktop on the same PC.
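For reference, the image definition for an ASP.NET Core app tends to follow the multi-stage pattern from the .NET Docker docs: build with the SDK image, run on the smaller runtime image. A minimal sketch (project name and .NET version are illustrative):

```dockerfile
# Stage 1: compile and publish the app using the full SDK image
FROM mcr.microsoft.com/dotnet/sdk:8.0 AS build
WORKDIR /src
COPY . .
RUN dotnet publish SamVaLentin.csproj -c Release -o /app

# Stage 2: run the published output on the lighter ASP.NET runtime image
FROM mcr.microsoft.com/dotnet/aspnet:8.0
WORKDIR /app
COPY --from=build /app .
ENTRYPOINT ["dotnet", "SamVaLentin.dll"]
```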
It was time for that to change, so I started doing a bit of research on essential Docker concepts, as well as the networking side of things. I managed to understand how the Docker support for .NET worked and got a container working locally. After quite a few attempts (and failed commits on the main branch), the GitHub Action for publishing the image to the Docker Hub registry seemed to work fine. (Note: improving my workflow for developing & debugging these CI/CD builds is at the top of my to-do list, as I always run into the same issue of having to make dozens of very small commits on the repository just to get a build that works, instead of being able to test the builds locally before committing. I might give act or a self-hosted runner a try.)
As I said, the image publishing worked fine, and pulling & running it on my Windows PC showed no issues. So I thought I would just pull it onto the Raspberry Pi and - because I had heard Docker containers worked out of the box anywhere you put them - it should just work, right?
(Short-lived) cross-platform nightmares
Well, actually, no. Because my Windows PC was based on an AMD64 CPU and the RPi was a Linux ARM64-based device, the container just would not run. Luckily, the .NET base image is multi-platform, so it did have the capability of “running” on different OS + CPU architecture combinations - just not with the Docker build action I was using at the time, which only produced an AMD64 image. So, back to changing the GitHub Action a thousand times, and thanks to the docker buildx command, the SamVaLentin server image could now be run on the Raspberry Pi!
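In GitHub Actions terms, the fix boils down to something like the following steps, assuming the standard Docker actions (the image tag is illustrative):

```yaml
# Cross-build the image for both architectures in one go
- uses: docker/setup-qemu-action@v3      # QEMU emulation, needed to build ARM64 on an AMD64 runner
- uses: docker/setup-buildx-action@v3    # enables the buildx builder used by the step below
- uses: docker/build-push-action@v6
  with:
    platforms: linux/amd64,linux/arm64   # one manifest covering both the PC and the RPi
    push: true
    tags: example/samvalentin:latest     # illustrative Docker Hub tag
```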
On February 16th, 363 days early for the next Valentine’s Day, the container was finally up and running on the Raspberry Pi (on the Docker engine), so it was time to show it to friends, family… and whoever was curious enough to access it. I opened the first endpoint on the Cloudflare Tunnel that I had set up, accessible at https://samvalentin.alejandrob.uk, and the day after that I could see some basic stats provided by Cloudflare, including which countries the visitors were from: mostly Spain and the United Kingdom, which made sense. I was thrilled that the silly meme site was out there in the big wild Internet (yes, I said that again).
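On the tunnel side, exposing the service comes down to an ingress rule in cloudflared’s configuration. A sketch, with a placeholder tunnel ID and an assumed local port:

```yaml
# cloudflared config.yml - route public hostnames to local services
tunnel: <tunnel-id>                       # placeholder for the real tunnel UUID
credentials-file: /home/pi/.cloudflared/<tunnel-id>.json
ingress:
  - hostname: samvalentin.alejandrob.uk
    service: http://localhost:8080        # assumed port published by the container
  - service: http_status:404              # mandatory catch-all rule
```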
I checked a couple of times a day to see if anyone else had visited the site, and there was nothing to be seen, until on the 3rd day a very regular pattern of visits started to emerge. Exactly 3 visits per day, at regular intervals, coming from (drumroll)… the Russian Federation! I believe these were automated crawlers that just knock on the door, ask “anyone there?” and leave with no bad intentions. However, that made me think of adding a couple of elements to the system.
Request Logging and Monitoring
Why rely exclusively on Cloudflare’s analytics when you can have your own monitoring?
Initially, the only logging in the program was the default .NET HTTP logging, with no log file persistence out of the box. However, .NET 6 and newer versions offer a logger that can store the logs in files in W3C format - it was a matter of adding a couple of lines to the Program.cs file configuring which fields should be logged and to which files. Crucially, it also allowed logging the X-Forwarded-For header, as the client IP received by a program running behind a Cloudflare Tunnel is a Cloudflare IP, not the original requester’s.
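The setup looks roughly like this - a minimal sketch using ASP.NET Core’s built-in W3C logger, with illustrative field choices and paths (note that logging additional request headers such as X-Forwarded-For requires .NET 7 or newer):

```csharp
using Microsoft.AspNetCore.HttpLogging;

var builder = WebApplication.CreateBuilder(args);

builder.Services.AddW3CLogging(options =>
{
    // Choose which standard W3C fields end up in the log files
    options.LoggingFields = W3CLoggingFields.Time
                          | W3CLoggingFields.ClientIpAddress
                          | W3CLoggingFields.UriStem
                          | W3CLoggingFields.ProtocolStatus;

    // Behind a Cloudflare Tunnel the connecting IP is Cloudflare's,
    // so log X-Forwarded-For to keep the original requester's IP
    options.AdditionalRequestHeaders.Add("X-Forwarded-For");

    options.LogDirectory = "/var/log/samvalentin"; // illustrative path
});

var app = builder.Build();
app.UseW3CLogging();
app.MapGet("/", () => "Hello!");
app.Run();
```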
In addition to those logs, it would be great to have a dashboard with coloured graphs that go up and down and up again - that is, insightful statistics about the health of the server and the volume and duration of requests in real time. So I found and followed some documentation & tutorials on using Prometheus and Grafana (both running on Docker) and integrating them with .NET, as well as some pre-built dashboards that came in very handy.
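The exact integration isn’t shown here, but with the popular prometheus-net.AspNetCore NuGet package (an assumption on my part, as it is the usual choice), the .NET side amounts to two lines:

```csharp
using Prometheus;

var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

app.UseHttpMetrics(); // record request counts and durations per endpoint
app.MapMetrics();     // expose everything on /metrics for Prometheus to scrape

app.MapGet("/", () => "Hello!");
app.Run();
```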
At this point, all the things I had in mind in terms of functionality (both user-facing and for myself) were complete. However, some questions were still unanswered: What if I ever wanted to update the Docker image again and deploy the changes - how could that be done without downtime? What if the container, for whatever highly unlikely reason, stopped running or became unhealthy? Or, in general, how could all these containers (.NET, Prometheus Server, Grafana) be run automatically, without requiring me to enter the same shell commands time and time again?
… and nearly a month of love-hate towards Kubernetes
Ah, container orchestration. Moving from the manual, yet simple, Docker commands to Kubernetes was a blessing and a curse. It was clearly the right thing to do, but the journey towards the final setup was long and tedious. I found there to be a large gap between the beginner documentation (there are plenty of tutorials on the Kubernetes website itself) and that of the more advanced, in some cases even niche, features. The odd handful of Q&As on online forums were, more often than not, a distraction rather than a pointer.
All the issues can be traced back to two aspects of how containers run on Kubernetes by design: file system persistence on the host, and networking. The latter was only an issue for as long as it took me to learn about Services and understand what “inside” and “outside” the K8s cluster meant. Understanding file permissions, security contexts, and the thousands of options around Kubernetes volume mounts, however, required a deeper dive. I will not go into too much detail as the post is already way too long, but these were the main issues that took several sessions (and breaks in between) across 3 weeks to resolve (a sketch of the kind of configuration involved follows the list):
- Making the .NET W3C log files from the pods accessible directly on the host filesystem.
- At the same time, not losing the capability to produce these logs.
- Setting up the persistent volume necessary for the Grafana container to work.
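As an illustration of where the first two items ended up, here is a minimal sketch of the relevant Deployment fragments; the names, paths and fsGroup value are illustrative, and Grafana’s volume followed the same idea with a PersistentVolumeClaim instead of a hostPath:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: samvalentin
spec:
  replicas: 2
  selector:
    matchLabels:
      app: samvalentin
  template:
    metadata:
      labels:
        app: samvalentin
    spec:
      securityContext:
        fsGroup: 1000                         # lets the non-root app user write to the mounted volume
      containers:
        - name: server
          image: example/samvalentin:latest   # illustrative image name
          volumeMounts:
            - name: w3c-logs
              mountPath: /var/log/samvalentin # where the app writes its W3C logs
      volumes:
        - name: w3c-logs
          hostPath:                           # makes the logs visible directly on the host
            path: /srv/samvalentin/logs
            type: DirectoryOrCreate
```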
Finally, after all three items were resolved, it was time to revisit the Prometheus scraping configuration. The old static configuration was not appropriate for the new Kubernetes solution, as it would have required manually adding the IP address of each running pod to the configuration, and updating them whenever new pods were created. Luckily, there was a mention of Kubernetes service discovery in the Prometheus configuration docs, and a very helpful article full of early-2000s-geeky-dev-vibe GIFs explained every step necessary to set it up (and, along the way, taught me how to use Kubernetes ConfigMaps).
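The usual shape of that setup - assuming the common prometheus.io/* annotation convention rather than the exact configuration used here - is a pod-discovery job in Prometheus plus matching opt-in annotations on the pod template (plausibly the “4 lines” mentioned below):

```yaml
# prometheus.yml - discover pods dynamically instead of listing IP addresses
scrape_configs:
  - job_name: "kubernetes-pods"
    kubernetes_sd_configs:
      - role: pod                       # enumerate every pod in the cluster
    relabel_configs:
      # only keep pods that opt in via the prometheus.io/scrape annotation
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
        action: keep
        regex: "true"

# ...and on the .NET pod template (port and path are illustrative):
#   annotations:
#     prometheus.io/scrape: "true"
#     prometheus.io/path: "/metrics"
#     prometheus.io/port: "8080"
```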
With all that now sorted, the SamVaLentin service was up and serving requests behind the Cloudflare Tunnel, logging the request details, and exposing the Prometheus metrics; the Prometheus server was now able to automatically scrape from all running pods of the service with just 4 lines of configuration in the .NET K8s service YAML file; and Grafana was up and running too, displaying graphs for all the metrics collected by Prometheus.
It was, for now, the end of the experiment.
So what now?
Well, first… starting this blog. Having gone through the process of building the system, I felt the learning (or re-learning) journey was worth documenting, along with why I even started it in the first place. As I mentioned in the introduction, the circumstances that made this project happen were a succession of “happy coincidences” - it is hard to give a concrete explanation.
However, the start of it was not as important as what the SamVaLentin project evolved into - a sandbox for me to play around with aspects of Software Engineering that I previously had little practical exposure to, but that are just as important as the functionality of the system itself: containerisation, container orchestration, CI/CD, and monitoring. It was a proof of concept, an example of how to take a .NET web server application (of which I had built a few) and set up the tooling and environment to turn it into a publicly accessible service (of which I had built none).
Similarly, I did not have a clear picture in my mind of what this post should be when I started writing. I guess it could have been titled “The Diary of Alejandro, September 2022 to March 2025”, but it is more than that. It marks the switch from working on a project that was very simple development-wise but came with extensive learnings on infrastructure, to starting a new, more complex project which, when the time comes, can be easily deployed by building on what has already been done. The SamVaLentin project makes no sense on its own; it is merely a stepping stone.
That new project will start as soon as I publish this first blog post, and my intention is to document any progress made when appropriate.
If you made it here without falling asleep - congratulations! I will try my best to keep any future posts shorter than this.
Until then, take care - and keep building.