I guess I’m old enough that the first few web servers I interacted with, either I could personally kick them, or I could see that they were running on the same kind of hardware as the ones I could kick. For me, it’s the “cloud” that’s disorienting and disturbing.
I have different memories of that era. My home connection was dial-up, so running any server locally required me to leave a phone line occupied (with per-minute charges) all of the time. In addition to the hardware cost, line rental and calls cost on the order of £30/month, which is more than double that when adjusted for inflation. Oh, and the fast modems were asymmetric, so you needed to buy the expensive variants with the asymmetry the other way around to be able to saturate even a single client’s connection. You could buy an ISDN leased line for a few hundred a month and that would give you an always-on 128 kb/s connection, but the ISP needed one for the other end as well, so you were looking at close to a thousand quid each month (the cost of a fairly decent desktop).
When my home connection became fast enough that I could consider using it for a server, ISPs started blocking the useful inbound ports and charging more for a static IP. We had a fairly expensive cable modem with 128 kb/s up, 1 Mb/s down. There was no useful QoS in the modem, so saturating the upstream would drop TCP ACKs and cause downloads to slow to a crawl. At the same time, I could rent a dedicated server for about £100/month at the entry level. There were shared hosting offerings that were cheaper, but they either had poor security, serious limitations on what you could run, or both.
Cloud offerings were a game changer for me. For a tenth the cost of a dedicated server, I could get a virtual server with around a tenth the power, which was fine because most of my server things were not particularly intensive workloads, they just required a lot of always-on bandwidth. These cheap VPSs were on networks a lot faster than a typical home connection and so could actually give clients a good experience even if two or more tried to use it at the same time.
I did rent a dedicated server for a few years from a company that hosted Mac Minis (PowerPC, mine ran OpenBSD). This was pretty expensive (I bought the machine up front and then it was about $20/month for the hosting) but cheaper than any of the alternatives. Unfortunately, the hardware wasn’t designed for always-on operation and the disk failed after a year, and I needed to restore everything from backups (anything with RAID was a lot more expensive). I moved to a Gandi VPS for a lower price and much better reliability. I can now buy a VPS that can run a few low-volume server applications for around $5/month. An RPi is a bit cheaper, but then I have to think about the wear on its SD card, which will not last long with any write-heavy workload.
Being able to run a server on professional hardware by renting a tiny corner of the real machine but still have multi-gigabit pipes and RAID storage made hosting your own server workloads far more accessible to those of us without unlimited funds.
Oh, I’m actually not talking about a web server on a home machine, in this case. The WWW went public while I was in college, and the university web server and math/CS department web server were both in rooms I had at least seen, and the college newspaper web server was in the editor’s office (on a Quadra running A/UX).
That makes sense. The first server I had a proper shell account on was run by the Swansea University Computer Society. If I remember correctly, it was a 133MHz Pentium that ran the web and mail servers for the society members (over a hundred active users). We replaced it with a 1GHz AMD Athlon a couple of years later. This was only a few hundred metres from the first ever web server in the UK (someone in the physics department brought the source code for the web server back from a visit to CERN and created a web site for the surfing society). Having a room paid for by someone else that had a 10 Mbit connection to the JANET backbone made this possible; I definitely couldn’t have done it as an individual.
My web server is in the corner of my room. I do appreciate the feeling of “Yeah, packets come HERE.”
You can do a backup elsewhere if you like, but it’s fun to have at least some traffic hitting your home office.
This is very achievable if you lower your standards.
The only real issue mentioned in this post is the routable IP address. I get public traffic to my self-hosting box by connecting it over a VPN to a cloud machine that acts as a reverse proxy, and that isn’t a turn-key process.
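For the curious, a minimal version of that shape of setup can be sketched with a plain SSH reverse tunnel rather than a full VPN. The hostname and ports below are hypothetical, not taken from the comment above:

```shell
# Run on the home server: hold open a reverse tunnel so the cloud box's
# local port 8080 forwards back to the home web server on port 80.
# -N: no remote command, just forwarding. The hostname is hypothetical.
ssh -N -R 8080:localhost:80 user@cloud.example.com

# On the cloud box, a reverse proxy (nginx here, as one option) then
# forwards public traffic into the tunnel, e.g. in a server block:
#   location / { proxy_pass http://127.0.0.1:8080; }
```

In practice you would wrap the `ssh` invocation in something like autossh or a systemd unit so the tunnel survives drops; a VPN such as WireGuard plays the same role with more moving parts.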
I like the thought of running the server(s) for my domain locally at home, but on the other hand I’m wary of exposing things running in my home to the Internet. I suppose a bit of creative network segmentation would lessen the risks, but still… It’s sad that this is an issue, but such is life on the modern Internet.
I feel your worries are exaggerated. There’s a much higher chance of getting hacked through an unpatched router than through a static website with fail2ban-protected SSH. Maybe the risk is real with Nextcloud or other more complex suites.
My server(s) are in the corner of my room!
Tire Fire Heavy Industries (@tirefireind) lives in my office and my garage. It serves my 3d printer, blog and git hosting. Maybe some day I’ll get around to building shell/compute hosting for friends. It started life as an entire half rack of gear, but is down to a few mini servers with, ahem, my 2011 laptop doing the lion’s share of the lifting.
I’d posit that the reason more folks don’t do this is uplink quality. “High speed” commodity residential cable internet here in the US from a major vendor is maybe 100 Mb/s down and 35 Mb/s up with a dynamic IP. Sure, in some pockets you can get “gigabit”, but unless you’re fortunate enough to be able to get fiber to the home (shout out to CenturyLink, who had 2 Gb/s symmetric fiber connectivity to my unit in an old apartment), an uplink speed better than a few hundred Mb/s is extremely unusual. This in a world where your intranet is probably all gigabit. Heck, 10 GbE and 25 GbE exist and in the grand scheme of things aren’t THAT expensive.
Commercial isn’t that much better. 100 Mb/s symmetric (10x+ slower than your intranet!) on commercial cable is commonly available, but probably runs you $300/mo+. Yes, you pay probably twice as much for less downlink because of the class of service, static IP and support. Sometimes you can’t even get THAT due to location. And at best you’ve probably only got one fragment of the Ma Bell breakup as your local cable provider to do business with, so it’s not like there’s much competition or consumer choice in the market, unless you think dial-up, DSL, 5G bridges or satellite compete with cable or fiber.
In a world where maybe you’ve got 35 Mb/s up, sure, you can run some TTYs or serve HTML or a small art website from your home. You can probably even survive being slashdotted (do we even call it that anymore?). But you’ve no real hope of streaming much video or hosting heavier content. I’m constantly surprised that the webcam feed from my 3d printer works when I’m away from home, and that’s mostly thanks to my being its only user.
For that you’ve got to attach the server in your house to someone else’s high-bandwidth hosting which may or may not be CDN optimized.
So EVEN IF you’re technical enough to run a server at home and wrangle automated DNS updates to deal with a non-fixed residential IP, you can’t eke functionality out of an entirely self-hosted website that’s on par with what we’re used to getting for “free” from FB/Twitter/Imgur/YouTube/… when it comes to multimedia. Not because your home server can’t push gigabit, but because it’s far more cost-effective to rent a $5 VPS in DigitalOcean and use their transit than to get equivalent transit to the corner of your room.
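For what it’s worth, the DNS-wrangling step can be scripted if your DNS host accepts RFC 2136 dynamic updates (`nsupdate` ships with BIND). A rough sketch, where the zone, key file and record names are all hypothetical, and ifconfig.me is just one of several what’s-my-IP services:

```shell
# Discover the current public address from outside the NAT.
IP=$(curl -s https://ifconfig.me)

# Push a TSIG-signed dynamic update: drop the old A record, add the new
# one with a short TTL so changes propagate quickly. Names are examples.
nsupdate -k /etc/ddns/home.key <<EOF
server ns1.example.net
zone example.net
update delete home.example.net A
update add home.example.net 300 A ${IP}
send
EOF
```

Run from cron every few minutes, this keeps the record roughly current; providers without RFC 2136 support usually offer some HTTP API that fills the same role.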
Not much more than a decade ago I was still running real money-making websites for a real money-making business on a pair of bonded T1s. So I’ll turn things around and say that what most people can get at home (for lower cost, with less hassle) these days is more than 10x that. And that $5 DO droplet has a 500GB/mo cap, which works out to about 1.6 Mbit/s averaged over the month, i.e. about equal to one T1. If you want to burst to a hundred Mbit/s, you’d better be doing nothing at all for most of the month.
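As a sanity check on that averaging (a quick sketch, taking 500 GB as 500 × 10⁹ bytes over a 30-day month):

```shell
# Monthly transfer cap expressed as a sustained average rate.
bytes=$((500 * 1000 * 1000 * 1000))   # 500 GB cap
seconds=$((30 * 24 * 3600))           # 2,592,000 s in a 30-day month
bits_per_sec=$((bytes * 8 / seconds))
echo "${bits_per_sec} bit/s"          # ~1.54 Mbit/s; a T1 is 1.544 Mbit/s
```

Which is indeed almost exactly one T1’s worth of sustained throughput.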
I’m playing with this now: running Kubernetes across a cheapish VPS and a beefier server at home, then doing some node-affinity stuff to keep the backends local but proxy with caching via the VPS.
it’s kinda working?
Fascinating read, because it’s not often that I can say “I was there and I am 0% nostalgic about this”. I mean, I’ve had my servers running next to my bed (noisy, hot) and in the basement (much better, but that was 10Base-T), and the only reasons I want something local these days are a) it’s much cheaper than hosting elsewhere and I can shut it down at will, and b) faster access. If I had FTTH I don’t think I’d host a lot at home anymore.
Feels like this is a more meaningful experience of the internet. I especially like the solar powered web servers.
Thanks to Cloudflare Tunnels, my web server is sitting in my laundry room closet and I don’t have to worry about exposing my home network to DDoS attacks or anything. It’s not quite the same as just having the packets go straight to my home IP, but it’s a decent substitute in my mind.
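For anyone curious, the rough shape of that setup with the `cloudflared` daemon looks like the following (tunnel name and hostname are hypothetical; check the current Cloudflare docs for exact flags). The key point is that the daemon only makes outbound connections to Cloudflare’s edge, so nothing on the home network listens for inbound traffic:

```shell
# Authorize cloudflared against your Cloudflare zone (opens a browser).
cloudflared tunnel login

# Create a named tunnel; this mints a tunnel ID and a credentials file.
cloudflared tunnel create laundry-closet

# Point a public hostname at the tunnel via a DNS record.
cloudflared tunnel route dns laundry-closet www.example.com

# ~/.cloudflared/config.yml maps public hostnames to local services, e.g.:
#   tunnel: laundry-closet
#   credentials-file: /home/me/.cloudflared/<tunnel-id>.json
#   ingress:
#     - hostname: www.example.com
#       service: http://localhost:80
#     - service: http_status:404

# Start the outbound-only connector.
cloudflared tunnel run laundry-closet
```

The trade-off the comment alludes to: Cloudflare terminates TLS and sees the traffic, in exchange for DDoS absorption and no exposed home IP.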
(mandatory disclosure: I work for Cloudflare)