Previous discussions here, here, and here
All the things.
And by that, I mean I’m hosting all the little web apps I want to share with a few people in my basement instead of dropping them onto a cloud VPS. My VPS zoo got too big in 2019 and for 2020 I started driving it the other direction.
I have a single VPS that runs a reverse proxy and a wireguard interface. VMs in my basement connect over wireguard to the VPS, which then proxies traffic back over wireguard. No holes get punched in the home firewall.
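A minimal sketch of what the VPS side of such a WireGuard link might look like; the keys, addresses, and port here are placeholders, not the author’s actual config:

```ini
# /etc/wireguard/wg0.conf on the VPS (placeholder keys and addresses)
[Interface]
Address = 10.0.0.1/24
ListenPort = 51820
PrivateKey = <vps-private-key>

[Peer]
# the home VM: no Endpoint is set here because the home side dials out
# to the VPS, which is why no hole needs punching in the home firewall
PublicKey = <home-vm-public-key>
AllowedIPs = 10.0.0.2/32
```

The reverse proxy on the VPS then forwards each exposed app to the peer’s tunnel address, e.g. `proxy_pass http://10.0.0.2:8080;` in an nginx server block.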
Here’s where I wrote down the general approach.
Here are the steps I take to expose a new thing.
For things that don’t use much bandwidth and don’t need perfect uptime, I love this.
It’s the way to go: never put stuff directly onto the internet. It’s so easy to restore a haproxy config, or even a simple PPTP server, and get things rolling in minutes. A $2 a month VPS is enough for a few web sites, and even my Exchange server, which is connected via cellphone!
Local copies, local backups. “Enterprise” software is dirt cheap second hand, just as the “free” stuff is.
I didn’t mention this in my below post, but I use the same trick of proxying services running on my physical hardware to a cloud VPS over wireguard, to avoid exposing my home IP address. I actually have several such VPSs, since that allows me to make publicly available services I use in different social contexts, without someone being able to notice that my.realname.com and my.internet.pseudonym.com are both pointing at the same hosting provider IP address.
That said, I’m paying somewhat more than $2/month, and I’m curious where you’ve been able to find that price. $5/month seems to be the standard price across several VPS providers for their lowest-end VPS with a gig of RAM. The absolute cheapest I’ve seen is a vultr VPS at $2.50/month, which only has an IPv6 address and isn’t available at all their data centers. It wouldn’t surprise me if they’re making so little money off of it that they’d be prone to discontinuing it in the future. If you just have one VPS there’s not much difference between $2 and $5 a month, but since I have several, the costs do start to add up.
Depending on how much traffic you need to handle, Oracle free VMs could work as your exit nodes: https://matrix.org/docs/guides/free-small-matrix-server#get-a-free-server
The smallest entry on Hetzner Cloud’s list is a 3€/month machine.
It’s not a VPS, but you can deploy a small VM on https://fly.io which should be enough to run wg.
This is super interesting, chur
With any luck, my compiler will be self hosting this year.
Hah, that’s a badass answer. 😎
These days, as little as possible. I had a server (Xen VPS and then a dedicated server) online for something like 10 years but realized that administering it was occupying too much of my time (a few hours per week) and it was just too expensive ($40/mo). I ran my own mail configuration (Postfix + Courier + enough anti-spam/anti-malware to use literally 1/2 of my RAM continually), Apache with probably a dozen sites (PHP, mostly WordPress or my custom sites from long ago) on it, a database server, some IRC bouncer software, iodine DNS tunneling, and a bunch of other crap.
My inattentiveness to it caused some pretty serious data loss: the hard drive died in 2016 and while I’d recently manually backed up my blog by way of converting the WordPress database to Markdown files intending to switch to Jekyll, I learned the hard way that my automated off-site backups using rdiff-backup hadn’t succeeded for nearly two years. Fortunately, all of my email users except me were using a client that kept a full local mirror (most of them Thunderbird; I’d switched to Airmail, argh). So, I lost email from mid-2014 through mid-2016. That sounds awful, but it’s probably not. I’ve not… missed anything. I’ve not even imported the backed-up mail. Nothing of real, immediate value was lost: the only thing measurably impacted was my digital packrat pride.
Since then, I’ve decided time and again that it’s nearly always a more efficient use of my time to pay someone else to be attentive. Fastmail costs me around $80/yr for my needs. Google gets around $3/mo for me to dump up to 100 GB of stuff I need to push to others via Drive. Dropbox keeps bothering me about paying but I got so much free space that I use it as a hot landing zone for some automation and backups for apps that integrate with it. The exception is in cost-prohibitive areas, such as backing up my photos and videos. I’ve got a huge NAS for that and budget about $500/year amortized for that data storage. The math works out in my favor versus just about any online service.
In my home network, on that NAS, I’m running Home Assistant for some minor home automation, Miniflux for feed reading, some VPN software, a GitLab Runner so that my side gig’s builds keep building despite the recent reduction of free plan CI minutes, and some other minor things.
Running less stuff means adminning less stuff and reserving my focus for building software and communities.
Thank you for posting this.
It’s SUPER important to be very careful about the risk/reward ratio for anything you want to self host.
Mail is SUPER high risk IMO and honestly, from where I sit very low reward for self hosting. Sure, you retain custody of all the bits on your side, but you can retain that same custody by using Fastmail and a good IMAP client.
I love Fastmail too BTW and wish they got more love :)
You’ll note that every single one of those services above is:
So I feel like I’ve got my risk/reward dialed in pretty well.
I’ve got a huge NAS for that and budget about $500/year amortized for that data storage
I’m curious, what data volumes are we speaking about here? For me, I’m still okay with Google, currently on 200GB tier but even a full TB would still be cheaper and safer than doing it at home.
At 200GB? You’re right. You won’t be able to beat a Gdrive or Dropbox.
However when you’re talking terabytes, doing it at home is the only way to fly. I have an 8GB Synology NAS with their default 2 drive redundant setup (Don’t ask me what RAID level it is. I dunno. I took the defaults :) and I back up my 5TB worth of data on Backblaze B2 for ~$20/month. That works pretty well for me.
My current configuration is 4x 6 TB = 24 TB — 16 TB after RAID5 — in one NAS and 8x 2 TB = 16 TB — 12 TB after RAID 6 — in another older one that was EoL’d at the end of 2019 and I haven’t fully moved off of it yet. The older unit is probably around 75% storage utilization. A huge chunk of that is 4K video from a conference I ran in 2019. The newer one is around 25% but has 100% of the services running on it that had been running on the older NAS. The summer got too busy for me to finish moving off the old NAS and then I just kind of got complacent. That is perhaps a testament of my inattentiveness!
I’ve got some geolocational redundancy with off-site backups of critical, irreplaceable data. One of those off-site NAS devices is EoL now and will get replaced this year.
Quite a few things:
Most of this infrastructure is hosted on VMs controlled by a Proxmox server in my home. I use VPSs for a couple of services that I would like to have continue to run even if the power or internet gets knocked out at my house. The exact list of things changes every now and then as I experiment with different things - I’m still looking for a RSS reader I like better than miniflux, for instance.
mostly use it as a quick way of archiving YouTube videos
Gonna have to do a double take on PeerTube. Didn’t realize it could be used in that way.
How do you like Miniflux? I’ve been on the fence about it for years. I’ve been using RSS for probably a decade now, and it’s definitely been on my radar.
I’m honestly not a huge fan, and I’m looking for a replacement RSS reader. I do like that it’s easy to host, but I am really not a fan of how it presents unread articles to you. I would prefer something that works more like how google reader used to work (I’ve given freshrss a try, but haven’t been super happy with that either).
I use a hosted tt-rss + RSS Guard (desktop app). They work really well together.
Glancing over the thread, I’m surprised by how many people seem to self-host almost everything or nothing. There are three classes of answers in the thread today:
I see some “everything” answers, fewer “nothing”, some comments on people’s answers, and ≤1 in between. This makes me think that the decision of whether to self-host depends almost 100% on the person who decides (and the area, perhaps) and almost 0% on the specifics of each software/system/blah. This is strange and surprising to me.
Sample bias. I am in your “some things, others not” category. I have a raspberry pi running pihole, a server for keeping some files around, and use hosted platforms for everything else. It works fine for me. However, it’s not especially interesting or unique so I don’t typically participate in this kind of discussion.
Once I’m already adminning a server, adding another service to it is almost no overhead
I used to self host everything. I hosted my website on a Sharp Zaurus, with the files on a Compact Flash card. I hosted my own email server, a forum, various tools (RSS reader, for example), etc.
There were some problems:
Spam filtering didn’t work very well. When I sent email, sometimes it didn’t arrive - presumably because it was seen as spam, though of course I didn’t spam anyone. I was just one person sending normal emails. When I tried putting a whitelisting system in place to stop spam, I missed emails people wanted to send me.
Running a server requires keeping it up to date and secure. OpenBSD was good, but sometimes I’d ask it to update itself and it’d get into a knot, despite me having only half a dozen packages installed. I also tried Debian Linux. Same thing - it started spewing errors during a self-update and I didn’t know how to fix it. I’m not exactly a UNIX/Linux newbie, but I had no clue, the docs had no clue, the mailing lists had no clue… And I started again, and ran into the same again.
You really want somewhere you can keep a ‘server’ where you can get a keyboard and monitor on it easily, without having to crouch / contort yourself, because some random thing will break and you’ll be sat there for hours trying to figure out why it’s decided not to boot, or bring up its network interface, or whatever. The easiest way, I’ve found, is to use an old laptop. ThinkPads are cheap and reliable. Unfortunately the old, cheap, reliable ThinkPad I had was also my main laptop, so my ‘server’ was always a cobbled-together load of unreliable parts.
Because of the above, and when Google were less evil:
I moved my email there (but kept my own domain)
I moved my website to Google Sites, because my site was just a collection of ill-thought-out opinions and no-longer-useful code, but occasionally I’d get emails thanking me for certain bits of info or code, so I wanted it to live.
I moved my forum to Google Groups, because it did spam filtering and could do email without it getting blocked randomly.
Over time I started to see that I could stick stuff on someone else’s machine and it was fine.
I made my own link shortener and pastebin on Google Cloud and used them for many years - never paying anything for hosting them, and only once being forced to do an upgrade of the pastebin code, which was painless and I had plenty of notice about.
If I want to make tools like the previously mentioned link shortener or pastebin, I’ll do them on a cloud provider again, as it’ll cost me nothing to run, I’ll never have to worry about them going down, or having security issues, or needing their OS upgrading or hardware fixing.
I see some “everything” answers, fewer “nothing”, some comments on people’s answers, and ≤1 in between.
Well, the question isn’t “which services do you self host and which don’t you self-host?”. For example, a few years ago my answer would not have included my git repositories. That’s because a few years ago I hosted my repositories on Bitbucket (when they still offered Mercurial and I was still using Mercurial). My answer would not have mentioned Mercurial at all. And like I also mention in my comment, I would not be hosting DNS myself except for the crappiness of my registrar’s DNS servers (and I fear downtime if I did switch and I just don’t have the time anymore to spend on fiddling with that stuff; it works - don’t touch it).
I see plenty of answers where “my repositories” and “DNS” is not part of the list. Since the vast majority of people here are programmers, I conclude that there are >1 in between.
I fall in your in between category even though I said “all the things” in the first sentence of my top level comment. That was figurative language meaning I self host all the little things I like to play with and share with family and friends. Once I need reliable uptime or need to share with a large number of people I don’t know personally, I let someone who’s in the hosting business handle it. So huginn, bitwarden-rs, 7 or 8 little web apps I make, miniflux, nextcloud, a znc bouncer, a small forum where some friends and I discuss coffee, and a few more things all get hosted by me, many using the scheme I described in my blog posts.
My email, DNS, web pages that I really need to keep online, and repositories that I need to collaborate on with arbitrary people all get hosted by people who are in the business of doing those things.
Hopefully the expansion of the “in between” you’re seeing in the replies to your comment makes it less strange and surprising.
Some said that self-hosted email will never reach Gmail. It just takes time: my self-hosted email now goes straight to the Gmail inbox. I just needed to ask 4-5 of my friends to mark it as “not spam”.
How are you running thelounge? Whenever I try to run it in docker, the container fills up and craps itself.
Maybe it’s because I don’t stay logged in all the time.
I use this compose config. Then I just run docker-compose up -d. To add a user, I log in to that container and add one there.
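The linked compose file isn’t shown here, but for the official TheLounge image a minimal version would look roughly like this; the host port and data path are assumptions, not the commenter’s actual config:

```yaml
# docker-compose.yml - minimal sketch for TheLounge
version: "3"
services:
  thelounge:
    image: thelounge/thelounge:latest
    container_name: thelounge
    restart: always
    ports:
      - "9000:9000"                   # web UI
    volumes:
      - ./data:/var/opt/thelounge     # config, logs, user files
```

After `docker-compose up -d` brings it up, a user can be added with `docker exec -it thelounge thelounge add <name>`.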
What do you use thelounge for?
I am a SourceHut user. Sometimes I ask a question on IRC. I don’t have a fiber connection (I tether through my phone), so I need thelounge to stay connected to IRC. If I turn off my phone, I can still come back and see people’s answers.
(Your firefly-iii link seems to be a mis-paste. I found it via websearch, though.) How do you find Firefly compares to gnucash? (other than, obviously, one is server-side, the other is a local app)
Oh, my bad. Yes, it’s firefly-iii.
I used GnuCash on and off for a year; unfortunately, it’s very hard to recall what I bought (where my money went) after a few days. It’s very hard to make a daily entry. I switched to GnuCash for Android.
It works well for me; I can make an entry right after a purchase. It’s unmaintained and buggy, but I have no other choice, so I kept using it for a year. Until I had so many debits/withdrawals that I kept asking:
firefly-iii also releases new features often (while keeping to its principles), so I thought it would make things easier if I need a more complex report.
I don’t trust anyone else. They go broke or go scared of Nethack for Windows CE. I don’t know why anyone hosts on shared platforms or why they even exist.
A couple of reasons:
Well, cellphone isn’t so great either, but things like email have a several-day retry model baked in. I front with Office 365, as I too have a life and couldn’t be bothered with blocklists and all the crap. My Exchange server has the PPTP RAS set to dial with the default gateway on the VPS, so it sends out via the “smart host”, and I only allow the MS servers inbound. I’ve been doing it for years now.
Since my stuff is the PPTP client, how does the ISP know I’m hosting? Simple: they don’t. No connections go to my home; they all go to the VPS, which in turn points to the PPTP client.
Power and servers are so cheap, along with virtualization… a Xeon E5 v2 board/CPU/16GB of RAM is sub-$100 USD. It’s trivial.
To save time! You pay someone money to spend their time on it. With my day job I make enough to pay for these services, leaving me with time to spend with my family and doing leisure activities (reading, playing games, social events). There are some things I do for fun and to learn, but I can pick and choose those.
If you truly trust no one, then of course self-hosting is the only option. For me, there’s quite a few companies I’ll happily trust with my projects.
For myself? Absolutely nothing. I do not want to spend my limited free time managing servers.
For other people? One thing: I run my wife’s website off of Digital Ocean.
On the internet-facing side of things
In the internal network / behind a firewall / via a VPN
All of those are running on a FreeBSD host, each of them is a FreeBSD Jail, the machine is Dell E5470 in my home with a range of static IPs provided by the ISP.
Nothing. I value my time. :)
Self-hosting-ish a blog of static pages. The rest is hosted by someone else.
I actually really don’t enjoy ops and maintenance. The little free time I get besides work and family is spent on goofing off on immature code to learn new concepts or language.
That’s a strange question. What I /don’t/ self-host and depend on is e-mail, IRC servers, internet search, Linux distro repositories and various work-related stuff. I’m still heavily thinking about hosting my own e-mail server for mostly-input-only things like online accounts.
I’ve been self-hosting since around 2015 and shitty laptops on mediocre links have always been 100% adequate, silent and extremely power-efficient–most of my RAM is free and the CPU stays mostly idle. Paying 50 euro a month like @djsumdog seems ridiculous to me, though PeerTube might be bandwidth-intensive and so require decent uplink, which should nonetheless still be well under 50 euro, including a public IP. But the true reason why my server is actually mine is distrust–if someone gets in, the door should be in splinters, or me covered in bruises.
I’ve got a pretty large list… I don’t think this is even exhaustive.
A small network hosted for friends:
Stuff I’m experimenting with:
new as of this weekend: a sr.ht instance
How was that setup? I have no desire to host my own, whatsoever, but I did want to propose a patch for this issue that I opened. I gave up before I could test my patch because I couldn’t find a speedy path to getting a development instance set up to test it.
If you think it was easy to set up and feel like sharing, please share anything you found that was a good starting place.
Actually, I’ve found it a huge pain. I first thought I’d put it in a jail on my main FreeBSD VPS, like everything else I run. Good luck trying to find instructions on how to get everything installed from source or Python packages, or what versions are needed. (You’re meant to reference the Alpine build scripts, which I did, but it gets out of hand so quickly.) The docs basically only exist to tell you to install from packages, the “standard” way of installing, but then leave you completely dry. Having actually gotten an instance working, one gets the impression it’s not meant to be set up by anyone other than Drew.
So I thought, okay, packages. Alpine is the only ‘officially supported’ install method, so it sounded ripe for putting in Docker. I tried putting together some Dockerfiles, but after ~90 minutes it was obvious it was going to be too much hassle, and reading around suggested it wasn’t just me. Debian, then. I spun up a new Debian VPS, but it can only run on unstable and testing, so I brought it up to testing.
That’s when I actually start to make traction, but I’ve hit so many weird issues since then that I’m not sure I want to keep on with it. I’m determined to give it a proper go – I’ve imported everything from my old cgit instance, and I’m trying to contribute back to the mailing lists.
There are still lots of miscellaneous pieces broken; weird stuff with interservice OAuth (spent maybe two hours this morning trying to get pastes.sr.ht to accept a goddamn PAT, and it’s become clear that part of the setup to get this done is simply undocumented and needs you to manually modify the database to mark some OAuth clients as internal), initial migrations don’t run consistently, and it seems the default failure mode is that all services will simultaneously fail to be brought up, and will instead just crash loop forever. The latest unsolved mystery is that everything currently works, but only if you delay some services’ startup – otherwise they just crash loop.
I am sure the biggest problem is simply lack of manpower, meaning they haven’t had time to polish or go over the server-owner path. There’s no set of directions to follow that takes you from “I want my own sr.ht instance” to having one fully set up and working, without any broken bits. You will need to hack on it to get it simply working, and probably need to understand OAuth more than you want to. I had to resort to strace to diagnose one of the many “everything just crash loops without saying why” incidents I’ve encountered so far, and needed to work through and printf-debug the codebase to work out the preauthorized-database thing.
I’m glad it wasn’t just me, I suppose. I came away with a similar impression about getting a development instance going. I felt like I should be able to cut more corners since I just needed to get the mailing list service up, really. But I couldn’t find them. When I hit the end of my timebox, I was thinking “This will only work if my dev environment matches his, and I can’t find a description of that anywhere.” I did even try alpine.
I suspect I’ll try again next time I want a feature, and see how things have progressed.
Mastodon, a tilde server for CS students I’m trying to get off the ground, a very basic Git setup, Gemini (gemini://gemini.jahziel.xyz, not much there tho, runs on my own server called Titan). All running on a Hetzner VPS. Can’t recommend Hetzner enough; I pay ~$10 for 8gb of RAM and ~100gb of disk.
A lot of stuff
Wrote about it here https://0x7f.dev/infra.html but some parts changed since then.
A couple of static blog-like sites, a wiki, a Scuttlebutt pub, an IRC server + web client, a code-server instance, and a small file drop.
Everything actually runs on a server at home, connected to a Vultr VPS with WireGuard and proxied via nginx or haproxy depending on whether the thing talks HTTP or something over TCP. The VPS also holds a wildcard cert so I can put all the web apps on dedicated domains so basic web security primitives in the browser work.
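The TCP half of that nginx/haproxy split might be as small as this haproxy fragment; the service name, port, and the 10.x WireGuard address are placeholders, not the actual config:

```
# haproxy.cfg fragment: raw TCP passthrough to a home host over WireGuard
frontend irc_in
    bind :6697
    mode tcp
    default_backend irc_home

backend irc_home
    mode tcp
    server home 10.0.0.2:6697 check
```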
SSH ProxyJump gives me a shell on my home machine when I’m out and about. Every host runs firewalld and tallow, public IP or not.
I’m spinning up osquery and remote log shipping for a smidge more security during or after the inevitable break-in. I also have vague plans for a Consul setup to get away from my hand-rolled proxy configs and host IPs, but given the small number of apps and backend hosts it hasn’t been essential.
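The SSH ProxyJump piece can be as simple as a couple of ssh_config stanzas; the host names and addresses below are hypothetical:

```
# ~/.ssh/config
Host vps
    HostName 203.0.113.10
    User me

Host home
    HostName 10.0.0.2      # WireGuard-side address of the home machine
    User me
    ProxyJump vps
```

With that in place, `ssh home` from anywhere hops through the VPS and down the tunnel.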
Do you use your Vultr VPS for privacy or for NAT-punching?
I’ve thought about putting a VPS in front of my home infrastructure, but I haven’t had any issues using dynamic DNS or otherwise exposing it publicly.
Both! I definitely like having an IP address unassociated with my home infrastructure attached to public DNS, and likewise find it easier to punch through the dual-NAT (ISP gateway and my home network router/firewall) in front of my machines via a dedicated WG link. It also makes port mapping for new services trivial, since anything exposed on the WireGuard interface of my home server(s) is automatically routable from the public VPS.
I have a $5/month Digital Ocean droplet that hosts my blog, a couple of static-y services, and which I ssh into to access IRC and twitter via tmux.
NixOS on a Dell PowerEdge r720 with 128GB of RAM. I also run a bunch of virtual machines for development and SSH into them.
I have a NAS and a service server that hosts Plex, an IRC server, a Gitea instance and an earlier incarnation of my personal website.
I’ve been considering making a Pleroma server on my main big server, however I’m not entirely sure if I want to deal with the results of having to maintain/keep one up. It’s a lot of moderation overhead.
Mostly hosting on VPSs in the cloud though. Would love to host email myself, but I find blacklisting and overall deliverability issues too much of a hassle.
Most things that represent me online, and everything I don’t want to lose and/or don’t want to entrust to any third party.
I moved everything (well, except the redundancies) to a private rack in a colocation data center in the beginning of 2020, after years of hosting most of the same things on a bunch of rented VPS’s (some of which I still keep around for backup/redundancy purposes). Since my “daily driver” machines are on the same OS as all my servers, maintenance and interoperability is not really a problem (I update them all mostly in sync).
The important things in the setup are the mail (2 MXes) and web servers, in addition to source control (git) and backup storage as well as realtime communication (mostly IRC for me).
So an incomplete list would be
Additionally, there’s a NAS at home for local video streaming and additional backup storage.
I might have a particularly serious case of NIH syndrome; a rather large amount of the tooling I self-host is also written by me.
Strictly at home, I have a media server for blu ray/dvd rips and storage of photos etc. Also has syncthing running on it.
I also have a little Intel NUC with a USB hard drive full of music. The ‘main’ server is noisy and lives in the office, so this lets me listen to my stuff without turning it on. I use MoodeAudio for this, which is the best MPD-based distro I could find.
Cloud based, I run:
I used to do:
Any cloud services I rely on (RSS, email, bookmarks etc) I make sure they stay portable. It takes me 1 minute to move my RSS feeds elsewhere. I always keep a full local copy of my email inbox, contacts and calendar.
Running most of my stuff on a self-hosted kubernetes cluster.
Also a small project called “Monkey Radio Reborn”. Me and my sister have been scrounging around for the playlists from the old Monkey Radio music stream (https://web.archive.org/web/20080705112816/http://monkeyradio.org/). I think we have about 50% of the collection at this point
Nice one. I have been surviving with Soma.fm: Groove Salad, and, latterly, https://open.spotify.com/playlist/1plJAm2h7qXQtxkSl82DDz
I rent a dedicated server for approx. 50 euro a month (i7-6700k, 32GB RAM):
Mastodon, Pleroma, tt-rss, bookmarks (linkding), matrix + bridges, website, peertube, calendar/contacts (radicale)
some matrix bridges
Mind sharing where you’re renting your server from? Looks like you got a good deal.
Until recently I had a server at prgmr.com which I think offers really good prices.
Nothing against it, I just got a better deal at someone I know ;-) https://openbsd.amsterdam/
I just host contacts and calendar for the whole family and run my 24/7 programs there.
It looks like hetzner’s dedicated offerings. (same cpu spec)
Hetzner. They have regular auctions for servers. I moved everything off of 4 Vultr VMs over to a single Hetzner dedicated. I even paid for the USB stick so I could install Void Linux on it. (There is network KVM access but you have to reserve it in 2 hour blocks. I only needed it for the install).
My ongoing plan is to move more from literal piles of consumer-grade stuff to cheap rack-mount hardware, stored in a 25RU mobile cabinet at the end of the hall.
Not a lot, to be honest:
A couple of Wordpresses (one of them somewhat high-profile). A couple of Wordpresses with Woocommerce. A node instance for a personal project. One matomo tracker for all those sites.
I am planning to add a NextCloud and some feed reader to start avoiding doomscrolling so much.
It is amazing the prevalence of PHP in all of these.
I currently run two Proxmox installations:
I love Proxmox because it gives me the flexibility to run either VMs (where I can then host docker containers) or LXC containers for apps that just want to be ‘bare metal’ installed by packages.
Using a Synology DS216+II NAS for storage, mostly via NFS in this context. Ubiquiti networking all the way down.
Having a ton of fun with it all and teaching myself a lot about small scale yet modern Linux administration.
I have a bunch of Vultr VPSes running OpenBSD hosting a variety of services:
At home I host (also on OpenBSD running on an Intel NUC):
Thinking about hosting an RSS aggregator in 2021, as well as moving the Minecraft server to physical hardware at my home since it’s getting expensive in the cloud, but that’s dependent on my ISP’s fiber-to-the-home rollout schedule and me figuring out how to proxy traffic from a WireGuard endpoint into my home network. I see a few others have posted how they’ve done this here, so THANKS!
TT-RSS, an IRC bouncer, a NAS, and media server.
I do take up space on some of my friends’ self-hosted things: email and Nextcloud, mostly.
Only one thing: the searx metasearch engine. Works like a charm!
All of it on my home server (except Jitsi), with two tiny VPS’s for DNS and mail-server redundancy. Everything on Debian stable.
No holes in firewall, everything just accessible locally.
About a constant 250W total.
I do not host: email, important code repos, anything that needs to be open to the internet.
I plan to migrate my life off google and start with this list:
With email, I scratched my own itch and run this service: https://hanami.run
For photos, I plan to get a Synology. But I don’t know how I can get that 1TB of photos onto the Synology…
Any ideas, anyone?
I self-host a Prometheus and Grafana setup with a custom ISP-quality-monitoring tool I wrote in Rust for fun. I also host an assortment of small personal projects in various states of completeness.
I’m considering moving my source code hosting in-house as well with Github just being a mirror but haven’t quite tackled that yet.
All on a personal FreeNAS instance
These days on my local network…
Elsewhere, my website on DigitalOcean.
For everything else - I pay someone else to deal with it (e.g. Fastmail).
I don’t self-host in the sense of having a machine at home, but I do self-host various software on VPSes that would otherwise rely on a trusted third party SaaS provider:
Everything I have that’s important gets synced into Dropbox.
On a VPS:
Nothing. I don’t even have a permanent network. I use my smartphone’s hotspot when I need to use the internet. (I have an unlimited 4G plan.)
Linode VPS: Website, wiki with gitit, email with Postfix, and private source repos, plus general SSH terminal services.
Home server: Shared drives via Samba, Borg backups, and occasionally a game server for Terraria or Avorion.
Most things are deployed/managed through Ansible, at least on good days. Works really well.
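Not the author’s actual playbooks, but the Ansible pattern for a setup like this might look roughly like the following; the host group and the choice of Samba as the example service are assumptions:

```yaml
# playbook.yml - hypothetical sketch of managing one service idempotently
- hosts: homeserver
  become: true
  tasks:
    - name: Install Samba
      ansible.builtin.apt:
        name: samba
        state: present

    - name: Ensure Samba is running and enabled at boot
      ansible.builtin.systemd:
        name: smbd
        state: started
        enabled: true
```

Because every task is idempotent, re-running the playbook after a config tweak or on a fresh machine converges to the same state, which is what makes switching machines easy.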
On a public VPS hosted by Linode:
I can really recommend cgit. It is a powerful interface to Git repositories, much more so than many other web front-ends. I’ve written a bit about my setup here. I also have a video demonstrating cgit.
I also have an old HP dc5700 at home that I occasionally power on. It is currently running:
I noticed the quality was a little bad on the video, so I upscaled it and uploaded a new version.
Question: Does anyone use ngrok.com or a similar service? I’m looking to do some self hosting and this looks like a really easy option (I’m willing to pay the $9 per month to avoid all the headaches with firewalls and other networking stuff that I’m not too savvy with)
I’m working on setting up caprover on a VPS, to see if I can make it easier to spin up new projects. I want to consolidate various small wikis and whatnot, and do better at keeping the software up-to-date. I might try hosting Jitsi, to stop paying Zoom.
Hosting at home seems thrifty, but dangerous; perhaps ok, with good offsite backups.
Nothing anymore. I’m still excited by the opportunities technology offers, but too burnt out to deal with the minutiae of tech admin outside work myself. That eats into family and music time!
I’m still hosting what I hosted last year.
I have a ZNC, Transmission-web, and my website. Managed with Ansible. Makes it super easy to switch to any machine.
Web sites, and build servers. I’m also running SaaS on a bunch of VPSes.
I’ve given up on self-hosting e-mail. I’m not a professional sysadmin, so e-mail has been a constant drain of my time to fight with deliverability, spam, TLS compatibility, and random outages.
I’m currently hosting pihole, as it allows me to block ads on my network. I am also thinking about hosting my family pictures, but I’m kinda hesitant.
At home, I run my own recursive DNS resolver. As of a couple weeks ago, I also run a squid instance that is configured to MITM my HTTPS connections. When I bring back my blog, that’ll probably be the first thing I write about.
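A squid TLS-interception setup of that sort is roughly this shape; this is a sketch rather than the commenter’s config, the cert paths are placeholders, the helper path varies by distro, and the generated CA must be trusted by every client on the network:

```
# squid.conf fragment: ssl-bump MITM (squid 4+ syntax)
http_port 3128 ssl-bump \
    tls-cert=/etc/squid/ca.pem \
    generate-host-certificates=on

# helper that mints per-site certificates signed by the local CA
sslcrtd_program /usr/lib/squid/security_file_certgen -s /var/spool/squid/ssl_db -M 4MB

# peek at the TLS ClientHello, then bump (decrypt) everything
acl step1 at_step SslBump1
ssl_bump peek step1
ssl_bump bump all
```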
At some point, I plan on doing some personal code hosting. I’m evaluating options right now; solutions that I am drawn to include fossil and sr.ht. I also want to get involved in the fediverse, and I’m exploring options there as well.
Honestly not too much nowadays, I find that self hosting eats a lot of time as I get older and it’s not as fun as it used to be (for me at least, ymmv).
Things I do self host still:
And this isn’t including the tens of VMs I have running on my LAN for dev work etc, but those aren’t doing much “self hosting” more than just running dbs/queues/etc.
How’s Jitsi working out for you, either hosting or usage?
hosting is super easy. The official debian packages work well. We have weekly group calls with both sides of our family and it is pretty smooth. I’d say it is not zoom level of smoothness but for a free product that I run on a 10€ server, it is pretty good.
Define “self-hosting” – like, on a VPS? On bare metal under my desk?