I’m getting into running my own self-hosted services and looking for other cool stuff I could run. What are you all running on your personal servers?
Things I self-host now on the Interwebs (as opposed to at home):
NextCloud
Bookstack Wiki
Various sites and smaller web apps (Privatebin, Lutim, Framadate etc)
Mailu Mail Server
Searx
Things I’m setting up on the Interwebs:
Gitea on HTTPd
OpenSMTPd
Munin
Pleroma
Transmission
DNS (considering Unbound for now)
Over time I may move the Docker and KVM-based Linux boxes over to OpenBSD and VMM as it matures. I’m moving internal systems from Debian to Open or NetBSD because I’ve had enough of Systemd.
Out of curiosity, why migrate your entire OS to avoid SystemD rather than just switch init systems? Debian supports others just fine. I use OpenRC with no issues, and personally find that solution much more comfortable than learning an entirely new management interface.
To be fair, it’s not just systemd, but systemd was the beginning of the end for me.
I expect my servers to be stable and mostly static. I expect to understand what’s running on them, and to manage them accordingly. Over the years, Debian has continued to change, choosing things I just don’t support (systemd, removing ifconfig etc). I’ve moved most of my stack over to docker, which has made deployment easier at the cost of me just not being certain what code I’m running at any point in time. So in effect I’m not even really running Debian as such (my docker images are a mix of alpine and ubuntu images anyway).
I used to use NetBSD years back quite heavily, so moving back to it is fairly straightforward, and I like OpenBSD’s approach to code reduction and simplicity over feature chasing. I think it was always on the cards but the removal of ifconfig and the recent furore over the abort() function with RMS gave me the shove I needed to start moving.
For now I’m backing up my configs in git, data via rsync/ssh and will probably manage deployment via Ansible.
It’s not as easy as docker-compose, but not as scary as pulling images from public repos. Plus, I’ll actually know what code I’m running at a given point in time.
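For reference, this is roughly the shape of the data backup described above; the host name and paths are only placeholders:

    # configs live in git; data goes out nightly over ssh
    rsync -az --delete -e ssh /var/www/ backup@backup.example.net:/srv/backups/www/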
I don’t run ruby, given the choice. It’s not a dogmatic thing, it’s just that I’ve found that there are more important things for me to get round to than learning ruby properly, and that if I’m not prepared to learn it properly I’m not giving it a fair shout.
N.B. You can partially remove systemd, but not completely remove it. Many binaries depend on libsystemd at runtime even if they don’t look like they would need it.
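A quick way to see that dependency for yourself (the output line is only indicative and will vary by distro):

    # check whether pgrep links against libsystemd at runtime
    ldd "$(command -v pgrep)" | grep -i systemd
    #   libsystemd.so.0 => /lib/x86_64-linux-gnu/libsystemd.so.0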
Some more info and discussion here. I didn’t want to switch away from Arch, but I also didn’t want remnants of systemd sticking around. Given the culture of systemd adding new features and acting like a sysadmin on my computer I thought it wise to try and keep my distance.
Fun fact: Ubuntu 18.04 also sports a 26-kilobyte pgrep executable requiring libsystemd.so.0
A bit of digging revealed that procps utilities may print/use systemd-generated fields (lsession, machine, owner ouid, seat, slice, unit…) and containers.
/* try to locate the lxc delimiter eyecatcher somewhere in a task’s cgroup
The author of the article regarding pgrep you linked used an ancient, outdated kernel, and complained that the newest versions of software wouldn’t work. He/She used all debug flags for the kernel, and complained about the verbosity. He/She used a custom, unsupported build of a bootloader, and complained about the interface. He/She installed a custom kernel package, and was surprised that it (requiring a different partition layout) wiped his/her partitions. He/She complains about color profiles, and says he/she “does not use color profiles” – which is hilarious, considering he/she definitely does use them, just unknowingly, and likely with the default sRGB set (which is horribly inaccurate anyway). He/She asks why pgrep has a systemd dependency – pgrep and ps both support displaying the systemd unit owning a process.
ancient, outdated kernel
all debug flags for the kernel
unsupported build of a bootloader
The kernel, kernel build options and bootloader were set by Arch Linux ARM project. They were not unsupported or unusual, they were what the team provided in their install instructions and their repos.
A newer mainstream kernel build did appear in the repos at some point, but it had several features broken (suspend/resume, etc). The only valid option for day to day use was the recommended old kernel.
complained that the newest versions of software wouldn’t work
I’m perfectly happy for software to break due to out of date dependencies. But an init system is a special case, because if it fails then the operating system becomes inoperable.
Core software should fail gracefully. A good piece of software behaves well in both normal and adverse conditions.
I was greatly surprised that systemd did not provide some form of rescue getty or anything else upon failure. It left me in a position that was very difficult to solve.
He/She installed a custom kernel package, and was surprised that it (requiring a different partition layout) wiped his/her partitions
This was not a custom kernel package, it was provided by the Arch Linux ARM team. It was a newer kernel package that described itself as supporting my model. As it turns out it was the new recommended/mandated kernel package in the Arch Linux ARM install instructions for my laptop.
Even if the kernel were custom, it is highly unusual for distribution packages to contain scripts that overwrite partitions.
He/She complains about color profiles, and says he/she “does not use color profiles” – which is hilarious, considering he/she definitely does use them, just unknowingly
There are multiple concepts bundled under the words ‘colour profiles’ that it looks like you have merged together here.
Colour profiles are indeed used by image and video codecs every day on our computers. Most of these formats do not store their data in the same format as our monitors expect (RGB888, gamma ~2.2, i.e. common sRGB), so they have to perform colour space conversions.
Whatever the systemd unit was providing in the form of ‘colour profiles’ was completely unnecessary for this process. All my applications worked before systemd did this. And they still do now without systemd doing it.
likely with the default sRGB set (which is horribly inaccurate anyway)
1:1 sRGB is good enough for most people, as it’s only possible to obtain benefits from colour profiles in very specific scenarios.
If you are using a new desktop monitor and you have a specific task you need or want to match for, then yes.
If you are using a laptop screen like I was: most change their colour curves dramatically when you change the screen viewing angle. Tweaking of colour profiles provides next to no benefit. Some laptop models have much nicer screens and avoid this, but at the cost of battery life (higher light emissions) and generally higher cost.
I use second-hand monitors for my desktop. They mostly do not have factory-provided colour profiles, and even then the (CCFL) backlights have aged and changed their responses. Without calibrated colour profiling equipment there is not much I can do, and it is not worth the effort unless I have a very specific reason to do so.
He/She asks why pgrep has a systemd dependency – pgrep and ps both support displaying the systemd unit owning a process.
You can do this without making systemd libraries a hard runtime dependency.
I raised this issue because of a concept that seemed more pertinent to me: the extension of systemd’s influence. I don’t think it’s appropriate for basic tools to depend on any optional programs or libraries, whether they be an init system like systemd, a runtime like mono or a framework like docker.
Systemd can work without the color profile daemon, and ps and pgrep can work without systemd. Same with the kernel.
But the policy of Arch is to always build all packages with all possible dependencies as hard dependencies.
e.g. for Quassel, which can make use of KDE integration, but doesn’t require it, they decide to build it so that it has a hard dependency on KDE (which means it pulls in 400M of packages for a package that would be fine without any of them).
I really wish the FreeBSD port of Docker was still maintained. It’s a few years behind at this point, but if FreeBSD was supported as a first class Docker operating system, I think we’d see a lot more people running it.
IME Docker abstracts the problem under a layer of magic rather than providing a sustainable solution.
Yes it makes things as easy as adding a line referencing a random github repo to deploy otherwise troublesome software. I’m not convinced this is a good thing.
As someone who needs to know exactly what gets deployed in production, and therefore cannot use any public registry, I can say with certainty that Docker is a lot less cool without the plethora of automagic images you can run.
Personally, I’ll have to disagree with that. I’m letting GitLab automatically build the base containers I need, plus my own. And the result is great, because scaling, development, reproducibility etc. become much easier.
Bookstack is by far one of the best wikis I’ve given to non-technical people to use. However I think it stores HTML internally, which is a bit icky in my view. I’d prefer it if they converted it to markdown. Still, it’s fairly low resource, pretty and works very, very well.
GOPHER, wrote my own gopher server (source code available via said server)
QOTD, again, wrote my own
DNS, running bind but it’s not visible to the outside world. It’s authoritative for all my domains; the company serving up my zones slaves off my DNS server (a rough named.conf sketch follows after this list).
E-Mail (Postfix+Dovecot) + XMPP (Prosody) + TeamSpeak3 on one server
websites and files (Syncthing) and misc shit (IRC bots, Discord bots) on another
Syncthing on home NAS, also Subsonic (but I never really use it)
OpenVPN and socks5-proxy via SSH on demand (I rarely need those)
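Referring back to the DNS item above, a rough named.conf sketch of that kind of hidden-master setup; the provider’s address 203.0.113.53 and the domain are placeholders:

    // authoritative zone, with the provider's servers allowed to
    // transfer it and notified of changes
    zone "example.org" {
        type master;
        file "/etc/bind/db.example.org";
        allow-transfer { 203.0.113.53; };
        also-notify    { 203.0.113.53; };
    };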
Actually I think the only thing I’m not self-hosting is one Wordpress blog at wordpress.com (don’t want to have my real name associated with it, but it’s only a gaming blog, nothing super secret).
What has your experience hosting your own email been like? I’ve idly considered it, but it’s a famously unfriendly service to deal with (spam, major providers deciding your messages are spam, all the onerous metaprotocols to combat spam) and I’m happy with Fastmail’s service.
I’ve been hosting email myself for 15+ years. Postfix made it easier to configure (Sendmail was… complicated, in comparison, in my opinion.) Dovecot works really well for IMAP/POP3. Finally Let’s Encrypt allows you to get a nice certificate relatively easily.
Greylisting helped a lot to reduce spam, but spam is still a nuisance - especially if you don’t have good filtering in your mail client (I’m using crm114).
Setting up SPF, DKIM and DMARC can be a little complicated, but it seems to work fine, as long as all email from your domain is sent from a well defined set of IPs.
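As an illustration of those records, something along these lines goes into the zone (the domain, IP and key are placeholders, and the DKIM selector depends on your signer’s config):

    example.org.                 IN TXT "v=spf1 mx ip4:203.0.113.25 -all"
    mail._domainkey.example.org. IN TXT "v=DKIM1; k=rsa; p=<public key from your DKIM signer>"
    _dmarc.example.org.          IN TXT "v=DMARC1; p=quarantine; rua=mailto:postmaster@example.org"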
I’ve not had many problems, but there’s a bit of luck of the draw in getting a clean IP. I have SPF and DKIM set up (not DMARC), with the self-signed certificate that Debian auto-generated, and that seems to be enough to get mail delivered to the big providers.
For incoming spam, I reject hosts that: 1) lack valid RDNS, or 2) are on the Spamhaus ZEN RBL. This seems to catch >95% of spam. Minor config hint if you’re using the free Spamhaus tier: you need to set up a local DNS resolver (I use unbound) so you query directly, otherwise your usage gets aggregated with whoever else is using your upstream DNS, which probably exceeds the free tier.
Like the other commenters, I use Postfix, which is reasonably nice, and has good documentation.
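A rough sketch of how those two rejection rules might look in Postfix main.cf, assuming a local unbound resolver listening on 127.0.0.1:

    smtpd_client_restrictions =
        permit_mynetworks,
        reject_unknown_reverse_client_hostname,
        reject_rbl_client zen.spamhaus.org
    # point /etc/resolv.conf at the local resolver (nameserver 127.0.0.1)
    # so the RBL lookups aren't funnelled through a shared upstream DNS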
Mostly positive. I had that discussion this morning on IRC, so I’m gonna quote myself to not retype everything:
[...] on a "decent" hoster blacklist-wise and not DO or something
and it's been running for 10 years, I don't seem to have the typical
"gmail thinks I am spam"-problem
usually.
interestingly I had it yesterday when sending something to myself
but dunno, empty body, 25mb-video.. who knows
I hardly use my gmail-account
But thinking about it, sending a job application in November ended up in the spam folder for 2 people and I only got a reply once I pinged them out of band. That was a shitty experience, but as I hate using GMail I prefer this to a years-long shitty experience using it :P
If I were to “start over” these days I might go to a dedicated email host like FastMail, but I think it’s just too expensive. I have 4 people with their main email addresses on my server and it costs me 10 EUR/month, and I get to host other “communication” services as well. For FM it would be 15-20 USD per month, and I still haven’t found out if I could use “a low amount of” domains and not just “use your own (one) domain”. Sure it takes some maintenance work, but it’s part hobby, part learning experience and part keeping in touch with how stuff is done, as it touches on my dayjob, depending on which role at what company I do. (Been running my own mailserver for roughly 15 years, I guess.)
Thanks for the pointer to weeWX; I’d thought more of using Grafana to display weather data.
Are you able to create alerts (something is moving in your flat) with motion?
Yes, you can tell Motion to run a command when motion starts, when motion ends, etc. I don’t use that functionality at home, but at work I use it to send an XMPP message, e.g. when somebody enters the server room and when the video is completed (including a link to the video), so I can keep track of who enters and what they do.
I have had to fiddle a little with ignoring a part of the image that constantly flickers in the server room; I can recommend Motion, it works well.
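For anyone wanting to copy that setup, the relevant Motion options look roughly like this; the notify script is a stand-in for whatever actually sends the XMPP message:

    # motion.conf fragment
    on_event_start /usr/local/bin/notify-xmpp "motion started in the server room"
    on_movie_end   /usr/local/bin/notify-xmpp "recording finished: %f"
    # a mask image can be used to ignore a flickering region of the frame
    mask_file /etc/motion/mask.pgm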
weewx does enough that I haven’t bothered doing something with the data myself - I’ve only changed the display (colours and such) to integrate it into my website.
OpenSMTPd + Dovecot + Rainloop as Tor Onion Services (once I write a little self-service web interface for creating/modifying/deleting users, I’ll open this one to the general Tor userbase)
GitLab in a HardenedBSD jail as a Tor Onion Service
I plan to set up the following services out of my home:
Mastodon as a Tor Onion Service
IRCd as a Tor Onion Service
It’s really easy for me to run various Tor Onion Services since my home has a special fully Tor-ified network. Just plug in a device and all its traffic automagically gets routed through Tor.
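For context, publishing one of those services as an onion service only takes a couple of torrc lines; the directory and local port here are placeholders:

    HiddenServiceDir /var/lib/tor/gitlab/
    HiddenServicePort 80 127.0.0.1:8080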
Buildbot for CI on various platforms as most hosted things are Linux-only
Plex media server
When travelling I spin up an Algo VPN on GCE or DigitalOcean.
I used to run my own colo server but $DAYJOB has 6000+ physical servers and I got bored of having to do maintenance of yet another physical box :-)
Now I run most of my things from a VPS at Mythic Beasts running FreeBSD with misc. other bits running on dedicated boxes from Hetzner which currently run SmartOS, although I’m considering moving to FreeBSD with bhyve sometime.
I’m using Prometheus and Grafana for monitoring all of it.
I’m also using Postfix+Dovecot with Spamassassin for spam checking. I’m an Alpine user but keep a Roundcube instance (under nginx w/PHP-FPM) up for friends&family.
Some particulars on my setup:
I gotta have full-text email searching and use a dedicated Solr machine for that. Apache Tika indexes the insides of attachments and classifies images. A search for “automotive” finds the Excel spreadsheet with insurance rates in it. A search for “dog” finds shots of the neighbor’s corgi.
Solr is resource intensive but worth it. I can search a 200K+ message inbox in a blink, faster than I can get Gmail to do the same.
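A minimal sketch of the Dovecot side of such a setup (the Solr and Tika URLs are placeholders; the Solr schema and Tika indexing of attachments are configured separately):

    mail_plugins = $mail_plugins fts fts_solr
    plugin {
      fts = solr
      fts_solr = url=http://solr.example.net:8983/solr/dovecot/
      # attachment text extraction can be handed off to a Tika server
      fts_tika = http://solr.example.net:9998/tika/
    }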
Several of my users are fellow Carnegie Mellon alumni who used Sieve at school. So I’ve got Pigeonhole stuck on the side of Dovecot for that. ManageSieve lets them set their filters via Roundcube.
I’ve got a few other Roundcube plugins installed. sauserprefs lets users manage their own Spamassassin thresholds, whitelists/blacklists, rulesets and other config items. password lets them change their account’s password via the web. Enigma handles email encryption.
I moved to AWS from bare metal some years back (right after it became possible to do HVM installs of FreeBSD on EC2). All AWS datacenter IPs are on some blacklists and some of them are on many blacklists. Things that have helped keep Google and others from rejecting mail:
Making sure all my domains have DKIM and SPF records and that outgoing email has DKIM Signatures.
Checking my IP addresses with the excellent MultiRBL checker at http://multirbl.valli.org/ and taking what steps I can (including getting new IP addresses) to keep off blacklists.
Getting my email service to a high level of quality has been a lot of work. But now my email system is faster and more responsive than Gmail. I also enjoy far more visibility and control and can do some tricky things (like per-origin email domains) that wouldn’t be possible using a “normal” email service.
I use an AWS-hosted FreeBSD server as my main “desktop.” SSH/Mosh as my access method, Tmux as my windowing system, Alpine for email, Weechat+Bitlbee for IRC/IM/SMS, Emacs for editing. The SSH server also listens to ports 53, 80 and 3128 to aid in getting around various silly “firewall” “solutions”. I keep ttyd around for those times when all I’ve got is web access.
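The multi-port listening bit is just a few extra Port lines in sshd_config:

    Port 22
    Port 53
    Port 80
    Port 3128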
The same box that runs all that also runs my Asterisk server. And my email system. And a Lounge instance. And syncthing (w/syncthing-inotify), Prosody, MySQL, ZFS, a photo server and about 800 (very low-volume) web sites.
I have a physical server in my house built from commodity PC hardware, running Linux. This runs:
irssi in a persistent tmux session
sshd (and so anything that you can do over ssh, such as push to private git repos or ssh port-forward)
diaspora instance (that I’m unfortunately not doing much with)
cherrymusic music streaming instance, so I can stream my music library to anywhere with a browser
nginx as a frontend for cherrymusic and a few webservices I run for personal experimentation
an instance of a RSS feed reader called Miniflux. I’m not entirely happy with the UI this presents, but I don’t have
I plan to run a Matrix instance in the near future that I’d like to bridge to IRC and replace irssi+tmux, but haven’t gotten the software to work properly on my home server yet.
My ISP only provides a public IPv6 address, not a public v4 one, so I also have a small $5/month Digital Ocean droplet. The most important thing this does is run a socat instance, which listens for traffic on a select number of IPv4 ports and rebroadcasts it on v6 to my home server, so I don’t have to rely on a connection to the v6 internet existing in order to use my home services. I also run:
nginx that proxy-passes web services to my v6-only home server
a personal Mastodon server
a Pleroma server
a Gitea instance
a bespoke web service for a friend of mine’s project
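A rough sketch of the socat relay described above the list, with the home server’s v6 address and the ports as placeholders:

    # on the IPv4-capable droplet: accept v4 connections and relay them
    # to the v6-only home server
    socat TCP4-LISTEN:443,fork,reuseaddr TCP6:[2001:db8::10]:443 &
    socat TCP4-LISTEN:22,fork,reuseaddr  TCP6:[2001:db8::10]:22 &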
I currently self-host Nextcloud on a Debian VPS. I primarily use this for Contacts, Calendars, and getting Files between my various devices. My wife also uses the same installation. We haven’t yet used the “document sharing” stuff that’s integrated with LibreOffice, or similar. But I look forward to doing that.
I pay someone else to host my mail, but I’d like to host that myself: I’m just not ready to do that again yet.
My intention is to build all of the “self hosted” stuff around nextcloud: I’ll use Passman, I’m building a GTD application, and a budgeting application, and a few other things. Nextcloud gives me a nice platform for syncing and sharing, and I don’t really care about implementation language otherwise.
If you are looking into self-hosting email, https://mailinabox.email/ is really easy. The main issue is that it requires its own dedicated Ubuntu install and doesn’t work in Docker yet. But it’s super simple and just works. It also has Nextcloud packaged in.
Redis, Postgres, Prometheus+Grafana for database, cache/queuing and monitoring
Minio (an S3 compatible file store)
Email (postfix+dovecot+spamassassin+opendkim, but moving towards Apache James)
PowerDNS
My own custom, still unfinished PowerDNS frontend
Keycloak (for auth)
Gitlab (for CI and source management)
Seafile (for file storage and sync)
A custom web music player that uses Seafile’s data under the hood (think Google Play Music)
A custom web photo gallery that uses Seafile’s data under the hood (think Google Photos)
i.k8r.eu, an (invite-only) image host running a custom software I wrote
(because imgur scales images wrong and doesn’t do hotlinking anymore)
Quassel (as IRC bouncer)
Quassel-Webserver (web frontend)
Quassel-Rest-Search (web fulltext log search)
Quassel-Logs (for public logs of certain channels)
My own websites for my own projects and apps
(usually using nginx and syncing from the gitlab repos, sometimes also directly running docker containers built in CI, but always using nginx to serve files)
An F-Droid repo for my apps
Planned in the long-term future are a custom password manager and a custom Firefox Sync Server (with better history sync + web fulltext history search). For the short-term a clone of Google Keep is planned.
A while ago due to a bug Google wiped my calendar and contacts. Shortly after that I lost access to one of my other Google accounts and only managed to get it back because I had a lot of luck (and help from the new owner of my old phone number). These two events, combined with the Snowden papers, have over the years been a major motivation for me trying to self-host everything.
Self-hosting is for me a long term project and I’m working on it infrequently… I should probably write a blog-posting at some point. I really need to start using some provisioning/automation tool… I can’t decide which ‘container’ technology I’d like to use.
git-annex (I still haven’t figured out how I can have a non-bare git-annex repo which lets non-annex-aware applications access the data)
some self hosted ‘dropbox’ alternative (not decided which tool)
some issue tracker
Firefox Sync Server
XMPP/Matrix
DNS
Offsite and/or cloud backup (I’ve ‘only’ got 2.5 megabytes/s of upload, so 4TB to upload will take at least three weeks)
The whole setup (three computers) constantly uses about 60W (I have an energy meter installed).
The setup costs me about 30 euros per month for the Internet connection, ~12 euros for electricity and 5 euros for a server in a datacenter.
If I stored backups on Backblaze ‘B2’, it’d cost me at least 20 euros per month to have cloud backups ($0.005 per GB per month for storing uploaded data, and $0.01 per GB if I need to retrieve the data). I should probably not mention this in public, but another possibility would be running the Backblaze Personal Backup client in Wine (which I tried out in 2014) - but this would clearly be a violation of the terms, and you’d have to hack something together that ‘transparently’ encrypts all files in front of the Backblaze Wine client and still supports delta uploads.
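For the curious, the 20 euros figure is just the storage price applied to the 4TB:

    4000 GB x 0.005 USD/GB/month ≈ 20 USD/month, before any retrieval fees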
I have two physical servers, one at home, one colocated, both running SmartOS. Split between them, I’m running:
Plex Media Server, for media hosting and streaming
Prosody, for Jabber/XMPP
ZNC, as an IRC bouncer
Software to remote control my house lights (via a RS-232 to Ethernet bridge, as I don’t have the correct ports anymore)
A WordPress site, at least until I export it to be a static site
Gerrit, for code hosting and review for personal projects
An SFTP/SCP Dropbox
Envoy for L4 and L7 load balancing
Along with miscellaneous legacy stuff on a DigitalOcean droplet I plan on turning down soon.
Things I’m looking to start self-hosting in the future:
Simplified music streaming with a read-only view of the underlying music, preferably with optional mpd and upnp support (currently using Plex, but it doesn’t respect the metadata tags which I’m so careful to set)
VPN. Wireguard seems interesting, but I’m on the wrong host OS, I think
A secure and easy-to-use personal CA, to make provisioning TLS on other things easier.
Gopher and a BBS, for fun.
Grafana / Prometheus, because I should probably be a little serious
URL shortener
Buildbot for building and testing the projects on Gerrit
Unlike many others in this thread, I’m not interested in self-hosted PIMs: Google and Fastmail do a much better job than I ever would.
I currently mostly host some websites and file backups on my current VPS. I’ve run a couple instances of an IRC bot in the past, but don’t host it currently.
I was using Proxmox before but I found it easier and more efficient to use docker as there isn’t reserved memory for each container like you have with a VM.
Hm, yeah, but I have lots of Containers in Proxmox too (LXC) which also works better for IPv6 connectivity. I need the VM mostly for PFSense, which is BSD and doesn’t run too well in a container (it doesn’t run at all).
PM also has a lot more functions that I like than Docker, especially towards failover with data persistence.
I’ve experimented a bit. While it looks rather nice, the category functionality is probably not quite sufficient. I rely heavily on a hierarchy of categories to sort out my feeds…
Yeah, you can buy a license for $10 and self-host it. it’s a bit of a memory hog running on the JVM though. Unfortunately the alternatives I tried didn’t quite fit my use-case for house renovation project management.
A PBX such as Asterisk might be fun. Connect it to a (handheld) SIP phone; perhaps use a VoiceXML interpreter, and you can implement custom voice-based applications.
A private git server, access through gitolite/ssh and web frontend through gitweb/nginx
A private calendar, radicale through nginx
All this is currently on one VPS (two until recently, but one of my hosts closed up shop), which is a little tight with the Minecraft server (which is a terrible memory hog), so I will probably migrate and upgrade in July when my current plan comes up for renewal.
Additionally, I have
A private file server, ZFS on FreeBSD
at home.
The most useful single service is git, which aside from project hosting holds a dotfiles repo, password store (used by pass) and assorted other convenient document syncing repos.
Some things I have vague plans to set up if I feel motivated and get around to it include
A VPN to my home LAN. I had this set up (using SSH) at one point, but never really figured out the client side (tunneling through SSH on Linux didn’t work well or at all at the time), and the state of the network has since changed enough that I pulled the DNS because it could no longer resolve.
Some way to dynamically start/stop my file server. It spends most of its time off (because otherwise it would spend most of its time idle), but I have to push a button on the front of the thing to actually turn it on or off, which is both inconvenient and not amenable to remote access.
Firefox Sync? I have its config in my dotfiles repo, but Firefox doesn’t play nicely with that sort of access pattern and likes to constantly tweak, rearrange and generally have sole access to its configuration.
I run Radicale (calendar/contacts with DavDroid/Thunderbird clients), my websites, awstats (log analysis), simpleid (although everyone is removing openid support, so this will probably be gone before long), certbot, roundcube (webmail) and ttrss (rss reader).
I have a FreeBSD server that runs OpenVPN so I can connect to these two boxes via that and only have 80/443 exposed on the web hosts. This all runs on Vultr. My e-mail (postfix+dovecot+dkim+spamassassin) server is still on Linode and I plan on migrating it over.
Good thread idea. I am not an expert at this stuff so suggestions and feedback most appreciated:
git server via gitolite
dns authoritative server for my primary domain with maradns
This is only because my domain provider’s API for manipulating managed DNS is awful
I do want to look into setting up a caching server as well, but I have very few cases and the effort doesn’t seem worthwhile. Please prove me wrong.
A custom file-sharing/upload thing written in Go
I wrote my own because I wanted one without any client-side JS, lightweight markup, file tagging support etc. and none of the existing solutions worked for me
OpenSMTPD
It receives email and relays to gmail and stores a local copy in a maildir
Don’t have a way to send email yet. I don’t like mutt, and really would like to avoid another ncurses app. I would really like a well written email client on top of notmuch.
Stuff is rather unorganized right now. I’d like to move to a BSD, set up a proper VPN (WireGuard looks neat, hope it works on BSDs soon) to connect all devices, make it easier to manage multiple domains and servers together, and have throwaway email accounts with domain rotation etc. The overall broken state of computing makes me lose motivation towards even trying to build my utopia of personal computing, though.
Right now, just a (Ghost) blog and a couple other things, all on Docker. I need to port over some tools for my seedbox and Plex servers. I recently started consolidating everything on a single baremetal machine so it’s a bit of work to move things around. I’d like to migrate off of Dropbox + all the other proprietary stuff, but I have minimal time and it’s not a trivial thing to move a decade’s worth of setup. My current plans are to get a VPN, monitoring, and CI set up, then probably Plex + some torrenting solution + whatever FOSS file-syncing system looks good. I’ve already ported over my blog, but I need to rebuild the way I’m doing routing because it’s pretty ancient magic at this point (and doesn’t really support SSL easily).
I stopped hosting my own email when I realized that I wasn’t reading my personal email because of the spam. And yeah I tried greylisting and spamassassin and all kinds of shit. At that time I was running my own DNS too (primary & secondary on different continents).
These days I’m only really self-hosting web stuff though I’m pretty sure that’s a bad idea. Nobody offers the web hosting flexibility I want at the price I want to pay, though I think letsencrypt’s ubiquity may start to change that.
I want to run a self-hosted issue tracker, which is my favorite thing about GitHub. I do NOT want a replacement for GitHub. This has nothing to do with the Microsoft/GitHub merger. This is purely about the fact that I do not like fork-and-PR workflow/s and I don’t like the way that GitHub has implemented code reviews. So I’m not looking to run GitLab, Gitea, Gogs, or any other GHE clone.
I’d rather host my own raw Git server (possibly using Patchwork to manage patches). I just need some sort of issue-tracking software that has the ability to link to specific patches and commits in my Git repos.
There are plenty of standalone issue trackers. Bugzilla is the godfather of them all; Request Tracker is similarly venerable but is more often used for IT helpdesks, and only occasionally OSS projects (e.g. Perl).
The trouble with standalone issue tracking software is that since issue tracking is the focus of its existence, they tend to end up a lot more complex than something like GitHub issues, if something that simple is what you’re looking for. If you want something GitHub issues-like, I wonder if mild modification of Gitea to shut off the code hosting aspects would be productive.
Another thing I’ve been thinking about lately is tracking issues in a branch of the repository (similarly to how GitHub uses an unrelated gh-pages branch for website hosting). This would have the not insignificant advantage that the issues would then become as portable as Git itself, and be versioned using standard Git processes. I think there are some tools that do this, but I haven’t looked at them yet.
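As a sketch of what that could look like with nothing but stock git (the branch and file names are made up):

    git checkout --orphan issues   # start an unrelated, empty branch
    git rm -rf .
    printf '# Issue 1: example\n' > 0001-example.md
    git add 0001-example.md
    git commit -m "issues: open #1"
    git checkout master            # back to the code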
If those issue trackers are too complex for your needs, I reckon it’d be about an afternoon’s work to throw together a simple one (which might be why there isn’t one packaged - it’s not big enough!). Of course, within a few months you’ll start wanting to add more features…
Agree that tracking issues in a git repo is great.
backups of all the other machines (home-grown script using btrfs+LUKS with snapshots; one big usb disk at home and one at my parents house rotating; has so far saved us from deleting family photos many times)
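The rough shape of such a script, assuming a read-only snapshot sent to a LUKS-encrypted USB disk (device names and paths are placeholders):

    cryptsetup open /dev/sdb1 backupdisk
    mount /dev/mapper/backupdisk /mnt/backup
    snap=/home/.snapshots/home-$(date +%F)
    btrfs subvolume snapshot -r /home "$snap"
    btrfs send "$snap" | btrfs receive /mnt/backup/
    umount /mnt/backup
    cryptsetup close backupdisk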
I used to have Owncloud, but got sick of having to (re-)configure the same stuff every update, and Syncthing covered my file syncing needs, while I mostly use git+emacs org-mode for my calendar (and bbdb with Syncthing for contacts).
Currently running my personal website, Mediawiki and Nextcloud. A mailserver based on OpenSMTPd and Dovecot is WIP. Together with some friends, we host an XMPP server based on Prosody.
Docker doesn’t work on OpenBSD though, so what are you going to do?
Have you looked at Capistrano for deployment? Its workflow for deployment and rollback centers around releasing a branch of a git repo.
I’m interested in what you think of the two strategies and why you’d use one or the other for your setup, if you have an opinion.
When I ran my own init system on Arch (systemd was giving me woes) I had to keep libsystemd.so installed for even simple tools like pgrep to work.
I’m the author of the article.
Almost all of these issues are distro issues.
Why he/she instead of they? It makes your comment difficult to read
tbh, I dunno. I usually use third-person they.
Exactly, once you start running private registries it’s not the timesaver it may have first appeared as.
I think Kubernetes has support for some alternative runtimes, including FreeBSD jails? That might make FreeBSD more popular in the long run.
How is the Nextcloud video chat feature? Does it work reliably compared to Zoom.us?
Works fine for me(tm).
It seems fine both over mobile and laptop, and over 4G. I haven’t tried any large groups and I doubt I’ll use it much, but so far I’ve been impressed.
Is bookstack good? I’m on the never ending search for a good wiki system. I keep half writing my own and (thankfully) failing to complete it.
Cowyo is pretty straightforward (if sort of sparse). Being Go and working with flat files, it’s pretty straightforward to run and back up.
Planning a bunch of other stuff as well - Firefox Sync Server, Self-hosted password sync, etc
I’m running the following:
What’s your MUD?
Possibly helpful? https://www.reddit.com/r/selfhosted/
https://www.reddit.com/r/selfhosted/comments/8j8mo3/what_are_you_self_hosting/
You can, I have 5 domains * under one, one-user account. It’s explicitly spelt out here: https://www.fastmail.com/help/account/limits.html
* – One with my AFK name, and four domain hacks, of which I have a guilty pleasure of buying ;-)
Generally, problem free since I started doing it in the mid 2000s.
Main home server:
Jukebox:
Tiny virtual servers:
Things I self host out of my home:
Hosted on a mix of OpenBSD and FreeBSD on baremetal.
Algernon, for serving Markdown files as rendered HTML over HTTP/2.
I’m using some VPS, not (yet) a server at home. I use Debian or Arch Linux on these.
I’m hosting:
Similar to others, however:
I used to self-host my email (with Postfix, Dovecot and Rainloop most recently) but I ended up giving up and switching to Fastmail.
I use policyd-weight. It’s a big help.
Making sure both my addresses (both IPv4 and IPv6) have PTR DNS records that resolve to the canonical hostname. You have to request this from Amazon by filling out https://aws.amazon.com/forms/ec2-email-limit-rdns-request
Great idea for a thread!
I have a few private servers in total running various things.
Not public, but I’ve been setting up SILC servers and a few IRCds as well.
I’m running
At home on a FreeBSD server:
On a KVM VPS running FreeBSD:
Already hosting:
Planned:
checks most of his XMPP contacts
I’m going to hope this is just the website.
You’re a JMP customer? I’m the primary sysadmin for the main server – dedicated box with OVH in Quebec
Yes. The phrasing above just makes it seem like you’re running this on an old shoebox you have. ;)
I host a little box for my friends and me to play with. We’ve been running these services for about 18 months. I enthusiastically recommend both:
I have a VPS with pass etc. I used to run btsync, but have since switched to mega.nz. At some stage I’ll look at another self-hosted option.
I also have a lowendspirit VPS or two that I use for VPN.
Websites. All of these various sites and services (other than weechat) are currently behind a single nginx instance.
Currently hosting:
I’m getting a lot of ideas from this thread though.
I run a SYS Dedicated Server with Proxmox as a VM Host. I recommend PM if you wanna get into virtualized hosting, it’s rather neat.
Try selfoss for an RSS aggregator/reader. IMO it has the best user experience among self-hosted tools in this regard.
Thanks, I’ll try and see if it can handle my workloads <3
RSS reader written in Python (and not PHP): newspipe. I haven’t tested it out yet but it looks solid.
The things I use that haven’t been mentioned elsewhere:
Jira, as in Atlassian’s?
Mastodon, a Minecraft server, various Python web experiments.
I wrote Bee2 which I use to host several things via Docker. Here’s an abbreviated list of my containers:
I own a Synology NAS.
On my personal servers only OpenBSD httpd(8) for now.
Does anybody have any suggestions please?
IRC server at Digital Ocean
Zoneminder on raspberry pi at home
The only thing that has changed since last year is the addition of a Gemini server, which again is a self-written server. And I have since released the code to my gopher server.