Going back to my Common Lisp HTTP server. I started to switch from the simple multithreaded model (a fixed set of threads handling the connection queue) to a model where requests are read and decoded in a non-blocking way, then picked up by a set of request handling threads. This way slow or misbehaving clients cannot block the entire server.
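For anyone curious what that split looks like, here is a minimal sketch in Go (not the Lisp code itself; the port, pool size, and deadline are made up): readers decode requests under a deadline and hand them to a fixed pool of workers, so a stalled client only ever ties up its own reader.

    package main

    import (
        "bufio"
        "log"
        "net"
        "net/http"
        "time"
    )

    // A decoded request plus the connection to answer on.
    type job struct {
        conn net.Conn
        req  *http.Request
    }

    func main() {
        ln, err := net.Listen("tcp", ":8080")
        if err != nil {
            log.Fatal(err)
        }
        jobs := make(chan job, 128)

        // Fixed set of request handling workers; they never read from sockets.
        for i := 0; i < 8; i++ {
            go func() {
                for j := range jobs {
                    j.conn.Write([]byte("HTTP/1.1 200 OK\r\nContent-Length: 2\r\n\r\nok"))
                    j.conn.Close()
                }
            }()
        }

        for {
            conn, err := ln.Accept()
            if err != nil {
                continue
            }
            // Reader: decode under a deadline so a slow client cannot stall a worker.
            go func(c net.Conn) {
                c.SetReadDeadline(time.Now().Add(5 * time.Second))
                req, err := http.ReadRequest(bufio.NewReader(c))
                if err != nil {
                    c.Close()
                    return
                }
                jobs <- job{conn: c, req: req}
            }(conn)
        }
    }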
I thought this was a mildly interesting article. There is some discourse on the idea of HTTP status codes being a bit dated for the modern web, and that anything beyond the idea of a 200 or 404 should just be communicated in the response body.
The simplicity and immediate understandability of many of these status codes certainly has its charm though.
Many of these codes are still extremely important, although often at lower levels than what a typical web-app developer sees. They are by no means merely “simple” or “charming”; they are the bedrock of the Web.
For example, conditional requests and codes like 412 and 304 are crucial for caching by user agents and proxies, and for techniques like MVCC and optimistic concurrency (see the sketch after this comment).
201 vs 200 conveys information in the response to a PUT about whether the resource already existed, and 409 indicates a PUT conflicts with existing data. 405 indicates a method isn’t usable with a resource, and 400 means the request is syntactically invalid. These are important parts of REST.
206 is used for partial GETs and range requests, which allow browsers to do things like resuming an interrupted download.
301, 302, 307 all enable redirects to work.
The HTTP/1.1 RFC explains all these in detail; it’s not abstruse, although it’s very long and there is a ton of stuff to keep track of.
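To make the conditional-request point above concrete, here is a rough Go sketch (the ETag value, path, and storage are invented for illustration; a real handler would look up the current entity tag):

    package main

    import "net/http"

    // Sketch: 304 for conditional GETs, 412 for failed preconditions on PUT.
    // This is the optimistic concurrency pattern mentioned above.
    func handleDoc(w http.ResponseWriter, r *http.Request) {
        current := `"v42"` // hypothetical ETag of the stored resource

        switch r.Method {
        case http.MethodGet:
            if r.Header.Get("If-None-Match") == current {
                w.WriteHeader(http.StatusNotModified) // 304: the client copy is fresh
                return
            }
            w.Header().Set("ETag", current)
            w.Write([]byte("...document body..."))
        case http.MethodPut:
            if r.Header.Get("If-Match") != current {
                w.WriteHeader(http.StatusPreconditionFailed) // 412: someone else changed it first
                return
            }
            // ...apply the update and bump the ETag...
            w.WriteHeader(http.StatusNoContent)
        default:
            w.WriteHeader(http.StatusMethodNotAllowed) // 405
        }
    }

    func main() {
        http.HandleFunc("/doc", handleDoc)
        http.ListenAndServe(":8080", nil)
    }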
It’s cute, but as @snej points out it’s easy to get these wrong or just for people to have differing interpretations of what they mean for a particular API. I still think it’s better to use specific error codes in the body.
It’s not just about their interpretation. They can never map to the wide range of domain-specific errors your application can return. And more importantly, any proxy between client and server can return them.
So if your query returns a 404 status, you have no idea which software actually returned the error, which resource is “not found”, or what “not found” actually means precisely.
HTTP is a transport protocol, and HTTP status codes should only be used to signal errors in the transport layer. Unfortunately most HTTP APIs happily mix application and transport concerns, but that does not make it right.
Hmm, I would suggest that HTTP is very much an application-layer protocol, not merely a transport protocol. This “transport protocol” view of HTTP, in my estimation, led to, or at least abetted, many initiatives* that deliberately ignore the available status codes and how HTTP works, and as such loosened the general understanding of the protocol (which in turn begets these “charming” views mentioned earlier in this interesting thread).
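*I’m thinking of early SOAP, WS-Deathstar, etc.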
I’d love to see a study about the actual content of the Fediverse. While the graph shows a sharp increase, I’ve also seen recently a lot more spam (mostly the usual nonsensical messages from accounts with female portraits trying to get you to follow a profile link to an NSFW website).
I do not believe one can grow a social network without having to choose between anonymous accounts and mass spam, but I’d love to be proven wrong.
I don’t think anonymous vs not anonymous makes a difference.
People spam less under their real name because it costs them reputation. I would expect that any other currency works just as well. One could for example not accept messages from people who are more than 5 degrees of separation away from oneself. So messaging you would cost the sender the work to get close enough to you.
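As a sketch of what that check could look like (the follow graph and names are hypothetical; this is just a bounded breadth-first search, not a claim about how any existing network does it):

    package graph

    // Accept a message only if the sender is within maxHops of me in a
    // hypothetical follow graph, using a bounded breadth-first search.
    func withinDegrees(follows map[string][]string, me, sender string, maxHops int) bool {
        visited := map[string]bool{me: true}
        frontier := []string{me}
        for hop := 0; hop < maxHops; hop++ {
            var next []string
            for _, u := range frontier {
                for _, v := range follows[u] {
                    if v == sender {
                        return true
                    }
                    if !visited[v] {
                        visited[v] = true
                        next = append(next, v)
                    }
                }
            }
            frontier = next
        }
        return false
    }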
Is that so? Most of the spam I receive that makes it through filters is under very real names, very openly.
Aside from anonymity, I’d even argue that Federation has a bit of an edge there, as moderation is a) spread across many shoulders and b) instances that succumb to the problem can be cut off (temporarily). This happens regularly.
I worked at a place where we had other teams contacting our team’s manager asking him to please stop scheduling so many meetings for our team because we weren’t able to meet our commitments to other teams.
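When it’s that visible, you know it’s bad.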
In one part of Microsoft, I heard that they wrote a meeting cost calculator that took the levels of everyone present and the median salary for that level and gave an approximate cost of the meeting. I tried doing this as a back-of-the-envelope calculation and worked out that one recurring meeting I was in cost over half a million dollars a year. I wish Outlook had this kind of thing integrated.
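The arithmetic is simple enough to sketch (the salaries, hours, and the 2000 working hours a year are all made-up round numbers):

    package meetings

    // Back-of-the-envelope yearly cost of a recurring meeting: each attendee's
    // hourly cost is approximated from the median salary for their level.
    func meetingCostPerYear(attendeeSalaries []float64, hours float64, meetingsPerYear int) float64 {
        const workHoursPerYear = 2000 // rough full-time figure
        total := 0.0
        for _, salary := range attendeeSalaries {
            total += salary / workHoursPerYear * hours
        }
        return total * float64(meetingsPerYear)
    }

With, say, 50 attendees averaging $200k and a two-hour weekly slot, that is roughly 50 × $100/hour × 2 hours × 52 weeks ≈ $520k a year, which is the order of magnitude mentioned above.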
I’ve seen this referred to as the “burn rate” of the meeting and I find it useful to think about sometimes, especially for meetings that have a lot of people or low engagement.
I had a manager who would schedule all-hands meetings every time I told him we didn’t have the capacity to do something urgent. I never managed to get it through his skull that a day-long all-hands would only slow us down even more… I would even be sitting there in the meeting room with my laptop, frantically trying to get a server back online or something like that while various other non-technical types would ask me how long it would take…
I would even be sitting there in the meeting room with my laptop, frantically trying to get a server back online or something like that while various other non-technical types would ask me how long it would take…
It took me quite a while to realize that I should really not try to protect management from their own mistakes. In that kind of situation, you are expected to attend the meeting. Attend the meeting. If the server is down, well, not much you can do, you are in a mandatory meeting.
At some point, if everything fails, someone higher up the chain will realize there’s a problem. And if no one does (I’ve seen it happen), who cares as long as you are being paid? Nobody is going to reward you for your efforts; quite the contrary, you will probably be blamed for what your boss will perceive as attempts to go against him (again, been there, done that).
For the first time in months, I’m really not sure.
I should focus on finding a product idea, but apparently the more I try, the fewer ideas I get. I found a lot of apparently profitable small SaaS products maintained by solo developers, so clearly it is possible. But every potential idea I get is already implemented by the dozen. And I’m not sure how to stand out from the crowd since I’m a nobody.
Improving the whole Twitter/Mastodon audience thing to solve the nobody problem isn’t really working well either. Most articles on the subject peddle the same abstract ideas such as “write interesting messages” or “post regularly”. But it is more and more clear that there is a huge luck factor (will someone successful repost you and give you significant traction?).
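So more thinking and researching I guess.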
It’s not just luck, it’s having a good foundation for a product and luck. It’s kind of an annoying thing really. Interestingly enough, while you’re working on something else that’s already been done, you’re likely going to implement something in a novel way that’s unexpected and different. Just act on your bad ideas for something that already exists, and do it in an interesting way.
Yes, luck is just a factor. The rest is about going through the motions and being prepared for opportunities. The hard part is knowing whether your preparations are correct.
Regarding competition, I feel like you’re right. At least having competitors means the idea makes sense. Up to me to find the right spin.
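Web form of the tools used on https://www.fourmilab.ch/hackdiet/ to graph weight in a meaningful and actionable way?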
I’m thinking about going back to my Raft implementation in Go. I already have the consensus, so I could start to add the log and the snapshot system. Would be nice to clean up the mess, maybe move the persistent state to the beginning of the log. Starting to write tests.
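Since Raft is mentioned, here is a rough sketch of the shape the log piece could take in Go (field names and the snapshot cut-off are my own guesses, not the actual code):

    package raft

    // Hypothetical shape of a log entry: it carries the term it was appended in,
    // and nothing should be acknowledged before it has been persisted.
    type LogEntry struct {
        Index   uint64
        Term    uint64
        Command []byte
    }

    type Log struct {
        entries       []LogEntry
        snapshotIndex uint64 // everything up to and including this index lives in the snapshot
    }

    // Append computes the next absolute index past the snapshot and stores the entry.
    func (l *Log) Append(term uint64, cmd []byte) LogEntry {
        e := LogEntry{
            Index:   l.snapshotIndex + uint64(len(l.entries)) + 1,
            Term:    term,
            Command: cmd,
        }
        l.entries = append(l.entries, e)
        return e
    }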
This is incredible. I think it’s mostly large, established businesses where this is a thing. On the other end of the spectrum you have the overworked startup workers who have to work lots of overtime, and the struggling underpaid freelancers.
I’m not convinced it’s size so much as location of the department within the company and whether that department is on a critical revenue path. I mean it’s hard to imagine this at a tiny to small (<25 headcount) company but such a company won’t really have peripheral departments as such and would somehow need to be simultaneously very dysfunctional and also successful to sustain such a situation.
The original author keeps talking about “working in tech” but the actual jobs listed as examples suggest otherwise: “software developer [at] one of the world’s most prestigious investment banks”, “data engineer for one of the world’s largest telecommunications companies”, “data scientist for a large oil company”, “quant for one of the world’s most important investment banks.”
First off, these are not what I’d personally call the “tech industry”.
More importantly, these don’t sound like positions which are on a direct critical path to producing day-to-day revenue. Similarly importantly, they’re also not exactly cost centres within the company, whose productivity is typically watched like a hawk by the bean counters. Instead, they seem more like long-term strategic roles, vaguely tasked with improving revenue or profit at some point in the future. It’s difficult to measure productivity here in any meaningful way, so if leadership has little genuine interest in that aspect, departmental sub-culture can quickly get bogged down in unproductive pretend busywork.
But what do I know, I’ve been contracting for years and am perpetually busy doing actual cerebral work: research, development, debugging, informing decisions on product direction, etc. There’s plenty of that going on if you make sure to insert yourself at the positions where money is being made, or at least where there’s a product being developed with an anticipation of direct revenue generation.
I’ve seen very similar things at least twice in small companies (fewer than a hundred people in the tech department). In both cases, Scrum and Agile (which had nothing to do with the original manifesto, but this is how it is nowadays) were religion, and you could see this kind of insane inefficiency all the time. But no one but a handful of employees cared about it, and they all got into trouble.
From what I’ve seen, managers love this kind of structure because it gives them visibility, control and protection (“everyone does Scrum/Agile so it is the right way; if productivity is low, let’s blame employees and not management or the process”). Most employees (managers included) also have no incentive to be more productive: you do not get more money, and you get more work (and more expectations) every single time. So yes, the majority will vocally announce that a one-hour task is really hard and will take a week. Because why would they do otherwise?
Last time I was in this situation, I managed to sidestep the problem by joining a new tiny separate team which operated independently and removed all the BS (Scrum, Agile, standups, reviews…) and in general concentrated on getting things done. It worked until a new CTO fired the lead and axed the team for political reasons but this is another story.
It worked until a new CTO fired the lead and axed the team for political reasons but this is another story.
I’m guessing maybe it isn’t: a single abnormally productive team potentially makes many people look very bad, and whoever leads the team is therefore dangerous and threatens the position of other people in the company without even trying. I’d find it very plausible that the productivity of your team was the root cause of the political issues that eventually unfolded.
This was 80% of the problem indeed. When I said it was another story, I meant that this kind of political game was unrelated to my previous comments on Scrum/Agile. Politics is everywhere, whether the people involved are productive or not.
It’s not just a question of people not wanting to “look bad,” though.
As a professional manager, about 75% of my job is managing up and out to maintain and improve the legibility of my team’s work to the rest of the org. Not because I need to build a happy little empire, but because that’s how I gain evidence to use when arguing for the next round of appealing project assignments, career development, hiring, and promotions for my team.
That doesn’t mean I need to invent busywork for them, but it does mean that random, well-intentioned but poorly-aimed contributions aren’t going to net any real org-level recognition or benefit for the team, or that teammate individually. So the other 25% of my energy goes to making sure my team members understand where their work fits in that larger framework, how to gain recognition and put their time into engaging and business-critical projects, etc., etc.
…then there’s another 50% of my time that goes to writing: emails to peers and collaborators whose support we need, ticket and incident report updates, job listings, performance evaluations, notes to myself, etc. Add another 50% specifically for actually thinking ahead to where we might be in 9-18 months and laying the groundwork for staff development and/or hiring needed to have the capacity for it, as well as the design, product, and marketing buy-in so we aren’t blocked asking for go-to-market help.
Add up the above and you can totally see why middle managers are useless overhead who contribute nothing, and everyone would be better off working in a pure meritocracy without anyone “telling them what to do.”
omg, I’ve recently worked at a ‘unicorn’ where everyone was preoccupied with how their work would look from the outside and whether it would improve their ‘promo package’. Never before have I worked in a place so full of buzzword-driven projects that barely worked. But hey, you need one more cross-team project with DynamoDB to get that staff eng promo! 🙃 </rant>
Given your work history (from your profile), have you seen an increase in engineers being willfully ignorant about how their pet project does or does not fit into the big picture of their employer?
I ask this from having some reports who, while quite sharp, over half the time cannot be left alone to make progress without getting bogged down in best practices and axe-sharpening. Would be interested to hear how you’ve handled that, if you’ve encountered it.
I don’t think there’s any kind of silver bullet, and obviously not everyone is motivated by pay, title, or other forms of institutional recognition.
But over the medium-to-long term, I think the main thing is to show consistently and honestly how paying attention to those drivers gets you more of whatever it is you want from the larger org: autonomy, authority, compensation, exposure in the larger industry, etc.
Folks who are given all the right context, flexibility, and support to find a path that balances their personal goals and interests with the larger team and just persistently don’t are actually performing poorly, no matter their technical abilities.
Of course, not all organizations are actually true to the ethos of “do good by the team and good things will happen for you individually.” Sometimes it’s worth going to battle to improve it; other times you have to accept that a particular boss/biz unit/company is quite happy to keep making decisions based on instinct and soft influence. (What to do about the latter is one of the truly sticky + hard-to-solve problems for me in the entire field of engineering management, and IME the thing that will make me and my team flip the bozo bit hard on our upper management chain.)
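Thanks for the reply, that’s quite helpful and matches a lot of what’s been banging around in my head.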
Would you be able to elaborate on the last paragraph about making decisions based on instinct and soft influence? Why is it a problem and what do you mean by “soft influence” in particular? Quite interested to understand more.
Both points (instinct + soft influence) refer to the opposite of “data-driven” decision-making. I.e., “I know you and we think alike” so I’m inclined to support your efforts + conclusions. Or conversely, “that thing you’re saying doesn’t fit my mental model,” so even though there are processes and channels in place for us to talk about it and come to some sort of agreement, I can’t be bothered.
It’s also described as “type 1” thinking in the Kahneman model (fast vs. slow). Not inherently wrong, but also very prone to letting bias and comfort drown out actually-critical information when you’re wrestling with hard choices.
Being on the “supplicant” end and trying to use facts to argue against unquestioned biases is demoralizing and often pointless, which is the primary failure mode I was calling out.
This is true and relevant, but it’s also key to point out why instinct-driven decisions are preferred in so many contexts.
By comparison, data-driven decision-making is slower, much more expensive, and often (due to poor statistical rigor) no better.
Twice in my career I have worked with someone whose instincts consistently steered the team in the right direction, and given the option that’s what I’d always prefer. Both of these people were kind and understanding to supplicants like me, and - with persistence - could be persuaded to see new perspectives.
Excellent points! Claiming to be “data driven” while cherry-picking the models and signals you want is really another form of instinctive decision-making…but also, the time + energy needed to do any kind of science in the workplace can easily be more than you (individually or as a group) have to give.
If you have collaborators (particularly in leadership roles) with a) good instincts, b) the willingness to change their mind, and c) an attitude of kindness towards those who challenge their answers, then you have found someone worth working with over the long-term. I personally have followed teammates who showed those traits between companies more than once, and aspire to at least very occasionally be that person for someone else.
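That’s helpful, thanks for clarifying.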
I ask this from having some reports who, while quite sharp, over half the time cannot be left alone to make progress without getting bogged down in best practices and axe-sharpening.
I think this is part of the natural life-cycle of the software developer - the majority of developers I’ve known have had an extended period where this was true, usually around 7-10 years professional experience.
This is complicated by most of them going into management around the 12-year mark, meaning that only 2-3 years of their careers combine “experienced enough to get it done” with “able to regulate their focus to a narrow target”.
I think those timelines have been compressed these days. For better or worse, many people hold senior or higher engineering roles with significantly fewer than 7-10 years experience.
My experience suggests that what you’ve observed still happens - just with less experience behind the best practices and axe-sharpening o_O
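My team explicitly doesn’t use scrum and all the other teams are asking: “How then would you ever get anything done?”
Well… a lot better.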
Continuing to work on some ugly frontend code (simple HTML/CSS/JS, no framework or libs) that I can easily reuse to build small SaaS products. Not as bad as it used to be (as long as you only care about modern browsers), and closure-compiler helps a lot.
Still doing research on the whole SaaS idea thing. The hard part is finding the right audience to test the idea. The internet is full of people telling you what they want, but that does not mean companies are going to buy your software.
Mostly trying to figure out what I’m doing wrong on Twitter. I have much more engagement on Mastodon, while I cannot seem to find how to grow on Twitter. And even on Mastodon it is slow.
I feel a bit dirty about this kind of marketing approach, but I learned the hard way that without an audience, it is almost impossible to build/sell products, find contract work or meet potential associates. Ultimately I see this as a way to meet and engage with like-minded people.
The rest of the time I’m going to start playing Starsector. It took some time before I was convinced to give it a try, but it seems to be the kind of complex sandbox I enjoy.
Is it worth the time?
The C-level of my company was all about social media, and we had to incorporate all those fancy social media buttons in our apps. With the advent of GDPR those had to be converted to opt-in; what a waste of resources.
FB and Twitter up and down.
Until an old-school sales guy came along, looked at the numbers in the CRM and told them that the bigger part of our leads came from our satisfied customers’ recommendations.
Meanwhile the social media buttons are gone completely (which I like a lot), and they do lots of webinars and such things.
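Worth the time? I really hope so.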
I am not an established company with a client base. I’m trying to bootstrap a solo business, and I can assure you that finding contract work or potential clients is surprisingly hard when you do not have an established network or an audience.
I’ve lost count of the number of successful solo bootstraps which start with “I presented my idea to my thousands of Twitter followers and a lot of them preordered”.
And my last attempt with targeted cold mailing was quite underwhelming.
Obviously I’m doing something (multiple somethings really) wrong, but I’m trying to fix it.
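I read a few articles about social media growth. Didn’t try out everything, but a few things helped:
and some more things I’m probably forgetting. It depends on your comfort level too.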
This seems to match what I’ve read on the subject. I started to post several times a day, and I really need to start replying when I can add something to an existing discussion. Using TweetDeck with several search columns is already really helpful.
Taking notes about the whole graphics thing, I’m really not good with that.
I’ve been a lone developer for most of my career, and had I known that programming would turn into a team sport, I might not have even become a programmer. I don’t think I’ve written confusing code, and rarely have I gone back to code I’ve written and not have it make sense to me, even years later. And the one time I did pair programming, it was painful and I don’t wish to do it ever again (I was a mere secretary, taking dictation and shut up! Keep typing!).
Do not worry, you are not alone. If it helps, I found it possible to work on solo projects in every company I ever worked for. And the more senior you are, the easier it is.
I’m being pulled more and more into team-sport-coding. There are days when I just want to lock myself into a room, focus on the problem, and write the code. Sometimes I just can’t deal with the meetings, the pairing sessions, the mentoring, the code reviews. I wish I had a gig as a lone programmer.
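Have you considered going into consulting/freelancing? Depending on the type of job you might be able to work alone on the code.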
This sounds like a good idea, but I’m a bit intimidated by the whole process of finding customers and taking care of my own business. Moreover, in the country I live in, I should earn at least double my present income to live the same lifestyle as a consultant.
Yeah, finding customers can be difficult for introverts without a large network (I have the same problem). You could always frequent those freelancing job boards to get started and hopefully build up a clientele, but realise you’ll likely have to fight bottom-of-the-barrel “developers” on price.
I should earn at least double my present income to live the same lifestyle as a consultant.
Unless you’re raking in tons of cash at a big tech company, if you calculate what you make per hour, you should be able to (eventually) charge double that quite easily. Remember, your employer needs to charge the customer for your time, pay your wage, social security/pension, rent, employee hardware and “overhead” (wages for managers, reception, cleaners, whatever other roles the company has that don’t work directly for the customer) and still make a profit.
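As a purely illustrative calculation (the numbers are made up): an $80k salary spread over roughly 1,600 realistically billable hours a year is about $50/hour as an employee, so the doubled figure would be a consulting rate of around $100/hour, before the overhead listed above even enters the picture.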
Fixing a small issue on Eventline (never hardcode a resource name as being somehow special if it can be deleted by the user).
More work on my Common Lisp OpenAPI implementation. Most of the core work is done, I can execute operations and decode responses, but I still have a few things to support such as request body encoding and header parameters.
A netrc implementation in Common Lisp, because I do not want to copy-paste API tokens into SLIME every time I work with an HTTP API (a rough sketch of the file format follows below).
Advancing my article on the state of Common Lisp implementations in 2023. Interesting but needs research.
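For reference, the netrc format is just whitespace-separated machine/login/password tokens; a minimal parsing sketch in Go (not the Common Lisp library itself, and it ignores default, account and macdef entries; the machine name in main is a placeholder):

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    type entry struct{ Login, Password string }

    // parseNetrc reads "machine NAME login USER password PASS" token runs.
    func parseNetrc(path string) (map[string]entry, error) {
        f, err := os.Open(path)
        if err != nil {
            return nil, err
        }
        defer f.Close()

        var tokens []string
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            tokens = append(tokens, strings.Fields(sc.Text())...)
        }
        if err := sc.Err(); err != nil {
            return nil, err
        }

        entries := map[string]entry{}
        var machine string
        for i := 0; i+1 < len(tokens); i++ {
            switch tokens[i] {
            case "machine":
                machine = tokens[i+1]
            case "login":
                e := entries[machine]
                e.Login = tokens[i+1]
                entries[machine] = e
            case "password":
                e := entries[machine]
                e.Password = tokens[i+1]
                entries[machine] = e
            }
        }
        return entries, nil
    }

    func main() {
        entries, err := parseNetrc(os.Getenv("HOME") + "/.netrc")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            return
        }
        fmt.Println(entries["api.example.com"].Login) // hypothetical machine name
    }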
I like this kind of topic; it forces me to focus on short-term goals.
Continuing (and hopefully finishing) my OpenAPI implementation in my Common Lisp toolkit. This is a prerequisite for things such as Stripe and the GitHub API, so I needed to bite the bullet and do the work. Again not the most pragmatic work I’ve ever done, but it gives me time to think about product ideas.
Looking at the possibilities described by the author, it seems they do not envision the possibility of selling support contracts. Do not produce a commercial version: simply explain that you won’t provide features, bug fixes or support to users who do not have a support contract. Everything else is best effort.
Of course companies are not going to “contribute” unless they have to. The first reason being that your lead/manager/director does not have the legal option to make donations to external individuals. But they do have a budget which can be used for support contracts.
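Am I missing something?
It’s a polyfill. Either it works for everyone, or it doesn’t. There are hardly any bug tickets because of how simple yet crucial it is.
To quote the author: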
core-js is not a several lines library that you can write and forget about it. Unlike the vast majority of libraries, it’s bound to the state of the Web. It should react to any change of JavaScript standards or proposals, to any new JS engine release, to any detection of a bug in JS engines, etc.
If nobody cares about it because it is just a polyfill, then the author should either stop maintaining it, or do what he wants with it and ignore everything else. If this is actually important, if core-js not being updated causes problems for companies using it, then selling a support contract absolutely makes sense. Of course it means having to do some marketing, contacting companies directly, but there’s no free money.
As an example, I’ve worked at companies that used Sidekiq, and paying for the Pro version was a no-brainer. Features, support: companies will pay for it if they need it. But you need to sell something, not just ask for money.
But there are no extras to sell, no room for special treatment for paying customers. It has one goal and it does it really well. There have only been a few dozen bugs in over a decade. It’s a lot of work but the work is extremely obvious and easily tested.
I think they meant more in the sense of Red Hat vs CentOS Linux (well, as it was a while ago). You wanted rock-solid commercially developed Linux? Get CentOS. You also wanted support for when the shit hits the fan? You pay the Red Hat licence for the same code, but you get to kick the problem can down the road.
I think that’s something that all OSS developers (at least the ones that hope for financial support) should understand. You need to give your users, however little, an excuse to pay you, otherwise they might be as helpless as you are in convincing their management. Even the management themselves might be helpless, because they might not be able to justify cashing out randomly while they’re themselves ultimately responsible to the shareholders.
As an employee, it’s a needless waste of social capital for me to ask a superior to donate, say $500 to a project we use, but it’s very easy to ask for a minor purchase of $500 that will marginally increase my (team’s) productivity.
Conversely, installing this in Arch requires like 100 other Haskell packages (I know it only shows 75 but I believe some dependencies have other Haskell dependencies).
I don’t see the problem. pandoc is very clearly an integration project, integrating many document formats, providing a unified AST, etc.
It is a very reasonable path to pick high-quality and accepted dependencies in the ecosystem over implementing your own in this case. With that eye, I’m actually surprised that it’s just 100.
You cannot blame Pandoc: someone decided that each Haskell library had to be its own separate Archlinux package, and there is not much you can do about it.
This is unfortunate and only works because there are very few Haskell packages, so the chances of having version conflicts are low. But it is still annoying to run pacman and have hundreds of packages to update.
It’s been a while since I touched it but my recollection is that pacman is still really fast with lots of little files and packages, so no biggie. Is that still correct, please?
Performance is fine (but it might be that my NVMe disk is doing all the work). It is mostly annoying when reviewing package lists every time I run an update and Pandoc and its dependencies are in it.
This kind of issue is precisely why I’d like to investigate something such as Guix, but I haven’t found the time yet.
Agree with this and will add I really appreciate pandoc’s compatibility. I use pandoc and a makefile to build a static site and it started as maybe 5 lines of make, as I wanted to add additional things it was easy to add in pre- and post-processing steps, do templating, write filters etc. While filters do require learning a bit about pandoc internals, filters are just programs that read from stdin and write to stdout. It generally works out of the box great and can be integrated into unix-y pipelines really well without having to own a build process end-to-end like traditional static site generators or other document building toolchains.
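To illustrate the “filters are just programs that read from stdin and write to stdout” point: pandoc’s --filter option pipes the document AST as JSON through your program. The smallest possible (identity) filter, sketched in Go:

    package main

    import (
        "encoding/json"
        "log"
        "os"
    )

    func main() {
        // pandoc writes its AST as JSON on stdin and reads the transformed AST
        // back from stdout; a real filter would walk and rewrite the tree here.
        var doc interface{}
        if err := json.NewDecoder(os.Stdin).Decode(&doc); err != nil {
            log.Fatal(err)
        }
        if err := json.NewEncoder(os.Stdout).Encode(doc); err != nil {
            log.Fatal(err)
        }
    }

Invoked as something like pandoc --filter ./identity input.md -o output.html (the file names here are placeholders).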
Continuing to send cold emails to potential leads for my CSP platform. Not fun, but this seems to be the only viable way to find potential clients.
Learning more about Guile/Scheme, mostly for fun. I might use it for the Advent of Code this year.
In general I’m very enthusiastic about Guile. I abandoned Common Lisp years ago because it was a dead end, and never really investigated Scheme. Even though the RnRS standardization process does not seem to be going anywhere, several Scheme implementations are actively developed, so I might start using it for small scripts and tools.
My issue with Scheme standards… no typing, slowness, and inconsistencies galore. Each Scheme is its own language, basically… Still worth experiencing, but I think it’s not worth mastering for myself.
Exactly, Scheme is more of a subfamily of Lisps than a single language. The RnRS defines a base to build upon, but despite how small it is it still has flaws, and each implementation of Scheme stems from a different version anyway (I think Guile from 6 and Racket from 5, for example). Writing portable Scheme code is basically impossible because of all this fragmentation.
What you should do is pick one of the more used Schemes and stick with it. My favorite one is Racket.
In this case, the form-action directive would have stopped the attack. It is common to forget it when writing a CSP, especially because the fact that it is not affected by default-src is not common knowledge.
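A minimal illustration in Go (the policy value is the point; the 'self' sources and the handler wiring are just placeholders): form-action has to be spelled out because it is one of the directives that does not fall back to default-src.

    package main

    import "net/http"

    // withCSP adds a policy where form-action is listed explicitly: unlike
    // script-src, img-src, etc., it is not covered by default-src.
    func withCSP(next http.Handler) http.Handler {
        return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            w.Header().Set("Content-Security-Policy",
                "default-src 'self'; form-action 'self'")
            next.ServeHTTP(w, r)
        })
    }

    func main() {
        http.ListenAndServe(":8080", withCSP(http.FileServer(http.Dir("."))))
    }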
I’m working on a small service to handle CSP (Content Security Policy) reports. They seem to be tricky enough that lots of people ignore them, so I thought there was an opportunity to do something useful. It also means I am probably going to have to start contacting people to see who might be interested and get some feedback.
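For anyone unfamiliar with what those reports look like: with the legacy report-uri directive the browser POSTs a JSON body wrapped in a csp-report key (the newer Reporting API uses a different envelope). A rough sketch of the receiving end in Go (path and port are made up):

    package main

    import (
        "encoding/json"
        "log"
        "net/http"
    )

    type cspReport struct {
        Report struct {
            DocumentURI       string `json:"document-uri"`
            ViolatedDirective string `json:"violated-directive"`
            BlockedURI        string `json:"blocked-uri"`
        } `json:"csp-report"`
    }

    func main() {
        http.HandleFunc("/csp-reports", func(w http.ResponseWriter, r *http.Request) {
            var rep cspReport
            if err := json.NewDecoder(r.Body).Decode(&rep); err != nil {
                http.Error(w, "bad report", http.StatusBadRequest)
                return
            }
            log.Printf("CSP violation on %s: %s blocked %s",
                rep.Report.DocumentURI, rep.Report.ViolatedDirective, rep.Report.BlockedURI)
            w.WriteHeader(http.StatusNoContent)
        })
        log.Fatal(http.ListenAndServe(":8080", nil))
    }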
Just a nitpick, but forge itself doesn’t support (most) code-review-like functionality; that’s actually handled by external packages that are usually used alongside forge, e.g. code-review.el.
Working on my personal server. I’m trying to delegate services as much as possible: maintaining your own infrastructure is fun, but it takes time. I already have moved DNS to a provider, and I am slowly migrating multiple email accounts and Dovecot/Fetchmail to a single Google Workspace account (at least it is my own domain).
Improving my Gnus (Emacs) configuration to deal with the way Gmail handles IMAP labels.
Probably starting a new Satisfactory base now that Update 6 is out.
For work:
Finishing the Docker runner for my job scheduling platform (it currently only supports local execution and Kubernetes).
I love hearing about folks’ personal servers. I think I’m like you - I use it as my test bed and playground for things I want to learn. What is your favorite thing you host locally?
I used to use bare repos on an ssh server, but switched to gitea because it can be configured to create a repo on push. This allows me to easily create new repos from any machine with access.
Now I use it as an authentication store for some services. Gitea can be an oauth provider and is much simpler than many of the alternatives to run.
I host all my private Git repositories (because I can and because all Git operations are way faster than with GitHub, which is satisfying when pushing). I also have NGINX for my website, a private IRC server (ngircd) for a few friends, a mail setup with Fetchmail/Dovecot and Influx/Grafana (mostly for the fun of it).
Everything is running on FreeBSD and managed with a deployment system written in pure POSIX sh.
While it sometimes means a couple hours spent upgrading the system or fixing some kinks, it is satisfying. I have learned a lot about software and ops that way.
Note to any developer out there: running your own server will change the way you design software. Running in production is not easy.
When you run a server, you have to deal with software not working properly, because it happens all the time. Thus you learn how important it is to write precise and meaningful error messages with the right context. You learn how software should behave consistently, and how this behaviour should be documented.
Having to deal with software in production is a good wake-up call for all developers.
I’ve been keeping a personal server for years (website, VCS, file synchronisation, file sharing, chat servers/client/bots, central place for all my note keeping, all sorts of Internet processing), but I don’t share your experience. Things go wrong very rarely, mostly during development. The worst problem I’ve had was with the laptop killing its battery circuitry and then randomly shutting down, resolved by retiring the machine.
If anything, I’ve learnt to use #!/bin/sh -e and to actually keep logfiles: cron mails error output automatically, while systemd needs to be nudged into doing that. Knowing that something’s wrong at all is what’s important.
Also not who you’re replying to, but I love hosting Snipe-It locally to track all the machines flowing in and out of my repair lab (plus my own machine and parts collection).
I always felt like the right balance for DNS was: run BIND, but only as a “hidden master”; let some third party service AXFR from you and handle all of the public requests.
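For anyone who hasn’t set this up: the “hidden master” arrangement is mostly a matter of allowing zone transfers to the provider and notifying it on changes, while only the provider’s servers appear in the public NS records. An illustrative named.conf fragment (the zone name, file path and the provider’s transfer address are placeholders):

    zone "example.com" {
        type master;
        file "zones/example.com.db";
        allow-transfer { 192.0.2.53; };  // the provider's AXFR endpoint
        also-notify { 192.0.2.53; };
        notify yes;
    };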
Going back to my Common Lisp HTTP server. I started to switch for the simple multithreaded model (a fixed set of threads handling the connection queue) to a model where requests are read and decoded in a non-blocking way. Then a set of request handling threads pick them up. This way indelicate clients cannot block the entire server.
I thought this was a mildly interesting article. There is some discourse on the idea of HTTP status codes being a bit dated for the modern web, and that anything beyond the idea of a 200 or 404 should just be communicated in the response body.
The simplicity and immediate understandability of many of these status codes certainly has its charm though.
Many of these codes are still extremely important, although often at lower levels than what a typical web-app developer sees. They are by no means merely “simple” or “charming”, they are the bedrock of the Web.
For example, conditional requests and codes like 412, 304 are crucial for caching by user agents and proxies, and for techniques like MVCC and optimistic concurrency.
201 vs 200 conveys information in the response to a PUT about whether the resource already existed, and 409 indicates a PUT conflicts with existing data. 405 indicates a method isn’t useable with a resource, and 400 means the request is syntactically invalid.These are important parts of REST.
206 is used for partial GETs and range requests, which allow browsers to do things like resuming an interrupted download.
301, 302, 307 all enable redirects to work.
The HTTP/1.1 RFC explains all these in detail; it’s not abstruse, although it’s very long and there is a ton of stuff to keep track of.
It’s cute, but as @snej points out it’s easy to get these wrong or just for people to have differing interpretations of what they mean for a particular API. I still think it’s better to use specific error codes in the body.
It’s not just about their interpretation. They can never map to the wide range of domain specific errors your application can return. And more importantly any proxy between client and server can return them.
So if your query returns a 404 status, you have no idea which software actually returned the error, which resource is “not found”, and what not found actually means precisely
HTTP is a transport protocol, and HTTP status codes should only be used to signal errors in the transport layer. Infortunately most HTTP APIs happily mix application and transports, but it does not make it right.
Hmm, I would suggest that HTTP is very much at the application layer, and not merely a transport layer protocol. This (transport protocol) view of HTTP, in my estimation, led to, or at least abetted many initiatives* that also deliberately ignore the status codes available, and how HTTP works, and as such, led to the loosening of general understanding (which in turn begets these “charming” views mentioned earlier in this interesting thread).
*I’m thinking of early SOAP, WS-Deathstar, etc.
I’d love to see a study about the actual content of the Fediverse. While the graph shows a sharp increase, I’ve also seen recently a lot more spam (mostly the usual non-sensical messages from accounts with female portraits trying to get you to follow a profile link to a NSFW website).
I do not believe one can grow a social network without having to choose between anonymous accounts and mass spam, but I’d love to be proven wrong.
I don’t think anonymous vs not anonymous makes a difference.
People spam less under their real name because it costs them reputation. I would expect that any other currency works just as well. One could for example not accept messages by people who are more than 5 degrees of seperation away from oneself. So messaging you would cost the sender the work to get close enough to you.
Is that so? Most of the spam I receive that makes it through filters is under very real names, very openly.
Aside from anonymity, I’d even argue that Federation has a bit of an edge there, as moderation is a) on many shoulders and b) instances that succumb to the problem can be cut off (temporarily). This is regularly happening.
I worked at a place where we had other teams contacting our team’s manager asking him to please stop scheduling so many meetings for our team because we weren’t able to meet our commitments to other teams.
When it’s that visible, you know it’s bad.
In one part of Microsoft, I heard that they wrote a meeting cost calculator that took the levels of everyone and the median salary for that level and gave an approximate cost of the meeting. I tried doing this as a back of an envelope calculation and worked out that one recurring meeting I was in cost over half a million dollars a year. I wish Outlook had this kind of of thing integrated.
I’ve seen this referred to as the “burn rate” of the meeting and I find it useful to think about sometimes, especially for meetings that have a lot of people or low engagement.
I had a manager who would schedule all hands meetings every time I told them we didn’t have the capacity to do something urgent. I never managed to get it through his skull, that an day long all hands would only slow us down even more… I would even be sitting there in the meeting room with my laptop, fanatically trying to get a server back on line or something like that while various other non-technical types would ask me how long it would take…
It took me quite a while to realize that I should really not try to protect management from their own mistakes. In that kind of situation, you are expected to attend the meeting. Attend the meeting. If the server is down, well, not much you can do, you are in a mandatory meeting.
At some point, if everything fails, someone higher up the chain will realize there’s a problem. And if no one does (I’ve seen it happen), who cares as long as you are being paid? Nobody is going to reward you for your efforts; quite the contrary, you will probably be blamed for what your boss will perceived as attempts to go against him (again, been there, done that).
For the first time in months, I’m really not sure.
I should focus on finding a product idea, but apparently the more I try, the less ideas I get. I found a lot of apparently profitable small SaaS maintained by solo developers, so clearly it is possible. But every potential idea I get is already implemented by the dozen. And I’m not sure how to get out of the crowd since I’m a nobody.
Improving the whole Twitter/Mastodon audience thing to solve the nobody problem isn’t really working really well. Most articles on the subject peddle the same abstract ideas such as “write interesting messages” or “post regularly”. But it is more and more clear that there is a huge luck factor (is someone successful going to repost you with significant traction).
So more thinking and researching I guess.
It’s not just luck, it’s having a good foundation for a product and luck. It’s kind of an annoying thing really. Interestingly enough, while you’re working on something else that’s already been done, you’re likely going to implement something in a novel way that’s unexpected and different. Just act on money your bad ideas for something that already exists, and do it in an interesting way.
Yes, luck is just a factor. The rest is about going through the motions and be prepared for opportunities. The hard part is to know if your preparations are correct.
Regarding competition, I feel like you’re right. At least having competitors mean the idea makes sense. Up to me to find the right spin.
Web form of the tools used on https://www.fourmilab.ch/hackdiet/ to graph weight in a meaningful and actionable way?
This is incredible. I think it’s mostly large, established businesses where this is a thing. On the other end of the spectrum you have the overworked startup workers who have to work lots of overtime, and the struggling underpaid freelancers.
I’m not convinced it’s size so much as location of the department within the company and whether that department is on a critical revenue path. I mean it’s hard to imagine this at a tiny to small (<25 headcount) company but such a company won’t really have peripheral departments as such and would somehow need to be simultaneously very dysfunctional and also successful to sustain such a situation.
The original author keeps talking about “working in tech” but the actual jobs listed as examples suggest otherwise: “software developer [at] one of the world’s most prestigious investment banks”, “data engineer for one the world’s largest telecommunications companies”, “data scientist for a large oil company”, “quant for one of the world’s most important investment banks.”
First off, these are not what I’d personally call the “tech industry”.
More importantly, these don’t sound like positions which are on a direct critical path to producing day-to-day revenue. Similarly importantly, they’re also not exactly cost centres within the company, whose productivity is typically watched like a hawk by the bean counters. Instead, they seem more like long-term strategic roles, vaguely tasked with improving revenue or profit at some point in the future. It’s difficult to measure productivity here in any meaningful way, so if leadership has little genuine interest in that aspect, departmental sub-culture can quickly get bogged down in unproductive pretend busywork.
But what do I know, I’ve been contracting for years and am perpetually busy doing actual cerebral work: research, development, debugging, informing decisions on product direction, etc.. There’s plenty of that going on if you make sure to insert yourself at the positions where money is being made, or at least where there’s a product being developed with an anticipation of direct revenue generation.
I’ve seen very similar things at least twice in small companies (less than a hundred people in the tech department). In both cases, Scrum and Agile (which had nothing to do with the original manifesto but this is how it is nowadays) were religion, and you could see this kind of insane inefficiency all the time. But no one but a handful of employees cared about it and they all got into trouble.
From what I’ve seen, managers love this kind of structure because it gives them visibility, control and protection (“every one does Scrum/Agile so it is the right way; if productivity is low, let’s blame employees and not management or the process”). Most employees (managers included) also have no incentive beeing more productive: you do not get more money, and you get more work (and more expectations) every single time. So yes, the majority will vocally announce that a 1h hour task is really hard and will take a week. Because why would they do otherwise?
Last time I was in this situation, I managed to sidestep the problem by joining a new tiny separate team which operated independently and removed all the BS (Scrum, Agile, standups, reviews…) and in general concentrated on getting things done. It worked until a new CTO fired the lead and axed the team for political reasons but this is another story.
I’m guessing maybe it isn’t: a single abnormally productive team potentially makes many people look very bad, and whoever leads the team is therefore dangerous and threatens the position of other people in the company without even trying. I’d find it very plausible that the productivity of your team was the root cause of the political issues that eventually unfolded.
This was 80% of the problem indeed. When I said it was another story, I meant that this kind of political game was unrelated to my previous comments on Scrum/Agile. Politics is everywhere, whether the people involved are productive or not.
It’s not just a question of people not wanting to “look bad,” though.
As a professional manager, about 75% of my job is managing up and out to maintain and improve the legibility of my team’s work to the rest of the org. Not because I need to build a happy little empire, but because that’s how I gain evidence to use when arguing for the next round of appealing project assignments, career development, hiring, and promotions for my team.
That doesn’t mean I need to invent busywork for them, but it does mean that random, well-intentioned but poorly-aimed contributions aren’t going to net any real org-level recognition or benefit for the team, or that teammate individually. So the other 25% of my energy goes to making sure my team members understand where their work fits in that larger framework, how to gain recognition and put their time into engaging and business-critical projects, etc., etc.
…then there’s another 50% of my time that goes to writing: emails to peers and collaborators whose support we need, ticket and incident report updates, job listings, performance evaluations, notes to myself, etc. Add another 50% specifically for actually thinking ahead to where we might be in 9-18 months and laying the groundwork for staff development and/or hiring needed to have the capacity for it, as well as the design, product, and marketing buy-in so we aren’t blocked asking for go-to-market help.
Add up the above and you can totally see why middle managers are useless overhead who contribute nothing, and everyone would be better off working in a pure meritocracy without anyone “telling them what to do.”
omg, I’ve recently worked in a ‘unicorn’ where everyone one was preoccupied with how their work will look like from the outside and if it will improve their ‘promo package’. Never before have I worked in a place so full of buzzword driven projects that barely worked. But hey, you need one more cross team project with dynamodb to get that staff eng promo! 🙃 < /rant>
Given your work history (from your profile), have you seen an increase in engineers being willfully ignorant about how their pet project does or does not fit into the big picture of their employer?
I ask this from having some reports who, while quite sharp, over half the time cannot be left alone to make progress without getting bogged-down in best-practices and axe-sharpening. Would be interested to hear how you’ve handled that, if you’ve encountered it.
I don’t think there’s any kind of silver bullet, and obviously not everyone is motivated by pay, title, or other forms of institutional recognition.
But over the medium-to-long term, I think the main thing is to show consistently and honestly how paying attention to those drivers gets you more of whatever it is you want from the larger org: autonomy, authority, compensation, exposure in the larger industry, etc.
Folks who are given all the right context, flexibility, and support to find a path that balances their personal goals and interests with the larger team and just persistently don’t are actually performing poorly, no matter their technical abilities.
Of course, not all organizations are actually true to the ethos of “do good by the team and good things will happen for you individually.” Sometimes it’s worth to go to battle to improve it; other times you have accept that a particular boss/biz unit/company is quite happy to keep making decisions based on instinct and soft influence. (What to do about the latter is one of the truly sticky + hard-to-solve problems for me in the entire field of engineering management, and IME the thing that will make me and my team flip the bozo bit hard on our upper management chain.)
Thanks for the reply, that’s quite helpful and matches a lot of what’s been banging around in my head.
Would you be able to elaborate on the last paragraph about making decisions based on instinct and soft influence? Why is it a problem and what do you mean by “soft influence” in particular? Quite interested to understand more.
Both points (instinct + soft influence) refer to the opposite of “data-driven” decision-making. I.e., “I know you and we think alike” so I’m inclined to support your efforts + conclusions. Or conversely, “that thing you’re saying doesn’t fit my mental model,” so even though there are processes and channels in place for us to talk about it and come to some sort of agreement, I can’t be bothered.
It’s also described as “type 1” thinking in the Kahneman model (fast vs. slow). Not inherently wrong, but also very prone to letting bias and comfort drown out actually-critical information when you’re wrestling with hard choices.
Being on the “supplicant” end and trying to use facts to argue against unquestioned biases is demoralizing and often pointless, which is the primary failure mode I was calling out.
This is true and relevant, but it’s also key to point out why instinct-driven decisions are preferred in so many contexts.
By comparison, data-driven decision-making is slower, much more expensive, and often (due to poor statistical rigor) no better.
Twice in my career I have worked with someone whose instincts consistently steered the team in the right direction, and given the option that’s what I’d always prefer. Both of these people were kind and understanding to supplicants like me, and - with persistence - could be persuaded to see new perspectives.
Excellent points! Claiming to be “data driven” while cherry-picking the models and signals you want is really another form of instinctive decision-making…but also, the time + energy needed to do any kind of science in the workplace can easily be more than you (individually or as a group) have to give.
If you have collaborators (particularly in leadership roles) with a) good instincts, b) the willingness to change their mind, and c) an attitude of kindness towards those who challenge their answers, then you have found someone worth working with over the long-term. I personally have followed teammates who showed those traits between companies more than once, and aspire to at least very occasionally be that person for someone else.
That’s helpful, thanks for clarifying.
I think this is part of the natural life-cycle of the software developer - the majority of developers I’ve known have had an extended period where this was true, usually around 7-10 years professional experience.
This is complicated by most of them going into management around the 12-year mark, meaning that only 2-3 years of their careers combine “experienced enough to get it done” with “able to regulate their focus to a narrow target”.
I think those timelines have been compressed these days. For better or worse, many people hold senior or higher engineering roles with significantly fewer than 7-10 years experience.
My experience suggests that what you’ve observed still happens - just with less experience behind the best-practices and axe-sharpening o_O
My team explicitly doesn’t use scrum and all the other teams are asking: “How then would you ever get anything done?”
Well… a lot better.
Mostly trying to figure out what I’m doing wrong on Twitter. I have much more engagement on Mastodon, while I cannot seem to find how to grow on Twitter. And even on Mastodon it is slow.
I feel a bit dirty about this kind of marketing approach, but I learned the hard way that without an audience, it is almost impossible to build/sell products, find contract work or meet potential associates. Ultimately I see this as a way to meet and engage with like-minded people.
Rest of the time I’m going to start playing Starsector. Took some time to get convinced to give it a try, but it seems to be the kind of complex sandbox I enjoy.
Is it worth the time? The C-level of my company was all about social media, and we had to incorporate all those fancy social media buttons in our apps. With the advent of GDPR those had to be converted to opt-in, what a waste of resources. FB and Twitter up and down.
Until an old school sales guy came along, looked at the numbers in the CRM and told them that the bigger part of our leads come from our satisfied customers’ recommendations. Meanwhile the social media buttons are gone completely (me gusta mucho), and they do lots of webinars and such things.
Worth the time? I really hope so.
I am not an established company with a client base. I’m trying to bootstrap a solo business, and I can assure you that finding contract work or potential clients is surprisingly hard when you do not have an established network or an audience.
I’ve lost count of the number of successful solo bootstraps which start with “I presented my idea to my thousands of Twitter followers and a lot of them preordered”.
And my last attempt with targeted cold mailing was quite underwhelming.
Obviously I’m doing something (multiple somethings really) wrong, but I’m trying to fix it.
I read a few articles about social media growth. Didn’t try out everything, but a few things helped:
and some more things I’m probably forgetting. It depends on your comfort level too.
This seems to match what I’ve read on the subject. I started to post several times a day, and I really need to start replying when I can add something to an existing discussion. Using TweetDeck with several search columns is already really helpful.
Taking notes about the whole graphics thing, I’m really not good with that.
I’ve been a lone developer for most of my career, and had I known that programming would turn into a team sport, I might not have even become a programmer. I don’t think I’ve written confusing code, and rarely have I gone back to code I’ve written and not have it make sense to me, even years later. And the one time I did pair programming, it was painful and I don’t wish to do it ever again (I was a mere secretary, taking dictation and shut up! Keep typing!).
Do not worry, you are not alone. If it helps, I found it possible to work on solo projects in every company I ever worked for. And the more senior your are, the easier it is.
I’m being pulled more and more into team-sport-coding. There are days when I just want to lock myself into a room, focus on the problem, and write the code. Sometimes I just can’t deal with the meetings, the pairing sessions, the mentoring, the code reviews. I wish I had a gig as a lone programmer.
Have you considered going into consulting/freelancing? Depending on the type of job you might be able to work alone on the code.
This sounds like a good idea, but I’m a bit intimidated by the whole process of finding customers and taking care of my own business. Moreover, in the country I live in, I should earn at least double my present income to live the same lifestyle as a consultant.
Yeah, finding customers can be difficult for introverts without a large network (I have the same problem). You could always frequent those freelancing job boards to get started and hopefully build up a clientele, but realise you’ll likely have to fight bottom-of-the-barrel “developers” on price.
Unless you're raking in tons of cash at a big tech company, if you calculate what you make per hour, you should be able to (eventually) charge double that quite easily. Remember, your employer needs to charge the customer for your time, pay your wage, social security/pension, rent, and employee hardware, cover the "overhead" (wages for managers, reception, cleaners, and whatever other roles in the company don't work directly for the customer), and still make a profit.
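To make that concrete, here is a rough back-of-the-envelope sketch. Every number in it (salary, overhead factor, billable ratio) is a made-up assumption chosen only to illustrate the rule of thumb:

    # Back-of-the-envelope sketch; all numbers are made-up assumptions.
    salary = 60_000          # hypothetical gross yearly salary as an employee
    hours = 1_700            # roughly a full-time year after holidays

    employee_rate = salary / hours                         # ~35/hour gross
    overhead = 1.5                                         # employer-side taxes, pension, rent, managers...
    employer_cost_rate = employee_rate * overhead          # ~53/hour: what an hour of your time already costs

    billable_ratio = 0.6                                   # freelancers only bill part of their time
    freelance_rate = employer_cost_rate / billable_ratio   # ~88/hour, before any profit margin

    print(round(employee_rate), round(employer_cost_rate), round(freelance_rate))

Even with fairly conservative assumptions, double your employee hourly rate comes out closer to a floor than a ceiling.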
I like this kind of topic, it forces me to focus on short term goals.
Continuing (and hopefully finishing) my OpenAPI implementation in my Common Lisp toolkit. This is a prerequisite for working with APIs such as Stripe and GitHub, so I needed to bite the bullet and do the work. Again, not the most pragmatic work I've ever done, but it gives me time to think about product ideas.
Looking at the possibilities described by the author, it seems they do not envision the possibility of selling support contracts. Do not produce a commercial version: simply explain that you won't provide features, bug fixes or support to users who do not have a support contract. Everything else is best effort.
Of course companies are not going to "contribute" unless they have to. The first reason being that your lead/manager/director does not have the legal option to make donations to external individuals. But they do have a budget which can be used for support contracts.
Am I missing something?
It’s a polyfill. Either it works for everyone, or it doesn’t. There are hardly any bug tickets because of how simple yet crucial it is.
To quote the author:
If nobody cares about it because it is just a polyfill, then the author should either stop maintaining it, or do whatever he wants with it and ignore everything else. If this is actually important, if core-js not being updated causes problems for companies using it, then selling a support contract absolutely makes sense. Of course it means having to do some marketing and contacting companies directly, but there's no free money.
As an example, I've worked at companies that used Sidekiq, and paying for the Pro version was a no-brainer. Features, support: companies will pay for it if they need it. But you need to sell something, not just ask for money.
But there are no extras to sell, no room for special treatment for paying customers. It has one goal and it does it really well. There have only been a few dozen bugs in over a decade. It's a lot of work, but the work is extremely obvious and easily tested.
I think they meant more in the sense of Red Hat vs CentOS Linux (well, as it was a while ago). You wanted rock-solid, commercially developed Linux? Get CentOS. You also wanted support for when the shit hits the fan? You pay for a Red Hat licence for the same code, and you get to hand your problems off to someone else.
I think that's something that all OSS developers (at least the ones that hope for financial support) should understand. You need to give your users an excuse, however small, to pay you; otherwise they might be as helpless as you are in convincing their management. Even the management themselves might be helpless, because they might not be able to justify cashing out randomly while they are themselves ultimately responsible to the shareholders.
As an employee, it’s a needless waste of social capital for me to ask a superior to donate, say $500 to a project we use, but it’s very easy to ask for a minor purchase of $500 that will marginally increase my (team’s) productivity.
He really should talk to the owners of https://github.com/briansmith/ring and https://github.com/rui314/mold. They both have an open source product with a support contract model. Read this thread: https://github.com/briansmith/ring/issues/774. This is how you do open source business.
Though if you look at crates.io, it seems he’s not yanking anymore. https://crates.io/crates/ring/versions
Pandoc is one of the handful of open source projects I love:
Conversely, installing this in Arch requires like 100 other Haskell packages (I know it only shows 75 but I believe some dependencies have other Haskell dependencies).
That is unfortunate; pandoc releases static binaries, which work great in my experience: https://github.com/jgm/pandoc/releases/download/3.0/pandoc-3.0-linux-amd64.tar.gz
The AUR has a pandoc-bin package.
I don’t see the problem. pandoc is very clearly an integration project, integrating many document formats, providing a unified AST, etc.
It is a very reasonable path to pick high-quality and accepted dependencies in the ecosystem over implementing your own in this case. With that in mind, I'm actually surprised that it's just 100.
You cannot blame Pandoc: someone decided that each Haskell library had to be its own separate Arch Linux package, and there is not much you can do about it.
This is unfortunate, and it only works because there are very few Haskell packages, so the chances of having version conflicts are low. But it is still annoying to run pacman and have hundreds of packages to update.
It's been a while since I touched it, but my recollection is that pacman is still really fast with lots of little files and packages, so no biggie. Is that still the case?
Performance is fine (but it might be that my NVMe disk is doing all the work). It is mostly annoying when reviewing package lists every time I run an update and Pandoc and its dependencies are in it.
This kind of issue is precisely why I’d like to investigate something such as Guix, but I haven’t found the time yet.
Thanks
With the new split of pandoc-cli, pandoc-lua, pandoc-server, etc. this might be better. For example, these are HTTP server libraries:
And pandoc-cli now has an option to build without the lua and server dependencies. I don’t really use either of those functionalities myself.
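If I remember correctly those are plain cabal flags on pandoc-cli, so a slimmer build would look something like the following (flag names quoted from memory; double-check them against pandoc-cli.cabal before relying on this):

    cabal install pandoc-cli -f-lua -f-server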
Agree with this, and I'll add that I really appreciate pandoc's compatibility. I use pandoc and a makefile to build a static site; it started as maybe 5 lines of make, and as I wanted to add additional things it was easy to add pre- and post-processing steps, do templating, write filters, etc. While filters do require learning a bit about pandoc internals, they are just programs that read from stdin and write to stdout. It generally works great out of the box and can be integrated into unix-y pipelines really well, without having to own the build process end-to-end like traditional static site generators or other document-building toolchains.
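To illustrate how small a filter can be, here is a minimal sketch of a JSON filter in Python. The file name and pipeline are hypothetical, and real-world filters usually lean on helper libraries such as pandocfilters or panflute instead of walking the AST by hand:

    #!/usr/bin/env python3
    # caps.py -- minimal pandoc JSON filter: read the document AST from stdin,
    # uppercase every Str node, write the modified AST back to stdout.
    import json
    import sys

    def walk(node):
        # Recursively visit the pandoc AST (nested dicts and lists).
        if isinstance(node, dict):
            if node.get("t") == "Str":
                node["c"] = node["c"].upper()
            else:
                for value in node.values():
                    walk(value)
        elif isinstance(node, list):
            for item in node:
                walk(item)

    doc = json.load(sys.stdin)
    walk(doc)
    json.dump(doc, sys.stdout)

After a chmod +x it slots into a pipeline as pandoc input.md --filter ./caps.py -o output.html.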
In general I'm very enthusiastic about Guile. I abandoned Common Lisp years ago because it was a dead end, and never really investigated Scheme. Even though the RnRS standardization process does not seem to be going anywhere, several Scheme implementations are actively developed, so I might start using it for small scripts and tools.
My issues with the Scheme standards: no typing, slowness, and inconsistencies galore. Each Scheme is basically its own language. Still worth experiencing, but I don't think it's worth mastering, for myself.
Exactly, Scheme is more of a subfamily of Lisps than a single language. The RnRS defines a base to build upon, but despite how small it is, it still has flaws, and each implementation of Scheme stems from a different version anyway (I think Guile from R6RS and Racket from R5RS, for example). Writing portable Scheme code is basically impossible because of all this fragmentation.
What you should do is pick one of the more used Schemes and stick with it. My favorite one is Racket.
In this case, the form-action directive would have stopped the attack. It is common to forget it when writing a CSP, especially because the fact that it is not affected by default-src is not common knowledge.
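For illustration, a policy along these lines (placeholder values; adapt the sources to your site) would have covered it:

    Content-Security-Policy: default-src 'self'; form-action 'self'

With default-src 'self' alone, an injected form can still post its fields to an attacker-controlled origin, because form submissions are only restricted once form-action is present.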
I'm working on a small service to handle CSP (Content Security Policy) reports. They seem to be tricky enough that lots of people ignore them, so I thought there was an opportunity to do something useful. It also means I am probably going to have to start contacting people to see who might be interested and get some feedback.
In the same vein, if you are using Emacs, Forge (https://github.com/magit/forge) extends the incredibly well designed Magit to work with GitHub, including reviews. The documentation is quite good too.
Just a nitpick, but forge itself doesn't support (most) code-review-like functionality; that's actually handled by external packages that are usually used alongside forge, e.g. code-review.el.
For myself:
For work:
I love hearing about folks' personal servers. I think I'm like you: I use mine as a test bed and playground for things I want to learn. What is your favorite thing you host locally?
Not who you’re replying to, but I’d probably list gitea as my favorite thing I host locally. It’s small, lightweight and an amazing little service.
And if you don’t care about collaboration features in the web ui, cgit is a very light and straightforward option.
I used to use bare repos on an ssh server, but switched to gitea because it can be configured to create a repo on push. This allows me to easily create new repos from any machine with access.
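If I remember right, that behaviour is just a couple of lines in Gitea's app.ini (setting names from memory; worth double-checking against the Gitea configuration cheat sheet):

    [repository]
    ; let "git push" to a repository that does not exist yet create it under your account
    ENABLE_PUSH_CREATE_USER = true
    ; the same for organisations you belong to
    ENABLE_PUSH_CREATE_ORG = true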
Now I use it as an authentication store for some services. Gitea can be an OAuth provider and is much simpler to run than many of the alternatives.
I host all my private Git repositories (because I can and because all Git operations are way faster than with GitHub, which is satisfying when pushing). I also have NGINX for my website, a private IRC server (ngircd) for a few friends, a mail setup with Fetchmail/Dovecot and Influx/Grafana (mostly for the fun of it).
Everything is running on FreeBSD and managed with a deployment system written in pure POSIX sh.
While it sometimes means a couple hours spent upgrading the system or fixing some kinks, it is satisfying. I have learned a lot about software and ops that way.
Note to any developer out there: running your own server will change the way you design software. Running in production is not easy.
This is a weird statement. Change how?
When you run a server, you have to deal with software not working properly, because it happens all the time. Thus you learn how important it is to write precise and meaningful error messages with the right context. You learn how software should behave consistently, and how this behaviour should be documented.
Having to deal with software in production is a good wakeup call for all developers.
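A trivial sketch of the point (the function, path and message are all made up): the same failure is either useless or actionable depending on how much context the error carries.

    import json

    def load_config(path):
        # Hypothetical helper: re-raise with the path and the underlying cause attached,
        # so the log says *which* file failed and *why*, not just "invalid config".
        try:
            with open(path) as f:
                return json.load(f)
        except (OSError, json.JSONDecodeError) as exc:
            raise RuntimeError(f"cannot load config {path!r}: {exc}") from exc

    if __name__ == "__main__":
        try:
            load_config("/etc/myapp/config.json")   # hypothetical path
        except RuntimeError as err:
            print(err)   # e.g. cannot load config '/etc/myapp/config.json': [Errno 2] No such file...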
I’ve been keeping a personal server for years (website, VCS, file synchronisation, file sharing, chat servers/client/bots, central place for all my note keeping, all sorts of Internet processing), but I don’t share your experience. Things go wrong very rarely, mostly during development. The worst problem I’ve had was with the laptop killing its battery circuitry and then randomly shutting down, resolved by retiring the machine.
If anything, I've learnt to use #!/bin/sh -e and actually keep logfiles: cron mails error output automatically, while systemd needs to be nudged into doing that. Knowing that something's wrong at all is what's important.
Also not who you're replying to, but I love hosting Snipe-It locally to track all the machines flowing in and out of my repair lab (plus my own machine and parts collection).
I always felt like the right balance for DNS was: run BIND, but only as a “hidden master”; let some third party service AXFR from you and handle all of the public requests.
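Roughly, the hidden-master side of the BIND configuration looks like this (the zone name and 192.0.2.53, standing in for the provider's transfer servers, are placeholders):

    zone "example.org" {
        type master;
        file "/var/named/example.org.zone";
        allow-transfer { 192.0.2.53; };  // let the public provider AXFR the zone from us
        also-notify { 192.0.2.53; };     // tell it to re-transfer after each update
        notify explicit;                 // only notify the also-notify list
    };

The provider's servers are the ones listed in the published NS records and answer all public queries; the master itself never appears there.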
That’s what I do for my domains. It works well.
Yawn, another sans-serif.
Sure, Go Mono is the one true serif monospaced typeface, but it would be nice to see some competition.
I've used and liked Verily Serif Mono as well.
The Triplicate font by Matthew Butterick is both monospace and a true serif.
JetBrains Mono NL is an excellent alternative.