I nearly posted this as an ‘ask’: Slack is not good for $WORK’s use case because it does not have an on-premise option. What on-premise alternatives are people using/would you recommend?
I’ve used Mattermost before, which AFAIK has an on-prem version, but only as a user, not setting it up or administering it, so I can’t speak to that end.
Same, actually. It does look very interesting; I’d be very interested to hear whether anyone has experience with it.
We’ve used Mattermost for a few years now. It’s pretty easy to set up and maintain: you basically just replace the Go binary every 30 days with the new version. We recently moved to the version integrated with GitLab, and now GitLab handles it for us. Even easier, since GitLab is just a system package you upgrade.
A lot of people have said Mattermost, might be a good drop-in replacement. According to the orange site they’re considering dropping a “welcome from Hipchat” introductory offer, which is probably a smart move.
IIRC mattermost is open core. I’ve heard good things about zulip. Personally, I like matrix, which federates and bridges
If systemd had restricted itself to services, it would have been a nice init replacement. The problem I have with systemd is everything else it does.
I recently discovered how horribly complicated traditional init scripts are whilst using Alpine Linux. OpenRC might be modern, but it’s still complicated.
Runit seems to be the nicest I’ve come across. It asks the question “why do we need to do all of this anyway? What’s the point?”
It rejects the idea of forking and instead requires everything to run in the foreground:
/etc/sv/nginx/run:
#!/bin/sh
exec nginx -g 'daemon off;'
/etc/sv/smbd/run:
#!/bin/sh
mkdir -p /run/samba
exec smbd -F -S
/etc/sv/murmur/run:
#!/bin/sh
exec murmurd -ini /etc/murmur.ini -fg 2>&1
Waiting for other services to load first does not require special features in the init system itself. Instead you can write the dependency directly into the service file in the form of a “start this service” request:
/etc/sv/cron/run:
#!/bin/sh
sv start socklog-unix || exit 1
exec cron -f
Where my implementation of runit (Void Linux) seems to fall flat on its face is logging. I hoped it would do something nice like redirect stdout and stderr of these supervised processes by default. Instead you manually have to create a new file and folder for each service that explicitly runs its own copy of the logger. Annoying. I hope I’ve been missing something.
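For anyone unfamiliar, the per-service logging setup being complained about looks roughly like this (a sketch; the path and the -tt timestamp flag follow svlogd(8), but adjust to taste):

```shell
#!/bin/sh
# /etc/sv/nginx/log/run -- runsv starts this next to ./run and pipes the
# service's stdout into it; svlogd writes rotated, timestamped logs into
# the directory given as its argument.
[ -d /var/log/nginx ] || mkdir -p /var/log/nginx
exec svlogd -tt /var/log/nginx
```

Every service that wants its output captured needs its own copy of this, which is exactly the repetition at issue.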
The only other feature I can think of is “reloading” a service, which Aker does in the article via this line:
ExecReload=kill -HUP $MAINPID
I’d make the argument that in all circumstances where you need this you could probably run the command yourself. Thoughts?
Where my implementation of runit (Void Linux) seems to fall flat on its face is logging. I hoped it would do something nice like redirect stdout and stderr of these supervised processes by default. Instead you manually have to create a new file and folder for each service that explicitly runs its own copy of the logger. Annoying. I hope I’ve been missing something.
The logging mechanism works this way to be stable: logs are only lost if both runsv and the log service die.
Another thing about separate logging services is that stdout/stderr are not necessarily tagged; adding all this stuff to runsv would just bloat it.
There is definitely room for improvement, as logger(1) has been broken for some time in the way Void currently uses it (you can blame systemd for that).
My idea for simplifying logging services and centralizing how logging is done can be found here: https://github.com/voidlinux/void-runit/pull/65.
For me, the ability to exec svlogd(8) from vlogger(8) to get a lossless logging mechanism is more important than the main functionality of replacing logger(1).
Instead you can write the dependency directly into the service file in the form of a “start this service” request
But that solves neither starting daemons in parallel, nor in some cases starting them at all, if they are run in the ‘wrong’ order. Depending on the network being set up, for example, adds complexity to each of those shell scripts.
I’m of the opinion that a DSL of whitelisted items (systemd) is much nicer to handle than writing shell scripts, along with standardized commands instead of having to know which services accept ‘reload’ vs ‘restart’ or some other variation in commands; those kinds of niceties are gone when each shell script is its own separate interface.
The runit/daemontools philosophy is to just keep trying until something finally runs. So if the order is wrong, presumably the service dies when a dependency is not running, in which case it’ll just get restarted. So eventually things progress towards a functioning state. IMO, given that a service needs to handle the services it depends on crashing at any time anyway to ensure correct behaviour, I don’t feel there is significant value in encoding this in an init system. A dependency could also be moved to another machine, where init-level dependency tracking would not work at all.
It’s the same philosophy as network-level dependencies. A web app that depends on a mail service for some operations is not going to shut down or wait to boot if the mail service is down. Each dependency should have tunable retry logic, usually with an exponential backoff.
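As a sketch of that retry-with-backoff idea in plain shell (the function name and the 60-second cap are my own choices, not anything runit provides):

```shell
#!/bin/sh
# Retry a command until it succeeds, doubling the delay between
# attempts and capping it at 60 seconds.
retry() {
  delay=1
  until "$@"; do
    sleep "$delay"
    delay=$((delay * 2))
    if [ "$delay" -gt 60 ]; then
      delay=60
    fi
  done
}
```

A run script could then do something like `retry check-mail-service-up` before exec’ing the dependent daemon.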
But that neither solves starting daemons in parallel, or even at all, if they are run in the ‘wrong’ order.
That was my initial thought, but it turns out the opposite is true. The services are retried until they work. Things are definitely parallelized: there is no ‘exit’ in these scripts, so there is no way of running them in a linear (non-parallel) fashion.
Ignoring the theory: void’s runit provides the second fastest init boot I’ve ever had. The only thing that beats it is a custom init I wrote, but that was very hardware (ARM Chromebook) and user specific.
Dependency resolving on daemon manager level is very important so that it will kill/restart dependent services.
runit and s6 also don’t support cgroups, which can be very useful.
Dependency resolving on daemon manager level is very important so that it will kill/restart dependent services
Why? The runit/daemontools philosophy is just to try to keep something running forever, so if something dies, just restart it. If one restarts a service, then either those that depend on it will die or they will handle it fine and continue with their lives.
either those that depend on it will die or they will handle it fine
If they die, and are configured to restart, they will keep bouncing up and down while the dependency is down? I think having dependency resolution is definitely better than that. Restart the dependency, then the dependent.
It’s a computer, it’s meant to do dumb things over and over again. And presumably that faulty component will be fixed pretty quickly anyways, right?
It’s a computer, it’s meant to do dumb things over and over again
I would rather have my computer do less dumb things over and over personally.
And presumably that faulty component will be fixed pretty quickly anyways, right?
Maybe; it depends on what went wrong precisely, how easy it is to fix, etc. We’re not necessarily just talking about standard daemons - plenty of places run their own custom services (web apps, microservices, whatever). The dependency tree can be complicated. Ideally once something is fixed everything that depends on it can restart immediately, rather than waiting for the next automatic attempt which could (with the exponential backoff that proponents typically propose) take quite a while. And personally I’d rather have my logs show only a single failure rather than several for one incident.
But, there are merits to having a super-simple system too, I can see that. It depends on your needs and preferences. I think both ways of handling things are valid; I prefer dependency management, but I’m not a fan of Systemd.
I would rather have my computer do less dumb things over and over personally.
Why, though? What’s the technical argument? daemontools (and I assume runit) sleeps 1 second between retries, which for a computer is basically equivalent to being entirely idle. It seems to me that a lot of people just get a bad feeling about running something that will immediately crash.
Maybe; it depends on what went wrong precisely, how easy it is to fix, etc. We’re not necessarily just talking about standard daemons - plenty of places run their own custom services (web apps, microservices, whatever).
What’s the distinction here? Also, with microservices the dependency graph in the init system almost certainly doesn’t represent the dependency graph of the microservice as it’s likely talking to services on other machines.
I think both ways of handling things are valid
Yeah, I cannot provide an objective argument as to why one should prefer one to the other. I do think this is a nice little example of the slow creep of complexity in systems. Adding a pinch of dependency management here because it feels right, a teaspoon of plugin system there because we want things to be extensible, and a deciliter of proxies everywhere because of microservices. I think it’s worth stepping back every now and again and considering where we want to spend our complexity budget. I, personally, don’t want to spend it on the init system, so I like the simple approach here (especially since with microservices the init dependency graph doesn’t reflect the reality of the service anymore). But as you point out, positions may vary.
Why, though? What’s the technical argument?
Unnecessary wakeup, power use (especially for a laptop), noise in the logs from restarts that were always bound to fail, unnecessary delay before restart when restart actually does become possible. None of these arguments are particularly strong, but they’re not completely invalid either.
We’re not necessarily just talking about standard daemons …
What’s the distinction here?
I was trying to point out that we shouldn’t make too many generalisations about how services might behave when they have a dependency missing, nor assume that it is always ok just to let them fail (edit:) or that they will be easy to fix. There could be exceptions.
Perhaps wandering off topic, but this is a good way to trigger even worse cascade failures.
eg, an RSS reader that falls back to polling every second if it gets something other than 200. I retire a URL, and now a million clients start pounding my server with a flood of traffic.
There are a number of local services (time, dns) which probably make some noise upon startup. It may not annoy you to have one computer misbehave, but the recipient of that noise may disagree.
In short, dumb systems are irresponsible.
But what is someone supposed to do? I cannot force a million people using my RSS tool not to retry every second on failure. This is just the reality of running services. Not to mention all the other issues that come up with not being in a controlled environment and running something loose on the internet such as being DDoS’d.
I think you are responsible if you are the one who puts the dumb loop in your code. If end users do something dumb, then that’s on them, but especially, especially, for failure cases where the user may not know or observe what happens until it’s too late, do not ship dangerous defaults. Most users will not change them.
In this case we’re talking about init systems like daemontools and runit. I’m having trouble connecting what you’re saying to that.
N.B. bouncing up and down ~= polling. Polling always intrinsically seems inferior to event based systems, but in practice much of your computer runs on polling perfectly fine and doesn’t eat your CPU. Example: USB keyboards and mice.
USB keyboard/mouse polling doesn’t eat CPU because it isn’t done by the CPU. IIUC the USB controller generates an interrupt when data is received. I feel like this analogy isn’t a good one (regardless). Checking a USB device for a few bytes of data is nothing like (for example) starting a Java VM to host a web service which takes some time to read its config and load its caches only to then fall over because some dependency isn’t running.
Sleep 1 and restart is the default. It is possible to get different behavior by adding a ./finish script next to the ./run script.
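As a sketch of that (the service name is made up): runsv invokes ./finish after ./run exits, passing the exit code, so you can stretch the pause before restart there:

```shell
#!/bin/sh
# /etc/sv/myservice/finish -- runsv runs this as `finish exitcode status`
# after ./run exits; sleeping here lengthens the default 1-second pause
# before runsv restarts the service.
exitcode="${1:-0}"
if [ "$exitcode" -ne 0 ]; then
  sleep 5
fi
```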
I really like runit on Void. I do like the simplicity of systemd unit files from a package manager perspective, but I don’t like how systemd tries to do everything (consolekit/logind, mounting, xinetd, etc.)
I wish it just did services and dependencies. Then it’d be easier to write other systemd implementations, with better tooling (I’m not a fan of systemctl or journalctl’s interfaces).
You might like my own dinit (https://github.com/davmac314/dinit). It somewhat aims for that - handle services and dependencies, leave everything else to the pre-existing toolchain. It’s not quite finished but it’s becoming quite usable and I’ve been booting my system with it for some time now.
I’d make the argument that in all circumstances where you need this you could probably run the command yourself. Thoughts?
It’s nice to be able to reload a well-written service without having to look up what mechanism it offers, if any.
Runit’s sv(8) has the reload command, which sends SIGHUP by default.
The default behavior (for each control command) can be changed in runit by creating a small script under $service_name/control/$control_code.
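For instance, to make `sv reload` (which maps to the ‘h’/SIGHUP control code) run nginx’s own reload instead of sending the signal, something like this should work per runsv(8); an exit status of 0 tells runsv not to send the signal itself:

```shell
#!/bin/sh
# /etc/sv/nginx/control/h -- runsv runs this instead of sending SIGHUP
# when it receives the hup/reload command; exiting 0 suppresses the
# default signal.
exec nginx -s reload
```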
I was thinking of the difference between ‘restart’ and ‘reload’.
Reload is only useful when:
I have not been in environments where this is necessary, restart has always done me well. I assume that the primary use cases are high-uptime webservers and databases.
My thoughts were along the lines of: if you’re running a high-uptime service, you probably don’t mind the extra effort of writing ‘killall -HUP nginx’ instead of ‘systemctl reload nginx’. In fact I’d prefer to do that rather than risk the init system re-interpreting a reload as something else, like reloading other services too, and bringing down my uptime.
I hoped it would do something nice like redirect stdout and stderr of these supervised processes by default. Instead you manually have to create a new file and folder for each service that explicitly runs its own copy of the logger. Annoying. I hope I’ve been missing something.
I used to use something like logexec for that, to “wrap” the program inside the runit script, and send output to syslog. I agree it would be nice if it were builtin.
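A minimal version of that wrapping, using logger(1) rather than logexec (with the usual caveat that a pipeline inside a run script muddies runsv’s pid tracking):

```shell
#!/bin/sh
# /etc/sv/nginx/run -- send the daemon's stdout and stderr to syslog,
# tagged "nginx", instead of setting up a separate ./log service.
exec 2>&1
nginx -g 'daemon off;' | logger -t nginx
```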
I like the truly p2p aspect here, but it’s a big red flag that SSB seems to refer to a specific node.js implementation and not to a wider protocol with multiple implementations. I did a bit of digging and couldn’t find anything, but maybe I missed something?
The protocol is defined: https://ssbc.github.io/scuttlebutt-protocol-guide/
rust client: https://crates.io/crates/ssb-client
Other versions(go, c, etc) are being worked on as well.
A pity the signing / marshalling algorithm is such a PITA to implement (the signature must be the last key/value pair in the JSON document, and it signs the bytes of the document up to that point).
At least being able to produce a known canonical order is important for signing. And the signature cannot be part of that which it signs.
Oh yeah - the canonical form is nonexistent, you just sign whatever bytes you’ve written so far.
If you were signing a message body (eg a json string value) it would be different - but as it stands relays have to implement white space compatible json marshalling with the sender.
Having alternate clients is a good start, but is it still true that there’s only one server implementation?
I believe someone is working on a go implementation, but I don’t know where the code may be, and I’m not on my SSB machine to try and find it. But there is definitely only one that’s usable at the moment, that I’m aware of…
and I agree, it’s a good start. It’s also not smartphone/mobile ready yet either, but work is happening on that front as well.
The fact that Guix is written in Scheme is a big appeal for me as opposed to Nix’s custom language. I preferred Nix as a way to support a standard environment (it has more packages), but this new feature makes the distribution of fat binaries a lot simpler than the other solutions. Less is more!
FWIW, I tried to dissuade Gentoo from using Bash and Nix from creating their own language, both at basically around the 0.0.1 timeframe. I guess I am not terribly persuasive. Guix and Nix should merge. The separation is kinda ridiculous.
Guix and Nix should merge.
Seems like a great idea until you consider Guix’s commitment to freedom and, as a result, a blobless experience. Unless NixOS adopted that stance as well, the philosophical incompatibility would doom it. Nix adopting Guile is more likely, I’d say, especially since Guile did have a Lua-like front end that might make it a bit easier to slowly migrate everything.
It is similar to vegetarian and non-vegetarian, one can have a blobless, freedom filled diet and then occasionally should they choose, sprinkle some bin01bits on top.
I upvoted, but as a vegan, I kind of take offense to vegetarians (in a half hearted way, of course), who only “half” commit. But, I recognize that not everyone does it for the animals (even vegans).
But, why would you go out of your way to run a completely free system, only to sprinkle some blobbits on it? That completely invalidates the point! That blob, is where the nasty things that disrespect your freedoms are.
I didn’t realize Guix forbade blobs (though I’m not surprised, given its origin). Is there a with-blob version of Guix? I didn’t see one, but that doesn’t necessarily mean no…
Obviously, you can acquire and install the blobs yourself, and I’m sure there are blog posts around in support of that. But, yeah, it’s like Trisquel, gNewsense, and the others that have similar governance for totally-libre.
I haven’t used it in a long time, but I thought that you could point Guix at the package store from Nix, similar to how you can point Debian at apt repos from other sources. You would have to be really careful with this; I remember early on getting burned because I asked Nix to install Firefox and it gave me Firefox-with-adobe-flash which was pretty gross.
Ha! Well, there must be an alternate universe where you managed to convince them ;) I think they do borrow some ideas and even some code (I remember a FOSDEM talk from Ludovic last year mentioning that). Implementation-wise, I would suspect Guix has the upper hand, but the restriction to GNU packages is problematic when you need specific packages.
I like Riot, but I have this thing where I can’t stand IM clients that want to live in a web browser tab. Web browser tabs are fluid and fungible. I don’t want to care that closing one will lose some critical state.
But unlike an IRC client, closing it doesn’t lose critical state. It will just sync when you open it again.
I have their previous router, the Turris Omnia. Very much like it, and the NAS case which fits two 3.5″ drives. Not sure why they don’t offer anything for dual drives in this new router.
I’m a fan of Matrix/Riot, but this wasn’t a very well written piece. Yes, the Python homeserver (Synapse) is a resource hog, but there’s a plan for improvement. Overall Matrix hasn’t advanced much in the past year, but it is now picking up steam as they’ve received funding and are growing the team.
Overall matrix hasn’t advanced much in the past almost year
Recently the main efforts have been in speeding it up and polishing things and you can really see it paying off when the main homeserver is faster now than it was 8 months ago despite handling a much larger load.
I disagree heavily with the core thesis of this article–that Javascript is in need of replacement–but the treatment of it and the ideas explored are quite interesting.
I’d honestly settle for browsers handling JS the way they do cookies. Let me decide whether to allow all JS, allow only self-hosted JS, or disable JS entirely – and let me blacklist/whitelist particular domains.
I use a hosts file.
I use a hosts file too, but umatrix allows more fine-grained controls than just blocking all requests to a domain, in addition to doing things like only allowing iframes/cookies/media from domain X to be loaded from domain Y.
You could write or use a browser extension that injects a Content Security Policy into the response. Make it configurable on a per-site basis is a stretch goal. :-)
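For concreteness, the injected header might look like this (a config fragment; the hostname is a placeholder). It blocks all script sources except the page’s own origin and one whitelisted domain:

```
Content-Security-Policy: script-src 'self' https://trusted.example.com
```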
It was quite predictable. Their incentives as a VC-backed, for-profit company aiming for massive IPO are to lock-in as many people as possible. Interoperability works against profitable lock-in. This is why rich, software companies either fight, subvert, or cripple it where possible. So, Slack eventually would ditch that. I doubt they put a lot of effort into maintaining its quality either if it was a marketing gimmick. I don’t use Slack, though, so I can’t say.
Interop feels a lot like what some leaders said about democracy:
It’s like a train. You get off when you reached your destination.
Honestly, Slack to me has become a lot more than just chat, and I can see how their chat model can no longer be squeezed into IRC. Threads are used very extensively by my team, and I can see how that’s hard to fit into IRC. Rich content messages from apps, images, and posts are basically impossible to fit into IRC. I agree that all those things don’t fit some people’s idea of an ideal workflow, but they’ve become crucial for a lot of people on Slack, and they kind of break in IRC.
I think that the features you mention could be mapped to IRC, with some loss of course, but IRC users are (maybe?) used to a simpler experience.
IIRC, less choice is often touted as a good design practice. But Slack is removing the simple thing in favour of the bells and whistles. It’s not a surprise, but it’s sad.
hard to fit into IRC
Could you be more specific? This is a Slack-IRC gateway using the recent IRCv3 drafts for threads, reactions and rich content messages: https://twitter.com/irccloud/status/971416931373854721
As far as I can see, IRC can handle all these just fine.
It’s in the ‘wrong’ place in my stack, but the wee-slack plugin mentioned by @oz claims to have thread support. As a WeeChat plugin has access to windows and buffers I can imagine that being a smoother experience that a plugin in the otherwise ‘correct’ place: the bouncer.
Messages from apps are or could be notices in IRC, and images appear as links that I can click through to see using a web browser. It is certainly true that the more a tool tries to structure a conversation the more difficult it becomes to map that to the IRC protocol. That said, I’m absolutely open to retaining the ability to chat from an IRC client by fixing problems anywhere and everywhere they need to be fixed. There is no fundamental reason a thread feature can’t work outside of the official client.
It’s in the ‘wrong’ place in my stack, but the wee-slack plugin mentioned by @oz claims to have thread support. As a WeeChat plugin has access to windows and buffers I can imagine that being a smoother experience that a plugin in the otherwise ‘correct’ place: the bouncer.
Yeah I can see where you’re coming from. I love wee-slack, and would use it if it had Enterprise Grid support. I just think that Slack is making more and more design decisions that make it hard to shoe-horn back into IRC.
I just think that Slack is making more and more design decisions that make it hard to shoe-horn back into IRC.
If not IRC, then an open, extended version of it or new protocol with a reference client. Worst case is that important stuff like messages stay in the open system whereas extra bells and whistles end up in proprietary system. Less transition cost later if people want to ditch Slack for something better. An open, reference implementation people are using in a lot of environments would also give them more testing of their protocols. They definitely have the money for it at their revenue levels.
They’re locking it up instead since it’s more profitable in the long run for the founders and investors. The good news is they might have at least inspired some revamps of IRC or chat that will be done better for us without their problems. I think I’ve already seen some like that but we gotta wait to see who gets a sound, business model going.
I’ve used the Jabber gateway to connect to HipChat and the IRC gateway to connect to Slack. Hands down, the Slack gateway was the superior experience. You could, to be sure, tell you were not connecting to a real IRC server, but the experience was remarkably good anyway. By comparison, my messages into HipChat would sometimes take hours (actual multiple hours) to be received, completely crippling my ability to participate.
I’m mostly using borg for backups, but still use duplicity where I want the backup source to only be capable of encrypting new backups and not decrypting old ones. Is there another backup system, nicer than duplicity, that allows you to make backups using only the public part of a keypair?
Duplicity can also use your public key. Instead of providing a passphrase you can use the --encrypt-key flag to provide your key’s fingerprint.
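That looks something like this (the key ID, source path, and target URL are placeholders); the private half of the key only needs to be present when restoring:

```shell
# Back up /home/user, encrypting to the public key 0xDEADBEEF; the
# backup source never needs the private key.
duplicity --encrypt-key 0xDEADBEEF /home/user sftp://backup@example.com/backups
```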
Not sure if it’s nicer, but tarsnap does it and hasn’t been mentioned so far
While tarsnap is really cool, the pricing makes it sort of in a different category (i.e., it’s good for backing up important stuff, whereas duplicity, restic, borg, duplicacy, etc, can be affordable to backup everything).
Unfortunately the Slack bridge to Matrix seems to be unmaintained. They talked about looking into puppeting users over a year ago, but there has been no effort in that direction since. As long as Matrix doesn’t provide anything better than webhook integration with Slack, I doubt anyone will move over. And that’s not even getting into how managing your own Matrix homeserver can be a pain.
For the last year Matrix has had a hard time with funding and made little progress. But they got funding, and the future is looking brighter.
I’m sorry that some of you feel like this is spam. I’ve seen several discussions about matrix on lobsters and really thought that this would be a suitable topic here.
Silent mode at all times and finer-grained control offered by the OS
What is finer grained on iOS? It was my (mis)understanding that this is one of the places Android was still fairly far ahead with multiple tiers of notification levels tweak-able by app or person.
I’m too lazy to run my own DNS. CloudFlare doesn’t provide all record types, which is irritating sometimes. But if I did run my own, I wouldn’t run BIND. And I wouldn’t run a slave. Zone transfers add complexity. DJB got it right. I’d distribute the master database from a central place, LDAP, SQL, git repo, something like that.
General charities:
Houston and Texas-specific charities:
Also, Wikipedia.
Does wikipedia really need more money? https://en.wikipedia.org/wiki/Wikipedia:Wikipedia_Signpost/2017-02-27/Op-ed
I didn’t post it to get into a debate. I’ll continue donating. Everyone else can make up their own mind.
Both are valid. You posted it as an organization you believe is beneficial and worth donating to (I agree), and you will continue donating to it. ptman’s comment might help others concerned about whether their money should go to one good cause or another. Such people, myself included, are often concerned about (a) does the organization already have a lot of money or representation in donations, and (b) are they spending it well enough to justify more? The article gives that crowd some info to act on.
So, I value both your comments even if there’s no debate to be had here.
It’s a risky move to skip uni. I’ve seen people succeed from all backgrounds, though: no degree at all, master’s in CS, physics, business, and one great programmer I know actually did English literature. You just have to start learning and do it.
I’m not surprised at all, I think Dijkstra said that literature & writing best predicts success in programming.
The error handling section was a little confusing to me: are there any examples of what is meant by proper error handling?
I think when you have multiple return values, you get many of the advantages of exceptions, but I don’t really know what the author of this blog is arguing for as compared to python besides exceptions.
Well you’d at least expect this to be baked into the language: https://github.com/pkg/errors
But by not being baked in, you have alternatives: https://pocketgophers.com/error-handling-packages/ , all of which conform to the error interface
I think he means the kind of stuff he now gets from third-party tools: showing where errors occurred, and making it hard to accidentally discard errors.
(Also: big plug for https://github.com/gordonklaus/ineffassign. It finds assignments in your code that make no difference to how it runs, a generalization of the ‘no unused variables’ rule that turns up some bugs and, when it doesn’t, often points to somewhere you could make your code tighter. Better yet, you can use something like https://github.com/alecthomas/gometalinter with a subset of checks enabled.)
Thank you for the planet. There seems to be about 100 blogs/feeds coming in to the planet. But the planet rss feed is just 100 items, most of which seem to come from just a couple of blogs that don’t have proper timestamps?
Well spotted.. it wasn’t apparent yesterday but I just fixed an SSL problem and suddenly there are quite a few. I’ll remove any more I spot, but please, feel free to go crazy on pull requests :)
edit: this is way more broken than I thought. Planet doesn’t seem to do anything about feeds that lack timestamp, which is surprising. Anyone got a recommendation for better software? The main value in this existing thing is the Travis setup and the list of feed URLs.
edit: ok, I /think/ I’ve got it this time.. some bad settings in there, and squelching untimestamped feeds doesn’t happen after the first time they’re seen, so had to wipe the cache and start again
I’m tempted to write something better, or at least help improve what you have currently got working :)
I once authored a planet generator named Uranus, but I don’t really maintain it anymore. It does have the advantage of not having any dependencies other than Ruby, though (no gems, just plain stdlib). There’s another planet generator named Pluto that is still maintained.