1. 1

    While this would likely help some people, it really feels more like “How to set up a basic Vagrant development environment” than anything Erlang-related to me, honestly.

      1. 1

        OK, thanks!

        1. 1

          Question is why it’s taking this long to just generate a new cert with the extra SAN…

          1. 4

            No one is paid to work on lobsters. If you know ansible and letsencrypt you should be able to help out.

            1. 1

              Well, I don’t really know how the current Let’s Encrypt cert was generated, but adding the SAN is literally just another argument. I did ask about it when it came up on IRC three weeks ago, but didn’t get a reply, and figured it would probably be fixed pretty quickly, so I completely forgot about it.

              1. 1

                It was manually created with certbot but, as noted in the bug, should probably be replaced with acmeclient, if nothing else to have far fewer moving parts.

                It’d be great to have someone who knows the topic well to help the issue along in any capacity, if you have the spare attention.

                1. 1

                  I’ve done entirely too much work with acmeclient to automate certs for http://conj.io and some other properties I run. Will try and find time this weekend to take a run at this.

                  1. 1

                    That, or use dehydrated: you list certificates in a text file, one certificate per line, with each domain separated by a space.

          1. 1

            Don’t use WPA2, it may have been cracked, stick to something secure like WEP instead :)

            1. 3

              I’m honestly not sure if you’re serious or not, and that will probably bother me for a few minutes. Well played.

              1. 1

                Someone made a similar comment in a reddit thread, so I thought I’d copy it here :)

              1. 4

                Ugh, I really dislike it when they don’t include the slides as shown to the audience in the video. It becomes much more annoying when they skip back and forward, and I also watch a lot of talks on my phone when on public transit, which makes a lot of talks almost inaccessible, because you need the slides for context with most of them.

              1. 8

                I think the takeaway here is (a) don’t treat every kind of error from an HTTP request as an invalid token (I’m not familiar with the Github API, but I assume it returns 401 Unauthorized correctly) and (b) don’t delete important data, but flag it somehow.

                1. 5

                  It returns a 404, which is a bit annoying, since if you fat-finger your URL you’ll get the same response as when a token doesn’t exist.

                  https://developer.github.com/v3/oauth_authorizations/#check-an-authorization

                  Invalid tokens will return 404 NOT FOUND

                  I’ve since moved to a pattern of wrapping all external requests in objects whose state we can check explicitly, instead of relying on native exceptions coming from the underlying HTTP libraries. It makes things like checking the exact status code of a non-200 response easier.

                  I might write about that pattern in the future. Here’s the initial issue, with some more links: https://github.com/codetriage/codetriage/issues/578
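
                  (A rough sketch of that wrapper idea – in Python rather than the app’s actual Ruby, with hypothetical names, just to show the shape:)

                      import requests  # assumed HTTP client, purely for illustration

                      class ExternalResponse:
                          """Wraps an HTTP call so callers check state explicitly instead of
                          rescuing exceptions thrown by the underlying HTTP library."""
                          def __init__(self, response=None, error=None):
                              self.response = response   # set when we got any HTTP response at all
                              self.error = error         # set on timeouts / connection failures

                          @property
                          def status(self):
                              return None if self.response is None else self.response.status_code

                          @property
                          def ok(self):
                              return self.error is None and self.response is not None and self.response.ok

                      def fetch(url, **kwargs):
                          try:
                              return ExternalResponse(response=requests.get(url, timeout=10, **kwargs))
                          except requests.RequestException as exc:
                              return ExternalResponse(error=exc)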

                  1. 3

                    Why not try to get issues, and if it fails with a 401, you know the token is bad? You can double check with the auth_is_valid method you’re using now…

                    1. 2

                      That’s a valid strategy.

                      Edit: I like it, I think this is the most technically correct way to move forwards.

                    2. 1

                      Did the Github API return a 404 Not Found instead of a 5xx during the outage?

                      1. 1

                        No clue.

                        1. 1

                          Then there’s your problem. Your request class throws RequestError on every non-2xx response, and auth_is_valid? thinks any RequestError means the token is invalid. In reality you should only take 4xx responses to mean the token is invalid – not 5xx responses, network layer errors, etc.
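
                          (To sketch that rule concretely – Python with hypothetical names, since the actual app is Ruby:)

                              def token_state(status_code, transport_error=False):
                                  """Classify a token check: only a definite 4xx answer means the token itself is bad."""
                                  if transport_error or status_code is None:
                                      return "unknown"   # network error / timeout: leave the stored token alone
                                  if 200 <= status_code < 300:
                                      return "valid"
                                  if 400 <= status_code < 500:
                                      return "invalid"   # e.g. GitHub's 404 for a bad token: flag it, don't delete it
                                  return "unknown"       # 5xx: the service is having a bad day, not the token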

                          1. 1

                            Yep, that’s what OP in the thread said. I mention it in the post as well.

                    3. 2

                      I think the takeaway is that programmers are stupid.

                      Programs shouldn’t delete/update anything, only insert. Views/triggers can maintain reconciled views, so that if there’s a problem in the program you can simply fix it and re-run the procedure.

                      If you do it this way, you can also get an audit trail for free.

                      If you do it this way, you can also scale horizontally for free if you can survive a certain amount of split/brain.

                      If you do it this way, you can also scale vertically cheaply, because inserts can be sharded/distributed.

                      If you don’t do it this way – the way that is obviously less work, faster, simpler, and better engineered in every way – then you should know it’s because you don’t know how to solve this basic CRUD problem.
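
                      (To make the insert-only idea concrete – a minimal sketch with a hypothetical schema, SQLite only for brevity:)

                          import sqlite3

                          db = sqlite3.connect(":memory:")
                          db.executescript("""
                              -- Every change is a new row; nothing is ever UPDATEd or DELETEd.
                              CREATE TABLE token_events (
                                  id INTEGER PRIMARY KEY,
                                  user_id INTEGER NOT NULL,
                                  token TEXT,
                                  valid INTEGER NOT NULL,                    -- flag instead of deleting
                                  recorded_at TEXT DEFAULT CURRENT_TIMESTAMP
                              );
                              -- A reconciled view: the latest event per user wins.
                              CREATE VIEW current_tokens AS
                                  SELECT user_id, token, valid FROM token_events
                                  WHERE id IN (SELECT MAX(id) FROM token_events GROUP BY user_id);
                          """)

                          # A buggy run that wrongly flags tokens is just more rows: fix the bug, insert
                          # corrective events, and both the view and the audit trail stay intact.
                          db.execute("INSERT INTO token_events (user_id, token, valid) VALUES (1, 'abc', 1)")
                          db.execute("INSERT INTO token_events (user_id, token, valid) VALUES (1, 'abc', 0)")
                          print(db.execute("SELECT * FROM current_tokens").fetchall())   # [(1, 'abc', 0)]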

                      Of course, the stupid programmer responds with some kind of made-up justification, like saving disk space in an era where disk is basically free, or enterprise, or maybe something to do with unit tests or some other garbage. I’ve even heard a stupid programmer defend this crap because the unit tests need to be idempotent, and all I can think is this fucking nerd ate a dictionary and is taking it out on me.

                      I mean, look: I get it, everyone is stupid about something, but to believe that this is a specific, critical problem like having to do with 503 errors instead of a systemic chronic problem that boils down to a failure to actually think really makes it hard to discuss the kinds of solutions that might actually help.

                      With a 503 error, the solution is “try harder” or “create extra update columns” or whatever. But we can’t try harder all the time, so there’ll always be mistakes. Is this inevitable? Can business truly not figure out when software is going to be done?

                      On the other hand, if we’re just too fucking stupid to program, maybe we can work on trying to protect ourselves from ourselves. Write-only data is a massive part of my mantra, and I’m not so arrogant as to pretend it’s always been that way, but I know the only reason I do it is because I deleted a shit-tonne of customer data by accident and had the insight that I’m a fucking idiot.

                      1. 4

                        I agree with the general sentiment. It took me about 3 read-throughs to parse through all the “fucks” and “stupids”. I think there’s perhaps a more positive and less hyperbolic way to frame this.

                        Append-only data is a good option, and basically what I ended up doing in this case. It pays to know what data is critical and what isn’t. I referenced acts_as_paranoid, and it pretty much does what you’re talking about: it makes a table append-only, so when you modify a record it saves the older copy of that record. Tables can get HUGE, like really huge, as in the largest tables I’ve ever heard of.

                        /u/kyrias pointed out that large tables have a number of downsides, such as making maintenance and backups harder.

                        1. 2

                          You can do periodic data warehousing to keep the tables as small as you’d like, though that introduces the possibility of programmer error in the warehousing step. It’s still an easier problem to solve than making sure every destructive write is correct in every scenario.

                          1. 1

                            Tables can get HUGE, like really huge, as in the largest tables I’ve ever heard of

                            I have tables with trillions of rows in them, and while I don’t use MySQL most of the time, even MySQL can cope with that.

                            Some people try to do indexes, or they read a blog that told them to 1NF everything, and this gets them nowhere fast, so they’ll think it’s impossible to have multi-trillion-row tables, but if we instead invert our thinking and assume we have the wrong architecture, maybe we can find a better one.

                            /u/kyrias pointed out that large tables have a number of downsides, such as making maintenance and backups harder.

                            And as I responded: /u/kyrias probably has the wrong architecture.

                          2. 2

                            Of course, the stupid programmer responds with some kind of made up justification, like saving disk space in an era where disk is basically free

                            It’s not just about storage costs though. For instance, at $WORK we have backups for all our databases, but if for some reason we needed to restore the biggest one from a backup, it would take days during which all our user-facing systems would be down, which would be catastrophic for the company.

                            1. 1

                              You must have the wrong architecture:

                              I fill about 3.5 TB of data every day, and it absolutely would not take days to recover my backups (I have to test this periodically due to audit).

                              Without knowing what you’re doing I can’t say, but something I might do differently: Insert-only data means it’s trivial to replicate my data into multiple (even geographically disparate) hot-hot systems.

                              If you do insert-only data from multiple split brains, it’s usually possible to get hot/cold easily, with the risk of losing (perhaps only temporarily) a few minutes of data in the event of catastrophe.

                            2. 0

                              Unfortunately, if you hold any EU user data, you have to perform an actual delete when an EU user asks you to delete their data, if you want to be compliant. I like the idea of the persistence layer being an event log from which you construct views as necessary. I’ve heard that it’s possible to use this for almost everything by storing an association of random ID to person and then just deleting that association when asked, in order to stay compliant, but I haven’t actually looked into that carefully myself.

                              1. 2

                                That’s not true. The ICO recognises there are technological reasons why “actual deletion” might not be performed (see page 4). Having a flag that blinds the business from using the data is sufficient.

                                1. 1

                                  Very cool. Thank you for sharing that. I was under the misconception that having anyone in the company capable of obtaining the data was enough to constitute a violation. It looks like the condition for compliance is weaker than that.

                                  1. 2

                                    No problem. A big part of my day is GDPR-related at the moment, so I’m unexpectedly versed with this stuff.

                              2. 0

                                There’s actually a database out there that enforces the never-delete approach (together with some other very nice paradigms/features). Sadly it isn’t open source:

                                http://www.datomic.com/

                            1. 5

                              I wonder how many versions of this article with pretty much the same examples and code exist by now…

                              1. 4

                                Aside from the recent unroll.me hate, I think that’s just a mistake in their writing.

                                1. 2

                                  Mistake? AFAIK there isn’t a way to retrieve your password. https://unroll.me/a/login

                                  1. 3

                                    If they only use gmail/google oauth, they wouldn’t have a password. They’re just implying that the login uses oauth which uses your google account email/password.

                                    I used to have an unroll.me account, and don’t have a password recorded for them.

                                    1. 1

                                      I guess if they truly use OAuth for everything, then it’s fine.

                                      1. 1

                                        For Gmail, Outlook, and Yahoo they use OAuth to access your email. For AOL and iCloud they just need your password for IMAP access.

                                1. 1

                                    Hm, an annoyingly large portion of the text goes off the screen on my phone, with no way to scroll over to it, even in landscape mode and with “Request desktop site” enabled in Chrome. Makes it rather hard to read.

                                  1. 1

                                    I even have to go to landscape mode on my iPad otherwise I face the same issue, just to have lots of margins on both left and right side of the article. Not cool.

                                    1. 1

                                      … which makes the following rather funny:

                                      Dozuki makes documentation software for everything — from visual work instructions for manufacturing to product manuals that will make your customers love you.

                                    1. 5

                                        Part of the reason why it took us a while to debug our issue was that we assumed that the stack trace we saw was accurate.

                                      Recall how it was compiled:

                                      clang -std=c99 -O3 -g -o inline_merge inline_merge.c
                                      

                                        As the GCC manual says with respect to combining -O with -g: “The shortcuts taken by optimized code may occasionally produce surprising results.”

                                      1. 4

                                        And relatedly from the clang man page:

                                        Note that Clang debug information works best at -O0.

                                        1. 1

                                          So one should recompile for the purpose of debugging?

                                          1. 7

                                            Yes.

                                            GCC has -Og, which turns on all optimizations that can’t affect debugging.

                                            1. 2

                                                Or design your software in such a way that the stacktrace is not needed for debugging. Which is hard, for sure, but the current trend of depending on stacktraces for everything (Java, Python, for example) is a bit too extreme IMO. For comparison, in OCaml I tend to use a result-type monad for things that can fail, at which point the compiler makes sure I do something with all errors.

                                        1. 10

                                          I’ve read many people say that dvorak was fine for the vim movement keys.

                                          And as for the keycaps, I’m not sure I see the problem, why not just use a blank keyboard and switch at will?

                                          1. 5

                                            Although I am in theory capable of typing without looking at the keys, in practice I do a lot of key stabbing as well. And a lot of one handed typing as well. I’ve practiced this some in the dark, and it’s no fun. Definitely not interested in a blank keyboard.

                                            Anyway, same experience as the author. Learned dvorak because there were people who didn’t know dvorak, used it for a while, then found I had trouble using a qwerty keyboard. Now I just use qwerty full time, but go back and practice dvorak for a week or so at a time to maintain the skill in case I ever have a compelling reason to switch.

                                            I like dvorak for English, but find it substantially more annoying for code. And it’s a disaster for passwords. I usually set up hotkeys so I can quickly change on the fly depending on what and how much I’m typing.

                                            1. 2

                                              I love Dvorak for code! Having -_ and =+ much closer is so convenient.

                                              1. 1

                                                More than { [ ] }?

                                                1. 2

                                                  For sure, think about where it’s now positioned. Typing …) {… is so easy when ) and { are side by side. And for code that doesn’t use egyptian braces, )<enter>{ is easier for me too. When I hit enter with my pinky, and follow up with { with my middle finger, that’s natural. But trying to squeeze my middle finger into the QWERTY location for { while my pinky is still on enter totally sucks.

                                                  Meanwhile -_=+ are all typed in line with other words (i.e. variable names). And - and _ are frequently part of filenames and variables, so it’s great that they’re closest to the letter keys.

                                              2. 2

                                                I like dvorak for English, but find it substantially more annoying for code.

                                                Exactly! If I were a novelist I would probably just continue using Dvorak.

                                                1. 2

                                                  in practice I do a lot of key stabbing as well

                                                  I recently bought a laptop with a Swiss(?) keyboard layout. (It really is a monstrosity with up to five characters on one key.) I thought I wouldn’t need to look at the keys at all and could just use my preferred keymap, but I’ve been caught out a few times. I’m just about used to it now, though.

                                                2. 4

                                                  When I am typing commands into a production machine I feel like it is only responsible of me to use a properly labelled keyboard.

                                                  This is really important when you’re on your last ssh password/smartcard PIN attempt, because you can go slow and look at what you’re doing.

                                                  1. 5

                                                    I got a blank keyboard, and I must admit that I still look at it from time to time – like for numbers, or b/v, u/i… I only do so when I start thinking “OMG this is a password, don’t get it wrong!”

                                                    Having a blank keyboard doesn’t stop you from looking at your hands. It only disappoints you when you do.

                                                    1. 5

                                                      As a happy Dvorak user I’d have to say there are better fixes to that problem. Copy it from your password manager? (You use one, right?) Type it into somewhere else, and cut and paste? Or use the keyboard viewer? (Ok that one is macOS specific, perhaps.)

                                                      Specifically re: “typing commands into prod machines” I don’t buy the argument. Commands generally don’t take effect until you hit ENTER and until then you’ve got all the time you need to review what you’ve typed in. Some programs do prompt for yes/no without waiting for Enter but it’s not like Dvorak or Qwerty’s y or n keys have a common location in either layout, so I don’t really see that as an issue either.

                                                      1. 2

                                                        Yes, the “production machines” argument is a strange one. I’d imagine it would only be an issue on a Windows system (if you’re logging in via ssh it’s immaterial) and then it would be fairly obvious quite quickly that the keyboard map is wrong. And if the keyboard map is wrong in the Dvorak vs QWERTY sense you’d quickly realise you’re typing gibberish. Or so I’d think?

                                                        Ignoring the whole issue of “you shouldn’t be logging in to a production machine to make changes”…

                                                      2. 1

                                                        In this case, I find the homing keys, reorient myself, and type whatever I need to type. (Or just use a password manager & paste). Haven’t mistyped a password in years, and I’m using Dvorak with blanks.

                                                        Homing keys are there for a reason.

                                                        Labels are only necessary when you don’t touch type. If you do, they serve no useful purpose.

                                                      3. 2

                                                        I’ve read many people say that dvorak was fine for the vim movement keys.

                                                        Dvorak is fine for Vim movement keys, but not nearly as nice as Qwerty.

                                                        And as for the keycaps, I’m not sure I see the problem, why not just use a blank keyboard and switch at will?

                                                        The problem is, when I’m entering a password or bash command sometimes I want to slow down and actually look at the keyboard while I’m typing. In sensitive production settings raw speed isn’t nearly as valuable as accuracy. A blank keyboard would not solve this problem :)

                                                        1. 6

                                                          Dvorak is fine for Vim movement keys, but not nearly as nice as Qwerty.

                                                          They actually work better with Dvorak for me, because the grouping feels more logical than on qwerty to me.

                                                          1. 1

                                                            Likewise: vertical and horizontal movement keys separated onto different hands rather than all on the one (and interspersed) works much better for me.

                                                          2. 2

                                                            I hate vim movement in QWERTY. I think it’s because I’m left handed, and Dvorak puts up/down on my left pointer and middle finger. For me, it’s really hard to manipulate all four directions with my right hand quickly.

                                                            1. 1

                                                              Would it make sense to use AOEU for motion then (or HTNS for right handed people)? I guess doing so may open a whole can of remapping worms though?

                                                              That won’t help with apps that don’t support remapping but which support vi-style motion though (as they’ll expect you to hit HJKL)…

                                                        1. 29

                                                          Hmm. I have just spent a week or two getting my mind around systemd, so I will add a few comments….

                                                          • Systemd is a big step forward on sysv init and even a good step forward on upstart. Please don’t throw the baby out with the bathwater in trying to achieve what seem to be mostly political rather than technical aims, i.e.:

                                                          ** The degree of parallelism achieved by systemd does very good things to start up times. (Yes, that is a critical parameter, especially in the embedded world)

                                                          ** Socket activation is very nifty / useful.

                                                          ** There is a lot of learning that has gone into things like dbus (https://lwn.net/Articles/641277/). While there are things I really don’t like about dbus (cough, xml, cough), I respect the hard-earned experience encoded into it.

                                                          ** Systemd’s use of cgroups is actually a very very nifty feature in creating rock solid systems, systems that don’t go sluggish because a subsystem is rogue or leaky. (But I think we are all just learning to use it properly)

                                                          ** The thought and effort around “playing nice” with distro packaging systems via “drop in” directories is valuable. Yup, it adds complication, but packaging is real and you need a solution.

                                                          ** The thought and complication around generators to aid the transition from sysv to systemd is also vital. Nobody can upgrade tens of thousands of packages in one go.

                                                          TL;DR: Systemd actually gives us a lot of very useful and important stuff. Any competing system with the faintest hope of wide adoption has a pretty high bar to meet.

                                                          The biggest sort of “WAT!?” moment for me around systemd is that it creates its own entirely new language… one that is remarkably weaker even than shell. And occasionally you find yourself explicitly invoking, yuck, shell, to get stuff done.

                                                          Personally I would have preferred it to be something like guile with some addons / helper macros.

                                                          1. 15

                                                              I actually agree with most of what you’ve said here; Systemd is definitely trying to solve some real problems, and I fully acknowledge that. The main problem I have with Systemd is the way it just subsumes so much and is pretty much all-or-nothing; combined with that, people do experience real problems with it, and I personally believe its design is too complicated, especially for such an essential part of the system. I’ll talk about it a bit more in my blog (along with lots of other things) at some stage, but in general the features you list are good features, and I hope to have Dinit support e.g. socket activation and cgroups (though as optional rather than mandatory features). On the other hand I am dead-set that there will never be a dbus-connection in the PID 1 process nor any XML-based protocol, and I’m already thinking about separating the PID 1 process from the service manager, etc.

                                                            1. 9

                                                              Please stick with human-readable logs too. :)

                                                              1. 6

                                                                Please don’t. It is a lot easier to turn machine-readable / binary logs to human-readable than the other way around, and machines will be processing and reading logs a lot more than humans.

                                                                1. 4

                                                                  Human-readable doesn’t mean freeform. It can be machine-readable too. At my last company, we logged everything as date, KV pairs, and only then freeform text. It had a natural mapping to JSON and protocol buffers after that.

                                                                  https://github.com/uber-go/zap This isn’t what we used, but the general idea.
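
                                                                    (A rough sketch of that line format in Python – hypothetical field names, just to show the shape:)

                                                                      import datetime

                                                                      def log_line(message, **fields):
                                                                          # Date first, then key=value pairs, then the freeform text: readable by
                                                                          # humans, trivially parseable by machines, and easy to map to JSON later.
                                                                          ts = datetime.datetime.utcnow().isoformat() + "Z"
                                                                          kv = " ".join(f"{k}={v}" for k, v in sorted(fields.items()))
                                                                          return f"{ts} {kv} {message}"

                                                                      print(log_line("token check failed", level="warn", user_id=42, status=404))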

                                                                  1. 3

                                                                    Yeah, you can do that. But then it becomes quite a bit harder to sign, encrypt, or index logs. I still maintain that going binary->human readable is more efficient, and practical, as long as computers do more processing on the logs than humans do.

                                                                    Mind you, I’m talking about storage. The logs should be reasonably easy for a human to process when emitted, and a mapping to a human-readable format is desirable. When stored, human-readability is, in my opinion, a mistake.

                                                                    1. 2

                                                                      You make good points. It’s funny, because I advocated hard for binary logs (and indeed stored many logs as protocol buffers on Kafka; only on the filesystem was it text) from systems at $dayjob-1, but when it comes to my own Linux system it’s a little harder for me to swallow. I suppose I’m looking at it from the perspective of an interactive user and not a fleet of Linux machines; on my own computer I like to be able to open my logs as standard text without needing to pipe it through a utility.

                                                                      I’ll concede the point though: binary logs do make a lot more sense as building blocks if they’re done right and have sufficient metadata to be better than the machine-readable text format. If it’s a binary log of just date + facility + level + text description, it may as well have been a formatted text log.

                                                                2. 2

                                                                  So long as they accumulate the same amount of useful info… and are machine-parsable, sure.

                                                                  journalctl spits out human readable or json or whatever.

                                                                  I suspect to achieve near the same information density / speed as journalctl with plain old ascii will be a hard ask.

                                                                  In my view I want both. Human and machine readable… how that is done is an implementation detail.

                                                                3. 4

                                                                  I’m sort of curious about which “subsume everything” bits are hurting you in particular.

                                                                    For example, subsuming the business of mounting is fairly necessary, since these days the order in which things get mounted and the order in which various services are run are pretty inextricably linked.

                                                                    I have doubts about how much of the networkd / resolved stuff should be part of systemd… except that something which collaborates with the startup infrastructure is required. I.e. I suspect your choices in dinit will be slightly harsh: modding dinit to play nice with existing network managers, modding existing network managers to play nice with dinit, subsuming the function of network management, or leaving fairly vital chunks of functionality undone and undoable.

                                                                  Especially in the world of hot plug devices and mobile data….. things get really really hairy.

                                                                  I am dead-set that there will never be a dbus-connection in the PID 1

                                                                  You still need a secure way of communicating with pid 1….

                                                                  That said, systemd process itself could perhaps be decomposed into more processes than it currently is.

                                                                    However, as I hinted, there are things that dbus gives you, like bounded trust between untrusted, untrusting, and untrustworthy programs, that are hard to achieve without reimplementing large chunks of dbus…

                                                                  …and then going through the long and painful process of learning from your mistakes that dbus has already gone through.

                                                                  Yes, I truly hate xml in there…. but you still need some security sensitive serialization mechanism in there.

                                                                    I.e. whatever framework you choose will still need to enforce the syntactic contract of the interface so that an untrusted and untrustworthy program cannot achieve a denial of service or escalation of privilege through abuse of a serialized interface.

                                                                    There are other things out there that do that (e.g. protobuffers, cap’n’proto, …), but then you’re still in a world where desktops and bluetooth and network managers and … need to be rewritten to use the new mechanism.

                                                                  1. 3

                                                                    For example, subsuming the business of mounting is fairly necessary since these days the order in which things get mount relative to the order in which various services are run is pretty inexorable.

                                                                    systemd’s handling of mounting is beyond broken. It’s impossible to get bind mounts to work successfully on boot, nfs mounts don’t work on boot unless you make systemd handle it with autofs and sacrifice a goat, and last week I had a broken mount that couldn’t be fixed. umount said there were open files, lsof said none were open. Had to reboot because killing systemd would kill the box anyway.

                                                                    It doesn’t even start MySQL reliably on boot either. Systemd is broken. Stop defending it.

                                                                    1. 3

                                                                      For example, subsuming the business of mounting is fairly necessary since these days the order in which things get mount relative to the order in which various services are run is pretty inexorable.

                                                                      There are a growing number of virtual filesystems that Linux systems expect or need to be mounted for full operation - /proc, /dev, /sys and cgroups all have their own - but these can all be mounted in the traditional way: by running ‘/bin/mount’ from a service. And because it’s a service, dependencies on it can be expressed. What Systemd does is understand the natural ordering imposed by mount paths as implicit dependencies between mount units, which is all well and good but which could also be expressed explicitly in service descriptions, either manually (how often do you really change your mount hierarchies…) or via an external tool. It doesn’t need to be part of the init system directly.

                                                                      (Is it bad that systemd can do this? Not really; it is a feature. On the other hand, systemd’s complexity has I feel already gotten out of hand. Also, is this particular feature really giving that much real-world benefit? I’m not convinced).

                                                                      I suspect your choices in dinit will be slightly harsh…. modding dinit to play nice with existing network managers or modding existing network managers to play nice with dinit

                                                                      At this stage I want to believe there is another option: delegating Systemd API implementation to another daemon (which communicates with Dinit if and as it needs to). Of course such a daemon could be considered as part of Dinit anyway, so it’s a fine distinction - but I want to keep the lines between the components much clearer (than I feel they are in Systemd).

                                                                        I believe in many cases the services provided by parts of Systemd don’t actually need to be tied to the init system. Case in point, elogind has extracted the logind functionality from systemd and made it systemd-independent. Similarly there’s eudev, the Gentoo fork of the udev device node management daemon, which extracts it from systemd.

                                                                      You still need a secure way of communicating with pid 1…

                                                                      Right now, that’s via root-only unix socket, and I’d like to keep it that way. The moment unprivileged processes can talk to a privileged process, you have to worry about protocol flaws a lot more. The current protocol is compact and simple. More complicated behavior could be wrapped in another daemon with a more complex API, if necessary, but again, the boundary lines (is this init? is this service management? or is this something else?) can be kept clearer, I feel.

                                                                        Putting it another way, a lot of the parts of Systemd that required a user-accessible API just won’t be part of Dinit itself: they’ll be part of an optional package that communicates with Dinit only if it needs to, and only by a simple internal protocol. That way, boundaries between components are more clear, and problems (whether bugs or configuration issues) are easier to localise and resolve.

                                                                    2. 1

                                                                      On the other hand I am dead-set that there will never be a dbus-connection in the PID 1 process nor any XML-based protocol

                                                                      Comments like this make me wonder what you actually know about D-Bus and what you think it uses XML for.

                                                                      1. 2

                                                                        I suppose you are hinting that I’ve somehow claimed D-Bus is/uses an XML-based protocol? Read the statement again…

                                                                        1. 1

                                                                          It certainly sounded like it anyway.

                                                                    3. 8

                                                                      Systemd solves (or attempts to) some actually existing problems, yes. It solves them from a purely Dev(Ops) perspective while completely ignoring that we use Linux-based systems in big part for how flexible they are. Systemd is a very big step towards making systems we use less transparent and simple in design. Thus, less flexible.

                                                                      And if you say that’s the point: systems need to get more uniform and less unique!.. then sure. I very decidedly don’t want to work in an industry that cripples itself like that.

                                                                      1. 8

                                                                        Hmm. I strongly disagree with that.

                                                                        As a simple example, in sysv your only “targets” were the 7 runlevels. Pretty crude.

                                                                        Alas the sysv simplicity came at a huge cost. Slow boots since it was hard to parallelize, and Moore’s law has stopped giving us more clock cycles… it only gives us more cores these days.

                                                                        On my ubuntu xenial box I get:

                                                                        locate target | grep -E '^/(run|etc|lib)/.*.target$' | grep -v wants | wc
                                                                             61      61    2249

                                                                        (Including the 7 runlevels for backwards compatibility)

                                                                        ie. Much more flexibility.

                                                                        ie. You have much more flexibility than you ever had in sysv…. and if you need to drop into a whole of shell (or whatever) flexibility…. nothing is stopping you.

                                                                        It’s actually very transparent… the documentation is actually a darn sight better than sysv init ever was, and the source code is pretty readable. (Although at the user level I find I can get by mostly by looking at the .service files and guessing; they’re a lot easier to read than a sysv init script.)

                                                                        So my actual experience of wrangling systemd on a daily basis is it is more transparent and flexible than what we had before…..

                                                                        A bunch of the complexity is due to the need to transition from sysv/upstart to systemd.

                                                                        I can see on my box a huge amount of crud that can just be deleted once everything is converted.

                                                                        All the serious “Huh!? WTF!?” moments in the last few weeks have been around the mishmash of old and new.

                                                                        Seriously. It is simpler.

                                                                        That said, could dinit be even simpler?

                                                                        I don’t know.

                                                                        As I say, systemd has invented its own quarter-arsed language for the .unit files. Maybe if dinit used a real language… (I call shell a half-arsed language.)

                                                                        1. 11

                                                                          You are comparing systemd to “sysv”. That’s a false dichotomy that was very aggressively pushed into every conversation about systemd. No. Those are not the only two choices.

                                                                          BTW, sysvinit is a dumb-ish init that can spawn processes and watch over them. We’ve been using it as more or less just a dumb init for the last decade or so. What you’re comparing systemd to is an amorphous, distro-specific blob of scripts, wrappers and helpers that actually did the work. Initscripts != sysvinit. Insserv != sysvinit.

                                                                          1. 4

                                                                            Ok, fair cop.

                                                                            I was using sysv as a hand waving reference to the various flavours of init /etc/init.d scripts, including upstart that Debian / Ubuntu have been using prior to systemd.

                                                                            My point is not to say systemd is the greatest and end point of creation… my point is it’s a substantial advance on what went before (in yocto / ubuntu / debian land) (other distros may have something better than that I haven’t experienced.)

                                                                            And I wasn’t seeing anything in the dinit aims and goals list yet that was making me say, at the purely technical level, that the next step is on its way.

                                                                      2. 3

                                                                        Personally I would have preferred it to be something like guile with some addons / helper macros.

                                                                        So, https://www.gnu.org/software/shepherd/ ?

                                                                        Ah, no, you probably meant just the language within systemd. But adding systemd-like functionality to The Shepherd would do that. I think running things in containers is in, or will be, but maybe The Shepherd is too tangled up in GuixSD for many people’s use cases.

                                                                      1. 6

                                                                        I have been reading through this and I feel like POSIX MQs are really underutilized. Perhaps it’s because they don’t use the file API.

                                                                        Does anyone have thoughts on using POSIX MQs?

                                                                        1. 8

                                                                          One issue with them on Linux, at least, is that they have smallish default maximums: 8kb max message size, max 10 messages in flight at a time.

                                                                          The bigger issue though, IMO, is that they fit in an awkward gap between stream abstractions like pipes or sockets, and full-featured message queue systems. Most people who want simple local IPC just use pipes or unix sockets (even if this requires a bit of DIY protocol around message delimiters), and people who want full-on queueing usually want queue state that persists over reboots, at least the option of network distribution, etc., so they use zeromq or similar.

                                                                          1. 4

                                                                            I’m looking at POSIX mqueues for better concurrency control than pipes but with less ceremony than sockets. Seems like that might be their sweet spot. Also, on FreeBSD, PIPE_BUF is way too small (512 bytes). I might whip up some test programs to see how well they go.

                                                                          2. 3

                                                                            I don’t think I’ve ever actually heard of them before, though I’m definitely planning on looking into using them for a few things now…

                                                                            1. 2

                                                                              They seem like a very nice tool. It’s interesting that they support priority natively, and can handle both non-blocking mode and blocking with a timeout. To clarify what mjn says, they have an 8kb default max message size. Looks like the actual max size is somewhere around 16MB, which makes them more than big enough for my use cases.

                                                                              I wonder what the performance is like.

                                                                              1. 2

                                                                                Yeah, the max size (and number of messages) is configurable, but it’s a kernel parameter rather than something accessible from userspace (so the program can’t request the change itself, at least if it doesn’t run as root). Which is probably fine if you’re writing something fairly specialized, but it really reduces the number of cases I’d consider using them. I don’t want to write software where the install instructions have to tell users how to tweak their kernel parameters (and package managers don’t like to package those kinds of packages either).

                                                                                1. 2

                                                                                  That’s interesting. I wonder how that relates to containers/cgroups/etc. Can a docker container be spun up with a dedicated message size specific to that container?

                                                                                  [edit]

                                                                                  I think I partially answered my own question. It appears that POSIX message queues are part of the IPC namespace, what I’m not sure about is if /proc/sys/fs/mqueue/msgsize_max is per-container.

                                                                                  1. 2

                                                                                    Yeah, the max size (and number of messages) is configurable, but it’s a kernel parameter rather than something accessible from userspace

                                                                                    Is this the case? mq_open takes an mq_attr which lets one specify the size and max messages. There are upper bounds but they seem quite high from what I can gather.

                                                                                    1. 2

                                                                                      Of the various /proc/sys/fs/mqueue/ parameters: you can override msgsize_default and msg_default with the parameters to mq_open, but only up to the ceilings specified by msgsize_max and msg_max.

                                                                                      But on my Debian machine, the *_default and *_max parameters are the same, 8192 message size and 10 messages, so in practice you can’t actually request anything larger than the default, without tweaking settings in /proc. It’s possible other distributions ship different defaults; I’ve only checked Debian.
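
                                                                                      (A quick way to check the limits on a given box – a minimal Python sketch; the paths are Linux-specific, and the *_default files only exist on reasonably recent kernels:)

                                                                                        # Print the kernel's POSIX message queue limits: defaults and hard ceilings.
                                                                                        for name in ("msg_default", "msg_max", "msgsize_default", "msgsize_max"):
                                                                                            with open("/proc/sys/fs/mqueue/" + name) as f:
                                                                                                print(name, f.read().strip())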

                                                                              1. 1

                                                                                This is just unbelievable, and if I hadn’t been following the author for many many years I don’t think I’d believe it. The fact that it also rick rolls you is …. I’m speechless.

                                                                                1. 1

                                                                                  Also check out PoC||GTFO. :D

                                                                                1. 16

                                                                                  What would be more interesting is a list of ways in which it differs from the heaps of existing open-source federated social networks.

                                                                                  1. 18

                                                                                    The key difference right now is people are using it, or at least have started to use it in the last few days. So if I were to draw a Venn diagram of “people I follow on twitter” and “people who have mastodon accounts”, it wouldn’t just be two distinct circles, which I can’t say for any of the other federated networks.

                                                                                    Whether they’re all still using it next week, well, we can be optimistic.

                                                                                    1. 8

                                                                                      I suspect part of it is that the people running Mastodon instances tend to have specific anti-harassment policies at a time when Twitter is getting a lot of flak for ignoring their harassment problems.

                                                                                      Icosahedron (a Mastodon instance) specifically calls out that “Fascism is incompatible with a free exchange of ideas”: https://icosahedron.website/about/more which is a breath of fresh air after the way Twitter has been avoiding admitting there’s even anything wrong.

                                                                                      I have a gut feeling that enforcing such policies would be easier on a federated network.

                                                                                      1. 8

                                                                                        Not sure how much of a factor it is, but I did notice that the majority of the instances (including all the big ones) are run by either Germans or French, which provides a different cultural and legal background compared to American-run services like Twitter. For example, whether to allow overt Nazism isn’t even really a debate in the German or French context, because it’s illegal.

                                                                                        1. 3

                                                                                          Disallowing ideas seems a bit less compatible with a free exchange of ideas to me.

                                                                                          1. 4

                                                                                                    You’re free to think that; it doesn’t mean you’re right though, and I guess with Mastodon you’re also free to pick a host that agrees with you rather than being at the mercy of a totally centralized model.

                                                                                        2. 1

                                                                                          or at least have started to use it in the last few days.

                                                                                          Any idea why that is?

                                                                                          I’ve had an account for ages, but I’m seeing Mastodon everywhere today and can’t figure out why it’s suddenly a thing.

                                                                                          1. 3

                                                                                                Twitter changed how replies work, and that became a sort of last straw for some people. Some high(ish) profile folk tweeted that they had made mastodon accounts, others followed, and it’s gained traction.

                                                                                        3. 2

                                                                                          For those of us with less knowledge on this subject, can you share some examples that you have in mind?

                                                                                          1. 12

                                                                                            Some that immediately come to mind upon reading about Mastodon:

                                                                                            • Diaspora
                                                                                            • Tent.io
                                                                                            • Pump.io
                                                                                            • Friendi.ca
                                                                                            • Identi.ca (now GNU social?)

                                                                                            Edit: seems there are quite a few more I didn’t know about!

                                                                                            1. 3

                                                                                              Identi.ca is a pump.io node; it used to be based on StatusNet which is now GNU social.

                                                                                              1. 15

                                                                                                Mastodon is compatible with GNU Social fwiw. It’s an alternate server and web-UI implementation, but speaks the same protocol and can federate with GNU Social instances. The linked article doesn’t make this clear, but the GitHub repo does.

                                                                                                I think the linked article is targeted at Twitter users looking to switch who don’t already know anything about the history of open-source / federated networks, so it avoids going into too much digression there. There’s been a huge spike in people signing up on mastodon.social over the past 2-3 days, seemingly due to a dislike of some recent Twitter changes; Mastodon happened to be in the right place at the right time to capitalize on that. So I think this post is an attempt at writing a “hello, welcome to this new option” article for people who are seeing others on Twitter post about it and are wondering what it’s all about.

                                                                                              2. 1

                                                                                                Here’s someone’s attempt to provide a short history of how all this stuff came about, and how it relates.

                                                                                            2. 2

                                                                                              The GitHub repo has more technical info. Mastodon should federate with the others.

                                                                                            1. 4

                                                                                              Summary: configuration files evolve into impromptu programming languages over time. Just use a programming language to configure a program.

                                                                                              I think the same is valid for languages like SQL: you make a small language to solve a domain-specific problem, but the problem’s scope is too broad, leading to the creation of not-so-well-designed programming languages like PL/SQL.
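
                                                                                              To make the “just use a programming language” point concrete, here is a minimal sketch (not from the article; the module and setting names are invented): the configuration is an ordinary Python module, so derived values and conditionals are plain code rather than a mini-language grafted onto an INI or YAML file.

                                                                                              ```python
                                                                                              # config.py -- a hypothetical settings module the application imports directly.
                                                                                              import os

                                                                                              ENV = os.environ.get("APP_ENV", "development")
                                                                                              BASE_DIR = os.path.dirname(os.path.abspath(__file__))

                                                                                              # Derived values are plain expressions; no interpolation syntax needed.
                                                                                              LOG_DIR = os.path.join(BASE_DIR, "logs", ENV)

                                                                                              DEBUG = ENV != "production"

                                                                                              DATABASE = {
                                                                                                  "host": os.environ.get("DB_HOST", "localhost"),
                                                                                                  "port": int(os.environ.get("DB_PORT", "5432")),
                                                                                                  # Conditional logic an INI file would need an ad-hoc dialect to express.
                                                                                                  "pool_size": 20 if ENV == "production" else 2,
                                                                                              }
                                                                                              ```

                                                                                              The application then simply imports it (from config import DATABASE); the obvious trade-off is that the configuration is now arbitrary code.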

                                                                                              1. 13

                                                                                                I think the same is valid for languages like SQL: you make a small language to solve a domain-specific problem, but the problem’s scope is too broad, leading to the creation of not-so-well-designed programming languages like PL/SQL.

                                                                                                I wouldn’t want to imagine a world, though, where everyone writes their SQL queries in a Turing-complete, imperative programming language, let alone optimizes them. SQL is one of the best examples of how favoring a declarative style has led to huge improvements outside of edge cases. There are still lots of footguns around, but far fewer than in most programming languages.

                                                                                                And PL/SQL is a good example of how things can go wrong :).

                                                                                                1. 2

                                                                                                  I wouldn’t want to imagine a world, though, where everyone writes their SQL queries in a Turing-complete, imperative programming language, let alone optimizes them.

                                                                                                  SQL itself is Turing-complete already, though. ;)

                                                                                                  1. 2

                                                                                                    So is Magic: the Gathering.

                                                                                                    Turing completeness can arise by accident. That doesn’t make it an environment where you’d actually want to use that property.
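
                                                                                                    For what it’s worth, the claim above usually rests on recursive CTEs (added in SQL:1999), which give SQL general recursion while staying declarative. A minimal, invented sketch using Python’s built-in sqlite3 module:

                                                                                                    ```python
                                                                                                    # Recursive CTE in SQLite: iteration expressed declaratively.
                                                                                                    # This one just lists the first 10 Fibonacci numbers.
                                                                                                    import sqlite3

                                                                                                    QUERY = """
                                                                                                    WITH RECURSIVE fib(n, a, b) AS (
                                                                                                        SELECT 1, 0, 1
                                                                                                        UNION ALL
                                                                                                        SELECT n + 1, b, a + b FROM fib WHERE n < 10
                                                                                                    )
                                                                                                    SELECT n, a FROM fib
                                                                                                    """

                                                                                                    conn = sqlite3.connect(":memory:")
                                                                                                    for n, value in conn.execute(QUERY):
                                                                                                        print(n, value)
                                                                                                    conn.close()
                                                                                                    ```

                                                                                                    Which rather reinforces the point: the capability is there, but it’s not an environment where you’d actually want to lean on it for general computation.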

                                                                                              1. 13

                                                                                                URL has some garbage at the end.

                                                                                                  1. 2

                                                                                                    Fixed, thanks.

                                                                                                  2. 1

                                                                                                    Sorry. Thanks to Irene.

                                                                                                  1. 3

                                                                                                    Objection! It’s not weekly if you’re going to release it biweekly! ;)

                                                                                                    1. 5

                                                                                                      True enough, but we need to start somewhere :) Also FWIW I’m not going to post these (bi-)weekly. Just really really interesting ones, like the very first.

                                                                                                      1. 8

                                                                                                        Call it NixOS fortnightly, that word isn’t used often enough! >.<

                                                                                                    1. 6

                                                                                                      I know others will disagree, but I find 500px unreadably narrow, and couldn’t find where to turn it off in the inspector.

                                                                                                      1. 4

                                                                                                            In the Chrom{e,ium} inspector, expand the body, click on the site-wrapper div, and uncheck the width: 500px style; then expand the site-wrapper div, click on the core-content div, and uncheck the width there too.

                                                                                                        1. 1

                                                                                                              These steps should work in the inspector tool of any browser these days.

                                                                                                      1. 3

                                                                                                        I rather like the Mazes for Programmers book, and the author’s blog, as well.

                                                                                                        Edit: And I just realized that I glanced over the part where he already links to it!

                                                                                                        1. 1

                                                                                                              For my font needs, Infinality is all I use. After installing it, fonts are simply perfect for my taste.

                                                                                                          1. 8

                                                                                                                The article talks about Infinality and how v40 is a faster version…