1. 2

    It’s nice to see ewontfix coming back to life again!

    1. 11

      To be fair, the article brings up some strong points, but in the end it all depends on your norms. It’s obvious why suckless disregards systemd; you all know our philosophy. If you’re happy with systemd, keep using it. We won’t be knocking on your door demanding that you uninstall it.

      Just don’t complain when Red Hat, in the form of Lennart Poettering et al., centralizes more and more subsystems of the userspace. They started with PulseAudio and systemd; who knows what will come next. What’s really shocking is the indifference toward this hostile (due to corporate interests) takeover. The Linux community has voluntarily put shackles on itself, and when it feels them in a few years, we, and many many others, will have told you so.

      Yes, systemd solves some problems, but can we really stomach the baggage it brings to the table? Can’t we as a community come up with a better solution? We have our own proposal: The suckless core, a set of different tools to manage your userspace needs. You can replace any component you like, write your own and improve it. We do not claim it’s an end-all solution, it’s just a proposal. Systemd claims to be an open standard and is theoretically reimplementable as an interface, but have you seen any serious attempt to reimplement this beast?

      The flexibility of the userspace is both Linux’s biggest strength, due to adaptability, and its biggest weakness, due to fragmentation. We should not throw it away; we should accept that it’s a difficult problem that needs to be solved differently than with centralization.

      1. 1

        What’s really shocking is the indifference toward this hostile (due to corporate interests) takeover. The Linux community has voluntarily put shackles on itself

        Feel free to not like systemd, but Debian, at least, voted overwhelmingly to keep systemd as its main init system. What you call “centralization” is what I call having a computer that actually works. systemd wouldn’t be popular if it didn’t solve real problems that real people have.

        1. 4

          What you call “centralization” is what I call having a computer that actually works.

          The only systems I have ever had with literally undebuggable problems were computers running systemd. I followed all of the instructions on the Arch wiki, and any I could scrounge up elsewhere, and I still have no idea why my one computer running systemd decides to hang for literally 30 minutes randomly on boot. I still don’t know why the other computer, a ThinkPad, takes 5 minutes to start up despite having dependency management (compared to my other, much older ThinkPad running Alpine Linux, which takes about 30 seconds to boot up to a terminal, after I tinkered with it and removed some blocking processes that are ready by the time I log in anyway). I still have no idea why my network will randomly fail on another computer running Ubuntu, or why sometimes the entire computer will just freeze up, leaving me to remember the sysrq keys to reboot it, and sometimes those fail entirely and I just have to power cycle! I have been running Linux on almost all these computers for a little over ten years, and the only times I have ever had problems of this sort were after systemd was introduced.

          I would love to live in this world where, magically, systemd just works.

          Even putting all of that aside, whenever I try to futz about with init scripts on the only piece of hardware I have where systemd works reliably, my Pi W, I realise how horrifying the systemd init process is! There are about 3 different places unit files can reside, some of them are symlinks, some of them override others, and (last time I checked) there isn’t a sure way to figure out what part is overriding another part.

        2. 1

          I agree with most of your points, but indifference? There’s been a pretty vocal opposition to PulseAudio, systemd, etc. Problem is the big bodies like Red Hat just keep going on this course.

          Luckily there’s still enough distributions without systemd. I’m personally fond of Void Linux and use OpenBSD on my servers. (Since I can finally play most of my Steam library on Linux I’m not going to make it harder again on myself in that respect.)

          1. 1

            Maybe I was a bit unclear in this regard. By indifference I meant the “majority”, those who just accepted this situation. I remember back when Debian adopted systemd there were, of course, the Devuan forkers, but also many many developers who just let it happen and still let it happen, given that Debian is currently in the process of rediscussing this choice.

            1. 6

              many many developers who just let it happen

              If by letting it happen you mean “explicitly chose to make it happen”: let’s not frame Debian’s adoption of systemd as something some people snuck through in some relatively obscure vote. It was voted for once and adopted, and the second, more recent vote reaffirmed the decision to continue with systemd.

        1. 54

          Okay, okay, that’s technically off-topic, but…

          Can we talk about the awesome project name?

          1. 39

            It’s cool on two levels because ores are often oxides.

            1. 7

              It’s a masterpiece! I love this level of creativity, especially nowadays when everyone slaps an “.io” or a “d” at the end of their project’s name.

          1. 8

            The crucial detail is this

            To deploy drivers built with DriverKit, allow other developers to use your system extensions, or use the EndpointSecurity API, you’ll need an entitlement from Apple.

            If I understand this correctly, we can say goodbye to hackintoshes.

            1. 1

              That was my first thought too. I also wonder about tools like LittleSnitch.

              1. 3

                In theory, Apple have been expanding Network Extensions API to be a sufficient substitute for NKEs. I’m not an expert on either, but if it’s anything like the EndpointSecurity framework as the supposed substitute for KAUTH listeners, there will be features that are killed off.

                I’m still in the process of porting some USB kexts to DriverKit, so I’ll see how good a substitute that is. I’m a little worried about the “magic” compiler-generated IPC glue being a debugging nightmare. I only recently started working on DriverKit though, largely thanks to lots of time spent on miscellaneous immediate Catalina regressions and working around the shortcomings of some of the braindead user consent implementations, which were more urgent because they immediately affected users.

                The 10.15.4 beta SDK has also added support for PCI/Thunderbolt drivers to DriverKit, it’ll definitely be interesting to see just how well that works out.

                I do think we’ll lose a bunch of software and hardware diversity on the Mac as a result of this. I’m finding that the effort spent treading water to keep things running on the latest OS version, or jumping through hoops due to badly designed features/APIs, compared to the time left for actually building something that has intrinsic value, is becoming increasingly skewed. I fear lots of developers are going to decide it’s no longer worth it.

            1. 0

              It’s still easier to read than Rust. Nice work! :)

              1. 2

                Perl still beats it though. Go Perl!

              1. 6

                If anybody is interested in the topic, I wrote a short manuscript (10 pages) on the mathematical model and analysis of anonymity networks, especially in regard to attacks, using the Crowds network as an example.

                1. 23

                  WireGuard is so much better than any other VPN solution I’ve tried, not only in regard to performance, which shines when I look at connection stability, latency and overhead (the main reason being that connections are stateless). The much more crucial point is that WireGuard is so easy to set up (literally <12 lines of config on server and client get you started). I would never have dared to do this with OpenVPN, but I’ve successfully set up a “real” VPN, meaning I linked multiple computers into one private network, allowing me to access my local machines from wherever I am, safely guarded from surveillance and other actors.

                  WireGuard is a prime example of what we always promote at suckless.org: one doesn’t need an enterprise-ready solution to be productive or solve problems. Enterprise-ready often means bloated, full of legacy cruft and hard to set up (as it becomes people’s job to set it up). I’m not saying that WireGuard was trivial to implement, but just looking at the interfaces it provides, it is damn simple, and that’s how every software should be.
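
                  To illustrate what I mean by <12 lines, a peer configuration looks roughly like this (a sketch with placeholder keys, addresses and endpoint, not my actual setup):

                      [Interface]
                      # this machine's own tunnel address and private key (placeholders)
                      PrivateKey = <client-private-key>
                      Address = 10.0.0.2/24

                      [Peer]
                      # the server's public key, its reachable endpoint and the range it routes
                      PublicKey = <server-public-key>
                      Endpoint = vpn.example.org:51820
                      AllowedIPs = 10.0.0.0/24
                      PersistentKeepalive = 25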

                  1. 5

                    The much more crucial point is that WireGuard is so easy to set up (literally <12 lines of config on server and client get you started)

                    NixOS users can also configure it declaratively: https://nixos.wiki/wiki/Wireguard

                    I’ve been using Wireguard for more than a year now, in order to serve web apps that run on my home machine. I use a small DigitalOcean VM with nginx that proxies through wireguard.
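
                    The nginx side is nothing special; roughly this kind of server block (hypothetical hostname and tunnel address), with the upstream being the home machine’s WireGuard IP:

                        server {
                            listen 80;
                            server_name slownews.example.org;  # hypothetical name

                            location / {
                                # forward over the WireGuard tunnel to the home machine
                                proxy_pass http://10.100.0.2:8080;
                                proxy_set_header Host $host;
                                proxy_set_header X-Forwarded-For $remote_addr;
                            }
                        }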

                    1. 1

                      Wow, that’s brilliant! How bad is the added latency?

                      1. 2

                        I did not measure it, but you can check it out for yourself by accessing one of my apps: https://slownews.srid.ca/ (just ignore that JS overhead, as that is compiled from Haskell using GHCJS).

                    2. 5

                      I love WireGuard and use it every day, but I really do wish it had a shitty TCP mode so that I could use it on public Wi-Fi networks that block UDP. I understand performance would be bad, but a slow VPN beats one you can’t use every single time.

                      1. 2

                        Could you maybe rig up something with socat or similar as a TCP<->UDP proxy on each endpoint as a band-aid? I guess it might take a bit of extra work to delimit UDP message boundaries if the protocol depends on those…
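
                        Something along these lines is the kind of band-aid I had in mind (untested, hypothetical ports and hostname, and with the caveat above that a plain TCP stream does not preserve datagram boundaries):

                            # on the server: accept TCP 443 and forward the payload to the local WireGuard UDP port
                            socat TCP-LISTEN:443,fork,reuseaddr UDP:127.0.0.1:51820

                            # on the client: offer a local UDP "endpoint" and tunnel it to the server over TCP
                            socat UDP-LISTEN:51820,fork,reuseaddr TCP:vpn.example.org:443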

                        1. 2

                          I mostly want this on my iPhone, so first-party support would be ideal. The WireGuard iOS app is great, btw.

                      2. 3

                        Thanks for the nice words. I’ve always thought highly of suckless.org, so that means a lot.

                      1. 20

                        I’ve heard from pen-testers in my building that they now recommend Windows Defender because it’s the default everywhere and therefore has a much larger user base for detection and reports.

                        I have no idea whether the security community agrees with this, but I thought it made sense.

                        1. 11

                          Yup, I can confirm that. There was a test here in Germany by a respected security firm, and they found that Windows Defender has really upped its game since the Vista days, being the best of all tested solutions in regard to detection, speed and security. There’s no reason to use anything else, unless you want to unnecessarily slow your system down or have your data sold. :P

                          1. 8

                            Penetration tester here; yep, I agree with that. The other AV vendors have a tendency to pull kernel-module stunts that would normally be considered pretty crazy/risky, whereas Defender is now much better and clearly more sanely integrated. I will say, it still feels odd suggesting Defender though; it used to be the worst.

                            I should also add a note that if you are talking about Windows production servers or things that are a bit more in the “change management” role, most of the time I still suggest using Application Whitelisting over AV (or as a supplement).

                            1. 1

                              That’s similar to my thoughts. Makes sense too.

                              1. 1

                                In my personal experience pentesting (although I do far less of it now than I used to), the paid software rarely offers anything over Windows Defender, but often comes with a bunch of overhead. Most people think AV does something it doesn’t. In over 20 years of testing, AV hasn’t stopped me once. That’s not AV’s fault; it’s just not generally designed to stop the workarounds people use. What is AV’s fault is that it’s marketed as a catch-all.

                                Having said that, MalwareBytes is one of the better set-and-forget antimalware tools I’ve used. For people who need that confidence or are being attacked, it’s something I’ve been comfortable suggesting for peace of mind in some cases.

                              1. 10

                                I often notice that the lack of UNIX principles in the software world more or less directly translates to planned obsolescence in products. When software is modular, and each component does only one job and does it well, with well-defined and simple interfaces, you can easily hack on it and adapt it to your needs.

                                The author (@yingw787) describes this process with his headphones, where he was able to make use of the headphone jack for a bluetooth adaptor and simply attach a microphone suiting his needs. Most notably, the batteries and cushions are replaceable. If we look at competing all-in-one-headphones, when the batteries age, when the supported Bluetooth standard becomes older and older or simply when the non-replaceable cushions deteriorate, you have no other choice but to throw them away at some point. And that’s what planned obsolescence is in a nutshell.

                                We need to be more critical of companies designing their products for the landfill. Everybody’s talking about “climate change” and environmentalism, but many have no issues buying a MacBook with glued in batteries or headphones where nothing is replaceable, even things that are used heavily. Things used to be much more serviceable in the past, but somehow most people do not care about it anymore.

                                1. 8

                                  Not that I disagree, but even as a staunch Apple critic I somehow contest the MacBook example. Most people don’t buy it for the hardware, but for the software. And if it needs to be mobile… it’s not an iMac without a battery. Also, 90% of people preferred the old models with replaceable batteries, and nobody I know ever said “Thank god it’s thinner and lighter, and I happily accept that I can’t change the battery”. So they accept it because there’s no alternative if you want/need to stay on OS X. And that is a different discussion, not suited here.

                                  Not my personal problem in this case, but for everyone there are things where it’s easy/medium/hard to change their habits and to make compromises. I know a few people who would prefer to get rid of their car a lot easier than switching from OS X.

                                  1. 1

                                    Maybe I’m not most people, but I personally buy laptops for the hardware, and not the software :) I had a Dell Precision 5530 early last year (enterprise-grade XPS 15), and I didn’t like the feel of the hardware. I got the Lenovo P1 Gen 2 (enterprise-grade X1 Extreme), and I liked it, so I installed Ubuntu on it. I would say the OS is our interface to the hardware, and an open OS allows us to commoditize the hardware and gives us leverage.

                                    I won’t say it’s easy, but for me the biggest oof in transitioning from macOS to Ubuntu was the lack of SelfControl.app and Timing.app, which managed my blockers and time tracking. Instead, I purchased a tool called RescueTime, which comes with a redirect proxy (?) that blocks certain distracting websites beyond what I block using /etc/hosts, and tracks my time. Not as polished as macOS, but it works for my personal dev workflows. Laptop sleep, night shift, etc. work fine out of the box for 19.10 and Linux kernel 5.x.

                                  2. 2

                                    Neither of the two latest laptops I’ve gotten from work has had a user-replaceable battery.

                                    1. 2

                                      Hmm, what do you mean by user-replaceable batteries? Like ones with the release latch baked into the battery case, or ones where it has a detachable power bus inside a unibody laptop case?

                                      I personally consider both user-replaceable, and have replaced the battery in my Mid-2012 15’’ MacBook Pro twice with no complications. I looked online at the MacBook Air battery, and to my surprise other companies do make replacement batteries for those too. The batteries aren’t glued in or anything.

                                      1. 2

                                        The ones with the latch. But if one can open the laptop using standard screws and there’s no glue involved that’s fine too.

                                        1. 1

                                          Okay cool! I personally don’t mind the Apple-specific screw heads. You can get a full set of different screw heads on Amazon for $20. And nope, no glue involved!

                                    2. 1

                                      I’m a big fan of voting with my wallet. I’m willing to pay much more for things that give me, the consumer, as much leverage as possible. Maybe it won’t counterbalance the rest of society, and maybe it will. I consider it to be my stand and my voice :)

                                    1. 9

                                      I’m quite uncomfortable with the idea of discord recording voice calls. Keeping records of chat logs is obviously necessary with the way Discord is designed, which is around long duration searchable history of channels, anyone being able to invite anyone to the server, etc.

                                      But voice calls are totally ephemeral. And people expect them to be treated that way. Someone keeping logs of a text conversation in Discord wouldn’t be considered odd. Someone recording a voice call they were in, without telling anyone? That’d be considered a breach of trust in every Discord community I’ve been in. So Discord the company having the ability to do so is just creepy.

                                      1. 6

                                        I’m not sure what drives you to expect privacy from a communications platform fueled by venture capital money. I wouldn’t be surprised if they’re trying to do at least two things:

                                        1. Applying a censor to voice depending on server/user DM configuration. I know they’ve got some kind of OCR that tries to identify and block offensive words contained in images, such as the N word, when people are not friends and at least one side hasn’t changed the “safe direct messaging” option down to “I live on the edge”.
                                        2. Store records at least temporarily for law enforcement.

                                        And the other obvious possibilities are keeping the data for post-processing to derive user interests for advertising, or batching and forwarding the information to intelligence agencies.

                                        It’s hard to tell, really.

                                        1. 4

                                          If voice calls are being recorded, users should be shown a very clear warning, at the very least.

                                          On a side note, the fact that a behavior is not surprising does not make it acceptable or not worthy of discussion.

                                          1. 2

                                            Is there a mention of this in the ToS? (I don’t get a hit for the string “audio” there).

                                            At least in Sweden (and maybe in the EU in general), if you call a contact center that employs “sentiment analysis” and “quality control”, you are informed of this beforehand.

                                            If Discord does record voice but doesn’t inform beforehand (through a ToS), they could get in big trouble in the EU.

                                            1. 2

                                              I’m not a lawyer, get a lawyer for good advice.

                                              I couldn’t find anything related to recording and retention, or user deletion, outside of copyright-infringement contexts, which is what a good section of this doc appears to be about (DMCA, etc.).

                                              There is a dense “Your Content” paragraph, which I have reformatted into one bullet per sentence, bolding the major points:

                                              You represent and warrant that:

                                              • Your Content is original to you and that you exclusively own the rights to such content including the right to grant all of the rights and licenses in these Terms without the Company incurring any third party obligations or liability arising out of its exercise of such rights and licenses.
                                              • All of Your Content is your sole responsibility and the Company is not responsible for any material that you upload, post, or otherwise make available.
                                              • By uploading, distributing, transmitting or otherwise using Your Content with the Service, you grant to us a perpetual, nonexclusive, transferable, royalty-free, sublicensable, and worldwide license to use, host, reproduce, modify, adapt, publish, translate, create derivative works from, distribute, perform, and display Your Content in connection with operating and providing the Service.

                                              @gerikson, this appears to be full grant and indemnification, which also covers traditional voice chat.

                                              1. 2

                                                Thanks for this. The “content” section seems to be standard boilerplate that many content platforms include to allow them to duplicate content over CDNs etc. Periodically there’s a panic in the form of “OMG Facebook owns all your content!!!” based on misunderstanding of these clauses.

                                                Possibly Discord reserves the right to terminate service if they can determine that someone is abusive in voice chat. It would be interesting to hear if anyone has lost access in this way - i.e. been unfailingly polite in text but violated the ToS in voice. That would be somewhat strong proof that audio is recorded and monitored, at least after complaints are made.

                                              2. 1

                                                Putting some fine print in the ToS that nobody reads doesn’t count as ‘notifying beforehand’ in my opinion.

                                          2. 2

                                            If they’re up-front with it, I say there’s nothing wrong. Otherwise, I agree. I use Discord all the time because many communities are using it these days, but never the voice chat, just because text is more consistent and makes it easier to communicate with many people and ideas.

                                            1. 10

                                              If they’re up-front with it, I say there’s nothing wrong.

                                              Muggers are often quite up-front too, and less opaque than most web TOSes these days.

                                              1. 3

                                                Thanks for that comment, you made my morning :)

                                                1. 3

                                                  Muggers and TOSes are not comparable…

                                                  1. 1

                                                    Honest Americans offering a service of stress release, with clear and direct terms of service agreements. God bless

                                                2. 2

                                                  There’s nothing suggesting they do any recording of voice calls. I wouldn’t at all be surprised they have the ability to, they own the server and the proprietary service you’re using to communicate with.

                                                  1. 5

                                                    Discord provides a policy regarding user privacy, which explains it may capture “transient VOIP data”. While it’s a bit unclear what this may entail, our research shows that this “data” includes all voice and video data.

                                                    This suggests to me they’re recording voice calls.

                                                    1. 2

                                                      They could be doing literally anything with this unspecified data, and I’d baselessly assert it’s probably related to audio-processing features like noise cancelling and echo reduction, rather than being vague terminology for nefarious purposes.

                                                  2. 2

                                                    Are there any well-polished and E2EE (or selfhosted) voice + video call applications that people here on lobste.rs would recommend? The ones I could find don’t seem to work very well on slow connections (dynamic video bitrate pls), so I’m looking for more alternatives.

                                                    1. 2

                                                      The only thing I can recommend right now is Matrix.org. You can self-host it, and compared to many many other solutions, the protocol is rather consistent and nothing is bolted on. I like how encryption keys are first-class citizens, compared to XMPP and others.

                                                      1. 1

                                                        Does matrix.org support reactions in text chat (thumbs up, etc.)?

                                                        I tried the Fractal client and I couldn’t find a way to see or create reactions.

                                                        1. 2

                                                          i currently use the riot client and it supports emoji-style reactions in text. so i assume it’s part of matrix itself and maybe some clients haven’t implemented it (or it’s buried in the UI?)

                                                  1. 2

                                                    The more macOS moves away from its UN*X-core and a relatively open hardware platform on its Macs, the less appealing it is to me. Thankfully, I jumped ship in 2012, back when you could still replace your hard drive and there was no T2-HAL9000-chip blocking your path, which is totally uncalled for and just a means of arrogant control over the users (A Mac is not an iPhone).

                                                    Just like with this example here, where it is hard to discern if it was the result of arrogance or incompetence.

                                                    1. 43

                                                      If you want to invest heavily in GNUisms (bash, RECIPEPREFIX, …) and make your makefile less portable, go ahead and follow the advice in this article. If you don’t, be my guest: use sh(1) instead of bash(1), read the make standard and test your makefiles with make implementations other than GNU make.

                                                      This helped me tremendously with writing good, concise and simple makefiles. We need to stop writing unportable makefiles, and especially stop relying on the bloated GNU ecosystem.
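
                                                      As a rough sketch (not taken from any particular project), a portable Makefile for a small C tool doesn’t need much more than this (recipe lines must be indented with a tab):

                                                          .POSIX:

                                                          CC     = cc
                                                          CFLAGS = -std=c99 -Wall -Os
                                                          PREFIX = /usr/local

                                                          OBJ = main.o util.o

                                                          tool: $(OBJ)
                                                                  $(CC) $(LDFLAGS) -o $@ $(OBJ)

                                                          .c.o:
                                                                  $(CC) $(CFLAGS) -c $<

                                                          install: tool
                                                                  mkdir -p $(DESTDIR)$(PREFIX)/bin
                                                                  cp -f tool $(DESTDIR)$(PREFIX)/bin

                                                          clean:
                                                                  rm -f tool $(OBJ)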

                                                      1. 11

                                                        Alternatively, name your makefile GNUmakefile to mark it as “only tested with GNU”.

                                                        1. 10

                                                          and especially stop relying on the bloated GNU ecosystem.

                                                          I know this site likes to hate on GNU, but its make has more useful features than any other make. % rules make writing pattern rules so much easier. The text-manipulation functions make working with variables so much nicer. These are “killer” features of GNU make for me.
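
                                                          A quick sketch of what I mean (GNU make only, hypothetical source layout): one % rule plus the text functions replace a pile of per-file boilerplate:

                                                              # collect sources and map them to objects with text functions
                                                              SRC := $(wildcard src/*.c)
                                                              OBJ := $(patsubst src/%.c,build/%.o,$(SRC))

                                                              app: $(OBJ)
                                                                      $(CC) -o $@ $(OBJ)

                                                              # one pattern rule covers every object; | build is an order-only prerequisite
                                                              build/%.o: src/%.c | build
                                                                      $(CC) $(CFLAGS) -c -o $@ $<

                                                              build:
                                                                      mkdir -p build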

                                                          1. 11

                                                            I used to be pretty harsh on GNU stuff until I realized how decrepit pure POSIX implementations are. For better or for worse, GNU tools do what people actually want, instead of some religious interpretation of Unix or minimum specification compliance.

                                                            1. 5

                                                              Seconded. POSIX make is pretty sparse. GNU Make has a bunch of warts but it has some conveniences that are nice to work with. Reading what automake dumps out isn’t a good way to judge it, either.

                                                            2. 2

                                                              I agree. Writing a portable (POSIX-compatible) Makefile works when you do “regular” stuff, like building .c -> .o -> binaries, but anything more complex ends up having to reach out to shell scripts, either inside the Makefile or around it.

                                                            3. 6

                                                              I don’t agree with the article either, although for a different reason (which might be added to yours, really).

                                                              My standpoint is that shell scripting and makefiles are poorly understood by many people in the industry. Spicing up a makefile like this is likely to result in wrong makefiles when a colleague with shallow experience of these tools goes to change something.

                                                              To give an example, it is expected that individual failures in the recipes will make the whole build rule fail, but setting ONESHELL changes this semantics dramatically, with no change in syntax! Unless you also set SHELLOPTS += -e, error checking will work in an often unexpected way!
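
                                                              A tiny illustration (GNU make, just a sketch): without .ONESHELL the failing first line aborts the rule; with .ONESHELL the whole recipe is handed to a single shell, and without -e the rule’s status is simply that of the last command:

                                                                  .ONESHELL:
                                                                  broken:
                                                                          false
                                                                          echo "this still runs, and the rule even counts as successful"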

                                                            1. 9

                                                              I don’t like the concept of systemd (too bloated for my taste), but it’s good to see some diversity on that front. The systemd developers always claim that they were just implementing an “interface” of which there could be many implementations. Maybe we’ll hear more from this project and see this excuse become a reality.

                                                              1. 23

                                                                I completely agree with this article.

                                                                When people want to show me a video on YouTube, mostly using their smartphones and the official YouTube app, they are often presented with multiple (!) ads, as has now become the norm. What surprises me is how patiently they wait for the ads to run through, but then I realize that “normal” consumers who aren’t aware of ad blockers and alternative YouTube apps (like NewPipe) don’t have any other choice (read: knowledge) but to sit through these ads. It would make me crazy and would probably lead me to actively avoid YouTube for the most part.

                                                                What this shows is that consuming ads regularly actually wears you down in a way. It increases your tolerance for noisy inputs into your brain, which in turn, I believe, leads to shorter and shorter attention spans, which I also notice in these same people.

                                                                Total advertising denial is not only a statement; I think it’s an active measure towards a better quality of life and a focus on the things that are actually important. The moment we stop selling our attention to things we don’t need, we can start thinking about things that do matter.

                                                                1. 3

                                                                  This should really talk about the privacy side of writing this feature - it’s super hard to get right (what “right” even means depends a lot on your threat model), and if you want to build one of these, you need to know what the privacy implications are. It looks like the linked implementation returns the image url to the client - unless you go out of your way to cache it on the server, linking the image like that will leak details about the user who sees the preview.

                                                                  See https://signal.org/blog/i-link-therefore-i-am/ for some thoughtful discussion of this.

                                                                  1. 2

                                                                    Most modern web developers don’t consider these things, which is a huge issue. The notion is to “ship fast, deploy young”, and we are all paying the price.

                                                                    Granted, one little link preview won’t break the camel’s back. But given how much of this complexity is used everywhere - hundreds and hundreds of little JavaScript snippets loaded on things as simple as blogs - the disaster is almost comically insurmountable.

                                                                    1. 1

                                                                      In a web app, it could be actually harmful to trust the client to generate it for every other user (versus in a messenger setting where there’s only one “other”)

                                                                      1. 1

                                                                        Yeah, this is true - possibly the Signal blog post isn’t the best thing to link to - for a web app, I’d be more concerned about leaking information about the people who view it (and in order to not do this, you need to think about caching the image on the server, etc., which this article doesn’t go into at all). Your threat model depends on your use case, but I think it’s bad that this article doesn’t talk about privacy at all.
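
                                                                        For the web-app case, the gist of caching on the server is simply that the server fetches the preview image once and re-serves its own copy, so the linked site never sees the viewers. A rough sketch (hypothetical cache path, no validation or eviction):

                                                                            import hashlib
                                                                            import os
                                                                            import urllib.request

                                                                            CACHE_DIR = "/var/cache/link-previews"  # hypothetical location

                                                                            def cached_preview_image(image_url: str) -> str:
                                                                                """Download image_url once and return a local path the app serves to clients."""
                                                                                os.makedirs(CACHE_DIR, exist_ok=True)
                                                                                name = hashlib.sha256(image_url.encode()).hexdigest()
                                                                                path = os.path.join(CACHE_DIR, name)
                                                                                if not os.path.exists(path):
                                                                                    with urllib.request.urlopen(image_url, timeout=5) as resp, open(path, "wb") as out:
                                                                                        out.write(resp.read())
                                                                                return path  # clients get this copy, never the original URL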

                                                                      2. 0

                                                                        You can see this feature in other use-cases than private messages. Anyway, this article is about generating preview data from url, it’s not about building a private messenger.

                                                                        1. 1

                                                                          Sure - in the webapp usecase, it might be bad for the client to generate it, but it’s also bad to have the client directly request the image from the server that’s being linked to, for privacy reasons. You need to think about the threat model for the thing you’re building.

                                                                      1. 7

                                                                        Nice work (I especially liked the trick with the favicon)! One objection though, which I think is more of a remark about scale: I prefer to have a separate css-file, especially when the website has more CSS or changes often, as then the browser can leverage caching and, if the main site changes, it only has to transfer the information, and not the style.

                                                                        In particular, I’m a big fan of semantic (X)HTML, and strive to challenge myself by first building an (X)HTML structure, then applying an external CSS to it, using as few ids/classes as possible.

                                                                        For dynamic websites with interactive content I do it as follows: I first create a website that has all the functions without Javascript. Then, I write the Javascript such that it changes the DOM so the “Javascript enabled”-experience is present. This way, you can have a fully static, but also fully dynamic site.

                                                                        1. 5

                                                                          I first create a website that has all the functions without Javascript. Then, I write the Javascript such that it changes the DOM so the “Javascript enabled”-experience is present.

                                                                          “Progressive enhancement”. (Or “graceful degradation” if you look at it from the other direction.) This was the standard in the noughts, and largely seems to have been forgotten.

                                                                          1. 4

                                                                            SPAs have pretty well killed progressive enhancement, which is one of the many reasons they should be chosen carefully.

                                                                            1. 2

                                                                              How sad that it has come to this…

                                                                          1. 19

                                                                            I’m glad to hear that. I’ve been using Gentoo since 2012 and I really appreciate their dedication towards init-system-diversity.

                                                                            The Debian developers were optimistic and kind-hearted in regard to systemd, hoping that it would not expand further despite having a monopoly. The systemd developers seized more and more control, suffocating more and more aspects of the user space below it.

                                                                            Does systemd work? Yes. Does it work better than sysvinit? Definitely. But this is not the point! The point is that it should not matter which init system you run when trying to run a desktop environment or something else high in user space. Systemd has become too complicated and too much baggage. Another matter is that systemd drastically increases Red Hat’s influence on the Linux ecosystem in general. Wasn’t PulseAudio enough?

                                                                            The approach of possibly adopting elogind is a good thing. It brings many of the technological advances introduced by systemd, but keeps them manageable and well separated, as it should be.

                                                                            1. 11

                                                                              I read this the other way around, more as a move that might end token init-system diversity because it’s a large burden with few people interested in doing the work. (Which, might be argued, proves the point of those claiming systemd’s non-modular approach results in “embrace extend extinguish”.)

                                                                              1. 6

                                                                                Systemd has become too complicated and too much baggage.

                                                                                I was thinking about this recently, and was surprised to realize that there hasn’t ever been a major fork or reimplementation of systemd, despite Lennart claiming that the concept is modular…

                                                                                1. 6

                                                                                  There is: elogind, for example, is a fork of systemd-logind.

                                                                                  Or what would you consider a “fork”? After all, “systemd” is, just like “KDE” or “OpenBSD”, a project developing many different pieces of software, so it’s not like you can fork it as a whole; you can fork individual parts, like you can fork KDE’s Plasma desktop, or systemd’s login daemon, or OpenBSD’s SSH server.

                                                                                  1. 2

                                                                                    There was uselessd and a few others I can’t find now. They’re all abandoned though.

                                                                                    https://github.com/abandonware/uselessd

                                                                                1. 7

                                                                                  I think there are some factual errors here.

                                                                                  We take the equation “3 + 6 + 2 + 4” and cut it down into the smallest set of equations, which is [3 + 6, 2 + 4]. It could also be [2 + 3, 4 + 6]. The order doesn’t matter, as long as we turn this one long equation into many smaller equations.

                                                                                  This only works because addition is commutative. It wouldn’t work with, for example, matrix multiplication.

                                                                                  Why do we break it down to individual numbers at stage 1? Why don’t we just start from stage 2? Because while this list of numbers is even if the list was odd you would need to break it down to individual numbers to better handle it.

                                                                                  You could have your D&C algorithm take the last problem element and leave it alone, and then it works out anyway.

                                                                                  Recursion is when a function calls itself. It’s a hard concept to understand if you’ve never heard of it before. This page provides a good explanation.

                                                                                  Is that link supposed to be the Google search for “Recursion”? If that’s intentional, that’s a mean thing to do to your audience.

                                                                                  Also, recursion is a huge part of understanding D&C. You said earlier that “This article is designed to be read by someone with very little programming knowledge.” You can’t just gloss over the hard bits!

                                                                                  With the code from above, some important things to note. The Divide part is also the recursion part. We divide the problem up at return n * recur_factorial(n-1).

                                                                                  You aren’t dividing anything here, you’ve turned one problem into one slightly-smaller subproblem. If this is D&C, then pretty much any recurrent solution is D&C.

                                                                                  (Also, your algorithm is wrong: 0! = 1, but recur_factorial(0) doesn’t halt.)
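
                                                                                  A minimal fix, assuming the article’s Python version, is a base case that also covers 0:

                                                                                      def recur_factorial(n):
                                                                                          # the base case covers 0 and 1, so recur_factorial(0) == 1 and the recursion halts
                                                                                          if n <= 1:
                                                                                              return 1
                                                                                          return n * recur_factorial(n - 1)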

                                                                                  It knows that 13 is the smallest in the first list, and 10 is the smallest in the right list. 10 is smaller than 13, therefore we don’t need to compare 13 to 64.

                                                                                  By following this logic, we’d merge (13, 51), (10, 11) incorrectly.

                                                                                  Towers of Hanoi 🗼

                                                                                  I wasn’t sure if this counted as D&C or not, so I did a bit of googling and found Mastering Javascript Functional Programming, which wrote this as their ToH solution:

                                                                                  const hanoi = (disks, from, to, extra) => {
                                                                                      if (disks === 1) {
                                                                                          console.log(`Move disk 1 from post ${from} to post ${to}`);
                                                                                      } else {
                                                                                          hanoi(disks - 1, from, extra, to);
                                                                                          console.log(`Move disk ${disks} from post ${from} to post ${to}`);
                                                                                          hanoi(disks - 1, extra, to, from);
                                                                                      }
                                                                                  };
                                                                                  

                                                                                  By contrast, here is your solution:

                                                                                  FUNCTION MoveTower(disk, source, dest, spare):
                                                                                  IF disk == 0, THEN:
                                                                                      move disk from source to dest
                                                                                  ELSE:
                                                                                      MoveTower(disk - 1, source, spare, dest)   // Step 1
                                                                                      move disk from source to dest              // Step 2
                                                                                      MoveTower(disk - 1, spare, dest, source)   // Step 3
                                                                                  END IF
                                                                                  

                                                                                  That’s… way too similar for me to be comfortable with this. Did you get your solution from this book? If so, you really should have cited it.

                                                                                  With recursion, we know 2 things: […] 1. It always has a base case (if it doesn’t, how does the algorithm know to end?)

                                                                                  You need a base case for a recurrence relation, but not generalized recursion. Consider a function that recursively calls itself until one of its parameters is larger than 100.

                                                                                  The algorithm gets a little confusing with steps 1 and 3. They both call the same function. This is where multi-threading comes in. You can run steps 1 and 3 on different threads - at the same time.

                                                                                  You can’t. If the function returned the list of steps, then it would be okay, but here you’re mutating the state of the game, so you have a race condition.

                                                                                  We can find Fibonacci numbers in nature. The way rabbits produce is in the style of the Fibonacci numbers.

                                                                                  Rabbits don’t produce that way. They have litters.

                                                                                  Fibonacci Numbers 🐰 […] With knowledge of divide and conquer, the above code is cleaner and easier to read.

                                                                                  A beginner asked “why is the recursive version so much slower?” and your response was “On many cores, recursion is faster. Divide and conquer is designed to be used on many different cores, so it’s full effects may not be experienced on a single core :(”

                                                                                  Even with multiple cores, the sequential solution is going to be way faster. It’s O(n), your solution for Fibonacci is O(2^n). This is a pretty big thing to miss and it makes me mad that you’re teaching beginners something you don’t understand yourself.
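
                                                                                  To make the gap concrete (my own sketch, not code from the article):

                                                                                      def fib_naive(n):
                                                                                          # the "divide and conquer" version: every call spawns two more, roughly 2^n calls in total
                                                                                          if n < 2:
                                                                                              return n
                                                                                          return fib_naive(n - 1) + fib_naive(n - 2)

                                                                                      def fib_sequential(n):
                                                                                          # one pass and O(n) additions; no amount of threading rescues the version above
                                                                                          a, b = 0, 1
                                                                                          for _ in range(n):
                                                                                              a, b = b, a + b
                                                                                          return a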

                                                                                  The next step is to explore multi-threading. Choose your programming language of choice and Google, as an example, “Python multi-threading”.

                                                                                  Python has the global interpreter lock, which means that multithreaded algorithms are almost never faster than single-threaded algorithms (and often much slower).


                                                                                  Final nitpick:

                                                                                  I will explain this using 3 examples.

                                                                                  You used four.

                                                                                  1. 2

                                                                                    I completely agree with your points, just regarding the following.

                                                                                    Is that link supposed to be the Google search for “Recursion”? If that’s intentional, that’s a mean thing to do to your audience.

                                                                                    This is a Google easteregg. Look at what it suggests at the top.

                                                                                  1. 3

                                                                                    @skerritt, nice article, but there are some editing errors. The paragraph on the swiss forest appears twice in a row and 3^2 is not 6 and 3^3 is not 27.

                                                                                    1. 2

                                                                                      3^3 isn’t 27?

                                                                                      1. 2

                                                                                        My mathematician’s brain just couldn’t stand this mistake and fixed it subconsciously. :) I meant “3^3 is not 9”, thanks for noticing.

                                                                                        Now let’s hope the author fixes it in the article.

                                                                                        1. 1

                                                                                          It’s funny; you made me doubt myself for a while, because my default assumption on math is that other people are almost always right :)

                                                                                    1. 12

                                                                                      Even though it seems as though operating system package managers are almost falling out of favor in exchange for a proliferation of language-specific package managers

                                                                                      if this is the case it is a mistake and a trend we should work to reverse.

                                                                                      To help OS maintainers, keep the software as simple as possible, avoid CMake and autotools if possible, write very simple Makefiles.

                                                                                      1. 11

                                                                                        I totally agree, especially in regard to Makefiles, and am glad to see that you linked one of ours as an example. We only write Makefiles for all suckless tools and stay away from CMake, autohell or other solutions. This simplicity makes it easy to package software (see Gentoo, Alpine and NixOS as examples).

                                                                                        Granted, we keep the scope of our software small, but maybe we should generally question the tendency nowadays to build massive behemoths with tons of dependencies and complex build logic. If you think about it, the reasons why things like autohell exist are not present anymore. 10-20 years ago, the ecosystem was much more diverse, but nowadays, every time I see a configure script check whether my compiler supports trigonometric functions or something, I just shake my head. 99% of configure scripts are copy-pasted from GNU code, and they are a huge waste of time and energy. To make matters worse, these configure scripts effectively prevent me from easily compiling such software on a Raspberry Pi (or similar), as running them takes so long every time.

                                                                                        In contrast to this, a Makefile takes mere seconds, and if you run a more “nonstandard” system, you just change config.mk (example taken from farbfeld) to your needs, but this is not necessary in 99% of the cases.
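
                                                                                        For reference, such a config.mk is roughly this (a sketch in the usual suckless style, not copied verbatim from farbfeld):

                                                                                            # Customize below to fit your system

                                                                                            # paths
                                                                                            PREFIX    = /usr/local
                                                                                            MANPREFIX = $(PREFIX)/share/man

                                                                                            # flags
                                                                                            CPPFLAGS = -D_DEFAULT_SOURCE
                                                                                            CFLAGS   = -std=c99 -pedantic -Wall -Os
                                                                                            LDFLAGS  = -s

                                                                                            # compiler and linker
                                                                                            CC = cc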

                                                                                        To make it short, @xorhash: Keep it simple! :)

                                                                                        1. 6

                                                                                          To help OS maintainers, keep the software as simple as possible, avoid CMake and autotools if possible, write very simple Makefiles.

                                                                                          This is madness if you intend to support anything other than Linux.

                                                                                          1. 3

                                                                                            All suckless programs are simple enough that any experienced Unix user should be able to compile them without any Makefile at all. Most can be compiled by just cc -lX11 -lXft -o dwm *.c, or something along those lines.

                                                                                            It’s probably not a good solution for, say, Firefox or KDE, but not all software is Firefox or KDE.

                                                                                            1. 2

                                                                                              The makefile I linked is cross-platform.

                                                                                              1. 1

                                                                                                I disagree, the only platform that can’t deal with a simple makefile in my experience is Windows.

                                                                                              2. 2

                                                                                                Even though it seems as though operating system package managers are almost falling out of favor in exchange for a proliferation of language-specific package managers

                                                                                                if this is the case it is a mistake and a trend we should work to reverse.

                                                                                                Why is this a mistake?

                                                                                                1. 2

                                                                                                  Most language package managers are poorly designed, poorly implemented and modify the system in ways that are not deterministic, not reproducible and cannot be reversed. I’m thinking of pip in particular, but I believe npm suffers from similar issues as well. If we want something inbetween OS package managers and language package managers then probably something like Nix is required.

                                                                                                  1. 1

                                                                                                    I don’t think it’s a mistake. Only 1% of open source components are packaged on a distro like Debian. Best chance to have a packaged ecosystem is CPAN and that hovers between 10-15%. Distros aren’t in that business anymore. What OS package managers should do is deliver native packaged and be good stewards of the rest of the software in the system. For example, making apt “npm aware” so that apt can singly emit a package install operation even if it was triggered by npm locally.

                                                                                                  2. 1

                                                                                                    It doesn’t take much for simple Makefiles to not scale. Try adding a dependency on something as basic as iconv or curses while remaining portable. Autoconf is not as bad as all that and is common enough that any OS packager will know how to deal with it. I’m rather less fond of libtool, though.

                                                                                                    1. 1

                                                                                                  I maintain a project that has iconv as a dependency; the Makefile supports Linux and the BSDs trivially.
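
                                                                                                  The pattern is roughly this (a sketch of the approach, not the actual Makefile): keep the libraries in one overridable variable that is empty by default, and systems that ship iconv as a separate libiconv just pass -liconv:

                                                                                                      # glibc has iconv built in; elsewhere: make LDLIBS=-liconv
                                                                                                      LDLIBS =

                                                                                                      tool: tool.o
                                                                                                              $(CC) $(LDFLAGS) -o $@ tool.o $(LDLIBS)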

                                                                                                      edit: disambiguate