Threads for NinjaTrappeur

  1. 6

    But wait, isn’t there that one nonguix project that allows you to install a normal kernel and Steam?

    Yeah, but talk about that in the main #guix channel and you risk getting banned. GG. You just have to know that it exists and you can’t learn that it exists without knowing someone that tells you that it exists under the table.

    Has this actually happened? Getting banned for talking about nonguix?

    1. 9

      Not sure about getting banned, per se, but it’s explicitly discouraged. The second paragraph of nonguix’s readme:

      Please do NOT promote this repository on any official Guix communication channels, such as their mailing lists or IRC channel, even in response to support requests! This is to show respect for the Guix project’s strict policy against recommending nonfree software, and to avoid any unnecessary hostility.

      1. 12

        even in response to support requests

        Holy shit, that’s extremely disrespectful to users.

        1. 2

          I would recommend actually reading the help-guix archives to see how often support issues are created, and how many of users’ issues are ignored or told they are out of place.

        2. 12

          I admit I fucked up and misunderstood the rules. My complaint now reads:

          Yeah, but talk about that in the main #guix channel and you get told to not talk about it. You just have to know that it exists and you can’t learn that it exists without knowing someone that tells you that it exists under the table, like some kind of underground software drug dealer giving you a hit of wifi card firmware. This means that knowledge of the nonguix project (which may contain tools that make it possible to use Guix at all) is hidden from users that may need it because it allows users to install proprietary software. This limits user freedom from being able to use their computer how they want by making it a potentially untrustable underground software den instead of something that can be properly handled upstream without having to place trust in too many places.

        3. 9

          That’s made up, like most of that article; it’s full of misconceptions. I can’t tell whether or not it was written in good faith.

          But hey, outrage is good for attracting attention. Case in point: I’m commenting out of outrage.

          1. 6

            But hey, outrage is good to attract attention.

            Hehe, yeah, the FSF and SFC use outrage constantly! I get emails all the time telling me that Microsoft and Apple are teaming up to murder babies or whatever. It’s pretty much all they have left at this point, and I say this as someone who donated and generally supported their mission for many, many years (which is why I still get the emails).

            1. 3

              Hyperbole and untruths are like pissing in your shoes to stay warm; they backfire once the initial heat is gone.

          2. 6

            When I wrote that bit I made the assumption that violating the rules of the channel could get you banned. I admit that it looks wrong in hindsight, so I am pushing a commit to amend it.

            1. 2

              Not to my knowledge. No. I’ve seen it tut-tutted but I’ve yet to see someone get banned.

              1. 1

                That’s 100% messed up if true.

              1. 4

                I get a 403 Forbidden.

                1. 7

                  Me too now. As they themselves say:

                  As a Professional DevOps Engineer I am naturally incredibly haphazard about how I run my personal projects.


                  1. 6

                    Sorry, I think my Private Cloud has a bad power supply which is having a knock-on effect of upsetting the NFS mounts on the webserver. I’m acquiring a replacement right now, and in the meantime I am going to Infrastructure-as-Code it by adding a cronjob that fixes the NFS mount.

                    1. 1

                      Me too.

                      You can use

                    1. 18

                      I’ve been reading the Gemini specification, as well as this post, and my conclusion is that it’s just a worse version of HTTP 1.1 and a worse version of Markdown.

                      1. 6

                        worse version of HTTP 1.1

                        Strong disagree. TLS + HTTP 1.1 requires performing an upgrade dance involving quite a few extra steps. The specification is also pretty thin regarding SNI management. Properly implementing that RFC is pretty demanding, and there are a lot of blind spots left to the implementer’s better judgement.

                        In comparison, the Gemini TLS connection establishing flow is more direct and simpler to implement.

                        TLS aside, the fact that you can say

                        I’ve been reading the Gemini specification

                        sounds like a clear win to me. The baseline HTTP 1.1 RFC is already massive, let alone all the extensions it requires to work in a modern environment.
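                        For a sense of how small that surface is, here’s a sketch of parsing a Gemini response header, going by the spec’s “two-digit status, space, meta, CRLF” shape (the function name is my own):

```python
def parse_gemini_header(line: bytes):
    """Parse a Gemini response header: a two-digit status, a space,
    a meta string, terminated by CRLF (e.g. status 20 = success)."""
    if not line.endswith(b"\r\n"):
        raise ValueError("header must end with CRLF")
    status, _, meta = line[:-2].decode("utf-8").partition(" ")
    if len(status) != 2 or not status.isdigit():
        raise ValueError("status must be two digits")
    return int(status), meta

print(parse_gemini_header(b"20 text/gemini\r\n"))  # -> (20, 'text/gemini')
```

                        That’s essentially the whole response framing; the HTTP 1.1 equivalent involves status lines, header folding, and chunked transfer encoding before you even reach the body.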

                        1. 7

                          I agree that the specification is simple to read, but the specification itself is too limited, and I don’t find it suitable for the real world.

                          For example, I prefer HTTP’s optional end-to-end encryption: when working with internal routers within an infrastructure, dealing with certificates is a PITA and a completely unnecessary bump in complexity and performance overhead inside an already-secured network.

                          I also disagree that “extensibility is generally a bad idea”, as the article says. Extensibility can work if you do it properly, like anything else in software engineering.

                          EDIT: Or the requirement of closing the connection and re-opening it for every request, and all the handshakes that implies.

                          For clarity about what I think could be an actual improvement: I would prefer an alternative evolution of HTTP 1.0, with proper readable specs, test suites, clearer HTTPS upgrade paths, etc., instead of an evolution of Gopher.

                          1. 4

                            TLS + HTTP just requires connecting to port 443 with TLS. I’ve worked on lots of things using HTTP for >20 years and I don’t think I’ve ever seen the upgrade protocol used in real life. Is it commonly used by corporate proxies or something like that?

                          2. 6

                            When I looked at it (months ago), I got the same impression. I find this article irresponsible, as Gemini does not merit the support.

                            Gemini’s intentions are good. The specification isn’t. For instance, not knowing the size of an object before receiving it makes it a non-starter for many of its intended purposes.

                            This is an idea that should be developed properly and openly, allowing for input from several technically capable individuals. Not one person’s pet project.

                            I’ll stick to Gopher until we can actually do this right. Gemini doesn’t have the technical merit to be seen as a possible replacement to Gopher.

                            1. 3

                              It does accept input from individuals. I was able to convince the author to expand the number of status codes, to use client certificates (instead of username/password crap) and to use the full URL as a request.

                            2. 4

                              I prefer to think of Gemini as a better version of gopher with a more sane index page format.

                            1. 4

                              Interesting, I’ve been porting NixOS to the NanoPi M4 v2; this looks quite a bit less painful in some ways. Added to my “try this out some day” list of stuff to look at.

                              1. 7

                                It is!

                                Conceptually speaking, Guix got a lot of things right. Then again, Nix precedes Guix; the opposite would have been concerning.

                                The documentation is amazing, and they are very careful about tooling vendor lock-in. Their clean API around the toolchains and the abstractions around derivations are the selling point to me. The language is also a standard one, and it comes batteries-included tooling-wise.

                                There’s a catch, however: the Guix package set is much smaller, you won’t have all the nice language-specific build systems you have with Nix, and overall you’re likely to miss some stuff packaging-wise. Also: no systemd (I guess that might be a selling point for some people, though).

                                1. 1

                                  Yep yep, this just intrigued me, as I’ve gotten a bit deep in the guts of how NixOS sd images are built. It’s honestly not too big of a deal; the overall function that does this stuff is probably just in need of a bit of a refactor. It honestly strikes me as: this was evolved, not planned. Which is fine, just not as polished as it could be.

                                  The Scheme bit made a lot more sense to me off the bat, versus having to do a fair amount of digging to figure out how I can adjust the partition sizes and make sure that my custom u-boot and its SPL files etc. are getting put in the “blessed” right spot for this board (still not sure I am doing it right tbh, as it’s not booting).

                                  And the systemd bit is water under the bridge to me; that ship has sailed. I’ve had to port/add custom derivations to nixpkgs a lot, so I’m not too averse to that if needed.

                                  My real reason for all this is that I’m building a little k8s cluster out of ARM boards for shits, so nixops is my ultimate goal here.

                              1. 2

                                Nice post! Actually nice blog altogether, I started to binge read it tonight!

                                I couldn’t help but to notice something a tiny bit ironic though:

                                ~ » nslookup -query=A                                                                                                                 
                                Non-authoritative answer:
                                ~ » nslookup -query=AAAA                                                                                                              
                                Non-authoritative answer:
                                *** Can't find No answer
                                1. 5

                                  After using it for a while I started to find the Nix expression language to be one of the best-designed syntaxes ever. It doesn’t have separators for records or lists, so it’s friendly to diff. The multiline strings are perfect. Writing nested records with “a.b.c” keys is super convenient. The lambda syntax is as simple as possible. Etc etc.

                                  1. 9

                                    It doesn’t have separators for records or lists so it’s friendly to diff.

                                    Records are separated with the ; symbol.

                                    As for lists, I beg to differ. List items are separated with whitespace, which is unfortunate since whitespace is also used to represent function application. It means you have to be careful to wrap your function applications in parentheses every time you perform one inside a list.

                                    That’s an easy trap to fall into, especially in multi-line statements. Add the lazy nature of the language on top of that, and you can end up with stack traces that are pretty difficult to decipher, especially if you end up doing this in a NixOS machine description :/.

                                    I see a lot of newcomers falling into this trap on IRC.


                                    let pkgs = [
                                      import ./local-pkgs.nix
                                    ]; in ...

                                    Instead of

                                    let pkgs = [
                                      (import ./local-pkgs.nix)
                                    ]; in ...
                                    1. 2

                                      Sorry, I meant separators as in comma-like separators, where the last item doesn’t end with the separator.

                                      The issue you mentioned is real, yeah. I still love the syntax.

                                  1. 14

                                    While it might be true that 1500 bytes is now the de facto MTU standard on the Internet (minus whatever overhead you throw at it), everything’s not lost. The problem is not that we don’t have the link-layer capabilities to offer larger MTUs; the problem is that the transport protocol has to be AWARE of it. One mechanism for finding out what size MTU is supported by a path over the Internet is an algorithm called DPLPMTUD. It is currently being standardized by the IETF and is more or less complete. There are even plans for QUIC to implement this algorithm, so if we end up with a transport that is widely deployed and also supports detection of MTUs > 1500, we might actually have a chance to change the link-layer defaults.

                                    Fun fact: all of the 4G networking gear actually supports jumbo frames; most of the providers just haven’t enabled the support for it since they are not aware of the issue.
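                                    The core of DPLPMTUD is just probing from the packetization layer and keeping the largest probe that gets acknowledged. A toy sketch against a simulated path (the probe callback and search bounds here are made up for illustration):

```python
def probe_pmtu(send_probe, lo=1200, hi=9000):
    """Binary-search the largest packet size a path delivers.
    send_probe(size) stands in for sending a padded probe at the
    packetization layer and seeing whether it is acknowledged."""
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if send_probe(mid):
            lo = mid       # probe delivered: PMTU is at least mid
        else:
            hi = mid - 1   # probe lost: PMTU is smaller than mid
    return lo

# Simulated path with a 4470-byte MTU:
print(probe_pmtu(lambda size: size <= 4470))  # -> 4470
```

                                    A real implementation probes more conservatively and re-probes over time, since the path (and therefore its MTU) can change, but the search itself is this simple.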

                                    1. 6

                                      Wow, it might even work.

                                      I can hardly believe it… but if we were able to send jumbo frames and most users’ browsers supported receiving them, it might get deployed by ISPs as they look for benchmark karma. Amazing. I thought 1500 was as invariant as π.

                                      1. 5

                                        I was maintainer for an AS at a previous job and set up a few BGP peers with jumbo frames (4470). I would have made this available on the customer links as well, except none of them would have been able to receive the frames. They were all configured for 1500, as is the default in any OS then or today. Many of their NICs couldn’t handle 4470 either, though I suppose that has improved now.

                                        Even if a customer had configured their NIC to handle jumbo frames, they would have had problems with the other equipment on their local network. How do you change the MTU of your smartphone, your media box or your printer? If you set the MTU on your Ethernet interface to 4470 then your network stack is going to think it can send such large frames to any node on the same link. Path MTU discovery doesn’t fix this because there is no router in between that can send ICMP packets back to you, only L2 switches.

                                        It is easy to test. Try to ping your gateway with ping -s 4000 (substituting your gateway’s address). Then change your MTU with something like ip link set eth0 mtu 4470 and see if you can still ping your gateway. Remember to run ip link set eth0 mtu 1500 afterwards (or reboot).

                                        I don’t think that DPLPMTUD will fix this situation and let everyone have jumbo frames. Reading the following paragraph as a former network administrator, they are basically saying that jumbo frames would break my network in subtle and hard-to-diagnose ways:

                                           A PL that does not acknowledge data reception (e.g., UDP and UDP-
                                           Lite) is unable itself to detect when the packets that it sends are
                                           discarded because their size is greater than the actual PMTU.  These
                                           PLs need to rely on an application protocol to detect this loss.

                                        So you’re going to have people complain that their browser is working, but nothing else. I wouldn’t enable jumbo frames if DPLPMTUD was everything that was promised as a fix. That said, it looks like DPLPMTUD will be good for the Internet as a whole, but it does not really help the argument for jumbo frames.

                                        And I don’t know if it has changed recently, but the main argument for jumbo frames at the time was actually that they would lead to fewer interrupts per second. There is some overhead per processed packet, but this has mostly been fixed in hardware now. The big routers use custom hardware that handles routing at wire speed and even consumer network cards have UDP and TCP segmentation offloading, and the drivers are not limited to one packet per interrupt. So it’s not that much of a problem anymore.

                                        Would have been cool though and I really wanted to use it, just like I wanted to get us on the Mbone. But at least we got IPv6. Sorta. :)

                                        1. 3

                                          If your system is set up with an mtu of 1500, then you’re already going to have to perform link mtu discovery to talk with anyone using PPPoE. Like, for example, my home DSL service.

                                          Back when I tried running an email server on there, I actually did run into trouble with this, because some bank’s firewall blocked ICMP packets, so… I thought you’d like to know, neither of us used “jumbo” datagrams, but we still had MTU trouble, because their mail server tried to send 1500 octet packets and couldn’t detect that the DSL link couldn’t carry them. The connection timed out every time.

                                          If your application can’t track a window of viable datagram sizes, then your application is simply wrong.

                                          1. 2

                                            If your system is set up with an mtu of 1500, then you’re already going to have to perform link mtu discovery to talk with anyone using PPPoE. Like, for example, my home DSL service.

                                            It’s even worse: in the current situation[1], your system’s MTU won’t matter at all. Most of the network operators are straight-up MSS-clamping your TCP packets downstream, effectively overriding your system’s MTU.

                                            I’m very excited by this draft! Not only will it fix the UDP situation we currently have, it will also make tunneling connections much easier. That said, it also means that if we want to benefit from it, network administrators will need to quit MSS-clamping. I suspect this will take quite some time :(

                                            [1] PMTUD won’t work in many cases. Currently, you need ICMP to perform PMTU discovery, which is sadly filtered out by some poorly-configured endpoints. Try to ping for instance ;)
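                                            For reference, a sketch of what the clamping amounts to, assuming IPv4 and TCP headers without options, i.e. 40 bytes of overhead (the function is illustrative, not any particular router’s implementation):

```python
def clamp_mss(advertised_mss, link_mtu, header_overhead=40):
    """MSS clamping as a middlebox does it: rewrite the MSS option
    in transiting SYN packets so TCP payloads fit the link MTU."""
    return min(advertised_mss, link_mtu - header_overhead)

# A standard 1460-byte MSS crossing a 1492-byte PPPoE link:
print(clamp_mss(1460, 1492))  # -> 1452
```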

                                            1. 2

                                              If your system is set up with an mtu of 1500, then you’re already going to have to perform link mtu discovery to talk with anyone using PPPoE. Like, for example, my home DSL service.

                                              Very true, one can’t assume an MTU of 1500 on the Internet. I disagree that it’s on the application to handle it:

                                              If your application can’t track a window of viable datagram sizes, then your application is simply wrong.

                                              The network stack is responsible for PMTUD, not the application. One can’t expect every application to track the datagram size on a TCP connection. Applications that use BSD sockets simply don’t do that: they send() and recv() and let the network stack figure out the segment sizes. There’s nothing wrong with that. For UDP the situation is a little different, but IP can actually fragment large UDP datagrams, and PMTUD works there too (unless, again, broken by bad configurations, hence DPLPMTUD).

                                              1. 3

                                                I disagree that it’s on the application to handle it

                                                Sure, fine. It’s the transport layer’s job to handle it. Just as long as it’s detected at the endpoints.

                                                For UDP the situation is a little different, but IP can actually fragment large UDP datagrams and PMTUD works there too

                                                It doesn’t seem like anyone likes IP fragmentation.

                                                • If you’re doing a teleconferencing app, or something similarly latency-sensitive, then you cannot afford the overhead of reconstructing fragmented packets; your whole purpose in using UDP was to avoid overhead.

                                                • If you’re building your own reliable transport layer, like uTP or QUIC, then you already have a sliding size window facility; IP fragmentation is just a redundant mechanism that adds overhead.

                                                • Even DNS, which seems like it ought to be a perfect use case for packet fragmentation, doesn’t seem to work well with it in practice, and it’s being phased out in favour of just running over TCP whenever the payload is too big. Something about it acting as a DDoS amplification mechanism, and being super-unreliable on top of that.

                                                If you’re using TCP, or any of its clones, of course this ought to be handled by the underlying stack. They promised reliable delivery with some overhead, they should deliver on it. I kind of assumed that the “subtle breakage” that @weinholt was talking about was specifically for applications that used raw packets (like the given example of ping).

                                                1. 1

                                                  You list good reasons to avoid IP fragmentation with UDP, and in practice people don’t use or advocate IP fragmentation for UDP. Broken PMTUD affects everyone… ever had an SSH session that works fine until you try to list a large directory? Chances are the packets were small enough to fit in the MTU until you listed that directory. As breakages go, that one’s not too hard to figure out.

                                                  The nice thing about the suggested MTU discovery method is that it will not rely on other types of packets than those already used by the application, so it should be immune to the kind of operator who filters everything he does not understand. But it does mean some applications will need to help the network layer prevent breakage, so IMHO it doesn’t make jumbo frames more likely to become a thing. It’s also a band-aid on an otherwise broken configuration, so I think we’ll see more broken configurations in the future, with fewer arguments to use on the operators, who can now point to how everything is “working”.

                                        1. 6

                                          According to zmap, it takes 45 minutes to scan all of IPv4 on a gigabit connection. That could be a slow but interesting way to reliably bootstrap the network in case of an attack.
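                                          The 45-minute figure roughly checks out as back-of-envelope arithmetic, assuming minimum-size 64-byte SYN frames plus the 8-byte preamble and 12-byte inter-frame gap each one costs on the wire:

```python
# On-wire cost of one minimum-size Ethernet frame, in bits:
bits_per_probe = (64 + 8 + 12) * 8             # 672 bits
probes_per_second = 1_000_000_000 / bits_per_probe
seconds = 2**32 / probes_per_second
print(round(seconds / 60))                     # -> 48 (minutes)
```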

                                          1. 1

                                            I like the idea.

                                             The 45-minute scan advertised on the homepage is probably the result of a TCP SYN scan, though. You’ll probably need to add an application-layer scanner on top of that (zgrab?). Not sure how this will affect the overall latency of the scan :/

                                          1. 1

                                             There’s also a detailed writeup about the business card design on the very same blog.

                                            I’d be interested to understand the hardware design of the board. The post sadly doesn’t cover that part :(

                                            1. 31

                                              Nice ad. :|

                                                1. 3

                                                  Also, at the moment, according to the pricing page, payment is optional.

                                                2. 16

                                                  It’s advertising an open source project, Source Hut, but also Janet, Zig, Nim, Samurai, Sway and other open source projects I like. Projects that get very little payment or gratitude for the work they do.

                                                  Yes, it is a service too, and a useful one at that. They support BSD well, unlike other companies; how else are they supposed to let people know this fact? Should they be paying largely unethical companies like Google for ad space? Or should they just be more subversive so people don’t complain?

                                                  Let me put it this way: if every open source project was also a business, should we hate on every single one for advertising? The post didn’t game the upvotes to get on the front page; people upvoted it by themselves.

                                                  I suppose there could be a “sponsored” tag so people can ignore them. I’m not suggesting allowing lower quality from sponsored content either; probably the inverse.

                                                  1. 21

                                                    The issue is that I see a Sourcehut “ad” every few days: “Sourcehut supports OpenBSD”, “Sourcehut supports migrations from Bitbucket”, “Sourcehut supports ASCII”. Yeah … we got it … A lot of these posts don’t have a lot of meat to them and at this point, it’s just getting spammy.

                                                    1. 16

                                                      Yeah … we got it … A lot of these posts don’t have a lot of meat to them and at this point, it’s just getting spammy.

                                                      They don’t always have a lot of “meat,” but posts about SourceHut represent a capitalist ideology I can actually get behind: a single proprietor, working their ass off to try to change the software world, which has gotten extremely out of hand with regards to complexity and the marketing of products that fix complex systems we don’t need, at all, to begin with.

                                                      What’s the difference between a SourceHut post and an ad post that complains that, as an open source author, I am not compensated fairly? Hint: one should be inspiration, for it shows the other is actually possible.

                                                      1. 0

                                                        SourceHut represent a capitalist ideology

                                                        Payment for the service is optional, so no, it doesn’t. All the things that make Sourcehut great, in my opinion, are the ways in which it denies capitalist ideology: open source software, optional payments, etc.

                                                        1. 3

                                                          optional payments

                                                          It’s optional right now, while in alpha. It doesn’t seem the plan is for that to last forever. Also, if it wasn’t clear, I’m extremely in favor of this model of charging people for a service, but releasing your software under a permissive license.

                                                      2. 10

                                                        Just let me offer another data point here. It was thanks to the “migration from Bitbucket” post that I found out Sourcehut had a nifty script to help migrations from Bitbucket, and that saved hours of work as I migrated 20+ repos effortlessly. This current post made me realize that maybe I should be paying more attention to their CI system, as it looks much simpler than others I’ve used. So, in the end, I’m appreciating these blog posts a lot. Yes, they are related to a commercial venture, but so what? You can self-host it if you’re not into SaaS outside your control. If we set a hard line like this, then it becomes impossible to post about any commercial project at all. It is already hard to monetize FOSS projects to make them sustainable; now imagine if they were not even allowed blog posts…

                                                        1. 4

                                                          Same here. This string of posts made me aware of sourcehut and when I had to migrate from bitbucket, I then gave them a hard eval. I like their human, non-shitty business model of “I give them money and they give me services”, and that their products are professionally executed and no-frills.

                                                          I don’t know how to reconcile it. These articles were very useful to me when most product ads weren’t, and I’d be disappointed if this site became a product-advert platform. I think people are right to flag it as almost-an-ad, but in this one vendor’s case I’m glad I saw them and am now a happy sourcehut customer.

                                                        2. 2

                                                          every few days

                                                          A lot of these posts don’t have a lot of meat to them and at this point, it’s just getting spammy.

                                                          That is fair I guess. I’ll have to check the guidelines on things like that.

                                                        3. 6

                                                          if every open source project was also a business, should we hate on every single one for advertising?

                                                          Yes. I flag those too. Advertising is a mind killer.

                                                          1. 6

                                                            But there is no other way to get large numbers of people to know about something; following your advice would be suicide.

                                                            I also hate advertising, I just don’t see a way around it. I won’t argue further against banishing advertising from here, at least.

                                                            1. 7

                                                              But there is no other way to get large numbers of people to know about something, following your advice would be suicide.

                                                              All these conversations are framed as all-or-nothing: either we allow politics/marketing/etc. on Lobsters, or it never happens anywhere, with massive damage to individuals and society. Realistically, this is a small site with few monetary opportunities for a SaaS charging as little as he does. If the goal is spreading the word, it’s best done on sites and platforms with large numbers of potential users and (especially) paying customers. Each act of spreading the word should maximize the number of people reached, for both societal impact and profit for sustainability.

                                                              Multiple rounds on Lobsters mean that, aside from the first announcement with much fanfare, the author sacrificed each time an opportunity to reach new, larger audiences in order to show the same message again to the same small crowd. Repeating it here is the opposite of spreading the word, especially since most here who like Sourcehut are probably already following it. Maybe even buying it. He’s preaching to the choir here more than most places.

                                                              Mind-killer or not, anyone talking about large-scale adoption of software, ideology, etc. should be using proven tactics in the kinds of places that get those results. That’s what you were talking about, though. I figured he was just trying to show the latest BSD-related progress on one of his favorite tech forums: more noise than signal, simply because he was sharing excitement more than doing technical posts or focused marketing.

                                                            2. 5

                                                              Every blog post is an ad for something. It may not be a product, directly, but it’s advertising an idea, the person or persons who thought of it, the author’s writing (which, btw, can be a product), etc.

                                                              If you want to sincerely flag advertising, you might as well get offline—it’s pervasive.

                                                              1. 3

                                                                It may not be a product, directly, but it’s advertising an idea

                                                                Not a native English speaker here. I may be wrong, but after looking at the dictionary definition



                                                                A paid notice that tells people about a product or service.

                                                                it seems that an advertisement has a precise definition: an ad is directly related to a paid product, not an idea.

                                                                1. 1

                                                                  it seems that an advertisement has a precise definition: an ad is directly related to a paid product, not an idea.

                                                                  This is a fairly pedantic interpretation. A person promotes an idea to sell something, even if it’s just themselves. That “sale” might only come later in the form of a job offer, or support through Patreon, etc. But to say that you can’t advertise an idea is wrong. The cigarette industry’s ad campaigns have always been about selling an image, an idea that if you smoke you become part of something bigger. Oh, and btw, you’ll probably remember the brand name, and buy that kind instead of something else.

                                                                  iPods were sold on the very basis of white headphones, TO THE POINT, that people without iPods started wearing white headphones to be part of the “club.” Advertisements sell you the idea of a better life, and hopefully you’ll buy my product to get it.

                                                          2. 20

                                                            You’re right, and how virtuous Sourcehut may or may not be doesn’t change that. The line between ad and article is a spectrum, but this seems to be pretty well into the ad side of things. I apologise, I’ll be more discerning in the future.

                                                            1. 4

                                                              If you crack some other good places to get the word out, I’d be interested in hearing. My online circle is pretty small ( and HN), but I’m working on something I want to ‘advertise’ the hell out of quite soon…

                                                              1. 5

                                                                I’ve been trying to engage more with Reddit for this reason. I don’t really like it as a platform or see it as doing a social good, but there are users there and I’d like to be there to answer their questions. I was going to make a Twitter account, too, but they wanted my phone number and a pic of my ID and a blood sample to verify my account so I abandoned that. Finding good ways to ethically grow Sourcehut’s audience is not an entirely solved problem.

                                                                1. 2

                                                                  The reason Twitter – and many platforms – asks for phone numbers is because spam and trolls are a persistent problem. Ban one neo-Nazi troll tweeting obscenities at some black actor for DesTROyinG WhITe SocIEtY and they’ll create a new account faster than you can say “fuck off Nazi”.

                                                                  Reddit is often toxic as hell by the way, so good luck with that.

                                                                  1. 1

                                                                    Huh… I have a Twitter account and all I needed for it was an email. Maybe things have changed.

                                                                    1. 1

                                                                      Nowadays they let you in with just an email, but after some time “block” your account and only unblock it after you give your phone number.

                                                                2. 3

                                                                  While I also see it as an ad, as a Sourcehut user I’m interested in what is being announced. But it seems you don’t have an RSS/Atom feed for the official blog… Or is there a mailing list I missed?

                                                                  1. 2


                                                                    I’ve been meaning to make this more visible… hold please. Done.

                                                                3. 3

                                                                  Somewhat amusing that this post, about an interesting fully-FOSS service, is marked -29 spam, whereas an actual advertisement about Huawei making MacBook clones that run Linux has only -3 spam (one of which is mine).

                                                                  1. 3

                                                                    Said FOSS service has been on the Lobsters front page multiple times recently. I suspect the reaction is: “We get it, Sourcehut exists and SirCmpwn is apparently desperate to attract a paying customer base, but a clickbaity title for a blogspam ad on the usual suspect’s software is probably crossing the line.”

                                                                1. 3

                                                                  This is already happening, with one Big Mailer Corp specifically requiring mails to be sent from another Big Mailer Corp to hit the inbox, or requiring that senders be added to the contacts for others. Any other sender will hit the spambox unconditionally for a while before being eventually upgraded to the inbox.

                                                                  Does anybody know which bigcorp player he’s talking about?

                                                                  1. 14

                                                                    My mailserver, for many months, could not send mails to outlook addresses. The outlook server replied “OK” but the mail was transparently discarded. Not inbox, not spam, not trash, nothing. As if the mail had never been sent.

                                                                    I believe nowadays outlook “only” sends my mails to spam.

                                                                    1. 9

                                                                      I have had the same experience. With Gmail it was even more difficult to evade their hyper-aggressive spam filters.

                                                                      I can’t call any of this “easy”, and I had to struggle and learn a lot of new concepts (like DKIM, which is a pain to set up). It’s also very tricky to verify: if it fails, it can fail silently; your mail is just dropped or goes to spam. I had that happen when my DNSSEC signatures weren’t renewed, for example, and also when I had made a small mistake that made my DKIM invalid or unused (I don’t remember which).

                                                                      You need to be an expert at mail before this stuff is “easy”. When you get redirected to the spam folder, those hosts don’t give any information about why it happened, so you’re left guessing. Also, you sometimes don’t even know unless you’re in contact with the recipient in some way other than e-mail (and sometimes people don’t bother to notify you that the mail got flagged as spam). There are tools out there that can help verify your technical setup: rDNS, SPF, DKIM, etc. But it’s still annoying as fuck to get it set up. Once you’ve done the work, it basically runs itself though.
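                                                                      To make the verification step a bit more concrete: many misconfigurations show up as malformed TXT records. Here is a minimal Python sketch (an illustration only, not a real validator; the record strings are hypothetical examples) that checks the basic shape of SPF and DMARC records:

```python
# Rough shape checks for SPF and DMARC TXT records.
# This is only a sanity check: real deliverability debugging still
# requires querying DNS (the TXT records of the domain and of
# _dmarc.<domain>) and reading the receiving host's postmaster docs.

def looks_like_spf(record: str) -> bool:
    """True if the record starts with 'v=spf1' and ends with an 'all' mechanism."""
    parts = record.split()
    return bool(parts) and parts[0] == "v=spf1" and parts[-1].lstrip("+-~?") == "all"

def looks_like_dmarc(record: str) -> bool:
    """True if the record starts with 'v=DMARC1' and declares a policy tag 'p='."""
    tags = [t.strip() for t in record.split(";") if t.strip()]
    return bool(tags) and tags[0] == "v=DMARC1" and any(t.startswith("p=") for t in tags)

print(looks_like_spf("v=spf1 mx a -all"))         # True
print(looks_like_dmarc("v=DMARC1; p=quarantine"))  # True
print(looks_like_spf("v=spf1 mx a"))              # False: no terminal 'all' mechanism
```

                                                                      The real-world equivalent is running `dig TXT` on the domain and on its `_dmarc` subdomain, then checking the output against the SPF (RFC 7208) and DMARC (RFC 7489) record syntax.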

                                                                      So while I appreciate the article’s attempt to get more people to try hosting their own mail, I would say it’s quite one-sided and assumes a whole lot of technical sysadmin competency that the author has probably simply become blind to himself.

                                                                      1. 1

                                                                        I had a similar problem and my solution was to route all mail to them via a separate, dedicated IP which didn’t suffer the same problem. A solution possible thanks to the flexibility of Exim. As much as these simpler MTAs seem attractive I wonder how they would cope with such scenarios.

                                                                      2. 4

                                                                        I had this problem sending from my own mail server to Gmail addresses. After a couple of months I just gave up on my own mail server and went to

                                                                        1. 8

                                                                          They could have responsibly disclosed instead of being an asshat, stealing information and posting a ton of github issues from a fresh account.

                                                                          1. 3

                                                                            stealing… information?

                                                                            1. 2

                                                                              I’m supportive of, and we participate in, the responsible disclosure process for Xen, even those times we don’t make the cut for pre-disclosure. I’m sad someone would go to the effort they have here in a criminal manner, when there is more [market] demand for the skillset on display than I have ever seen before.

                                                                            2. 7

                                                                              Why the hell did GitHub allow people to remove issues? This is annoying.

                                                                              1. 4

                                                                                It appears the issues were removed by GitHub when a third party reported the user that posted the issues.

                                                                                1. 2

                                                                                  Unfortunate that GitHub was powerless to prevent nuking their account after being reported.

                                                                              2. 4

                                                                                I was telling a coworker about this and similar writeups, and it turns out he wasn’t aware of the Hacking Team writeup from 2016. It’s detailed and very interesting. I would advise anyone to read it: .

                                                                                1. 1

                                                                                  A 0day in an embedded device seemed like the easiest option, and after two weeks of work reverse engineering, I got a remote root exploit.

                                                                                  Thanks a lot, the whole walkthrough is quite amazing and insightful, with a wide variety of tools used.

                                                                                2. 3

                                                                                  Did you get a copy of them? They’re deleted now :(

                                                                                  1. 10

                                                                                    They’ve been reposted here: (and this site has been archived here)

                                                                                    1. 2


                                                                                    2. 1

                                                                                      I think the Web Archive has some of them. Maybe not every comment.

                                                                                    3. 1

                                                                                      Concerning #358, what is “Flywheel” in this context?

                                                                                      Side-note: I hate locked threads on free software projects.

                                                                                      Update: I think it’s a hostname of one of their machines?

                                                                                      1. 1

                                                                                        Seems like it’s the hostname of their jenkins build slave

                                                                                        1. 2

                                                                                          yup, it was the hostname of the jenkins build slave.

                                                                                        2. 1
                                                                                      1. 2

                                                                                        I guess I should rather have linked this page; it’s a bit more descriptive:

                                                                                        1. 4

                                                                                          Neither link was very clear to me without close reading and hard thought. I think you could make both pages clearer by displaying some example execline scripts. For example, show a script that cds into a directory and then uses forx or forstdin to loop through all the ‘.wav’ files in that directory and convert them to MP3 with ffmpeg. I have written a similar script before for my preferred shell (Fish), so the differences would be instructive. An example would also make it easier to visualize execline‘s compilation process.

                                                                                          1. 3
                                                                                            #!/usr/bin/env execlineb
                                                                                            # Some commands look like their shell equivalent, but "cd" is its own binary and
                                                                                            # not a built-in. Note that the whole script could have been written in a single
                                                                                            # line without any ';'.
                                                                                            cd directory
                                                                                            # '*' has no special meaning in execline. The elglob program shipped with
                                                                                            # execline provides file name globbing. It immediately substitutes the pattern.
                                                                                            elglob g *.wav
                                                                                            # "$g" is now expanded to a list of file names. "forx" loops over the list
                                                                                            # filling the "x" environment variable successively with each entry.
                                                                                            forx x { $g }
                                                                                            # The words starting with '$' are not automatically expanded into the content of
                                                                                            # the matching environment variable. This is the job of importas, which has
                                                                                            # some of the ${special:?features} of ${shell:-expansion}
                                                                                            importas wav x
                                                                                            # "backtick" has the role of x=$(sub shell expansion)
                                                                                            backtick x {
                                                                                                    # heredoc replaces the bash-specific "sed 's///' <<<string" or the
                                                                                                    # POSIX sh's "echo string | sed 's///'"
                                                                                                    heredoc 0 $wav sed "s/\.wav$/.opus/"
                                                                                            }
                                                                                            importas opus x
                                                                                            # Note that there is no problem with spaces in the file names: they are
                                                                                            # not split automatically (this requires a flag of importas, where you can also
                                                                                            # specify the IFS).
                                                                                            ffmpeg -i $wav $opus
                                                                                        1. 6

                                                                                          Here’s my 2019 take.

                                                                                          Two big changes since last year:

                                                                                          1. I bought a nice chair, a second-hand Herman Miller Aeron. Best 250€ I ever invested in my setup. The benefits are radical: my back pain after long sessions completely disappeared.
                                                                                          2. A Kinesis Advantage 2 keyboard. I have been bad-mouthing fancy keyboards for the last few years, but after hearing some crazy wrist pain stories from my friends/co-workers, I decided to bite the bullet and take a pro-active approach to this whole mess. The thumb clusters and the key wells make it really comfortable. However, it’s all made of plastic; it’s clearly not worth 375€, it’s damn overpriced. But hey, they are the only ones selling this kind of keyboard, so I guess that’s kind of expected.

                                                                                          Regarding the desk, I still use my DIY hand-crafted wooden desk. It’s aging pretty well. I also still use my M-Audio 2x2 sound card together with a shotgun mic for sound/video calls. I store my music on my server and mount the music repository using FUSE and sshfs on my machines. A Raspberry Pi 3 is connected to my audio setup and streams the music from this very same server using MPD.

                                                                                          I have an Arduino Nano + some sensors + some custom scripts to display the temperature, humidity and CO2 concentration on i3bar.
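                                                                                          For illustration, such a setup usually boils down to parsing a line the Arduino prints over serial and emitting a block in the i3bar JSON protocol. This is a hedged sketch: the `T=… H=… CO2=…` line format and the block name are assumptions for the example, not the actual protocol used here:

```python
import json

def parse_sensor_line(line: str) -> dict:
    """Parse a serial line like 'T=21.5 H=40 CO2=612' into a dict of floats.
    The key=value format is a guess at what a simple Arduino sketch might
    print; adjust to whatever the firmware actually emits."""
    fields = {}
    for token in line.split():
        key, _, value = token.partition("=")
        fields[key] = float(value)
    return fields

def i3bar_block(fields: dict) -> str:
    """Render the readings as one block of the i3bar JSON protocol."""
    text = "{T:.1f}°C {H:.0f}% {CO2:.0f}ppm".format(**fields)
    return json.dumps({"full_text": text, "name": "air"})

print(i3bar_block(parse_sensor_line("T=21.5 H=40 CO2=612")))
```

                                                                                          A small wrapper reading the serial device line by line and printing such blocks is enough for i3bar (or an i3blocks slot) to pick them up.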

                                                                                          Which leads us to software. At this point, I’m pretty much all in on NixOS. I try to set up everything declaratively. I merged all my various dotfiles/custom ~/.local/bin scripts into my NixOS configuration. Everything is in one repo, and the same configuration tree is shared across my machines.

                                                                                          Other than that, I still use the classic i3 + neovim + ghcid + firefox combination.

                                                                                          [edit]: I totally forgot to talk about my AMAZING green slide whistle. Great for venting during annoying bug-fixing sessions and for creating a bit of comic relief during long video meetings. My neighbors hate it.

                                                                                          1. 3

                                                                                            What CO2 sensor do you use?

                                                                                            1. 2

                                                                                              A Chinese module based on an MG811.

                                                                                            2. 3

                                                                                              Shout out to the MX518, I still use mine from over a decade ago

                                                                                              1. 2

                                                                                                Aeron is super worth it, even at full price. I have one that is (I think) 19 years old now. Had to replace a wheel one time, that’s it.

                                                                                                1. 2

                                                                                                  “However, it’s all made of plastic, it clearly doesn’t worth 375€, it’s damn overpriced. But hey, they are the only one selling this kind of keyboard”

                                                                                                    Business opportunity is what I’m seeing in this.

                                                                                                  1. 3

                                                                                                    There seem to be quite a lot of custom keyboards brewing recently, esp. with the proliferation of 3D printers. As to ones that appear similar to a Kinesis Advantage, I’m interested in the Dactyl and Dactyl Manuform. Xah Lee seems rather impressed.

                                                                                                  2. 2

                                                                                                    I’m envious of your chair - where’d you get it that cheap?! :D

                                                                                                    1. 4

                                                                                                      I bought a used Aeron with a chrome base back in 2012 from London on eBay, and had it shipped to Sweden. I think the chair was around £300. Companies sell them for cheap all the time in London. I ended up selling it again, at a £50 profit, even after the shipping I paid!

                                                                                                      1. 2

                                                                                                        On a French local adverts website (similar to Craigslist in the US).

                                                                                                        In my experience, you often get a better deal from these websites than from eBay for this kind of stuff. Not only do you cut out the transaction/delivery fees, but the market also tends to be a bit less competitive for buyers.

                                                                                                        If you’re not in a hurry and automate your search process with some web scrapers, you should get some pretty good deals :)

                                                                                                    1. 1

                                                                                                      I posted this mostly for this paragraph:

                                                                                                      My third remark introduces you to the Buxton Index, so named after its inventor, Professor John Buxton, at the time at Warwick University. The Buxton Index of an entity, i.e. person or organization, is defined as the length of the period, measured in years, over which the entity makes its plans. For the little grocery shop around the corner it is about 1/2, for the true Christian it is infinity, and for most other entities it is in between: about 4 for the average politician who aims at his re-election, slightly more for most industries, but much less for the managers who have to write quarterly reports. The Buxton Index is an important concept because close co-operation between entities with very different Buxton Indices invariably fails and leads to moral complaints about the partner. The party with the smaller Buxton Index is accused of being superficial and short-sighted, while the party with the larger Buxton Index is accused of neglect of duty, of backing out of its responsibility, of freewheeling, etc. In addition, each party accuses the other one of being stupid. The great advantage of the Buxton Index is that, as a simple numerical notion, it is morally neutral and lifts the difference above the plane of moral concerns. The Buxton Index is important to bear in mind when considering academic/industrial co-operation.

                                                                                                      1. 2

                                                                                                        Why not simply go the Nix way? I have found it to be very reasonable during merges.

                                                                                                        Specifically, one can either stick with a JSON-like format or expand the data into a dotted format.

                                                                                                        1. 3

                                                                                                          My understanding is that Nix provides a DSL, but TOML and JSON are just data. Are these really equivalent?

                                                                                                          1. 1

                                                                                                            Nix provides the DSL as a configuration language, which can be used instead of TOML here. For example, see this example (not mine).

                                                                                                            1. 2

                                                                                                              Yeah, but it’s Turing-complete. Which is nice if you intend to always modify it by hand anyway, since you can write custom functions to reduce the file size, but it also means that programs like dependabot go from parse-tweak-serialize to IDE Refactoring Engine.
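                                                                                                              As a toy illustration of that trade-off, here is a hypothetical Nix fragment (the `mkDep` helper and the registry URL are made up for the example) where a function factors out repetition that plain TOML or JSON would have to spell out verbatim:

```nix
# Toy example: mkDep is a made-up helper; the URL scheme is hypothetical.
# In TOML or JSON, the url line would have to be repeated for every entry.
let
  mkDep = name: version: {
    inherit name version;
    url = "https://example.org/${name}-${version}.tar.gz";
  };
in {
  deps = [
    (mkDep "foo" "1.2.0")
    (mkDep "bar" "0.9.1")
  ];
}
```

                                                                                                              The flip side is exactly the tooling cost described above: a version bumper now has to evaluate the language rather than patch a plain data file.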

                                                                                                              1. 6

                                                                                                                Perhaps Dhall may be a better choice then?

                                                                                                              2. 1

                                                                                                                Looks like a nice language! Thanks for the example. How is the parser support across various languages?

                                                                                                                1. 2

                                                                                                                  Well, lacking at this point. There are only Java, Go, Rust, Haskell, OCaml, and I suppose C.

                                                                                                                  I would like to note that Dhall might also be a better choice (in that it is better designed to be a configuration language than any of the alternatives, and has a number of language bindings in progress; unfortunately F# is not one of them).

                                                                                                                2. 1

                                                                                                                  Haven’t they simply copied HOCON? At least it looks a lot like it.

                                                                                                                  1. 3

                                                                                                                    Good point

                                                                                                                    According to the git history, HOCON started out in 2011:

                                                                                                                    commit 9ca157d34a4f2e14ac0d88de001611bcf3e911d0
                                                                                                                    Author: Havoc Pennington <hp@redacted>
                                                                                                                    Date:   Sat Nov 5 16:45:25 2011 -0400
                                                                                                                        WIP initial sketch

                                                                                                                    whereas the Nix project started out in 2003:

                                                                                                                    commit 2766a4b44ee6eafae03a042801270c7f6b8ed32a
                                                                                                                    Author: Eelco Dolstra <eelco.dolstra@redacted>
                                                                                                                    Date:   Fri Mar 14 16:43:14 2003 +0000
                                                                                                                        * Improved Nix.  Resources (package descriptors and other source
                                                                                                                          files) are now referenced using their cryptographic hashes.
                                                                                                                          This ensures that if two package descriptors have the same contents,
                                                                                                                          then they describe the same package.  This property is not as
                                                                                                                          trivial as it sounds: generally import relations cause this property
                                                                                                                          not to hold w.r.t. temporality.  But since imports also use hashes
                                                                                                                          to reference other packages, equality follows by induction.
                                                                                                                        svn path=/nix/trunk/pkg/; revision=5

I guess Nix takes precedence by about a decade here.

But to be completely honest, if we abstract away some implementation details (builtins, builtin types, string contexts, …), Nix is just about combining attribute sets using lambdas. They are probably not the first to come up with this design, and are unlikely to be the last.
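To make the "sets plus lambdas" point concrete, here is a toy Python sketch of that core idea (illustrative only, not real Nix semantics): packages are lambdas from an attribute set of inputs to an attribute set describing the result, and a callPackage-style helper wires them all together. All names here are invented for illustration.

```python
# A toy model of the "sets + lambdas" core of Nix: a package is a
# function from an attribute set of inputs to an attribute set.

def hello(inputs):
    # A package lambda: a plain dict built from its inputs.
    return {"name": "hello-1.0", "deps": [inputs["gcc"]["name"]]}

def gcc(inputs):
    return {"name": "gcc-12", "deps": []}

def call_packages(pkg_funcs):
    """Wire every package lambda to the whole package set, roughly
    like nixpkgs' callPackage; laziness is faked with memoisation."""
    built = {}

    def get(name):
        if name not in built:
            built[name] = pkg_funcs[name](Scope())
        return built[name]

    class Scope:
        # Looking up an input triggers (memoised) evaluation of it.
        def __getitem__(self, name):
            return get(name)

    return {name: get(name) for name in pkg_funcs}

pkgs = call_packages({"hello": hello, "gcc": gcc})
print(pkgs["hello"]["deps"])  # ['gcc-12']
```

The whole "language" in this sketch is just dictionaries and functions; everything else (overrides, overlays, fixed points) is layered on top of that same combination.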

                                                                                                                    1. 1

                                                                                                                      I didn’t know. You are right, it does look a lot like it.

                                                                                                              1. 5

                                                                                                                This is pure madness.

                                                                                                                I’ve been particularly impressed by the LVDS setup.

If, like me, you’re not a hardware person and are wondering what this device tree configuration is all about, there’s a good introduction to it linked above.

                                                                                                                1. 41

                                                                                                                  Wow, that’s pretty terrible.

On the other hand, I can’t help but feel sorry for Dominic; we all make mistakes, and this public shaming is pretty brutal.

I guess we should sometimes take the time to read the license before using a library:


                                                                                                                  (F)OSS is not a consumer good.

                                                                                                                  1. 11

                                                                                                                    I agree that shaming people is toxic and unproductive. No one wants to be shamed and no one is perfect.

                                                                                                                    But I see another dimension to the negative responses Dominic has received. Non-hierarchical, self-governing communities like open source software are organized by social norms. Social norms work through peer pressure - community members conform to the norms of the community not because they are compelled to by law but because it would cost them standing in the community not to. This isn’t inherently good. Some norms are toxic and self-policing via peer pressure can lead to shaming. What I see in some of the critical comments addressed to Dominic is an attempt to establish a clear social norm about what to do when you are ready to abandon a package. The norm is desirable because it increases the general level of trust. Even if the landscape is generally untrustworthy, you can have some confidence that people aren’t handing their packages off to strangers because it’s the norm not to do that. The desire for some norm here, whatever it is in the end, is reasonable.

                                                                                                                    Ending the discussion with “don’t worry about it Dominic, everyone makes mistakes, and anyways you’re not liable for it” signals to everyone that they’re not responsible for the consequences of what they do. In a strictly legal sense, that might be true. Even then, I’m skeptical that the warranty clause would cover negligence in the distribution of the software rather than the software itself. But in either case, don’t we want a community where people do feel responsible for the actions they take and are open to receiving feedback when an action they’ve taken has a bad result? This dialogue can occur without shaming, without targeting anyone personally, and can be part of the same give-and-take process that produces the software itself.

                                                                                                                    1. 7

Blaming people for a security issue is toxic, no matter what happened. In any organization with paid staff, where you might expect better, the most important rule of a post-mortem is to remain blameless. Blame doesn’t get anyone anywhere and doesn’t come close to the actual root cause. Instead of asking why Dominic gave away a critical package, people should be asking why a single random maintainer was able to give away a critical package.

                                                                                                                      Ending the discussion with “don’t worry about it Dominic, everyone makes mistakes, and anyways you’re not liable for it” signals to everyone that they’re not responsible for the consequences of what they do.

By putting the blame on Dominic, people are avoiding their own responsibilities. The main issue is that many core libraries in the JavaScript ecosystem still depend on external, single-file, non-core, likely unmaintained libraries. The people who should take responsibility are the ones who chose to add a weak single point of failure by depending on event-stream.

                                                                                                                      1. 2

                                                                                                                        It depends what you mean by blame. If you mean assigning moral responsibility, especially as a pretext for shaming them, then I agree it’s toxic. I think I was clear that I agree this shouldn’t happen. But if blame means asserting a causal relationship between Dominic’s actions and this result, it’s hard to argue that there isn’t such a relationship. The attack was only possible because Dominic transferred the package. This doesn’t mean he’s a bad person or that he should be “in trouble” or that anything negative should happen to him as a consequence. A healthy social norm would be to avoid transferring packages to un-credentialed strangers when you’re ready to abandon the package because we’ve seen this opens an attack vector. Then what’s happened here is instructive and everyone benefits from the experience. And yes, ideally these dilemmas are prohibited by the system. Until that is the case, it helps to have norms around the best way to act.

                                                                                                                        1. 1

I understand that you don’t condone the attacks and shaming going around, and that you see building some social norm around this as better than nothing. Still, I believe that even hinting that it was somehow Dominic’s fault is a net negative.

                                                                                                                          The attack was only possible because Dominic transferred the package.

This is exactly the framing I’m objecting to. By looking at an individual and their actions, you scope the issue at that level. The attack was taking over a dependency. There are so many ways to do that, especially for packages such as Dominic’s. This time it was a case of social engineering; next time it might just as well be credential hijacking, phishing, or a maintainer going rogue.

                                                                                                                          A healthy social norm would be to avoid transferring packages to un-credentialed strangers when you’re ready to abandon the package because we’ve seen this opens an attack vector.

I would say pushing this rhetoric is actually unhealthy: it only leads people to rely on those social norms and use them as an excuse to disown their accountability. It would be much healthier to set expectations right and teach proper risk assessment around dependency management.

                                                                                                                          Then what’s happened here is instructive and everyone benefits from the experience. And yes, ideally these dilemmas are prohibited by the system. Until that is the case, it helps to have norms around the best way to act.

The same issue has come up so many times in the past few years, especially in the NPM ecosystem, that we should be well past the “learn from the experience” stage; I believe it’s time the relevant actors actually moved toward a solution.

                                                                                                                    2. 17

I’ve done a similar thing before. After leaving the Elm community, I offered to transfer most of my repos over to the elm-community organisation. They accepted the most popular ones, but not elm-ast (and maybe one or two others). A few months later I received an e-mail from @wende asking if he could take over, so I took a look at his profile and the stuff he’d done in the past, and happily gave him commit access, thinking users would continue getting updates and improvements without any hassle. Now, @wende turns out to be a great guy, and I’m pretty sure he hasn’t backdoored anyone using elm-ast. But I find it hilarious that people somehow think maintainers should be responsible for vetting whoever they hand control of their projects to, that they could even do a good job of it, or that it would even make any sort of difference. Instead of trusting one random dude on the internet (me), you’re now trusting another.

                                                                                                                      Don’t implicitly trust random people on the internet and run their code. Vet the code you run and keep your dependency tree small.

                                                                                                                      1. 25

                                                                                                                        Vet the code you run

                                                                                                                        Or trust well-known, security-oriented distributions.

                                                                                                                        keep your dependency tree small

Yes, and stay away from environments, frameworks, and languages that force dependency fragmentation on you.

                                                                                                                        1. 4

                                                                                                                          Or trust well-known, security-oriented distributions.

                                                                                                                          That too! :D

                                                                                                                          1. 3

                                                                                                                            and stay away from […] frameworks

I wouldn’t state that so absolutely for the web. I suspect things would go a lot more haywire if people started handling raw HTTP in Python or Ruby or what have you. There’s a lot going on under the hood, such as content security policies, CSRF protection and the like. If you’re not actively, consciously aware of all of that, a web framework will probably still provide a net security benefit.

                                                                                                                            1. 5

                                                                                                                              Please don’t quote words without context:

                                                                                                                              […] that force dependency fragmentation on you

                                                                                                                              Frameworks and libraries with few dependencies and a good security track record are not the problem. (If anything, they are beneficial)

                                                                                                                              1. 2

                                                                                                                                I interpreted “Yes, and stay away from environment, frameworks, languages that force dependency fragmentation on you.” as (my misunderstandings in brackets) “Yes, and stay away from [(a) integrated development] environments, [(b)] frameworks, [(c)] languages that force dependency fragmentation on you.” with a and b being separate from the “that” in c.

                                                                                                                                I apologize for the misunderstanding caused.

                                                                                                                            2. 2

                                                                                                                              Isn’t it the case that reputable, security-focused distributions acquire such status and the continuity thereof by performing extensive vetting of maintainers?

                                                                                                                              The responsible alternative being abandoning the project and letting the community fork it if they want to.

                                                                                                                              1. 1

                                                                                                                                Or trust well-known, security-oriented distributions.

Then how do you deal with things like this: “The reason the login form is delivered as web content is to increase development speed and agility”?

                                                                                                                                1. 2

                                                                                                                                  As a distribution? Open a bug upstream, offer a patch, and sometimes patch the packaged version.

                                                                                                                                  1. 1

                                                                                                                                    That’s a good idea in general but sometimes the bug is introduced downstream.

                                                                                                                            3. 9

                                                                                                                              Most proprietary software also comes with pretty much the same warranty disclaimer. For example, see section 7c of the macOS EULA:


                                                                                                                              I mean, have we held accountable Apple or Google or Microsoft or Facebook in any substantial ways for their security flaws?

                                                                                                                              1. 4

In many other industries, accountability is enforced by law and overrides any EULA. And it is tied to profit in the broad sense: sales, access to valuable customer data, and so on.

                                                                                                                                Software companies got away with zero responsibility and this only encourages bad software.

                                                                                                                                1. 1

                                                                                                                                  And how have we enforced that by law for those companies, regardless of what those EULAs have said? When macOS allowed anyone to log in as root, what were the legal consequences it faced?

                                                                                                                                  1. 3

                                                                                                                                    other products

                                                                                                                                    e.g. selling cars without safety belts, electrical appliances without grounding…

                                                                                                                              2. 2

                                                                                                                                It is a security disaster given how easy it is for js stuff to hijack cookies and sessions.

                                                                                                                                1. 1

                                                                                                                                  It really isn’t if a well thought out CORS policy is defined.

                                                                                                                              1. 9

                                                                                                                                Hey Crustaceans,

                                                                                                                                I’m against this.

                                                                                                                                I’m also tired of these outrage-based threads.

                                                                                                                                What about:

1. Set up a reply cooldown: say we have to wait two hours before posting again to the same story thread. It could prevent heated arguments and would oblige people (I include myself in this set) to act a bit more rationally.
2. Disable upvotes for newcomers and activate them once the user reaches X karma points. A bit like what we currently do with flags.

[edit]: I’m willing to write the code associated with both of these proposals.
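For proposal #1, the cooldown check is simple to sketch. Here is a hypothetical Python sketch (the real Lobsters codebase is Ruby on Rails, and all names here are invented for illustration): track each user’s last reply per story and refuse replies until the cooldown elapses.

```python
from datetime import datetime, timedelta

# Hypothetical sketch of proposal #1: a per-user, per-story reply cooldown.
COOLDOWN = timedelta(hours=2)
last_reply_at = {}  # (user_id, story_id) -> datetime of the last reply

def may_reply(user_id, story_id, now):
    """Allow a reply if the user has never replied to this story,
    or if the cooldown has fully elapsed since their last reply."""
    prev = last_reply_at.get((user_id, story_id))
    return prev is None or now - prev >= COOLDOWN

def record_reply(user_id, story_id, now):
    last_reply_at[(user_id, story_id)] = now

now = datetime(2018, 12, 1, 12, 0)
record_reply(1, 42, now)
print(may_reply(1, 42, now + timedelta(minutes=30)))  # False
print(may_reply(1, 42, now + timedelta(hours=2)))     # True
```

In a real implementation this state would live in the database rather than in memory, and the check would probably be a model validation on comment creation.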

                                                                                                                                1. 8

I don’t think #1 is so broadly right; I ran a few queries and it doesn’t look like the average time from a reply to its parent is dropping. A quick reply isn’t necessarily bad, as this rule implies; we often have excellent, deep back-and-forths between individuals.

If we want to only put it in place where a thread is catching downvotes, hm, there’s a handful of users who regularly downvote things they disagree with. Like most activity this is a logarithmic distribution, so it really is like 5-9 users who do most of this, but it takes human judgment to tell that apart from a really active user… so it’s a mod messaging them rather than something easily algorithmic.

                                                                                                                                  For #2, as you offered to write code - want to take a pass at a query or queries to get at whether new/low-karma users upvoting “outrage-based threads” is a problem?

                                                                                                                                  1. 3

                                                                                                                                    For #2, as you offered to write code - want to take a pass at a query or queries to get at whether new/low-karma users upvoting “outrage-based threads” is a problem?

                                                                                                                                    Very good idea! I’ll update my local install and come back to you with the query.

                                                                                                                                  2. 2

                                                                                                                                    I’m also against this, but support this open approach to trying to solve perceived problems.

                                                                                                                                    As for these two proposals, I think the disadvantages of #1 outweigh the potential advantages. The whole point of a forum is to foster discussion, and I do not believe adding friction to those discussions is an acceptable solution to personality conflicts.

                                                                                                                                    #2 seems reasonable, but it’s not entirely clear to me how it would help. I suppose I would need a better understanding of the problem.

                                                                                                                                  1. 3

                                                                                                                                    My head is spinning. I appreciate it’s neat to enable dynamic-like workflows on top of a blog that’s actually just static files, but I would take a makefile+rsync on my laptop over this any day.

                                                                                                                                    I guess this enables webmentions which my setup doesn’t, but … wow, what a cost of entry. Is there some other advantage I’m missing?

                                                                                                                                    1. 3

                                                                                                                                      I guess this enables webmentions which my setup doesn’t, but … wow, what a cost of entry. Is there some other advantage I’m missing?

                                                                                                                                      I feel exactly the same!

                                                                                                                                      I’m very fond of the static website approach. My website has always been static. This approach is really simple and has very few moving parts. For instance:

1. My website is hosted on a Raspberry Pi sitting in a community-run datacenter. Some of my posts reached the top section of some link aggregators (here, HN, Reddit, …); my webserver took the hit easily, and the load never exceeded 0.4. When I see websites sitting on beefy servers effectively DDoSed under the same conditions, I feel like serving static files makes a website far easier to maintain and scale.
2. Yesterday, my HDD failed. Shit happens [1]. I was just a make publish and a DNS zone modification away from being online again. I could never have done that with a dynamic website.

However, I’m also very fond of the ideas pushed by the indie web community. The amount of tooling they’ve built to make POSSE [2] easier is pretty impressive. The whole micropub/microsub workflow and its associated UX are amazing. I can’t make it fit with my current workflow, though; it drives me crazy.

I built a whole micropub/microsub prototype based on @myfreeweb’s sweetroll, using a Hugo backend and generating webpages on the fly. It does not work reliably, and it is crazily complex; it feels like I’m re-inventing a whole cache system. I feel like the two approaches simply do not fit together. (I’ll probably write an article on this experiment in the near future.)

So, in the meantime, I try to advertise the indieweb values by posting the stuff they write to a broader community (here, for instance :)) and pushing their standards where possible without adding too much complexity. I do that in the secret hope that somebody smarter than I am will manage to make the two approaches fit together :D

[1] I have to admit I got lucky on this one. My backups fire at 12:00 and the hard drive failed at 2 PM, so I lost almost nothing at all.

[2] “Publish (on your) Own Site, Syndicate Elsewhere”.