1. 11

    Heh, the Steven Black adblock host file includes cliqz.com; an attempt to remove it was denied: https://github.com/mitchellkrogza/Badd-Boyz-Hosts/issues/34

    I flagged this as spam. Even though the premise of the article is correct (we need more search engines), it’s clearly an advertisement. (the title of the article on the webpage is “The world needs Cliqz. The world needs more search engines.”)

    1. 5

      Aren’t you saying, then, that the only people who can post something titled “the world needs more search engines” are those who do nothing about it?

      1. 6

        That’s a rather reductive argument, given that the title of the article includes “The world needs Cliqz”. If we allow posts from anyone and everyone who says you should use their product, we will be inundated with spam.

        1. 4

          And if we allow none, then people who work on something because they consider it important can’t post about it.

          The way to avoid these undesirable consequences is IMO to not use such tests, i.e. to not decide whether a posting is permissible based on who posts, but rather to consider its content.

          1. 7

            We should allow no advertisements. There’s a difference between a product announcement, and a product advertisement. The arguments you’re making here are hyperbolic.

            1. -1

              I have good (offline) reason to believe that at least some of the people at Cliqz work on Cliqz because they believe in that headline and the article, so my argument is directly relevant to the case at hand.

              1. 4

                Many people who advertise believe in what they’re advertising; that doesn’t make it not advertising. It makes it honest advertising, but it’s still advertising. For the record I didn’t (yet) mark it spam, but I felt that your argument that it wasn’t spam was also lacking.

                1. 1

                  I assume s/advertising/marketing/g

                  If I understand you correctly, then I personally could post something about some random hack on Lobsters, and I could post about deep technical details of my work, but I could not post a user-level or api-level description of the thing I’m working on, because that would be marketing and marketing is impermissible. Other people, who know less about it, could however post such a description. Do I understand you correctly?

                  If so, then I feel that this is an unfortunate affordance. Fora such as lobste.rs have a tendency to boost people who spend more time reading and posting stuff on the internet, compared to people who spend more time in an editor or IDE. Lobste.rs isn’t particularly bad (certainly not compared to horrors like twitter) but I still feel that the tendency is an unfortunate one, and any rule or affordance that strengthens the tendency is bad. IMO.

                  1. 1

                    Again you’re failing to consider the tone of the article. If your overarching goal with the written piece is to sell me something, I don’t want to read it. If you are trying to describe something, say a user-level or api-level description, that is fine. Saying the world needs my product is needless self-aggrandizement, and in my mind it calls the rigor of the article, if any, into question. I would not mind if the author said, “Here is my product, here is why I think it is valuable, here’s what I think it does better than the competition.” However, if instead they said “My product is the best, here’s why you should use it”, that has the tone of a sales pitch and really undermines its value as an article.

                    I know it sounds like I’m splitting hairs here but tone matters.

            2. 5

              I’d be okay with an established community member submitting an article they wrote (under the show tag). I object to advertising that comes from someone with no other post history (“I joined to sell you my product”), but “I hang out here and this is what I work on” is fine by me.

      1. 3

        Good effort. I’ll give it a try while backing up with ddg. The interface is clean.

        1. 7

          Have you actually tried it? This Cliqz thing appears to have a smaller index than www.Gigablast.com, which is not only 100% independent of the big 4 (Google, Bing, Yandex or Baidu) but is actually OSS on GitHub, and, as mentioned, appears to have a much bigger index than this Cliqz thingy.

          Also, Cliqz is broken for me, because they show the whole UI in a GeoIP-based language which I don’t understand, without any way to switch to English. GeoIP is so 1999, BTW.

          1. 3

            There are seven notable search engines, right? Google, Bing, Yandex, Baidu, Gigablast, Seznam and now Cliqz.

            I tried a search on all. I searched for reviews of an expensive machine that’s been produced for several decades by a small company with more engineering expertise than SEO skills. There are (at least) two good reviews on the web of that machine, as well as innumerable pages titled “buy [name] here”, and not a few pages with shallow or uninformed opinions. None of the seven search result pages included either of the two well-informed pages. All of the seven search results were almost interchangeable. Notably, the search engine with the biggest index did not find the two needles in the haystack.

            I’m not sure what this shows. Perhaps that distinguishing well-informed pages from uninformed, shallow or shilling pages is terribly difficult? That doing better than Google is actually really difficult? Or maybe I was just unlucky.

            BTW, I wondered whether Cliqz is regional. They don’t claim to be, but I tested with a search for someone down the street from Cliqz’ main office. The results were a year old. No, Cliqz isn’t regional ;) Good luck to them. It’s a worthwhile task, but not a simple one.

            1. 1

              How about DuckDuckGo? Many people use it.

              1. 1

                DDG outsources most of the search work to Bing.

                The web page search results generally come from pages crawled by Bingbot. IIRC the “instant answer” part is done by DDG itself, the web page search is all Bing, and I have no idea about the image or video searches.

              2. 1

                Naver is #1 in South Korea.

                1. 2

                  Neat, I hadn’t heard of it.

                  FWIW it gives worse results than the other seven on my query (which I haven’t described in detail, just in case I want to use it again). The Wikipedia page on Naver implies that it’s focused on Hangul/Korean pages, and that machine is not built and perhaps not even sold in South Korea; perhaps that’s why.

          1. 9

            RFC 3676 is very widely implemented and solves this in the way you want. You’re either looking for that or looking to reinvent it.

            I’m curious how you might get vim to insert/preserve those spaces, though.

            1. 3

              vimrc: autocmd FileType mail setlocal formatoptions+=aw (‘a’ auto-formats paragraphs as you edit, ‘w’ makes trailing whitespace mark a flowed line)
              muttrc: set text_flowed=yes (send mail as format=flowed per RFC 3676)

            1. 2

              Raspberry π and a NAS, in my case a Synology, although I might pick a QNAP if I were to buy one now, for the encryption. Being able to throw away broken drives without a worry is so nice.

              The key for me is fanless operation (to the degree I have servers at home any more). An rπ is quite fast, really, if you compare it with nineties hardware/software, and if you don’t particularly care to do deepfakes or reencode videos it might serve you well.

              1. 2

                I think the great value of Meetup.com isn’t so much in “RSVP” and whatnot, but in discovery. Most meetups I’ve attended I had no idea existed; especially after moving to a new location. All of that is lost if everyone uses their own “IndieWeb” RSVP solution.

                1. 1

                  While I’m not aware of anyone who crawls and indexes based on those microformats, in principle that is possible; IIRC it was the main point of the specification. That said, it looks as if Jamie Tanna’s policy is that everyone will write the details from scratch and get it right, without experience, without a linter, without a validator.

                  When sitemaps were introduced someone built a validator that would parse your sitemap and report both errors and payload immediately. Very actionable.

                  1. 2

                    Yeah, it’s all doable; it’s just not something the article addresses. Something like ActivityPub might also be a solution (although I know very little of the spec, so I’m not sure if it’s a good fit here).

                    In general, the “federated web” is new and very interesting, although I’m also concerned that the overall system is much more complex, even if individual sites (like this one) are much simpler, which means it’ll never really be mainstream and will stay restricted to tech communities. I’m not entirely sure what a good solution to that is, if there even is one.

                    1. 1

                      Getting moderate to wide adoption isn’t a black art any more:

                      1. Find something that some group of people want.
                      2. Make something that can be used, an MVP. The V means “can be used”.
                      3. Collect metrics, analyse what works and what doesn’t.
                      4. Iterate.
                      5. Profit-oriented people need some more steps but that’s irrelevant here.

                      Assuming point 0 exists, this one seems to be just short of point 1 so far. Perhaps they’re working on the V bits now, and there’ll be a new blog posting at some point.

                1. 1

                  Use case: User does not want to see colors on their terminal -> Disable colors in the terminal configuration

                  Use case: Program is not writing to a terminal -> Programs should check whether stdout is a tty and check for the existence of the $TERM variable

                  Is there something I’m missing? I don’t get the necessity for this.
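
                  Concretely, the check I mean is roughly this (a Python sketch):

                  import os
                  import sys

                  def want_color(stream=sys.stdout):
                      # True when it seems safe to emit ANSI color codes on `stream`.
                      if not stream.isatty():            # output is piped or redirected
                          return False
                      term = os.environ.get("TERM", "")
                      if term in ("", "dumb"):           # no terminal info, or an explicitly dumb terminal
                          return False
                      return True

                  As I understand it, the proposal only adds one clause on top of this: bail out early whenever a dedicated opt-out environment variable (NO_COLOR, if I read it right) is set, regardless of its value.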

                  1. 6

                    The article goes into this. Users might well want to explicitly configure some tools (e.g. an editor) to output color, but don’t want to be spammed with useless color output by a package manager.

                    (I’m particularly irritated by command line tools whose color output was clearly not tested outside leet darkmode terminals.)

                    1. 4

                      (I’m particularly irritated by command line tools whose color output was clearly not tested outside leet darkmode terminals.)

                      Thank goodness! Someone else who thinks the same thing.

                      1. 3

                        I think the term you want is “angry fruit salad”, and I quite concur. The people who choose default colours in terminal programs I use occasionally (but not often enough to bother with configuring) all seem to have a fetish for garish shrieking colours.

                      2. 3

                        I use eshell, and ansi-color codes aren’t always supported, but it’s not quite a dumb terminal either. I stumbled upon this link while trying to find a way to disable unnecessary color codes being generated by apt (I know there are other ways, but I wanted to know if this was possible). If this were a pervasive standard, I would have a better experience with tools like these by default.

                        1. 1

                          My predicament is similar: my main working terminal emulator is M-x shell, and some of the newer cli tools completely disregard the fact that it is pretty much a dumb terminal.

                      1. 7

                        There is one really ugly thing that ruins IPv6 for end users and makes it worse than IPv4 for reaching one another directly without an intermediate party.

                        It’s called DHCP-PD. There is, technically, nothing wrong with it; it’s just a protocol for telling the customer’s router what /64 network it should use. However, many ISPs treat it like dynamic IPv4 at its worst and force frequent prefix changes, even on business connections.

                        With dynamic IPv4, you can use dynamic DNS and DNAT to keep things reachable by the same address. It’s an ugly and fragile but somewhat usable solution. If your very network changes every day, you can’t even reach a box right next to you unless you are using a router above consumer grade that can do DHCP with DDNS updates, or use mDNS etc.

                        If that approach becomes the default, it’s the end of end user networking as we know it. Everything will be useless without a third party that has a fixed address.

                        1. 2

                          I completely agree with you. I was testing DHCP-PD in our data center, and pretty much all the open source implementations suck. At the moment our approach is to use a custom REST-based API to dispatch static /64 networks to VPSes and static /48s to VPN customers.

                          In theory DHCP-PD could solve all of this, but by default there is no easy way to map a prefix to a customer statically.

                          Maybe it’s time to write a new RFC.

                          1. 3

                            It’s not just about implementations. I don’t know if proprietary implementations suck less, but I do know that ISPs often force a prefix change intentionally, to force customers to buy a much more expensive connection with a statically allocated prefix if they want it to stop.

                            My fear is that even if good, easy to use implementations appear, ISPs will choose to make it a premium service that an average user will not want to pay for.

                            1. 2

                              …but I do know that ISPs often force a prefix change intentionally, to force customers to buy a much more expensive connection with a statically allocated prefix if they want it to stop.

                              In a previous life, wearing a network sysadmin hat, broken equipment was the bane of all our lives. One example was DHCP clients that ignored the lease time, either treating it as infinity or, worse, as a single second.

                              IPv6 has to some degree offered packet pushers the opportunity of a greenfield deployment, and it is arguably Good Practice(tm) to have everything in flux from day zero to shake out bugs.

                              Personally I would not be so quick to see malice where there are good practical reasons for the behaviour. After all, this is supposedly part of the whole infrastructure-as-code malarkey line of thought that is actively preached here.

                              Of course I understand that there are people who want to run a service from their home connection, but the majority do not. For the minority that do (game servers maybe, but I am probably showing my age here… xpilot, w00t!), you likely need service discovery, which requires a central authority (DDNS) or, if you are hipster enough, DHT, blockchain or IPFS.

                              Personally, I’m more upset that global IPv6 multicast (IIRC you get some usable space with every /48) is not available, along with all the amazing use cases (such as streaming your own radio/video) it would bring.

                              1. 1

                                My ISP (which admittedly is like heaven on earth) hands out dynamic prefixes by default, but if you want a static prefix, all you need to do is send them an email or even a tweet.

                                They even offer reverse DNS delegation for free once you have your static prefix.

                                You still use DHCP-PD to ask for the prefixes for your subnets (they give you a /48), but the prefix remains static.

                                1. 1

                                  Do you even think there’s a market for such a premium service any more? Something to motivate the ISPs?

                                  When I got rid of the rack I had in the basement, I offered parts to the people I know who also have racks. “Do you want it, or any parts? Spare nuts?” None did, “I’ve also gotten rid of my rack and replaced it with colo servers”.

                                  I can believe that ISPs force address changes intentionally, but I’m reluctant to believe the reason you suggest.

                              2. 1

                                I think we’re in a world without fixed addresses already. My primary internet access device has had two address changes so far today. One when I changed from WLAN to 4G, the second when I changed back to WLAN. Am I unusual?

                                That I can’t run servers on the connection that serves my WLAN is an annoyance. But it’s one that has to be weighed against the effect of DHCP-PD plus privacy extensions on web clients. I see that when I use Firefox Focus, there isn’t really any way to track me on the web. If I had a permanent address, or a permanent prefix shared with no one, avoiding tracking would not be within my power.

                              1. 11

                                It seems like you want something that can handle modern applications. I have a soft spot for Fairphone as it’s quite open, and their supply chain, while not perfect, is the best at the moment insofar as sustainability is concerned. You can put LineageOS, Ubuntu Touch, and postmarketOS on it if you want. The Fairphone 2 is better supported than the 3, but the 3 only came out recently, so it should be quite well supported soon.

                                1. 11

                                  You may also want to consider a used phone (whose battery can be replaced without too much frustration) if security isn’t a top priority. Used iPhone SE/6s, or android phones with decent LineageOS support come to mind.

                                  1. 2

                                    Yes, I’m looking at buying a used Samsung S5 Neo for its replaceable battery and LineageOS support.

                                    1. 2

                                      Keep in mind the tension between rainproofness and replaceable batteries, and check whether lineageos for your chosen phone supports an encrypted file system. Some don’t, perhaps even most.

                                      I personally hate the thought of losing a device with personal data on a cleartext storage medium, even if there’s password protection.

                                  2. 2

                                    Thanks for this first suggestion. It seems a really good one concerning the environment. Concerning privacy, security and minimality, it seems interesting too. I like that I can go Google-less.

                                    1. 2

                                      I wish Fairphone made a low-tech device. I do not need a smartphone; I need a pocket client for various protocols (GSM / SMS / IMAP / IRC / SSH / X11 even, maybe?).

                                      1. 4

                                        In other words, you want a not-smart phone that can maintain data and voice connections at the same time and that lets you install and run arbitrary apps. What’s not-smart about such a phone? What makes a smartphone smart?

                                        1. 1

                                          Marketing.

                                          1. 3

                                            I see, thanks. That’s a weird point of view for me, but now I understand what some people mean, that I didn’t understand before.

                                            FYI my Android phone is marketing-free. It serves me no ads, and according to activity.google.com it tells Google nothing about what I do. Sony XZ1C, stock ROM; all I did was go through the settings carefully and turn off stuff, uninstall or deactivate the apps I didn’t want, and install Nova (Prime) instead of the stock launcher. It took an hour, maybe a little more.

                                            1. 1

                                              There is marketing as in shipping it to the user (advertising, metrics, surveillance). It is a relief to take that out of Android, and I like my termux + connectbot + fancy better now that it’s gone.

                                              There is also a lot of marketing that became features in AOSP, whose entire interface is an analogy to the physical world (Material Design, oh, the name of a ground-breaking ad campaign). This is fun. This is not useful to me. I value good battery life and access to my various resources more than having the metaphor of paper and ink incarnated in a 3D-accelerated user interface.

                                              Turning off the animations (in the developer options) helps a bit with this.

                                              Google I/O is a show of marketing making its way into Google software like Android (without sarcasm, it’s what they officially do; they are a company after all…).

                                              But I’m not a heavy phone user. I usually have my laptop around for anything fancy.

                                    1. 11

                                      I do still agree with Thomas Ptacek: DNSSEC is not necessary.

                                      https://sockpuppet.org/blog/2015/01/15/against-dnssec/

                                      1. 2

                                        From that article:

                                        Had DNSSEC been deployed 5 years ago, Muammar Gaddafi would have controlled BIT.LY’s TLS keys.

                                        1. 1

                                          Is that true, though?

                                          He would have been in a position to take over the public key, if he were willing to do so visibly. The DNS isn’t like a CA — a CA can issue n certificates for the same domain, but the DNS makes it difficult to give me one set of answers and you quite another, particularly if either of us is suspicious, as a monitoring service might be.

                                          Bit.ly controlled its own private key. Gaddafi’s possibility was to take over control of the domain and publish other RRs, in full view of everyone. A concealed or targeted attack… I don’t think so.

                                          1. 1

                                            Read the article; the quote comes from the part about DANE, which extends DNSSEC and is about putting public keys for TLS into TLSA resource records.

                                            Certificate Transparency has pretty much solved this for the CA system, where it is directly visible if unauthorized certificates are being used.

                                            1. 1

                                              I read it when it was new… I suppose things have changed a bit. The trend towards JSON-over-HTTPS has been very strong and gone very far, so securing only application protocols like HTTP isn’t as much of a problem as it was.

                                              DNSSEC and DANE provide assurance that a given IP address is what I asked for. But if IP addresses aren’t relevant, assurances about them aren’t either…

                                        2. 1

                                          So what do you think about DNS-over-HTTPS, which AIUI is also motivated by much the same thing, but only secures the path from the endpoint to the caching DNS server?

                                          I once saw advertising for some $%⚠雷𝀲☠⏣☡☢☣☧⍾♏♣⚑⚒⏁ game on my own website while giving a presentation. The venue’s WLAN “enhanced” my site. Both DNS-over-HTTPS and DNSSEC would have prevented that attack, at least if I had used Google’s or Cloudflare’s resolvers instead of the presentation venue’s.

                                          1. 1

                                            I do like that, although I would prefer that all authoritative DNS servers implemented TLS, so that my own recursor could do secure look-ups instead of having to rely on a few centralized DoH resolvers.

                                            1. 1

                                              Oh, in that case you’d still have much the same bottleneck: You’d need to do DoH/DoT to the root/tld name servers, of which there aren’t many.

                                              1. 1

                                                Correct. But I’d like to see that development, which would be far better than DNSSEC.

                                          2. 1

                                            I feel like many arguments in this article are misleading and/or omit important details.

                                            DNSSEC is Cryptographically Weak

                                              Except… it’s not. You can use ECDSA keys just fine for signing. Sure, you can use insecure keys, just like you can use insecure keys or methods in TLS or pretty much anywhere else. We’ve come to distrust insecure configurations in TLS, and we will probably have to move in that direction in DNSSEC. But first we should at least get halfway there.

                                            DNSSEC is Expensive To Adopt

                                              That seems to depend a lot on your point of view. A client trusting a validating recursor only needs to check a single flag in the DNS response to know if a record was signed correctly. Insecure results are therefore clearly visible, and incorrectly signed results won’t be returned by the resolver. For clients, very little seems to need changing, but this is also the place where the least adoption has happened up until now.

                                            DNSSEC is Expensive To Deploy

                                              Two or three lines of configuration in knot-dns with automatic zone signing. No extra configuration on any of my nsd secondary servers. Not sure I’d call that expensive to deploy. For a small zone, getting basic signing going is easier than configuring a Let’s Encrypt ACME client. The biggest pain point is finding a registrar that allows you to set DS records for your zone.
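
                                              To give an idea of what I mean by “two or three lines”, a knot.conf zone entry with automatic signing looks roughly like this (quoted from memory, with a placeholder domain; check the Knot documentation before copying):

                                              zone:
                                                - domain: example.org        # placeholder
                                                  file: example.org.zone
                                                  dnssec-signing: on         # let knot sign and re-sign the zone automatically
                                                  dnssec-policy: default     # or reference a policy section to choose keys/algorithms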

                                            DNSSEC is Incomplete

                                            Securing the “last mile” is not what DNSSEC tries to do. We’ve got DoT and DoH for this, so that’s a different issue from a DNSSEC point of view.

                                            DNSSEC is a Government-Controlled PKI

                                              This is the only truly interesting point, and it’s a difficult and interesting one for sure. Not sure if I’d open that can of worms right away, because the TLS CA system is also far from ideal. But I suppose it is true that DNSSEC has one central anchor for trust, which would usually be the keys for the root zone. It is of course also true that any local registrar might be influenced by a local government. But all of this is true today. The implications this has for DANE should probably be discussed in the context of DANE and not of DNSSEC, but that’s just my 2 cents on this.

                                          1. 9

                                            The author builds LLVM, which is an odd-shaped job. The work done at the start is very CPU-intensive; at the end there are RAM-intensive parts. This is a bit of an LLVM FAQ: people configure enough workers for their CPUs and then the build crawls because there isn’t enough RAM in the last part of the build.

                                            Something like cmake -DLLVM_PARALLEL_LINK_JOBS:STRING=2 and then ninja -j6 might be more informative. That runs at most two link jobs (which require enormous amounts of RAM), but up to six jobs overall. If configured that way, the build can use both most of the RAM and most of the CPU without overstraining either, and so it says more about the hardware’s capabilities.
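
                                            Spelled out, that is roughly the following (the Ninja generator is assumed, since LLVM_PARALLEL_LINK_JOBS only takes effect there, and the source path is a placeholder):

                                            cmake -G Ninja -DLLVM_PARALLEL_LINK_JOBS:STRING=2 ../llvm
                                            ninja -j6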

                                            1. 3

                                              Apparently the “octoverse” is the GitHub “community”, and this is an ad disguised as an infographic. Flagged as spam.

                                              1. 4

                                                I think the GitHub community is large enough that this kind of analysis is interesting and helpful in understanding development trends.

                                                Even given its limitations, I think it’s probably the best view of what’s happening in software development. Maybe the Stack Overflow survey comes close.

                                                1. 3

                                                  Yes, agreed. In particular I found the open source/country statistics interesting, and I can’t think of anyone else who has that data and would make such a report. Of course I’ve given it about 30 seconds of thought, but still…

                                                  If I want to know such country statistics, then I must accept that they’re compiled and published in a way that makes sense for the people who have the data and do the work. It may be an ad, but if advertising were generally as informative as this I think we’d be well off.

                                                2. 1

                                                  I posted the URL to Octoverse because it contains a very useful report based on a huge number of open source codebases and developers. I found it very useful and decided to share it with others here. No intent to spam or promote GitHub.

                                                1. 6

                                                  Seven years later, the lack of GNU software isn’t something I’ve noticed that Mac users complain about. Or even developers. I’ve noticed that developers complain about the touch bar, yes. About the lack of an ESC key. But about having to make do with tmux instead of screen? I definitely hadn’t noticed that.

                                                  I suppose this really means that the FSF doesn’t have great software to advance its cause any more.

                                                  1. 9

                                                      The only complaint I’ve seen is about an outdated bash. Apple switched the default shell to zsh this year, which will help there.

                                                    1. 4

                                                      The old buggy rsync is another problem. I wish they’d just drop the pretense of shipping a useful unixy userland.

                                                      1. 6

                                                        One step toward this: they’re removing the out-of-date Python, Ruby, and Perl.

                                                        Future versions of macOS won’t include scripting language runtimes by default, and might require you to install additional packages. If your software depends on scripting languages, it’s recommended that you bundle the runtime within the app.

                                                    2. 5

                                                        Lots of alternative software these days is readily available to install via things like Homebrew, pkgsrc, MacPorts, Nix, etc. Historically, I recall bundled software being much more important, as internet pipes were tiny (or nonexistent!) and sidecar packaging systems (if they existed) were more immature/buggy. In fact, many of the BSDs still include “kitchen sink” base systems with tons of seldom-used tools. A few of the BSDs (OpenBSD is an example) do make it a point to at least remove (and/or move into ports) some of the old base system stuff that isn’t much used anymore.

                                                      1. 3

                                                        I see someone flagged this as trolling. It’s not. Obtuse, perhaps.

                                                        The GPL’s viral nature depends on the compelling (or at least attractive) nature of the existing GPL’d software. I think (I’ll be happy to hear any arguments to the contrary) that exactly one of these is true:

                                                        • The GPL’d software is good enough that developers complain if it’s removed from an OS.
                                                        • The GPL’d software is not good enough to help the FSF’s cause.

                                                        That blog posting was written in 2012; my reaction when I read it now, seven years later, is surprise, because I haven’t heard protests. Developers are usually quick to complain when something sucks. With seven years hindsight the first possibility seems not to be the case, so I infer that the second possibility is what’s true.

                                                          I’m eager to hear any arguments that developers have protested and I haven’t noticed, or that there’s a third alternative, or, or, or.

                                                        1. 2

                                                            It’s kind of sad, really. I do believe in the FSF and the GPL, and even GPLv3. I think many of the great ideas behind the FSF/GPL have pretty much been lost today. Open source is all about middleware today… use our middleware so you can build apps around our (Facebook, Google, Microsoft, Amazon) systems. We don’t have a lot of good FOSS end-user apps. There’s Firefox and Darktable and LibreOffice, I guess, but GIMP never eclipsed Photoshop and you’re more likely to see Mac users in a coffee shop than Linux laptops.

                                                        1. 3

                                                          I use blosxom in static mode but it’s way out of date, unsupported, and is written in Perl.

                                                          1. 4

                                                            I too started with blosxom, then changed to loathsxome almost ten years ago. I eventually rewrote that in c++11 as an experiment, and now have both in production.

                                                          1. 3

                                                              I tried to run Plex briefly on a Synology RB411+, which I think has a 1.6GHz Atom CPU. I’ve no idea how fast that 1.6GHz Atom is, but at any rate Plex was severely unhappy. It often wanted to do on-demand re-encoding (even though the only client was connected via gigabit ethernet and supported 1920×1080), and the user experience left much to be desired. I got the impression that Plex tried to serve video streams that wouldn’t overstrain the clients, even if that overstrained the CPU Plex was running on.

                                                            Few video files require more than 10Mbps (that’s megabits, not megabytes) sustained, but if you also use the same storage for other things (such as backups or other crontabs), the bandwidth requirements add up.

                                                            I suggest that you try it briefly, and consider the HD durability issue only when you see whether the CPU does what Plex asks of it.

                                                            (FWIW I now serve my files via NFS and use either Kodi or VLC as clients.)

                                                            1. 9

                                                                It is not just the compiler. Whether 4+4+4 or 3*4 is faster depends on the CPU. The instruction set itself does not give any guarantees. So even assembly language is not how the computer works?

                                                              1. 22

                                                                Nowadays I’d say that no, assembly language is not how the computer works. The assembly language is also running in another abstract machine.

                                                                1. 6

                                                                  Yeah, modern CPUs do all of the following: (micro-)instruction buffering, out-of-order / parallel scheduling, branch prediction, speculative execution…

                                                                  Assembly language is definitely not how the machine works anymore, and that’s how we end up with Meltdown and Spectre.

                                                                2. 4

                                                                  Hasn’t that been true since microcode was invented? ;)

                                                                  1. 2

                                                                      It’s differently true today… back in the day, reading assembly was informative in a way it isn’t now that a conditional branch can take zero cycles if it goes one way and many dozens of cycles if it goes the other way. Assembly is still the most machine-like language we have, but reading it gives a much less complete picture of what the code does.

                                                                      It’s so difficult to read assembly and understand that one read is very likely to hit L1 or L2, while another read is likely to go to main memory, and that when that happens the delay will impact the next 50 instructions. Or that when a conditional branch goes one way the CPU will already have prefetched a lot and the next 25 instructions are already being executed, whereas when it goes the other way the next instruction will take longer to start than those 25 take to finish.

                                                                    1. 1

                                                                      We might need tools that explain it to us based on the CPU from several, optional perspectives. “Click expand or type this to see more.”

                                                                1. 4

                                                                  Assuming your domain is constantine.su, I wonder whether the issue might be a combination of:

                                                                  1. 4

                                                                    Hetzner

                                                                      Oh boy. Possibly related, possibly unrelated, but at work recently we had to block an entire IP range from Hetzner due to misbehaving crawlers that were not respecting various robots.txt rules and nofollow on internal links. There are probably some legitimate IPs in that range, but it’s not worth the BS we were getting from those crawlers.

                                                                    Also seconding your recommendation of rDNS. It has been essential for many, many years now.

                                                                    1. 9

                                                                      Well in that case you won’t get my mails, or be able to interact with any of my services, or update Quasseldroid.

                                                                        Hetzner is one of the few hosters offering dedicated hosting powered by fully renewable energy, and one of the few hosters actually handling abuse reports correctly (as in, not terminating service on any abuse report, but only on court orders, which is useful behavior if you’re getting SWATed by internet trolls, who’ve also found they can use abuse reports for the same purpose).

                                                                      1. 4

                                                                        +1 for Hetzner. Their support and service is great! I’m using them as well because of their use of renewable energy. Changed from Linode a while back.

                                                                        1. 3

                                                                            They also aren’t crooks like some of their competitors. I’ve had Scaleway (Online SAS) increase prices for old dedicated servers without much advance notice, which is really a shame, because the only reason I bought the servers was the low price (one of them I didn’t even have powered on, apparently). OVH appears to have played similar games as well. Hetzner does the opposite for long-term customers.

                                                                        2. 2

                                                                          Not to worry, I will still get your mail and all the rest!

                                                                            AFAIK the block was for various front-end web services. I do not think it even applies to API instances, just those serving up full web pages. So you couldn’t access the various websites from a script deployed to Hetzner. And I suppose if you did mail a web instance, it wouldn’t receive it, but the IP block wouldn’t be the only reason for that.

                                                                          Also good to hear another anecdote on Hetzner as a host. Aside from your comment, my only exposure to them is as the host of a hive of over-aggressive and poorly-configured crawlers over the last year.

                                                                          I shared my anecdote because it might be relevant to the article’s main concern: If we had to block one of their IP ranges for web traffic, it is conceivable that other entities have blocked them for email.

                                                                        3. 1

                                                                          Oh that’s unfortunate. They’re a good host. I only moved off them because they finally stopped offering the VPS I was on after seven years.

                                                                        4. 5

                                                                          No, I’ve never used that domain for mail; it’s too long.

                                                                          • Note that this is not a TLD issue, either, because only one of my domains is affected by “low reputation”, the other ones in the very same TLD are not. This has been 100% reproducible over the last few weeks.

                                                                          • Hetzner IP space is not involved here, either — none of these rejects or accepts were over Hetzner IP space. Regardless, you’re ignoring the fact that Google has blacklisted a specific domain name, not the IP address which I’m using, because the very same IP address with the very same email body and the very same TLD, just a different (rarely-used) domain itself in From and MAIL FROM, gets accepted by Gmail, and doesn’t even end up in the Spam folder, either — goes straight to Inbox. Again, this has been reproducible 100% in the last few weeks. And just because some users report issues with their newly purchased servers at a huge provider like Hetzner doesn’t mean that it’s something that’s not supported or isn’t supposed to work. Of course, with enough volume and enough churn, some individual IPs may come blacklisted, which doesn’t mean that it’s representative for the whole space.

                                                                            • And let’s not get all McCarthyist here on Lobsters, shall we? All those stories from 2013 about .su being used for spam and scams have zero credence, and are built around some scammer from abuse.ch shopping the very same story across multiple venues, going as far as Fox News (reprinting AP, I guess). Their suggestion on their own blog at the time was to completely block .su. (I don’t recall ever communicating with anyone from .ch. Should I maybe block .ch? Why don’t we all just block and blacklist each other?) And even if you disregard the potential bias of these databases and their unclear methodologies, .su is still one of the cleanest TLDs out there, especially for how many domain name registrations it has. Your own Spamhaus link reports .us at 33% bad (ouch!), .biz at 24% and .cn at 18.4%, so .su at 11.5% bad comes out pretty clean in comparison (.com and .net are between 4 and 5%, which is hardly very clean either, especially given the absolute numbers).

                                                                          1. 2

                                                                            I just re-read your email and it looks like the sequence of events is this:

                                                                            • you configured your server to forward mail from your primary domain to your free GMail account
                                                                            • GMail began thinking a significant portion of emails from your domain were malicious
                                                                            • after a few months of this happening, GMail began blocking emails from your domain

                                                                              I can see how this situation suggests that there should be an easy way to get your domain unblocked. I can also see why Google doesn’t make it easy for actual malicious actors.

                                                                            I ran my own email server (on a VPS provider with as many reputation issues as Hetzner) for more than a decade. I stopped not because my emails were being sent to spam or were being rejected, but because running your own email server correctly is hard. I think I can assume you weren’t running an open relay and had SPF and DKIM set up correctly, but without knowing the domain (which you didn’t mention in your original email and haven’t mentioned here) or the contents of the messages you were forwarding to GMail, it’s impossible for anyone to state that Google is overreaching by not accepting email from your domain.

                                                                            1. 2
                                                                              • The server has been forwarding the mail and running cron jobs for many years. Same domain, same IP, same recipient Gmail account. It’s not actually a free Gmail, BTW, because I was duped into believing that the mailbox size is infinite, whereas it has stopped growing at 15GB; so, due to all the mailing list archives, I now have to pay 1,99 USD/mo to be able to continue to receive new mail.

                                                                                • In a newly added cron job a couple of months back, I started sending myself a list of a few dozen domain names which I don’t control, over to my Gmail. This has been done exclusively to my own Gmail address. How could you possibly classify a few dozen plaintext domain names as malicious in a clean room?

                                                                                • You make a point of the fact that I’ve been sending these “malicious” emails for a “few months”, but you’re ignoring the fact that they aren’t actually malicious, nor were they the only emails being sent. How was I even supposed to know that one or two of these emails daily, among dozens of emails not so marked, would give my domain name a persistent “low reputation”?

                                                                              BTW, I do not actually use DKIM, but do use SPF and DMARC; note that these rejected emails do pass both SPF and DMARC; DMARC requires either SPF or DKIM to pass with domain alignment in order to generate a DMARC pass. My forwarding doesn’t appear to mangle existing DKIM signatures, but it would seem that even those emails are rejected, too. (However, emails from my own secondary domains without DKIM but with an SPF pass do get through.)
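
                                                                                For reference, the two records look roughly like this (placeholder domain and IP, not my real zone); DMARC then passes as long as SPF (or DKIM) passes for an aligned domain:

                                                                                ; SPF: authorize the sending server's IP for the domain
                                                                                example.org.         IN TXT "v=spf1 ip4:203.0.113.25 -all"
                                                                                ; DMARC: report-only policy, satisfied by an aligned SPF or DKIM pass
                                                                                _dmarc.example.org.  IN TXT "v=DMARC1; p=none; rua=mailto:postmaster@example.org"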

                                                                          2. 1

                                                                                Just as a semi-relevant data point, I send bulk mail from a server hosted at Hetzner and Gmail doesn’t block it. Gmail blocked that mail at the start, and so did several others, because the server’s IPv4 address had been used for all kinds of evil things (the previous customer ran an unpatched WordPress site and was 0wned). But then I

                                                                            • investigated each and every 4xx and 5xx SMTP response, and took care of every problem
                                                                            • signed everything with DKIM and added an explicit SPF yes
                                                                            • made the hostnames match, even ones that shouldn’t need to

                                                                                It took a month or two for the old reputation to age away, and investigating every SMTP transaction for bulk mail was tedious, but the mail has been flowing smoothly since. I don’t know what the OP is doing, but “being hosted at Hetzner” isn’t a problem in itself, even if you start with your IPv4 address on a half-dozen blacklists.

                                                                            1. 1

                                                                              It took a month or two for the old reputation to age away

                                                                                  You don’t really have to do that, BTW. I think it’s pretty standard practice for providers to swap the IP address in case you get one that’s burned and it’s an issue for you (it might well not be an issue for their next customer).

                                                                              1. 1

                                                                                    It’s not much time, anyway, and it mostly overlapped with the time to investigate other possible problems. No one had checked the recipient list, for a start.

                                                                          1. 2

                                                                            I’m the initial designer of the Qt documentation, IMO the best-ever developer documentation written by a comparably sized team, and I don’t think even much bigger teams have managed anything much better.

                                                                            What I managed was decent, at very low cost, and I didn’t do that by making tables a priority. Tables? What do tables have to do with getting the text right? Or with getting the text written in the first place?

                                                                              I suggest that you consider what you really want, and then implement that on top of something like Markdown. Make sure the features are ones that matter for your goal, pay close attention to what makes people write and what makes them not write, then iterate. In my case, making people write very-nearly-plain-text in their usual editors, version-controlled, was key; having someone who showed interest in the result was key; the lack of code examples in the “documentation” part of the documentation mattered; but the style of the output, or tables, did not.

                                                                            1. 3

                                                                              There’s at least one other way to get full sudo from restricted sudo, and I don’t think that one’s going to be fixed, so fixing this CVE won’t really help anyway.

                                                                              1. 2

                                                                                That’s a slightly different kind of restriction, though. Things giving away root is arguably the things’ problem (or at least the administrator’s, for giving root access to insufficiently secure things). This is the opposite problem—sudo giving out root access in an otherwise-secure configuration (even if it would take a bit of work to make (ALL !root) secure in practice).
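
                                                                                  For context, assuming this is the 2019 runas-UID bug, the vulnerable pattern is a sudoers line like the following (hypothetical user and command):

                                                                                  alice ALL = (ALL, !root) /usr/bin/vi

                                                                                  and the hole was that “sudo -u#-1 vi” slipped past the !root exclusion and ran the command as root anyway.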

                                                                                1. 2

                                                                                  Sure. What sudo does is widen the attack surface by providing that root-for-some-commands feature, and implement the feature such that the things have to be maximally careful, by providing the originating user’s own environment unfiltered instead of providing root’s environment.

                                                                                  IIRC providing the user’s environment is a deliberate design decision.

                                                                                  1. 1

                                                                                    What to do with the environment is configurable. I admit I’m not sure quite why preserving it is the default.
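
                                                                                      For reference, these are the knobs I mean, roughly (hypothetical example lines; see sudoers(5) for the details):

                                                                                      # run commands with a minimal, sanitized environment
                                                                                      Defaults env_reset
                                                                                      # explicitly whitelist variables that should survive the reset
                                                                                      Defaults env_keep += "LANG LC_MESSAGES"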

                                                                              1. 16

                                                                                To me it’s the burden of recovering the context.

                                                                                  I find it really hard to restart something when it was complicated enough that I no longer have a mental picture of it. Building that house of cards again from scratch, oh no.

                                                                                1. 8

                                                                                  Oh man, this happened to me today. I was an hour into reading some very indirect Scala when my boss came over and asked if I had dealt with an email our colleague had sent five minutes earlier. I could physically feel the zone and context draining out.

                                                                                  The email involved grabbing a file on one of our hosts and emailing it back. I didn’t do a thing for the next hour, hour and a half though.

                                                                                  1. 1

                                                                                    This gets to the heart of my question, and I feel your pain. I do hope your colleague was very productive during the hour.

                                                                                      There’s some difference in the brain during that hour, and trying to get back into the zone triggers procrastination. I think procrastination must be tied to that peculiar state. The drained state.

                                                                                  2. 4

                                                                                      This. Filling your brain with the state required to work on hard problems is expensive. You’re actually storing quite a bit of information and then using that information for intricate problem solving. This is why switching from one deep-work task to another is so difficult. If you’ve ever finished one project or task, then tried to start another but ended up browsing the internet instead, you’ve run into this. I have a half-baked theory that repeatedly overcoming this context-loading barrier without adequate breaks causes burnout, but that could just be me being lazy.

                                                                                      He only asked why, not how to overcome it, but in the case of programming I just close all my tabs and environments, open the one I need along with the design doc, then literally sit on my hands and stare and wait until the information is loaded. It’s the only way I’ve found that doesn’t require much willpower, because I don’t need to actually focus and resist temptation as much; the words are right there staring back at me.

                                                                                  1. 1

                                                                                    I’m going to go off on a tangent… the additional complexity largely comes from

                                                                                    • what to do when the vpn connection dies (how long to wait? which end reconnects?)
                                                                                    • dealing with devices that change address (neither end? one end? both?)
                                                                                    • what to do when packet transmission takes longer than usual (will the packets arrive? has the connection died?)
                                                                                    • supporting different OSes
                                                                                    • dealing with old versions of the same software (with, perhaps, different crypto)
                                                                                    • dealing with attacks, including replay attacks (can the clocks be trusted?)
                                                                                    • whether the VPN admin has root access on either end, or even both

                                                                                    Many of these things aren’t really crypto/VPN factors. Dealing with devices that change address and recovering from a sudden change of address isn’t a crypto matter, but it may be important to the practical usability of the package, for example. IMO it’s another example of the general rule that writing software is hard. There are usually many minor problems to solve, problems that don’t have much to do with the core purpose of the software.

                                                                                    FWIW I used tappet for years, but might choose wireguard now if I needed a VPN.