1. 8

    The year is 2037. People are still writing about how M1 Macs “hold up pretty well”.

    1. 37

      In 2037 the only supported operating systems for your M1 Mac will be NetBSD and Linux.

      1. 2

        Yeah, probably way earlier than that lol

        1. 2

          s/Linux/Debian/g

        2. 5

          By 2037, the average developer will finally be able to afford an M1 Mac :)

          1. 3

            I know you are probably joking, but… The median salary for developers in the US is apparently somewhere between $90,000 and $100,000. If you or your employer are not spending $1500-3000 on hardware every 2-3 years, you are doing something wrong. Pretty much the same story in most western countries. (Of course, this is not applicable to every other part of the world.)

            Then, the resale value of MacBooks is very high. I usually buy a new Mac every 1.5 years or so and sell my old MacBook for ~70% of the old price, which means that I have a modern laptop for ~400-500 Euro per year (losing ~30% of the price every 1.5 years works out to ~20% per year). Most other laptops, with their lower resale value, end up in the same yearly ballpark (e.g. €1500, written off after 3 years).

            1. 2

              Well, obviously it won’t be 2037, but the developers I know also tend to expect their hardware to last a bit longer.

              Salaries in the US have long stopped making sense (and in my opinion, probably aren’t sustainable for companies without a huge market cap). Elsewhere in the world, developer pay is more in line with that of other professionals.
              And most companies make rational decisions about hardware: buying in bulk a single model (which probably costs €500 in total) that works for the entire company, not just the developers, and not writing them off after just three years.

              A MacBook Air isn’t prohibitively expensive compared to other computer hardware, but on the other hand, the times when software development required a top-of-the-line computer are long gone.

        1. 18

          Neat idea. I’m not sure this is a captcha, but rather just a rate limiter.

          1. 13

            So much this. A proof-of-work scheme will up the ante, but not the way you think. People need to be able to do the work on the cheap (unless you want to put mobile users at a significant disadvantage) and malware/spammers can outscale you significantly.

            Ever heard of parasitic computing? TLDR: it’s what kickstarted Monero. Any website (or an ad on that website) can run arbitrary code on the device of every visitor. You can even shard the work and keep it relatively low-profile if you have the scale. Even if pre-computing is hard, with ad networks and live action during page views an attacker can get challenges solved just-in-time.
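
            To make “the work” concrete, here is a minimal sketch of the kind of hashcash-style puzzle such schemes hand out. This assumes a server-issued challenge and a SHA-256 target (the sha2 crate); the difficulty is purely illustrative, not taken from any specific captcha implementation:

            ```rust
            // Proof-of-work sketch: find a nonce such that SHA-256(challenge || nonce)
            // starts with `difficulty` zero bits. A real scheme also needs expiring,
            // single-use challenges so the work can't be precomputed or replayed.
            use sha2::{Digest, Sha256}; // sha2 = "0.10"

            fn leading_zero_bits(hash: &[u8]) -> u32 {
                let mut bits = 0;
                for &byte in hash {
                    if byte == 0 { bits += 8; } else { bits += byte.leading_zeros(); break; }
                }
                bits
            }

            fn solve(challenge: &[u8], difficulty: u32) -> u64 {
                (0u64..)
                    .find(|nonce| {
                        let mut hasher = Sha256::new();
                        hasher.update(challenge);
                        hasher.update(nonce.to_le_bytes());
                        leading_zero_bits(hasher.finalize().as_slice()) >= difficulty
                    })
                    .expect("search space exhausted")
            }

            fn main() {
                // ~2^20 hash attempts expected: trivial for a botnet, noticeable on a phone.
                let nonce = solve(b"server-issued-challenge", 20);
                println!("nonce = {nonce}");
            }
            ```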

            1. 9

              The way I look at it, it’s meant to defeat crawlers and spam bots; they attempt to cover the whole internet, and they want to spend 99% of their time parsing and/or spamming. But if this got popular enough to prompt bot authors to take the time to actually implement WASM/WebWorkers or a custom scrypt shim for it, they might still end up spending 99% of their time hashing instead.

              Something tells me they will probably give up and start knocking on the next door down the lane. And if I can force bot authors to invest in a $1M USD+ /year black hat “distributed computing” project so they can more effectively spam Cialis and Michael Kors Handbags ads, maybe that’s a good thing? I never made $1M a year in my life and probably never will; I would be glad to be able to generate that much value tho.

              If it comes down to a targeted attack on a specific site, captchas can already be defeated by captcha farm services or various other exploits (https://twitter.com/FGRibreau/status/1080810518493966337). Defeating that kind of targeted attack is a whole different problem domain.

              This is just an alternative approach that puts the thumbscrews on the bot authors in a different way, without requiring the user to read, stop and think, submit to surveillance, or even click on anything.

              1. 9

                This sounds very much like greytrapping. I first saw this in OpenBSD’s spamd: the first time you got an SMTP connection from an IP address, it would reply with a TCP window size of 1, one byte per second, with a temporary failure error message. The process doing this reply consumed almost no resources. If the connecting application tried again in a sensible amount of time then it would be allowed to talk to the real mail server.

                When this was first introduced, it blocked around 95% of spam. Spammers were using single-threaded processes to send mail and so it also tied each one up for a minute or so, reducing the total amount of spam in the world. Then two things happened. The first was that spammers moved to non-blocking spam-sending things so that their sending load was as small as the server’s. The second was that they started retrying failed addresses. These days, greytrapping does almost nothing.
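
                For illustration, a toy version of that stuttering behaviour, assuming nothing more than a bare TCP listener (std only; real spamd also advertises a tiny TCP window and whitelists senders that retry sensibly, which this sketch omits):

                ```rust
                // Toy SMTP tarpit: trickle a temporary-failure banner out one byte
                // per second, then hang up. Cheap for us, slow for a naive
                // single-threaded spam sender.
                use std::io::Write;
                use std::net::TcpListener;
                use std::thread;
                use std::time::Duration;

                fn main() -> std::io::Result<()> {
                    let listener = TcpListener::bind("127.0.0.1:2525")?;
                    for stream in listener.incoming() {
                        let mut stream = stream?;
                        thread::spawn(move || {
                            for &byte in b"451 4.7.1 Greylisted, try again later\r\n" {
                                if stream.write_all(&[byte]).is_err() {
                                    return; // client gave up
                                }
                                let _ = stream.flush();
                                thread::sleep(Duration::from_secs(1));
                            }
                        });
                    }
                    Ok(())
                }
                ```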

                The problem with any proof-of-work CAPTCHA system is that it’s asymmetric. CPU time on botnets is vastly cheaper than CPU time purchased legitimately. Last time I looked, it was a few cents per compromised machine and then as many cycles as you can spend before you get caught and the victim removes your malware. A machine in a botnet (especially one with an otherwise-idle GPU) can do a lot of hash calculations or whatever in the background.

                Something tells me they will probably give up and start knocking on the next door down the lane. And if I can force bot authors to invest in a $1M USD+ /year black hat “distributed computing” project so they can more effectively spam Cialis and Michael Kors Handbags ads, maybe that’s a good thing?

                It’s a lot less than $1M/year that they spend. All you’re really doing is pushing up the electricity consumption of folks with compromised computers. You’re also pushing up the energy consumption of legitimate users as well. It’s pretty easy to show that this will result in a net increase in greenhouse gas emissions, it’s much harder to show that it will result in a net decrease in spam.

                1. 2

                  These days, greytrapping does almost nothing.

                  postgrey easily kills at least half the spam coming to my box and saves me tonnes of CPU time.

                  1. 1

                    The problem with any proof-of-work CAPTCHA system is that it’s asymmetric. [botnets hash at least 1000x faster than the legitimate user]

                    Asymmetry is also the reason why it does work! Users probably have at least 1000x more patience than a typical spambot.

                    I have no idea what the numbers shake out to / which is the dominant factor, and I don’t really care; the point is that I can still make the spammers’ lives hell & get the results I want right now (humans only past this point) even though I’m not willing to let Google/CloudFlare fingerprint all my users.

                    If botnets solving captchas ever becomes a problem, wouldn’t that be kind of a good sign? It would mean the centralized “big tech” panopticons are losing traction. Folks are moving to a more distributed internet again. I’d be happy to step into that world and work forward from there 😊.

                  2. 5

                    captchas can already be defeated by […] or various other exploits (https://twitter.com/FGRibreau/status/1080810518493966337)

                    An earlier version of google’s captcha was automated in a similar fashion: they scraped the images and did a google reverse image search on them!

                    1. 3

                      I can’t find a link to a reference, but I recall a conversation with my advisor in grad school about the idea of “postage” on email where for each message sent to a server a proof of work would need to be done. Similar idea of reducing spam. It might be something in the literature worth looking into.

                      1. 3

                        There’s Hashcash, but there are probably other systems as well. The idea is that you add an X-Hashcash header with a comparatively expensive hash of the content and some headers, making bulk emails computationally expensive.

                        It never really caught on; I used it for a while years ago, but I haven’t received an email with this header since 2007 (I just checked). According to the Wikipedia page it’s used in Bitcoin nowadays, but it started out as an email thing. Kind of ironic, really.
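
                        For reference, a minimal sketch of minting a version-1 Hashcash stamp (stamp format per the spec; using the sha1 crate, with 20 bits and a plain decimal counter for simplicity):

                        ```rust
                        // Mint a v1 Hashcash stamp "1:bits:date:resource::rand:counter"
                        // whose SHA-1 hash has at least `bits` leading zero bits.
                        use sha1::{Digest, Sha1}; // sha1 = "0.10"

                        fn leading_zero_bits(hash: &[u8]) -> u32 {
                            let mut bits = 0;
                            for &byte in hash {
                                if byte == 0 { bits += 8; } else { bits += byte.leading_zeros(); break; }
                            }
                            bits
                        }

                        fn mint(resource: &str, bits: u32, date: &str, rand: &str) -> String {
                            (0u64..)
                                .map(|counter| format!("1:{bits}:{date}:{resource}::{rand}:{counter}"))
                                .find(|stamp| leading_zero_bits(Sha1::digest(stamp.as_bytes()).as_slice()) >= bits)
                                .expect("search space exhausted")
                        }

                        fn main() {
                            println!("X-Hashcash: {}", mint("alice@example.com", 20, "210622", "8f2c"));
                        }
                        ```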

                        1. 1

                          “Internet Mail 2000” from Daniel J. Bernstein? https://en.m.wikipedia.org/wiki/Internet_Mail_2000

                      2. 2

                        That is why we can’t have nice things… It is really heartbreaking how almost every technological advance can and will be turned to something evil.

                        1. 1

                          The downsides of a global economy for everything :-(

                      3. 3

                        Captchas are essentially rate limiters too, given enough determination from abusers.

                        1. 4

                          Maybe. The distinction I would draw is that a captcha attempts to assert that the user is human, where this scheme does not.

                          1. 2

                            I mean, objectively, yes. But, since spammers are automating passing the “human test” captchas, what is the value of that assertion? Our “human test” captchas come at the cost of impeding actual humans, and are failing to protect us from the sophisticated spammers, anyway. This proposed solution is better for humans, and will still prevent less sophisticated attackers.

                            If it can keep me from being frustrated that there are 4 pixels on the top left tile that happen to actually be part of the traffic light, then by all means, sign me the hell up!

                      1. 5

                        It is somewhat crazy that .com TLD registries are subverted for political purposes; I feel like they should be neutral (as long as the bank involved is not also involved in funding objectively questionable or violent things – I’m not familiar with the context here).

                        And yeah, I also agree with your point that it might not be smart to use an Iranian TLD – especially when it comes to blogging, authoritarian regimes seem to be a bit touchy. Any time you hear the words “Iran” and “blogger” in the same sentence in the news, it usually is not a positive story (prison or worse).

                        As a German, I would like to advertise the .de TLD because it is very affordable – only 5.97€ on inwx.de rather than the 13.69€ you pay for a .com – and, at least to my knowledge, it is not involved much in censorship. The downside is that people often expect content to be in the native language, but that doesn’t have to matter. You can also just register it as a backup TLD, in case your main one gets in trouble.

                        Another suggestion I have is the .dev TLD, because it already implies a technical blog. I think it is operated by Google, but I don’t think the US government would go through the trouble of censoring that TLD; their sanctions seem to be mostly targeted at businesses (rather than personal blogs).

                        1. 2

                          No, the bank was not even sanctioned directly until recently.

                          Correct. For example, Sattar Beheshti was a blogger who was sadly killed in jail. His crime was blogging.

                          I was thinking about .fr, .ch, .se, and .no. What do you think about them? Well, I have nothing against .de, but Germany usually cooperates with the U.S. in these matters.

                          The sanctions and seizures target everybody, not just businesses. Recently, they seized dozens of domains, claiming they spread misinformation. I agree that they spread misinformation and were harmful websites promoting the Iranian regime’s propaganda, but seizing domains is not acceptable in any case. Link: https://www.cnn.com/2021/06/22/politics/us-seizes-iran-website-domains/index.html

                          1. 8

                            I did a bit of research. This article claims that .de, .at, .is and .ru are good choices because those are the only TLDs where censorship requires a federal court decision. I have checked with DENIC (who are responsible for .de domains) and they affirm this. Federal court decisions here are publicly accessible, so I looked for relevant ones; however, I was able to find barely any, and those mostly related to objectively criminal matters.

                            1. 4

                              Being the subject of a court decision doesn’t mean much in countries where the independence of the courts is questionable (.ru).

                              1. 1

                                That is very correct.

                              2. 3

                                Thank you very much. Very helpful.

                              3. 2

                                I was thinking about .fr, .ch, .se, and .no. What do you think about them?

                                The problem you may have is that some of those (like .fr or .no) require a presence in that part of the world: see “Eligibility requirements”. You can pay a service (which Gandi sometimes offers) to have an address in the EU, but that’s quite costly.

                                1. 1

                                  That is true, but what about the domain extensions themselves? What about legal process and court orders? Should one worry about the influence of the USA or other countries?

                                  1. 2

                                    If we refer to this EFF document, your mileage may vary. For .fr, for instance, there is removal by arbitrator order based on intellectual property rights. Still for .fr, it appears that the only other avenue for getting a domain removed is a French court order, so another country’s order would be scrutinized by a local court.

                            1. 1

                              I have a similar setup except I use a corporate virtual machine. For a shell connection I use EternalTerminal with tmux in control mode and iTerm2. This lets me create native terminal tabs that are actually remote tmux tabs. EternalTerminal ensures that the connection never breaks even if my IP address changes (e.g. if I move from office back to home or I am on a wonky mobile connection).

                              1. 1

                                The point about sudo is irrelevant on single-user systems (which, I believe, are the most common kind of macOS installations) where infecting $USER is enough. Obligatory xkcd.

                                1. 3

                                  By the way, this is a part of what Ubuntu motd contains now:

                                  • Check out 6 great IDEs now available on Ubuntu. There may even be something worthwhile there for those crazy EMACS fans ;)

                                  1. 2

                                    The Ubuntu Blog advertising proprietary software? I hope they got paid for it at least.

                                    1. 1

                                      Wouldn’t want all those crazy Stallmanites hanging around calling them on advertising non-free software, which you can get from their new package manager that caters to for-profit companies.

                                      1. 1

                                        The word “emacs” doesn’t even appear in the listicle so I suppose it’s just clickbait.

                                      1. 1

                                        Does Intel mention all those CPU bugs and vulnerabilities in their (updated) system programming manuals / errata?

                                        1. 4

                                          Why you would ever want to access a string by a code point index rather than a byte offset is absolutely beyond me. Let alone the fact that this article ignores the existence of grapheme clusters (aka user-perceived characters).
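
                                          To make the distinction concrete, a quick sketch (grapheme counting via the unicode-segmentation crate; the expected counts are in the comments):

                                          ```rust
                                          // One string, three different "lengths": bytes, code points, graphemes.
                                          use unicode_segmentation::UnicodeSegmentation; // unicode-segmentation = "1"

                                          fn main() {
                                              // "é" as 'e' + combining accent, then the US flag (two regional indicators)
                                              let s = "e\u{0301}\u{1F1FA}\u{1F1F8}";
                                              println!("bytes:       {}", s.len());                   // 11
                                              println!("code points: {}", s.chars().count());         // 4
                                              println!("graphemes:   {}", s.graphemes(true).count()); // 2
                                          }
                                          ```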

                                          1. 1

                                            I don’t understand how it’s possible to pick all three here: “full-native speed”, a single address space OS (everything in ring 0), and security. I believe you can only pick two.

                                            1. 1

                                              Well, that’s what nebulet is trying to challenge.

                                                1. 1

                                                  I haven’t yet read the whole paper, but in the conclusion they say that performance was a non-goal. They “also improved message-passing performance by enabling zero-copy communication through pointer passing”, although I don’t see why zero-copy IPC can’t be implemented in a more traditional OS design.

                                                  The only (performance-related) advantage such a design has, in my opinion, is cheaper context switching, but I’m not convinced it’s worth it. Time (and benchmarks) will tell, I guess.
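
                                                  As a rough analogy for what pointer passing buys you in a single address space (plain Rust threads standing in for processes here; this is not Nebulet’s actual API):

                                                  ```rust
                                                  // "Zero-copy IPC" by handing over ownership of a pointer: the 64 MiB
                                                  // buffer itself is never copied, only the Box (one pointer) moves.
                                                  use std::sync::mpsc;
                                                  use std::thread;

                                                  fn main() {
                                                      let (tx, rx) = mpsc::channel::<Box<[u8]>>();
                                                      let producer = thread::spawn(move || {
                                                          let buf = vec![42u8; 64 << 20].into_boxed_slice();
                                                          tx.send(buf).unwrap(); // transfers the pointer, not 64 MiB of bytes
                                                      });
                                                      let buf = rx.recv().unwrap();
                                                      println!("received {} MiB without copying", buf.len() >> 20);
                                                      producer.join().unwrap();
                                                  }
                                                  ```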

                                                  1. 1

                                                    When communication across processes becomes cheaper than posting a message to a queue belonging to another thread in the same process in a more traditional design, I’d say that that’s quite a monstrous “only” benefit.

                                                    I should have drawn your attention to section 2.1 in the original comment; that’s where your original query is addressed. Basically, the protection comes from static analysis, a bit like the original Native Client or Java’s bytecode verifier.

                                              1. 2

                                                I remember making a procedure that dynamically generated functions with a “bound” this pointer. It worked by allocating a trampoline and writing the object’s address into it. It was horrible.
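
                                                For the curious, a hypothetical reconstruction of the trick on x86-64 Linux (SysV calling convention, the libc crate, and an OS that still tolerates a writable+executable mapping; W^X-enforcing systems will refuse it):

                                                ```rust
                                                // Build a trampoline that calls `bump` with `obj` pre-bound as the first
                                                // argument: movabs rdi, obj; movabs rax, bump; jmp rax. Horrible, as noted.
                                                use std::ptr;

                                                struct Counter { n: u64 }

                                                extern "C" fn bump(c: *mut Counter) {
                                                    unsafe { (*c).n += 1 }
                                                }

                                                unsafe fn make_trampoline(obj: *mut Counter) -> extern "C" fn() {
                                                    let page = libc::mmap(
                                                        ptr::null_mut(), 4096,
                                                        libc::PROT_READ | libc::PROT_WRITE | libc::PROT_EXEC,
                                                        libc::MAP_PRIVATE | libc::MAP_ANONYMOUS, -1, 0,
                                                    ) as *mut u8;
                                                    assert_ne!(page as isize, -1, "mmap failed (W^X?)");
                                                    let mut code = Vec::new();
                                                    code.extend_from_slice(&[0x48, 0xbf]);                        // movabs rdi, imm64
                                                    code.extend_from_slice(&(obj as u64).to_le_bytes());
                                                    code.extend_from_slice(&[0x48, 0xb8]);                        // movabs rax, imm64
                                                    code.extend_from_slice(&(bump as usize as u64).to_le_bytes());
                                                    code.extend_from_slice(&[0xff, 0xe0]);                        // jmp rax
                                                    ptr::copy_nonoverlapping(code.as_ptr(), page, code.len());
                                                    std::mem::transmute(page)
                                                }

                                                fn main() {
                                                    let mut counter = Counter { n: 0 };
                                                    let bound = unsafe { make_trampoline(&mut counter) };
                                                    bound(); // calls bump(&mut counter) with no visible argument
                                                    println!("n = {}", counter.n); // 1
                                                }
                                                ```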

                                                1. 7

                                                  i put on my robe and wizard hat

                                                  1. 4

                                                    Curious what it would take to flash a modified version of this to an old iPhone. Could one theoretically boot a Linux kernel if the signing check was omitted?

                                                    1. 4

                                                      Not sure if it’s entirely relevant to this, but I did get Android installed on my 1st gen iPhone back in the day using this: https://www.theiphonewiki.com/wiki/IDroid

                                                      1. 1

                                                        I’m guessing the keys themselves have not been released, so the issue is getting anything non-Apple onto the device in the first place? Also guessing: if we had the keys, we could easily modify iBoot, or relatively easily port coreboot or whatever the cool kids are using these days, and ignore signing?

                                                        1. 2

                                                          You don’t really need keys these days to boot something. You can use kloader which is basically kexec for (32-bit) iOS. It has been used for dual-booting a signed iOS installation with an unsigned one.

                                                          1. 2

                                                            Wow, that’s awesome. I have an old iPhone 4 that I’d love to re-purpose in this way. Where should I start reading/researching in order to do this myself? Thanks!

                                                        2. 1

                                                          There was the OpeniBoot project – an open source reimplementation of iBoot that works on older iPhones up to iPhone 4.

                                                        1. 2

                                                          Any security minded people have thoughts on this?

                                                          1. 13

                                                            Debian’s security record regarding CAs is atrocious. By this I mean default configuration and things like the ca-certificates package.

                                                            Debian used to include non-standard junk CAs like CACert and also refused to consider CA removal a security update, so it’s hugely hypocritical of this page to talk about “many insecure CAs” out of 400+.

                                                            Signing packages is a good idea, as the signature is bound to the data and not to the transport like https, so in principle I agree that using https for Debian repositories doesn’t gain much in terms of extra security. However, these days the baseline expectation should be that everything defaults to https, as in: no more port-80 unauthenticated http traffic.

                                                            Yes, moving Debian repositories over to https breaks local caching like apt-cacher (degrading it to a TCP proxy) and requires some engineering work to figure out how to structure a global mirror network, but this will have to be done sooner or later. I would also not neglect the privacy implications: with https, people deploying passive network snooping have to apply heuristics and put in more effort than simply monitoring http.

                                                            Consider the case where someone sitting passively on a network monitors the download of a package that contains the fix for a remotely exploitable vulnerability. That passive attacker can just try to race the host and exploit the vulnerability before the update can be installed.

                                                            Package signing in Debian suffers from problems at the underlying gpg level; gpg is so 90s in that it’s really hard to use sustainably long-term: key rotation and key strength are problem areas.

                                                            1. 4

                                                              Package signing in Debian suffers from problems at the underlying gpg level; gpg is so 90s in that it’s really hard to use sustainably long-term: key rotation and key strength are problem areas.

                                                              What do you consider a better alternative to gpg?

                                                              1. 10

                                                                signify is a pretty amazing solution here - @tedu wrote it, and this paper details how OpenBSD has implemented it.

                                                              2. 4

                                                                non-standard junk CAs like CACert

                                                                imho CACert feels more trustworthy than 90% of the commercial CAs. i really would like to see cacert paired with the level of automation of letsencrypt. edit: and being included in CA packages.

                                                                1. 2

                                                                  With the dawn of Let’s Encrypt, is there still really a use case for CACert?

                                                                  1. 4

                                                                    i think alternatives are always good. the only thing where they really differ is that letsencrypt certificates are cross-signed by a CA already included in browsers, and that letsencrypt has automation tooling. the level of verification is about the same. i’d go as far as to say that cacert is more secure because of the web of trust, but that may be just subjective.

                                                            1. 1

                                                              It would also be nice to be able to compose multiple articles into single books.

                                                              1. 3

                                                                Writing something that, I hope, will eventually become a text editor: multithreaded and extensible with MoonScript/Lua (or any other language via loadable libraries and external processes). The implementation language is Rust, and I’m going to use tokio-rs for async IO and LuaJIT for Lua. At the moment I have a basic rope implementation with Unicode support (including extended grapheme clusters, thanks to the unicode-segmentation crate) that passes some tests. The source code is here.

                                                                1. 2

                                                                  I wonder if there’s some lightweight browser that just displays HTML/CSS webpages and maybe runs some JavaScript on trusted websites, without WebRTC, WebGL, WebDRM and the other bloatware that is being baked into the web standards these days, eating resources and extending the attack surface.

                                                                  Why can’t modern software just do the damn thing it’s asked to without doing anything behind my back?

                                                                  1. 1

                                                                    Dillo

                                                                    Just HTML/CSS2 – no JavaScript, “HTML5”, or CSS3 – and it’s blazing fast.

                                                                    1. 1

                                                                      What bothers me is that Dillo appears to be unmaintained and has “alpha” SSL support that I failed to enable (the suggested --enable-ssl didn’t work).