Threads for rzhikharevich

  1. 1

    I am not sure that the number of executed instructions is a good proxy for performance, given the presence of cache misses and the different execution times of different instructions. I also wonder whether perf shows the number of machine-code instructions or the number of actual uops executed, and how it handles REP-prefixed instructions.
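
    I suppose one could check directly by counting both in one run; a sketch, assuming an Intel CPU (instructions is a generic event, the uop counter name varies by microarchitecture, and ./your-binary is a placeholder):

        # Count retired instructions and retired uops side by side.
        # uops_retired.retire_slots is one Intel event name; confirm it
        # exists on your machine with `perf list` before relying on it.
        perf stat -e instructions,uops_retired.retire_slots ./your-binary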

    1. 2

      My desktop. macOS Monterey, yabai (tiling window manager). Main screen typically looks like this.

      1. 10

        Another example (other than hyperscan) of a regex implementation with streaming support is pire. I’m a bit surprised that this is supposedly not a very common feature of regex libraries.

        Would be nice to see benchmarks.

        1. 2

          Oh interesting, I can’t believe I’d never seen pire before. Looks really cool.

          Ergex is relatively fast in what limited benchmarking I’ve done, but because it tracks POSIX submatches, it will always be slower than engines that don’t.

          The push-oriented Aho-Corasick does help considerably with simultaneous matching - there’s only a single AC automaton, regardless of how many regular expressions are in the database (versus the simpler approach of having an AC per expression).

          I experimented with multi-Rabin-Karp and Commentz-Walter in place of AC, but AC ended up being the fastest in my testing. I’m sure there are cases where that would not be true.

        1. 15

          You lost me at “the great work from homebrew”

          Ignoring UNIX security best practices of the last, I dunno, 30 or 40 years, then actively preventing people from using the tool in any fashion that might be more secure, and refusing to acknowledge any such concerns is hardly “great work”.

          I could go on about their abysmal dependency resolution logic, but really if the security shit show wasn’t enough to convince you, the other failings won’t either.

          But also - suggesting Apple ship a first party container management tool because “other solutions use a VM”, suggests that either you think a lot of people want macOS containers (I’m pretty sure they don’t) or that you don’t understand what a container is/how it works.

          The “WSL is great because now I don’t need a VM” is either ridiculously good sarcasm, or yet again, evidence that you don’t know how something works. (For those unaware, WSL2 is just a VM. Yes, it’s prettied up to make it more seamless, but it’s a VM.)

          1. 23

            I don’t know what’s SO wrong about Homebrew that every time it’s mentioned someone has to come and say that it sucks.

            For the use case of a personal computer, Homebrew is great. The packages are simple, it’s easy to install packages locally (I install mine in ~/.Homebrew; see the sketch below) and all my dependencies are always up to date. What would a “proper” package manager do better than Homebrew that I care about? Be specific please, because I have no idea what you’re talking about in terms of a security “shit show” or “abysmal” dependency resolution.
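
            For reference, a local install like that is roughly the following; a sketch, assuming ~/.Homebrew as the prefix (the path is my own choice; the clone-anywhere install is what Homebrew’s docs describe):

                # Clone Homebrew into a per-user prefix instead of /usr/local,
                # then let `brew shellenv` set up PATH and friends:
                git clone https://github.com/Homebrew/brew ~/.Homebrew
                eval "$(~/.Homebrew/bin/brew shellenv)"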

            1. 12
              • A proper package manager wouldn’t allow unauthenticated installs into a global (from a $PATH perspective) location.
              • A proper package manager wouldn’t actively prevent the user from removing the “WTF DILIGAF” permissions Homebrew sets, or from requiring authenticated installs.
              • A proper package manager that has some form of “install binaries from source” would support and actively encourage building as an untrusted user, and requiring authentication to install.
              • A proper package manager would resolve dynamic dependencies at install time not at build time.
              • A proper open source community wouldn’t close down any conversation that dares to criticise their shit.
              1. 11

                Literally none of those things have ever had any impact on me after what, like a decade of using Homebrew? I’m sorry if you’ve run into problems in the past, but it’s never a good idea to project your experience onto an entire community of people. That way lies frustration.

                1. 5

                  Who knew that people would have different experiences using software.

                  it’s never a good idea to project your experience onto an entire community of people

                  You should take your own advice. The things I stated are objective facts. I didn’t comment on how they will affect you as an individual, I stated what the core underlying issue is.

                  1. 6

                    You summarized your opinion on “proper” package managers and presented it as an authoritative standpoint. I don’t see objectiveness anywhere.

                2. 3

                  I don’t really understand the fuss about point 1. The vast majority of developer machines are single user systems. If an attacker manages to get into the user account it barely matters if they can or cannot install packages since they can already read your bank passwords, SSH keys and so on. Mandatory relevant xkcd.

                  Surely, having the package manager require root to install packages would be useful in many scenarios but most users of Homebrew rightfully don’t care.

                3. 8

                  As an occasional Python developer, I dislike that Homebrew breaks old versions of Python, including old virtualenvs, when a new version comes out. I get that the system is designed to always get you the latest version of stuff and have it all work together, but in the case of Python, Node, Ruby, etc. it should really be designed so that it gets you the latest point releases but leaves the 3.X versions installed side by side, since too much breaks from 3.6 to 3.7 or whatever.

                  1. 8

                    In my opinion, for languages that can break between minor releases you should use a version manager (Python seems to have pyenv). That’s what I do with Node: I use Homebrew to install nvm and I use nvm to manage my Node versions. For Go, in comparison, I just use the latest version from Homebrew because I know their goal is backwards compatibility.
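
                    Concretely, that setup is roughly this; a sketch, with Node 16 as an example version (nvm also needs its shell hook sourced in your profile; `brew info nvm` prints the exact lines):

                        # Homebrew installs the version manager once;
                        # nvm then owns the language versions.
                        brew install nvm
                        nvm install 16    # whichever Node major you need
                        nvm use 16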

                    1. 5

                      Yeah, I eventually switched to Pyenv, but like, why? Homebrew is a package manager. Pyenv is a package manager… just for Python. Why can’t homebrew just do this for me instead of requiring me to use another tool?

                      1. 1

                        Or you could use asdf for managing python and node.

                      2. 7

                        FWIW I treat Homebrew’s Python as a dependency for other apps installed via Homebrew. I avoid using it for my own projects. I can’t speak on behalf of Homebrew officially, but that’s generally how Homebrew treats the compilers and runtimes. That is, you can use what Homebrew installs if you’re willing to accept that Homebrew is a rolling package manager that strives always to be up-to-date with the latest releases.

                        If you’re building software that needs to support a version of Python that is not Homebrew’s favored version, you’re best off using pyenv (installed with brew install pyenv) or a similar tool. Getting my teams at work off of brewed Python and onto pyenv-managed Python took only a little work and has saved a good bit of troubleshooting time.
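
                        Per project, the switch is roughly this; a sketch, with the version numbers as examples only:

                            # pyenv owns the interpreters; Homebrew only
                            # installs pyenv itself.
                            brew install pyenv
                            pyenv install 3.7.12
                            pyenv local 3.7.12    # writes .python-version in the project dir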

                        1. 2

                          This is how I have started treating Homebrew as well, but I wish it were different and suitable for use as a pyenv replacement.

                          1. 2

                            asdf is another decent option too.

                          2. 5

                            I’m a Python developer, and I use virtual environments, and I use Homebrew, and I understand how this could theoretically happen… yet I’ve literally never experienced it.

                            it should really be designed so that it gets you the latest point releases but leaves the 3.X versions installed side by side, since too much breaks from 3.6 to 3.7 or whatever.

                            Yep, that’s what it does. Install python@3.7 and you’ve got Python 3.7.x forever.
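
                            If I remember right, versioned formulae are keg-only, so the interpreter isn’t linked onto PATH by default; roughly:

                                brew install python@3.7
                                # keg-only: call it via the keg prefix,
                                # or link it yourself
                                "$(brew --prefix python@3.7)/bin/python3.7" --version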

                            1. 1

                              Maybe I’m just holding it wrong. :-/

                            2. 3

                              I found this article, which was floating around a few months ago, helpful: https://justinmayer.com/posts/homebrew-python-is-not-for-you/

                              I use macports btw where I have python 3.8, 3.9 and 3.10 installed side by side and it works reasonably well.

                              For node I gave up (only need it for small things) and I use nvm now.

                            3. 8

                              Homebrew is decent, but Nix for Darwin is usually available. There are in-depth comparisons between them, but in ten words or less: atomic upgrade and rollback; also, reproducibility by default.
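
                              With the classic nix-env UI, that looks roughly like this; a sketch, with the package name as an example only:

                                  # every install creates a new profile generation...
                                  nix-env -iA nixpkgs.ripgrep
                                  # ...so undoing it is one atomic step back
                                  nix-env --rollback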

                              1. 9

                                And Apple causes tons of grief for the Nix team every macOS release. It would be nice if they stopped doing that.

                                1. 2

                                  I stopped using Nix on macOS after it started requiring a separate unencrypted volume just for Nix. Fortunately, NixOS works great in a VM.

                                  1. 2

                                    It seems to work on an encrypted volume now at least!

                              2. 4

                                I really, really hate how Homebrew never asks me for confirmation. If I run brew upgrade it just does it. I have zero control over it.

                                I come from zypper and dnf, which are both great examples of really good UX. I guess if all you know is Homebrew or .dmg files, Homebrew is amazing. Compared to other package managers, it might even be worse than winget…
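
                                For comparison, here is roughly where dnf differs (output paraphrased from memory):

                                    dnf upgrade
                                    # ...prints the full transaction summary, then waits:
                                    # Is this ok [y/N]:
                                    # `brew upgrade` has no equivalent pause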

                                1. 2

                                  If I run brew upgrade it just does it

                                  … yeah? Can we agree that this is a weird criticism or is it just me?

                                2. 2

                                  Overall I like it a lot and I’m very grateful brew exists. It’s smooth sailing the vast majority of the time.

                                  The only downside I get is: upgrades are not perfectly reliable. I’ve seen it break software on upgrades, with nasty dynamic linker errors.

                                  Aside from that it works great. IME it works very reliably if I install all the applications I want in one go from a clean slate and then don’t poke brew again.

                                3. 4

                                  you think a lot of people want macOS containers (I’m pretty sure they don’t)

                                  I would LOVE macOS containers! Right now, in order to run a build on a macOS machine in CI, I have to accept whatever the machine I’m given has installed (and the version of the OS) and just hope that’s good enough, or I have to script a bunch of install/configuration stuff (and I still can’t change the OS version) that has to run every single time.

                                  Basically, I’d love to be able to use macOS containers in the exact same way I use Linux containers for CI.

                                  1. 1

                                    Yes!!

                                    1. Headless macOS would be wonderful
                                    2. Containers would be fantastic. Even without the docker-like incremental builds, something like FreeBSD jails or LXC containers would be very empowering for build environments, dev servers, etc
                                    1. 1

                                      Containers would be fantastic. Even without the docker-like incremental builds, something like FreeBSD jails or LXC containers would be very empowering for build environments, dev servers, etc

                                      These days, Docker (well, Moby) delegates to containerd for managing both isolation environments and image management.

                                      Docker originally used a union filesystem abstraction and tried to emulate that everywhere. Containerd provides a snapshot abstraction and tries to emulate that everywhere. This works a lot better because you can trivially implement snapshots with union mounts (each snapshot is a separate directory that you union mount on top of another one) but the converse is hard. APFS has ZFS-like snapshot support and so adding an APFS snapshotter to containerd is ‘just work’ - it doesn’t require anything else.

                                      If the OS provides a filesystem with snapshotting and an isolation mechanism then it’s relatively easy to add a containerd snapshotter and shim to use them (at least, in comparison with writing a container management system from scratch).

                                      Even without a shared-kernel virtualisation system, you could probably use xhyve[1] to run macOS VMs for each container. As far as I recall, the macOS EULA allows you to run as many macOS VMs on Apple hardware as you want.

                                      [1] xhyve is a port of FreeBSD’s bhyve to run on top of the XNU hypervisor framework, which is used by the Mac version of Docker to run Linux VMs.

                                  2. 2

                                    Which particular bits of Unix security best practice is it ignoring? There are functionally no Macs in use today that are multi-user systems.

                                    1. 3

                                      All of my Macs and my family’s Macs are multi-user.

                                      1. 2

                                        Different services in the OS run as different users. It is in general a good thing to run services with the minimum required privileges: different OS-provided services run with different privileges, different Homebrew services run with different privileges, and so on. So reducing the blast radius is a win even if there is only one human user, as there are often more users active at once; just not all users are meatbags.

                                      2. 1

                                        I’ve been a Homebrew user since my latest Mac (2018), but on my previous one (2011) I used MacPorts. Given you seem to have more of an understanding of what a package manager should do than I have, do you have any thoughts on MacPorts?

                                        1. 4

                                          I believe MacPorts does a better job of things, but I can’t speak to it specifically, as I haven’t used it in a very long time.

                                          1. 1

                                            Thanks for the response, it does seem like it’s lost its popularity and I’m not quite sure why. I went with brew simply because it seemed to be what most articles/docs I looked at were using.

                                            1. 3

                                              I went with brew simply because it seemed to be what most articles/docs I looked at were using.

                                              Pretty much this reason. Homebrew came out when MacPorts still did source-only installs and had some other subtle gotchas. Since then, those have been cleared up, but Homebrew had already snowballed into “it’s what my friends are all using”.

                                              I will always install MP on every Mac I use, but I’ve known I’ve been in the minority for quite a while.

                                              1. 1

                                                Do you find the number of packages to be comparable to brew? I don’t have a good enough reason to switch but would potentially use it again when I get another mac in the future.

                                                1. 3

                                                  I’ve usually been able to find something unless it’s extremely new, obscure, or has bulky dependencies like gtk/qt or specific versions of llvm/gcc. The other nice thing is that if the build is relatively standard, uses ‘configure’, or fits into an existing PortGroup, it’s usually pretty quick to whip up a local Portfile (they’re Tcl-based, so it’s easy to copy a similar package’s config and modify it to fit); see the sketch below.
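
                                                  The workflow for that is roughly the following (paths are my own choice; the Portfile itself is Tcl, so start from a similar port’s file):

                                                      mkdir -p ~/ports/sysutils/myport
                                                      $EDITOR ~/ports/sysutils/myport/Portfile
                                                      portindex ~/ports    # regenerate the local index
                                                      # then add file:///Users/you/ports to
                                                      # /opt/local/etc/macports/sources.conf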

                                                  Disclaimer: I don’t work on web frontends so I usually don’t deal with node or JS/TS-specific tools.

                                                  1. 3

                                                    On MacPorts vs Homebrew, I usually blame popularity first and irrational fear of the term ‘Ports’, as in “BSD Ports System”, second. On the second cause, a lot of people just don’t seem to know that what started off as a way to make ‘configure; make; make install’ more maintainable across multiple machines has turned into a binary package creation system. I don’t know anything about Homebrew, so I can’t comment there.

                                        1. 8

                                          The year is 2037. People are still writing about how M1 Macs “hold up pretty well”.

                                          1. 37

                                            In 2037 the only supported operating systems for your M1 Mac will be NetBSD and Linux.

                                            1. 2

                                              Yeah, probably way earlier than that lol

                                              1. 2

                                                s/Linux/Debian/g

                                              2. 5

                                                By 2037, the average developer will finally be able to afford an M1 Mac :)

                                                1. 3

                                                  I know you are probably joking, but… The median salary for developers in the US is apparently somewhere between $90,000 and $100,000. If you or your employer are not spending $1500-3000 on hardware every 2-3 years, you are doing something wrong. Pretty much the same story in most western countries. (Of course, this is not applicable to every other part of the world.)

                                                  Then, the resale value of MacBooks is very high. I usually buy a new Mac every 1.5 years or so and sell my old MacBook for ~70% of the old price. Which means that I have a modern laptop for ~400-500 Euro per year. Most other laptops, with their lower resale value, end up in the same ballpark yearly (e.g. €1500, written off after 3 years).

                                                  1. 2

                                                    Well, obviously it won’t be 2037, but the developers I know also tend to expect their hardware to last a bit longer.

                                                    Salaries in the US have long stopped making sense (and in my opinion, probably aren’t sustainable for companies without a huge market cap). Elsewhere in the world, developer pay is more in line with that of other professionals.

                                                    And most companies make rational decisions about hardware: buying a single model (which probably costs €500 in total) in bulk that works for the entire company, not just the developers; not writing them off in just three years.

                                                    A MacBook Air isn’t prohibitively expensive compared to other computer hardware, but on the other hand, the times when software development required a top-of-the-line computer are long gone.

                                              1. 18

                                                Neat idea. I’m not sure this is a captcha, but rather just a rate limiter.

                                                1. 13

                                                  So much this. A proof-of-work scheme will up the ante, but not the way you think. People need to be able to do the work on the cheap (unless you want to put mobile users at a significant disadvantage) and malware/spammers can outscale you significantly.

                                                  Ever heard of parasitic computing? TLDR: it’s what kickstarted Monero. Any website (or an ad in that website) can run arbitrary code on the device of every visitor. You can even shard the work and do it relatively low-profile if you have the scale. Even if pre-computing is hard, with ad networks and live action during page views an attacker can get challenges solved just-in-time.

                                                  1. 9

                                                    The way I look at it, it’s meant to defeat crawlers and spam bots; they attempt to cover the whole internet, they want to spend 99% of their time parsing and/or spamming, but if this got popular enough to prompt bot authors to take the time to actually implement WASM/WebWorkers or a custom Scrypt shim for it, they might still end up spending 99% of their time hashing instead.

                                                    Something tells me they will probably give up and start knocking on the next door down the lane. And if I can force bot authors to invest in a $1M USD+/year black hat “distributed computing” project so they can more effectively spam Cialis and Michael Kors Handbags ads, maybe that’s a good thing? I never made $1M a year in my life, probably never will, I would be glad to be able to generate that much value tho.

                                                    If it comes down to a targeted attack on a specific site, captchas can already be defeated by captcha farm services or various other exploits (https://twitter.com/FGRibreau/status/1080810518493966337). Defeating that kind of targeted attack is a whole different problem domain.

                                                    This is just an alternate approach to put the thumb screws on the bot authors in a different way, without requiring the user to read, stop and think, submit to surveillance, or even click on anything.

                                                    1. 9

                                                      This sounds very much like greytrapping. I first saw this in OpenBSD’s spamd: the first time you got an SMTP connection from an IP address, it would reply with a TCP window size of 1, one byte per second, with a temporary failure error message. The process doing this reply consumed almost no resources. If the connecting application tried again in a sensible amount of time then it would be allowed to talk to the real mail server.

                                                      When this was first introduced, it blocked around 95% of spam. Spammers were using single-threaded processes to send mail and so it also tied each one up for a minute or so, reducing the total amount of spam in the world. Then two things happened. The first was that spammers moved to non-blocking spam-sending things so that their sending load was as small as the server’s. The second was that they started retrying failed addresses. These days, greytrapping does almost nothing.

                                                      The problem with any proof-of-work CAPTCHA system is that it’s asymmetric. CPU time on botnets is vastly cheaper than CPU time purchased legitimately. Last time I looked, it was a few cents per compromised machine and then as many cycles as you can spend before you get caught and the victim removes your malware. A machine in a botnet (especially one with an otherwise-idle GPU) can do a lot of hash calculations or whatever in the background.

                                                      Something tells me they will probably give up and start knocking on the next door down the lane. And if I can force bot authors to invest in a $1M USD+/year black hat “distributed computing” project so they can more effectively spam Cialis and Michael Kors Handbags ads, maybe that’s a good thing?

                                                      It’s a lot less than $1M/year that they spend. All you’re really doing is pushing up the electricity consumption of folks with compromised computers. You’re also pushing up the energy consumption of legitimate users as well. It’s pretty easy to show that this will result in a net increase in greenhouse gas emissions, it’s much harder to show that it will result in a net decrease in spam.

                                                      1. 2

                                                        These days, greytrapping does almost nothing.

                                                        postgrey easily kills at least half the SPAM coming to my box and saves me tonnes of CPU time

                                                        1. 1

                                                          The problem with any proof-of-work CAPTCHA system is that it’s asymmetric. [botnets hash at least 1000x faster than the legitimate user]

                                                          Asymmetry is also the reason why it does work! Users probably have at least 1000x more patience than a typical spambot.

                                                          I have no idea what the numbers shake out to / which is the dominant factor, and I don’t really care; the point is that I can still make the spammers’ lives hell & get the results I want right now (humans only past this point) even though I’m not willing to let Google/CloudFlare fingerprint all my users.

                                                          If botnets solving captchas ever becomes a problem, wouldn’t that be kind of a good sign? It would mean the centralized “big tech” panopticons are losing traction. Folks are moving to a more distributed internet again. I’d be happy to step into that world and work forward from there 😊.

                                                        2. 5

                                                          captchas can already be defeated by […] or various other exploits (https://twitter.com/FGRibreau/status/1080810518493966337)

                                                          An earlier version of google’s captcha was automated in a similar fashion: they scraped the images and did a google reverse image search on them!

                                                          1. 3

                                                            I can’t find a link to a reference, but I recall a conversation with my advisor in grad school about the idea of “postage” on email where for each message sent to a server a proof of work would need to be done. Similar idea of reducing spam. It might be something in the literature worth looking into.

                                                            1. 3

                                                              There’s Hashcash, but there are probably other systems as well. The idea is that you add a X-Hashcash header with a comparatively expensive hash of the content and some headers, making bulk emails computationally expensive.

                                                              It never really caught on; I used it for a while years ago, but I haven’t received an email with this header since 2007 (I just checked). Its proof-of-work scheme is apparently used in Bitcoin nowadays, according to the Wikipedia page, but it started out as an email thing. Kind of ironic, really.
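
                                                              For the curious, minting a stamp with the reference hashcash CLI looks roughly like this (flags from memory: -m mints, -b sets the bit count; the stamp format shown is the header’s field layout):

                                                                  hashcash -mb20 recipient@example.com
                                                                  # prints a stamp of the form
                                                                  # ver:bits:date:resource:ext:rand:counter
                                                                  # suitable for an X-Hashcash header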

                                                              1. 1

                                                                “Internet Mail 2000” from Daniel J. Bernstein? https://en.m.wikipedia.org/wiki/Internet_Mail_2000

                                                            2. 2

                                                              That is why we can’t have nice things… It is really heartbreaking how almost any technological advance can and will be turned to something evil.

                                                              1. 1

                                                                The downsides of a global economy for everything :-(

                                                            3. 3

                                                              Captchas are essentially rate limiters too, given enough determination from abusers.

                                                              1. 4

                                                        Maybe. The distinction I would make is that a captcha attempts to assert that the user is human, where this scheme does not.

                                                                1. 2

                                                                  I mean, objectively, yes. But, since spammers are automating passing the “human test” captchas, what is the value of that assertion? Our “human test” captchas come at the cost of impeding actual humans, and are failing to protect us from the sophisticated spammers, anyway. This proposed solution is better for humans, and will still prevent less sophisticated attackers.

                                                          If it can keep me from being frustrated that there are 4 pixels on the top left tile that happen to actually be part of the traffic light, then by all means, sign me the hell up!

                                                            1. 5

                                                          It is somewhat crazy that .com TLD registries are subverted for political purposes; I feel like they should be neutral (as long as the bank involved is not also involved in funding objectively questionable or violent things; I’m not familiar with the context here).

                                                          And yeah, I also agree with your point that it might not be smart to use an Iranian TLD – especially when it comes to blogging, authoritarian regimes seem to be a bit touchy. Any time you hear the words “Iran” and “blogger” in the same sentence in the news, it usually is not a positive story (prison or worse).

                                                          As a German, I would like to advertise the .de TLD because it is very affordable – only 5.97€ on inwx.de rather than the 13.69€ you pay for a .com – and, at least to my knowledge, it is not involved very much in censorship. The downside is that people often expect the content to be in the native language, but that doesn’t matter much. You can also just register it as a backup TLD, in case your main one gets in trouble.

                                                          Another suggestion I have is using the .dev TLD, because it already implies a technical blog. I think it is operated by Google, but I don’t think the US government would go through the trouble of censoring that TLD; I think their sanctions are mostly targeted at businesses (rather than personal blogs).

                                                              1. 2

                                                                No, the bank was not even sanctioned directly until recently.

                                                            Correct. For example, Sattar Beheshti was a blogger who was sadly killed in jail. His crime was blogging.

                                                            I was thinking about .fr, .ch, .se, and .no. What do you think about them? Well, I have nothing against .de, but Germany usually cooperates with the U.S. in these matters.

                                                            The sanctions and seizures target everybody, not just businesses. Recently, they seized dozens of domains, claiming they spread misinformation. I agree that they spread misinformation and were harmful websites promoting the Iranian regime’s propaganda, but seizing domains is not acceptable in any case. Link: https://www.cnn.com/2021/06/22/politics/us-seizes-iran-website-domains/index.html

                                                                1. 8

                                                              I did a bit of research. This article claims that .de, .at, .is and .ru are good because those are the only TLDs where censorship can occur only via a federal court decision. I have checked with DENIC (which is responsible for .de domains) and they affirm this. Federal court decisions here are publicly accessible, so I took a look to see whether I could find any relevant decisions. I was able to find barely any, and those mostly related to objectively criminal matters.

                                                                  1. 4

                                                                Being the subject of a court decision doesn’t mean much in countries where the independence of courts is questionable (.ru).

                                                                    1. 1

                                                                      That is very correct.

                                                                    2. 3

                                                                      Thank you very much. Very helpful.

                                                                    3. 2

                                                                      I was thinking about .fr, .ch, .se, and .no. What do you think about them?

                                                              The problem you may have is that some of those (like .fr or .no) require a presence in that part of the world: see “Eligibility requirements”. You can pay a service (which Gandi offers sometimes) to get an address in the EU, but that’s quite costly.

                                                                      1. 1

                                                                That is true, but what about the domain extensions themselves? What about legal process and court orders? Should one worry about the influence of the USA or other countries?

                                                                        1. 2

                                                                  If we refer to this EFF document, your mileage may vary. For .fr, for instance, there is removal by arbitrator order based on intellectual property rights. Still, for .fr it appears that the only other avenue to get a domain removed is through a French court order, so another country’s order would be scrutinized by a local court.

                                                                  1. 1

                                                                      I have a similar setup, except I use a corporate virtual machine. For a shell connection I use EternalTerminal with tmux in control mode and iTerm2. This lets me create native terminal tabs that are actually remote tmux tabs. EternalTerminal ensures that the connection never breaks even if my IP address changes (e.g. if I move from the office back home or I am on a wonky mobile connection).
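
                                                                      The connect command is roughly the following; a sketch, where devbox is a placeholder host (as far as I remember, et’s -c flag runs a command on the remote, and tmux -CC is what iTerm2’s control-mode integration hooks into):

                                                                          # one native iTerm2 tab per tmux window,
                                                                          # over a roaming-safe connection
                                                                          et devbox -c "tmux -CC new-session -A -s main"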

                                                                    1. 1

                                                                      The point about sudo is irrelevant on single-user systems (which, I believe, are the most common kind of macOS installations) where infecting $USER is enough. Obligatory xkcd.

                                                                      1. 3

                                                                          By the way, this is part of what the Ubuntu motd contains now:

                                                                        • Check out 6 great IDEs now available on Ubuntu. There may even be something worthwhile there for those crazy EMACS fans ;)

                                                                        1. 2

                                                                          The Ubuntu Blog advertising proprietary software? I hope they got paid for it at least.

                                                                          1. 1

                                                                              Wouldn’t want all those crazy Stallmanites hanging around calling them out for advertising non-free software, which you can get from their new package manager that caters to for-profit companies.

                                                                            1. 1

                                                                              The word “emacs” doesn’t even appear in the listicle so I suppose it’s just clickbait.

                                                                            1. 1

                                                                              Does Intel mention all those CPU bugs and vulnerabilities in their (updated) system programming manuals / errata?

                                                                              1. 4

                                                                                Why you would ever want to access a string by a code point index rather than a byte offset is absolutely beyond me. Let alone the fact that this article ignores the existence of grapheme clusters (aka user-perceived characters).

                                                                                1. 1

                                                                                  I don’t understand how it’s possible to pick all three here: “full-native speed”, a single address space OS (everything in ring 0), and security. I believe you can only pick two.

                                                                                  1. 1

                                                                                    Well, that’s what nebulet is trying to challenge.

                                                                                      1. 1

                                                                                        I haven’t yet read the whole paper but in the conclusion they say that performance was a non-goal. They “also improved message-passing performance by enabling zero-copy communication through pointer passing”. Although I don’t see why zero-copy IPC can’t be implemented in a more traditional OS design.

                                                                                        The only (performance-related) advantage such design has in my opinion is cheaper context-switching, but I’m not convinced it’s worth it. Time (and benchmarks) will show, I guess.

                                                                                        1. 1

                                                                                          When communication across processes becomes cheaper than posting a message to a queue belonging to another thread in the same process in a more traditional design, I’d say that that’s quite a monstrous “only” benefit.

                                                                                          I should have drawn your attention to section 2.1 in the original comment; that’s where your original query is addressed. Basically, the protection comes from static analysis, a bit like the original Native Client or Java’s bytecode verifier.

                                                                                    1. 2

                                                                                      I remember making a procedure that dynamically generated functions with a “bound” this pointer. It worked by allocating a trampoline and writing the object’s address into it. It was horrible.

                                                                                      1. 7

                                                                                        i put on my robe and wizard hat

                                                                                        1. 4

                                                                                          Curious what it would take to flash a modified version of this to an old iPhone. Could one theoretically boot a Linux kernel if the signing check was omitted?

                                                                                          1. 4

                                                                                            Not sure if it’s entirely relevant to this, but I did get Android installed on my 1st gen iPhone back in the day using this: https://www.theiphonewiki.com/wiki/IDroid

                                                                                            1. 1

                                                                                              I’m guessing the keys themselves have not been released, so the issue is getting anything non-Apple onto the device in the first place? Also guessing: if we had the keys, we could easily modify iBoot, or relatively easily port coreboot or whatever the cool kids are using these days, and ignore signing?

                                                                                              1. 2

                                                                                                You don’t really need keys these days to boot something. You can use kloader which is basically kexec for (32-bit) iOS. It has been used for dual-booting a signed iOS installation with an unsigned one.

                                                                                                1. 2

                                                                                                  Wow, that’s awesome. I have an old iPhone 4 that I’d love to re-purpose in this way. Where should I start reading/researching in order to do this myself? Thanks!

                                                                                              2. 1

                                                                                                There was the OpeniBoot project – an open source reimplementation of iBoot that works on older iPhones up to iPhone 4.

                                                                                              1. 2

                                                                                                Any security minded people have thoughts on this?

                                                                                                1. 13

                                                                                                  Debian’s security record regarding CAs is atrocious. By this I mean default configuration and things like the ca-certificates package.

                                                                                                    Debian used to include non-standard junk CAs like CACert and also refused to consider CA removal a security update, so it’s hugely hypocritical of this page to talk about many insecure CAs out of 400+.

                                                                                                  Signing packages is a good idea, as that is bound to the data and not to the transport like https so in principle I agree that using https for debian repositories doesn’t gain much in terms of extra security. However these days the baseline expectation should be that everything defaults to https, as in no more port 80 unauthenticated http traffic.

                                                                                                    Yes, moving over to https for Debian repositories breaks local caching like apt-cacher (degrades it to a TCP proxy) and requires some engineering work to figure out how to structure a global mirror network, but this will have to be done sooner or later. I would also not neglect the privacy implications: with https, people deploying passive network snooping have to apply heuristics and put in more effort than with simply monitoring http.
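
                                                                                                    The client-side change itself is small; roughly one line per repository in /etc/apt/sources.list (older releases also need the apt-transport-https package):

                                                                                                        # before: deb http://deb.debian.org/debian stable main
                                                                                                        deb https://deb.debian.org/debian stable main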

                                                                                                  Consider the case where someone sitting passively on a network just monitors package downloads that contains a fix for a vulnerability that is exploitable remotely. That passive attacker can just try to race the host and exploit the vulnerability before the update can be installed.

                                                                                                    Package signing in Debian suffers from problems at the underlying gpg level; gpg is so 90s in that it’s really hard to use it sustainably long-term: key rotation and key strength are problem areas.

                                                                                                  1. 4

                                                                                                      Package signing in Debian suffers from problems at the underlying gpg level; gpg is so 90s in that it’s really hard to use it sustainably long-term: key rotation and key strength are problem areas.

                                                                                                    What do you consider a better alternative to gpg?

                                                                                                    1. 10

                                                                                                      signify is a pretty amazing solution here - @tedu wrote it and this paper detailing how OpenBSD has implemented it.
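
                                                                                                        The entire tool surface fits in a few lines, which is much of the appeal; a sketch from memory (-G generates a key pair, -S signs, -V verifies):

                                                                                                            signify -G -p newkey.pub -s newkey.sec
                                                                                                            signify -S -s newkey.sec -m message.txt    # writes message.txt.sig
                                                                                                            signify -V -p newkey.pub -m message.txt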

                                                                                                    2. 4

                                                                                                      non-standard junk CAs like CACert

                                                                                                      imho CACert feels more trustworthy than 90% of the commercial cas. i really would like to see cacert paired with the level of automation of letsencrypt. edit: and being included in ca packages.

                                                                                                      1. 2

                                                                                                        With the dawn of Let’s Encrypt, is there still really a use case for CACert?

                                                                                                        1. 4

                                                                                                          i think alternatives are always good. the only thing where they really differ is that letsencrypt certificates are cross-signed by a ca already included in browsers, and that letsencrypt has automation tooling. the level of verification is about the same. i’d go as far as to say that cacert is more secure because of the web of trust, but that may be just subjective.

                                                                                                  1. 1

                                                                                                    It would also be nice to be able to compose multiple articles into single books.

                                                                                                    1. 3

                                                                                                      Writing something that, I hope, will eventually become a text editor, multithreaded and extensible with MoonScript/Lua (or any other language via loadable libraries and external processes). The implementation language is Rust and I’m going to use tokio-rs for async IO and luajit for Lua. At the moment I have a basic rope implementation with Unicode support (including extended grapheme clusters thanks to the unicode-segmentation crate) that can pass some tests. The source code is here.

                                                                                                      1. 2

                                                                                                         I wonder if there’s some lightweight browser that just displays HTML/CSS webpages and maybe runs some JavaScript on trusted websites, without WebRTC, WebGL, WebDRM and the other bloatware that is being baked into the web standards these days, eating resources and extending the attack surface.

                                                                                                        Why can’t modern software just do the damn thing it’s asked to without doing anything behind my back?

                                                                                                        1. 1

                                                                                                          Dillo

                                                                                                          Just HTML/CSS2 – no Javascript, “HTML5”, or CSS3 and it’s blazing fast

                                                                                                          1. 1

                                                                                                             What bothers me is that dillo appears to be unmaintained and has “alpha” SSL support that I failed to enable (the suggested --enable-ssl didn’t work).