1. 2

    These are the slides, just to save anyone from having to type in the URL again…

    1. 3

      I agree with these points, as does my wife. My wife occasionally writes in Polish and macOS is much better for this. Windows is woefully poor at inputting accented characters: Alt+nnnn is a ridiculous system and navigating the character map is tedious.

      I find the file I/O performance completely kills the appeal of the WSL. Firing up Emacs is noticeably slower, even without any init file, and good luck if you have to run find and grep on large-ish directory structures. Builds that touch a lot of files will crawl. If you’re used to speedy file ops, working in a Unix-like way on Windows will make you miss an actual Unix environment. (It’s for this reason that I think my Win10 experiment will be ending soon.)

      1. 3

        Would you care to elaborate on switching the keyboard layout to PL, please? For me, the issue is that I use the UK/GB layout, and switching to PL is jarring because the latter is a modified US layout, so @ and “ are swapped.

        1. 1

          I’m sorry I can’t provide a lot more information. As far as I know, you have to add the language you want, then add keyboard layouts for those languages. You can switch keyboard layout using Win-Space. I do not know how to view the keyboard layouts that you choose, and there doesn’t seem to be an obvious way to do it.

          1. 1

            I’m not a Windows user myself but, AFAICR, this is done in one go and the default layout will be chosen for a selected language.

            I do have to agree though - most (all?) things are smoother on macOS :^)

            Aah, viewing the layout. I’ve never thought of that, TBH - I usually know it before selecting it. In terms of Polish layout specifically, it’s quite intuitive - all the extra letters are produced by using AltGr as the modifier key + the base letter (bar ź, for which x is repurposed).

            P.S. I’ve just done it on macOS and I’m amazed that Polish Programmer’s layout is still not the default and the first one in order is the Typist’s layout. Who uses that layout!?

        2. 1

          Have you tried the US keyboard with international dead keys?

          1. 1

            No, and I didn’t know about that, although I suspect my wife won’t like typing that way.

          2. 1

            Regarding character input: I use a Swedish keyboard but most often I use a US layout. When I need Swedish characters I just switch keyboards with Alt-Shift (Windows). I use both Mac and Windows and the process is the same. However, I’m not familiar with how keyboard support for Polish is on Windows.

            Agree 100% with file I/O. I’m trying to learn PowerShell more for remote work where I can’t use Cygwin, and stuff that’s trivial on Unix (grepping through thousands of files) is painfully slow in PS. I adapted one script to essentially summarize large log files, and for that you have to create a local copy of the files and use System.IO.StreamReader to read them in reasonable time.

            1. 3

              Support for the Polish language has been present since the very early days of MS Windows. It is perfectly possible to use it as one of the Alt-Shift alternative keyboards. As such, I’m really surprised by the grandparent post; is it possible that the author doesn’t know about this feature on Windows? The Polish keyboard layout is based on the US one, with AltGr-a for “ą”, AltGr-l for “ł”, etc. — you get the idea. The only “slight” surprise might be AltGr-x for “ź”, as AltGr-z is already taken by “ż”. Does macOS somehow have it even simpler?

              1. 2

                I’m not sure about Polish, but for the longest time Windows didn’t support Bulgarian Phonetic out of the box, only the older State Standard, which I never learnt (I’m not a native Bulgarian speaker).

                I think Windows is roughly fine for internationalization support now; for me the killer feature on macOS is the integration with all my devices and the accessibility. I’m hard of hearing, so the ability for my laptop to ring when my phone does is huge. So, the combination of internationalization and accessibility, in conjunction with a decent Unix setup, keeps me on macOS for the time being.

                1. 1

                  Using the Left Alt key with any character in the Polish language on Win10 just results in either the chime sound or jumping to a menu. Right Alt doesn’t do anything. There is no AltGr key on the keyboard. As far as I can tell there is no keyboard viewer to see how the keyboard is mapped, either. The only reliable way we’ve found to get accented characters is to use the character map. (Depending on the keyboard choice, some keys, such as ‘[’, will insert accented characters instead of what one might expect.)

                  On macOS, you can press and hold a key (such as ‘z’) and you get a popup that lets you select what accent you want for the character using keys ‘1’ through the number of possible accent choices. This works in any native text entry area. You can also bring up the Keyboard Viewer, which shows a hovering keyboard window that displays what keys map to what symbols, including modifier keys. It’s reasonably intuitive to start typing on a different language keyboard layout on macOS. Windows, not so much.

                  Perhaps there’s a setting I’m missing. I’ve scoured the settings and found nothing that suggests there’s an easier way. You can use the Touch Keyboard to enter symbols in a similar way to a Mac, but it requires using the mouse (or your finger, or a stylus).

                  1. 1

                    I tried some googling, but didn’t manage to find a way to see the keys of the current layout on Win10 quickly. I managed to find some list of Windows keyboard layouts; at letter P there, you can see 2 keyboards for Polish. When I clicked them, I was able to see a JS applet showing the keyboard layout. For Polish (programmers) (which is what you should use), hovering over the AltGr a.k.a. Right Alt on the preview, shows the positions of the accented letters. From what you say, if you’re getting accented letters instead of ] etc, then it’s most probable you’ve got the dreaded Polish (214) (known in older versions as Polish (typist)) layout. Nobody in Poland uses it, really :) I thought it was already removed from Win10, or at least well hidden, but it may well be that it’s still there, and you just stumbled upon it, unknowingly :( I sincerely hope for you it’s possible somehow to remove it, and get back to the standard “Polish (programmers)” one…

            1. 9

              I’m delivering a 5-day TLA+ workshop next week and in the process of radically restructuring everything to better teach the mental models. As part of this process I’m writing a lot of beamer and tikz slides, which means hitting all sorts of weird beamer and tikz quirks, which means shaving many, many yaks. Unrelatedly, writing Practical TLA+ errata.

              For personal stuff I’m starting to feel the pain of not having a good exobrain, so more work on that. Probably set up a research notebook system.

              1. 2

                Let us know what you land on for that exobrain :)

                1. 2

                  I’ve been using Zotero for a while now to store citations and their attached documents (PDFs, full webpages, etc. all get stored in a Zotero SQLite DB). With the browser plugin, it’s usually one click to import a citation and its attached document. Zotero also supports fully-searchable tags and notes. In retrospect I should have probably set up the online syncing sooner.

                  For more unstructured notes I use the macOS Notes app. My complaint with both apps is that their default notes font is too small.

                  1. 2

                    Zotero looks great! I’ve been feeling overwhelmed at the amount of reading I plan to do this year. This should greatly help in keeping it structured. Just installed it.

                  2. 1

                    where is that TLA+ workshop being delivered? That sounds interesting to me…

                    1. 1

                      Chicago! David Beazley’s been a great mentor to me.

                    2. 1

                      I ended up, after too much thinking about research notebooks, with a template text file. The app that wasn’t

                    1. 3

                      There were a few different Sweet Expressions-style SRFIs, SRFI-49 being one of them. There are also various Scheme dialects that eventually morphed into things like Dylan, and interstitial variants thereof (like the Marlais Dylan system, which was Dylan pre-D-Exprs). Various Schemes have also included alternative syntaxes; six in Gambit-C (linking there so you can look for .six, rather than the main Gambit pages, which seem to be down) comes immediately to mind. Bigloo even had an ML-style syntax for a while, tho that quickly died out (you could follow along and create one tho).

                      I was a professional Scheme programmer for a long time; I even wrote Several Implementations. However, I think the time for it has largely passed; I mean, R7RS-large is… a markdown document of votes from the steering committees? The R6RS divide kinda hurt the language, and I don’t think it ever really recovered (but that may just be an old guy’s view).

                      1. 4

                        It’s interesting: I’ve seen way more ReasonML content of late. Are people still working on Elm? I’ve had a few clients come in with Elm apps, but otherwise it seems like ReasonML won out. I’d be curious to know more, if anyone has any data or the like on it…

                        1. 6

                          My few interactions with Evan were disappointing - he is too closed. I love the developer experience that Elm offers, but I wouldn’t trust it for the long term.

                          1. 2

                            The developer experience is great when everything is going to plan, but otherwise it’s terrible for me. Littering _ = Debug.log "x" x around is awful. The fact that there’s a known compiler bug preventing the debug mode from working in some cases, and it appears to be ignored (or just falls under Evan’s philosophy of not saying anything), is amazing. There’s also no good/official equivalent of something like the React developer tools, AFAIK.

                            1. 2

                              Littering _ = Debug.log “x” x around is awful.

                              Yeah, Elm could really benefit from typed holes for this kind of stuff! (OCaml/Reason also lacks these AFAIK, which is a shame)

                              1. 1

                                Merlin apparently has typed holes ( https://github.com/ocaml/merlin/commit/ef97ebfa23bb81c4b4b1c8fb2316a29f7052a514 ), but I’m not totally sure how to trigger it.

                            2. 1

                              Interesting; do you still use it often, or have you moved away? Was it because of that?

                            3. 4

                              Yes, it’s safe to say there’s high interest in Reason. The annual State of JavaScript survey backs this up: https://2018.stateofjs.com/javascript-flavors/conclusion/ –long story short, it’s something to ‘keep an eye on’ for mainstream users.

                              There are other interesting data points in there. Strikingly, Elm’s ‘heard of it, would like to learn’ metric is declining: https://2018.stateofjs.com/javascript-flavors/elm/ and Reason’s is shooting up.

                              1. 1

                                ah, that’s a great pair of posts, thank you! it’s interesting to see the shift away like that, even if it’s likely from people who may not have used Elm first.

                                I wonder if that’s just natural settling or issues folks see with it or what?

                              2. 4

                                Folks are still using Elm heavily, and there’s lots to like about the language. I prefer much of the language design and library design to ReasonML’s (although Reason/OCaml’s module system would be super nice to have in Elm). But it just seems very hard to trust the community model. There doesn’t seem to be much work put into trying to upskill contributors to be as effective as Evan at design and implementation, and the compiler has a high bus factor.

                                1. 1

                                  Agreed! It seems to have some nicer design points compared to Reason, and I like what it has done (tho last I seriously looked at it Signals were still a thing).

                                  That’s interesting re: Evan, as that’s been mentioned here and elsewhere. I wonder why that is?

                                2. 2

                                  In case you’re interested, I reviewed Elm’s ports here: https://lobste.rs/s/1jftsw/philip2_elm_ocaml_compiler#c_prnskf

                                  1. 1

                                    oh very much so, thank you for the link! That’s an interesting set of critiques as well… I need to try rewriting something large in Elm again to see where the edge cases are (I was originally waiting for Elm in Action to be finished and work through that, but…)

                                  2. 2

                                    Popularity and hype are not excellent indicators of whether a technology functions well and/or is appropriate for your use case.

                                    I will continue using Elm. It’s good. It works. Aside from the common “community” complaints — which I don’t care about — most people’s complaints around Elm seem to be that it makes synchronous IO hard. None of my projects need synchronous IO. I can’t think of a case where anyone needs synchronous IO. Every time I have this discussion, it goes along these lines:

                                    “Elm broke us! We can’t use synchronous IO anymore!”

                                    “Why can’t you use ports?”

                                    “Umm…”

                                    1. 2

                                      Are the issues mentioned here still valid? https://www.reddit.com/r/elm/comments/7fx856/seeking_problems_with_ports/?st=jqiy249g&sh=54bfcdb4

                                      • Ports are one-way only and don’t support the request/response model
                                      • Ports can’t be packaged and distributed for reuse. So, ports aren’t portable?

                                      Here are the issues I see from reading https://guide.elm-lang.org/interop/ports.html :

                                      • They provide a very restricted window to the JS world–the more dynamic interactions you have with third-party libraries, the more complex your messages grow
                                      • Message complexity and handling-logic complexity grow on the JavaScript side as well as on the Elm side–not only do you have to remember to implement the JavaScript side of things, but your JavaScript code grows along with your Elm port code. In time, you might find yourself managing quite a lot of JavaScript–and your original goal was to get rid of JavaScript and just write Elm
                                      • Some of the advice and commentary there doesn’t really gel. It talks about how using ports isn’t ‘as glamorous’ as rewriting everything in Elm, but it helps when you’re trying to port an existing JS project to Elm. Well, no, I don’t have an existing JS project–I have a pure Elm project but now I need to write ports because Elm doesn’t support my use-case
                                      • Apparently ‘package flooding’, the desire to contribute Elm bindings to JS packages, would be a pitfall of allowing FFI or other non-port techniques.
                                      1. 1

                                        The first two issues seem like non-issues to me.

                                        Ports are one-way only and don’t support the request/response model

                                        Ports are one-way insofar as one would use subscriptions as the other side of that coin. It’s not clear how this would not “support the request/response model”. I have written an application that did HTTP requests through ports. It’s not a difficult concept — you call the function that Elm is subscribed to in the response callback.

                                        For those curious: I only needed to use ports because the server streamed back JSON, which I needed to incrementally parse.
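
                                        As a rough sketch of that pattern (the port names httpRequest/httpResponse and the wiring function here are made up for illustration, not taken from any real app), the JavaScript side looks something like:

                                        ```javascript
                                        // Hypothetical JS half of a request/response port pair: Elm sends a URL
                                        // out through `httpRequest`; JS fetches it and feeds the body back in
                                        // through `httpResponse`.
                                        function wirePorts(app) {
                                          app.ports.httpRequest.subscribe(function (url) {
                                            fetch(url)
                                              .then(function (res) { return res.text(); })
                                              .then(function (body) { app.ports.httpResponse.send(body); });
                                          });
                                        }
                                        ```

                                        On the Elm side this would pair an outgoing port httpRequest : String -> Cmd msg with an incoming port httpResponse : (String -> msg) -> Sub msg.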

                                        As for making it awkward to call external JS math libraries — that’s the cost of safety. If you want an escape hatch from type safety, you could use Html.Attributes.attribute. Otherwise you can use ports.

                                        The people complaining about this don’t understand why it has to be this way. They also don’t understand that they don’t understand, and they complain that Evan Czaplicki has an “I know best, you are wrong” attitude. I’ll address an example from this post:

                                        In response to these kinds of problems, the current supported way to integrate Javascript in Elm is to use ports. Ports are often fine for side-effectful code that you do not trust. Compared to using Tasks (which compose in ways that ports do not), they can be very ugly. But without a shadow of a doubt they are very often hopelessly inadequate when it comes to pure code [12], and anyone who tells you differently is smoking something or a victim of Stockholm syndrome.

                                        In the linked post, Evan assumes that if it is possible to get rid of native modules from your project, then that will be an acceptable solution. It ignores the many reasons why people might not want to get rid of native code. These include:

                                        • The overall architecture of my app is much better if I have this native code implementing a pure function or Task, as opposed to using ports.
                                        • My current code has been thoroughly reviewed,
                                        • or was compiled to Javascript/WebAssembly from a much better language than Elm,
                                        • or has been subject to formal analysis techniques,
                                        • or has been war-hardened by years in production.

                                        I don’t smoke, so apparently I am qualified to address this.

                                        It does not matter if an Elm user thinks some arbitrary JavaScript function is pure. It is fundamentally, mathematically impossible to guarantee this, probably for more reasons than I know of. One of those is the halting problem. You don’t know that a supposedly “pure” function won’t recursively call itself infinitely and blow the stack resulting in a runtime error.
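
                                        As a toy illustration of that point (nothing here comes from Elm itself): a function that looks pure from the outside can still fail at runtime once the input pushes the recursion past the stack limit.

                                        ```javascript
                                        // "Pure" in the mathematical sense, yet still capable of a runtime error:
                                        // deep enough recursion exhausts the JS call stack and throws a RangeError.
                                        function depth(n) {
                                          return n === 0 ? 0 : 1 + depth(n - 1);
                                        }

                                        // depth(5) returns 5, but depth(1e7) throws
                                        // "RangeError: Maximum call stack size exceeded".
                                        ```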

                                        The overall architecture of my app is much better

                                        This is subjective, and context specific. If the argument is that everyone knows what’s best for themselves, I’d argue that no, they just don’t. There’s an incredible amount of stubbornness and cargo-cult in the JavaScript “community”.

                                        My current code has been thoroughly reviewed

                                        Elm cannot prove this, so it is irrelevant.

                                        or was compiled to Javascript/WebAssembly from a much better language than Elm

                                        ??? ¯\_(ツ)_/¯

                                        or has been subject to formal analysis techniques

                                        Elm cannot prove this, so it is irrelevant.

                                        or has been war-hardened by years in production

                                        Elm cannot prove this, so it is irrelevant.

                                        Ports can’t be packaged and distributed for reuse. So, ports aren’t portable?

                                        It is not clear to me why anyone would want to package and distribute highly context-specific glue code.

                                        nb. My understanding of the word “portable” in a software context is that it’s synonymous with “cross-platform”. That doesn’t really apply here.

                                        They provide a very restricted window to the JS world–the more dynamic interactions you have with third-party libraries, the more complex your messages grow

                                        That’s quite necessary, and it’s not like collaborators having to adhere to a protocol is an issue specific to Elm and/or its port system. Everyone sends JSON back and forth between their server and client — I don’t hear anyone complaining about this.

                                        Message complexity and handling-logic complexity grow on the JavaScript side as well as on the Elm side–not only do you have to remember to implement the JavaScript side of things, but your JavaScript code grows along with your Elm port code. In time, you might find yourself managing quite a lot of JavaScript–and your original goal was to get rid of JavaScript and just write Elm

                                        ¯\_(ツ)_/¯

                                        Nobody is forced to work this way. Programming is always about trade-offs. You can implement things in Elm, or you can implement things in JavaScript. I don’t really see what the issue is here. It seems like the argument is “I want to use JavaScript, but also I do not want to use JavaScript.”

                                        Some of the advice and commentary there doesn’t really gel. It talks about how using ports isn’t ‘as glamorous’ as rewriting everything in Elm, but it helps when you’re trying to port an existing JS project to Elm. Well, no, I don’t have an existing JS project–I have a pure Elm project but now I need to write ports because Elm doesn’t support my use-case

                                        I have no idea what this means. Why is “glamour” being used as a unit of measurement here? If anything, this would show that other people’s advice and commentary can be incredibly [and in this case, this word functions in the sense of the commenters being not credible] irrational.

                                        The final “issue” I won’t comment on, because I don’t maintain a package repository and so it isn’t something I care about.

                                        1. 1

                                          It’s not clear how this would not “support the request/response model”

                                          Let’s look at a concrete example, https://package.elm-lang.org/packages/evancz/elm-http/latest/Http#getString :

                                          getString : String -> Task Error String
                                          

                                          This is a pretty standard example of a request/response model, a function from an input to a ‘promise’ of an output. To my understanding you can’t implement this with an Elm port. Happy to be proven wrong.

                                          As for making it awkward to call external JS math libraries

                                          I didn’t actually bring that up, but are you saying that Elm makes it difficult to do pure mathematical operations?

                                          If you want an escape hatch from type safety, you could use Html.Attributes.attribute.

                                          I don’t see how that is an escape hatch for calling JS math functions.

                                          I don’t smoke, so apparently I am qualified to address this.

                                          Did you forget the other criterion? ;-)

                                          Elm cannot prove this, so it is irrelevant.

                                          I think this is the crux of the problem. Elm is not the be-all-and-end-all of type safety. There are lots of things it can’t prove. Other languages have explicit escape hatches for these cases, look at Rust unsafe for example. I hate to say ‘people say’ but really, this is why people say Elm’s philosophy is ‘my way or the highway’.

                                          It is not clear to me why anyone would want to package and distribute highly context-specific glue code.

                                          Yes, the ports guide encourages writing ports in a highly-coupled way to your application, but even looking at the simple LocalStorage example, it’s clear that they could be written in a more generalized way, just wrapping the actual LocalStorage API. E.g.:

                                          port module Main exposing (..)
                                          import Json.Encode as E

                                          -- an Elm port takes a single argument, so pass the key/value pair as a tuple
                                          port setItem : ( E.Value, E.Value ) -> Cmd msg
                                          

                                          JavaScript side:

                                          app.ports.setItem.subscribe(function ([key, value]) {
                                            localStorage.setItem(key, value);
                                          });
                                          

                                          I understand that the guide specifically discourages that, but that just means it’s possible but we’re being told not to do it. (Why? It’s not clear, but it seems mostly because Elm doesn’t allow packaging and distributing ports.)
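
                                          The read side could be sketched the same way, assuming a second port to carry the value back to Elm (the names getItem/gotItem are made up for illustration):

                                          ```javascript
                                          // Hypothetical JS half of a generalized localStorage read: one port carries
                                          // the key out of Elm, another carries the stored value back in.
                                          function wireGetItem(app) {
                                            app.ports.getItem.subscribe(function (key) {
                                              app.ports.gotItem.send({ key: key, value: localStorage.getItem(key) });
                                            });
                                          }
                                          ```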

                                          My understanding of the word “portable” in a software context is that it’s synonymous with “cross-platform”. That doesn’t really apply here.

                                          It was a pun :-)

                                          You can implement things in Elm, or you can implement things in JavaScript. I don’t really see what the issue is here. It seems like the argument is “I want to use JavaScript, but also I do not want to use JavaScript.”

                                          Well, no. If you want to write Elm, and do anything outside of its supported API surface area, you are forced to write and maintain glue JavaScript for ports. If you need sophisticated behaviour, the JavaScript side might grow quite complex with business logic to support that. In fact, the more tightly coupled it is to your Elm app, the more complex it would become. This is one reason to make ports just dumb wrappers over JS APIs, because you want Elm to own the business logic, not JavaScript.

                                          Why is “glamour” being used as a unit of measurement here?

                                          You should ask the person who wrote that–Evan, re: https://guide.elm-lang.org/interop/ports.html#notes

                                          The final “issue” I won’t comment on, because I don’t maintain a package repository and so it isn’t something I care about.

                                          This sounds like:

                                          Elm cannot prove this, so it is irrelevant.

                                          :-)

                                          1. 1

                                            This is a pretty standard example of a request/response model, a function from an input to a ‘promise’ of an output. To my understanding you can’t implement this with an Elm port. Happy to be proven wrong.

                                            I don’t think this maps 1:1 exactly, but I achieve this with:

                                            app.ports.foo.subscribe(function(a) {
                                              // …
                                              app.ports.bar.send(b);
                                            });
                                            

                                            I didn’t actually bring that up, but are you saying that Elm makes it difficult to do pure mathematical operations?

                                            No, that is not what I am saying.

                                            I took that example from this post which you linked to. I was addressing the comment “This makes simple use cases, like calling out to a synchronous Javascript math library, more unwieldy than you would expect.”

                                            I don’t see how that is an escape hatch for calling JS math functions.

                                            window.foo = function () {
                                              return Math.ceil(Math.random() * 100);
                                            }
                                            
                                            // …
                                            
                                            input [ attribute "oninput" "this.value = window.foo()" ] []
                                            

                                            It works.

                                            Did you forget the other criterion? ;-)

                                            Sadly, I did spend some time living in Stockholm, so I may not be qualified to chime in after all :-/

                                            Also, yes, I know what SS is. :)

                                            I think this is the crux of the problem. Elm is not the be-all-and-end-all of type safety. There are lots of things it can’t prove. Other languages have explicit escape hatches for these cases, look at Rust unsafe for example. I hate to say ‘people say’ but really, this is why people say Elm’s philosophy is ‘my way or the highway’.

                                            I’m not sure it’s claimed to be any kind of be-all-and-end-all either. It takes a strong position on how much safety it wants to enforce. If a developer finds it too much, there are less safe alternatives available. Personally, I want constraints and safety. I don’t want to shoot myself in the foot or struggle with runtime errors.

                                            Yes, the ports guide encourages writing ports in a highly-coupled way to your application, but even looking at the simple LocalStorage example, it’s clear that they could be written in a more generalized way, just wrapping the actual LocalStorage API. E.g.:

                                            I think localStorage probably ought to be wrapped, and in this case I think it’s the exception rather than the rule. I’m also not sure what assumptions tooling can make around where Elm code is being run. Should a package give you both some Elm port code and some JavaScript port handler code?

                                            This sounds like:

                                            Elm cannot prove this, so it is irrelevant.

                                            :-)

                                            I don’t intend for it to sound that way. I think the claim is that the package repository maintainers don’t want to worsen the signal:noise ratio. The claim may be valid, I don’t know. Since I’m not a package repository maintainer, I don’t have the insight to form a valid opinion.

                                            1. 1

                                              I don’t think this maps 1:1 exactly,

                                              Right, you need two ports to achieve a simple request/response model like a -> Task b c. Why am I harping on this, btw? Because this is a function (an effectful function, to be exact) and functions are composable. That’s the essence of FP, and Elm breaks that model for ports.

                                              It works

                                              It does but it’s a classic case of what happens when the philosophy is ‘abstinence is better than protection’.

                                              I’m not sure it’s claimed to be any kind of be-all-and-end-all either.

                                              It’s implied by saying: ‘Elm cannot prove this so it’s irrelevant’.

                                              If a developer finds it too much, there are less safe alternatives available.

                                              Of course. But the claim I’m responding to right now is the one that people have no valid arguments against ports. As you can see, they do.

                                              I don’t want to shoot myself in the foot or struggle with runtime errors.

                                              No one does :-) but the fact remains that Elm forces you to write JavaScript for anything but the most trivial example code. Here’s an assertion: when people talk about how Elm eliminates runtime errors, look carefully at how they phrase it. I’ll bet you they say it like this: ‘We’ve had no runtime errors from our Elm code!’ Of course they’re not gonna talk about how Elm eliminates runtime errors from the JavaScript port handler code! It’s classic Stockholm Syndrome.

                                              in this case I think it’s the exception rather than the rule.

                                              And other people think that other cases are exceptional :-) But don’t you find it interesting that the one case you think is the exception, is pretty clearly called out by Evan himself, in the ports guide, as not a good idea to wrap?

                                              Should a package give you both some Elm port code and some JavaScript port handler code?

                                              That’s a good question that arises when you try to package up interop code that relies on writing JavaScript handler code. I don’t know the answer because I haven’t thought through it deeply. I don’t know how deeply Evan thought about it, but as we can see, his answer is to disallow packaging as a reuse mechanism for ports. Of course, this won’t stop people from just packaging and distributing them as npm packages. So in practice you may end up with a hybrid Elm/npm application.

                                              Since I’m not a package repository maintainer, I don’t have the insight to form a valid opinion.

                                              Fair enough!

                                              1. 1

                                                Because this is a function (an effectful function to be exact) and functions are composeable. That’s the essence of FP and Elm breaks that model for ports.

                                                Fair point. Perhaps monadic IO like in Haskell would work here. Or, perhaps it would alienate less experienced developers. I don’t have a strong opinion on which of the two is more true.

                                                it’s a classic case of what happens when the philosophy is ‘abstinence is better than protection’.

                                                In earnest, I do not know what is meant by this and/or how it applies here.

                                                the fact remains that Elm forces you to write JavaScript for anything but the most trivial example code.

                                                That’s quite a bold claim, and I don’t believe it would stand up to much scrutiny.

                                                It’s implied by saying: ‘Elm cannot prove this so it’s irrelevant’.

                                                Perhaps I was too terse, and I should unpack what I am driving at with that repeated sentence. I mean it doesn’t matter if the user “knows” some foreign code to be safe. Elm can’t prove that in the general case. Elm makes guarantees about the safety of the code it manages, and makes no guarantees of the code it doesn’t.

                                                Elm cannot make guarantees about the safety of foreign code unless it can prove that it is indeed safe. The language could of course be modified to allow for developers to arbitrarily declare some parts of foreign code as “safe”, but then Elm is no safer than e.g. TypeScript. It would then not be able to make the safety claims that it does.

                                                when people talk about how Elm eliminates runtime errors, look carefully at how they phrase it. I’ll bet you they say it like this: ‘We’ve had no runtime errors from our Elm code!’ Of course they’re not gonna talk about how Elm eliminates runtime errors from the JavaScript port handler code! It’s classic Stockholm Syndrome.

                                                I’m confused by this. Why would people talk down Elm for not protecting them against runtime errors in foreign code, when the tooling has never made the claim that it can do that?

                                                So in practice you may end up with a hybrid Elm/npm application.

                                                This is what all my projects have anyway. Perhaps I’ve been doing it wrong this whole time?

                                                1. 1

                                                  In earnest, I do not know what is meant by this and/or how it applies here.

                                                  It’s an analogy to the ineffectiveness of abstinence-only birth control education ( https://www.npr.org/sections/health-shots/2017/08/23/545289168/abstinence-education-is-ineffective-and-unethical-report-argues ), which evidently fell flat. But the point was that I just fundamentally disagree with the ‘disallow all unsafety’ philosophy. You can’t stop people from doing what they’re going to do; you can try to protect them, though.

                                                  That’s quite a bold claim, and I don’t believe it would stand up to much scrutiny.

                                                  I’m surprised to hear that, because there’s port handler JavaScript code even in Richard Feldman’s Elm SPA starter example: https://github.com/rtfeldman/elm-spa-example/blob/c8c3201ec0488f17c1245e1fd2293ba5bc0748d5/index.html#L29 . And I’m not the only one making the claim, people experienced with Elm are saying, ‘In my experience (with 0.18), the JS part is likely to be large’ ( https://www.reddit.com/r/elm/comments/99bzf8/elm_or_reasonml_with_reasonreact/e4n83jk/?context=3&st=jqk20s8w&sh=7e54fe5e ).

                                                  By the way, note how the Elm SPA example’s port handler code above contains business logic (both adding and removing data from the cache in a single port).
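                                                  The pattern being referred to looks roughly like this — a paraphrased sketch, not the exact code from the repository, with `localStorage` mocked so it runs standalone: a single port whose handler decides, based on the payload, whether to store or clear the session.

                                                  ```javascript
                                                  const storageKey = "store";
                                                  const store = new Map();
                                                  const localStorage = {
                                                    setItem: (k, v) => store.set(k, v),
                                                    removeItem: (k) => store.delete(k),
                                                  };

                                                  const app = {
                                                    ports: { storeCache: { subscribe: (fn) => (app._onStore = fn) } },
                                                  };

                                                  app.ports.storeCache.subscribe((val) => {
                                                    if (val === null) {
                                                      localStorage.removeItem(storageKey); // "log out": drop the session
                                                    } else {
                                                      localStorage.setItem(storageKey, JSON.stringify(val)); // "log in": save it
                                                    }
                                                  });

                                                  app._onStore({ user: "alice" });
                                                  console.log(store.get(storageKey)); // {"user":"alice"}
                                                  app._onStore(null);
                                                  console.log(store.has(storageKey)); // false
                                                  ```

                                                  Branching on `null` like this is exactly the kind of decision the Elm compiler never sees.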

                                                  Elm makes guarantees about the safety of the code it manages, and makes no guarantees of the code it doesn’t.

                                                  Right, it does that by disallowing code that it can’t verify.

                                                  The language could of course be modified to allow for developers to arbitrarily declare some parts of foreign code as “safe”

                                                  No, you don’t mark that code as ‘safe’; you mark it as unsafe, like Rust does, so that people know where to look if things are wonky.

                                                  but then Elm is no safer than e.g. TypeScript.

                                                  There’s a wide gap between Elm-level and TypeScript-level.

                                                  Why would people talk down Elm for not protecting them against runtime errors in foreign code,

                                                  Because Elm forced them to write the foreign code. You seem to keep seeing port handler code as ‘some weird JavaScript stuff that we don’t have to worry about’, whereas it’s an intrinsic part of your Elm project, by design.

                                                  Perhaps I’ve been doing it wrong this whole time?

                                                  Perhaps! Certainly the Elm SPA example project is a pure Elm project, with no sign of npm anywhere. That’s presented as the idiomatic way to write an Elm project, and yet we both know that in practice you need more.

                                                  Edit: Richard Feldman says you don’t need NodeJS or npm to use Elm: https://news.ycombinator.com/item?id=17810088 , which at the very least deserves an asterisk and a footnote.

                                                  1. 2

                                                    No, you don’t make that code as ‘safe’, you mark it as unsafe, like Rust, so that people know where to look if things are wonky.

                                                    Perhaps we’re arguing two slightly different things here. My point was, Elm already treats all foreign code as unsafe. The guy who wrote the article I quoted from wanted a way to say to the Elm compiler, “don’t worry about this part, I’ve checked that it’s safe with formal proofs/code review.”

                                                    It seems you’re saying Elm should have something akin to unsafePerformIO (sorry, I’ve never written any Rust) directly in the language, without the need for ports.

                                                    Because Elm forced them to write the foreign code. You seem to keep seeing port handler code as ‘some weird JavaScript stuff that we don’t have to worry about’, whereas it’s an intrinsic part of your Elm project, by design.

                                                    It seems this boils down to a difference of opinion on how much JavaScript there ends up being in a non-trivial real-world business application. In my experience, the JavaScript code doesn’t grow that much. You may call this Stockholm Syndrome, but I’m going to back it up with numbers.

                                                    Two of my three businesses are taking revenue. I quit my day job, so these are now all I have (I say this to defend against the idea that these are “toy” projects). All three are primarily Haskell and Elm.

                                                    In one of my revenue-generating businesses, JavaScript accounts for 3% of the UI code I wrote:

                                                    github.com/AlDanial/cloc v 1.80  T=0.12 s (456.3 files/s, 64683.8 lines/s)
                                                    -------------------------------------------------------------------------------
                                                    Language                     files          blank        comment           code
                                                    -------------------------------------------------------------------------------
                                                    Elm                             19            243              6           3194
                                                    CSS                             11            260             38           1612
                                                    Haskell                         20            287            199           1578
                                                    Sass                             1             38              0            191
                                                    JavaScript                       2             25              0             98
                                                    HTML                             2              0              0             28
                                                    -------------------------------------------------------------------------------
                                                    SUM:                            55            853            243           6701
                                                    -------------------------------------------------------------------------------
                                                    

                                                    In the other revenue-generating business, JavaScript accounts for 8% of the UI code I wrote:

                                                    github.com/AlDanial/cloc v 1.80  T=0.12 s (504.6 files/s, 66595.5 lines/s)
                                                    -------------------------------------------------------------------------------
                                                    Language                     files          blank        comment           code
                                                    -------------------------------------------------------------------------------
                                                    Haskell                         42            608             90           4252
                                                    Elm                             14            196              0           2357
                                                    JavaScript                       1             48              0            196
                                                    Markdown                         1              5              0             25
                                                    HTML                             1              0              0             10
                                                    -------------------------------------------------------------------------------
                                                    SUM:                            59            857             90           6840
                                                    -------------------------------------------------------------------------------
                                                    

                                                    I’m not suggesting the small amount of necessary JavaScript code isn’t something people should worry about.

                                                    1. 1

                                                      It seems you’re saying Elm should have something akin to unsafePerformIO (sorry, I’ve never written any Rust) directly in the language, without the need for ports.

                                                      Right, basically a section of the code you can jump to, to look for potential issues.

                                                      In one of my revenue-generating businesses, JavaScript accounts for 3% of the UI code I wrote

                                                      I see, thank you for providing that analysis. I personally don’t have any Elm apps, but I took a look at Ellie as a ‘representative sample small app’ and it seems to have six ports: https://github.com/ellie-app/ellie/search?q=app.ports&unscoped_q=app.ports . Let’s look at one of these ports: https://github.com/ellie-app/ellie/blob/45fc52ef557e3a26162b33b950874002b43d9072/assets/src/Pages/Editor/Main.js#L25

                                                      This looks like a catch-all logic port for doing various top-level operations. It’s quite clever, in the way that it uses a JavaScript array to encode a sum type. But again here’s my problem. This is quite a significant chunk of logic, in JavaScript. I have to test this, and update it in parallel with the Elm code. I get no help from Elm’s compiler while I’m doing that. So let’s say realistically, port handler code is somewhere around 5 to 10% of a typical Elm project. It’s likely to be the most stateful, effectful, and complex code in the project.
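                                                      For illustration, the array-as-sum-type trick looks roughly like this (the tag names here are made up, and `app.ports` is mocked so the snippet runs standalone): the first element of the array is the constructor tag, the rest are its arguments, and the handler dispatches on the tag with no compiler checking that the two sides stay in sync.

                                                      ```javascript
                                                      const app = {
                                                        ports: {
                                                          editorEffects: { subscribe: (fn) => (app._onEffect = fn) },
                                                        },
                                                        _log: [],
                                                      };

                                                      app.ports.editorEffects.subscribe(([tag, ...args]) => {
                                                        // One catch-all port, dispatched by tag. If the Elm side renames
                                                        // or adds a constructor, nothing here fails at compile time.
                                                        switch (tag) {
                                                          case "OpenWindow":
                                                            app._log.push(`open ${args[0]}`);
                                                            break;
                                                          case "SaveToken":
                                                            app._log.push(`save token ${args[0]}`);
                                                            break;
                                                          default:
                                                            app._log.push(`unknown effect: ${tag}`);
                                                        }
                                                      });

                                                      app._onEffect(["SaveToken", "abc123"]);
                                                      app._onEffect(["Reformat"]);
                                                      console.log(app._log); // [ 'save token abc123', 'unknown effect: Reformat' ]
                                                      ```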

                                                      Here’s another thing that goes to your initial question, about people not being able to come up with good reasons not to use ports. The thing is, they did, e.g.: https://www.reddit.com/r/elm/comments/81bo14/do_we_need_to_move_away_from_elm/dv3j1q3/ . But often they did it in one of the forums controlled by the Elm mods, who very quickly shadowbanned those posts and their entire discussion threads from visibility. They did it ostensibly to avoid flamewars and hurt feelings, but a side effect is that a lot of valid criticisms got swept under the carpet as well. Elm’s community management does this effectively: it hides valid criticisms and gives people the impression that everything’s good.

                                                      That post I linked above, and its surrounding thread, are from ten months ago, and all the criticisms of ports are there: exactly the ones I came up with in the past couple of days during our discussion. But I bet you never saw that post before, or if you did, you quickly forgot about it. Out of sight, out of mind.

                                                      Edit: look at the overall Philip2 discussion thread, btw. You’ll see it’s hidden by 4 users (maybe more by the time you read this). Want to bet whether they’re Elm users or not? :-)

                                                      1. 1

                                                        But again here’s my problem. This is quite a significant chunk of logic, in JavaScript. I have to test this, and update it in parallel with the Elm code. I get no help from Elm’s compiler while I’m doing that. So let’s say realistically, port handler code is somewhere around 5 to 10% of a typical Elm project. It’s likely to be the most stateful, effectful, and complex code in the project.

                                                        I’m not sure what you suggest as the solution to this. Ideally, you’d implement whatever you can in Elm. Where that’s not possible, you can use ports. JavaScript being unwieldy and dangerous is hardly an argument against Elm.

                                                        It’s not reasonable for you to criticise Elm for not managing the code that it explicitly states it will not manage for you.

                                                        But often they did it in one of the forums controlled by the Elm mods, who very quickly shadowbanned those posts and their entire discussion threads from visibility. They did it, ostensibly to avoid flamewars and hurt feelings. But a side effect is that a lot of valid criticisms got swept under the carpet as well. Elm’s community management does this, effectively–it hides valid criticisms and gives people the impressions that everything’s good.

                                                        I don’t agree with this either. Let’s take a look at that user’s comments (emphasis mine).

                                                        Asking people to write their own port implementation every time they want to use LocalStorage is insane.

                                                        Typical Elm-speak for “this is fine” while the entire room is on fire.

                                                        I understand what ports are and how they work. They suck.

                                                        Frankly, the guy is being a total asshole. A petulant child. This is not valid criticism, it’s one prima donna saying that everything is terrible because he can’t get his own way.

                                                        I would have banned him too.

                                                        It’s tiring enough for me to hear people make the same arguments over, and over, and over again. “I want to run unsafe code from anywhere!” “No.” “But reasons reasons reasons! Screw you, community! I’m leeaaavvviiinngggg!!!”

                                                        …And I’m not even a maintainer!

                                                        look at the overall Philip2 discussion thread, btw. You’ll see it’s hidden by 4 users (maybe more by the time you read this). Want to bet whether they’re Elm users or not? :-)

                                                        To be honest I wouldn’t be surprised if they weren’t Elm users. You should know that programming is mostly pop culture these days, especially in the front-end world. Everybody cargo cults and adopts a technology as their own personal identity.

                                                        I think it’s pretty unfair of you to project malice onto other people based on the technology they choose to use.

                                                        1. 1

                                                          I’m not sure what you suggest as the solution to this. … It’s not reasonable for you to criticise Elm for not managing the code that it explicitly states it will not manage for you.

                                                          Look, I’m not here to criticize Elm and provide armchair solutions to all its problems; Elm is a great language. I’m just replying to your original assertion that people can’t come up with good arguments against ports. The fact that Elm explicitly makes you write JavaScript for interop, when the alternative could be binding explicitly typed Elm values to existing JavaScript code, is a valid criticism of ports.

                                                          This is not valid criticism, it’s one prima donna saying that everything is terrible because he can’t get his own way.

                                                          Really? The criticisms he made included that ports can’t be composed, which you’ve already acknowledged. But you don’t acknowledge it if it’s put in the wrong tone? I get it, those words can be hurtful, and they shouldn’t have said it like that. I don’t think it takes away from the strength of the arguments, though. Other people expressed similar thoughts in the thread, and those comments got plenty of upvotes. Not everybody spoke up, but they did vote up.

                                                          “I want to run unsafe code from anywhere!” “No.” “But reasons reasons reasons! Screw you, community! I’m leeaaavvviiinngggg!!!”

                                                          From my perspective, it was more ‘I want to run unsafe code.’ ‘Why would you want to do that? You haven’t understood Elm, you need to rearchitect.’ ‘I need it because of X, Y, Z.’ ‘That’s not valid and we’re not going to support that.’ Then people getting frustrated and speaking rash words. Rinse and repeat. It’s not fair to either side, I think.

                                                          I think it’s pretty unfair of you to project malice

                                                          OK, in retrospect, I shouldn’t have added the last bit. I wasn’t projecting malice, but more a stubbornness to not see criticism. But I guess that’s more a human quality than an Elm community quality, so it’s not really valid. I apologize, and retract that.

                                      2. 1

                                        Popularity and hype are not excellent indicators of whether a technology functions well and/or is appropriate for your use case.

                                          Well, I wouldn’t argue that, I just noticed that there seems to have been a shift from hype about Elm to hype about Reason. I’ve used both and like both, so I was curious what others were seeing here, y’know?

                                        Every time I have this discussion, it goes along these lines

                                          Yes, these are good points. Is it education-based? Like, the Elm in Action book is… still going. I remember hearing the same arguments about Node way back when, and those mostly died off as folks figured out how to do things.

                                        1. 1

                                          I couldn’t say definitively, but pessimistically I suspect people are resistant to change. Like “Oh, this is how I’m used to things working in JavaScript; why can’t it just work the way I expect.”

                                          I can empathise — I remember on multiple occasions wanting to use unsafePerformIO when I knew less in Haskell. I also recall being frustrated when several experienced Haskellers on IRC told me not to do that.

                                          1. 2

                                            Again, I’m not sure which complaints or discussion you’re referring to. It would be helpful to provide some links or quotes. Of course there will always be some people who want to do things with a tool that it wasn’t meant to do. But that doesn’t account for all of the recent frustration. Here’s an example that might help you understand better:

                                            My company uses an app-shell architecture for the UI and uses hash routing to route individual apps. Hash routing was supported in 0.18 and is not supported in 0.19. Upgrading to 0.19 means a non-trivial architectural change for all of the apps. We can’t justify the cost of making that change, so we won’t upgrade to 0.19 unless hash routing is supported again. The specifics are described here: https://github.com/elm/url/issues/24

                                            1. 2

                                              Can’t you just copy the code of the old parseHash and vendor it with your codebase in 0.19? At first glance, it seems to be written in pure Elm, so no Native/Kernel code required that would preclude such a copy? edit: also, first page of google results for me seems to show some workaround

                                              1. 1

                                                The point is not that there aren’t workarounds. And even if we were to use a workaround, it would still be hours of work because we have to update, test, and deploy every app that uses hash routing (which is around 20). The point is that there are legitimate frustrations related to breaking changes in 0.19. I offered this example as evidence that a frustrated user should not be assumed to be incompetent, e.g. “Oh, this is how I’m used to things working in JavaScript; why can’t it just work the way I expect.”

                                                1. 1

                                                  I don’t really understand the part about “assuming incompetence”. Other than that, I do understand that migration from 0.18 to 0.19 has a non-zero cost and thus needs to be evaluated, and accepted or rejected depending on the business case. I only replied because I understood your last sentence as saying that parseHash was the only thing stopping you from upgrading. That made me curious and confused, as on first glance this seemed easily enough resolved that it shouldn’t be a blocker. Based on your reply, I now assume I was probably just misled by that sentence, and there’s more missing than just parseHash for you; or otherwise maybe you generally just don’t have the resources to upgrade at all (regardless of whether it’s Elm or something else; React also has breaking changes in some versions, I believe). Though I may still be wrong as well.

                                        2. 1

                                          Can you provide a link to someone making a complaint about synchronous IO? I’ve read a lot of the user feedback and discussion about Elm 0.18 and 0.19, and I’ve never seen anything about synchronous IO. I’m curious what the use case is.

                                          1. 1

                                            This article was widely discussed when it was published: https://dev.to/kspeakman/elm-019-broke-us--khn

                                            There are two complaints:

                                            1. No more custom operators — solved easily enough with a find/replace across the project.
                                            2. No more native modules — this is what I’m referring to when I say “synchronous IO”.

                                            In Elm, IO effects should all be run asynchronously. Any effects not modelled by an Elm library should go through ports. Despite having asked many people, I’ve never seen a clear answer for why any given problem can not be solved with ports.

                                            1. 1

                                              Now I understand. Yes, I was also surprised by the number of people who depend on native modules. But not every use case is related to effects or impure function calls. Sometimes native modules are used to wrap pure functions that haven’t been implemented in Elm. One example is currency formats. Not every currency uses the same numeric notation (the number of places to the right of the decimal varies), so you need a big dictionary to map currency to format. Last time I looked (it’s been a while), this doesn’t exist in Elm. Of course, you could use a port for this, and you could implement it in Elm, but both incur a complexity cost that isn’t clearly justified.
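                                              The kind of dictionary being described is just data plus a pure function — here is a deliberately tiny, hypothetical sketch in JavaScript (real ISO 4217 data has on the order of 180 entries, and the function names are made up):

                                              ```javascript
                                              // Partial map from ISO 4217 currency code to number of decimal places.
                                              const decimalPlaces = {
                                                USD: 2,
                                                EUR: 2,
                                                JPY: 0, // yen has no minor unit
                                                BHD: 3, // Bahraini dinar uses three
                                              };

                                              function formatAmount(currency, amount) {
                                                const places = decimalPlaces[currency];
                                                if (places === undefined) return null; // unknown currency
                                                return amount.toFixed(places);
                                              }

                                              console.log(formatAmount("USD", 12.5)); // 12.50
                                              console.log(formatAmount("JPY", 1200)); // 1200
                                              ```

                                              Since this is pure data and pure computation, routing it through an asynchronous port is pure overhead; the question is only whether maintaining the table in Elm is worth it.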

                                              Here’s another example of when using a port is not clearly the correct answer: https://github.com/elm/html/issues/172. There’s a legitimate use case for innerHTML, which has recently been disabled, and using a port means that you have to do a lot of accounting to make sure the inner html isn’t clobbered by Elm.

                                              Elm’s inner circle has lately recommended using custom elements. To me, this is essentially a tacit acknowledgment that some form of synchronous foreign function interface is called for. The argument has always been that FFI is unsafe. But custom elements, which run arbitrary JavaScript, are just as unsafe. And they’ve been used to wrap libraries like Moment, not just to create custom DOM elements. So they are essentially foreign functions that you can’t get a return value from. The same people encourage the community to use a less-useful form of FFI on the one hand and respond with very opinionated criticism of full FFI on the other hand. This is the kind of thing that causes frustration in the community.

                                              1. 1

                                                Here’s the previous discussion (I was curious about it).

                                        1. 9

                                          There is no such thing as a clever computer. There are just humans tricking themselves into thinking that the complex mathematics and statistics they observe are becoming self-conscious.

                                          There will not be any machine revolt against humans, just humans handing their resources over to something they do not fully master.

                                          It’s just like killing the ssh daemon and pretending that the computer is taking control, when really the connection just closed unexpectedly and you locked yourself out.

                                          1. 4

                                            Why not? Is there any reason, other than literally believing in souls, that human behavior cannot be replicated in mathematics?

                                            Which isn’t to say that it’s been done yet. Personally, I think the most interesting thing about watching the whole deep-neural-net bubble pop is seeing how utterly inhuman they are. Remember the old rumor about someone defining mankind as a “featherless biped” and being presented a plucked chicken as a specimen? Pretty much the exercise the AI community has been going through with defining “intelligence”.

                                            Which is, itself, not to say that the presence of bizarre flaws disqualifies something from counting as intelligence. If it did, then there isn’t any intelligent life down here.

                                            1. 2

                                              As I elaborated in a sibling reply, personally, I believe the main problem is that we don’t actually know what constitutes “human behavior” to start with — in other words, as you say, we can’t even define intelligence/sentience, nor test for its presence in animals; in fact, not even in fellow humans!

                                              1. 1

                                                After reading this thread again, I understood something better:

                                                Given something we refer to as an artificial intelligence, a computer-based one.

                                                There would be an operating system, of course, as it runs on computer hardware and needs to take this into account.

                                                On top of it we can run our application, the “AI engine”: one large program.

                                                Everything up to this point that may appear clever is the work of many engineers. Let’s not take this for “Artificial Intelligence”. Just collective human intelligence.

                                                Then, on top of this, we feed the pipe with data: we “forge the AI” by trial-and-error loops, until we find something that matches our needs.

                                                Then any intelligence that appears to us from this might be what we are talking about: the source is the events of Nature; it feeds its learning from the same source as us, the events happening during its existence.

                                                Oops, not exactly nature: we made up the conditions that led to its apparition, so it might be an “Artificial Intelligence In Silico”, as we controlled the parameters.

                                                Some totally free “Artificial Intelligence” confronted with all the natural events coming to it would then be a “Natural Intelligence In Silico”. As after all, what appears to us as intelligent is the number crunching that came out of natural events.

                                                After all, what is called wizardry today might be called science tomorrow.

                                                Thank you all for widening the angle of view.

                                              2. 3

                                                We imbue our creations with elements of ourselves. Our machines are temperamental, our equipment unhappy, and our houses healthy. It is no wonder to me that practitioners of this technical craft fall into the same trap as practitioners of every other. The worrisome part is how close attention the public and AI scientists pay to the programmers talking to their machines.

                                                1. 2

                                                  My core view on “computer/AI self-consciousness/sentience/intelligence” is that, as far as I know, we don’t even have any sensible test for consciousness/sentience at all; not only can’t we say whether any animal (dog, ape, dolphin) is sentient/conscious/intelligent, we don’t really even know how to definitively test whether any random human has consciousness/sentience/intelligence (even the well-known IQ score is disputed and has many significant flaws)! From a different angle, a “test of sentience” would be important for the pro-/anti-abortion activism question of “when does an embryo become a human” — AFAIK, we also can’t answer this question, at least in the aspect of “consciousness/sentience/intelligence/soul/…”. Given this, I find it ridiculous that some people dare to claim we “will soon create conscious/sentient/intelligent computers”.

                                                  1. 4

                                                    we don’t even have any sensible test for consciousness/sentience at all;

                                                    There are a few “theory of mind” tests for great apes & dolphins, but I don’t know how you would apply them to something without a corporeal form (most of the ones I know of are “watch something that looks like me and intuit what it will be thinking” or “do you recognize yourself in the mirror, or do you see it as another animal? do you try to look for the spot we placed on the mirror on yourself?”). I think it’s definitely an interesting place to see some more research, that’s for sure.

                                              1. 3

                                                I’ve said this a few times (even here!), but I really stopped trusting google when Google+ recommended a sensitive client contact to me as a Google+ connection.

                                                I had:

                                                • all phone contacts set to “Device Only” on that phone
                                                • no backups of contacts to cloud or the like
                                                • (I thought) set Google+ to be disallowed from accessing phone contacts

                                                I don’t think Apple is “better” about this sort of thing, but they seem to mainly use it to make improvements to the daily use-cases of their users… for now. Once they start to tap or sell that data in a widespread way, it’ll be too late in any case, and it’ll just be another Facebook/Google situation.

                                                1. 1

                                                  Apple says they are focused on customer privacy. Ironically, the massive amount of money they make is seen as the best guarantee that they’ll actually respect privacy - they don’t need the money like Google and Facebook do.

                                                  1. 2

                                                    I don’t disagree at all, I just mean it is possible at some point for that to change, and at that point they’re sitting on a mountain of data, similar to that of Google and Facebook.

                                                    1. 3

                                                      Yes, I agree. It’s not unreasonable for a future management at Apple to be tempted to monetize this data.

                                                      The comparison of personal data with toxic waste is quite appropriate. It’s hard to store securely, it can cause a lot of problems if misused, and it can last a long time.

                                                      1. 3

                                                        exactly. I usually phrase it as:

                                                        • it’s difficult to store securely
                                                        • it’s difficult to identify procedures for safe handling
                                                        • it lasts forever
                                                        • when it leaks it destroys your environment.

                                                        I also like the nuclear materials comparison as well, since it has many of the same features, but also has the “extremely useful when used correctly, extremely harmful when put in the wrong hands” dichotomy.

                                                        1. 3

                                                          As long as the cost of business of losing or misusing personal information is lower than the benefits of utilizing it (in user tracking, ad targeting etc), there’s zero chance of the industry handling personal details changing.

                                                          1. 3

                                                            oh for sure; forget OPM, look at how poorly Equifax responded and their punishment has been squints at notes they’re selling more services than ever?

                                                            Their customers were not the folks impacted; their customers are other businesses and such, so nothing happened. It’s a complete shame.

                                                1. 2

                                                  Always nice to see these sort of comprehensive overviews, since I always feel like the space is a bit overwhelming to know where to start learning.

                                                  One question, not necessarily related to the material at hand, but something that stuck out at me:

                                                  Soundness prevents false negatives, i.e., all possible unsafe inputs are guaranteed to be found, while completeness prevents false positives, i.e., input values deemed unsafe are actually unsafe.

                                                  Did anyone else learn these definitions as switched from the above? In my education (and in informal usage of the terms), “sound” meant “if you’re given an answer, it is actually valid”, whereas “complete” meant “if it’s valid, it’ll be guaranteed to be given as an answer” (e.g. certain logic programming systems might be sound but not complete), which is the opposite. Do different sub-disciplines use these terms the other way from how I learned them? (Or did I learn it incorrectly?)
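
                                                  To make concrete what I mean, here is a toy checker (function and inputs invented purely for illustration) that is sound but not complete in the sense I learned:

                                                  ```python
                                                  # A toy "analyzer" that must decide whether a denominator is safe.
                                                  # Sound in the sense I learned ("any answer it gives is valid"):
                                                  # everything it accepts really is nonzero. Not complete: it rejects
                                                  # some expressions that are in fact always nonzero.

                                                  def definitely_nonzero(expr):
                                                      """Accept only literal nonzero integers; treat anything else as unknown."""
                                                      return isinstance(expr, int) and expr != 0

                                                  # Accepted, and genuinely safe (soundness).
                                                  assert definitely_nonzero(2)

                                                  # "abs(x) + 1" is always nonzero, but the checker can't tell from the
                                                  # syntax alone, so it conservatively rejects it (incompleteness).
                                                  assert not definitely_nonzero("abs(x) + 1")
                                                  ```

                                                  Under these definitions the analyzer never blesses an unsafe input, at the cost of rejecting some safe ones, which is why the quoted passage reads as swapped to me.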

                                                  1. 1

                                                    Sorry for the late reply, this week has been trying!

                                                    Wikipedia says it better than I will:

                                                    In mathematical logic, a logical system has the soundness property if and only if every formula that can be proved in the system is logically valid with respect to the semantics of the system.

                                                    “Complete” is a bit more complex, but basically something is complete when you can use it to derive every formula within the system. There are slight differences based on what completeness element you’re discussing, such as complete formal languages vs complete logical semantics.

                                                    I don’t think you learnt it incorrectly, just probably focused on the area you were learning. Wrt the section outlined, the difference there is that you can either detect all possibly unsafe inputs (they are guaranteed to be logically valid for the domain and thus possibly unsafe) OR you can ensure that everything found is actually unsafe (i.e. it actually expresses the nature of “un-safety” to a particular program’s semantics).

                                                    Does that make more sense? It’s quite early here and I’m still ingesting caffeine, so I apologize if not…

                                                  1. 3

                                                    I was going to ask questions about “the kernel stack is executable”, but then I saw “MIPS”

                                                    1. 2

                                                      Interestingly, @brucem and I had that conversation point this morning as well, since MIPS limits options with certain things, but also brings new restrictions.

                                                    1. 4

                                                      $work:

                                                      • I’m on research week, so more symbolic execution and future of smart contract stuff
                                                      • some report editing
                                                      • fixing up some client tooling
                                                      • looking into some F# stuff I’m seeing in the space

                                                      !$work:

                                                      • I removed 369 lines of type parsing code from my compiler, which resulted from a simple grammar change I made
                                                      • need to finish some more work on the match form
                                                      • I’ve started stubbing out a new CTF I’m working on, a historical CTF with historical machines & languages
                                                      1. 2

                                                        Any more info on the historical CTF? That sounds really interesting.

                                                        1. 2

                                                          so I’ve written a historical CTF once before: Gopher, a modified RSH, and MUSH running atop Inferno, which was pretty interesting.

                                                          For this one, I’d like to have a MULTICS/PR1MOS-like system and a VMS/TWENEX-like system that players must attack and defend. The code would be written in languages appropriate for those two systems (like a DCL clone, some Algol clones, and so on), with flags planted throughout. It’s a lot of work, but I think the result would be really fun, if quite challenging for participants (new languages, structures, protocols).

                                                      1. 15

                                                        Your thinkpad is shared infrastructure on which you run your editor and forty-seven web sites run their javascripts. Is that a problem for you?

                                                        1. 2

                                                          Mmm what did you mean by this? I didn’t get it.

                                                          1. 13

                                                            In We Need Assurance, Brian Snow summed up much of the difficulty securing computers:

                                                            “The problem is innately difficult because from the beginning (ENIAC, 1944), due to the high cost of components, computers were built to share resources (memory, processors, buses, etc.). If you look for a one-word synopsis of computer design philosophy, it was and is SHARING. In the security realm, the one word synopsis is SEPARATION: keeping the bad guys away from the good guys’ stuff!

                                                            So today, making a computer secure requires imposing a “separation paradigm” on top of an architecture built to share. That is tough! Even when partially successful, the residual problem is going to be covert channels. We really need to focus on making a secure computer, not on making a computer secure – the point of view changes your beginning assumptions and requirements!”

                                                            Although security features were added, the degree to which things are shared and packed closer together only increased over time to meet market requirements. Then researchers invented hundreds of ways to secure code and OS kernels. Not only were most ignored; the market shifted to turning browsers into OSes running malicious code in a harder-to-analyze language whose compiler (a JIT) was harder to secure due to timing constraints. Only a handful of projects in high security, like IBOS and Myreen, even attempted it. So browsers running malicious code are a security threat in a lot of ways.

                                                            That’s a subset of two, larger problems:

                                                            1. Any code in your system that’s not verified to have specific safety and security properties might be controlled by attackers upon malicious input.

                                                            2. Any shared resource might leak your secrets to a malicious observer via covert channels, storage or timing. Side channels are basically the same concept applied more broadly, like in the physical world. Even the LEDs on your PC might leak internal state of the processor, depending on the design.
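
                                                            To make point 2 concrete, here is a hedged toy model (all names invented; a Python set stands in for the shared cache, and direct state inspection stands in for the timing measurement a real attacker would use):

                                                            ```python
                                                            # Two "processes" that never talk directly can still communicate through
                                                            # a shared resource. Here a set models which cache lines are "hot"; a
                                                            # real covert channel would infer this from access timing instead.

                                                            shared_cache = set()  # the shared resource both parties can touch

                                                            def sender(bits):
                                                                """Encode each bit by touching (or not touching) cache line i."""
                                                                for i, bit in enumerate(bits):
                                                                    if bit:
                                                                        shared_cache.add(i)

                                                            def receiver(n):
                                                                """Recover the bits by probing which lines are hot."""
                                                                return [1 if i in shared_cache else 0 for i in range(n)]

                                                            secret = [1, 0, 1, 1, 0]
                                                            sender(secret)
                                                            assert receiver(len(secret)) == secret  # the secret crossed the boundary
                                                            ```

                                                            Real channels are noisy and rate-limited, but the principle is the same: sharing is what makes the leak possible.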

                                                            1. 2

                                                              Hmm. I had a friend yonks ago who worked on BAE’s STOP operating system, that supposedly uses complex layers of buffers to isolate programs. I wonder how it’s stood up against the many CPU vulnerabilities.

                                                              1. 4

                                                                I’ve been talking about STOP for a while but rarely see it. Cool you knew someone that worked on it. Its architecture is summarized here along with GEMSOS’s. I have a detailed one for GEMSOS tomorrow, too, if not previously submitted. On the original implementation (SCOMP), the system also had an IOMMU that integrated with the kernel. That concept was re-discovered some time later.

                                                                Far as your question, I have no idea. These two platforms, along with SNS Server, have had no reported hacks for a long time. You know they have vulnerabilities, though. The main reasons I think the CPU vulnerabilities will affect them are (a) they’re hard to avoid and (b) certification requirements mean they rarely change these systems. They’re probably vulnerable, esp to RAM attacks. Throw network Rowhammer at them. :)

                                                              2. 2

                                                                Thanks, that was really interesting and eye opening on the subject. I never saw it that way! :)

                                                              3. 5

                                                                I think @arnt is saying that website JavaScript can exploit CPU bugs, so by browsing the internet you are “shared infrastructure”.

                                                                1. 6

                                                                  Row Hammer for example had a JavaScript implementation, and Firefox (and others) have introduced mitigations to prevent those sorts of attacks. Firefox also introduced mitigations for Meltdown and Spectre because they could be exploited from WASM/JS… so it makes sense to mistrust any site you load on the internet, especially if you have an engine that can JIT (but all engines are suspect; look at how many pwn2own wins are via Safari or the like)

                                                                  1. 3

                                                                    If browsers have builtin mitigation for this sort of thing, isn’t this an argument in favor of disabling the OS-level mitigation? Javascript is about the only untrusted code that I run on my machine so if that’s already covered I don’t see a strong reason to take a hit on everything I run.

                                                                    1. 4

                                                                      I think the attack surface is large enough even with simple things like JavaScript that I’d be willing to take the hit, though I can certainly understand certain workloads where you wouldn’t want to, like gaming or scientific computing.

                                                                    For example, JavaScript can be introduced in many locations, like PDFs, Electron, and so on. Also, there are things like Word documents, such as this RTF remote code execution for MS Word. Additionally, the mitigations in browsers are just that, mitigations; things like retpolines and the like work in a larger setting with more “surface area” covered, vs timing mitigations or the like in browsers. It’s kinda like W^X page protections or ASLR: the areas where you’d need them are quite small, but it’s harder to find the individual applications with exploits and easier to just apply the protection wholesale to the entire system.

                                                                      Does that make sense?

                                                                    tl;dr: JS is basically everywhere in everything, so it’s hard to just apply those fixes in a single location like a browser when other things may have JS exposed as well. Furthermore, there are other languages, attack surfaces, and the like I’d be concerned about, so it’s just not worth it to rely only on browsers, which can only implement partial mitigations.

                                                                      1. 1

                                                                        Browsers do run volatile code supplied by others more than most other attack surfaces do. You may have an archive of invoices in PDF format, as I have, and those may in principle contain javascript, but those javascripts aren’t going to change all of a sudden, and they all originate from a small set of parties (in my case my scanning software and a single-digit number of vendors). Whereas example.com may well redeploy its website every Tuesday morning, giving you the latest versions of many unaudited third-party scripts, and neither you nor your bank’s web site really trust example.com or its many third-party scripts.

                                                                        IMO that quantitative difference is so large as to be described as qualitative.

                                                                        1. 1

                                                                          The problem is when you bypass those protections you can have things like this NitroPDF exploit, which uses the API to launch malicious JS. I’ve used these sorts of exploits on client systems during assessments, adversarial or otherwise. So relying on one section of your system to protect you against something that is a fundamental CPU design flaw can be problematic; there’s nothing really stopping you from launching rowhammer from PostScript itself, for example. This is why the phrase “defense in depth” is so often mentioned in security circles, since there can be multiple failures throughout a system, but in a layered approach you can catch it at one of the layers.

                                                                          1. 1

                                                                            Oh, I’m not arguing that anyone should leave out everything except browser-based protection. Defense in depth is indisputably good.

                                                                      2. 3

                                                                        There’s also the concept of layers of defense. Let’s say the mitigation fails. Then you want the running malicious code to be sandboxed somehow by another layer of defense, so you might reduce or prevent damage. The next idea folks had was to mathematically prove the code could never fail. What if a cosmic ray flips a bit that changes that? Uh oh. Your processor is assumed to enable security, and you’re building an isolation layer on it, so make it extra isolated just in case shared resources have an effect; then only one of Spectre/Meltdown affected you if you’re Muen. Layers of security are still a good idea.

                                                                    2. 2

                                                                      That’s not what I got from it. I perceived it as “You’re not taking good precautions on this low hanging fruit, why are you worried about these hard problems?”

                                                                      I see it constantly, everyone’s always worried about X, and then they just upload everything to an unencrypted cloud.

                                                                      1. 1

                                                                        I actually did mean that when you browse the net, your computer runs code supplied by web site operators you may not trust, and some of those web site operators really are not trustworthy, and your computer is shared infrastructure running code supplied by users who don’t trust each other.

                                                                        Your bank’s site does not trust those other sites you have open in other tabs, so that’s one user who does not trust others.

                                                                        You may not trust them, either. A few hours after I posted that, someone discovered that some npmjs package with millions of downloads has been trying to steal bitcoin wallets, so that’s millions of pageviews that ran malevolent code on real people’s computers. You may not have reason to worry in this case, but you cannot trust sites to not use third-party scripts, so you yourself also are a distrustful user.

                                                                  2. 2

                                                                    This might be obvious, but I gotta ask anyway: Is there a real threat to my data when I, let’s say, google for a topic and open the first blog post that seems quite right?

                                                                    • Would my computer be breached immediately (like I finished loading the site and now my computer’s memory is in North Korea)?
                                                                    • How much data would be lost, and would the attacker be able to read any useful information from it?
                                                                    • Would I be infected with something?

                                                                    Of course I’m not expecting any precise numbers, I’m just trying to get a feel for how serious it is. Usually I felt safe enough just knowing which domains and topics (like pirated software, torrents, pron of course) to avoid, but is that not enough anymore?

                                                                    1. 5

                                                                      To answer your questions:

                                                                      Would my computer be breached immediately (like I finished loading the site and now my computer’s memory is in North Korea)?

                                                                      Meltdown provides read access to privileged memory (including enclave memory) at rates of a couple of megabits per second (let’s assume 4). This means that if you have 8GB of RAM, it is now possible to dump the entire memory of your machine in about 4.5 hours.
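
                                                                      A quick sanity check of that estimate (assuming the 4 Mbit/s figure, counted in binary megabits):

                                                                      ```python
                                                                      # Back-of-the-envelope check: how long to dump 8 GB of RAM at an
                                                                      # assumed Meltdown read rate of 4 Mbit/s (binary megabits)?
                                                                      ram_bits = 8 * 1024**3 * 8      # 8 GB in bits
                                                                      rate = 4 * 2**20                # 4 Mbit/s
                                                                      hours = ram_bits / rate / 3600
                                                                      print(round(hours, 2))          # prints 4.55, i.e. about 4.5 hours
                                                                      ```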

                                                                      How much data would be lost, and would the attacker be able to read any useful information from it?

                                                                      This depends on the attacker’s intentions. If they are smart, they just read the process table, figure out where your password manager or ssh keys for production are stored in RAM, and transfer the memory contents of those processes. If this is automated, in theory it would take mere seconds; in practice it won’t be that fast, but it’s certainly less than a minute. If they dump your entire memory, it will probably contain all data in all currently running applications, and they will certainly be able to use it, since it’s basically a core dump of everything that’s currently running.

                                                                      Would I be infected with something?

                                                                      Depends on how much of a target you are and whether or not the attacker has the means to drop something onto your computer with the information gained from what I described above. I think it’s safe to assume that they could though.

                                                                      These attacks are quite advanced, and regular hackers will always go for the low-hanging fruit first. However, if you are a front-end developer at some big bank, big corporation, or government institution that could face a threat from competitors and/or economic espionage, the answer is probably yes. You are probably not the true target the attackers are after, but your system is one hell of a springboard towards their real target.

                                                                      It’s up to you to judge how much of a potential target you are, but when it happens, you do not want to be that guy/girl with the “patient zero”-system.

                                                                      Usually I felt safe enough just knowing which domains and topics (like pirated software, torrents, pron of course) to avoid, but is that not enough anymore?

                                                                      Correct, it is not enough anymore, because Rowhammer, Spectre and Meltdown have JavaScript or wasm variants (if they didn’t, we wouldn’t need mitigations in browsers). All you need is a suitable payload (the hardest part by far) and one simple website you frequently visit, which runs on an out-of-date application (like WordPress, Drupal or Joomla, for example), to get that megabit-memory-reading Meltdown attack onto a system.

                                                                      The attacker still has to know which websites those are, but they could send you a phishing mail which has a link or some attachment that will be opened in some environment with support for javascript (or something else) to obtain your browsing history. In that light, it’s good to know that some e-mail clients support the execution of javascript in received e-mail messages.

                                                                      If there is one lesson to take home from rowhammer, spectre and meltdown, it’s that there is no such thing as “computer security” anymore and that we cannot rely on the security-mechanisms given to us by the hardware.

                                                                      If you are developing sensitive stuff, do it on a separate machine and avoid frameworks, libraries, web-based tools, other linked in stuff and each and every extra tool like the plague. Using an extra system, abandoning the next convenient tool and extra security precautions are annoying and expensive, but it’s not that expensive if your livelihood depends on it.

                                                                      The central question is: do you have adversaries or competitors willing to go this far and spend about half a million dollars (my guesstimate of the required budget) to pull off an attack like this?

                                                                      1. 1

                                                                        Wow, thanks! Assuming you know what you’re talking about, your response is very useful and informative. And exactly what I was looking for!

                                                                        […] figure out where your password-manager or ssh-keys for production are stored in ram […]

                                                                        That is a vivid picture of the worst thing I could imagine, albeit I would only have to worry about my private|hobby information and deployment.

                                                                        Thanks again!

                                                                        1. 1

                                                                          You’re welcome!

                                                                          I have to admit that what I wrote above is the worst-case scenario I could come up with. But it is as the guys from Sonatype (of the Nexus Maven repository) once stated: “Developers have to become aware of the fact that what their laptops produce at home, could end up as a critical library or program in a space station. They will treat and view their infrastructure, machines, development processes and environments in a fundamentally different way.”

                                                                          Yes, there are Java programs and libraries from Maven Central running on the ISS.

                                                                      2. 1

                                                                        The classic security answer to that is that last year’s theoretical attack is this year’s nation-state attack and next year it can be carried out by anyone who has a midprice GPU. Numbers change, fast. Attacks always get better, never worse.

                                                                        I remember seeing an NSA gadget for $524000 about ten years ago (something to spy on ethernet traffic, so small as to be practically invisible), and recently a modern equivalent for sale for less than $52 on one of the Chinese gadget sites. That’s how attacks change.

                                                                    1. 29

                                                                      I share the author’s frustrations, but I doubt the prescriptions as presented will make a big difference, partly because they have been tried before.

                                                                      And they came up with Common Lisp. And it’s huge. The INCITS 226–1994 standard consists of 1153 pages. This was only beaten by C++ ISO/IEC 14882:2011 standard with 1338 pages some 17 years after. C++ has to drag a bag of heritage though, it was not always that big. Common Lisp was created huge from the scratch.

                                                                      This is categorically untrue. Common Lisp was born out of MacLisp and its dialects, it was not created from scratch. There was an awful lot of prior art.

                                                                      This gets at the fatal flaw of the post: it doesn’t address the origins of the parts of programming languages the author is rejecting. Symbolic representation is mostly a rejection of verbosity, especially of that in COBOL (ever try to actually read COBOL code? I find it very easy to get lost in the wording), and a way to more closely represent the domains targeted by the languages. Native types end up existing because there comes a time where the ideal of maths meets the reality of engineering.

                                                                      Unfortunately, if you write code for other people to understand, you have to teach them your language along with the code.

                                                                      I don’t get this criticism of metaprogramming since it is true of every language in existence. If you do metaprogramming well, you don’t have to teach people much of anything. In fact, it’s the programmer that has to do the work of learning the language, not the other way around.

                                                                      The author conveniently glosses over the fact that part of the reason there are so many programming languages is that there are so many ways to express things. I don’t want to dissuade the author from writing or improving on COBOL to make it suitable for the 21st century; they can even help out with the existing modernization efforts (see OO COBOL), although they may be disappointed to find out COBOL is not really that small.

                                                                      If you do click through and finish the entire post you’ll see the author isn’t really pushing for COBOL. The key point is made: “Aren’t we unhappy with the environment in general?” This, I agree, is the main problem. No solution is offered, but there is a decent sentiment about responsibility.

                                                                      1. 1

                                                                        Also if you want a smaller Lisp than CL with many of its more powerful features, there’s always ISLisp, which is one of the more under-appreciated languages I’ve seen. It has many of the nicer areas of CL, with the same syntax (unlike Dylan, which switched to a more Algol-like one), but still has a decent specification weighing in at a mere 134 pages.

                                                                      1. 10

                                                                        That was a surprisingly fun quick read.

                                                                        As an example of another language that would fit the bill but be more…modern than the one described in TFA (no spoilers) would be REBOL.

                                                                        It was the path not taken, sadly.

                                                                        1. 9

                                                                          You might be aware, but Red is following that path. But they’ve gone off on a cryptocurrency tangent; I’m not quite sure what’s going on there anymore.

                                                                          1. 4

                                                                            I think dialecting à la REBOL is super interesting, but I also think this sort of “wordy” input like AppleScript and DCL will eventually just become short forms that often require just as much effort to read later… that’s how you’d have things like show device... foreshortened to sho dev ....

                                                                            Having said that, SRFI-10 or the #. form from Common Lisp is a happy medium, I think.

                                                                            1. 3

                                                                              that’s how you’d have things like show device… foreshortened to sho dev

                                                                              I have not been responsible for a Cisco router in at least 15 years but I still find myself typing “sh ip int br” on occasion.
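The abbreviation mechanism being riffed on here (any unambiguous prefix of each keyword is accepted) is simple enough to sketch. The command table below is hypothetical and much smaller than a real IOS grammar; this is just a minimal model of how “sh ip int br” resolves, not Cisco’s actual implementation.

```python
# Minimal sketch of IOS-style command abbreviation: at each position,
# a word is accepted if it is a prefix of exactly one candidate keyword.
# The command table is made up for illustration.
COMMANDS = [
    ["show", "ip", "interface", "brief"],
    ["show", "ip", "route"],
    ["show", "version"],
]

def expand(line):
    words = line.split()
    candidates = COMMANDS
    expanded = []
    for i, word in enumerate(words):
        # Keyword(s) at position i that this prefix could mean.
        matches = {c[i] for c in candidates if len(c) > i and c[i].startswith(word)}
        if len(matches) != 1:
            raise ValueError(f"ambiguous or unknown prefix: {word!r}")
        keyword = matches.pop()
        expanded.append(keyword)
        # Narrow the candidate set for the next position.
        candidates = [c for c in candidates if len(c) > i and c[i] == keyword]
    return " ".join(expanded)

print(expand("sh ip int br"))  # → show ip interface brief
```

The muscle-memory effect follows directly: once “sh ip int br” is unambiguous, nobody ever types the long form again.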

                                                                              1. 2

                                                                                hahahaha oh lord, I know what you mean. I still have ancient devices burned in my brain as well, like OpenVMS and what not. Still, I think it goes to show that making things more “natural language-like” doesn’t really mean we want to write like that… there’s probably some balance to be struck between succinctness and power that we haven’t figured out yet

                                                                            2. 2

                                                                              I also loved the bit of engagement at the end with the buttons. Been a string of really well written (light) technical articles lately, hope the trend continues.

                                                                              I ported a REBOL app (using the full paid stack) to C# – the code inflation and challenge of making a 1:1 exact copy (no retraining port) was phenomenal. Most stuff took nearly an order of magnitude more code. There were some wins (dynamic layouts, resizing, performance) – but REBOL had shockingly good bang for the buck and dialects only really took a few days to grok.

                                                                            1. 7

                                                                              Pentesters always want to sound like it’s some sort of action movie, and I am tired of it.

                                                                              Good on the company for having their security in order. Breaking in and prying out disks of laptops in storage is a bit over the top.

                                                                              1. 14

                                                                                The hardest part of any security job is communicating your findings effectively to your audience.

                                                                                A pen-test of a corporate network is not the most exciting topic in the world of security so I’m sure attempts at adding some drama and a story helps.

                                                                                1. 4

                                                                                  Depends on the scope of the assessment; I have had clients that have wanted me to break into things, and device theft was definitely in scope. Working adversary simulation, OPFOR, whatever, has different scope. On the flip side, I’ve definitely seen pentesters/red teamers who just want to “win” regardless of the scope or cost. This provides almost nothing of value to a client: if they knew their physical security was weak, breaking into the data center provides nothing to a client who wanted to know how well their validation schemes worked.

                                                                                  I remember once being on site with another company that usually did “full scope” assessments as their bread-and-butter. The first day of their web app test, they:

                                                                                  • tried to unplug a phone
                                                                                  • spoofed the phone’s MAC address
                                                                                  • bypassed network restrictions and NAC via the phone to get to a database

                                                                                  on a web app… The client wanted to know about their web app, not their network security (which was actually fairly decent). Anyway, I finished my application early and was asked to step in and take over that assessment…

                                                                                1. 8

                                                                                  $work:

                                                                                  • finishing a symbolic execution engine for a client’s custom programming language; need to add more primitives, and feed my computation traces to an actual SMT solver.
                                                                                  • assessment work
                                                                                  • writing some templates for our findings, some sales engineering and client meetings
                                                                                  • Talk on blockchain security

                                                                                  !$work:

                                                                                  • finally finishing pattern matching in carML
                                                                                  • adding some more threat hunting items to wolf-lord
                                                                                  1. 2

                                                                                    How did your client end up with a custom programming language?

                                                                                    1. 2

                                                                                      believe it or not, it’s surprisingly common in the blockchain space, esp wrt validator languages for proof of authority, as well as for “novel” smart contract languages.

                                                                                  1. 15

                                                                                    Q: is the HTTP protocol really the problem that needs fixing?

                                                                                    I’m under the belief that if the HTTP overhead is causing you issues then there are many alternative ways to fix this that don’t require more complexity. A site doesn’t load slowly because of HTTP; it loads slowly because it’s poorly designed in other ways.

                                                                                    I’m also suspicious of Google’s involvement. HTTP/1.1 over TCP is very simple to debug and do by hand. Google seems to like closing or controlling open things (Google chat dropping XMPP support, Google AMP, etc.). Extra complexity is something that should be avoided, especially for the open web.

                                                                                    1. 10

                                                                                      They have to do the fix on HTTP because massive ecosystems already depend on HTTP and browsers with no intent to switch. There’s billions of dollars riding on staying on that gravy train, too. It’s also worth noting lots of firewalls in big companies let HTTP traffic through but not better-designed protocols. The low-friction improvements get more uptake by IT departments.

                                                                                      1. 7

                                                                                        WAFs and the like barely support HTTP/2 tho; a friend gave a whole talk on bypasses and scanning for it, for example

                                                                                        1. 6

                                                                                          Thanks for the feedback. I’m skimming the talk’s slides right now. So far, it looks like HTTP/2 got big adoption but WAFs lagged behind. Probably just riding their cash cows, minimizing further investment. I’m also sensing a business opportunity if anyone wants to build an HTTP/2 and /3 WAF that works, with independent testing showing the others don’t. Might help bootstrap the company.

                                                                                          1. 3

                                                                                            ja, that’s exactly correct: lots of the big-name WAFs/NGFWs/&c. are missing support for HTTP/2 but many of the mainline servers support it, so we’ve definitely seen HTTP/2 as a technique to bypass things like SQLi detection, since they don’t bother parsing the protocol.

                                                                                            I’ve also definitely considered doing something like CoreRuleSet atop HTTP/2; could be really interesting to release…

                                                                                            1. 4

                                                                                              so we’ve definitely seen HTTP/2 as a technique to bypass things like SQLi detection, since they don’t bother parsing the protocol.

                                                                                              Unbelievable… That shit is why I’m not in the security industry. People mostly building and buying bullshit. There’s exceptions, but usually set up to sell out later. Products based on dual-licensed code are about the only thing immune to vendor risk. Seemingly. Still exploring hybrid models to root out this kind of BS or force it to change faster.

                                                                                              “I’ve also definitely considered doing something like CoreRuleSet atop HTTP/2; could be really interesting to release…”

                                                                                              Experiment however you like. I can’t imagine what you release being less effective than web firewalls that can’t even parse the web protocols. Haha.

                                                                                              1. 5

                                                                                                Products based on dual-licensed code

                                                                                                We do this where I work, and it’s pretty nice, tho of course we have certain things that are completely closed source. We have a few competitors that use our products, so it’s been an interesting ecosystem to dive into for me…

                                                                                                Experiment however you like. I can’t imagine what you release being less effective than web firewalls that can’t even parse the web protocols. Haha.

                                                                                                pfff… there’s an “NGFW” vendor I know that…

                                                                                                • when it sees a connection it doesn’t know, analyzes the first 5k bytes
                                                                                                • the connection is allowed to continue until the 5k+1st byte is seen
                                                                                                • subsequently, if your exfiltration process transfers data in chunks of <= 5kB, you’re ok!

                                                                                                we found this during an adversary simulation assessment (“red team”), and I think it’s one of the most asinine things I’ve seen in a while. The vendor closed it as “works as expected”.
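The gap described above is easy to model from the defender’s side. This is a toy sketch, not the vendor’s actual logic: an inspector that only ever examines the first 5,000 bytes of a stream, so any marker appearing after that window is invisible to it (the “SECRET” marker and window size are illustrative).

```python
# Toy model of fixed-window inspection: only the first 5,000 bytes of
# a new connection are ever scanned, so content past the window is
# never seen by the inspector at all.
INSPECT_WINDOW = 5000

def inspected(stream: bytes, pattern: bytes) -> bool:
    # Scan only the inspection window, mirroring the described behavior.
    return pattern in stream[:INSPECT_WINDOW]

padding = b"A" * INSPECT_WINDOW
print(inspected(b"SECRET-DATA" + padding, b"SECRET"))  # True: inside the window
print(inspected(padding + b"SECRET-DATA", b"SECRET"))  # False: past the window
```

Which is why per-connection chunking under the window size slips through: every connection looks benign for its entire inspected prefix.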

                                                                                                edit: fixed the work link, as that’s a known issue.

                                                                                                1. 3

                                                                                                  BTW, Firefox complains when I go to https://trailofbits.com/ that the cert isn’t configured properly…

                                                                                                  1. 2

                                                                                                    hahaha Nick and I were just talking about that; its been reported before, I’ll kick it up the chain again. Thanks for that! I probably should edit my post for that…

                                                                                                    1. 2

                                                                                                      Adding another data point: latest iOS also complains about the cert

                                                                                        2. 3

                                                                                          They have to do the fix on HTTP

                                                                                          What ‘fix’? Will this benefit anyone other than Google?

                                                                                          I’m concerned that if this standard is not actually a worthwhile improvement for everyone else, then it won’t be adopted and the IETF will lose respect. I’m running on the guess that it’s going to have even less adoption than HTTP/2.

                                                                                        3. 13

                                                                                          I understand and sympathize with your criticism of Google, but it seems misplaced here. This isn’t happening behind closed doors. The IETF is an open forum.

                                                                                          1. 6

                                                                                            just because they do some subset of the decision making in the open shouldn’t exempt them from blame

                                                                                            1. 3

                                                                                              Feels like Google’s turned a lot of public standards bodies into rubber stamps for pointless-at-best, dangerous-at-worst standards like WebUSB.

                                                                                              1. 5

                                                                                                Any browser vendor can ship what they want if they think that makes them more attractive to users or what not. Doesn’t mean it’s a standard. WebUSB has shipped in Chrome (and only in Chrome) more than a year ago. The WebUSB spec is still an Editor’s Draft and it seems unlikely to advance significantly along the standards track.

                                                                                                The problem is not with the standards bodies, but with user choice, market incentive, blah blah.

                                                                                                1. 3

                                                                                                  Feels like Google’s turned a lot public standards bodies into rubber stamps for pointless-at-best, dangerous-at-worst standards like WebUSB.

                                                                                                  “WebUSB”? It’s like kuru crossed with ebola. Where do I get off this train.

                                                                                                2. 2

                                                                                                  Google is incapable of doing bad things in an open forum? Open forums cannot be influenced in bad ways?

                                                                                                  This does not displace my concerns :/ What do you mean exactly?

                                                                                                  1. 4

                                                                                                    If the majority of the IETF HTTP WG agrees, I find it rather unlikely that this is going according to a great plan towards “closed things”.

                                                                                                    Your “things becoming closed-access” argument doesn’t hold, imho: While I have done lots of plain text debugging for HTTP, SMTP, POP and IRC, I can’t agree with it as a strong argument: Whenever debugging gets serious, I go back to writing a script anyway. Also, I really want the web to become encrypted by default (HTTPS). We need “plain text for easy debugging” to go away. The web needs to be great (secure, private, etc.) for users first - engineers second.

                                                                                                    1. 2

                                                                                                      That “users first, engineers second” mantra leads to things like Apple and Microsoft clamping down on the “general purpose computer”. Think of the children, er, users! They can’t protect themselves. We’re facing this at work (“the network and computers need to be secure, private, etc.”) and it’s expected we won’t be able to do any development because, of course, upper management doesn’t trust us mere engineers with “general purpose computers”. Why can’t it be for “everybody”? Engineers included?

                                                                                                      1. 1

                                                                                                        No, no, you misunderstand.

                                                                                                        The users first / engineers second is not about the engineers as end users like in your desktop computer example.

                                                                                                        what I mean derives from the W3C design principles. That is to say, we shouldn’t avoid significant positive change (e.g., HTTPS over HTTP) just because it’s a bit harder on the engineering end.

                                                                                                        1. 6

                                                                                                          Define “positive change.” Google shoved HTTP/2 down our throats because it serves their interests not ours. Google is shoving QUIC down our throats because again, it serves their interests not ours. That it coincides with your biases is good for you; others might feel differently. What “positive change” does running TCP over TCP give us (HTTP/2)? What “positive change” does a reimplementation of SCTP give us (QUIC)? I mean, other than NIH syndrome?

                                                                                                          1. 3

                                                                                                            Are you asking how QUIC and H2 work or are you saying performance isn’t worth improving? If it’s the latter, I think we’ve figured out why we disagree here. If it’s the former, I kindly ask you to find out yourself before you enter this dispute.

                                                                                                            1. 3

                                                                                                              I know how they work. I’m asking, why are they reimplementing already implemented concepts? I’m sorry, but TCP over TCP (aka HTTP/2) is plain stupid—one lost packet and every stream on that connection hits a brick wall.

                                                                                                              1. 1

                                                                                                                SPDY and its descendants are designed to allow web pages with lots of resources (namely, images, stylesheets, and scripts) to load quickly. A sizable number of people think that web pages should just not have lots of resources.

                                                                                                1. 1

                                                                                                  Super interesting post; I deal with this quite a bit during assessments of blockchain code that use TypeScript on the front end, and discussing “why you can’t use floats for currency” often comes up. I like what I’m seeing here tho; I don’t know if I can directly recommend to clients, but it’s an interesting discussion point for me to use.
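The “why you can’t use floats for currency” discussion usually starts from the same classic demonstration: binary floating point can’t represent most decimal fractions exactly, so cent amounts drift under accumulation. A minimal sketch (Python here, but the float behavior is identical in JavaScript/TypeScript, which use the same IEEE 754 doubles):

```python
from decimal import Decimal

# Binary floats cannot represent 0.1 exactly, so equality and sums drift.
print(0.1 + 0.2 == 0.3)             # False
print(sum(0.1 for _ in range(10)))  # 0.9999999999999999, not 1.0

# Exact decimal arithmetic (or integer minor units, i.e. cents) avoids it.
print(Decimal("0.1") + Decimal("0.2") == Decimal("0.3"))  # True
print(sum(Decimal("0.1") for _ in range(10)))             # 1.0
```

Ten dimes not summing to a dollar is usually all it takes to make the point to a client.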

                                                                                                  1. 3

                                                                                                    Yep, dealing with the actual monetary values is another big topic which I didn’t really bother to cover here, mostly because I think it’s already been covered in detail (Javascript, Haskell). Thankfully, to turn them into bills and coins I only have to deal with the resulting values; all the heavy lifting in my project is done by Dinero.js with this typings file.

                                                                                                    1. 1

                                                                                                      that’s really interesting, thanks for that!

                                                                                                      Where I usually see issues with clients is code that has two different rounding mechanisms (such as between their own bespoke safemath library for Ethereum and JavaScript). It’s an interesting discussion point to be had, and those links are also interesting, thanks for those!
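That two-rounding-mechanisms mismatch is easy to show concretely. This is a hypothetical illustration (the fee figures and variable names are made up, not from any client code): safemath-style contract arithmetic truncates on integer division, while front-end code rounding a float can land one unit away.

```python
# Hypothetical mismatch between on-chain integer truncation (as in a
# safemath-style library) and front-end float rounding. Figures are
# invented for illustration.
fee_numerator, fee_denominator = 25, 10000  # a 0.25% fee
amount = 39999

# Contract side: integer math, floor/truncating division.
on_chain_fee = amount * fee_numerator // fee_denominator

# Front-end side: float math, then rounding to the nearest unit.
front_end_fee = round(amount * (fee_numerator / fee_denominator))

print(on_chain_fee)   # 99
print(front_end_fee)  # 100: disagrees with the contract by one unit
```

Off-by-one-unit disagreements like this are exactly what surfaces in reconciliation between a UI and the chain.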

                                                                                                      1. 1

                                                                                                        Good links, dude!

                                                                                                    1. 3

                                                                                                      $work:

                                                                                                      • reaping the joy of automating a bunch of infrastructure by bringing up a few new instances of our app in various geolocations for local use there. Super satisfying seeing the work we put in ahead of time pay off. (We knew a few months ago we’d absolutely have to do this, so it was just a case of when not if.)

                                                                                                      !$work:

                                                                                                      • Monthly pub quiz with family
                                                                                                      • Finally got the spare Microserver booting reliably from the SSD (… by making it boot from USB which then loads everything from the SSD. Three cheers for grub.) which means I need to invest some time into making everything run on the server now.
                                                                                                      • Flying to Madrid on Friday for a long weekend visit. First time visiting Spain 🇪🇸, really looking forward to it. (Not taking a laptop 🙃)
                                                                                                      1. 2

                                                                                                        I like this format, gonna steal it :)

                                                                                                        1. 2

                                                                                                          Hah, more than welcome to. Fairly sure I’m just regurgitating prior art from other people on these threads previously. 😁

                                                                                                      1. 7

                                                                                                        When I first read about Capsicum back in 2010 I thought it was a very cool idea, much like the later pledge system call in OpenBSD. I especially liked the idea that they introduced Capsicum calls to Google Chromium, as browsers are just piles and piles of code that you just generally have to trust. It’s just really unfortunate that these things are all tied to a specific operating system.

                                                                                                        I wonder if those Capsicum changes were ever accepted upstream and are still maintained?

                                                                                                        1. 21

                                                                                                          It was intended to be a cross-platform concept. Lots of the big companies have Not Invented Here syndrome, which ties into their liking to control and patent anything they depend on, too. Examples:

                                                                                                          1. Google’s NaCl was weaker, but faster, than capability security. Android just used Java and basic permissions.

                                                                                                          2. Microsoft Research made a lot of great stuff, but the Windows division only applies the tiniest amount of what they do.

                                                                                                          3. I saw a paper once about Apple trying to integrate Capsicum into Mac OS. I’m not sure if that went anywhere.

                                                                                                          4. Linux tried a hodgepodge of things, with SELinux containing most malware at one point. It was a weaker version of what the MLS and Type Enforcement branches of high security were doing in the LOCK project. These days it’s even more of a hodgepodge, with lots of techniques focused on one kind of protection or issue sprinkled all over the ecosystem. Too hard to evaluate its actual security.

                                                                                                          5. FreeBSD, under TrustedBSD, was originally doing something like SELinux, too. The Capsicum team gave them capability security. That’s probably a better match for today’s security policies. However, one might be able to combine features from each project for stronger security.

                                                                                                          6. OpenBSD kept a risky architecture but put an ultra-strong focus on code review and mitigations for specific attacks. It’s also hard to evaluate. It should be harder to attack, though, since most attackers focus on coding errors.

                                                                                                          7. NetBSD and DragonflyBSD. I have no idea what their state of security is. Capsicum might be easy to integrate into NetBSD given they design for portability and easy maintenance.

                                                                                                          8. High-security kernels. KeyKOS and EROS were all-in on the capability model. Separation kernels usually have capabilities as a memory-access and/or communication mechanism, but their policies are neutral across security models. The consensus in high-assurance security is that the above OSes need to be sandboxed entirely in their own process/VM space, since there’s too much risk of them breaking. Security-critical components are to run outside of them, on minimal runtimes and/or directly on tiny kernels. These setups use separation kernels with VMMs designed to work with them and something to generate IPC automatically for developer convenience. Capsicum could theoretically be ported to one, but they’re easier to use directly.

                                                                                                          9. I should throw in the IBM i series (formerly AS/400). The early version, System/38, described in this book, was capability-secure at the hardware level. They appear to have ditched hardware protections in favor of software checks. Unless I’m dated on it, it’s still a capability-based architecture at low levels of the system, with PowerVM used to run Linux side-by-side to get its benefits. That makes it a competitor to Capsicum and the longest-running capability-based product on the market. The longest-running descriptor architecture, which also ditched full protections in hardware, is the Burroughs 5500, sold by Unisys in its modern form as ClearPath Libra.

                                                                                                          1. 5

                                                                                                            Nice listing, thanks! If you haven’t heard of it, and going in a slightly different direction, you may be interested in CheriBSD, which is a port of FreeBSD on top of capability hardware, the CHERI machine. (This makes it pretty much undeployable for now, but it’s interesting research that I expect to pay dividends in many ways.) The core people working on Capsicum are also working on CHERI.

                                                                                                            1. 4

                                                                                                              My post was mainly about the software side. On the hardware side, I’m following that really closely, along with research like Criswell’s SVA-OS (FreeBSD-based) and the Hardbound/Watchdog folks. They’re all doing great work of making fundamental problems disappear with a minimal performance hit. I was pushing some hardware people to port CHERI to Rocket RISC-V. There weren’t any takers. One company ported SAFE to RISC-V as CoreGuard.

                                                                                                              CHERI is still one of my favorite possibilities, though. I plan to run CheriBSD if I ever get a hold of a FPGA board and the time to make adjustments.

                                                                                                            2. 3

                                                                                                              Wow, thank you for the extremely thorough reply (this is the sort of thing I really like about the lobste.rs community)!

                                                                                                              It makes sense that there are multiple experiments and various OSes having a completely different approach (the hardware protection of System/38 you mentioned sounds particularly interesting), but I was mostly thinking about the POSIX OSes. The Capsicum design fits quite well into the POSIX model of the world.

                                                                                                              I wonder why Apple did not follow through with Capsicum. They’re not too afraid to take good ideas from other OSes (dtrace comes to mind, and their userland comes mostly from FreeBSD IIRC).

                                                                                                              1. 3

                                                                                                                Capsicum might be easy to integrate into NetBSD given they design for portability and easy maintenance

                                                                                                                There was a port of CloudABI to NetBSD, which kind of “includes” Capsicum (just not for NetBSD-native binaries).

                                                                                                                one might be able to combine features from each project for stronger security

                                                                                                                Indeed. Sandboxes protect the world from applications touching things they’re not supposed to. MAC systems like TrustedBSD and SELinux were (at least originally) designed to implement policies at an organizational level: documents have sensitivity levels (not secret, secret, top secret), people have access only to levels at or below some value, and so on.

                                                                                                                1. 2

                                                                                                                  Re CloudABI. Thanks for the tip.

                                                                                                                  Re the 2nd paragraph: you’re on the right track but missing the overlap. SELinux came from the reference monitor concept, where every subject/object access was denied by default unless a security policy allowed it. So sandboxing, or more properly an isolation architecture done as strongly as possible, was the first layer. If anything, modern sandboxing is weaker at the same goal, lacking consistent enforcement by a simple mechanism.

                                                                                                                  From there, you’re right that organizational design often influenced the policies. Since the military invented most of INFOSEC, their rules, Multilevel Security, became the default, which the commercial sector couldn’t adopt easily. Type Enforcement was more flexible, handling military and some commercial designs. Note that you could also do things like Biba to stop malware (deployed in Windows, too), enforce database integrity, or even keep competing companies from sharing resources. The mechanism itself wasn’t rooted in organizational structure. That helped adoption.

                                                                                                                  Eventually they just dropped policy enforcement out of the kernel entirely, so it only did separation, with middleware enforcing custom policy. That’s still hotly debated, since it’s the most flexible approach but gives adopters plenty of rope. Hence language-based security coming back with strong type systems, and hardware/software schemes mitigating attacks entirely.

                                                                                                                2. 2

                                                                                                                  High-security kernels.

                                                                                                                  just to add, there’s also Coyotos in the EROS family, which gave us BitC, which is an interesting (if dead) language.

                                                                                                                  Zircon is also working on an object capability model, but I haven’t looked too deeply at it myself.

                                                                                                                  edit: Also, CapLore has some really interesting articles, such as this one on KeyKos…

                                                                                                                  1. 2

                                                                                                                    Yeah, they were interesting. People might find neat ideas looking into them. I left them off cuz Shapiro got poached by Microsoft before completing them.

                                                                                                                    As far as Zircon goes, someone told me the developers were ex-Be, Danger, Palm, and Apple. None of those companies made high-security projects. The developers may or may not have done so at another company or in their spare time. This is important to me, given that the only successes seem to come from people who learned the real thing from experienced people. Google’s NIH approach seems to consistently dodge using such people, whereas Microsoft and IBM played it wise, hiring experts from high-security projects to do their initiatives. Got results, too. Google should’ve just hired CompSci folks specialized in this, like the NOVA people, plus some industry folks like those on Zircon, to keep things balanced between ideal architecture and realistic compromise.

                                                                                                                    I’ll still give the final product a fair shake, regardless, though. I look forward to seeing what they come up with.

                                                                                                                    1. 2

                                                                                                                      totally agreed re: Google; I also have concerns about some of the items I’ve seen such as this, which discusses systems within Fuchsia that could be used for adverts, as well as Google’s tendency to do something cool and then drop it.

                                                                                                                      Also, re: Shapiro: I think he’s interesting, but I also (having dealt with him on the mailing lists) wonder about his ability to produce, since Coyotos/EROS/and-so-on were largely embryonic (at best).

                                                                                                                      1. 2

                                                                                                                        re Google. They’re an ad company. Assume the worst. I even assumed Android itself would get locked up somehow over time, where we’d lose it, too. Maybe with a technique like this. Well, anything that wasn’t already open. We’re good so long as they open source enough to build knock-off phones with better privacy and good-enough usability. People wanting best-in-class will be stuck with massive companies without reforms around patent suits and app store lock-in.

                                                                                                                        re Shapiro. He was a professional researcher. Their incentives are sadly about how many papers they publish with new research results. Most don’t build much software at all, much less finish it. He was more focused than most, with the EROS team having a running prototype they demo’d at conferences. Since he’s about research, he started redoing it to fix its flaws instead of turning it into a finished product. They did open-source it in case anyone else wanted to do that. I’m not sure whether these going nowhere says something about him, FOSS developers’ priorities, or both. ;)

                                                                                                                        1. 2

                                                                                                                          Completely agreed re: Google. I don’t even disagree re: Shapiro either, but I’ll add one comment: I looked at the source code for EROS/Coyotos/BitC such that they were… it wasn’t something you could just dive into. Describing it as “hairy” and “embryonic” is about as kind as I can be for someone who has been awake since 0300 local.

                                                                                                                          1. 2

                                                                                                                            Thanks for the tip. Yeah, that’s another problem common with academics. It’s why I don’t even use stuff with great architecture if they coded it. I tell good coders about it hoping they’ll do something like it with good code. For some reason, the people good at one usually aren’t good at the other. (shrugs) Then you get those rare people like Paul Karger or Dan Bernstein that can do both. Rare.

                                                                                                                            1. 2

                                                                                                                              so Bernstein’s father was one of my professors in college; definitely an interesting fellow… I can see at least why he has practical chops, since his father is a very practical (if nitpicky) coder himself.

                                                                                                                              1. 2

                                                                                                                                That’s cool. I didn’t know his dad was a programmer. That makes sense.

                                                                                                                  2. 1

                                                                                                                    I never understood why NaCl didn’t take off. I loved that framework.

                                                                                                                    1. 1

                                                                                                                      I was never sure about that myself. A few guesses are:

                                                                                                                      1. It’s hard to get any security tech adopted.

                                                                                                                      2. Chrome was still having vulnerabilities. Might have been seen as ineffective.

                                                                                                                      3. Could’ve been a burden to use.

                                                                                                                      4. Other methods existed and were being developed that might be more effective or usable.

                                                                                                                  3. 3

                                                                                                                    Unfortunately Google never accepted those changes :(

                                                                                                                  1. 0

                                                                                                                    You have a binary that is fast (2 ms), small (107 kB) and dependency-free.

                                                                                                                    Yeah, that’s true because Nim compiles to C! Then it compiles to a binary using gcc or clang (for example).

                                                                                                                    So it’s not actually dependency-free: you’ll need a Unix environment at least to provide stdin/stdout I/O.

                                                                                                                    Nevertheless it’s interesting – although I haven’t had the impression that it’s quite that unknown… I believe I heard about it the first time somewhere around 2014. Although I’ve never used it myself, I’ve always seen articles about it from time to time.

                                                                                                                    1. 6
                                                                                                                      $ nim c hello.nim 
                                                                                                                      Hint: used config file '/nix/store/ab449wa2wyaw1y6bifsfwqfyb429rw1x-nim-0.18.0/config/nim.cfg' [Conf]
                                                                                                                      Hint: system [Processing]
                                                                                                                      Hint: hello [Processing]
                                                                                                                      CC: hello
                                                                                                                      CC: stdlib_system
                                                                                                                      Hint:  [Link]
                                                                                                                      Hint: operation successful (11717 lines compiled; 2.748 sec total; 22.695MiB peakmem; Debug Build) [SuccessX]
                                                                                                                      $ ./hello 
                                                                                                                      Hello, world!
                                                                                                                      $ ldd hello
                                                                                                                      	linux-vdso.so.1 (0x00007ffe06dd1000)
                                                                                                                      	libdl.so.2 => /nix/store/fg4yq8i8wd08xg3fy58l6q73cjy8hjr2-glibc-2.27/lib/libdl.so.2 (0x00007f4a0c356000)
                                                                                                                      	libc.so.6 => /nix/store/fg4yq8i8wd08xg3fy58l6q73cjy8hjr2-glibc-2.27/lib/libc.so.6 (0x00007f4a0bfa2000)
                                                                                                                      	/nix/store/fg4yq8i8wd08xg3fy58l6q73cjy8hjr2-glibc-2.27/lib/ld-linux-x86-64.so.2 => /nix/store/fg4yq8i8wd08xg3fy58l6q73cjy8hjr2-glibc-2.27/lib64/ld-linux-x86-64.so.2 (0x00007f4a0c55a000)
                                                                                                                      $ ls -hal hello
                                                                                                                      -rwxr-xr-x 1 andy users 185K Sep 22 15:53 hello
                                                                                                                      

                                                                                                                      This depends on libc and a runtime dynamic linker. If you built this on your machine and sent me this binary I wouldn’t be able to run it because NixOS has a non hard-coded dynamic linker path.

                                                                                                                      Can’t help but do a comparison here…

                                                                                                                      $ time zig build-exe hello.zig 
                                                                                                                      real	0m0.309s
                                                                                                                      user	0m0.276s
                                                                                                                      sys	0m0.035s
                                                                                                                      $ ./hello 
                                                                                                                      Hello, world!
                                                                                                                      $ ldd hello
                                                                                                                      	not a dynamic executable
                                                                                                                      
                                                                                                                      $ ls -ahl hello
                                                                                                                      -rwxr-xr-x 1 andy users 390K Sep 22 16:01 hello
                                                                                                                      
                                                                                                                      1. 2

                                                                                                                        I’ll be honest and say that I don’t know how to achieve this using C off the top of my head, but I’m willing to bet that it is possible. If it’s possible in C, it’s also possible in Nim.

                                                                                                                        Please keep in mind that Nim links with libc dynamically by default, there is nothing stopping you from statically linking libc into your executables if you so wish.

                                                                                                                        1. 1

                                                                                                                          But will they still be as small?

                                                                                                                          1. 2

                                                                                                                            Of course not. But then I also don’t really care about binary sizes, as long as they’re not ridiculously large.

                                                                                                                      2. 1

                                                                                                                        I think that’s a pretty silly definition of dependency free

                                                                                                                        1. 4

                                                                                                                          I guess it depends on your perspective, but it does seem like an extremely pedantic definition. In that case, every Unix C program is dependent on a libc and Unix kernel… but generally we don’t talk about dependencies like that.

                                                                                                                          I will say this tho: I wish languages like nim & zig focused more on tree shaking to get down to the size of C, or as close as possible. Would help in other environments, such as the embedded space, and would be generally better for all.

                                                                                                                          1. 2

                                                                                                                            I wish languages like nim & zig focused more on tree shaking to get down to the size of C, or as close as possible. Would help in other environments, such as the embedded space, and would be generally better for all.

                                                                                                                            Do you have an example of how Zig doesn’t do this?

                                                                                                                            1. 1

                                                                                                                              (edit also I apologize for the late reply, I’m on client site this week)

                                                                                                                              it’s been a while since I last built Zig (I use a local Homebrew, so I ended up having to manually link LLVM and Clang, which wasn’t bad once I figured out how to do so), but even the example displayed above was 390K, so potentially large parts of the Zig RTS are included therein. I think Zig is probably the best of the bunch (I’ve recommended several clients look into it as part of their roadmap for future embedded projects!), but I do think there’s some room for improvement wrt what’s included.

                                                                                                                              As an aside, I thought I’d try and see if Zig was included in Homebrew now, but the build is dying:

                                                                                                                              [ 65%] Built target embedded_softfloat
                                                                                                                              make[1]: *** [CMakeFiles/embedded_lld_elf.dir/all] Error 2
                                                                                                                              make: *** [all] Error 2
                                                                                                                              
                                                                                                                              1. 2

                                                                                                                                There are a few things to know about the size of the above example. One is that it’s a debug build, which means it has some extra safety stuff in there. It even has a full debug symbol parsing implementation so that you get stack traces when your program crashes. On the other hand, if you use --release-small then you get a 96KB executable. (Side note - this could be further improved and I have some open bug reports in LLVM to pursue this.) The other thing to note is that the executable is static. That means it is fully self-contained. The nim version (and the equivalent C version) dynamically link against the C runtime, which is over 1MB.

                                                                                                                                So the Zig runtime is smaller than the C runtime.

                                                                                                                                I recommend waiting a week until Zig 0.3.0 is out before trying to get it from Homebrew. The Zig PR to Homebrew had llvm@6 in it, to prevent this exact problem. They rejected that and said we had to drop the @ suffix. So naturally it broke when LLVM 7 came out.

                                                                                                                                1. 1

                                                                                                                                  Oh I realized that Zig was statically linked, but I did not realize that it had no further dependency on libc; that’s pretty interesting. Zig has been on my radar since I first caught wind of it some time ago (I enjoy languages & compilers, it’s part of my job & my hobby), but it’s interesting to see no further links!

                                                                                                                                  Previously I fought with getting Zig built out of git directly; the fighting was mostly surrounding linking to LLVM deps in Homebrew, because the two didn’t seem to like one another. Once it was working tho, it was pretty sweet, and I used it for some internal demos for clients. I’ll certainly wait for 0.3.0, it’ll be neat to see, esp. given the new info above!

                                                                                                                                  1. 2

                                                                                                                                    As of this morning, 0.3.0 is out! And on the download page there are binaries available for Windows, macOS, and Linux.

                                                                                                                                    1. 2

                                                                                                                                      Trying it now, and thank you so much! It runs right out of the box (which is so much easier than fighting with a local homebrew install) on Mojave!

                                                                                                                          2. 1

                                                                                                                            My point is that it couldn’t be executed in a Windows or Plan 9 environment. When people say the only IDE they need is Unix, it’s worth pointing out that this means they don’t just need a specific program, but a whole OS – and that’s a dependency in my eyes.

                                                                                                                            1. 1

                                                                                                                              WSL exists and Plan 9 is an irrelevant research operating system. Something that depends on POSIX is depending on the portable operating system standard. It’s the standard for portable operating systems. It’s a standard that portable software can rely on existing on every operating system. If your operating system doesn’t support POSIX then you have no right to complain that software isn’t ported to it, IMO.

                                                                                                                              You don’t need a particular OS, you need any OS out there that implements POSIX, which is all of them in common use.

                                                                                                                              1. 1

                                                                                                                                I don’t care about rights, and that’s not what I meant. I understand your point, but what I wanted to say was that the way the author phrased it made me hope (naively, maybe) that there was some actual technology behind Nim that makes it OS-independent (since, as I’ve already said, I think an OS is a dependency, regardless of which standards may or may not exist).