1. 15

    Very nice rephrasing (with clear definitions and usage boundaries) of Dave Cheney’s three level system:

    Debug: Things only developers care about while developing.
    Info: Things users/administrators of your product care about.
    Error: Things users care about.

    These three levels may not be the best, or they may be too broad, but having a small number of options, each associated with a clear meaning, removes a lot of guesswork and bikeshedding.

    Orthogonally, I’d love to see more logging frameworks adopt KDE’s approach of kDebug areas (now QLoggingCategory). It allows you to decide at runtime that, for example, you want debug-level info for a specific library, but otherwise only error-level statements for the rest of the application.
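For readers unfamiliar with the idea, Python's stdlib logging has a comparable per-area mechanism: loggers are named hierarchically, and each area's level can be changed at runtime independently of the rest. A minimal sketch (the logger names are invented for illustration):

```python
import logging

# Root configuration: only errors from most of the application.
logging.basicConfig(level=logging.ERROR)

# At runtime, opt into debug-level output for one library only.
logging.getLogger("somelib").setLevel(logging.DEBUG)

# Emitted: this area is now at DEBUG.
logging.getLogger("somelib").debug("visible: debug enabled for this area")
# Dropped: this area still inherits ERROR from the root.
logging.getLogger("otherlib").debug("invisible: still at ERROR")
```

The same idea underlies QLoggingCategory's per-category filter rules: areas are cheap to create and their verbosity is a runtime decision.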

    1. 9

      This goes strongly against my experience, maybe (again) because I don’t work on server code.

      Having only one log level for “stuff admins care about” implies that logging is free — that you should always be logging everything that might be useful for maintenance or troubleshooting. This is a bad idea when your target system does not have 32 cores, super-fast I/O, or unlimited disk space.

      • Logging can consume significant CPU (printf functions are surprisingly expensive) and cause noticeable slowdowns in mobile or embedded apps.
      • Logs take up space, which may be limited, especially if log rotation is missing or insufficient. (This is exactly why Tesla is having to recall car console displays right now — the firmware logged too much crap and it caused the flash storage to wear out too early.)

      My support team has made it very clear to us developers that they need configurable logging to respond to customer issues. When a customer is having weird problems, they need to be able to have the customer turn on detailed logging in the problem area, so support or devs can pore over them.

      Normally you don’t want the firehose, but neither do you want no logging or error-only logging. Frequently a problem will first manifest in normal usage. And it might be difficult to reproduce. So you want some logs all the time — not just errors, because those tell you nothing the end user couldn’t already tell you; you want to know about anything unusual or unexpected that happened beforehand. Voilà, warnings.

      1. 5

        Having only one log level for “stuff admins care about” implies that logging is free — that you should always be logging everything that might be useful for maintenance or troubleshooting.

        […]

        My support team has made it very clear to us developers that they need configurable logging to respond to customer issues. When a customer is having weird problems, they need to be able to have the customer turn on detailed logging in the problem area, so support or devs can pore over them.

        This is why debugging zones are more important than debug levels and, in fact, fundamental. Also fundamental is the ability to change, at runtime, which zones are logged and at what level.

        What I would suggest is:

        • Three levels: DEBUG, INFO, ERROR. Always “compiled in”.
        • One zone for each subsystem (say, library, service or application module).
        • Normally, for all zones: only ERROR statements are processed (= evaluated + sent + stored). INFO are also processed but dropped after a few minutes/hours/days. DEBUG are turned into NOPs.
        • When debug-time comes: Increase decay time for INFO statements and start processing DEBUG statements, but only for the zones that you care about (and progressively).
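A minimal sketch of the zones-plus-levels idea in Python (the zone names, the `log` helper, and its API are made up for illustration; a real framework would also handle the INFO decay times):

```python
import logging

# One zone per subsystem; everything starts at ERROR.
ZONES = {"net": logging.ERROR, "ui": logging.ERROR, "db": logging.ERROR}

def log(zone, level, make_message):
    """Process a statement only if the zone's level allows it.
    make_message is a callable, so a disabled DEBUG statement costs
    one dict lookup and one comparison (close to a NOP)."""
    if level >= ZONES[zone]:
        line = f"[{zone}] {logging.getLevelName(level)}: {make_message()}"
        print(line)
        return line
    return None

# Debug-time: raise verbosity for just the zone under investigation.
ZONES["db"] = logging.DEBUG
log("db", logging.DEBUG, lambda: "slow query plan ...")   # processed
log("net", logging.DEBUG, lambda: "packet dump ...")      # dropped
```

The callable is what keeps disabled statements cheap: the message is never built unless the zone's threshold says so.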

        The aforementioned QLoggingCategory in Qt allows you to do all this, and so do other mature frameworks like Log4j.

        You want to know about anything unusual or unexpected that happened beforehand. Voila, warnings.

        If you stretch the definition of ERROR to mean “Anything that is unusual, not well handled and thus may lead to user-visible errors”, then those “warnings” would fit that level very well. :)

        1. 2

          INFO are also processed but dropped after a few minutes/hours/days.

          What do you mean by this? They’re kept, but in a high-rotation store, or that they’re kept like error logs, but after a warmup period the app stops logging them?

          1. 1

            INFO are also processed but dropped after a few minutes/hours/days.

            What do you mean by this? They’re kept, but in a high-rotation store, or that they’re kept like error logs, but after a warmup period the app stops logging them?

            It depends on the capabilities of the logging framework you are working with, as well as other characteristics of your infrastructure (and infrastructural budget).

            A couple of options:

            • Send and rotate: Statements are evaluated (≈ formatted) by the client, then sent to the server, which rotates them quickly (simple client-side logging code, high network costs, low server storage costs).
            • Send if requested: Statements are evaluated by the client and locally stored for a few minutes/hours, ready to be sent to the server if/when needed (complex client-side logging code, low network costs, low server storage costs).
            • Grab when needed: Statements are evaluated by the client and locally stored for a few minutes/hours, ready to be grabbed by the server if/when needed (simple client-side logging code, requires access to the client machine, low network costs, low server storage costs).
            • Send if frequent: Statements are evaluated by the client and locally stored in a cache. If a statement is logged more than x times per hour, it is sent to the server and stored there permanently (semi-complex client-side logging code, low network costs, lowish server storage costs).
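To make the last option concrete, here is a toy "send if frequent" shipper in Python (the class name, threshold, and window are invented for illustration):

```python
import time
from collections import defaultdict

class FrequencyShipper:
    """Buffer statements locally; ship a statement to the server only
    once its key appears more than `threshold` times within `window`
    seconds. Rare statements never leave the client."""
    def __init__(self, threshold=5, window=3600.0):
        self.threshold = threshold
        self.window = window
        self.seen = defaultdict(list)   # key -> recent timestamps
        self.shipped = []               # stands in for "send to server"

    def log(self, key, message):
        now = time.monotonic()
        # Keep only hits that are still inside the window.
        hits = [t for t in self.seen[key] if now - t < self.window]
        hits.append(now)
        self.seen[key] = hits
        if len(hits) > self.threshold:
            self.shipped.append((key, message))

s = FrequencyShipper(threshold=3)
for _ in range(5):
    s.log("disk-retry", "retrying write")   # shipped on the 4th and 5th hit
```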
    1. 4

      I hate to say it, but log4j circa 2003 is light years beyond anything in Ruby or Rails land.

      Log4j is one of these libraries that you despise at first (“what a weird and convoluted way to set up something that prints a string to the stdout”), then you drop them for something simpler, and finally you miss them when your application evolves into something more complicated and your requirements change (“wait, this logging library can’t log to multiple outputs configured with different filters when running in production?”).

      This is often the case with libraries developed by people with more experience than you currently have.

      1. 5

        It is always nice to see new AWK scripts pop up!

        Here is an alternative for those who want to stick to classic Unix commands

        $ find ~/gh -type d | cut -d/ -f5,6,7 | column -s/ -t
        
        1. 1

          That’s lovely. In fact, I’d started looking at column at the outset of this little adventure, then got completely distracted by the lure of the AWK language, and promptly forgot about everything else. I’ve added an update to the post. Thanks!

        1. 2

          Steam’s login method is a precursor to what is being standardized by the IETF under the name OPAQUE https://tools.ietf.org/html/draft-irtf-cfrg-opaque-01: a browser-side authentication technique that makes it possible for the password to never leave the browser, not even during the registration phase.

          The OPAQUE Asymmetric PAKE Protocol

          Abstract

          This document describes the OPAQUE protocol, a secure asymmetric password-authenticated key exchange (aPAKE) that supports mutual authentication in a client-server setting without reliance on PKI and with security against pre-computation attacks upon server compromise. In addition, the protocol provides forward secrecy and the ability to hide the password from the server, even during password registration. This document specifies the core OPAQUE protocol, along with several instantiations in different authenticated key exchange protocols.

          1. 3

            SRP is a precursor to OPAQUE. Simply encrypting the password with RSA is… ehhhh not really.

          1. 5

            Dear @susam, this kind of CSS is usually referred to as classless.

            Here is a list of such classless stylesheets: https://github.com/dbohdan/classless-css. SPCSS would fit nicely there.

            1. 22

              This page is a very useful resource, but it misses one important dimension: “with compositor X”.

              Take for example the category “Screenshot/screen capture & share tool: grim/slurp, OBS, swappy, wayvnc”. Yes, these programs work, but only (according to Arch’s documentation [1]) with wlroots-based compositors. Other compositors (say, GNOME Shell/Mutter) do not support the extra protocol needed to grab screenshots.

              Another issue is that many programs in this list are MVP-like applications or proof-of-concept implementations that lack all the “edge-case” functionalities that make those programs useful in the first place. Take for instance ydotool. Yes, you can use ydotool for basic input simulation, but there is no equivalent of xdotool search --name $TITLE (to find the WID of all windows with a certain title) or xdotool windowactivate $WID (to programmatically focus a specific window).

              In many cases, like in ydotool’s case, it can be argued that certain functionalities are lacking because there is no widespread support for the needed extra protocols (or those protocols do not even exist yet). Yeah, that is a reasonable justification, but it does not solve the problems that the end users have.

              [1] https://wiki.archlinux.org/index.php/Screen_capture#Wayland

              1. 4

                Type

                lemon "-?"
                

                to get a list of options. Seriously? What’s wrong with -h, which 99% of all other command line tools offer?

                Apart from that it’s an interesting project! It’s nice to see that it has passed the test of time, and I can understand why the Sqlite-project has developed its own solution.

                1. 5

                  Type lemon "-?" to get a list of options. Seriously? What’s wrong with -h, which 99% of all other command line tools offer?

                  From the article:

                  Lemon was originally written by Richard Hipp sometime in the late 1980s on a Sun4 Workstation using K&R C.

                  Back in the 1980s such conventions were not as established, or as ubiquitous, as they are today. At least lemon uses the POSIX utility argument convention of starting options with a hyphen. Common tools like dd do not even follow that convention, following instead IBM’s JCL syntax.

                  BTW, the convention for DOS command-line tools is still foobar /?.

                  1. 4

                    Well, I WAS about to say “well, that should be easy to fix”, but then I took a look at the code… and you know what? I can’t find it. From looking at what calls OptPrint(), it seems like it just prints out help when it gets an invalid flag, so I think that’s just the author saying that -? will never be a valid flag. -h is currently also not a valid flag, so that should work fine too!

                    What irks me more is using -x for printing the version instead of -v. At least it does have an option to print the version, though. (I’m looking at you, OpenCV.)

                  1. 25

                    If the contents in the sections here comes off as a bit dense and hard to ingest, fret not! They are more teasers or appetisers, the full course meal will have better pacing and technical depth.

                    Indeed, all the points are quite dense and inscrutable.

                    But this point caught my attention:

                    The ‘professional’ is acknowledging that there is a large number (millions) of key individuals in a wide gamut of professions that are [currently] wasting much of their time and talents thanks to misguided attempts at software ergonomics […]

                    This is more or less how I feel when I use UIs based on GNOME 3/GTK 3.

                    Right now I’m using a desktop computer (not even a laptop, a proper desktop computer) with a hi-res widescreen, and yet the interface is presented to me as if I were on a tablet. For example, my mouse has to carefully move over an oversized title bar to click on a button, instead of being able to swiftly smash my cursor against the top edge of the screen and click on a menu. Yes, if I had a touchscreen, and that touchscreen were a few centimeters away from my fingers, that huge-button-in-titlebar design would probably be OK. But I do not. I am at my desktop and my mouse is about one meter away from my screen.

                    Now, the key questions are: How many GNOME users in 2020 use GNOME via a tablet and benefit from this design? How many users interact with the system via a desktop PC or a laptop? How do the benefits to the tablet users compare to the costs that these design choices impose on the desktop users? Wouldn’t these two kinds of use be better served by two separate UIs backed by the same core logic?

                    (These are not rhetorical questions, but sincere requests for data-driven UI design.)

                    1. 8

                      How many GNOME users in 2020 use GNOME via a tablet and benefit from this design?

                      Agreed. Tablets, phones, and desktops are very different form factors with very different ways of using them. Trying to unify this under one user interface concept, even partially, has failed (Windows 8/Windows Phone; IIRC, KDE tried something like this too). By contrast, macOS and iOS applications follow very different interface concepts, and Apple’s success shows that they were on the right side of history here.

                      I’m using GNOME as my daily driver, coming from all-iOS-and-Mac. It sometimes hurts a lot.

                      1. 4

                        Indeed all the points are quite dense and imperscrutable.

                        It’s a damned-if-you-do, damned-if-you-don’t situation. Putting out individual posts without an overview and the ability to forward-reference would not be much better. It is already well into TL;DR territory, and about half of it was cut.

                        How many users interact with the system via a desktop PC or a laptop? How do the benefits to the tablet users compare to the costs that these design choices impose on the desktop users? Wouldn’t these two kinds of use be better served by two separate UIs backed by the same core logic?

                        You can do both. How about, for instance, having a mechanism to ask for alternate views of an application: one that is friendlier to touch, or for screen readers, or for debugging? In the video on leveraging the display server to improve debugging, I show one way to do that.

                        For an example on more dynamic options over a single UI: I have an unhealthy thing for input devices. This photo is from one of my setups in the lab. There’s an eye tracker, stream-deck, 3d-mouse, rotary dial, touch screens, screenpad, input stylus. All of them in use, mostly for command-line / IDE stuff.

                        The WM on all of those machines, Durden, is built as a virtual filesystem.

                        echo 'hi' > /windows/all/input/keyboard/type
                        

                        Would write that into well, all windows. Last I counted there’s over 600 such paths at roughly that level of abstraction. Sets of such paths can be tied to certain events, such as a device entering or leaving idle state. When I touch the touchscreen with my finger, something like this is run:

                        /global/settings/visual/font/size=18
                        /windows/all/window/titlebar_toggle=1
                        

                        Decorations appear and are resized to match text at 18pt, which was picked based on my finger accuracy on the display in question. The buttons on the title-bars are set to be dynamically populated by bindings announced by the clients, mixed with my custom defined ones for the touch state specifically. In other states they might be mixed with my ‘todo’ list on the stream deck. The clients aren’t written to take any of these devices into consideration.

                      1. 4

                        One of my favorite language design choices in the realm of thrown exceptions: A function in Swift that throws must be declared to throw, and when you call it you must write a try keyword in the calling expression. try itself doesn’t do anything, it’s just required as a clue to the reader: You can always tell which lines might throw and which will only return. (There is a do keyword which starts a block like most languages’ try.)

                        1. 2

                          One of my favorite language design choices in the realm of thrown exceptions: A function in Swift that throws must be declared to throw, and when you call it you must write a try keyword in the calling expression.

                          That style of exception is called “explicit exceptions”. Probably the most common language that follows that paradigm is Java (which calls them checked exceptions).

                          In Java there are in fact two kinds of exceptions: implicit exceptions (they inherit from RuntimeException and do not need a try block or forwarding via the throws clause) and explicit exceptions (the compiler will stop with an error if these exceptions are not handled).

                          In my Java years I’ve seen a lot of “exception fatigue”: library writers like to provide detailed (explicit) exceptions, but application programmers just make every method that uses those libraries “throws Exception” (equivalent to a catchall “ignore and forward”). (For the record: I do not think that that is a bad pattern in itself.)

                          1. 2

                            Sorry, I think you’ve missed my point. I’m not talking about try block statements as in Java. (In Swift that would be the do keyword).

                            Here’s another attempt: Swift requires the try keyword on the specific expressions that can throw, just so the reader can tell which lines might throw out of the normal flow of execution. To my knowledge, Java has no equivalent.

                            do {
                                let encrypted = try encrypt("secret information!", withPassword: "12345")
                                print(encrypted)
                            } catch {
                                print("Something went wrong!")
                            }
                            

                            Here, encrypt may throw out of the do block; print will not. You can tell when reading it! (Code sample borrowed.) It’s also worth noting that the try keyword does not do anything; it’s like a required comment.

                            If at first you don’t explain try, try again.

                            For what it’s worth, Swift has errors, which are the application-level thing you throw and catch as above, and also has lower level exceptions raised by the operating system or CPU, like invalid pointer reads or divide by zero. Errors are explicit and checked and common; exceptions are implicit and unchecked and rare, and probably just crash, but are guarded against generally by language safety features. For instance, we don’t have a pervasive null check situation because we have optionals, so the nil keyword is not the same thing as a zero sentinel value pointer. Maybe someday functions will express in the language which kinds of error they throw, but that’s not the case as of Swift 5. In total, this arrangement helps cut down on the kind of error handling fatigue you mentioned.

                        1. 4

                          Wouldn’t it make more sense to have some kind of HTTP header and/or meta tag that turns off javascript, cookies and maybe selected parts of css?

                          If we could get browser vendors to treat that a bit like the https padlock indicators, some kind of visual indicator that this is “tracking free”

                          Link tracking will be a harder nut to crack. First we turn off redirects: only direct links to resources. Then we make a cryptographic proof of the contents of a page, something a bit fuzzy like image watermarking. Finally we demand that site owners publish some kind of list of proofs so we can verify the page is not being individually tailored to the current user.

                          1. 11

                            The CSP header already allows this to an extent. You can just add script-src 'none' and no JavaScript can run on your web page.
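For reference, the header looks like this (note the required single quotes around the keyword):

```http
Content-Security-Policy: script-src 'none'
```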

                            1. 1

                              Very true, though it’s not visible to the user!

                            2. 5

                              Browsers already render both text/html and application/pdf, and hyperlinking works. There is no technical barrier to adding, say, text/markdown into the mix. Or application/ria (see below), for that matter. We could start by disabling everything that already requires permission, that is, audio/video capture, location, notifications, etc. Since application/ria would be a compat hazard, it probably should continue to be text/html, and what-ideally-should-be-text/html would be something like text/html-without-ria. This clearly works. The question is one of market, that is, whether there is enough demand for this.

                              1. 5

                                Someone probably should implement this as, say, a Firefox extension. PDF rendering in Firefox is already done with PDF.js. Do the exact same thing for Markdown: take a GitHub-compatible JS Markdown implementation with GitHub’s default styling. Add a “prefer Markdown” preference. When the preference is set, send Accept: text/markdown, text/html. Using normal HTTP content negotiation, if the server has a text/markdown version and sends it, it is rendered just like PDF; otherwise everything works the same as before. Until server support arrives, the extension could intercept well-known URLs and replace the content with Markdown, for, say, Discourse forums. Sounds like an interesting side project to try.
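The server-side half of that negotiation could be sketched like this (the function name is invented, and the parsing is simplified; real negotiation also honors q-values and wildcards):

```python
def pick_representation(accept_header, available):
    """Naive content negotiation: return the first media type listed
    in the Accept header that the server can actually produce."""
    for item in accept_header.split(","):
        media_type = item.split(";")[0].strip()
        if media_type in available:
            return media_type
    return "text/html"  # fallback representation

# A client that prefers Markdown, against a server that has both:
chosen = pick_representation(
    "text/markdown, text/html",
    available={"text/html", "text/markdown"},
)
```

Servers without a Markdown version simply never match text/markdown, so the extension degrades gracefully to HTML.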

                                1. 8

                                  Browsers already render both text/html and application/pdf, and hyperlinking works. There is no technical barrier to add, say, text/markdown into mix.

                                  Someone probably should implement this as, say, Firefox extension.

                                  Historical note: this is how Konqueror (the KDE browser) started. Konqueror was not meant to be a browser, but a universal document viewer. Documents would flow through a transport protocol (implemented by a KIO library) and be interpreted by the appropriate component (called a KPart). (See https://docs.kde.org/trunk5/en/applications/konqueror/introduction.html)

                                  In the end Konqueror focused on being mostly a browser, or an ad-hoc shell around KIO::HTTP and KHTML (the parent of WebKit), and Okular (the app + the KPart) took care of all the main “document formats” (PDF, DjVu, etc.).

                                  1. 2

                                    Not saying it’s a bad idea, but there are important details to consider. E.g. you’d need to agree on which flavor of Markdown to use, there are… many.

                                      1. 2

                                        Eh, that’s why I specified GitHub flavor?

                                        1. 1

                                          Oops, my brain seems to have skipped that part when I read your comment, sorry.

                                          The “variant” parameter added in RFC 7763 (linked by spc476), which indicates which of the various Markdowns you used when writing the content, seems like a good idea. No need to make GitHub the owner of the specification, IMHO.

                                        2. 1

                                          What’s wrong with Standard Markdown?

                                      2. 2

                                        markdown

                                        Markdown is a superset of HTML. I’ve seen this notion put forward a few times (e.g., in this thread, which prompted me to submit this article), so it seems like this is a common misconception.

                                      3. 4

                                        Why would web authors use it? I can imagine some small reasons (a hosting site might mandate static pages only), but they seem niche.

                                        Or is your hope that users will configure their browsers to reject pages that don’t have the header? There are already significant improvements on the tracking/advertising/bloat front when you block javascript, but users overwhelmingly don’t do it, because they’d rather have the functionality.

                                        1. 2

                                          I think the idea is that it is a way for web authors to verifiably prove to users that the content is tracking free. A Markdown renderer would be tracking free unless buggy (it would be an XSS bug). The difference from noscript is that script-y sites still transparently work.

                                          In the envisioned implementation, just as HTTPS sites get a padlock, document-only sites would get a cute document icon to give users a warm fuzzy feeling. If the icon is as visible as the padlock, I think many web authors will use it, provided the page is in fact a document and it can be easily done.

                                          Note that Markdown renderer could still use JavaScript to provide interactive features: say collapsible sections. It is okay because JavaScript comes from browser, which is a trusted source.

                                        2. 3

                                          Another HTTP header that maybe some browsers will support shoddily, and the rest will ignore?

                                          1. 2

                                            I found the HTTP Accept header to be well supported by all currently relevant software. That’s why I think a separate MIME type is the way to go.

                                          2. 2

                                            I think link tracking is essentially impossible to avoid, as are redirects. The web already has a huge problem with dead links and redirects at least make it possible to maintain more of the web over time.

                                            1. 3

                                              Please note that this article is not suggesting to publish your feed at /feeds, but to create a page at that URL where you list which other feeds you like or you are subscribed to. In other words, the feed version of your blogroll.

                                              There is a de-facto standard format for lists of web feeds: OPML, from UserLand: https://en.wikipedia.org/wiki/OPML. OPML is supported by most feed aggregators.
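For readers who have not seen it, a minimal OPML blogroll looks roughly like this (the feed URLs are placeholders):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<opml version="2.0">
  <head><title>Feeds I follow</title></head>
  <body>
    <outline text="Example Blog" type="rss"
             xmlUrl="https://blog.example.com/feed.xml"
             htmlUrl="https://blog.example.com/"/>
  </body>
</opml>
```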

                                              Edit: Fixed comment after reading the post again. Thanks @danburzo.

                                              1. 3

                                                The article suggests publishing a list of your various feeds, not the ones you’re subscribed to.

                                                That being said, OPML is pretty nice for sharing collections of feeds, and I’ve bounced around the idea of using OPML (or another, more appropriate format) to generate a bundle of external links for each blog post.

                                              1. 6

                                                Meanwhile in Ruby-land the logger API allows the message to be passed in a block (i.e. a lambda) to avoid eager evaluation:

                                                Always evaluated:

                                                logger.debug("total number: #{get_object_counts()}")
                                                

                                                Evaluated only if the log level is DEBUG or lower:

                                                logger.debug { "total number: #{get_object_counts()}" }
                                                

                                                This well-thought-out API makes the logging statements stand out less. (And it is part of the stdlib.)

                                                1. 8

                                                  Isn’t there a difference between functional code and side-effect-free code? I feel like, by trying to set up all of the definitions just right, this article actually misses the point somewhat. I am not even sure which language the author is thinking of; Scheme doesn’t have any of the three mentioned properties of immutability, referential transparency, or static type systems, and neither do Python nor Haskell qualify. Scouring the author’s websites, I found some fragments of Java; neither Java nor Clojure have all three properties. Ironically, Java comes closest, since Java is statically typed in a useful practical way which has implications for soundness.

                                                  These sorts of attempts to define “functional programming” or “functional code” always fall flat because they are trying to reverse-engineer a particular reverence for some specific language, usually an ML or a Lisp, onto some sort of universal principles for high-quality code. The idea is that, surely, nobody can write bad code in such a great language. Of course, though, bad code is possible in every language. Indeed, almost all programs are bad, for almost any definition of badness which follows Sturgeon’s Law.

                                                  There is an important idea lurking here, though. Readability is connected to the ability to audit code and determine what it cannot do. We might desire a sort of honesty in our code, where the code cannot easily hide effects but must declare them explicitly. Since one cannot have a decidable, sound, and complete type system for Turing-complete languages, one cannot actually put every interesting property into the type system. (This is yet another version of Rice’s theorem.) Putting these two ideas together, we might conclude that while types are helpful to readability, they cannot be the entire answer of how to determine which effects a particular segment of code might have.

                                                  Edit: Inserted the single word “qualify” to the first paragraph. On rereading, it was unacceptably ambiguous before, and led to at least two comments in clarification.

                                                  1. 7

                                                    Just confirming what you said: Did you say that Haskell doesn’t have immutability, referential transparency, or a static type system?

                                                    1. 3

                                                      I will clarify the point, since it might not be obvious to folks who don’t know Haskell well. The original author claims that two of the three properties of immutability, referential transparency, and “typing” are required to experience the “good stuff” of functional programming. On that third property, the author hints that they are thinking of inferred static type systems equipped with some sort of proof of soundness and correctness.

                                                      Haskell is referentially transparent, but has mutable values and an unsound type system. That is only one of three, and so Haskell is disqualified.

                                                      Mutable values are provided not just in IO, but also in ST and STM. On one hand, I will readily admit that the Haskell Report does not mandate Data.IORef.IORef, and that only GHC has ST and STM; but on the other hand, IORef is available in GHC, JHC, and UHC alike, with UHC reusing some of GHC’s code. Even if one were restricted to the Report, one could use basic filesystem tools to create a mutable reference store using the filesystem’s innate mutability. In either case, we get true in-place mutation of values.
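                                                      To make that concrete, here is a small sketch (the names bumpIO and bumpST are mine, not from any particular codebase) showing in-place mutation through an IORef in IO and an STRef in ST:

```haskell
import Data.IORef (newIORef, readIORef, modifyIORef')
import Control.Monad.ST (runST)
import Data.STRef (newSTRef, readSTRef, writeSTRef)

-- Observe the same IORef before and after an in-place update.
bumpIO :: IO (Int, Int)
bumpIO = do
  r <- newIORef 1
  before <- readIORef r
  modifyIORef' r (+ 1)
  after <- readIORef r
  return (before, after)

-- ST performs the same kind of mutation locally; the result
-- escapes runST as an ordinary pure value.
bumpST :: Int
bumpST = runST $ do
  r <- newSTRef 1
  writeSTRef r 2
  readSTRef r
```

                                                      Running bumpIO yields (1, 2): both reads go through the same reference, and the stored value really did change in place.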

                                                      Similarly, Haskell is well-known to be unsound; the Report itself has a section describing how to produce such values. To demonstrate, here are two of my favorite examples:

                                                      GHCi, version 8.6.3: http://www.haskell.org/ghc/  :? for help
                                                      Prelude> let safeCoerce = undefined :: a -> b
                                                      Prelude> :t safeCoerce
                                                      safeCoerce :: a -> b
                                                      Prelude> data Void
                                                      Prelude> let safeVoid = undefined :: Void
                                                      Prelude> :t safeVoid
                                                      safeVoid :: Void
                                                      

                                                      Even if undefined were not in the Report, we can still build a witness:

                                                      Prelude> let saferCoerce x = saferCoerce x
                                                      Prelude> :t saferCoerce
                                                      saferCoerce :: t1 -> t2
                                                      

                                                      I believe that this interpretation of the author’s point is in line with your cousin comment about type signatures describing the behavior of functions.

                                                      1. 4

                                                        I don’t really like Haskell, but it is unfair to compare the ability to write a non-terminating function with the ability to reinterpret an existing object as if it had a completely different type. A general-purpose programming language is not a logic, and the ability to express general recursion is not a downside.

                                                        1. 3

                                                          A “mutable value” would mean that a referenced value would change. That’s not the case for a value in IO. While names can be shadowed, if some other part of the code has a reference to the previous name, that value does not change.

                                                          1. 1

                                                            Consider the following snippet:

                                                            GHCi, version 8.6.3: http://www.haskell.org/ghc/  :? for help
                                                            Prelude> :m + Data.IORef
                                                            Prelude Data.IORef> do { r <- newIORef "test"; t1 <- readIORef r; writeIORef r "another string"; t2 <- readIORef r; return (t1, t2) }
                                                            ("test","another string")
                                                            

                                                            The fragment readIORef r yields two different results within this scope. Either this fragment is not referentially transparent, or r is genuinely mutable. My interpretation is that the fragment is referentially transparent, and that r refers to a single mutable storage location; the same readIORef action applied to the same r results in the same IO action on the same location, but the stored value can be mutated.
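                                                            The same session can be written as a compilable module (the name demo is mine) to stress the point: naming the action once and running it twice behaves exactly like writing readIORef r twice, which is what referential transparency demands, even though the storage location mutates between the runs.

```haskell
import Data.IORef (newIORef, readIORef, writeIORef)

demo :: IO (String, String)
demo = do
  r <- newIORef "test"
  let look = readIORef r  -- one name bound to one action
  t1 <- look
  writeIORef r "another string"
  t2 <- look              -- the same action, a mutated location
  return (t1, t2)
```

                                                            demo returns ("test","another string"), matching the GHCi transcript above.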

                                                            1. 1

                                                              The value has been replaced with another. It is not quite the same thing as mutating the value itself.

                                                          2. 2

                                                            From your link:

                                                            When evaluated, errors cause immediate program termination and cannot be caught by the user.

                                                            That means that soundness is preserved: a program can’t continue running if its runtime types are different from its compile-time types.

                                                            1. 1

                                                              If we have to run the program in order to discover the property, then we run afoul of Rice’s theorem. There will be cases when GHC does not print out <loop> when it enters an infinite loop.

                                                              1. 1

                                                                Rice’s theorem is basically a fancier way of saying ‘Halting problem’, right?

                                                                In any case, it still doesn’t apply. You don’t need to run a program which contains undefined to have a guarantee that it will forbid unsoundness. It’s a static guarantee.

                                                        2. 5

                                                          Thank you for bringing up this point. Unfortunately, “functional programming” is almost always conflated, today, with lack of side effects, immutability, and/or strong static typing. None of those are intrinsic to FP. Scheme, as you mentioned, is functional, and has none of those. In fact, the ONLY language seeing any actual use today that has all three (enforced) is Haskell. Not even OCaml does anything to prevent side effects.

                                                          And you absolutely can write Haskell-ish OOP in, e.g., Scala, where your object methods return ReaderT-style types. It has nothing at all to do with functional vs. OOP. As long as you do inversion of control and return “monads” or closures from class methods, you can do all three of immutable data, lack of side effects, and strong types in an OOP language. It’s kind of ugly, but I can do that in Kotlin, Swift, Rust, probably even C++.

                                                          1. 2

                                                            Scheme, as you mentioned, is functional, and has none of those.

                                                            Why is Scheme functional? It’s clearly not made of functions:

                                                            Lisp Is Not Functional

                                                            A functional language is a programming language made up of functions

                                                            What defun and lambda forms actually create are procedures or, more accurately still, funcallable instances

                                                            I would say Haskell and Clojure are functional, or at least closer to it, but Scheme isn’t. This isn’t a small distinction…

                                                            1. 2

                                                              That’s a good point and I actually do agree completely. The issue, I think, is that most programmers today will have a hard time telling you the difference between a procedure and a function when it comes to programming. And it’s totally fair; almost every mainstream programming language calls them both “functions”.

                                                              So, Scheme is “functional” in that it’s made up of things-that-almost-everyone-calls-functions. But you’re right. Most languages are made of functions and procedures, and some also have objects.

                                                              But with that definition, I don’t think Clojure counts as functional either. It’s been a couple of years, but am I not allowed to write a “function” in Clojure that takes data as input and inside the function spawns an HTTP client and orders a pizza, while returning nothing?

                                                              It would appear that only Haskell is actually a functional language if we use the more proper definition of “function”.

                                                              1. 1

                                                                But with that definition, I don’t think Clojure counts as functional either. It’s been a couple of years, but am I not allowed to write a “function” in Clojure that takes data as input and inside the function spawns an HTTP client and orders a pizza, while returning nothing?

                                                                Hey, the type for main in Haskell is usually IO (), or “a placeholder inside the IO monad”; using the placeholder type there isn’t mandatory, but the IO monad is. Useful programs alter the state of the world, and so do things which can’t be represented in the type system or reasoned about using types. Haskell isn’t Metamath, after all. It’s general-purpose.

                                                                The advantage of Haskell isn’t that it’s all functions. It’s that functions are possible, and the language knows when you have written a function, and can take advantage of that knowledge. Functions are possible in Scheme and Python and C, but compilers for those languages fundamentally don’t know the difference between a function and a procedure, or a subroutine, if you’re old enough. (Optimizers for those languages might, but dancing with optimizers is harder to reason about.)
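                                                                A minimal sketch of that distinction (double and announce are illustrative names of my own): in Haskell the function/procedure split is visible in the types, so the compiler can rely on it rather than guess.

```haskell
-- A function: the type promises the result depends only on the
-- argument, so the compiler is free to inline, share, or reorder calls.
double :: Int -> Int
double x = x * 2

-- A procedure: the IO in the type marks it as effectful, so evaluation
-- order matters and two calls are not interchangeable.
announce :: Int -> IO Int
announce x = do
  putStrLn ("doubling " ++ show x)
  return (x * 2)
```

                                                                In Scheme, Python, or C both of these would share one calling form, and the compiler would have to treat every call as potentially effectful.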

                                                              2. 1

                                                                That article is about Common Lisp, not Scheme. Scheme was explicitly intended to be a computational representation of lambda calculus since day 1. It’s not purely functional, yes, but still functional.

                                                                1. 2

                                                                  If anything that underscores the point, because lambda calculus doesn’t have side effects, while Scheme does. The argument applies to Scheme just as much as Common Lisp AFAICT.

                                                                  Scheme doesn’t do anything to control side effects in the way mentioned in the original article. So actually certain styles of code in OO languages are more functional than Scheme code, because they allow you to express the presence of state and I/O in type signatures, like you would in Haskell.

                                                                  That’s probably the most concise statement of the point I’ve been making in the thread …

                                                                  1. 2

                                                                    I take it we’re going by the definition of ‘purely functional programming’ then. In that case, I don’t understand why Clojure, a similarly impure language, gets a pass. Side-effects are plentiful in Clojure.

                                                                    1. 2

                                                                      Well I said “at least closer to it”… I would have thought Haskell is very close to pure but it appears there is some argument about that too elsewhere in the thread.

                                                                      But I think those are details that distract from the main point. The main point isn’t about a specific language. It’s more about how to reason about code, regardless of language. And my response was that you can reap those same benefits of reasoning in code written in “conventional” OO languages as well as in functional languages.

                                                                      1. 1

                                                                        That’s fair. It’s not that I disagree with the approach (I’m a big fan of referential transparency!) but I feel like this is further muddying the (already increasingly divergent) terminology surrounding ‘functional programming’. Hence why I was especially confused by the OO remarks. It doesn’t help that the article itself also begs the question of static typing.

                                                            2. 2

                                                              Isn’t there a difference between functional code and side-effect-free code?

                                                              It depends on who you ask. :)

                                                              You may be interested in the famous Van Roy’s organization of programming paradigms: https://www.info.ucl.ac.be/~pvr/VanRoyChapter.pdf. Original graphical summary: https://continuousdevelopment.files.wordpress.com/2010/02/paradigms.jpg, revised summary: https://upload.wikimedia.org/wikipedia/commons/f/f7/Programming_paradigms.svg.

                                                              (reposted from https://lobste.rs/s/aw4fem/unreasonable_effectiveness#c_ir0mnq)

                                                              1. 1

                                                                I like your point about the amount of information in type signatures.

                                                                I agree that the type can’t contain everything interesting to know about the function.

                                                                I do think you can choose to put important information in the type. In Haskell it’s normal to produce a more limited effect system, maybe one for database effects only, and another for network effects only, and then connect those at the very top level.

                                                                So, you can put more in the type signature if you wish, and it can be directly useful to prevent mixing effects.
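                                                                A minimal sketch of such a limited effect (MonadDatabase, fetchUser, and greetUser are hypothetical names, not a real library):

```haskell
-- A class naming only the database effects we permit.
class Monad m => MonadDatabase m where
  fetchUser :: Int -> m String

-- Code written against the class can query the database, but its
-- type rules out unrelated effects such as arbitrary network calls.
greetUser :: MonadDatabase m => Int -> m String
greetUser uid = do
  name <- fetchUser uid
  return ("hello, " ++ name)

-- Only at the very top level is the limited effect connected to a
-- concrete monad. A real instance would talk to an actual database.
instance MonadDatabase IO where
  fetchUser uid = return ("user#" ++ show uid)
```

                                                                A second class for network effects could be connected the same way, and a signature mentioning only MonadDatabase guarantees the code cannot mix the two.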

                                                              1. 57

                                                                Honestly, for the vast majority of users, the security domain model of modern *nix desktops is incorrect.

                                                                The vast majority of machines only ever have one user on them. If something running as that user is compromised, that’s it. Even if there were no privilege escalation, so what? You can’t install device drivers…but you can get the user’s email, overwrite their .profile file, grab their password manager’s data, etc, etc.

                                                                I think that if I were designing a desktop operating system today, I would do something along the lines of VM/CMS…a simple single-user operating system running under virtualization. The hypervisor handles segregating multiple users (if any), and the “simple” operating system handles everything else.

                                                                (I know Qubes does something like this, but its mechanism is different from what I’m describing here.)

                                                                In that hypothetical simple single-user operating system, every application runs with something akin to OpenBSD’s pledge baked in. Your web browser can only write to files in the Downloads directory, your text editor can’t talk to the network, etc.

                                                                The *nix permissions model was designed to deal with a single shared machine with a lot of users and everything-is-a-file. The modern use case is a single machine with a single user and the need for semantic permissions rather than file-level permissions.

                                                                1. 16

                                                                  This is very insightful, and definitely has changed the way that I’m thinking about security for my OS design.

                                                                  Here’s the thought that I got while reading your comment: “The original UNIX security model was one machine, many users, with the main threat being from other users on the machine. The modern security model is (or should be) one machine, one user, but multiple applications, with the main threat being from other/malicious applications being run by that single user.”

                                                                  1. 9

                                                                    To make one small tweak to your statement, I would propose the modern model be “many machines, one user, with multiple applications…”. The idea being with those applications you will be dealing with shared risk across all of the accounts you are syncing and sharing between devices. You might only be controlling the security model on one of those machines, but the overall security risk is likely not on the one you have control over and that may make a difference. Do you let applications sync data between every device? Does that data get marked differently somehow?

                                                                    1. 3

                                                                      If you are planning some standard library/API, please also consider testability. For example, a global filesystem with a “static (in the OOP sense)” API makes it harder to mock/test than necessary. I think the always-available API surface should be minimized, to provide APIs which can be tested, secured, and versioned more easily, with more explicit interactions and failure modes than the APIs we are used to.

                                                                    2. 10

                                                                      This is the reason why plan9 completely removed the concept of a “root” user. It has a local account used to configure the server, yet that account cannot access the server’s resources; users connect to it and are granted permissions by a dedicated server (which could be running on the same machine). It is much cleaner when considering a machine that is part of a larger network, because the users are correctly segregated and simply cannot escalate their privileges; they would need access to the permission server to do that.

                                                                      1. 14

                                                                        I agree, and would like to extend it with my opinion:

                                                                        Global shared filesystem is an antipattern (it is a cause of security, maintenance, programming problems/corner cases). Each program should have private storage capability, and sharing data between applications should be well regulated either by (easily) pre-configured data pipelines, or with interactive user consent.

                                                                        Global system services are also an antipattern. APIs that suggest services are always available by default, treating unavailability as an edge case, are an antipattern too.

                                                                        Actually, modern mobile phone operating systems are gradually shifting away from these antiquated assumptions, and have the potential to be much more secure than existing desktop OSs. These won’t reach the mainstream UNIX-worshipping world. On the desktop, Windows is moving in this direction, e.g. desktop apps packaged and distributed via the Microsoft Store each run in separate sandboxes (I had quite a hard time finding my HexChat logs), but Microsoft’s ambition to please Mac users (who think they are Linux hackers) is slowing the adoption (looking at you, winget, and the totally mismanaged Microsoft Store with barely working search and non-scriptability).

                                                                        1. 10

                                                                          Global shared filesystem is an antipattern (it is a cause of security, maintenance, programming problems/corner cases).

                                                                          If your only goal is security, this is true. If your goal is using the computer, then getting data from one program to another is critical.

                                                                          Actually modern mobile phone operating systems are gradually shifting away from these antiquated assumptions, and are having the potential to be much more secure than existing desktop OSs.

                                                                          And this (along with the small screens and crappy input devices) is a big part of why I don’t do much productive with my phone (and the stuff I do use for it tends to be able to access my data, eg – my email client).

                                                                          1. 4

                                                                            Actually I have seen many (mostly older) people for whom the global filesystem is a usability problem. It is littered with stuff that is uninteresting to them; they just want to see their pictures when pressing the “attach picture” button on a website, not the programs, not their music, not /boot or C:\Windows, etc…

                                                                            Also it creates unnecessary programming corner cases: if your program wants to create a file with the name foo, another process may create a directory with the same name in the same location. There are conventions to lower this risk, but it is still an unnecessary corner case.

                                                                            Getting data from one place to another can be solved in a number of ways without a global filesystem. For example, you can create a storage location and share it with multiple applications, though this still creates the corner cases I mentioned above. Android does provide globally shared storage for this task, which is not secure, though access to it at least requires an explicit permission. You can also specifically share data from one app to another without the need for any filesystem, as with Android’s Activities or Intents.

                                                                            I think there are proven prototypes for these approaches, though I also think the everything-is-a-file approach is a dead end in itself, which further limits the need for a “filesystem”.

                                                                            Note: the best bolted-on security fix to the traditional UNIX filesystem seems to me to be the OpenBSD pledge approach; too bad OpenBSD has other challenges which limit its adoption. I also like the sandbox-based approaches, but then I’d rather go a few steps further.

                                                                            1. 2

                                                                              Getting data from one place to another can be solved a number of ways without global filesystem. […] Android does provide a globally shared storage for this task, which is not secure, its access needs explicit privilege at least.

                                                                              That is a great example of how hard it is to find the right balance between being secure and not nagging the user.

                                                                              In order not to bother the users too much or too often, Android will ask a simple question: do you want this app to access none of your shared files (but I want this funny photo-retouch app to read and modify the three pictures I took ten minutes ago), or do you allow it to read all your shared files (and now the app can secretly upload all your photos to a blackmail/extortion gang)? Neither of these options is really good.

                                                                              The alternative would be fine-grained access, but then the users would complain about having too many permission request dialogs.

                                                                              In the words of an old Apple anti-Windows ad: «You are coming to a sad realization: cancel or allow?»

                                                                              1. 5

                                                                                Meanwhile in ios you can use the system image picker (analogous to setuid) to grant access to select files without needing any permission dialogs.

                                                                                1. 1

                                                                                  This is a valid option on Android as well

                                                                          2. 6

                                                                            I disagree. Having files siloed into application-specific locations would destroy my workflow. I’m working on a project that includes text documents, images and spreadsheets. As an organization method, all these files live under a central directory for the project as a whole. My word processor can embed images. The spreadsheet can embed text files. This would be a nightmare under a siloed system.

                                                                            A computer should adapt to how I work, not the other way around.

                                                                            1. 7

                                                                              In a properly designed siloed filesystem, this would still be perfectly possible. You’d just have to grant each of those applications access to the shared folder. Parent is not suggesting that files can’t be shared between applications:

                                                                              sharing data between application should be well regulated either by (easily) pre-configured data pipelines, or with interactive user consent.

                                                                              1. 1

                                                                                You could even create security profiles based on projects, with the same applications having different sets of shared access patterns depending on the profile.

                                                                                It could be paired with virtual desktops, for example, to have a usable UX for this feature. I’d be happy in my daily work when shuffling projects, to have only the project-relevant stuff in my view at a time.

                                                                            2. 3

                                                                              I think that snaps (https://snapcraft.io/) have this more granular permission model, but nobody seems to like them (partially because they’re excruciatingly slow, which is a good reason).

                                                                              1. 2

                                                                                Yeah, Flatpak does this too. It’s why I’m generally on board with Flatpak, even though the bundled library security problem makes me uncomfortable: yes they have problems, but I think they solve more than they create. (I think.) Don’t let perfect be the enemy of good, etc.

                                                                              2. 3

                                                                                Global shared filesystem is an antipattern (it is a cause of security, maintenance, programming problems/corner cases). Each program should have private storage capability, and sharing data between application should be well regulated either by (easily) pre-configured data pipelines, or with interactive user consent.

                                                                                I would not classify a global shared filesystem as an antipattern. It has its uses, and for most users it is a nice metaphor. As with all problems that are not black or white, what is needed is to find the right balance between usefulness, usability and security.

                                                                                That said, I agree on the sentiment that the current defaults are “too open”, and reminiscent of a bygone era.

                                                                                Before asking for pre-configured data pipelines (hello selinux), or interactive user consent (hello UAC), we need to address real-world issues that users of Windows 7+ and macOS 10.15 know very well. Here are a couple of examples:

                                                                                • UAC fatigue. People do not like being constantly asked for permission to access their own files. “It is my computer, why are you bothering me?” «Turn off Vista’s overly protective User Account Control. Those pop-ups are like having your mother hover over your shoulder while you work» (from the New York Times article “How to Wring a Bit More Speed From Vista”)
                                                                                • Dialog counterfeits. If applications have the freedom to draw their own widgets on the screen (instead of being limited to a fixed set of UI controls), then applications will counterfeit their own “interactive user consent” panel. (Zoom was caught faking a “password needed” dialog, for example). Are we going to forbid apps from drawing arbitrary shapes or do we need a new syskey?
                                                                                • Terminal. Do the terminal and the shell have access to everything by default, or do you need to authorize every single cd and ls?
                                                                                • Caching. How long should authorization tokens be cached? For each possible answers there are pros and cons. Ask administrators of Kerberos clusters for war stories.
                                                                                1. 4

                                                                                  If you insist on a global filesystem (at this imaginary OS design meeting we are at), I’d rather suggest two shared filesystems, much like how a Harvard architecture separates code and data: one for system files and application installs, and one for user data.

                                                                                  By pre-configured data pipelines I’d rather imagine something like https://genode.org/ Sculpt. I’d create “workspaces” with the virtual desktop metaphor, which could be configured in an overlay where apps (and/or their storage spaces, or dedicated storage spaces) appear as graph nodes that can be linked graphically via “storage access” typed links.

                                                                                  On a workspace for ProjectA and its related virtual desktop, I could give the Spreadsheets, Documents, and Email apps StorageRW links to a ProjectAStorage object. Access to this folder could even be made the default in the given security context.

                                                                                  Regarding the terminal: I don’t think having access to everything just because you use a text-based tool is a legitimate use case. Open a terminal in the current security context.

                                                                                  Regarding the others: these things can be tuned, and once compartmentalization is made user-friendly, they get simpler. Reimagining things from a clean sheet would be beneficial, as backwards compatibility costs us a lot, arguably more than it gives us.

                                                                                  With SELinux my main problem is its bolted-on nature and the lack of easy, intuitive configuration, which is worsened by the lack of permission/ACL/secontext inheritance in Unix filesystems. Hello, relabeling after extracting a tar archive…

                                                                                  About the other points I partly agree; they are continuously balancing these workflows and fighting abuse (a11y features were abused on Android, leading to some features being disabled to avoid fake popups, if I recall correctly).

                                                                                2. 3

                                                                                  Global shared filesystem is an antipattern

                                                                                  I’d make that broader: A global shared namespace is an antipattern. Sharing should be via explicit delegation, not as a result of simply being able to pick the same name. This is the core principle behind memory-safe languages (you can’t just make up an integer, turn it into a pointer, and access whatever object happens to be there). It’s also the principle behind capability systems.

                                                                                  The Capsicum model retrofits this to POSIX. A process in capability mode loses access to all global namespaces: the system calls that use them stop working. You can’t open a file, but you can openat a file if you have a directory descriptor. You can’t create a socket with socket, but you can use a socket whose file descriptor you receive over a socket you already hold. Capsicum also extends file descriptors with fine-grained rights so you can, for example, delegate append-only access to a log file to a process, but not allow it to read back earlier log messages or truncate the log.

                                                                                  Capsicum works well with the Power Box model for privilege elevation in GUI applications, where the Open… and Save… dialog boxes run as more privileged external processes. The process invoking the dialog box then receives a file descriptor for the file / directory to be opened or a new file to be written to.

                                                                                  It’s difficult to implement in a lot of GUI toolkits because their APIs are tightly coupled to the global namespace: for example, a save dialog returns a string representing the path, rather than an object representing the right to create a file there.

                                                                                3. 4

                                                                                  A while ago I dreamed up – but never really got around to trying to build one (although I do have a few hundred lines of very bad Verilog for one of the parts) – an interesting sort of machine, which kind of takes this idea to its logical conclusion, only in hardware. Okay, I didn’t exactly dream it up, the idea is very old, but I keep wondering what a modern attempt at it would look like.

                                                                                  Imagine something like this:

                                                                                  • A stack of 64 or so small SBCs, akin to the RPi Zero, each of them running a bare-metal system – essentially, MS-DOS 9.0 + exactly one application :)…
                                                                                  • …with a high-speed interconnect so that they can pass messages to/from each other …
                                                                                  • …and another high-speed interconnect + a central video blitter, that enables a single display to show windows from all of these machines. Sort of like a Wayland compositor, but in hardware.

                                                                                  Now obviously the part about high-speed interconnect is where this becomes science fiction :) but the interesting parts that result from such a model are pretty fun to fantasize about:

                                                                                  • Each application has its own board. Want to run Quake V? You just pick the Quake V cartridge – which is actually a tiny computer! – and plug it in the stack. No need to administer anything, ever, really.
                                                                                  • All machines are physically segregated – good luck getting access to shared resources, ‘cause there aren’t any (in principle – in my alternate reality people haven’t quite figured out how to write message-passing code that doesn’t suffer from buffer overflows, I guess, and where a buffer can be overflown, anything can happen given enough determination).
                                                                                  • Each machine can come with its own tiny bit of fancy hardware. High-resolution, hi-fi DAC for the MP3 FLAC player board, RGB ambient LEDs for the radio player, whatever.
                                                                                  • Each machine can make its own choices in terms of all hardware, for that matter, as long as it plays nice on the interconnect(s). The “Arduino Embedded Development Kit” board, the one that runs the IDE? It also sports a bunch of serial ports (real serial ports, none of that FTDI stuff), four SPI ports, eight I2C ports, and there’s a tiny logic analyzer on it, too. The Quake V board is probably mostly a CPU hanging off two SLI graphics cards.

                                                                                  I mean, with present-day tech, this would definitely be horrible, but I sometimes wonder if my grandkids aren’t going to play with one of these one day.

                                                                                  Lots and lots and lots of things in the history of computing essentially happened because there was no way to give everyone their own dedicated computer for each task, even though that’s the simplest model, and the one that we use to think about machines all the time, too (even in the age of multicore and big.LITTLE and whatnot). And lots of problems we have today would go away if we could, in fact, have a (nearly infinite) spool of computers that we could run each computational task on.

                                                                                  1. 3

                                                                                    I would, 100%, buy such a machine.

                                                                                    I seem to recall someone posted onto lobste.rs something about a “CP/M machine of the future” box a while back: a box with 16 or 32 Z80s, each running CP/M and multiplexing the common hardware like the screen. Sounds similar in spirit to what you’re describing, maybe.

                                                                                    1. 3

                                                                                      This reminds me of GreenArrays, even if there are major differences.

                                                                                      1. 2

                                                                                        The EOMA68 would probably have benefited from this idea. They were working on the compute engine being in CardBus format so that it could be exchanged…

                                                                                        1. 1

                                                                                          What could we call the high-speed interconnect?

                                                                                          Well, it’s an Express Interconnect, and it’s for Peripheral Components, so I guess PCIE would be a good name.

                                                                                          It could implement hot-swapping, I/O virtualization, etc. for the “cartridges” (that’s a long word, let’s call them “PCIE cards”).

                                                                                          1. 1

                                                                                            I think I initially wanted to call it Infiniband but I was going through a bit of an Infiniband phase back when I first concocted this :).

                                                                                        2. 2

                                                                                          Sounds to me like an object capabilities system with extra segregation of users. Would that be a fair assessment?

                                                                                          1. 3

                                                                                            I think, in my mental model, it would be a subset or a particular instance of an object capabilities system.

                                                                                          2. 2

                                                                                            (I know Qubes does something like this, but its mechanism is different from what I’m describing here.)

                                                                                            Can you elaborate on how it’s different? What you’re describing sounds exactly like Qubes.

                                                                                            1. 3

                                                                                              Let me preface this with “I may be completely wrong about Qubes.”

                                                                                              From what I understand, Qubes is a single-user operating system with multiple security domains, implemented as different virtual machines (or containers? I don’t remember).

                                                                                              In my idea different users, if any, run in different virtual machines under a hypervisor. The individual users run a little single-user operating system. That single user system has a single security domain under which all applications run, but applications are granted certain capabilities at runtime and are killed if they violate those capabilities. So all the applications are in the same security domain but accorded different capabilities (I had used OpenBSD’s pledge as an example, which isn’t quite like a classic capability system but definitely in the same vein).

                                                                                              In my mind, it’s basically a Xen hypervisor running an instance of HaikuOS per user, with a little sandboxing mechanism per app. There are no per-file permissions or ownership, but rather application-specific limitations as expressed by the sandbox program associated with them.

                                                                                              The inspiration was VM/CMS, in its original incarnation where CMS was still capable of running on the bare metal; if your machine doesn’t have multiple users you can just run the little internal single-user OS directly on your hardware. Only on physically shared machines would you need to run the hypervisor.

                                                                                            2. 2

                                                                                              It’s obviously a different approach, but very fine-grained permissions are a feature of recent macOS releases.

                                                                                            1. 6

                                                                                              this seems to be a so called ‘fluent style’ api that sets initial ‘configuration’ parameters. What makes it ‘declarative’?

                                                                                              1. 3

                                                                                                I see declarative as a spectrum. Fluent style is probably still close to imperative but is a step in the direction of declarative. To me, this library looks like functional reactive programming but domain specific. It’s not uncommon for people to view functional reactive as being closer to declarative. What puts it in that category is that some control flow is abstracted away and it seems like the functions are referentially transparent.

                                                                                                1. 4

                                                                                                  I see declarative as a spectrum

                                                                                                  You may be interested in the famous Van Roy’s organization of programming paradigms: https://www.info.ucl.ac.be/~pvr/VanRoyChapter.pdf. Original graphical summary: https://continuousdevelopment.files.wordpress.com/2010/02/paradigms.jpg, revised summary: https://upload.wikimedia.org/wikipedia/commons/f/f7/Programming_paradigms.svg.

                                                                                                  It is definitely a spectrum, but a multi-dimensional one. :)

                                                                                                2. 1

                                                                                                  I too find the post a bit strange and unorganized. The following passage is repeated twice for some reason:

                                                                                                  As hinted above, since our specification of the animation was entirely declarative, it can’t really “do anything else” like manipulate the DOM. This gives us fantastic debugging and editing capabilities. As it’s “just” a mathematical function:

                                                                                                  anim_circle: (t: Time) -> (cx: float, cr: float)
                                                                                                1. 15

                                                                                                  We had a syntax for HTML-in-JS in the early 2000s. Standardized as E4X in ECMA-357 but never properly implemented by the most popular browsers and then removed. https://developer.mozilla.org/en-US/docs/Archive/Web/E4X.

                                                                                                  Example from https://en.wikipedia.org/wiki/ECMAScript_for_XML:

                                                                                                  var sales = <sales vendor="John">
                                                                                                      <item type="peas" price="4" quantity="6"/>
                                                                                                      <item type="carrot" price="3" quantity="10"/>
                                                                                                      <item type="chips" price="5" quantity="3"/>
                                                                                                    </sales>;
                                                                                                  
                                                                                                  for each( var price in sales..@price ) {
                                                                                                    alert( price );
                                                                                                  }
                                                                                                  delete sales.item[0];
                                                                                                  sales.item += <item type="oranges" price="4"/>;
                                                                                                  sales.item.(@type == "oranges").@quantity = 4;
                                                                                                  
                                                                                                  1. 2

                                                                                                    If Apple and (mostly) Microsoft hadn’t killed ES4 back in the 2000s, only to re-invent it and call it TypeScript 10 years later, we’d all be a lot better off by now.

                                                                                                    1. 1

                                                                                                      Interesting, surprised I’ve never seen this!

                                                                                                      As someone who wasn’t a professional dev during peak XML, this provokes a similar response to seeing e.g. XPath in the wild - “wow, this looks super powerful but maybe a tad too baroque”

                                                                                                      1. 3

                                                                                                        For me, E4X isn’t peak XML. It’s the period after the peak where we realised that SOAP was too complex and we were trying to transition away from it to something that was more web-friendly, but had to rely on XML representations.

                                                                                                        There were definitely people using the full gamut of XML, including XPath, but for many developers the trend was towards SOAP because automatic serialization/deserialization machinery existed and the developer requirement was getting smallish amounts of data from here to there right now.

                                                                                                        Developers using .NET could just point at a WSDL file and interact with a service without knowing anything about the underlying mechanisms. This promised interoperability with Java, but was excruciatingly painful because the tools were close, but not close enough, in terms of compliance. Over time, people forgot the full breadth of what XML could do since you weren’t really working with XML at the level where XPath could be used.

                                                                                                        XMLSpy and other specialist XML tools appeared near the end of the 20th century, but usage dropped, except for niches like publishing and special processes like FDA submissions. Web services took over and morphed into classic SOA. The originators of these were pushing the WS-* standards (that were never intended for human consumption) while implementers and the internet envisioned a simpler model building on REST and JSON. Some people couldn’t get to JSON without an intermediate step of XML, hence E4X.

                                                                                                        1. 1

                                                                                                          For me, E4X isn’t peak XML. It’s the period after the peak where we realised that SOAP was too complex and we were trying to transition away from it to something that was more web-friendly, but had to rely on XML representations.

                                                                                                          Interesting perspective. For me E4X is part of that period where XHTML was starting to be a thing (a desirable thing) but was killed by draconian error handling.

                                                                                                          All that aside, let’s not forget that E4X is also a legacy of ActionScript, the JavaScript-like language that is (was) used to write Flash programs. The ability to seamlessly interact with HTTP endpoints that returned XML data made Flash (at least in the eyes of Adobe) a viable platform for business environments and not just for games.

                                                                                                          1. 2

                                                                                                            XHTML arrived at the peak, but it took a long time before people really understood what it meant and tried to apply it, around 2002-2003. It was desirable in a number of ways, but the W3C mishandled the transition because they were staunch XML fans, and the implementers were relatively unresponsive to the market. The WHATWG appeared on the scene in 2004 and the rest is history.

                                                                                                            E4X grew from the identification of some of the better perspectives at the time rather than being an outgrowth of the XML juggernaut. In those times Flash might still have been considered an option for the next era of the Web (aka Web 2.0). Very quickly you’d see Adobe identify that something like Flex was needed, only for Flash to collapse because, as a runtime, there was never any real outreach/integration with the web. While I don’t think Flash would have succeeded for much longer than it did, I think Adobe made many strategic failures which resulted in a shorter fight. The death knell was Apple, but people on the other side weren’t going to advocate for a closed platform like Flash either.

                                                                                                        2. 1

                                                                                                          XPath itself I don’t think is too baroque. At least, if you have to deal with XML, it’s very handy to have on hand, provided the XML document isn’t huge.

                                                                                                          XSLT would be my point of “eh, this feels like too much”. But XPath on its own can be very handy.

                                                                                                        3. 1

                                                                                                          Ah, good old E4X.

                                                                                                          We had JavaScript style sheets for a while in the old days, too, but they didn’t catch on.

                                                                                                        1. 12

                                                                                                          “I also wish they debloated packages; maybe I’ve just been spoilt by KISS. I now have D-Bus on my system thanks to Firefox. :(”

                                                                                                          …why is D-Bus bad, exactly?

                                                                                                          Also, this guy reminds me of the time I spent in my early high-school years distro-hopping. That’s not a good thing or a bad thing, just an observation… That time’s also when I discovered my love of OpenBSD, so I’m glad they’ve gone down a similar path.

                                                                                                          1. 11

                                                                                                            Hi, I’m “this guy”.

                                                                                                            …why is D-Bus bad, exactly?

                                                                                                            I just don’t like how GNOME-y it is. It uses glib types, instead of just C ones, when it has nothing to do with GNOME. Many apps that shouldn’t need D-Bus hard-depend on it, which is annoying. I think it’s just my general disdain for anything freedesktop.org.

                                                                                                            Also, this guy reminds me of the time I spent in my early high-school years distro-hopping.

                                                                                                            I only hop once or twice a year, at max. Besides, I’m holed up at home with nothing better to do, so why not?

                                                                                                            That said, I didn’t even want to post this here. These are just my opinions. You can have your own, I won’t stop you. My previous post here got some pretty… interesting responses. I didn’t think people would be this annoyed at my choices. Anyway…

                                                                                                            1. 4

                                                                                                              Nice writeup.

                                                                                              Did you check out using ifconfig join in lieu of ifconfig nwid? (http://man.openbsd.org/ifconfig#join) You can even put it in your /etc/hostname.iwm0 and list a few networks and their wpakeys, so when you take your laptop to other known networks it should automatically detect and join them.
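                                                                                              For example (the SSIDs and wpakeys below are made up), an /etc/hostname.iwm0 along these lines would auto-join whichever of the listed networks is in range:

```
join homenet wpakey correcthorsebattery
join coffeeshopwifi
join worknet wpakey anothersecret1
inet autoconf
```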

                                                                                                              1. 2

                                                                                                                It’s a great feature. My life got significantly better when it landed. Thanks, @phessler!

                                                                                                              2. 3

                                                                                                                I didn’t think people would be this annoyed at my choices. Anyway…

                                                                                                                :trollhat: – you do realize that this is the Internet and people need validation that they are in the 95th percentile of “smart,” right?

                                                                                                                1. 2

                                                                                                                  Good to see you here. Thanks for writing that up. I haven’t looked at openBSD for quite some time. A post like this, informs people like me that there is actually good support on recent laptops. That’s a good thing.

                                                                                                                  Also, Nice window mgr screen/rice. Very clean.

                                                                                                                  1. 2

                                                                                                                    Hi, I posted it, sorry about that.

                                                                                                                    While I was slightly concerned about the responses that would come up due to what was said in your previous post, I thought that this article was interesting and relevant while not being quite so divisive - like Plan 9, OpenBSD seems to be one of those things that people can’t help but approve of. So far the discussion seems to be quite civil and cover various facets of the post, so I think it’s been a success.

                                                                                                                    I enjoy your blog, thanks a lot.

                                                                                                                    1. 2

                                                                                                                      Hi, I posted it, sorry about that.

                                                                                                                      Hey! No apologies needed. Thanks for reading, and posting. :)

                                                                                                                    2. 2

                                                                                                                      Again, not saying hopping’s a bad thing at all! I found it to be a good way to learn about different systems. I took a similar path to yours, actually, but with CRUX instead of KISS [since KISS didn’t exist].

                                                                                                                      1. 2

                                                                                                                        [D-Bus] uses glib types, instead of just C ones, when it has nothing to do with GNOME.

                                                                                                                        GLib is used by (and developed by) GNOME, but it does not have any GNOME-ism in it. It is just another library meant to make dealing with C more tolerable on a variety of systems.

                                                                                                                        «GLib is the low-level core library that forms the basis for projects such as GTK and GNOME. It provides data structure handling for C, portability wrappers, and interfaces for such runtime functionality as an event loop, threads, dynamic loading, and an object system.»

                                                                                                                        Should projects really implement yet another base64 conversion function?

                                                                                                                        Many apps that shouldn’t need D-Bus hard-depend on it, which is annoying.

                                                                                                                        D-Bus is a serialization and messaging library with a central broker. If an application needs to talk with another application, it needs to speak the (de)serialization format used by the daemon. How is having a standardized (de)serialization format a bad thing?

                                                                                                                        Similarly, if an application does not want to send messages synchronously with this other (possibly slow) application, it needs to implement some sort of dispatching mechanism and a dispatching thread. How is having a standardized dispatch mechanism and daemon a bad thing?

                                                                                                                        I doubt that nowadays there are many interactive apps that 1) do not need to talk with another application and 2) don’t need to do that asynchronously and reliably.

                                                                                                                        That said, D-Bus could be improved or provided by the kernel. But something like or with the role of D-Bus is a necessary piece of every dynamic operating system.

                                                                                                                        I think it’s just my general disdain for anything freedesktop.org.

                                                                                                                        Again, freedesktop.org is just a consortium of people working hard on desktop environments. Every now and then these experts sit together and harmonize/standardize things that are, up to that point, wildly incompatible with each other. https://www.freedesktop.org/wiki/Specifications/ How is that a bad thing?

                                                                                                                        Is, for example, having a drag-n-drop standard spoken by GTK, Qt and all other GUI toolkits such a bad thing?

                                                                                                                        1. 1

                                                                                                                          Running linux in vmm(4) has been on my todo list as well - I know it’s possible, I’ve just used vmm for OpenBSD on OpenBSD :~)

                                                                                                                          I’m also lazy, and my qemu linux and windows xp installs both still work on OpenBSD, so I’ve not ventured down the vmm-for-everything-else path yet. I would be interested in reading about your experiments with running linux in vmm.

                                                                                                                          1. 2

                                                                                                                            I was considering building a laptop with NetBSD Xen, for the sole purpose of running VMs that I’d selectively share hardware with. Something like a cheap and (probably) ineffective QubesOS… The “beauty” in the idea was that I could run a small linux distro and get a better driver for my wireless card, then bridge other VMs that need the network to it, etc.

                                                                                                                            But, now that that machine is my 3rd grader’s primary way to do school work, and runs Ubuntu, I guess the other plans I had for linux based VMs could probably be achieved via vmm(4), or even qemu as you point out–thanks for the reminder! :)

                                                                                                                      1. 2

                                                                                                                        It was interesting to hear Greg Kroah-Hartman’s (to me surprising) comments on the stability of ZFS on Linux in a recent AMA.

                                                                                                                        “You are relying on a kernel module that no one in the kernel community can ever touch, help out with, or debug. The very existence of the kernel module is at the whim of the kernel itself not doing something that might end up breaking it either with api changes, or functional changes, as the kernel community does not know what is in that code, nor does it care one bit about it.”

                                                                                                                        https://www.reddit.com/r/linux/comments/fx5e4v/im_greg_kroahhartman_linux_kernel_developer_ama/fn5t6t4

                                                                                                                        Slightly scary considering how important correctness is when talking about filesystems. I’ve been using btrfs recently but prefer ZFS and would like more explicit kernel support.

                                                                                                                        1. 8

                                                                                                                          It was interesting to hear Greg Kroah-Hartman’s (to me surprising) comments on the stability of ZFS on Linux in a recent AMA: “You are relying on a kernel module that no one in the kernel community can ever touch, help out with, or debug. […]”

                                                                                                                          Slightly scary considering how important correctness is when talking about filesystems.

                                                                                                                          That ZFS on Linux is an unsupported external module is scary. But relying on filesystems such as ext4 or XFS that do not have any measures in place against silent data corruption is, to me, even scarier.

                                                                                                                          1. 1

                                                                                                                            What about btrfs? I use it at work and it seems to work pretty well.

                                                                                                                          2. 6

                                                                                                                            They have a point, but you’ve got to pick the least risky option that provides the features you want.

                                                                                                                            I’ve run btrfs. I’ve run ZFS. I’ve used snapshots/subvolumes on both. Btrfs was flaky and I lost data. ZFS was a joy to use and I have not lost data.

                                                                                                                            1. 5

                                                                                                                              the kernel community does not know what is in that code, nor does it care one bit about it

                                                                                                                              An even stronger argument could be made for 99% of flagship Android phones shipping proprietary, out of tree modules. Yet, ZFS is open source and gets flak anyway because it’s not GPL.

                                                                                                                              I think we’d all like to see ZFS ship with the Linux kernel, but adding boot.supportedFilesystems = ["zfs"] to my nix config isn’t really ever going to be a huge deal. So, really, this entire line of argument is just more party-line GPL bickering, nothing new to see here.

                                                                                                                              1. 4

                                                                                                                                The position Greg, and seemingly others in the Linux project, take on this is persistently tedious – for ZFS users and developers alike. All I can suggest is that there are other UNIX platforms that make different trade-offs, and which don’t have a frustrating relationship with ZFS; e.g., illumos or FreeBSD.

                                                                                                                              1. 29

                                                                                                                                www.goatcounter.com by @arp242 might be an alternative, too.

                                                                                                                                1. 5

                                                                                                                                  One thing I like about Goat Counter is its lightweight UI that loads without a noticeable delay. I may go for Plausible on some sites for the features Goat Counter doesn’t have, but that trades the spinner-free UI for those features.

                                                                                                                                  Also, not using Google Analytics (or anything) on your site is easy… when other people are involved, it becomes much harder, especially if your project is not a website/service, and you want to offload that work to someone else.

                                                                                                                                  1. 2

                                                                                                                                    Fair point.

                                                                                                                                    Staying lean and lightweight has been a goal of mine with Plausible. Of course, tradeoffs have been made. As the UI grew more complex I started using React but now I’m planning to move to Preact to save on bundle size.

                                                                                                                                    The spinners are there because the actual stats engine is quite naive at the moment. I don’t pre-aggregate any of these graphs and they take linear time to draw. It worked fine for a while but this approach is starting to become a problem.

                                                                                                                                    The next things on my list are adding annual plans and then rewriting the stats engine. The goal is to fetch these graphs in constant time: under 500ms at worst, but a good target would be 200ms.

                                                                                                                                    No promises, but you can expect less spinning in the near-term future :)
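The fix described above — pre-aggregating instead of scanning raw events at query time — can be sketched roughly like this (a hypothetical illustration, not Plausible's actual engine): roll events up into per-day counters at ingest, so drawing a graph is a handful of dictionary lookups rather than a pass over every pageview.

```python
from collections import defaultdict
from datetime import date

class StatsEngine:
    """Toy pre-aggregation: O(1) work per event at write time,
    one lookup per graph point at read time, independent of
    how many raw events have been recorded."""

    def __init__(self):
        self.daily_pageviews = defaultdict(int)  # date -> count

    def ingest(self, day):
        # Update the rollup when the event arrives, not when the graph is drawn.
        self.daily_pageviews[day] += 1

    def graph(self, days):
        # Each point is a dictionary lookup; missing days read as zero.
        return [self.daily_pageviews.get(d, 0) for d in days]

engine = StatsEngine()
for _ in range(3):
    engine.ingest(date(2020, 4, 1))
engine.ingest(date(2020, 4, 2))
print(engine.graph([date(2020, 4, 1), date(2020, 4, 2)]))  # [3, 1]
```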

                                                                                                                                  2. 1

                                                                                                                                    yeah that’s another option!

                                                                                                                                    1. 1

                                                                                                                                      Looks similar to Clicky.

                                                                                                                                      1. 1

                                                                                                                                        https://simpleanalytics.com/ is another alternative in the same space (no cookies, GDPR-compliant, more privacy oriented, “we don’t track you”).

                                                                                                                                      1. 12

                                                                                                                                        stoves, which are increasingly induction-based and safe rather than fire-hazard gas burners

                                                                                                                                        As an actual cook, I don’t find it uncontroversially progressive. You can do useful things with a readily available open flame. I get the point about efficiency, but it’s not all there is to cooking.

                                                                                                                                        1. 3

                                                                                                                                          I don’t think the author was saying that everything is 100% ‘better’. I think the point about induction was that it makes electric cooking better, and more of a viable alternative to gas — not that it is somehow equivalent or better in all regards.

                                                                                                                                          The same goes for a lot of this list. I like that it reminds us that a lot of changes to normal life - even just extra options being available - can make a big difference in quality of life for many, especially those with particular traits.

                                                                                                                                          Just consider the introduction of self checkouts at supermarkets. For those who sometimes find it hard to interact with a human the option to avoid this when it’s not a good day/time for it can be a huge relief.

                                                                                                                                          1. 2

                                                                                                                                            and I think the point about induction was supposed to be that it makes electric cooking better, and more of a viable alternative to gas, not that it is somehow equivalent or better in all regards.

                                                                                                                                            I’d go further, and say that for most people cooking at home, induction is better than gas. I cook a lot, have used both gas and induction a bunch, and I would not choose gas over induction for my home. I’m actually about to move to a modern house with a nice gas cooktop, and I know one of the things I’m going to miss is my induction system.

                                                                                                                                            Compared to previous electric cooking options… wow… induction is just a complete game changer. I don’t think that’s an overstatement.

                                                                                                                                            1. 2

                                                                                                                                              I use both gas and induction and I’m not sure which I like better. Touch controls that activate accidentally when you move pans, high-pitched whines, and beeping all cause me unnecessary irritation with the induction hob I have access to. Induction hobs with manual controls do exist, however.

                                                                                                                                              1. 4

                                                                                                                                                Touch controls - that activate accidentally when you move pans

                                                                                                                                                Most induction systems can be controlled by ovens that have physical knobs — for example, the Bosch HND21MR50 [1]. The interface is de facto standardized. For unfathomable reasons, such combined induction system + compatible oven sets with physical controls are not sold in all markets.

                                                                                                                                                [1] https://www.euronics.de/haus-und-haushalt/kochen-und-backen/einbaugeraete/herd-sets/hnd21mr50-herdset-mit-induktionskochfeld-bestehend-aus-hea23t351-nib645b17-m-edelstahl-edelstahl-4051168901759

                                                                                                                                          2. 2

                                                                                                                                            Yeah, I won’t ever move into a house without a gas stove.