1. 2

    While it’s great that Microsoft is finally adding features to the Windows console, I’ve been a happy user of http://cmder.net/ whenever I wanted a real terminal on Windows.

    1. 7

      The problem isn’t just what terminal to use: most Windows applications use the console API, which sucks because it doesn’t support basic functionality that UNIX-based environments have had for years, nor does it match real-world expectations (e.g. the PowerShell example), and there’s nothing you can do about it except rewrite all the programs.

      The good news is that’s exactly what Microsoft is suggesting now:

      This issue further amplifies the recommendation to start writing (or update existing) Command-Line apps to emit VT enriched text, rather than calling Win32 Console APIs to control/format Console output: VT-enabled apps will not only benefit from rendering improvements like those described above (with more on the way), but they’ll also enjoy richer features available via VT-sequences, like 24-bit color support, and default foreground/background color support.

      That’s big news, and hopefully it’ll get more publicity than the closing remarks of a very long (and self-congratulatory) blog post, since it’ll only make tools like cmder better.

      1. 5

        Most Windows applications use the console API, which sucks because it doesn’t support basic functionality that UNIX-based environments have had for years

        As a developer, I often feel the opposite. The unix terminal is a crippled mess, while the Windows console works fairly easily. Look at keyboard input, for example. The Windows API will return nice key codes with modifier flags. It’ll even tell you key up events if you care! Unix terminals can only send you character events easily, and the rest of the keyboard is a mix of various sequences that may or may not give you the modifiers. Assuming the sequences are all xterm or using ncurses etc kinda sorta takes care of the various sequences. Kinda. Sorta. They still frequently don’t work or get broken up or have awkward delays (you can’t assume the esc char is the escape key without checking for more input). Some terminals send stuff you cannot detect without a full list.

        Oh, and try to treat shift+enter differently than enter in a Unix terminal program. Some terminals send codes to distinguish it… which can make programs do random stuff when they don’t recognize it.

        On Windows, the structs are tagged. You can easily skip messages you don’t want to handle. Of course, both Windows and unix are based on some underlying hardware and I appreciate the history of it, but…

        …I loathe the VT “protocol”. …but alas, rewriting all those programs isn’t going to happen, so I get it. And I know there are some advantages. I like the pty thing as much as the next person - it is cool to be able to easily redirect all this stuff, it is reasonably efficient in terms of bytes transferred, and the cooked mode has the potential to massively decrease latency (though most programs turn that off as soon as they can, since it sucks in practice)… I just wish we were redirecting a sane, well-defined protocol instead, and I prefer the console API as the base feature set.

        Oh well. Hopefully everyone will at least just agree to use xterm escape sequences everywhere.
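        To make the ambiguity concrete, here is a minimal sketch in C (function name and conventions are my own, not from any real terminal library) of decoding one xterm-style modified arrow key. Note that a lone ESC byte is indistinguishable from the start of a sequence that has not fully arrived yet:

        ```c
        #include <assert.h>
        #include <stddef.h>
        #include <stdio.h>

        /* Decode an xterm-style modified arrow key: ESC [ 1 ; m X
         * where m is 1 + a bitmask (1=Shift, 2=Alt, 4=Ctrl) and X is
         * A/B/C/D for Up/Down/Right/Left.  Returns 1 on success.
         * A lone ESC returns 0 - but it might be the Escape key, or a
         * sequence that hasn't fully arrived: the caller can't know yet. */
        int parse_xterm_arrow(const char *buf, size_t len, char *key, int *mods)
        {
            if (len < 6 || buf[0] != 0x1b || buf[1] != '[')
                return 0;
            if (buf[2] != '1' || buf[3] != ';' || buf[4] < '2' || buf[4] > '8')
                return 0;
            *mods = (buf[4] - '0') - 1;   /* e.g. '5' -> 4 = Ctrl */
            *key = buf[5];
            return *key >= 'A' && *key <= 'D';
        }

        int main(void)
        {
            char key; int mods;
            assert(parse_xterm_arrow("\x1b[1;5A", 6, &key, &mods) == 1);
            assert(key == 'A' && mods == 4);          /* Ctrl+Up */
            assert(parse_xterm_arrow("\x1b", 1, &key, &mods) == 0);
            printf("Ctrl+Up decoded: key=%c mods=%d\n", key, mods);
            return 0;
        }
        ```

        On Windows, by contrast, ReadConsoleInput hands you a tagged KEY_EVENT_RECORD with the virtual key code and modifier flags already separated out - no byte-level guessing.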

        1. 1

          If they are referring to non-interactive programs with color output, and even shells with readline-style editing, then it’s better to port to standard VT protocols. Input is not much of an issue for such programs, and the benefits are a more standard interface and being usable over SSH.

          For heavyweight TUI programs it might be better to stay with WinAPI, for the same reason that many users of Emacs prefer its GUI version (even on unixes).

          1. 3

            readline-style editing has a lot to gain by using a sane input system, too. No more ^[[3~ showing up when trying to edit something!

            But on ssh, note that Microsoft is actually handling this: they are putting a layer in the OS console driver to translate API calls to VT sequences when used through a pty. So it is still possible to use those programs over ssh. They are covering a lot of bases here - I am excited to try it (my computer refuses to take the Windows update with the new pty stuff so far, so I gotta wait a lil longer. But I can test both my terminal emulator and terminal client library with it and see how it works - and I expect it will be cool to mix and match.)

    1. 2

      recommendation to start writing (or update existing) Command-Line apps to emit VT enriched text, rather than calling Win32 Console APIs to control/format Console output: VT-enabled apps will not only benefit from rendering improvements like those described above (with more on the way), but they’ll also enjoy richer features available via VT-sequences, like 24-bit color support, and default foreground/background color support.

      Do they recommend updating input handling too? I don’t see problems with VT-style output, but input is messy compared to WinAPI. I can’t imagine Far Manager being rewritten to use VT-like input, with the quirky alt/meta key and the escape key repurposed both for sending escape codes and for the “cancel” action. (On the positive side, it would be usable through ssh and closer to being portable to other OSes).

      (At first I thought it was about Xbox; referring to the terminal as “console” is one of those strange Microsoft terms)

      1. 9

        I guess this makes sense, but the same could be said for JavaScript? When you load a script, doesn’t it affect the whole page? (Technically, the global is actually window?) And a script loaded via src is usually for more than one page, right? So should JavaScript be loaded via <link rel> tags too?

        Actually, the more I think about it the weirder link is for css. The other link types, next page, rss, lang=fr, etc. connect this page to others, but don’t change it. Browsers don’t generally load link targets. But a css link does get loaded and it affects this page. It’s not really a link at all.

        1. 7

          The semantics of <script src> are such that if you have

          <div id=a></div>
          <script src=...></script>
          <div id=b></div>

          Then the script can see ‘a’ but it cannot see ‘b’. So it’s evaluated exactly at the point it’s pulled in.

          Further, the script can actually document.write and insert more DOM/script before ‘b’.

          1. 1

            I’m not that clear on the timeline, but was this formalized before the event handlers for page ready were in place? This rationale seems like it could have made sense at the time but is now a bit of a dead metaphor.

            1. 3

              Formalized is a strong word to associate with Javascript’s history. I don’t know whether document events were present in the original implementation, but when JS was originally added to Netscape they definitely had no idea what paradigms would become dominant years later. At the time, it made perfect sense to run a script inline that would output raw HTML into the document, because that’s how everything else worked.

          2. 6

            Yeah honestly I didn’t find the response very enlightening.

            To me, it still seems like a mistake. Every time I write an HTML document (and I write by hand fairly often), I notice this inconsistency and have to copy from previous documents to maintain it.

            What else is the link tag used for? I’ve only ever used it for CSS.

            1. 7

              It also used to be used for “here’s the RSS feed for the blog you’re currently reading”. There’s a bunch of early Web 2.0 stuff that used various <link> tags in page <head>s for things like pingbacks.

              1. 6

                <link>: The External Resource Link element

                tl;dr - icons and fonts, commonly

              2. 1

                The only difference I can recall is document.write, which writes right after its <script> tag, and I remember that it was quite popular in “Web 1.0”, even for “client-side templating” on static sites.

                With the async attribute, designed especially to overcome document.write’s problems, <script> finally loses its location dependency.

              1. 3

                What’s interesting is that Google promotes PWAs but is there any PWA made by Google? Moreover, I can’t remember encountering any webpage with offline and “installation to home screen” capabilities in the whole internets.

                1. 3

                  I’ve seen pages that do the “install to home screen” thing. discourse.org is a good OSS example. I’ve never seen the works offline thing, though.

                  1. 1

                    Maps & Photos both have PWAs.

                    1. 2

                      Tried to analyze Google Maps now with Google’s own “Lighthouse” tool:

                      • Does not respond with a 200 when offline

                      • User will not be prompted to Install the Web App

                        Failures: Site does not register a service worker.

                      • Does not register a service worker

                      both on mobile and desktop

                      Google Photos has no third warning; it has a service worker, but it probably does nothing related to “PWA” functionality.

                      For YouTube, there are the same three warnings plus other, minor warnings, such as “brand colors in address bar”.

                      Google Play Music refuses to load altogether and says “open the native app or go away” if it detects that the browser is “mobile” (!), but on desktop it loads - and all Lighthouse audits fail except “Uses HTTPS”.

                      I don’t think the idea of PWAs is bad - it’s how the first iPhone and later Firefox OS were supposed to work - but Google’s notion of PWAs is complete bullshit, with “service workers”, “brand colors in address bar” and other nonsense. Even Google itself does not try to conform to it.

                    2. 1

                      I think Google Play Music and YouTube are PWAs; in Chrome, at least, you can add them to the homescreen via the Chrome menu.

                    1. 5

                      For numbers, > looks not so confusing (except for range comparisons), but for dates and times, I prefer always using <. For some reason, I get confused by the “comes after” and “comes before” notions, but if dates are arranged from left to right, it becomes clear.

                      But maybe for people who are native speakers of languages with right-to-left writing, use of < might be even more confusing.

                      1. 1

                         The only thing I find surprising is that it converts elements to strings, despite JS having a “partial order” defined for all types (at least the <, >, … operations). The other aspects are not so non-standard:

                        • In-place sorting: Python has it too (the non-mutating version is called sorted)
                        • Returning the receiver after a mutating operation: Ruby’s sort! does it too
                        • Interior mutability: what languages have a “constant variable” modifier restricting mutation of its contents? I can remember for sure only Rust. Maybe PHP too (it has a weird references system; I don’t remember how it works).
                        • String comparison: almost everywhere it’s lexicographical; who would expect “18” to be greater than “2” just because it’s longer?
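                         For what it’s worth, the same lexicographic rule is easy to check in C with strcmp, which also compares strings character by character (a tiny illustration, not tied to JS):

                         ```c
                         #include <assert.h>
                         #include <stdio.h>
                         #include <string.h>

                         int main(void)
                         {
                             /* Character-by-character comparison: "18" sorts before "2"
                              * because '1' < '2', even though 18 > 2 numerically. */
                             assert(strcmp("18", "2") < 0);
                             assert(18 > 2);
                             printf("\"18\" < \"2\" lexicographically, 18 > 2 numerically\n");
                             return 0;
                         }
                         ```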
                        1. 2

                          I like the Droid fonts that were used in older versions of Android, before they were replaced by Roboto. They feel more “warm and cozy” than Roboto, which is quite standard and dull, despite Google describing it as more “emotional”.

                          1. 2

                            As pointed out in this article, it barely differs from Helvetica Neue, among others. It certainly is quite standard.

                            1. 4

                              I don’t think that’s fair at all. Helvetica Neue is not particularly nice at small sizes, because of small apertures and terminals only being sliced at the horizontal axis, and I think Roboto does a very good job taking design cues from Helvetica while being much more useful at small sizes. I do think it’s fair to call Roboto “standard,” and even to say that its design brief is to nail standardness more squarely than just about any other font before or since.

                              I do agree that Roboto feels “cooler” (in the sense of not as warm) than Droid Sans, but I personally like that.

                              Disclosure: I’m friends with Christian Robertson, the creator of Roboto, and have been lightly involved with a few aspects of it.

                              1. 2

                                At that size, and in the comparison given in the article, there are very few differences.

                                Roboto does do a great job at small sizes.

                                I should have qualified that being “standard” isn’t necessarily a bad thing. It’s not a bad font in any sense, but it doesn’t have anything particularly outstanding about it.

                            2. 2

                              Roboto for Ice Cream Sandwich is pretty different from Roboto for Lollipop. If you can’t tell the difference, look at the numeral 7 (the ICS one has an exaggerated curve in its diagonal stroke) and the capital R (the diagonal, again, is different). The ICS one is a multi-headed frankenfont. The Lollipop one is Generic Sans #2844.

                              1. 1

                                Droid Sans Mono is by far one of the best terminal fonts I’ve ever used

                              1. 3

                                Reasons for using ORMs are not especially compelling

                                There are many things that attract folks to using ORMs; here’s a short list of the most common:

                                Familiarity with object oriented concepts.

                                ORMs are not especially object-oriented; usually they treat classes like data structs. There are almost no OOP concepts in them - objects/classes are used just because that’s how data structures are modeled in most of today’s languages. OO is too shallow in ORMs. See “anemic domain model”.

                                No need to learn SQL.

                                No, you still have to learn SQL, because all ORMs are leaky abstractions.

                                Your data is accessible as class objects, as opposed to tuples that you later coerce into objects.

                                This is a real reason to use ORMs, in addition to:

                                • Ability to build trees (rarely, arbitrary graphs) from tables (i.e. to preload User.posts for each returned User)
                                • Ability to build SQL composably, as a workaround for the “5th generation programming language” traits of SQL (no, don’t try to build queries by concatenating strings directly)
                                • Ability to propagate changes in in-memory data structures to the database (or to describe changes separately if your data structures can’t be changed). Sometimes this also includes validation, which is a valuable feature, but it doesn’t have to be part of an ORM.

                                The interface is abstracted above the database engine, so if for some reason you need to switch from SQLite to MySQL to PostgreSQL, your application will keep functioning without code changes (barring some corner cases of course).

                                It’s rarely feasible, because of the high leakiness of the ORM abstraction.

                                There are migration tools built on these systems that help update the structure of your database over time, by applying or reverting schema changes.

                                Usually migrations are part of the ORM package for convenience, but conceptually, migrations are unrelated to ORM.

                                1. 2

                                  The described features like sum types are cool and useful, but once you get a language with static types, it all goes down the rabbit hole: eventually, to express your domain model, you’re going to need ad-hoc polymorphism, higher-kinded types, GADTs, existential types, etc., or to think about how to overcome the lack of some of these features in the language.

                                  It would be interesting to read about the experience of building “real-world” applications with Reason/OCaml and how it feels to use a language without ad-hoc polymorphism and some other features. Coming from plain JS, everyone is accustomed to using overridden methods and plain duck typing, which is a form of ad-hoc polymorphism, but Reason has no such feature. It would be interesting to hear how people deal with that.

                                  1. 8

                                    OCaml has (multiple) inheritance, you can pass records of closures if you want, and you can parameterize modules over other modules. There are a lot of options here already. Reason doesn’t really advertise the class based features so much but it’s still supported.

                                    1. 5

                                      …but once you get a language with static types, it all goes down the rabbit hole: eventually, to express your domain model, you’re going to need ad-hoc polymorphism, higher-kinded types, GADTs, existential types, etc., or to think about how to overcome the lack of some of these features in the language.

                                      One isn’t required to express the whole domain model with types, just because the language has them. There is a middle way.

                                      1. 2

                                        In a language without ad-hoc polymorphism, you’d write a map function for each type instead of a Functor instance for each type, etc. It would be mostly as simple as that.

                                      1. 4

                                        This is currently limited to “big data” databases: this class of products is aimed at large companies, so it’s largely irrelevant. They are trying to extract more money from those large companies, and there’s basically no “end user” to whom freedoms to use, modify, etc. would be given. Yes, these companies’ staff are people too, and users of these cloud things can indirectly benefit from freedoms given to the companies using these databases, but at that level all these bigdata-using cloud/saas/paas/adtech companies are nowadays perceived as “evil” and abusive to people.

                                        If PostgreSQL or GIMP or Blender changed its license to some ShadyPL with “no commercial use”, that would be a real reason for concern, but all these MongoDBs changing licenses doesn’t mean an immediate GNUapocalypse.

                                          1. 6

                                            There are two different worlds here: prototyping and production. In production machine learning systems today you can easily find C++, Java and even Scala, Haskell and OCaml, but not in prototyping: C++ and Java are too low-level and lack a REPL, and all of these languages lack even decent libs for visualization.

                                            For prototyping, you need a REPL, dynamic code loading and similar things. Only advanced static languages like Haskell and Scala have these features (and ghci still has lots of problems). Maybe if such languages had been relatively mainstream when Matlab and R appeared, Matlab and R wouldn’t have had their own languages but would have used some existing static language (there weren’t even decent popular dynamic languages then, except maybe Perl). A dynamic language is the more obvious choice here (at least for writing your own language, as the authors of Matlab had no resources for PLT research): a dynamic language is much simpler to implement.

                                            Then, for those who can’t stand the horrible languages of Matlab and R, the same tools were developed, but now for an existing language, Python. NumPy is still very Matlab-y, and you can’t even just easily map over data points for feature engineering.

                                            Now we have Python, and it’s even usable in production, so why might we need types? Maybe for building large systems at a high level types would be useful, but for algorithms - not so much. You can’t even encode matrix sizes in most practical languages, except maybe with some shady type hackery, which is not very promising.

                                            1. 8

                                              Filled out the survey. I spent a few months trying to get Haskell to work for me, but I found it a frustrating experience. I got the hang of functional programming fairly quickly but found the Haskell libraries very hard to work with. They very rarely give examples of how to do the basic stuff, and require you to read 10,000 words before you can understand how to use the thing. I wanted to do some ultra-basic XML parsing, which I do in Ruby with nokogiri all the time, but with the Haskell libraries I looked at it was just impossible to quickly work out how to do anything. And whenever I ask other Haskell devs a question, they just tell me it’s easy and to look at the types.

                                              1. 3

                                                There are often way too few examples, yeah :( And type sigs are definitely not the best way to learn. That said, once you get it up and running, parsing XML in Haskell is quite nice (we use xml-conduit for this at work).

                                                Someone actually took it upon themselves to write better docs for containers at https://haskell-containers.readthedocs.io/en/latest/ and shared their template for ReadTheDocs: https://github.com/m-renaud/haskell-rtd-template in case anyone else feels inspired :)

                                                1. 3

                                                  I agree. The language is beautiful, but we need to put more work into making libraries easier to understand and use. What makes it even worse for newbies is that as an experienced developer, I can understand when a library is using a familiar pattern for configuration or state management, but you have to figure out that pattern itself at the same time.

                                                  You shouldn’t have to piece together the types or, worse, read the code, to understand how a library works. I dislike the “I learned it this way, so you should too” attitude I often see. We can do better.

                                                  1. 5

                                                    I agree too. Hackage suffers from the same disease as npm: it’s a garbage heap that contains some buried gems. The packages with descriptive names are rarely the good ones. Abandoned academic experiments rub elbows with well engineered, production-ready modules. Contrast with Python’s standard library and major projects like Numpy: a little curation could go a long way.

                                                  2. 3

                                                    I think the challenge is that unless the documentation includes an example - or even exists at all - it can be hard to know how to interact with many libraries. While reading the types is often how you figure it out, I wish more libraries pointed me towards the main functions I should be working with.

                                                    1. 2

                                                      It’s a skill to look at the types, but it is how I do Haskell development. I’d love to teach better ways to exercise this skill.

                                                      1. 6

                                                        I started to get the hang of it, but it really felt like the language was used entirely for academic purposes rather than for actually getting things done, and every time I wanted to do something new, people would point me to a huge PDF to do something simple that took me 3 minutes to work out in Ruby.

                                                        1. 2

                                                          I use Haskell everywhere for getting things done. Haskell allows a massive amount of code reuse and people write up massive documents (e.g. Monad tutorials) about the concepts behind that reuse.

                                                          I use the types and ignore most other publications.

                                                      2. 1

                                                        Ruby and Haskell are on opposite ends of the documentation spectrum.

                                                        Ruby libs usually have a great guide but very poor API docs, so if you want to do something outside the examples in the guide, you have to look at the source. Methods are usually undocumented too, and it’s hard to figure out what’s available and where to look, due to heavy use of include.

                                                        Haskell libs have descriptions of each function and type, and thanks to the types you can be sure what a function takes and what it returns. Haddock renders source docs into nice-looking pages. However, there are usually no guides, getting-started docs or high-level overviews (or the guides are in the form of academic papers).

                                                        I wish I had the best of both worlds in both languages.

                                                        When I started to learn Haskell, the first thing I wanted to do for my project was to parse XML too. I used hxt and that was really hard: it’s not a standard DOM library (though it probably has great stream-processing capabilities), and it’s based on arrows, which is not the easiest concept when you are writing your first Haskell code. At least hxt has a decent design; I remember that the XML libs from Python’s standard library are not much easier to use. Nokogiri is probably the best XML lib ever if you don’t use gigabyte-sized XML files.

                                                      1. 2

                                                        supported by the Mozilla WebRender rendering engine

                                                        So… electron.rs? ☹️

                                                        But, no javascript? 😀

                                                        I’m so conflicted.

                                                        1. 12

                                                          So… electron.rs? ☹️

                                                          Doesn’t seem so: https://hacks.mozilla.org/2017/10/the-whole-web-at-maximum-fps-how-webrender-gets-rid-of-jank/ & https://github.com/servo/webrender & https://github.com/servo/webrender/wiki

                                                          As far as I understand, WebRender is nowhere close to being an Electron alternative. It seems to be an efficient and modern rendering engine for GUIs and provides nothing related to JavaScript/Node/Web APIs.

                                                          So it looks like you can be free of conflict and enjoy this interesting API :) . I’m personally keeping an eye on it for my next pet project and find it refreshing to have a UI API that looks both usable and simple.

                                                          1. 4

                                                            If you’re a fan of alternative, Rust-native GUIs, you might want to have a look at xi-win-ui (in the process of being renamed to “druid”). It’s currently Windows-only, because it uses Direct2D to draw, but I have plans to take it cross-platform and use it for both xi and my music synthesizer project.

                                                            1. 1

                                                              Please, give us some screenshots! ;)

                                                              1. 1

                                                                Soon. I haven’t been putting any attention into visual polish so far because I’ve been focusing on the bones of the framework (I just spent the day making dynamic mutation of the widget graph work). But I know that screenshots are that important first impression.

                                                                1. 1

                                                                  Please do submit a top-level post on lobste.rs once you add the screenshots :)

                                                                  1. 1

                                                                    Will do. Might not be super soon, there’s more polishing I want to do.

                                                                    1. 1

                                                                      Thanks! :) And sure, take your time :)

                                                          2. 6

                                                            Compared with the Chromium stack, WebRender is similar to Skia, and this is a GUI toolkit on top of it, instead of on top of a whole browser. BTW, there’s an example of an app with a whole (non-native) UI on top of Skia: Aseprite.

                                                            (AFAIK, Skia is something à la Windows GDI, immediate-mode, while WebRender is a scene-graph-style lib, more “retained-mode”)

                                                            And it seems that, although there are no components from a real browser, Azul has a DOM and CSS. So Azul is something in the spirit of NeWS and Display PostScript, but more web-ish instead of printer-ish?

                                                            1. 4

                                                              There is also discussion of making an XML format for specifying the DOM, like HTML.

                                                              1. 1

                                                                It’s using Mozilla’s WebRender, so how about XUL?

                                                                1. 1

                                                                  Considering that Mozilla is actively trying to get rid of XUL, doing anything new with it seems like a bad idea.

                                                                  But also, if I understand what XUL is correctly, it’s mostly a defined list of widgets over a generic XML interface; and if I understand that proposal properly, it’s to make the list of widgets completely user-controllable (though there will no doubt be some default ones, including HTML-like ones).

                                                            2. 1

                                                              WebRender is basically a GPU-powered rectangle compositor, with support for the kinds of settings / filters you can put on HTML elements. It’s nowhere near the bloated monstrosity that is electron.

                                                            1. 11

                                                              As much as I like to dislike systemd, this article is just wrong.

                                                              In the late 1990s and early 2000s, we learned that parsing input is a problem. The traditional ad hoc approach you were taught in school is wrong. It’s wrong from an abstract theoretical point of view. It’s wrong from the practical point of view, error prone and leading to spaghetti code.

                                                              This is just simply untrue. Parser combinators and parser generators are good for some things, and not good for some other things. There’s a reason that essentially no production programming language parser uses a parser generator: their error handling is bad for humans.

                                                              In a situation where you’re parsing network input or other machine-generated input, having only a simple ‘this is completely valid vs. this is invalid in some unspecified way’ distinction is probably fine. Invalid input? Disregard it. Valid input? Process it. But ‘parsing input’ is much broader than that, and ‘handwritten’ parsing is completely fine. It’s mandatory if you’re parsing user input IMO.

                                                              The first thing you need to unlearn is byte-swapping.

                                                              No, you don’t. Byte-swapping is fine. There’s nothing wrong with ntohl.

                                                              Among the errors here is casting an internal structure over external data. From an abstract theory point of view, this is wrong. Internal structures are undefined. Just because you can sort of know the definition in C/C++ doesn’t change the fact that they are still undefined.

                                                              No, they aren’t undefined. They’re quite well-defined. It’s completely reasonable to write that code, just as it is completely reasonable to memcpy stuff from buffers into internal data structures. You need to be aware of what is valid and what is not, what is well-defined behaviour and what is not. But just as the relatively common misconception that ‘pretty much anything that you can think of in terms of bytes is fine’ is not true, ‘it’s all undefined’ is not true either.
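A minimal sketch of the well-defined idiom being described (the helper name is mine, not from the thread):

```c
#include <stdint.h>
#include <string.h>
#include <arpa/inet.h>

/* Hypothetical helper: pull a big-endian u32 out of a receive buffer.
   memcpy into the integer object avoids alignment and strict-aliasing
   trouble; ntohl fixes the byte order afterwards. */
uint32_t read_net_u32(const unsigned char *buf)
{
    uint32_t v;
    memcpy(&v, buf, sizeof v);  /* bytes -> integer object, well-defined */
    return ntohl(v);            /* network (big-endian) -> host order   */
}
```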

                                                              For example, there is no conditional macro here that does one operation for a little-endian CPU, and another operation for a big-endian CPU – it does the same thing for both CPUs.

                                                              Yes there absolutely is a conditional macro, it’s just in the compiler instead. On a big endian target the compiler does one thing, on a little endian target the compiler does another.
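For instance, a sketch of the kind of endian-free source the article praises (the helper name is an assumption, not from the article):

```c
#include <stdint.h>

/* Portable big-endian read with no #ifdef in sight: on a big-endian
   target the compiler can emit a plain load, on a little-endian target
   a load plus byte-swap; the source code is identical either way. */
uint32_t read_be32(const unsigned char *p)
{
    return ((uint32_t)p[0] << 24) | ((uint32_t)p[1] << 16)
         | ((uint32_t)p[2] <<  8) |  (uint32_t)p[3];
}
```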

                                                              The other thing that you shouldn’t do, even though C/C++ allows it, is pointer arithmetic. Again, it’s one of those epiphany things C programmers remember from their early days. It’s something they just couldn’t grasp until one day they did, and then they fell in love with it. Except it’s bad. The reason you struggled to grok it is because it’s stupid and you shouldn’t be using it. No other language has it, because it’s bad. … I mean, back in the day, it was a useful performance optimization.

                                                              This is also just simply wrong. It’s not used because it’s a performance optimisation, and it never has been. It’s used because it’s standard C style to use pointer arithmetic, it leads to more easily understandable code (for experienced C programmers) and it’s much more concise than wading through &p[0] crap that just gets converted into pointer arithmetic anyway.

                                                              In my code, you see a lot of constructs where it’s buf, offset, and length. The buf variable points to the start of the buffer and is never incremented. The length variable is the max length of the buffer and likewise never changes. It’s the offset variable that is incremented throughout.

                                                              That’s fine if you prefer doing it that way, but so is keeping buf and length plus a pointer that is your buf plus your offset. They’re equivalent, but the latter is much easier for most C programmers to read and understand.
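A minimal sketch of the two equivalent styles (both helper names are hypothetical):

```c
#include <stdint.h>
#include <stddef.h>

/* Style 1: buf stays put; a separate offset variable advances. */
uint16_t read_u16_at(const unsigned char *buf, size_t *offset)
{
    uint16_t v = (uint16_t)((buf[*offset] << 8) | buf[*offset + 1]);
    *offset += 2;
    return v;
}

/* Style 2: the pointer itself is "buf plus offset" and advances. */
uint16_t read_u16_ptr(const unsigned char **p)
{
    uint16_t v = (uint16_t)(((*p)[0] << 8) | (*p)[1]);
    *p += 2;
    return v;
}
```

Both read the same bytes and leave the cursor in the same place; the choice is purely stylistic.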

                                                              1. 5

                                                                The traditional ad hoc approach you were taught in school is wrong. It’s wrong from an abstract theoretical point of view.

                                                                Oooh - an abstract theoretical point of view. Sounds like math without actually using any math!

                                                                1. 4

                                                                  The alternative to shotgun parsers is not necessarily parser combinators or parser generators. Shotgun parsers are parsers that intermingle parsing code and processing logic, rather than doing all the parsing up-front (http://langsec.org/brucon/ShotgunParsersBruCON.pdf). You can write a hand-written parser that does all the parsing work in one step.

                                                                  (The blog post doesn’t actually use the word “shotgun”, but I think that’s what’s meant).
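As a sketch, assuming a made-up wire format (1-byte type, 2-byte big-endian length, payload), a hand-written parser can still recognize the whole input up front, before any processing logic runs:

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

struct record {
    uint8_t              type;
    uint16_t             len;
    const unsigned char *payload;
};

/* Validate the complete input in one step; reject anything malformed
   so the processing code never sees partially parsed data. */
bool parse_record(const unsigned char *buf, size_t n, struct record *out)
{
    if (n < 3) return false;                    /* header incomplete */
    uint16_t len = (uint16_t)((buf[1] << 8) | buf[2]);
    if ((size_t)len + 3 != n) return false;     /* length must match */
    out->type    = buf[0];
    out->len     = len;
    out->payload = buf + 3;
    return true;
}
```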

                                                                  1. 5

                                                                    No, you don’t. Byte-swapping is fine. There’s nothing wrong with ntohl.

                                                                    It’s still a hacky thing. Just look at its signature:

                                                                    uint32_t htonl(uint32_t hostlong);

                                                                    It converts a “normal” integer to a “network” integer, both having the same type. It’s somewhat weird. It’s not a function from integer to 4 bytes, it’s a function from integer to integer. Does C define two meanings for integers, then: a number, and a blob of bytes? I’m not even completely sure that treating integers as blobs is okay by the measure of C’s specification.

                                                                    Now, what can we do with networky integers? Can we put them into sockets/files right away? That involves an implicit conversion from integer to 4 bytes. What endianness will be used during that conversion? And why is casting structs over blobs of memory and then applying all this byte-swapping hackery still the idiomatic way to parse protocols? It’s all very weird and frightening, especially when seen from the perspective of a “non-system” developer using “scripting” languages.

                                                                    Why do we still not have better systems programming tools in 2018? It’s cruel to blame humans for making bugs while using such abstractions.

                                                                    1. 6

                                                                      htonl isn’t hacky for being a function from u32 to u32 any more than the (+1) function in Haskell is hacky for being a function from Int to Int. It’s a function, defined on 32-bit integers. Everything in C is a blob of bytes. Integers are blobs of bytes. Literally everything is (except functions, I guess). There is no abstraction over this. ‘Integer’ and ‘bytes’ are the same thing in C and in the real world on actual processors.
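A sketch of what an htonl-like function does (this is not the libc implementation, just an illustration): the result is still a uint32_t, it just holds the value’s four bytes in big-endian order.

```c
#include <stdint.h>
#include <string.h>

/* Illustrative only: build the big-endian byte sequence explicitly,
   then read it back as a host integer. On a big-endian host this is
   the identity; on a little-endian host it's a byte swap. */
uint32_t htonl_sketch(uint32_t x)
{
    unsigned char b[4] = {
        (unsigned char)(x >> 24), (unsigned char)(x >> 16),
        (unsigned char)(x >>  8), (unsigned char)(x)
    };
    uint32_t r;
    memcpy(&r, b, sizeof r);
    return r;
}
```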

                                                                      And why is casting structs over blobs of memory and then applying all this byte-swapping hackery still the idiomatic way to parse protocols? It’s all very weird and frightening, especially when seen from the perspective of a “non-system” developer using “scripting” languages.

                                                                      It’s not hackery. It’s the idiomatic way to parse protocols because it’s actually what is happening. You are taking bytes off a network and interpreting them as something. They aren’t integers on the network, they’re bytes. They aren’t integers in memory either, they’re just bytes. You interpret them as integers, but they’re just bytes.

                                                                      C’s design is not built around being friendly to newbie programmers that have never written anything but Python. That’s not the goal, and it shouldn’t be the goal.

                                                                      Why do we still not have better systems programming tools in 2018? It’s cruel to blame humans for making bugs while using such abstractions.

                                                                      Because nobody has ever actually explained what ‘better system programming tools’ would actually be. Rust is not an improvement in this space, and that’s all I ever actually see proposed. Rust and pie-in-the-sky ‘formally verified languages with linear types’.

                                                                      1. 4

                                                                        Ideally, you do the conversion at the border: when you read the packet off the network, or just prior to sending the packet to the network. Sadly, this isn’t always the case, and most programmers will do the conversion “when needed” because of speed or some crap like that (DNS handling is notorious for this). I wrote a library to encode and decode DNS packets because of this; as a user, you hand dns_decode() the blob of data from the network and get back a fully decoded structure you can use.

                                                                        For the conversion itself, there are a few ways of doing it. I’ll use DNS as an example (because I know it). The raw structure is:

                                                                        struct dns_header
                                                                        {
                                                                          uint16_t id;
                                                                          uint8_t  opcode;
                                                                          uint8_t  rcode;
                                                                          uint16_t qdcount;
                                                                          uint16_t ancount;
                                                                          uint16_t nscount;
                                                                          uint16_t arcount;
                                                                        };

                                                                        struct dns_packet
                                                                        {
                                                                          struct dns_header hdr;
                                                                          unsigned char     rest[1460];
                                                                        };

                                                                        I’m only concerning myself with the header portion, which is straightforward enough to show differences in parsing the data. The value of 1460 for the rest of the packet was chosen because of the MTU of Ethernet (1500) minus the overhead of IP, UDP and the DNS header. With that out of the way, one way to read this data is:

                                                                        struct dns_packet  packet;
                                                                        struct sockaddr_in remote;
                                                                        socklen_t          remsize;
                                                                        ssize_t            bytes;
                                                                        remsize = sizeof(remote);
                                                                        bytes   = recvfrom(sock,&packet,sizeof(packet),0,(struct sockaddr *)&remote,&remsize);
                                                                        if (bytes < (ssize_t)sizeof(struct dns_header)) handle_error();
                                                                        /* convert data from network byte order (big endian) to host byte order */
                                                                        /* on systems that are already big endian, these become no-ops */
                                                                        packet.hdr.id      = ntohs(packet.hdr.id);
                                                                        packet.hdr.qdcount = ntohs(packet.hdr.qdcount);
                                                                        packet.hdr.ancount = ntohs(packet.hdr.ancount);
                                                                        packet.hdr.nscount = ntohs(packet.hdr.nscount);
                                                                        packet.hdr.arcount = ntohs(packet.hdr.arcount);
                                                                        /* work on data */
                                                                        Here we see how ntohs() (and similar functions) work. It converts a 16-bit quantity from network order (big endian) to host order. As noted, on systems that are already big endian, these do nothing and the compiler can optimize them out entirely. On little endian systems (like Intel), this translates to a few instructions: one to load the data, one to byte-swap it, and one to store it back. There aren’t issues with alignment, since the 16-bit quantities are aligned on even addresses (and C guarantees proper alignment of structures). So this is very straightforward code.

                                                                        The other option is actual byte manipulations:

                                                                        struct dns_header   hdr;
                                                                        uint8_t             packet[1472];
                                                                        uint8_t            *ptr;
                                                                        struct sockaddr_in  remote;
                                                                        socklen_t           remsize;
                                                                        ssize_t             bytes;
                                                                        remsize = sizeof(remote);
                                                                        bytes   = recvfrom(sock,packet,sizeof(packet),0,(struct sockaddr *)&remote,&remsize);
                                                                        if (bytes < (ssize_t)sizeof(struct dns_header)) handle_error();
                                                                        ptr = packet;
                                                                        hdr.id      = (packet[ 0] << 8) | packet[ 1];
                                                                        hdr.opcode  =  packet[ 2];
                                                                        hdr.rcode   =  packet[ 3];
                                                                        hdr.qdcount = (packet[ 4] << 8) | packet[ 5];
                                                                        hdr.ancount = (packet[ 6] << 8) | packet[ 7];
                                                                        hdr.nscount = (packet[ 8] << 8) | packet[ 9];
                                                                        hdr.arcount = (packet[10] << 8) | packet[11];
                                                                        ptr += 12;

                                                                        There’s copying going on, and furthermore, you (the programmer) have to track individual offsets manually; I’d rather leave that up to the compiler. You could also do:

                                                                        /* split the increments: (*ptr++ << 8) | *ptr++ in one expression is undefined in C */
                                                                        hdr.id      = *ptr++ << 8; hdr.id      |= *ptr++;
                                                                        hdr.opcode  = *ptr++;
                                                                        hdr.rcode   = *ptr++;
                                                                        hdr.qdcount = *ptr++ << 8; hdr.qdcount |= *ptr++;
                                                                        hdr.ancount = *ptr++ << 8; hdr.ancount |= *ptr++;
                                                                        hdr.nscount = *ptr++ << 8; hdr.nscount |= *ptr++;
                                                                        hdr.arcount = *ptr++ << 8; hdr.arcount |= *ptr++;

                                                                        No tracking of offsets required, but it’s a lot of pointer manipulation that scares programmers.

                                                                        And that’s pretty much the ways you parse networking data. As I said, ideally, this is handled right at the border, preferably using a library to handle the details, and said library is easy to use.
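A sketch of what “handle it at the border” can look like, following the simplified header layout above (the helper names are mine, and real DNS packs opcode/rcode into a flags word, so treat this purely as illustration):

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

struct dns_header_host {          /* everything already in host order */
    uint16_t id, qdcount, ancount, nscount, arcount;
    uint8_t  opcode, rcode;
};

static uint16_t be16(const unsigned char *p)
{
    return (uint16_t)((p[0] << 8) | p[1]);
}

/* Decode the 12-byte wire header in one call at the border, so no
   code downstream ever touches network byte order again. */
bool dns_header_decode(const unsigned char *pkt, size_t n,
                       struct dns_header_host *out)
{
    if (n < 12) return false;     /* header is 12 bytes on the wire */
    out->id      = be16(pkt + 0);
    out->opcode  = pkt[2];
    out->rcode   = pkt[3];
    out->qdcount = be16(pkt + 4);
    out->ancount = be16(pkt + 6);
    out->nscount = be16(pkt + 8);
    out->arcount = be16(pkt + 10);
    return true;
}
```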

                                                                        1. 2

                                                                          It’s somewhat weird. It’s not a function from integer to 4 bytes, it’s a function from integer to integer.

                                                                          No, it’s a function from 4-byte-wide integers to 4-byte-wide integers. That’s right there in the prototype: “uint32_t”. So there you have it, your final representation of the data: 4 bytes, with the proper bits set.

                                                                          And why is casting structs over blobs of memory and then applying all this byte-swapping hackery still the idiomatic way to parse protocols? It’s all very weird and frightening, especially when seen from the perspective of a “non-system” developer using “scripting” languages.

                                                                          It will always be so. You might get better and better abstractions describing serialization, access to external buffers, and data validation, but you will always bear the cost of doing a copy. That won’t ever disappear, even with the higher abstractions of a non-systems programming language. So you will always have systems developers going back and saying “oh, by the way, why are we copying data here? I see a glaring and obvious optimization, let’s do zero-copy!”.

                                                                          Systemd has abstraction leakage left and right, due to its pervasive function, but networking will always involve an interface between systems and non-systems developers. When writing routing code in the kernel or userland, you can be certain that the structures won’t be copied for safe validation. When your host is the endpoint, the same validation code will usually be reused, meaning the zero-copy method will always exist and always be preferred.

                                                                      1. 5

                                                                        It’s basically a nice and clean API over the core of qemu, embeddable into your applications, which is a cool idea because qemu is huge, complex, and not designed to be used as a library.

                                                                        Sadly, it has no MMIO support, so it’s not yet suitable for making full-system emulators. It was designed to run pieces of code during reverse engineering and malware analysis, where emulation of external devices is not necessary. There’s a branch with support for MMIO, and it’s probably in a finished state; only the language bindings are unfinished.

                                                                        1. 1

                                                                          Is 100,000 sprites good? I understand that this is a large, complex engine, but don’t modern game engines render millions of tris at 60 fps on console hardware?

                                                                          1. 4

                                                                            It’s just a demo showing that it has sprite batching, and it’s abstracted away in RenderBundle, so you can just attach SpriteRender and Transform components and everything is handled automatically by the framework.

                                                                            Usually, simpler 2D game libraries/frameworks use immediate blitting, which draws a sprite to the screen right away with a blocking method call, which is much slower. libsdl is an example.

                                                                            1. 2

                                                                              The sprites are batched, but they are each individual game objects. Usually when meshes are rendered they are sent as a whole (so you rarely have more than, say, 10k game objects in normal games); the interesting part here is showing how the CPU bottleneck of bookkeeping for 100,000 objects is handled by Amethyst’s ECS architecture now that sprite batching has been implemented.

                                                                              1. 0

                                                                                It depends. If they move around and you have to simulate that and upload new positions to the GPU every frame then 100k isn’t bad.

                                                                                Otherwise it’s a couple orders of magnitude too slow - modern hardware can (conservatively) do one billion triangles per second with little effort.

                                                                                1. 1

                                                                                  I already answered this question below and it is not “it depends” because they are moving and having their state tracked in their components. https://github.com/cart/amethyst-bunnymark/blob/master/src/bunnymark/move_bunnies_system.rs#L32-L68

                                                                              1. 3

                                                                                Every pro-OOP discussion is always:

                                                                                • You are doing it wrong
                                                                                • Learn Object Oriented Design
                                                                                • Learn Design Patterns
                                                                                • Worship SOLID
                                                                                • Don’t overuse anything (inheritance, composition, interfaces, OOP itself)

                                                                                So is OOP really a useful abstraction if it requires studying lots of tomes with vague language and vague conclusions? What tools do you get after reading all these works? You still have to use inheritance to model sum types. You praise “tell, don’t ask”, but your code is still full of getters because you just can’t figure out how to avoid them; no one can figure it out.

                                                                                ECS is limited, arbitrary, and not abstract enough, but I can use it right away, just like, for example, React, which is likewise arbitrary and not abstract enough. But I can’t completely understand the second part of this article or the advice in it. ECS violates OOP and SOLID rules? The C++ standard library (“STL”) violates them too, so what?

                                                                                virtual void update()? What should the correct signature for a time-step update function be? Why is it an “anti-pattern”?

                                                                                OOP is computer religion, not computer science.

                                                                                1. 2

                                                                                  This is the same thing with every pro-X discussion, where X is some way to program computers. “You’re doing it wrong” and “You’re not X-ing hard enough”

                                                                                  1. 2

                                                                                    Cross out everything after the first comma.

                                                                                  2. 2

                                                                                    virtual void update()? What should the correct signature for a time-step update function be? Why is it an “anti-pattern”?

                                                                                    Because the author has worked on a bunch of game systems and found that ones which had that in them were difficult to work on because of it. They do actually talk about why, particularly in the comments on that article.

                                                                                  1. 28

                                                                                    I often enjoy accidentally clicking or touching something and then having to start all over when I press the back button.

                                                                                    1. 12

                                                                                      The infernal cousin of infinite scroll is “left or right swipe to navigate between articles,” which usually resets your scroll position when you swipe back. It makes scrolling impossible on a phone if you are holding it in any imprecisely aligned way (such as one-handed scrolling with a cup of coffee in hand). Luckily this has mostly died out, but the NYT iOS app still does this.

                                                                                      1. 5

                                                                                        And, of course, the sites that implement navigation like this are quite bloated, so one false swipe leads to a deluge of ads there and a deluge of ads back to your original place. The whole process takes 15 seconds and drains a tiny bit of the life force from your ever-dwindling data plan, like a technological vampire.

                                                                                        I’m not bitter.

                                                                                      2. 8

                                                                                        Often followed by trying to get back ‘down’ with ctrl+f, which of course never yields any result.

                                                                                        1. 4

                                                                                          It’s especially bad in mobile apps: not only is there accidental clicking (you don’t have a separate scroll wheel; you scroll by dragging the same clickable elements), but locking and unlocking the screen or switching between apps clears UI state, causing scroll position and loaded data to be lost.

                                                                                          What frightens me is that everyone accepts these patterns, despite big companies having the best UX designers and so much talk about mobile UX everywhere. It feels like a political choice: you must have a short attention span and indifference to content; you just scroll, scroll, scroll. It’s like a mix of TV and a slot machine.

                                                                                        1. 3

                                                                                          This is one of the reasons I always disliked the GUI Linux desktop experience. There are so many possible toolkits that on a regular desktop installation you’re easily using 3-5 at the same time. Getting GTK and Qt to look roughly the same is possible, but more niche ones always stick out.

                                                                                          The fact that CSS via obscure user stylesheets is the only way to theme most of these is clearly a big issue, because CSS is way too powerful and thus leads to a lot of unexpected interactions between software and themes. Maybe it’s time to agree on a unified way to define styles for desktop GUI applications that clearly defines the boundaries of what a theme can affect. This could probably even work for Electron apps.

                                                                                          1. 4

                                                                                            The situation is almost the same on Windows. Microsoft programs use ribbons, older generations of Microsoft apps use toolbars, and each generation uses different toolbar button styles and oddities. Even the menu bar differs across generations of Microsoft apps. Adobe programs use their own widgets, probably from a cross-platform toolkit of their own that targets macOS too; except Lightroom, which uses its own unique style. Chrome uses goofy “mix of material design and raygun gothic” widgets, including round buttons (inconsistent with Google’s other UIs). Various Electron apps use their own CSS-based designs. 3D modeling apps and DAWs have especially bizarre UIs. There are TUIs too, for example Far Manager, which has a complex and rich TUI. Java apps have a distinct style, and there are two varieties: Swing and JavaFX. There are even two completely different official GUI subsystems now, Win32 and WinRT (or whatever its proper name is), and “classic” and “tiled” apps.

                                                                                            So, I don’t think it’s very important to have a consistent visual style. That unification failed long ago.

                                                                                            On macOS it’s almost the same as on Linux, except that Apple makes its apps according to its own guidelines (well, except iTunes). Sometimes you have to use apps that have the menu bar embedded in the window.

                                                                                            1. 3

                                                                                              I don’t think it’s very important to have a consistent visual style

                                                                                              It’s not a hard problem, but we gave up on it because the context we operate in values stupid things like branding above actual users. What’s worse is we don’t even see it as a problem because we’ve gotten used to five different spinner styles.

                                                                                              “But then how would I use HTML/CSS to write my desktop app?” you might ask. Already, you’re asking the wrong question by starting at what you want over what your users want.

                                                                                              1. 2

                                                                                                I think marketing and “corporate identity” are rarely the reason for such differences. In the case of Chrome and iTunes on Windows, branding probably is the main driving factor. But I can’t imagine a graphics or 3D editor with standard Windows 95 widgets: they are too large and rough, so I think such programs use custom controls out of necessity. In the case of DAWs it’s a weird culture where ultra-skeuomorphism is still valued. Different applications have different needs: a data-entry form will have elements that look different from the widgets in a 3D modeling app’s property editor.

                                                                                                Web apps, despite easy customizability and the lack of a standard look, chose to unify around a Bootstrap-like style, and I don’t like it, because Bootstrap is ugly and was designed for Twitter, which has a horrible UI.

                                                                                              2. 3

                                                                                                I haven’t been on Windows in a while, so I don’t know about that, but on macOS I feel like 90% of the native apps look pretty consistent. On the other hand, there’s no custom styling apart from switching between light and dark mode.

                                                                                                I actually really dislike software that comes with its own weird toolkit, because it usually breaks workflows that work across the rest of the system. For example, I hate that in Firefox I can’t use C-n/C-p to select entries in the address bar drop-down, which works pretty much everywhere else in the system.

                                                                                                1. 2

                                                                                                  That really is horrible from a UX POV, but sometimes specialized software has to get a pass.

                                                                                                  I hate not having ctrl+pgup/pgdown change tabs in Godot or XFCE’s file manager, and I haven’t even found config options for it. Maybe they don’t exist :(

                                                                                                  I don’t see the value in Evince’s shitty menus, but I wouldn’t know how to do Blender more properly, so it’s still Blender.

                                                                                                  When I was young, and this may still be true, we’d get crap like totally custom-looking DVD-playing apps bundled with DVD drives. They all taught me “never trust software that looks like candy”, but yeah, some apps need to be given the benefit of the doubt.

                                                                                              3. 3

                                                                                                easily using 3-5 at the same time

                                                                                                In my experience, it’s all GTK3 the vast majority of the time. The only Qt5 app I use semi-regularly is MusicBrainz Picard.

                                                                                                This could probably even work for Electron apps

                                                                                                Look at the “But it’s fine as long as you follow best practices” section of the article — if a “unified way to define style” can’t work for just GTK, there’s no way everyone would agree on that across multiple toolkits, especially not web-apps-on-desktop.

                                                                                                1. 2

                                                                                                  GTK3 apps have always felt bad to use, with bizarre hamburger menus and strange feng shui all over. Maybe a unified way can’t work when you allow for such UI/UX design choices, and the world would be better off with a stricter set of rules.

                                                                                                  And easier theming would follow as a consequence.
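
                                                                                                  For what it’s worth, GTK3 already has a user-level theming hook: it loads `~/.config/gtk-3.0/gtk.css` on top of the active theme, so small global overrides are possible today. A minimal sketch (the selectors assume GTK 3.20+ CSS node names; the specific values are just illustrative):

                                                                                                  ```css
                                                                                                  /* ~/.config/gtk-3.0/gtk.css -- applied on top of the current theme */

                                                                                                  /* Shrink the tall header bars many GTK3 apps ship with */
                                                                                                  headerbar {
                                                                                                      min-height: 28px;
                                                                                                      padding: 0 4px;
                                                                                                  }

                                                                                                  /* Flatten suggested-action buttons consistently across apps */
                                                                                                  button.suggested-action {
                                                                                                      background-image: none;
                                                                                                  }
                                                                                                  ```

                                                                                                  Changes take effect the next time a GTK3 app starts, which at least gives users one place to push back against per-app styling.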