1. 1

    this topic is worth addressing and this blog post does a good job of summarizing the problem. Dealing with cancellation is one of the bigger warts in Go. The context package is a nice attempt but it’s no panacea.

    This is frustrating, and it made me wonder why something like the following interface isn’t more common: interface PreemptibleReader { Read(ctx context.Context, p []byte) (n int, err error) }

    the author leaves this a bit unanswered. There are a few reasons io.Reader looks the way it does (and doesn’t use context.Context):

    • the io package is older than the context package; io.Reader predates context.Context by years. Because of the Go compatibility guarantees, io.Reader wouldn’t be changed after the introduction of context.Context even if the Go team thought it was a good idea.
    • the io.Reader api as it exists right now essentially maps 1 to 1 to the read syscall on most operating systems. That is: provide a buffer, and read will fill that buffer with data and tell you how much data it filled in. The linux read syscall docs are strikingly similar: http://man7.org/linux/man-pages/man2/read.2.html

    It is, as the author wrote, intended to be as universal as possible, perhaps at the expense of usability.
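
    For reference, the interface itself is tiny, and the resemblance to read(2) is easy to see (the C signature in the comment is only there for comparison):

    type Reader interface {
        Read(p []byte) (n int, err error)
    }
    // compare: ssize_t read(int fd, void *buf, size_t count);
    // both take a caller-supplied buffer and report how many bytes were actually filled in.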

    The provided example has a few oddities:

    • can only read in chunks of up to 1024 bytes, so depending on your data source this implementation may result in a much larger number of syscalls than you need.
    • has to copy all of the data in userspace. That kinda defeats the purpose of slices being reference types; in the majority of situations, you want to read some data, then process that data, then read some more data, reusing the same slice buffer as you go (see the sketch below), in order to avoid continually allocating and freeing memory. The benefit of being able to read and process at the same time has to be weighed against the cost of allocating new slices and copying all of the data. You could toss the buffers in a sync.Pool, but now you have additional coordination work to do to manage the buffers between the reader and the consumer. In some cases this sort of additional work may be faster, in some cases it may be slower, depending on your data source and what sort of processing you’re doing.
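
    Here’s a minimal sketch of that conventional reuse-a-buffer loop, just for contrast with the channel-based example (r is any io.Reader; process is a stand-in for whatever you do with the data):

    func consume(r io.Reader) error {
        buf := make([]byte, 32*1024) // allocated once, reused for every Read
        for {
            n, err := r.Read(buf)
            if n > 0 {
                process(buf[:n]) // hypothetical; must finish with buf[:n] before the next Read overwrites it
            }
            if err == io.EOF {
                return nil
            }
            if err != nil {
                return err
            }
        }
    }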

    so … yes, it’s a real problem, and the provided example is a solution, but there are a lot of tradeoffs being made here. This gets back to the original question of why io.Reader is blocking by default: it’s because the alternatives involve tradeoffs that probably wouldn’t be a great fit for the standard library.

    anyway, I agree with the author that this is a challenging situation in Go.

    1. 1

      The pattern of using untyped constants that way is due to a poor coding habit. Constants should be named.

      1. 2

        in the expression fn(5) the untyped constant is 5. You don’t really name every numeric literal in your code before using it, do you?

        1. 1

          I do, at least for code that will be used in production. So does my entire team.

          1. 1

            ok so if you wanted to create a value meaning “five seconds”, you would write this:

            five := time.Duration(5)
            fiveSeconds := five * time.Second
            

            oh, no, you can’t do that, because we’ve written the lexeme 5 into our source code. Hmm, that’s “due to poor coding habit”. Hmm.

            1. 1

              Why not just do fiveSeconds := time.Second * time.Duration(5)?

              Or, alternately: secondsUntilFoo := 5 then you can use time.Second * secondsUntilFoo as needed.

              1. 1

                because in practice if you wanted a value that represented a duration, you would have it of type time.Duration, so you would have something like fooTimeout := 5 * time.Second. Now we’re back to square one. Writing fooTimeout := time.Duration(5) * time.Second is just needlessly verbose.

        2. 1

        I think there are two things here. First, how idiomatic is the language: for example, Python and Go are both idiomatic (the Pythonic way / the Go way). Second, how much are these idioms enforced. Python doesn’t enforce idioms at all (it is possible to write C-like Python easily). It seems that in this case, Go enforces the “Constants should be named” idiom. However, I believe that Go will be (or maybe already is being) hacked around in some big project (code-base) that escapes these enforced idioms, since the beauty of code (and of everything else) is in its diversity.

        1. 1

          his complaint about using an untyped constant as a time.Duration doesn’t really hold water, because if it was not this way, and it was the way he was expecting, you would never be able to do 5 * time.Second; you would have to do time.Duration(5) * time.Second. Would he ever want to do that? Hmmm….

          Normally you want to pass this function time.Second * x to timeout in x seconds. The multiplication with time.Second which is of type time.Duration could perform the type cast and make this usage type safe.

          this is very frustrating, because he’s complaining that he doesn’t like untyped constants because they’re too flexible, but his proposed fix is to … have type casts? What? You either have context.WithTimeout(ctx, 5) and 5 * time.Second or you have to do context.WithTimeout(ctx, time.Duration(5)) and time.Duration(5) * time.Second. Even beyond this, the idea that “I see a function that takes time, and I assume it takes seconds without looking into it, and it didn’t take seconds, so now I think it’s wrong” really makes me think the author is overly difficult to please.

          1. 1

            It is easily doable in most languages by overloading operators, which results in an implicit cast. Or if you can’t, simply use a method (time.Second.times(5)).

            1. 1

              there’s no operator overloading in Go, so now you’re proposing multiple language changes.

              1. 1

                Or if you can’t, simply use a method (time.Second.times(5)).

                1. 1

                  two things:

                  • an exported function would have to be time.Second.Times
                  • what is the type of the lexeme 5 containing only the ascii character 5 in that example? If you remove untyped constants, that 5 would only have its default type of int. Which means time.Second.Times would only take a value of type int. Now you’ve got a method time.Second.Times that multiplies a time.Duration with an int to produce a time.Duration, but also, you can multiply two time.Duration values, so now you have … multiplication via the * operator and a Times method. That really doesn’t strike me as being at all preferable to being able to do 5 * time.Second, which you can do right now because of untyped constants.
                  1. 1

                    an exported function would have to be time.Second.Times

                    ok sure.

                    what is the type of the lexeme 5 containing only the ascii character 5 in that example? If you remove untyped constants, that 5 would only have its default type of int.

                    Yes, that’s the whole point here.

                    Which means time.Second.Times would only take a value of type int. Now you’ve got a method time.Second.Times that multiplies a time.Duration with an int to produce a time.Duration.

                    That’s the idea.

                    but also, you can multiply two time.Duration values, so now you have … multiplication via the * operator and a Times method.

                    * operator on time.Duration doesn’t make any sense. What does 5 seconds * 10 minutes result in? Why would you even need to do that?

                    To be fair, multiplication should only be seen as repeated addition, so when you are doing time.Duration * N, this really is time.Duration + time.Duration + … N times, therefore it makes sense to keep N typed as an integer (or whatever type is backing the constant). Same thing where time.Duration / time.Duration = N; I don’t expect N to be a time.Duration. So why couldn’t the language simply type them as such? In the end, typed constants should prevent me from writing time.Second + 1 and context.WithTimeout(..., 10).

                    1. 1

                      * operator on time.Duration doesn’t make any sense.

                      that’s -literally- how it works right now, and it means you can compose time.Duration values without needing operator overloading, multiple dispatch, or a bunch of redundant methods. A time.Duration value is just an int64 in nanosecond precision. That’s it. That’s all it is. When you say 5 * time.Second, the 5 is of type time.Duration, so it’s 5 nanoseconds. 5 nanoseconds times 1 second is 5 seconds. Why? Because 1 second is just 1,000,000,000 nanoseconds. A time.Duration value literally just means “a number of nanoseconds”.
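
                      to make that concrete, a quick playground-style illustration (assumes "fmt" and "time" are imported):

                      d := 5 * time.Second               // the untyped constant 5 is converted to time.Duration
                      fmt.Println(int64(d))              // 5000000000, i.e. 5e9 nanoseconds
                      fmt.Println(int64(time.Second))    // 1000000000

                      var n int = 5
                      // _ = n * time.Second             // does not compile: mismatched types int and time.Duration
                      _ = time.Duration(n) * time.Second // with a typed int you need the explicit conversion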

                      What does 5 seconds * 10 minutes result in? Why would you even need to do that?

                      if you wrote that as a constant expression it would fail to compile: https://play.golang.org/p/syX8LYlq15E

                      if you read the blog post on constants, the Go team does explain why this stuff is the way that it is, and what the problems with the alternatives are. I guarantee you if it did not work this way, people would complain about not being able to do 5 * time.Second, and all the proposed ways of making that work involve many, many changes to the language that are fundamentally at odds with the language’s goals.

                      1. 1
                        * operator on time.Duration doesn’t make any sense.
                        

                        that’s -literally- how it works right now, and it means you can compose time.Duration values without needing operator overloading, multiple dispatch, or a bunch of redundant methods. A time.Duration value is just an int64 in nanosecond precision. That’s it. That’s all it is. When you say 5 * time.Second, the 5 is of type time.Duration, so it’s 5 nanoseconds. 5 nanoseconds times 1 second is 5 seconds. Why? Because 1 second is just 1,000,000,000 nanoseconds. A time.Duration value literally just means “a number of nanoseconds”.

                        I understand how this works and I don’t see how it is relevant to any of my points. “This is the way it works right now” doesn’t bring anything to the table.

                        What does 5 seconds * 10 minutes result in? Why would you even need to do that?
                        

                        if you wrote that as a constant expression it would fail to compile: https://play.golang.org/p/syX8LYlq15E

                        You spend a lot of time pointing out issues that are irrelevant to the discussion; I suggest you avoid doing that, as it at best annoys the other party. The point still stands if you do 1 ns * 10 ns.

          1. 2

            my main keyboard at my office is an HHKB, my keyboard at home is a KBD75 that I built from a kit that I purchased from kbdfans.com. I like that kit a lot, and I’m very happy with that keyboard. Being able to program the firmware with QMK was really fun to me. Here’s an image gallery of the KBD75 that I built: https://imgur.com/gallery/5pSva2A

              1. 5

                +1, although I recommend the Type-S variant.

                1. 2

                  Yes, I have the original at home, and I tried to bring it into the office, and my co-workers wanted to kill me. I ended up getting a type-S variant for the office and keeping my other one at home. I absolutely love this keyboard!

                2. 3

                  I have multiple keyboards, but my favorite for working is a Topre Realforce 87u which uses similar switches, while also giving arrow keys.

                  1. 2

                    My favourite variant is Drop’s Tokyo 60

                    1. 2

                      I can second this. I recently got the Tokyo60 and have been absolutely loving it.

                      1. 2

                        I want to see what you’re talking about but everything I click on there wants my email address.

                        1. 1

                          Sorry, for some reason drop requires an account to view products. I’ll link you an image instead.

                      2. 2

                        I use this keyboard for work and absolutely love it. I’m grateful CTRL is in the CAPS LOCK position.

                        1. 1

                          I’m currently typing this from a Happy Hacking Keyboard Pro 2, and I enjoy it for the most part. But I do miss having physical arrow keys.

                          1. 1

                            I used an HHKB Lite2 for years and years (over a decade, I believe): it has an inverted-T in the lower right. They are an odd rectangular shape, which I imagine is why I have never seen a mechanical keyboard with the same feature. I thought it was basically the perfect layout for a long time.

                            An under-appreciated feature is the placement of the ESC, \, ~ and BS keys. Once you’re used to it, you really don’t want to go back.

                            There are only one and a half problems with the HHKB. The first is the CTRL key. Yes, replacing Caps Lock with something useful is good, and yes that location is far better than way out on the left and right corners. But it is not ideal for touch-typing, in which one should press modifiers with the opposite hand.

                            The second half-problem is the staggered key layout. I believe that a non-staggered (‘ortholinear’) layout may be more ergonomic.

                            These days I am experimenting with the Boardwalk and XD75 layouts, but with heavy inspiration from the HHKB. I have Hyper, Super (GUI or ‘Windows’), Alt, Ctrl, Raise to the left of the space bar and Lower, Ctrl, Alt, Super & Hyper to the right. The Caps Lock location is used for Compose — since it is not a modifier, having a single version is okay.

                            For arrow keys on the Boardwalk I have Lower+EDSF (like WASD, but fingers never leave the home position). On the XD75 the arrow is in the centre.

                            I know for a fact that I do not want to go back to a full-size board with a keypad, and I know I want to stick with mechanical keyswitches. I may someday want to get into something even more ergonomic, such as a split keyboard or Ergodox.

                          2. 1

                            I have used this, as well as a Realforce 87U, for a few years each. Both are great but these days I prefer the Leopold FC660C (with Type-S switches). Specifically with the Hasu PCB Mod, which is also available for the HHKB2, one can turn the board into the custom tool of programmers’ dreams.

                            1. 1

                              Does anyone find value in missing the F keys or the arrow keys?

                              1. 4

                                the form factor being small has value, because it means the device takes up less physical space on your desk and is much more portable. It’s very easy to toss into a bag with a laptop, because it’s only going to be as wide as the laptop itself.

                                The problem with the missing arrow keys is that in order to use the arrow keys you have to use a chorded combination (with the fn key to the right of the right shift). That’s fine when you’re using the arrows on their own, and the key combos are easy to learn and remember and use. Where it falls down is when you want to use the arrow keys and press other keys simultaneously that do NOT want the fn key held down, which is a struggle for games that use arrow keys. That’s the only situation in which I’ve found it bothersome, but that may be more of an issue for me than most since I’m a game developer.

                                1. 3

                                  I personally use a fullsize + ten-key WASD v2, which is pretty good. The whole tiny keyboard thing (people are unironically making 40% size boards on /r/mechanicalkeyboards) makes very little sense to me.

                                  I miss my old huge compaq keyboard, with F13-F24 on a strip down the left side :)

                              1. 4

                                database functions and triggers and constraints are nice, but this post fails to discuss any of the tradeoffs being made.

                                • it’s not really simpler, because the logic is mixed between being in your app layer and your database layer. If you want to, for example, grep around your codebase to see where something is happening, you now have to search across two different compute environments.
                                • keeping your procedures in sync between dev/prod/staging is an added complication.
                                • you still have to figure out how to get these definitions into version control or they’re effectively undocumented.
                                • if you’re unit testing these things, you’re probably doing it from the app layer and not from SQL itself, meaning the thing you’re testing and the tests are expressed in different languages.

                                Those things are pretty navigable. The big hurdle for a lot of people is that burning DB node resources means you’re likely to have the DB become the bottleneck earlier. Scaling a database is harder than scaling a stateless HTTP layer in 90% of projects.

                                this style was a lot more common years ago, but a lot of people have been burned by it and have turned to using the database just for its indexing, durability, and replication properties, which postgres does extremely well and is very difficult to get right on your own. With the email example, it’s … not really all that tough to check that a string matches a regex in the app layer.
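
                                For what it’s worth, the app-layer version of that email check is only a few lines (the pattern here is deliberately loose and purely illustrative, not a recommendation):

                                var emailRe = regexp.MustCompile(`^[^@\s]+@[^@\s]+\.[^@\s]+$`)

                                func validEmail(s string) bool {
                                    return emailRe.MatchString(s)
                                }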

                                1. 4

                                  haha you cited my piece :)

                                  and from all I’ve seen, they are sensible packages - not leftpad-style idiocy.

                                  oh stick around and I promise you’ll find some absolutely dreadful Go packages ^_^

                                  I’ve yet to encounter a single 3rd-party uber-package that is effectively a requirement for doing any “real” work in the language

                                  after eight years of programming Go there’s basically only one package I find completely indispensable and it’s github.com/spf13/cobra. It has some peculiar design choices but it’s very comprehensive; it’s the one thing I use in every Go project that I write.

                                  I’m not a fan of its oppositions to exceptions as a normal error-handling mechanism.

                                  that comes with time. The error handling I’ve grown quite fond of after all these years. My Go programs have very well defined error paths; much more so than my code written in any other language.

                                  The lack of generics or sum types continues to be annoying. Dealing with timeouts and cancellation is still tedious, and context poisoning in the APIs is a bit frustrating. But overall I’m still happy after eight years of using the language.

                                  1. 3

                                    The main workflow we use: every feature or fix is initially worked on in its own branch that diverges directly from master.

                                    A branch is ready to merge when ci passes, commits represent isolated change sets, commit messages are descriptive and peer reviewers are satisfied.

                                    In the peer review process, adapting earlier commits in the feature branch after pushing and receiving feedback is encouraged. And thus force pushing is common practice.

                                    Then 1) rebase to master and 2) merge back to master without fast forwarding. A github bot makes sure that only one merge can happen at any time and that a branch is rebased to master one last time before it is merged. This results in a very simple, linear graph, but all commits from a single feature/fix branch stay visually grouped.

                                    EDIT: The master branch represents a big future release of the product, which is not the public version. The latest public version and legacy versions are in their own branches, and hotfixes follow the same pattern as in master, except that the fix branches derive from the version branch.

                                    Changes on master can be cherry-picked into the public versions branches if needed. Hotfixes on the public version branch are merged into master on a weekly basis as a single, squashed commit.

                                    1. 3

                                      I can highly recommend considering something like https://bors.tech for this workflow. It makes sure the merge commits are meaningful (“merges this branch, ticket here…”) and has a merge model that makes it frictionless.

                                      1. 1

                                        This is great to see in the wild. My team had to build the merge queue feature and it was a huge pain to orchestrate and keep devs happy.

                                        1. 1

                                          It’s an extraction of what Rust uses for rust-lang/rust.

                                      2. 1
                                        1) rebase to master and 2) merge back to master without fast forwarding.

                                        This results in a very simple, linear graph,

                                        If every commit has exactly one parent, what difference does it make if you’re not fast forwarding? You get, what, a … merge commit that has just one parent and no content?

                                        1. 2

                                          The merge commit can hold quite a lot of metadata - who reviewed it, where can I find the documentation for what was merged (Issue, PR number), etc. Merge commits are only junk if you don’t care about editing them.

                                          1. 2

                                            merge commit that has just one parent and no content?

                                            Exactly. This way, the information of which commits were done together within a branch is not lost. In our case the merge commit message usually holds the description of the pull request.

                                            1. 1

                                              oh, I just put that stuff in an annotated tag. The commits you’re talking about are just the commits between the two tags. Then I use git diff thing-v0.3.2 thing-v0.3.3 or git log thing-v0.3.2..thing-v0.3.3. The urls you get out of bitbucket or github then have the tag names instead of commit hashes, so the urls actually describe what’s on the page. Um, I’m pretty sure every benefit that people have mentioned works the same with annotated tags?

                                              1. 1

                                                That seems like a good approach too. The only thing you don’t get is the nice graph in a GUI git client: one line representing master, with individual offshoots getting grouped back in.

                                            2. 1

                                              Merges with two parents, but one of the parents is a descendant of the other.

                                          1. 1

                                            How is it nonblocking? Bounded queues, as always?

                                            1. 3

                                              Looks like it. Upon a quick glance (I had the same question), it seems to use a channel for each log level, and then merges them out to the destination.
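
                                              I haven’t read the package closely, so this is just a sketch of the general pattern being described (per-level bounded channels, one goroutine fanning them into the sink), not the library’s actual code:

                                              type logger struct {
                                                  info, errs chan string
                                              }

                                              func newLogger(sink io.Writer) *logger {
                                                  l := &logger{
                                                      info: make(chan string, 1024), // bounded buffers
                                                      errs: make(chan string, 1024),
                                                  }
                                                  go func() { // single goroutine merges both levels out to the destination
                                                      for {
                                                          select {
                                                          case m := <-l.info:
                                                              fmt.Fprintln(sink, "INFO:", m)
                                                          case m := <-l.errs:
                                                              fmt.Fprintln(sink, "ERROR:", m)
                                                          }
                                                      }
                                                  }()
                                                  return l
                                              }

                                              func (l *logger) Info(m string) {
                                                  select {
                                                  case l.info <- m:
                                                  default: // buffer full: drop the message rather than block the caller
                                                  }
                                              }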

                                              1. 7

                                                I know this is slightly tangential to your post, but the dependency issue in getlantern/idletiming is getting a fix: https://github.com/getlantern/mtime/pull/2

                                                FWIW, thanks for pointing it out :D

                                                1. 1

                                                  tbh this whole package might not even be necessary any more. The time.Time method Sub now gives you a monotonic difference between two time.Time values that ignores changes in system clock time. This was added in Go 1.9, it’s mentioned here in the change notes. The original source for what you’re using predates the release of Go 1.9; it cites the issue that Go 1.9 closed, and the issue history in turn cites the library you’re using as an example of why the standard library needs fixing. So … that once-useful thing is now a part of the standard library, and I’m going to wager you can probably dispense with it entirely and just use the time package as-is. Maybe there’s something subtle going on here that I’m not noticing, but it looks like you can get some free dependency-shedding.
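
                                                  i.e. something like this, with just the stdlib (doWork is a stand-in); the subtraction uses the monotonic reading, so a wall-clock jump in the middle doesn’t affect it:

                                                  start := time.Now()          // carries both wall-clock and monotonic readings since Go 1.9
                                                  doWork()                     // hypothetical
                                                  elapsed := time.Since(start) // same as time.Now().Sub(start); uses the monotonic clock
                                                  fmt.Println(elapsed)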

                                                1. 20

                                                  This post makes some valid points, but it’s complaining about things that apply to almost all major languages.

                                                  Like, the first third of the post is simply saying that platform-agnostic stuff assumes a unix-like interface, and then simulates it if it doesn’t exist. Which is… exactly what c does, and exactly what python does, and exactly what java does, and exactly what nodejs does, and exactly what tcl does. You can theoretically design a totally new synthetic interface that exposes all the features somebody might possibly want & then simulate it on all other platforms, but then you’ve alienated everybody who has ever used a different programming language. Or you can do some kind of windows-first thing & simulate windows behavior on non-windows systems, but even stuff developed at microsoft doesn’t tend to do that.

                                                  The last third is complaining that go packages can have arbitrarily large dependency trees. This is true, and it’s a problem. It’s also endemic across every language with its own package management ecosystem, & it’s hard to find packages that don’t pull in the whole universe when dependencies are automatically pulled in. (In languages where automatic dependency management is rare, dependencies tend to stand alone more often, simply because if they required too much nobody would bother to use them. In either case, it’s a matter of package maintainers having varying levels of sloppiness!)

                                                  In other words, it’s a little weird for this post to be framed as an exhortation against using go, and not a complaint about the norms of modern languages! Everything it says about go also applies to python and javascript.

                                                  1. 12

                                                    The last third is complaining that go packages can have arbitrarily large dependency trees. This is true, and it’s a problem. It’s also endemic across every language with its own package management ecosystem, & it’s hard to find packages that don’t pull in the whole universe when dependencies are automatically pulled in.

                                                    While true, I’m pretty sure the OP was making a different point. The OP was using a package that imported github.com/aristanetworks/goarista/monotime located at https://github.com/aristanetworks/goarista/tree/master/monotime, but as a consequence, it also pulled in everything else in that repo, even though monotime doesn’t explicitly depend on any of it. You can try this out yourself:

                                                    $ go mod init gobig
                                                    $ cat main.go
                                                    package main
                                                    
                                                    import (
                                                            "fmt"
                                                    
                                                            "github.com/aristanetworks/goarista/monotime"
                                                    )
                                                    
                                                    func main() {
                                                            fmt.Println(monotime.Now())
                                                    }
                                                    $ go build
                                                    $ less go.sum
                                                    

                                                    As far as I can tell, it’s just grabbing everything from the repo’s go.mod file: https://github.com/aristanetworks/goarista/blob/master/go.mod

                                                    Now, when I run go build, it doesn’t actually seem to download/build everything that’s in go.sum. So perhaps the problem isn’t as bad as it appears, but that go.sum file is still pretty scary.

                                                    (I’ve also run into annoying problems similarish to this, where it’s impossible to specify that a dependency is only used in tests. But I guess that goes back to the whole “simplicity” question…)

                                                    1. 1

                                                      That’s certainly a problem. It’s not Rob Pike’s problem (or the Go development team’s problem) that a popular third party package was sloppy about minimizing its dependencies (although the decision to have automatic dependency management tooling ship with the language makes it more likely for this to happen, as does sheer popularity). Go doesn’t seem to be unusual in this either, though I’m sure the static-build-by-default setting makes it more visible in the form of large binaries.

                                                      1. 8

                                                        I think the point here is that the Go package manager makes it very easy to be sloppy in this regard. AFAIK, other language package managers do not have this very specific type of problem.

                                                    2. 6

                                                      exactly what c does, and exactly what python does, and exactly what java does, and exactly what nodejs does, and exactly what tcl does

                                                      These languages, though, don’t market themselves as “simple” - and at least in the case of C, it’s kind of dubious to say that it “assumes a UNIX-like interface, and then simulates it if it doesn’t exist”, since at its inception there was no such thing as UNIX, and since it is standard on Windows to use file handling libraries that are written, in C or C++, for the Windows API.

                                                      Everything it says about go also applies to python and javascript.

                                                      The point of this article, at least as I understood it, is that Go’s claims of “simplicity” aren’t well-founded. Python, for example, doesn’t hold simplicity as a core value, either in interface or in implementation. Obviousness, clarity, etc, but not simplicity.

                                                      1. 10

                                                        … “simple is better than complex” is literally the third thing in pep-20 https://www.python.org/dev/peps/pep-0020/#the-zen-of-python

                                                        1. 3

                                                          Isn’t the point of the Zen of Python being written the way it is to invite discussion of the nuances and interactions and inherent contradictions between these statements?

                                                          I believe so, and this is clearly unlike the Go team’s philosophy regarding “simplicity”.

                                                          1. 2

                                                            huh, I never really read it that way. I never really found it very Zen-like, the usage of Zen in this context honestly seems … just like weirdly appropriative? They’re not exactly mysteries or parables, they’re just short statements of values.

                                                            do you read any of the other statements in the Zen of Python as contradicting “simple is better than complex”? My time programming Python (years ago I was a professional Python programmer but I don’t use Python much any more) definitely made me feel that simplicity was a stated value of the language and the ecosystem, although the follow-through on that value was … somewhat inconsistently applied.

                                                            1. 5

                                                              It’s perhaps appropriative, though not really weirdly so; there’s a practice in Zen of recitation and meditation on the meaning of collections of koan, each of which is a small, relatively simple — at first blush — statement generally about a single philosophical truth, but it’s the whole that shows the ebb and flow between each individual statement. They don’t have to be either parables or mysteries, merely a set of things that push and pull on each other, like the wind blowing the bamboo about.

                                                              So “simple is better than complex” doesn’t need contradiction, but it is balanced by “complex is better than complicated” and given more flavor by “sparse is better than dense” … in all things lean towards the left of each of the first six statements, but accept that the nature of the problem may push you towards the right… something can land at the far right on all six and still be Pythonic, but only if the problem (and not our own vanity) requires it.

                                                              So simplicity is a virtue, relative to complexity, but complexity is a virtue relative to complication, and so on.

                                                              As appropriations of Zen go it’s not all that weird, and also not that far off the mark.

                                                              1. 4

                                                                Sure, PEP 20 is full of contradictions, which I think we are meant to contemplate productively (though I don’t disagree with you that “Zen” is perhaps going a bit far.) Consider even just lines 1 and 2:

                                                                Beautiful is better than ugly. Explicit is better than implicit.

                                                                The beautiful interface is often the one that hides some of the ugliness inherent in the domain; the explicit interface is often uglier.

                                                                To illustrate the value of such contradictions, consider that “Special cases aren’t special enough to break the rules.” - except for errors, which we are told “should never pass silently.”

                                                                Errors are dealt with in Python via exceptions - literally designed and named to deal with “special [or exceptional] cases.” In other words, this contradiction leads us to understand that, in order to hold both of these principles at once, we must make new rules, which our error handling can follow. In less rigidly dogmatic language, we are exhorted simultaneously to constrain our designs by some set of rules (presumably so we can better understand them), and to make sure that our designs explicitly handle errors. Ergo, those rules must contain rules for handling errors, which Python does via exceptions.

                                                                Leading back to line 3, “Simple is better than complex.”, I think we can evaluate the principles by their fruits; Python, by the standards of the Go language, is extremely complex. As Pike stated in his original talk about the language, his definition of “simple” is “Few keywords, parsable without a symbol table.”

                                                                I tend to think about simplicity as being multi-dimensional. Go is both simple in the implementation and fairly simple to learn, but not necessarily simple to use. As perhaps a counterpoint, Rust is not simple in implementation, learning, or use. Python does focus on simplicity, but more in learning and especially use than in implementation.

                                                                1. 1

                                                                  this seems like a huge reach. read the original thread that prompted pep-20: https://groups.google.com/forum/#!topic/comp.lang.python/B_VxeTBClM0%5B126-150%5D

                                                                  there’s no evasive, mr miyagi-like mystery here, they’re just talking about rules of thumb. “Beautiful is better than ugly” is not treated like a strange, otherworldly paradox. It’s merely trying to say “hey, caring about aesthetics is ok and you can do that”. Such a claim is so obvious today that we struggle to take it at face value and assume there must be some deeper, more profound double-meaning. Read the original thread and you find no such discussion. There’s no mysticism here, just people that found Python’s attitude to be a refreshing change from the utilitarianism of Java or the tedium of C.

                                                                  I’m not arguing that Python is simple today. It’s not.

                                                                  What I’m arguing is that Python had simplicity as a design goal thirty years ago when it was created, and ten years later when the principles that appear in pep-20 were written. This statement:

                                                                  Python, for example, doesn’t hold simplicity as a core value

                                                                  Strikes me as very untrue. I think Python does hold simplicity as a core value, but that it has lost its way and become mired in baggage over the years. The pep-20 principles were written in 1999; the first commercial multi-core processor didn’t appear until 2 years later. A lot of what makes Python complicated today has to do with the difficulty of adding concurrency to a language long after the language was invented. How many people were trying to do async i/o in the application layer in 1999? kqueue didn’t even exist until the year after these principles were written. Even relatively straightforward Python constructs like the list comprehension didn’t exist in 1999; it was first proposed in pep-0202 the next year.

                                                                  Errors are dealt with in Python via exceptions - literally designed and named to deal with “special [or exceptional] cases.”

                                                                  What’s exceptional about an exception isn’t that the events that cause exceptions are out of the ordinary; it’s that the control flow of the program is out of the ordinary inasmuch as it doesn’t flow line-by-line. When you read until the end of a file, Python throws EOFError exception. Nobody really thinks (or has ever thought) that hitting the end of a file is a special case. It would be much more irregular to never hit the end of the file. Rather, the thing that is a special case is how control flow is handled for that event.

                                                                  And of course, I’m not here to stan exceptions, I don’t like exceptions and I’m glad that a lot of people and a lot of ecosystems have moved on from them. Exceptions are among the biggest mistakes of language design that has cursed programming, second only to inheritance (or maybe null values).

                                                                  Python, by the standards of the Go language, is extremely complex.

                                                                  um, sure, but that’s not really a fair comparison, inasmuch as you’re comparing Python 3 after 30 years of development and Go 1 with about a decade’s worth of development. Python predates Go by at least 17 years. The community has learned a lot in that time, hardware and software have changed a lot in that time, and Go itself has the benefit of being able to learn from Python.

                                                                  I tend to think about simplicity as being multi-dimensional.

                                                                  ah, my favorite definition here comes from Rich Hickey’s talk Simple Made Easy.

                                                                  1. 5

                                                                    I think most of what you’ve mentioned here is a worthy counterpoint to my comment; I don’t want to argue unduly. However, I would like to point out that this particular issue seems to me to be a microcosm of some larger ideas in the software industry. For example:

                                                                    What’s exceptional about an exception isn’t that the events that cause exceptions are out of the ordinary; it’s that the control flow of the program is out of the ordinary inasmuch as it doesn’t flow line-by-line.

                                                                     There are definitely a lot of people who think about exceptions this way; there are also a lot of people who don’t, including Bob Martin (Clean Code, p. 109). Both my college CS curriculum and the training I underwent at my first internship talked about exceptions in the way I conceptualize them here. So, I think not only do we disagree about this, but many people do!

                                                                    I think it’s a similar situation with the discussion of what “simple” means; Hickey’s definition is good, but misses out on some nuance of what we might want to call “simple”, or what some people do call “simple”.

                                                                    um, sure, but that’s not really a fair comparison, inasmuch as you’re comparing Python 3 after 30 years of development and Go 1 with about a decade’s worth of development. Python predates Go by at least 17 years. The community has learned a lot in that time, hardware and software have changed a lot in that time, and Go itself has the benefit of being able to learn from Python.

                                                                    I do agree with this - obviously, Python has evolved more than Go has. But, at the same time, Go’s team is explicitly trying to avoid adding features to the language, because their idea of simplicity lies within the implementation more than the usage of the language.

                                                                    In any case, I’m not a big Go fan and I absolutely love Python (and even moreso Rust, which is much more complex!). All I’m saying is that “simple” is a hugely overloaded word and we have to be careful about making sure we understand how others are using it.

                                                                    1. 1

                                                                      you’re comparing Python 3 after 30 years of development and Go 1 with about a decade’s worth of development. Python predates Go by at least 17 years

                                                                      It’s also totally reasonable to claim that Go is tightly integrated with the lineage of C (sharing some of its designers) & some of the other languages Pike worked on/with at Bell/Lucent in the meantime (like Alef, Limbo, and newsqueak), all of which were attempts to simplify C while adding pipelining to coroutines as a first class feature of the language. In other words, you can say that rather than being a new language, Go is really the current point in a continuous line of development of C (in parallel with the current of standard C) in the same way that python3 is part of the continuous line of python development, & that it therefore should have benefit from the past 50 years of experience in language design. Certainly, I think some of the key people involved in Go consider it that way (from comments like “we wanted to fix the mistakes we made in the design of C” & from the strong similarities between Go and Limbo).

                                                            2. 1

                                                              Yes it is mentioned as one of the guiding principles, but simplicity is not the overriding principle. The difference this decision can make is aptly summed in this (rather well known) essay.

                                                            3. 5

                                                              at its inception there was no such thing as UNIX

                                                              Common misconception. UNIX was written in assembler, C was written on it, & then UNIX was rewritten in C.

                                                              and since it is standard on Windows to use file handling libraries that are written, in C or C++, for the Windows API.

                                                              You may be right, though I’ve never seen this done (and the file & stream handling facilities that are part of standard c – the ones you’d use if you were writing cross-platform code – are very UNIX-ish).

                                                              The point of this article, at least as I understood it, is that Go’s claims of “simplicity” aren’t well-founded.

                                                              He certainly mentions this in the opening paragraphs, but it didn’t seem like the focus of the rest. Certainly, in terms of language / stdlib implementation, the ‘simplest’ thing to do when running into a situation like file access where UNIX has a very rich interface and popular non-UNIX platforms have much less rich interfaces is to adopt the UNIX interface & use existing simulation facilities, especially if you come from a UNIX + C background (as the designers and authors of Go do).

                                                              Lua is marketed as simple, and exposes file i/o the same way – because it requires the least wrapper code.

                                                              Simulating UNIX (and common/conventional parts of UNIX environments) is so widespread that it’s done even in situations where it’s arguably more complicated. For instance, TK’s internal model is very close to X, and TK tries to wrap everything else to make it enough like X for all the facilities to still work, which caused some problems…

                                                              1. 4

                                                                Lua exposes the standard C I/O module and nothing more. Last month I started working out what a platform independent directory API for Lua would look like (doing this on the mailing list). I wouldn’t say it was trivial but there was some form of mapping from POSIX to Windows (and some other less used systems these days) but what you end up with is the least common denominator. For instance, you can get similar information from Windows that you get from stat() on POSIX, but not all fields are available, nor are they named the same (and in the end, I think I ended up pleasing no one).

                                                                Then I tried abstracting I/O events and well … it can’t be done. POSIX is “when can I start IO? do IO” and Windows is “do IO. When is it done?” That difference in order makes it impossible to implement a unified API for events. You can do it with a framework, but a framework assumes control of the logic—it’s not an API. Implementing cross platform APIs are hard.

                                                                Go is controlled by Google [1]. Google is a Unix shop. Is it any wonder that it’s POSIX centric? Windows is a second class citizen at Google.

                                                                [1] Yeah yeah, open source, outside developers can contribute, etc. But in the end, what Google wants, Google gets. If Google doesn’t care, then yes, outsiders can influence the direction of a feature. But if it goes against how Google does things, no go.

                                                                1. 7

                                                                  Yup. That’s sort of what I’m getting at.

                                                                  Windows is a second class citizen in go (and lua, and all these other languages) because windows is a second-class citizen in c (the lowest common denominator for FFIs, bindings, writing portable standard libs) and a second-class citizen in the kinds of places that develop new languages (university CS departments, tech companies that scale big enough to want a new language & actually push it). Least common denominator for anything is usually a subset of unix functionality that has decent simulation on other platforms somebody’s already written and open sourced, because the other way around usually doesn’t exist & nobody knows about it even if it does. Windows provides posix emulation layers of varying quality & has for 30 years, same as beos did, same as mac os did before they actually became a unix, etc.

                                                                  This probably isn’t a great thing in the long run. Unix is the best 1969 has to offer, & still beats out most of what 2020 has to offer, but arguably its popularity has prevented major improvements from happening or being adopted at scale simply because anything that totally breaks unix assumptions (rather than soft-breaking them and simply running slower with a workaround or something) will be rejected. No OS will become popular if it doesn’t have files and directories, unless it has something that can pretend to have a unix-style file & directory structure, at which point that’s all that will be used in portable code. Only the subset of permissions that correspond to unix permissions will be used. Only OSes with processes that have unique numeric IDs and can be killed will survive.

                                                                  It’s a shame, but it’s not Go’s fault – and I take advantage of the fact that every popular modern operating system is either unix or a crappy approximation of unix every time I write code (no matter the language), because if the only OS features I use are the parts of unix that any freshman CS student is aware of, I will never need to write platform-specific code. I think we all do.

                                                          1. 3

                                                            Refreshing to see an entrenched Golang user break free and come clean about the true realities that the other blubbites appear blind to.

                                                            My tldr: Go makes the programmer do the computer’s job.

                                                            Wait, we’re doing the “unpopular opinion” thing again, right?

                                                            /ducks

                                                            1. 13

                                                              the whole concept of blub is predicated on PG’s arrogance, and the whole concept of lobste.rs is to escape the arrogance of hacker news so … this is kinda the wrong audience for that comment?

                                                              1. 2

                                                                Somewhat agree, but I think the concept of “blub” is still useful (maybe there’s another name for it?), and it predates the season of arrogance as far as I can tell.

                                                            1. 14

                                                              Error handling also causes repetition. Many functions have more if err != nil { return err } boilerplate than interesting code.

                                                              Whenever I see return err, I see a missed opportunity. Every error return is a chance to add additional context to the error, stringing together the exact sequence of events leading to the error directly into the error message. Done well, you end up with a lovely semantic “stack trace” that completely identifies the situation leading to the error.

                                                              You could have logs full of ERROR: connect timed out, or you could have:

                                                              ERROR: failed to zing the bop "abcd": failed to fetch bibble diddle: failed to initialize HTTPS connection to "https://bleep-bloop.domain": timed out waiting for DNS response
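
                                                               The way you get a message like that is just by wrapping at each layer on the way back up; a rough sketch, with the function and message names invented to match the example above (using the %w verb, which needs Go 1.13+):

                                                               func zingBop(id string) error {
                                                                   if err := fetchBibbleDiddle(); err != nil {
                                                                       return fmt.Errorf("failed to zing the bop %q: %w", id, err)
                                                                   }
                                                                   return nil
                                                               }

                                                               func fetchBibbleDiddle() error {
                                                                   if err := connect("https://bleep-bloop.domain"); err != nil {
                                                                       return fmt.Errorf("failed to fetch bibble diddle: %w", err)
                                                                   }
                                                                   return nil
                                                               }

                                                               func connect(url string) error {
                                                                   // ... imagine the DNS lookup times out somewhere down here ...
                                                                   err := errors.New("timed out waiting for DNS response")
                                                                   return fmt.Errorf("failed to initialize HTTPS connection to %q: %w", url, err)
                                                               }

                                                               Each call adds its piece of context; whoever sits at the top logs the error once and gets the whole story.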

                                                              1. 7

                                                                yes, returning an error without wrapping it is, nine times out of ten, Doing It Wrong. At my company we have a package that is similar to this one that contains various tools for manipulating and handling errors: https://github.com/pkg/errors

                                                                also, after 8 years of programming Go, I strongly dislike stack traces now. Stack traces do not tell me a story of how an error happened, they give me homework of how to divine that story by reading the source code and running the program in my head. If you don’t have the source code, or if you’re running many versions of many programs, the utility of the stack trace further decreases. My Go code consistently has the best error-handling semantics of any code that I actually put into production.

                                                                1. 6

                                                                  That’s just assembling a stacktrace by hand.

                                                                  1. 3

                                                                    I’m not a go programmer, so a stupid question: How would the error handling code look then? Like this?

                                                                    return err, "timed out waiting for DNS response"
                                                                    

                                                                    Or something more complex? Would this affect the function signature of everything in the call chain?

                                                                    1. 12

                                                                      Go 1.13 added error wrapping, so you can now do this:

                                                                       return fmt.Errorf("timed out waiting for DNS response: %w", err)
                                                                      
                                                                      1. 3

                                                                        that’s been around since 2010; it didn’t actually take the Go team a decade to come up with that.

                                                                        https://github.com/golang/go/commit/558477eeb16aa81bc8bd7776c819cb98f96fc5c1

                                                                        1. 7

                                                                          The %w is what’s new in 1.13, permitting e.g. errors.Is.

                                                                          1. 4

                                                                            ah! Nice! Yeah, that’s a useful improvement. Wasn’t clear from the comment before how 1.13 changed it; I thought trousers was saying that 1.13 added fmt.Errorf. Thanks for the clarification :)

                                                                        2. 2

                                                                          In addition, depending on where this return is located, the “time out” info may already be included in err, so it could potentially even be dropped from the error message; personally, I’ve recently started to lean towards having each function add only the “extra” info it can contribute, about the context it controls and about the function itself; so I might, for example, write instead something like:

                                                                          return fmt.Errorf("waiting for DNS response: %w", err)
                                                                          

                                                                          or maybe even:

                                                                          return fmt.Errorf("retrieving DNS record %q: waiting for server response: %w", recordName, err)
                                                                          

                                                                          This is based on how errors in stdlib are composed, e.g. IIRC an error from os.Open would often look like: "open 'foobar.txt': file not found"

                                                                      2. 2

                                                                      I haven’t used the “new” errors packages (e.g., github.com/pkg/errors) in anger yet. How do they work with respect to checking if an error in the chain was of a specific kind (e.g., os.IsNotExist() or io.EOF), or converting to a specific type?

                                                                        1. 2

                                                                        errors.Is(err, os.ErrNotExist)

                                                                          There are some other helper functions so that you can quickly handle wrapped errors.
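
                                                                        For example, assuming the underlying error ultimately comes from the os package, the Go 1.13 helpers get used roughly like this:

                                                                        package main

                                                                        import (
                                                                            "errors"
                                                                            "fmt"
                                                                            "os"
                                                                        )

                                                                        func main() {
                                                                            _, err := os.Open("does-not-exist.txt")
                                                                            err = fmt.Errorf("loading settings: %w", err) // wrap it once along the way

                                                                            // errors.Is walks the wrap chain looking for a sentinel value.
                                                                            fmt.Println(errors.Is(err, os.ErrNotExist)) // true; roughly what os.IsNotExist used to tell you

                                                                            // errors.As walks the chain looking for a concrete type.
                                                                            var pathErr *os.PathError
                                                                            if errors.As(err, &pathErr) {
                                                                                fmt.Println(pathErr.Op, pathErr.Path) // open does-not-exist.txt
                                                                            }
                                                                        }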

                                                                      1. 5

                                                                        You can’t write functions with a receiver in a different package, so even though interfaces are ‘duck typed’ they can’t be implemented for upstream types making them much less useful.

                                                                          am i mistaken or does embedding the upstream type solve this, but in reverse? composition is often overlooked in go, while it is one of the best things. not being allowed to fiddle around in other packages is a good restriction, as wanting to do that is usually a symptom of other problems.
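
                                                                          i’m thinking of something like this (just a quick illustrative sketch, the names are made up):

                                                                          package main

                                                                          import (
                                                                              "fmt"
                                                                              "strings"
                                                                          )

                                                                          // an interface we would like an upstream type to satisfy
                                                                          type Shouter interface {
                                                                              Shout() string
                                                                          }

                                                                          // we can't add methods to strings.Builder (it lives in another package),
                                                                          // but we can embed it and add methods to the wrapper
                                                                          type LoudBuilder struct {
                                                                              strings.Builder // all of Builder's methods are promoted
                                                                          }

                                                                          func (b *LoudBuilder) Shout() string {
                                                                              return strings.ToUpper(b.String()) + "!"
                                                                          }

                                                                          func main() {
                                                                              var b LoudBuilder
                                                                              b.WriteString("hello") // promoted from the embedded strings.Builder
                                                                              var s Shouter = &b
                                                                              fmt.Println(s.Shout()) // HELLO!
                                                                          }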

                                                                        1. 4

                                                                          Yes, embedding types is one way to solve this in Go. Rust and Go are very different in this regard.

                                                                            In Rust, it’s common to extend a type with additional functionality using traits, so you don’t need to convert between types. In Go this isn’t possible. See this for an example. The basic AsyncRead provides low-level methods and AsyncReadExt provides utility functions. It means that if someone else implements AsyncRead for a type, they can use any of the AsyncReadExt methods on it (provided AsyncReadExt is imported). Something like that just isn’t possible in Go to the same degree; it works in Rust largely because of generics.

                                                                          1. 2

                                                                              if you extend a type, can your extension affect what the original code was doing? Part of the motivation for Go’s typing system is that it’s designed to avoid the fragile base class problem. As someone with little Rust experience, it’s not clear to me how extending types in Rust avoids a fragile base class scenario.

                                                                            1. 2

                                                                              The original code is unaffected. The alternative implementation is only available to code that uses the implemented trait and Rust doesn’t allow conflicting method names within the same scope, IIRC, even if they’re for two different traits on the same type.

                                                                              1. 1

                                                                                You can. But when you try to call such a method, if it is otherwise ambiguous, then Rust will yield a compiler error. In that case, you have to use UFCS (“universal function call syntax”) to explicitly disambiguate: https://play.rust-lang.org/?version=stable&mode=debug&edition=2018&gist=a602c67af78a73308808e9a45a51ead4

                                                                                One could argue Rust programs could suffer from the fragile base class problem via default method impls on traits. But it’s not something I’ve experienced much (if at all) in practice. Rust doesn’t have inheritance, so you don’t really wind up with complex inheritance hierarchies where this sort of complexity is difficult to manage.

                                                                                1. 1

                                                                                  I’ve seen it happen with extension libraries like itertools that want to add functionality that makes sense in the base trait. It’s always possible to avoid it by using UFCS, but at that point you already lost method chaining and might as well use a free function.

                                                                                  https://github.com/rust-lang/rust/issues/48919

                                                                              2. 2

                                                                                No, because traits are only available to use if they’re imported. So you’re not actually modifying the actual type, but extending it.

                                                                              3. 2

                                                                                  I don’t know rust, but isn’t that kind of like having io.Reader in go, and other types which take io.Reader implementations and implement other functionality on top of that? Like bufio.Reader?

                                                                                1. 1

                                                                                    Wrapping a type to implement an interface is somewhat similar. But in Rust, you do not have to write a wrapper to implement traits. E.g. to have a method that reverses the code points of a String, you can just define a trait and implement it directly for String:

                                                                                  trait ReverseCodepoints {
                                                                                    fn reverse_codepoints(&mut self);
                                                                                  }
                                                                                  
                                                                                  impl ReverseCodepoints for String {
                                                                                    fn reverse_codepoints(&mut self) {
                                                                                        // Reverse by Unicode scalar values (chars), not bytes.
                                                                                        *self = self.chars().rev().collect();
                                                                                    }
                                                                                  }
                                                                                  

                                                                                  After that you could just call somestring.reverse_codepoints() when the trait is in scope. It’s often more convenient than wrapping, because you do not have to wrap/unwrap depending on the methods you need (or write delegation methods).

                                                                                    That said, there are some limitations in that the orphan rules have to be satisfied. Very roughly, this means that the implementation should be defined in the same crate as the trait or as the type the trait is implemented for. Otherwise, two different implementations could be defined for the same type. If you cannot satisfy the orphan rules (e.g. because the type and the trait both come from other crates), you do need a wrapper type.

                                                                                  1. 2

                                                                                    This seems dangerous, since now users of the original type may not realize it has suddenly grown new talents.

                                                                                    1. 2

                                                                                      It actually doesn’t, because the function is not associated with the type. It’s associated with the pair of (type, trait).

                                                                                      You have to import the trait ReverseCodepoints before you can call it.

                                                                                      1. 1

                                                                                        Or, worse yet, that existing talents may have been redefined. (Is that possible?)

                                                                                        1. 2

                                                                                          Nope - and even if you could override an impl, the orphan rule would stop you overriding impls you don’t own.

                                                                                          1. 1

                                                                                            👍 Good to hear.

                                                                                      2. 1

                                                                                        thanks for the explanation!

                                                                                        It’s often more convenient than wrapping, because you do not have to wrap/unwrap depending on the methods you need (or write delegation methods).

                                                                                        usually the wrapped versions tend to be used from there on (at least the way i use them ;), so that’s not really an issue for me. i like the verbosity of go, which may be a bit unusual, but i like that things are written down explicitly.

                                                                                      3. 1

                                                                                        I had forgotten about interfaces somehow. Yes, sort of. But you’re limited to wrapping stuff one level at a time and you have to actually make new wrapper types.

                                                                                        1. 1

                                                                                          i kind-of like that in go i have to add new types for new functionality, but i see why traits may be good (without having written rust yet..)

                                                                                  1. 14

                                                                                      There is a reason for the proliferation of Electron apps. There is a huge ecosystem, and the time to ship is fairly low. There are tonnes of FOSS IDEs in Electron that you could take inspiration from as well. Don’t worry about the size of the binary - the intersection of people interested in IDEs/notebooks and people interested in minimal memory footprint is tiny.

                                                                                    1. 37

                                                                                      No, just.. no. The whole idea of making individual applications that each depend on their own copy of a fully featured web browser that, get this, will almost never be updated to patch future security issues is an extremely flawed and dangerous practice. You do not need an entire copy of chromium to edit text.

                                                                                      1. 11

                                                                                        you’re right that you don’t need it, but that analysis is only considering the user’s perspective, and is only considering it from a narrow frame of reference.

                                                                                        For one thing: most Electron apps in most situations are being deployed to users who will run just a few applications at a time; less than ten. I agree that you don’t need to run a web browser to edit text. The reality is that the vast majority of users will only run one instance of VS Code (or Atom). The question is not whether or not you need it, it’s whether or not you can get away with it.

                                                                                        For the majority of orgs, staffing is significantly simplified with Electron, because it has significant overlap with the web as a platform. You can’t seriously consider the merits of Electron without acknowledging how much Electron lowers the barrier to entry.

                                                                                        With that said: I absolutely despise building Electron apps personally, and I loathe using them. It is, in my opinion, a terrible platform. It does, however, solve real problems that are not solved by the alternatives.

                                                                                        My hope is that the proliferation of Electron will give Microsoft pause, and will encourage innovation in the desktop application development space. (I don’t think this is likely, but that’s a topic for another day.) It’s an absolute embarrassment that Slack takes about 3x as much memory to run as Blender, when the former is just a glorified IRC client and the latter is literally a world-building tool. But at the end of the day, Slack is taking 350mb of memory in an age where entry-level machines have 4 or 8gb of memory. For most users in most situations, the bloat just doesn’t actually matter. The irony is that the people most affected by this bloat are software people, who are the exact people that have the power to stop it.

                                                                                        1. 14

                                                                                          The irony is that the people most affected by this bloat are software people, who are the exact people that have the power to stop it.

                                                                                          This is a pretty shallow analysis. The people most affected by this bloat are the people with the least capable hardware, which is not usually people in software engineering positions, and certainly not the people choosing to write Electron apps in the first place.

                                                                                          1. 3

                                                                                            I think that’s broadly true but I left it off because I’m having a very hard time imagining a user persona that describes this problem in a way where it really is a problem, and where there are realistic alternatives.

                                                                                            A big sector of the low-end PC market now is Chromebooks (you can get a Chromebook with 4gb of memory for under a hundred dollars), but that’s a circular issue since Chromebooks can’t run Electron apps directly anyway, they have to run Chrome Apps, which are … themselves Chromium contexts. That user persona only increases the utility of Electron, inasmuch as that entire market is only capable of running the execution context that Electron is already using: the web. By targeting that execution context, you’re lowering the barrier to entry for serving that market since much of what you write for Electron will be portable to a Chrome App. The existence of Electron is probably a net positive for that market, even if, yes, as I said before, it’s a very wasteful foundation on which to build your software.

                                                                                            The Raspberry Pi userbase is particularly notable here. Electron is probably a net harm to RPi 3 and Pi Zero users. Electron is a problem for the RPi userbase, but that’s a highly specialized market to begin with, and newer models of the RPi are fast enough that Electron’s bloat stops being as punitive. (when I say specialized here I don’t mean unimportant or rare, I mean that it’s notably different from other desktop environments both in terms of technical constraints and user needs.)

                                                                                            It’s easy to say “Electron is bloated, therefore harmful to people with slow computers”, but as a decision-making tool, that conclusion is too blunt to be useful. Which users, on which hardware, in which situations, attempting to access which software?

                                                                                            And besides, it’s not like Electron has cornered the market on writing bloated software. Adobe Photoshop is written in C++ and uses a cool 1gb of memory without a single document open. The reality is that Electron empowers beginner developers to create cross-platform desktop apps in a way that is absolutely dominating the space because it focuses on solving problems that actually exist, instead of problems that are only believed to exist. The path to getting people away from Electron is not to say “don’t use Electron because it’s bloated”, it’s for other tools to figure out what needs Electron is satisfying that are not satisfied by the alternatives.

                                                                                            1. 4

                                                                                              And besides, it’s not like Electron has cornered the market on writing bloated software. Adobe Photoshop is written in C++ and uses a cool 1gb of memory without a single document open. The reality is that Electron empowers beginner developers to create cross-platform desktop apps in a way that is absolutely dominating the space because it focuses on solving problems that actually exist, instead of problems that are only believed to exist. The path to getting people away from Electron is not to say “don’t use Electron because it’s bloated”, it’s for other tools to figure out what needs Electron is satisfying that are not satisfied by the alternatives.

                                                                                              That’s not a fair comparison given how many plugins and features out of the box Photoshop has.

                                                                                              1. 1

                                                                                                The path to getting people away from Electron is not to say “don’t use Electron because it’s bloated”, it’s for other tools to figure out what needs Electron is satisfying that are not satisfied by the alternatives

                                                                                                We need basically the Flash Player, without the legacy timeline or embedded VM. A cross-platform, high-performance scene graph with a small but complete API surface that developers can mate to the language of their choice, be it a VM like JS or Lua, or Python, or with D / Rust or C++ code.

                                                                                                1. 2

                                                                                                  the timeline and the AS3 VM are … kinda the core of Flash, so I’m not really sure what would be left. Without that stuff isn’t it basically just Cairo?

                                                                                                  anyway, you know about Scaleform, right? Not clear from your answer if it’s already on your radar, but it was a licensed implementation of Flash, significantly more performant than Adobe’s implementation, that supported C++ interoperability, and that was in its later years owned and run by Autodesk. Using Scaleform to build the 2D UI for 3D games was a dominant trend in the games industry for over 15 years. Some people still use it today but it was cancelled years ago. https://en.wikipedia.org/wiki/Scaleform_GFx

                                                                                                  1. 1

                                                                                                    Not at all. Most (complex) software written in Flash over the last few years of its meaningful existence completely ignored the timeline, and consisted of only 2 frames, one being the preloader and the other being the application. The reason I think it should separate out the VM and provide only an API is to share the load / interest across people coming from different language communities who all want to show something on-screen without Electron.

                                                                                                    Flash 2d was a lot more than Cairo, which is comparable to the flash.display.Graphics API used to draw each individual item on the stage. It’s a proper retained-mode scene graph with events and a pretty good text API built in. And Flex, which was built entirely in AS3 on top of the basic scene graph was and still is the most well-thought-out, well-documented, and easy-to-use (both for beginners and advanced cases) UI framework I’ve ever had the pleasure of working with, and I’ve messed about with bunch of them over the years. Adobe Corporate (and Jobs’ pique) screwed over a great team of talented people who built and maintained Flex, robbed us hackers of an excellent cross-platform… platform, and created billions of dollars of waste heat running multiple redundant entire copies of Chromium on desktops everywhere for years.

                                                                                            2. 3

                                                                                              My hope is that the proliferation of Electron will give Microsoft pause, and will encourage innovation in the desktop application development space. (I don’t think this is likely, but that’s a topic for another day.)

                                                                                              Nope, they’re huge users of it.

                                                                                              1. 1

                                                                                                I mean, my first example was VS Code and I said I thought this result was highly unlikely so … I feel like I’ve already demonstrated an awareness of that fact and I’m not sure what you’re getting at.

                                                                                                1. 2

                                                                                                  Eep, I read it but didn’t catch that line. Sorry.

                                                                                          2. 3

                                                                                            the intersection of people interested in IDEs/notebooks and people interested in minimal memory footprint is tiny.

                                                                                            In my experience the more someone crafts code, the more they care about memory footprint- even if it’s in the sense of “I’ll have less memory for testing my application”.

                                                                                            1. 1

                                                                                              In my experience the more someone crafts code, the more they care about memory footprint- even if it’s in the sense of “I’ll have less memory for testing my application”.

                                                                                              I am with you to some degree - but we are talking about Notebook/repl-style applications here. These aren’t traditional applications in the sense that they are never daemonized, are always foreground applications, and are tested by manually tweaking and observing (or at least that is how I use notebooks). Also, I probably should clarify - if the application actually ships and people feel it is slow, then the effort involved in porting over to Qt or something might be justified. Most of the time, in crowded spaces, getting things to ship is more important than quibbling about memory use.

                                                                                          1. 3

                                                                                            The shift click to expand selection looks nice.

                                                                                            I really love watching this thing evolve. A really nice console / terminal was the missing link that kept WSL from being a first class Linux development environment, and Windows Terminal is filling that gap REALLY fast.

                                                                                            I have a funny feeling that there are going to be a LOT of developers who live outside the rarefied bubble of desktop UNIX users whose work-a-day quality of life is going to be markedly improved by this.

                                                                                            1. 2

                                                                                              I have a funny feeling that there are going to be a LOT of developers who live outside the rarefied bubble of desktop UNIX users whose work-a-day quality of life is going to be markedly improved by this.

                                                                                              I work (as a network server developer) in the game industry and this is absolutely transformative for my work. I have to use Windows. A lot of the tooling in the game industry only runs on Windows. I will likely start doing the entirety of my work inside of this terminal. Right now I’m running an XFCE desktop inside of a VirtualBox VM just so that I can run Konsole because the terminal emulators on Windows are all in some way insufficient.

                                                                                              1. 2

                                                                                                rarefied bubble of desktop UNIX users

                                                                                                You mean, like, Mac users? I’m not even sure I know a dev who uses Windows. Not saying they don’t exist, just questioning the hyperbole here :-)

                                                                                                1. 9

                                                                                                  I’m not even sure I know a dev who uses Windows.

                                                                                                  While I agree that “rarefied bubble” seems hyperbolic, “not even sure I know a dev who uses Windows” seems much more hyperbolic to me.

                                                                                                  Vast armies of “enterprise” developers run Windows all the time. And many who do product work are Windows first.

                                                                                                  To counter your anecdata with some of my own, on a project I worked last year, we were delivering software that ran exclusively on RHEL servers. On days when the whole team met, you’d see a dozen Windows laptops, two or three Macs and two of us running Linux. One of the Linux holdouts was a Red Hat employee, and I was the other, having switched when my 2011 17” MBP died of GPU failure and I couldn’t find anything in Apple’s lineup to replace it.

                                                                                                  Outside that group, I know plenty who run Macs, but the vast majority I’ve worked with use Windows as a daily driver.

                                                                                                  That said, every single Windows user I know who suffered from the trash fire that is the old terminal has already helped themselves in terms of quality of life by installing cmder or something like it. “Markedly improved” is probably overstatement for most, even though it will be much better than the default from before.

                                                                                                  1. 8

                                                                                                    Hello! Nice to meet you! I am a software engineer and I use Windows.

                                                                                                    I like coffee and tiling as many windows as Windows lets me. Now you know a dev who uses Windows!

                                                                                                    1. 3

                                                                                                      macOS is really popular in the USA, but that’s not really the case in Europe. We struggled so much to find a macOS developer able to do system programming that we had to hire Linux devs and train them up on macOS.

                                                                                                      1. 3

                                                                                                        Windows devs are many (I am one also). As with any popular platform, I imagine developing on it is popular too.

                                                                                                        1. 2

                                                                                                          You should get out more ;)

                                                                                                          Seriously, just type “I run Windows” into the Lobsters search box and take a gander at the number of articles this yields.

                                                                                                          Not just Mac users, but even developers who’ve been on Windows for any number of reasons (It’s what their IT department supports, for example).

                                                                                                          Sorry about the hyperbole, it’s a core part of my personality and when I’m excited about something it’s very hard to hide, but I shall endeavor to do so in future interactions here.

                                                                                                          1. 1

                                                                                                            Hehe, I definitely deserved what I got here, I managed to use hyperbole in a comment calling out hyperbole! :-)

                                                                                                            1. 1

                                                                                                            I think your post is actually a really useful window into how many developers think about this situation.

                                                                                                            We all surround ourselves with communities of like-minded people, and as a result it’s very easy to start thinking that this represents the status quo for all developers, everywhere.

                                                                                                              Somebody felt so strongly about my comment that they flagged it as spam!

                                                                                                              I would love it if the greater technology community could take a giant step back and recognize that the bubbles we inhabit can narrow our perception of reality.

                                                                                                              I think for myself I will stop interacting with Windows related posts on lobsters. I will totally cop to having been over-zealous in my interactions on the previous thread, but I was exceedingly careful in my response this time and yet still got flagged.

                                                                                                              1. 2

                                                                                                                I agree that we all have bubbles. Sometimes it’s hard to escape your bubble even if you want to because it has to do with geography or industry. But I do agree that we should do our best to at least be aware that those bubbles exist. Honestly, my original comment was meant to be light-hearted, trading hyperbole for hyperbole. In retrospect, it probably wasn’t a good topic for that kind of thing.

                                                                                                                1. 2

                                                                                                                Jokes (and lighthearted comments) often don’t land in text. Can’t add intonation beyond SCREAMING something, really. Sometimes you just have to call it out rather dryly...

                                                                                                                2. 2

                                                                                                                  I hear you.. I agree with you. I think it would be bad if you refrained from interacting with Windows posts.

                                                                                                            2. 2

                                                                                                            I’m on windows at work. I think hwayne is too. Don’t forget that there is an entire dev community built around C# and the CLR which until recently was largely focused around windows. Not to mention lots of game devs work in windows.

                                                                                                          1. 7

                                                                                                            Go hit 1.0 eight years ago. Can we please stop circulating these thinkpieces written by people who have spent all of one weekend with the language that just whinge about how it’s not a clone of something else?

                                                                                                            the timestamp example is exceptionally contrived because in reality you would never call time.Parse with two string literals. If you are writing the string literal into the source, you already know the value of the timestamp; you’d never start with a string literal and then parse it with another string literal because you’re turning what can be a compile-time error into a runtime error; you would just construct the value directly with time.Date, which cannot give you an error. I have been programming Go professionally since 2012 and this has never once been a problem for me.
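
                                                                                                            To make the contrast concrete (the timestamp value here is arbitrary):

                                                                                                            package main

                                                                                                            import (
                                                                                                                "fmt"
                                                                                                                "time"
                                                                                                            )

                                                                                                            func main() {
                                                                                                                // Parsing one literal with another literal layout: mistakes only show up at runtime.
                                                                                                                t1, err := time.Parse("2006-01-02 15:04:05", "2020-03-14 15:09:26")
                                                                                                                if err != nil {
                                                                                                                    panic(err)
                                                                                                                }

                                                                                                                // If the value is already known, construct it directly; there is no error path at all.
                                                                                                                t2 := time.Date(2020, time.March, 14, 15, 9, 26, 0, time.UTC)

                                                                                                                fmt.Println(t1.Equal(t2)) // true
                                                                                                            }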

                                                                                                            the XML thing is a quagmire because XML itself is rife with redundancies and ambiguities. Try using an XML parser in any other statically typed language and compare them. They’re all a nightmare. Go sucks here but the alternatives suck more so I don’t understand why anyone would feel the need to belabor this point beyond using it to bring traffic to their blog.

                                                                                                            The Go grammar definition contains semicolons, but you don’t see them much in idiomatic Go code.

                                                                                                            The semicolon is a statement separator, every for loop delimits its statements with the semicolon for the same reason. Every if err := fn(); err != nil { invocation is a standard usage of the semicolon. So no, not really.

                                                                                                            Taking shots at javascript for semicolon insertion in (checks watch) 2020? Yawn.

                                                                                                            To me Go doesn’t really seem to be designed as a general purpose programming language

                                                                                                            how could you possibly conclude such a thing from this tiny example? did you try to use it for any other domain? Talk about that. What’s established does not come anywhere close to being sufficient evidence to support such a conclusion.

                                                                                                            Go also seems to suffer from ignorance towards well established patterns that are proven to work

                                                                                                            this is where I just, quite frankly, lose it. Without this comment I would not have bothered to respond to this piece.

                                                                                                            You can’t just hand-wave and say they’re ignorant of “well established patterns” without naming which patterns specifically. I’m willing to bet that for the majority of patterns you can name, somebody on the Go team has written publicly about how the Go team has thought about that problem. It will never cease to amaze me that the Go team can, for example, write what now must be hundreds of pages of explanations as to why generics aren’t there and people will still stand back and say “the Go team hasn’t thought about generics”. There are a lot of things that the Go team has thought about and intentionally decided to omit. They didn’t decide to do what you wanted them to do. You can talk about why you disagree with their conclusions, but if you’re going to just make a blanket statement that they haven’t thought about it, you’re just making unsubstantiated accusations of incompetence.

                                                                                                            The Go team has made some very opinionated choices and some of those choices I don’t agree with. There are aspects of the language that I find absolutely irritating (the lack of sum types is particularly aggravating to me). Go is not a perfect tool. It has its problems. But to say that the Go team is ignorant is something that I will always stand up to oppose. The Go team is absolutely exemplary when it comes to explaining their design philosophy, their strategy, and design problems that are works in progress.

                                                                                                            the getopt thing does suck though lol

                                                                                                            1. 2

                                                                                                              I don’t understand where the hate is coming from. I used Go to write software and made some observations while doing so. Then I shared them with other people on a platform whose purpose is to share links. What’s wrong about that?

                                                                                                              Yes, the timestamp example is contrived. I did that to make a point that the format and the value to be parsed look the same.

                                                                                                              the XML thing is a quagmire because XML itself is rife with redundancies and ambiguities. Try using an XML parser in any other statically typed language and compare them. They’re all a nightmare. Go sucks here but the alternatives suck more so I don’t understand why anyone would feel the need to belabor this point beyond using it to bring traffic to their blog.

                                                                                                              I praised the XML parser in Go: “Parsing XML in Go is relatively easy, and I have to admit rather neat. Definitely more comfortable to use than Python’s ElementTree.” The personal attack was completely uncalled for.

                                                                                                              Go also seems to suffer from ignorance towards well established patterns that are proven to work

                                                                                                              this is where I just, quite frankly, lose it. Without this comment I would not have bothered to respond to this piece.

                                                                                                              Damn, I chose some unfortunate wording there… Let me repeat a comment here that I wrote in response to someone else complaining about the same issue:

                                                                                                              Yes, datetime parsing and flag handling are what I was referring to.

                                                                                                              Go is developed by smart people; I never assumed they are not aware of what timestamp formats look like in other languages. I assumed they did it differently anyway, ignoring the established practices; that’s why I wrote ‘ignorance’. Maybe not the most fortunate wording, didn’t mean to offend.

                                                                                                              1. 4

                                                                                                                I had the same reaction to your blog post as scraps. On seeing your response and reflecting, I think it’s just a matter of being so tired of surface-level criticisms of Go that seem to sweep up the rankings on hacker news etc. Which is 100% not your fault; of course you’re entitled to write about your experiences.

                                                                                                                While your article was pretty balanced in pro/anti sentiments, the fact that it was sandwiched between a contrived and sarcastic example at the start, and closing remarks that used “ignorance” to mean “ignoring”, “disregarding”, or “choosing differently”, meant that my brain put your article into the “sigh, another one of these; why are they always so popular?” category :-)

                                                                                                                (fwiw, I agree with scraps that sum types are the biggest obvious hole in Go)

                                                                                                                1. 3

                                                                                                                  this is exactly right.

                                                                                                            1. 4

                                                                                                              It looks like “Observability” is the cool new trend nowadays

                                                                                                              It’s not clear from this post what your goals are and what your current setup is, because you’re conflating observability and centralized logging, but those are independent topics. Related, yes, but independent.

                                                                                                              What is it you wish to observe, and to what ends?

                                                                                                              Observability is a somewhat vague term, but I assume that you mean it by the common definition of observing the current (and past) states of a system for the purposes of operating that system safely. That is: observability generally means operational observability and not business observability (or legal/financial/compliance observability). Operational observability generally entails some sort of alerting system. Most people would not put auditing for legal or compliance reasons under the umbrella of observability, even though auditing is literally the process of observing what has happened. Centralized logging is essential for dealing with issues of legal compliance, and it’s fantastic for doing so. If you are trying to solve some sort of legal or compliance problem the rest of my post will not help you. If you are, however, concerned with operating live systems, the rest will apply.

                                                                                                              I’ve run ELK in the past and don’t bother with it now; I just use dsh to ssh into my nodes and tail their logs directly and pipe them into grep. Using journalctl -f is the same thing, sometimes I do that too. I log to file and logrotate my logs and throw them out after a week because I don’t have a legal requirement to keep them and I’m never going to look at them. I don’t personally believe in the efficacy of logs for the purposes of operational observability.

                                                                                                              To clarify where I’m coming from and my experience with the ELK stack: I spent a few months working on ELK infrastructure full time at Etsy; it was my responsibility to productionize the log shipping from every node in the fleet (Elastic’s “beats” did not exist yet; the predecessor to Filebeat was lumberjack and it was not production-ready). Not only did I find a large number of problems with how the ELK stack shipped logs, I also found a large number of problems with how Splunk ships logs (do you know what happens when one process continues to write log lines after another process has deleted the file handle?). We were able to scale this stack, and when I was there we were shipping many billions of log lines every day. I would guess that less than one one-hundredth of one percent of all of that data was ever read in some way. My conclusion is that centralized logging is primarily a concept that is pushed by people who are trying to sell things. If what you really want is some sort of graph or chart or alerting system, you probably don’t want to get there by parsing logs.

                                                                                                              My conclusion after all this time was that centralized logging is ill-suited to operational observability inasmuch as centralized logging is concerned with completeness; the metric of success for centralized logging systems is to not lose log data. But observability in the general ops parlance is not concerned with the completeness of records inasmuch as it is concerned with the timeliness of signals. Alerting systems are latency-bounded systems. Case in point: downsampling is the process of throwing away data to make your pipeline more efficient in terms of something (disk space, network utilization, CPU utilization, or end-to-end latency, take your pick).

                                                                                                              Centralized logging systems nearly always involve queuing systems or backpressure; they’re nearly always designed around the idea that you want to get all of the data eventually, not a current view of your system right now. The time-to-alert for your alerts driven by centralized logging is in every case going to be the end-to-end latency of the entire system. Especially if your alert is defined as “some system is in some state for some amount of time”, such as “for 3 minutes this endpoint is not responding”, which is how a lot of production alerts are driven in order to avoid false positives. When some portion of your centralized logging infrastructure slows down, so does your logging pipeline and so does your alerting system. If your logging system is delayed by five minutes and you want to alert on something being in some state for five minutes you’re looking at ten minutes from the time an event begins to the time an alert is fired. Is that ok?

                                                                                                              I strongly prefer, for the purposes of operational observability, to use a time series database and to focus on metrics instead of logs. I use InfluxDB and Grafana for this purpose and my experience has been positive. I install Telegraf on every node and have my local processes talk to telegraf, which performs aggregations at the source and sends them to an influx database, which I view through grafana and drive alerts through pagerduty. This is working well for me.
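
                                                                                                              For a sense of what “have my local processes talk to telegraf” can look like, here is a minimal sketch; it assumes Telegraf is configured with a socket_listener input on udp://localhost:8094 accepting InfluxDB line protocol, and the measurement and tag names are made up:

                                                                                                              package main

                                                                                                              import (
                                                                                                                  "fmt"
                                                                                                                  "net"
                                                                                                                  "time"
                                                                                                              )

                                                                                                              func main() {
                                                                                                                  // Assumes a local Telegraf agent listening for line protocol on udp://:8094.
                                                                                                                  conn, err := net.Dial("udp", "127.0.0.1:8094")
                                                                                                                  if err != nil {
                                                                                                                      panic(err)
                                                                                                                  }
                                                                                                                  defer conn.Close()

                                                                                                                  // InfluxDB line protocol: measurement,tags fields timestamp(ns)
                                                                                                                  line := fmt.Sprintf("requests,service=api,status=ok count=1i %d\n", time.Now().UnixNano())
                                                                                                                  if _, err := conn.Write([]byte(line)); err != nil {
                                                                                                                      panic(err)
                                                                                                                  }
                                                                                                              }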

                                                                                                              If we want to be exceptionally pedantic: technically any append-only, ordered stream of events is a log of some form, and technically writing to a time series database is itself a form of logging. I assume what you mean is: “I have textual application logs, I want to know how my system is doing both now and historically”, and my post is written through that lens.

                                                                                                              1. 10

                                                                                                                I get what the author is trying to get at with calling it “serverless” and not sure if it’s a good or bad overloading of terms. But, I do think that SQLite is an underappreciated tool for the reasons they described. I wrote the following on Hacker News, but figured I’d add it here too:

                                                                                                                I think a good under-appreciated use case for SQLite is as a build artifact of ETL processes/build processes/data pipelines. Seems like lot of people’s default, understandably, is to use JSON as the output and intermediate results, but if you use SQLite, you’d have all the benefits of SQL (indexes, joins, grouping, ordering, querying logic, and random access) and many of the benefits of JSON files (SQLite DBs are just files that are easy to copy, store, version, etc and don’t require a centralized service).

                                                                                                                I’m not saying ALWAYS use SQLite for these cases, but in the right scenario it can simplify things significantly.
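
                                                                                                                A minimal sketch of that build-artifact idea from Go, using database/sql and one of the third-party SQLite drivers (the file, table, and column names are made up):

                                                                                                                package main

                                                                                                                import (
                                                                                                                    "database/sql"

                                                                                                                    _ "github.com/mattn/go-sqlite3" // one of several Go SQLite drivers
                                                                                                                )

                                                                                                                func main() {
                                                                                                                    // The ETL step writes its output into a single .db file that downstream
                                                                                                                    // jobs can copy around and query with indexes, joins, and ordering.
                                                                                                                    db, err := sql.Open("sqlite3", "pipeline-output.db")
                                                                                                                    if err != nil {
                                                                                                                        panic(err)
                                                                                                                    }
                                                                                                                    defer db.Close()

                                                                                                                    stmts := []string{
                                                                                                                        `CREATE TABLE IF NOT EXISTS products (id INTEGER PRIMARY KEY, name TEXT, price_cents INTEGER)`,
                                                                                                                        `CREATE INDEX IF NOT EXISTS idx_products_name ON products(name)`,
                                                                                                                    }
                                                                                                                    for _, s := range stmts {
                                                                                                                        if _, err := db.Exec(s); err != nil {
                                                                                                                            panic(err)
                                                                                                                        }
                                                                                                                    }

                                                                                                                    if _, err := db.Exec(`INSERT INTO products (name, price_cents) VALUES (?, ?)`, "widget", 1299); err != nil {
                                                                                                                        panic(err)
                                                                                                                    }
                                                                                                                }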

                                                                                                                Another similar use case would be AI/ML models that require a bunch of data to operate (e.g. large random forests). If you store that data in Postgres, Mongo or Redis, it becomes hard to ship your model alongside with updated data sets. If you store the data in memory (e.g. if you just serialize your model after training it), it can be too large to fit in memory. SQLite (or other embedded database, like BerkleyDB) can give the best of both worlds– fast random access, low memory usage, and easy to ship.

                                                                                                                1. 8

                                                                                                                  I think SQLIte is great, and an amazing feat of engineering.

                                                                                                                  However, I really wish it would just check my types. If the database will happily write a string to my int column, and my language is dynamically typed… well, there’s only the fallible human left to ensure there’s no silent data corruption.

                                                                                                                  1. 4

                                                                                                                    You can add check constraints using typeof, e.g. check(typeof(col) == 'integer') (note that typeof() returns lowercase type names).

                                                                                                                    I agree static types are useful and important, but dynamic types are also useful for plenty of things, e.g. using SQLite with unclean data from external sources.

                                                                                                                    select typeof(col), count(*) from imported group by 1;
                                                                                                                    select col from imported where typeof(col) != 'integer';
                                                                                                                    update imported set col = ... where typeof(col) != 'integer';
                                                                                                                    
                                                                                                                  2. 5

                                                                                                                    It seems like the initial documentation might be older than the widespread usage of serverless as “no visible servers for you to manage”.

                                                                                                                    I think it would be a bit silly to choose this hill to die on; it’s not like the older meaning of serverless has ever caught on in any way or form, nor is there really a trend of building the sort of thing that could be called the SQLite kind of serverless.

                                                                                                                    The text by itself doesn’t mean that the author is choosing to die on this hill, though; maybe it’s just about clarifying a specific piece of documentation.

                                                                                                                    1. 11

                                                                                                                      This page is at least 12 years old, and remains largely unmodified since its creation, except for the second section added 2 years ago. See this archive of the page from 2007. No one is dying on any hill, it was just written long before the term was otherwise used.

                                                                                                                      1. 6

                                                                                                                        “Serverless” here means literally what it says: the work is done in-process, not in a separate server. This is beyond a trend, it’s the way regular libraries work. ImageMagick is “serverless”. Berkeley DB is “serverless”. OpenGL is “serverless”. Get it?

                                                                                                                        The only reason the developer of SQLite calls this out is because most SQL databases are client-server, so someone familiar with MySQL or SQLServer might otherwise be confused.

                                                                                                                        (And may I add that I, personally, find the current meaning of “serverless” ridiculous. Just because you don’t have to configure or administer a server doesn’t mean there isn’t one. When I first came across this buzzword a few years ago, I thought the product was P2P until I dug into the docs. But then, a lot of buzzwords are ridiculous and we get used to them.)

                                                                                                                      2. 3

                                                                                                                        I get what the author is trying to get at with calling it “serverless” and not sure if it’s a good or bad overloading of terms.

                                                                                                                        I’m sympathetic to this line of thinking, but in this case “serverless” is an utterly and completely lost cause. It’s beyond any hope of redemption. All use is fair play.

                                                                                                                        1. 2

                                                                                                                          I think a good under-appreciated use case for SQLite is as a build artifact of ETL processes/build processes/data pipelines.

                                                                                                                          ha, I built pretty much exactly that at Etsy years ago. We had an ETL that transformed the output of hadoop jobs into sqlite files that could be queried from the site. It worked because without writers you don’t have any locking problems.