1. 8

    I switched to Visual Studio Code with the Neovim backend yesterday. Neovim provides all the Ex functionality, so you can :s/foo/bar to your heart’s content. It’s finally Good Enough to operate as Vim without having to spend months tuning your .vimrc. I have been using Vim for 5+ years and wrote all my Go code in it.

    I think this is what the future of Vim actually is for the majority of people: Neovim providing a backend for their preferred IDE. Interacting in a terminal is incredibly antiquated, even if it’s the sort of thing you are super used to. You can spend your time actually understanding and learning Vim, not trying to make Vim do what you think is reasonable/behaves like your previous editor of choice.

    1. 5

      Despite being somewhat of a diehard vim fan, 99% of my ‘vim’ usage these days is via emulators - either in VS, VSCode or Firefox.

      For me the true value of vim is modal editing (and the associated muscle memory); the plugin ecosystem etc is fine (and at one point I spent a lot of time honing my plugin config) but there’s very little I miss.

      1. 2

        My experience is the same. I don’t even have gVim installed on my workstation anymore, but I love working with the vim extensions in VS, VSCode, and Firefox.

      2. 3

        A little off-topic, but what do you use for that integration?

        1. 3

          The VSCode Vim plugin will do it out of the box, just check “Enable NeoVim”

        2. 2

          Maybe some day an interface to Neovim will appear for Emacs; that would be a nice thing to happen. Perhaps I could start writing it, if I get a chance to learn Elisp: Emacs as the extensible front end, with a proper modal backend. In fact, the front end could be something even better than Emacs; a Scheme implementation would be amazing, in order to preserve separation of concerns and provide users with a lightweight but infinitely extensible editing environment. If someone with adequate skills for this (I don’t have them at the moment, so I will have to invest some time learning) is willing to start such a project with me, I would be more than honored. If no interest is shown, I will eventually do it on my own.

          1. 2

            Check out Oni!

            1. 1

              Thanks for the recommendation, but I’m not interested in bringing the web to the desktop with the Electron framework; as exciting as it may be for many programmers, I think it is still a bad idea. Personally, I think we don’t need tens of MB in a program’s binary in order to do text editing, and JavaScript isn’t a coherent or well-defined enough language to justify its expansion onto servers and home computers; I think there are better alternatives. Nevertheless, if you like it and it solves your problems, then that’s all that matters in the end.

              1. 1

                I don’t actually use it - I use plain neovim in my terminal. I agree with you on the criticisms of electron - it’s just the only program of its kind that I’ve found.

                1. 2

                  Sorry if I assumed something incorrect. Some of the ideas in Oni seem interesting, and it would be worth the effort to have a look at the source code.

        1. 3

          It’d be good in general for commands that have an effect on the state of your filesystem to have a way to declare their inputs and their outputs for a given set of options. That way you’d be able to analyze e.g. install scripts to review which files would be read and written before even running them, and to check whether you’re missing anything the script depends on.
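
          No such convention exists today as far as I know, but the declaration could be as simple as structured data shipped alongside the script. A minimal Go sketch (the Manifest type and the file paths are invented for illustration):

```go
package main

import "fmt"

// Manifest is a hypothetical declaration a command could ship with:
// for a given set of options, the paths it will read and write.
type Manifest struct {
	Reads  []string
	Writes []string
}

// installManifest models what an install script might declare
// when run with a particular prefix.
func installManifest(prefix string) Manifest {
	return Manifest{
		Reads:  []string{"./Makefile", "./config.status"},
		Writes: []string{prefix + "/bin/tool", prefix + "/share/man/man1/tool.1"},
	}
}

func main() {
	m := installManifest("/usr/local")
	fmt.Println("Would read: ", m.Reads)
	fmt.Println("Would write:", m.Writes)
}
```

          A shell or package manager could then check the declared reads against the live filesystem, and review the declared writes, before agreeing to run the script.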

          1. 4

            They are unfortunately very chatty though. Alias rm to rm -i and it’s just an obnoxious amount of y y y for any non-trivial removal. I wish someone would fix this with something more like a table output showing a reasonable summary of what’s going on, and let me confirm it once.

            1. 4

              This is a powerful paradigm.

              Incremental confirmation can result in a state where the system has a half-performed operation when the operator decides to abort.

              Displaying a plan – and requiring confirmation of the entire plan – ensures that the operator intends to do everything, or intends to do nothing.
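
              Sketching that paradigm as a tiny Go wrapper around removal (hypothetical code, not an existing tool; note that GNU rm’s -I flag already prompts once for large removals, though without showing a plan):

```go
package main

import (
	"bufio"
	"fmt"
	"os"
)

// plan renders a one-shot summary of every path that would be removed,
// so the operator reviews the entire operation at once.
func plan(paths []string) string {
	out := fmt.Sprintf("About to remove %d path(s):\n", len(paths))
	for _, p := range paths {
		out += "  " + p + "\n"
	}
	return out
}

// confirm shows the plan and asks for a single yes/no answer.
func confirm(in *bufio.Reader, paths []string) bool {
	fmt.Print(plan(paths), "Proceed? [y/N] ")
	line, _ := in.ReadString('\n')
	return len(line) > 0 && (line[0] == 'y' || line[0] == 'Y')
}

func main() {
	paths := os.Args[1:]
	if len(paths) == 0 || !confirm(bufio.NewReader(os.Stdin), paths) {
		fmt.Println("Aborted; nothing removed.")
		return
	}
	for _, p := range paths {
		if err := os.RemoveAll(p); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}
}
```

              Because confirmation covers the whole plan, the operator approves everything or nothing; there is no half-performed state to abort from.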

              1. 3

                The book Unix Power Tools, originally published in 1993, includes a recipe for the rm behavior you’re describing. It is surprising this feature hasn’t made it into coreutils sometime in the intervening 2 1/2 decades.

                1. 2

                  I’ve worked on systems that enforce -i and it just made me develop a very bad -f habit.

              1. 6

                In the general case, I have developed a deep and long-lasting skepticism of DSLs. I was a very heavy proponent of them during my grad studies, and investigated pretty thoroughly using a rules engine for Space Invaders Enterprise Edition and a runtime monitor for Super Mario World.

                I went a little further down this path before I abandoned it for reasons unrelated to the DSL skepticism. That happened later. I just wanted to give context that I was actually naturally predisposed to liking them.

                What I have come to believe in my time as a software engineer is that it is axiomatic that all DSLs eventually tend towards something Turing complete. New requirements appear, features are added, and the DSL heads further towards Turing completeness. Except the DSL does not have the fundamental mechanics to express Turing completeness; it is by design supposed not to do that. What you end up with is something very complex, where users perform all sorts of crazy contortions to get the behavior they want, and you can never roll that back. I feel like DSLs are essentially doomed from the outset.

                I am much, much more optimistic about opinionated libraries as the means to solve the problems DSLs do (Ruby on Rails being the most obvious one). That way any of the contortions can be performed in a familiar language that the developer is happy to use and won’t create crazy syntax, and the library then called to do whatever limited subset of things it wants to support. For basic users, they’ll interact with the library only and won’t see the programming language. As things progress, the base language can be brought in to handle more complex cases as pre/post-processing by the caller, without infringing on the design of the library.
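
                To make the contrast concrete, here is a minimal Go sketch of the opinionated-library idea (the Target API is invented for illustration): configuration is ordinary code calling a small library, so when requirements grow, the host language’s loops and conditionals absorb the complexity instead of a stretched DSL:

```go
package main

import "fmt"

// Rule is the library’s opinionated unit of configuration:
// plain data built by ordinary function calls, not a custom syntax.
type Rule struct {
	Name string
	Deps []string
}

var rules []Rule

// Target registers a build target; basic users only ever call this.
func Target(name string, deps ...string) {
	rules = append(rules, Rule{Name: name, Deps: deps})
}

func main() {
	// The simple case reads almost like a DSL...
	Target("app", "lib", "assets")

	// ...but when new requirements appear, the host language is
	// already there: no DSL extension needed for loops or conditionals.
	for _, arch := range []string{"amd64", "arm64"} {
		Target("app-"+arch, "app")
	}

	for _, r := range rules {
		fmt.Println(r.Name, r.Deps)
	}
}
```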

                At Google, we have a number of DSLs to perform many different tasks which I won’t go into here. Each one requires a certain learning curve and a certain topping-out where you can’t express what you want. I was much happier with an opinionated library approach in Python, where I could do a great deal of what I wanted without peering behind the curtain of what was going to be performed.

                1. 6

                  sklogic on Hacker News had a different view: you start with a powerful, Turing-complete language that supports DSLs, with the DSLs taking the place of libraries. He said he’d use DSLs for stuff like XML querying, Prolog where a logic approach makes more sense, Standard ML when he wants it type-safe in simple form, and, if all else fails or is too kludgy, drop back into the LISP that hosts it all. He uses that approach to build really complicated tools like his mbase framework.

                  I saw no problem with the approach. The 4GLs and DSLs got messy because they had to be extended to become more powerful. Starting with something powerful that you constrain where possible eliminates those concerns. Racket Scheme and REBOL/Red are probably the best examples. The Ivory language is an example of low-level programming done with Haskell DSLs. I have less knowledge of what Haskell DSLs can do, though.

                  1. 3

                    I think it’s a good approach, but it’s still hard to make sure that the main language hosting all the DSLs can accommodate all of their quirks. Lisp does seem to be an obvious host language, but if it were that simple then this approach would have taken off years ago.

                    Why didn’t it? Probably because syntax matters and error messages matter. Towers of macros produce bad error messages. And programmers do want syntax.

                    And I agree that syntax isn’t just a detail; it’s an essential quality of the language. I think there are fundamental “information theory” reasons why certain syntaxes are better than others.

                    Anything involving s-expressions falls down – although I know that sklogic’s system does try to break free of s-expressions by adding syntax.

                    Another problem is that ironically by making it too easy to implement a DSL, you get bad DSLs! DSLs have to be stable over time to be made “real” in people’s heads. If you just have a pile of Lisp code, there’s no real incentive for stability or documentation.

                    1. 4

                      “but if it were that simple then this approach would have taken off years ago.”

                      It did. The results were LISP machines, Common LISP, and Scheme. Their users build little DSLs all the time to quickly solve their problems. LISP was largely killed off by the AI Winter in a form of guilt by association. It was also really weird vs. things like Python. At least two companies, Franz and LispWorks, are still in the Common LISP business with plenty of success stories on complex problems. Clojure brought it to Java land. Racket is heavy on DSLs, backed by How to Design Programs and Beautiful Racket.

                      There was also a niche community around REBOL, making a comeback via Red; transformation languages like Rascal; META II follow-ups like OMeta; and Kay et al.’s work in the STEPS reports using “IS” as the foundational language. Now we have Haskell, Rust, Nim, and Julia programmers doing DSL-like stuff. Even some people in formal verification are doing metaprogramming in Coq etc.

                      I’d say the idea took off repeatedly with commercial success at one point.

                      “Probably because syntax matters and error messages matter. Towers of macros produce bad error messages. And programmers do want syntax.”

                      This is a good point. People also pointed out in other discussions with sklogic that each parsing method has its pros and cons. He countered that they can just use more than one. I think a lot of people don’t realize that today’s computers are so fast, and we have so many libraries, that this is a decent option. Especially if we use or build tools that autogenerate parsers from grammars.

                      So, IIRC, he would use one for raw efficiency first. If it failed on something, that something would get run through a parser designed for error detection and good messages. That’s now my default recommendation to people looking at parsers.
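
                      The two-parser strategy is easy to sketch with a toy grammar (a whitespace-separated list of integers); this is my own illustration, not sklogic’s code. The fast path stays cheap, and the diagnostic pass only runs on failure:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// fastParse is optimized for the common case and gives up with a bare error.
func fastParse(s string) ([]int, error) {
	var out []int
	for _, f := range strings.Fields(s) {
		n, err := strconv.Atoi(f)
		if err != nil {
			return nil, err
		}
		out = append(out, n)
	}
	return out, nil
}

// diagnosticParse is slower but reports exactly where the input went wrong.
func diagnosticParse(s string) error {
	for i, f := range strings.Fields(s) {
		if _, err := strconv.Atoi(f); err != nil {
			return fmt.Errorf("token %d (%q) is not an integer", i+1, f)
		}
	}
	return nil
}

// Parse tries the fast path first and only pays for diagnostics on failure.
func Parse(s string) ([]int, error) {
	ns, err := fastParse(s)
	if err != nil {
		return nil, diagnosticParse(s)
	}
	return ns, nil
}

func main() {
	if _, err := Parse("1 2 x"); err != nil {
		fmt.Println(err)
	}
}
```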

                      “Anything involving s-expressions falls down – although I know that sklogic’s system does try to break free of s-expression by adding syntax.”

                      Things like Dylan, Nim, and Julia improve on that. There’s also just treating it like a tree with a tree-oriented language to manipulate it. A DSL for easily describing DSL operations.

                      “Another problem is that ironically by making it too easy to implement a DSL, you get bad DSLs!”

                      The fact that people can screw it up probably shouldn’t be an argument against it, since they can screw anything up. The real risk of gibberish, though, led a lot of teams using Common LISP (per online commenters) to mandate a specific coding style with libraries and no macros for most of the apps. Then they use macros only where they make sense, like portability, knocking out boilerplate, and so on. And the experienced people wrote and/or reviewed them. :)

                      1. 2

                        Probably because syntax matters and error messages matter. Towers of macros produce bad error messages. And programmers do want syntax.

                        Another problem is that ironically by making it too easy to implement a DSL, you get bad DSLs! DSLs have to be stable over time to be made “real” in people’s heads. If you just have a pile of Lisp code, there’s no real incentive for stability or documentation.

                        I’m so glad to see this put into words. Although for me, I find it frustrating that this seems to be universally true. I was pretty surprised the first time around, when I felt my debugger was telling me almost nothing: my syntax was so uniform, I couldn’t really tell where I was in the source anymore!

                        Some possibilities I’m hoping for that would make this not be true: maybe it’s like goto statements, and if we restrict ourselves to making DSLs in a certain way, they won’t become bad (or at least won’t become bad too quickly). By restricting the kinds of gotos we use (and presenting them differently), we managed to keep the “alter control flow” aspect of goto.

                        Maybe there’s also something to be done for errors. Ideally, there’d be a way to spend time proportional to the size of the language to create meaningful error messages. Maybe by adding some extra information somewhere that is currently implicit in the language design.

                        I don’t know what to do about stability though. I mean you could always “freeze” part of the language I guess.

                        For this particular project, I’m more afraid that they’ll go the SQL route where you need to know so much about how the internals work that it mostly defeats the purpose of having a declarative language in the first place. I’d rather see declarative languages with well-defined succinct transformations to some version of the code that correspond to the actual execution.

                        1. 1

                          (late reply) Someone shared this 2011 essay with me, which has apparently been discussed to death, but I hadn’t read it until now. It says pretty much exactly what I was getting at!


                          In this essay, I argue that Lisp’s expressive power is actually a cause of its lack of momentum.

                          I said:

                          Another problem is that ironically by making it too easy to implement a DSL, you get bad DSLs!

                          So that is the “curse of Lisp”. Although he clarifies that they’re not just “bad” – there are too many of them.

                          He mentions documentation several times too.

                          Thus, they will have eighty percent of the features that most people need (a different eighty percent in each case). They will be poorly documented. They will not be portable across Lisp systems.

                          Domain knowledge is VERY hard to acquire, and the way you share that is by developing a stable and documented DSL. Like Awk. I wouldn’t have developed Awk on my own! It’s a nice little abstraction someone shared with me, and now I get it.

                          The “bipolar lisp programmer” essay that he quotes also says the same things… I had not really read that one either but now I get more what they’re saying.

                          1. 1

                            Thanks for sharing that link again! I don’t think I’ve seen it before, or at least have forgotten. (Some of the links from it seem to be broken unfortunately.)

                            One remark I have is that I think you could transmit information instead of code and programs to work around this curse. Implicit throughout the article is that collaboration is only possible if everyone uses the same language or dialect of it; indeed, this is how version controlled open-source projects are typically structured: around the source.

                            Instead, people could collaboratively share ideas and findings so everyone is able to (re)implement them in their own DSL. I say a bit more on this in my comment here.

                            In my case, on top of documentation (or even instead of it), I’d like to have enough instructions for rebuilding the whole thing from scratch.

                            To answer your comment more directly:

                            Domain knowledge is VERY hard to acquire, and the way you share that is by developing a stable and documented DSL

                            I totally agree that domain knowledge is hard to acquire, but I’m saying that this is only one way of sharing that knowledge once found. The other way is through written documents.

                    2. 4

                      Since I like giving things names, I think of this as the internal DSL vs external DSL argument [1]. This applies to your post and the reply by @nickpsecurity about sklogic’s system with Lisp at the foundation. If there is a better or more common name for it, I’d like to know.

                      I agree that internal DSLs (ones embedded in a full programming language) are preferable because of the problems you mention.

                      The external DSLs always evolve into crappy programming languages. It’s “failure by success” – they become popular (success) and the failure mode is that certain applications require more power, so they become a programming language.

                      Here are my examples with shell, awk, and make, which all started out non-Turing-complete (even Awk) and then turned into programming languages.


                      Ilya Sher points out the same problems with newer cloud configuration languages.


                      I also worked at Google, and around the time I started, there were lots of Python-based internal DSLs (e.g. the build system that became Blaze/Bazel was literally a Python script, not a Java interpreter for a subset of Python).

                      This worked OK, but these systems eventually got rewritten because Python isn’t a great language for internal DSLs. The import system seems to be a pretty significant barrier. Another thing that is missing is Ruby-style blocks, which are used in configs like Vagrantfile and I think Puppet. Ruby is better, but not ideal either. (Off the top of my head: it’s large, starts up slowly, and has version stability issues.)

                      I’m trying to address some of this with Oil, although that’s still a bit far in the future :-/ Basically the goal is to design a language that’s a better host for internal DSLs than Python or Ruby.

                      [1] https://martinfowler.com/bliki/InternalDslStyle.html

                      1. 3

                        If a programming language is flexible enough, the difference between DSL and library practically disappears.

                        1. 1

                          DSL’s work great when the domain is small and stays small and is backed by corporal punishment. Business Software is an astronomically large domain.

                        1. 1

                          I use Google Books for a one-by-one search. Google Books is a digital search inside an analog book. I search for the phrase I think I want in the book I own, use the snippet to figure out which page I want, then find it in the book. Works very well.

                          I usually don’t have multiple books on the same topic, I don’t have that much space :)

                          1. 1

                            “the phrase I think I want in the book I own” -> this is interesting, why would you like something like that before you buy a book? I’m more like, curious to explore what the book gives me.

                            1. 1

                              I meant I already own the book. So I have a book, and I think “I think there’s a word or phrase in here that will get me what I want, but the index in this book is garbage”. So I use Google Books.

                          1. 1

                            I wrote my thesis in LaTeX, which got converted into a book. The publisher, who is technical (not O’Reilly; you can probably find the book if you doxx me hard enough), made me convert it into Word so the editor could leave comments. The editor would never actually make the edits, just leave comment after comment, which drove me nuts.

                            It was not a good experience. I wish I could have just gotten a LaTeX template or Pandoc or something.

                            1. 1

                              I use paper a lot for kinda ephemeral stuff, but by far the number one problem I have is that it keeps me from always being able to capture. If I’m away from my desk, I now need a secondary capture system to get things onto that list.

                              This is a bit inevitable of course, I think you can’t avoid having more than one and doing some cleanup work. The trick I have at the moment when I’m away from my primary todo-capture systems is to use my phone to set a reminder to remind myself to write it down later.

                              For the “don’t keep a long time” thing, I tend to make daily to-do’s and review the previous day’s to-do’s to actively build the following one. This requires a lot of honesty about not doing something, though (otherwise you end up aggregating dead to-do’s in your list). For some things that have been in the list too long, I set a calendar event to come back to them in a week or two (time for me to accept failure/gain a different perspective on the task).

                              1. 2

                                Yeah, for me it’s a bit different: I’m a kind of sitting-all-day guy. I do have a second list at home as well, the two lists for two kinds of tasks. So I guess, if you’re gonna need a list when you’re away from your desk, that list should have a different type of tasks from the one at your desk :D

                                1. 2

                                  The Bullet Journal app on iOS is pretty good for this. It lets you add tasks… but it deletes them after 2 days. You have to put them in your book or they’re gone. So when you’re away from your desk you can enter stuff, but you have to jot it down if you care about it.

                                1. 5

                                  I greatly enjoyed using Bullet Journaling. I tried Project Evo and bought 4 from the Kickstarter, and it’s kinda good, but I kinda regret it. I thought “Oh, I would use this monthly layout! Oh, I would use this weekly todo list” and I never do. I do like the gratitude/wellness prompts, but I could have come up with those myself. I get no particular value out of the app aspect.

                                  The forcing function of rewriting tasks is the clutch bit of Bullet Journaling. I used to put all my tasks in Inbox and snooze them, snooze them, snooze them, until I got anxious and deluged. I think I will go back to Bullet Journals once I fill up my current Evo.

                                  1. 3

                                    Does anyone know any more about this? I’ve never heard of it and it seems very new, but there is already a BallerinaCon in July? Looks like it’s owned by WSO2 who I’ve never heard of before either.

                                    1. 3

                                      It has been about 3 years in development, but we really started talking about it earlier this year. The origins indeed lie in WSO2’s efforts in the integration space (WSO2 is an open-source integration company and had a research project on a code-first approach to integration). Ballerina is an open-source project - at this moment it has 224 contributors.

                                      It is getting a lot of interest in the microservices and cloud-native (CNCF) space because it supports all the modern data formats and protocols (HTTP, WebSockets, gRPC, etc.), has native Docker and Kubernetes integration (builds directly into a Docker image and K8s YAMLs), is type-safe, compiled, has parallel programming and distributed constructs baked in, etc.

                                      You can see lots of language examples in Ballerina by Example and Ballerina Guides.

                                      1. 2

                                        I actually posted this hoping someone would have more info. The language looks interesting and surprisingly far along to be this far under the radar.

                                        1. 1

                                          The company seems to be based in Sri Lanka. It is nice to see cool tech coming from countries like that.

                                          1. 1

                                            The company seems to be based in Sri Lanka. It is nice to see cool tech coming from countries like that.

                                            The project has non-WSO2 contributors as well, and WSO2 also has offices in Mountain View, New York, São Paulo, London, and Sydney, but indeed Colombo (Sri Lanka) is the biggest office, so at the moment my guess would be that Ballerina is 90% from Sri Lanka - which indeed is a fantastic place! :)

                                        1. 1

                                          I think standups are all doomed to devolve into misery without an incredibly strong and dedicated hand leading them and cutting people off. That would be step 1 for me: find someone who is willing to say “No, you’re standing up so you are as uncomfortable as we are… No, your time is up…”

                                          I only find the first two or three sentences of what anyone is doing to be useful. The other useful thing is knowing what problem each person is wrestling with at the time, in case someone knows how to help.

                                          If I was running my Iron Fist standups, I think I could get it down to 30s per person ;)

                                          1. 5

                                            Examples of major changes:


                                            simplified, improved error handling?

                                            I am glad to see they are considering generics for Go2.

                                            1. 5

                                              Russ has more background on this from his Gophercon talk: https://blog.golang.org/toward-go2

                                              The TL;DR for generics is that Go 2 is either going to have generics or is going to make a strong case for why it doesn’t.

                                              1. 1

                                                As it should be…

                                                1. 1

                                                  Glad to hear that generics are very likely on the way from someone on the Go team.

                                                  The impression I got was that generics were not likely to be added without a lot of community push in terms of “Experience Reports”, as mentioned in that article.

                                                  1. 1

                                                    They got those :)

                                                2. 1

                                                  Wouldn’t generic types change Go’s error handling too? I mean that when you can build a function that returns a Result<Something, Error> type, won’t you use that instead of returning Go1 “tuples” ?

                                                  1. 5

                                                    For a Result type, you either need boxing, or a sum type (or a union, with which you can emulate a sum type), or to pay the memory cost of both the value and the error. It’s not automatic with generics.

                                                    1. 1

                                                      I see, thanks for clarifying! :)

                                                    2. 1

                                                      As I understand it, Go has multiple return values and does not have a tuple type, so I’m not sure how your example would work. There are some tickets open looking at improving the error handling, though.

                                                  1. 5

                                                    Google contributes surprisingly little back in terms of open source compared to the size of the company and the number of developers they have. (They do reciprocate a bit, but not nearly as much as they could.)

                                                    For example, this is really visible in areas where they do some research and/or set a standard, like compression algorithms (Zopfli, Brotli) and network protocols (HTTP/2, QUIC): the code and glue they release is minimal.

                                                    It’s my feeling that Google “consumes”/relies on a lot more open source code than they then contribute back to.

                                                    1. 10

                                                      Go? Kubernetes? Android? Chromium? Those four right there are gargantuan open source projects.

                                                      Or are you specifically restricting your horizon to projects that aren’t predominantly run by Google? If so, why?

                                                      1. 11

                                                        I’m restricting my horizon for projects that aren’t run by Google because it better showcases the difference between running and contributing to a project. Discussing how Google runs open source projects is another interesting topic though.

                                                        Edit: running a large open source project for a major company is in large part about control. Contributing to a project where the contributor is not the main player running the project is more about cooperation and being a nice player. It just seems to me that Google is much better at the former than the latter.

                                                        1. 2

                                                          It would be interesting to attempt to measure how much Google employees contribute back to open source projects. I would bet that it is more than you think. When you get PRs from people, they don’t start off with, “Hey so I’m an engineer at Google, here’s this change that we think you might like.” You’d need to go and check out their Github profile and rely on them listing their employer there. In other words, contributions from Google may not look like Contributions From Google, but might just look like contributions from some random person on the Internet.

                                                          1. 3

                                                            I don’t have the hat, but for the next two weeks (I’m moving teams) I am in Google’s Open Source office that released these docs.

                                                            We do keep a list of all Googlers who are on GitHub, and we used to have an email notification for patches that Googlers sent out, before our new policy of “If it’s a license we approve, you don’t need to tell us.” We also gave blanket approval after the first three patches were approved to a certain repo. It was ballpark 5 commits a day to non-Google code when we were monitoring, which would exclude repos that had been given the 3+ approval. Obviously I can share these numbers because they’re all public anyway ;)

                                                            For reasons I can’t remember, we haven’t used the BigQuery datasets to track commits back to Googlers and get a good idea of where we are with upstream patches now. I know I tried myself, and it might be different now, but there was some blocker that prevented me doing it.

                                                            I do know that our policies about contributing upstream are less restrictive than other companies’, and Googlers seem to be happy with what they have (particularly since the change to approved licenses). So I disagree with the idea that Google the company doesn’t do enough to upstream. It’s on Googlers to upstream if they want to, and that’s no different to any other person/group/company.

                                                            1. 2

                                                              So I disagree with the idea that Google the company doesn’t do enough to upstream.

                                                              Yeah, I do too. I’ve worked with plenty of wonderful people out of Google on open source projects.

                                                              More accurately, I don’t even agree with the framing of the discussion in the first place. I’m not a big fan of making assumptions about moral imperatives and trying to “judge” whether something is actually pulling its weight. (Mostly because I believe it’s unknowable.)

                                                              But anyway, thanks for sharing those cool tidbits of info. Very interesting! :)

                                                              1. 3

                                                                Yeah, sorry I think I made it sound like I wasn’t agreeing with you! I was agreeing with you and trying to challenge the OP a bit :)

                                                                Let me know if there’s any other tidbits you are interested in. As you can tell from the docs, we try to be as open as we can, so if there’s anything else that you can think of, just ping me on this thread or cflewis@google.com and I’ll try to help :D

                                                                1. 1

                                                                  FWIW I appreciate the effort to shed some light on Google’s open source contributions. Do you think that contributions could be more systemic/coordinated within Google though, as opposed to left to individual devs?

                                                                  1. 1

                                                                    Do you think that contributions could be more systemic/coordinated within Google though, as opposed to left to individual devs?

                                                                    It really depends on whether a patch needs to be upstreamed or not, I suppose. My gut feeling (I have no data for this, and this is entirely my personal opinion, not my employer’s) is that teams as a whole aren’t going to worry about it if they can avoid it… often the effort to convince the upstream maintainers to accept the patch can suck up a lot of time, and if the patch isn’t accepted then that time was wasted. It’s also wasted time if the project is going in a direction that’s different to yours, and no-one really ever wants to make a competitive fork. It’s far simpler, and a 100% guarantee of things going your way, if you just keep a copy of the upstream project and link that in as a library with whatever patches you want.

                                                                    The bureaucracy of upstreaming, of course, is working as intended. There does have to be guidance and care to accepting patches. Open source != cowboy programming. That’s no problem if you are, say, a hobbyist who is doing it in the evenings here and there, where timeframes and so forth are less pressing. But when you are a team with directives to get your product out as soon as you can, it generally isn’t something a team will do.

                                                                    I don’t think this is a solved problem by any company that really does want to commit back to open source like Google does. And I don’t think the issue changes whether you’re a giant enterprise or a small mature startup.

                                                                    This issue is also why you see so many more open source projects released by companies than companies working with existing software: you know your patches will be accepted (eventually) and you know it’ll go in your direction. It’s a big deal to move a project to community governance, as you then lose that guarantee.

                                                        2. 0


                                                          Did you ever try to compile it?

                                                          1. 2

                                                            Yeah, and?

                                                            1. 0

                                                              How long did it take? On what hardware?

                                                              1. 1

                                                                90 minutes, on a mid-grade desktop from 2016.

                                                                1. 1

                                                                  Cool! You should really explain to Google your build process!

                                                                  And to everybody else, actually.

                                                                  Because a long, convoluted build process concretely reduces the freedom that an open source license gives you.

                                                                  1. 1

                                                                    Cool! You should really explain to Google your build process!

                                                                    Google explained it to me actually. https://chromium.googlesource.com/chromium/src/+/lkcr/docs/linux_build_instructions.md#faster-builds

                                                                    Because a long, convoluted build process concretely reduces the freedom that an open source license gives you.

                                                                    Is the implication that Google intentionally makes the build for Chromium slow? Chromium is a massive project and uses the best tools for the job and has made massive strides in recent years to improve the speed, simplicity, and documentation around their builds. Their mailing lists are also some of the most helpful I’ve ever encountered in open source. I really don’t think this argument holds any water.

                                                        3. 5

                                                          The amount Google invests in securing open source software basically dwarfs everyone else’s investment, it’s vaguely frightening. For example:

                                                          • OSS-Fuzz
                                                          • Patch Rewards for OSS projects
                                                          • Their work on Clang’s Sanitizers and libFuzzer
                                                          • Work on the kernel’s self-protection program and syzkaller
                                                          • Improvements to Linux kernel sandboxing technologies, e.g. seccomp-bpf

                                                          I don’t think anyone else is close, either by number (and severity) of vulnerabilities reported or in proactive work to prevent and mitigate them.

                                                          1. 2

                                                            Google does care a lot about security, and I know of plenty of positive contributions that they’ve made. We could probably spend days listing them all, but in addition to what you’ve mentioned, Project Zero, pushing the PKI towards sanity, Google Summer of Code (of which I was one recipient about a decade ago), etc. all had a genuinely good impact.

                                                            OTOH Alphabet is the world’s second largest company by market capitalization, so there should be some expectation of activity based on that :)

                                                            Stepping out of the developer bubble, it is an interesting thought experiment to consider whether it would be worth trading every open source contribution Google ever made for changing the YouTube recommendation algorithm to stop promoting extremism. (Currently I’m leaning towards yes.)

                                                        1. 5

                                                          It is kind of funny how companies refuse to use software with any freedom restrictions; it is almost as if they know it is a bad thing to have done to you.

                                                          1. 4

                                                            FWIW Google does not refuse to use the GPL; it’s right there in the docs. Using the GPL and other restrictive licenses at Google does have some legal overhead involved, and most teams understandably don’t want to have to jump through those hoops. The non-restrictive licenses like MIT/BSD/Apache (Apache 2 being the license the vast majority of our projects use, because of the patent grant) just make staying compliant easier.

                                                            The Open Source team deeply cares about being compliant, not just because of the legal issues, but because it’s the right thing to do. It’s important to us as engineers who joined the team because we <3 open source in the first place that we do right by authors. I think it’s easy to think about these things as being created by faceless entities, and not realize that the people that staff these teams all had previous lives, and with our team, all of them released open source projects one way or another.

                                                            1. 4

                                                              The GPL is not restrictive, given that it starts with copyright and gives freedoms from that.

                                                              1. 1

                                                                The only time I ever wanted to use GPL was to restrict competition to software I was working on.

                                                            2. 1

                                                              Or default on the opposites of how they acquire 3rd party software for their own software that they build or license for their users with careful exceptions for open source:

                                                              “Google gives you a personal, worldwide, royalty-free, non-assignable and non-exclusive license to use the software provided to you by Google as part of the Services. This license is for the sole purpose of enabling you to use and enjoy the benefit of the Services as provided by Google, in the manner permitted by these terms. You may not copy, modify, distribute, sell, or lease any part of our Services or included software, nor may you reverse engineer or attempt to extract the source code of that software, unless laws prohibit those restrictions or you have our written permission.

                                                              Open source software is important to us. Some software used in our Services may be offered under an open source license that we will make available to you. There may be provisions in the open source license that expressly override some of these terms.”

                                                              Right there Google tells you what kind of software is most valuable to them and funds all the FOSS they support. Maybe that should be more developers’ default, too, if they can find a business model to pull it off. ;)

                                                            1. 4

                                                              If I move off of OS X, it will be to Windows. For what I use my machine for, the applications simply aren’t there on any Unix other than OS X.

                                                              1. 4

                                                                I’ve been using Linux as my main desktop for about three years and have used all of the major desktop environments. KDE Plasma looks good, but either its file indexer (baloo) holds one CPU core hostage or the desktop crashes if you type specific words (or too fast) in the launcher; in short, a horrible experience.

                                                                I used GNOME for about a year and it was not much better: the plugins/extensions are often buggy, and especially under Wayland it crashes often and can’t restart like on X11, i.e. you lose all of your session state. Additionally, it feels laggy even on a beefed-up machine (6 cores, latest-gen AMD GPU) because the compositor is single-threaded. GDM, GNOME’s display manager, is also sluggish, runs (since GNOME 3.26) a process for each settings menu, and starts a PulseAudio session, which breaks Bluetooth headset connections. Also unusable for a productive environment, in my opinion.

                                                                Eventually I switched back to the desktop environment I started my Linux journey with, namely XFCE, with LightDM as the display manager. With compton as the compositor it looks quite okay, is rock solid (relative to the other DEs I used), and everything feels snappy. As a note, I ran all of the DEs on Arch Linux, and I haven’t even talked about display scaling and multi-monitor usage, still a horror story.

                                                                TL;DR The year of the Linux desktop is still far away in the future.

                                                                1. 5

                                                                  I wouldn’t really know where to go. I have an Arch desktop at home (quad Xeon, 24 GB RAM, 2 SSDs), while the machine is much faster than my MacBook Pro, I usually end up using the MacBook Pro at home (and always at work), simply because there are no equivalents for me for applications like OmniGraffle, Pixelmator/Acorn, Microsoft Office (project proposals are usually floated in Word/Excel format with track changes), Beamer, etc. Also, both at work and home, AirPlay is the standard way to get things on large screens, etc.

                                                                  Also, despite what people are saying. The Linux desktop is still very buggy. E.g. I use GNOME on Wayland with the open amdgpu drivers on Arch (on X I can’t drive two screens with different DPIs). And half of the time GNOME does not even recover from simple things like switching the screen on/off (the display server crashes, HiDPI applications become blurry, or application windows simply disappear).

                                                                  Windows would probably have more useful applications for me than Linux or BSD (since many open source applications run fine on WSL). But my brain is just fundamentally incompatible with any non-unix.

                                                                  1. 8

                                                                    Linux has been my main desktop for 20 years or so? Although I am a software developer and either do not need the applications you mentioned or use alternatives.

                                                                    Anyway, what I actually wanted to say: on the hardware side I’ve had few issues with Linux, certainly not more than with Windows or OS X, and at least with Linux (if I put the time into it) the issues can generally be fixed. I’ve been running multiple monitors for years, and hibernation used to be a pain in the ass in the early 2000s but has been good for me on a wide array of hardware for years (definitely better than both Windows and OS X, which run on supported hardware!). Granted, I can’t blindly grab hardware off the shelf and have to do some research up front on supported hardware. But that’s what you get if hardware vendors do not officially support your OS, and it does come with many upsides as well.

                                                                    I run pretty bare systems though and try to avoid Windows-isms that bring short-term convenience but also additional complexity, so no systemd, PulseAudio, or desktop environments like GNOME for me. Still, I’m running Linux because I want to be able to run Dropbox (actually pCloud in my case), Steam, etc.

                                                                    1. 4

                                                                      Linux has been my main desktop for 20 years or so?

                                                                      Different people, different requirements. I have used Linux and BSD on the desktop from 1994-2007. I work in a group where almost everybody uses Macs. I work in a university where most of the paperwork is done in Word (or PDF for some forms). I have a fair teaching load, so I could mess around for two hours to get a figure right in TikZ (which I sometimes do if I think it is worth the investment and have the time) or I could do it in five minutes in OmniGraffle and have more time to do research.

                                                                      It’s a set of trade-offs. Using a Mac saves a lot of time and reduces friction in my environment. In addition, one can run pretty much the same open source applications as on Linux via Homebrew.

                                                                      I do use Linux remotely every day, for deep learning and data processing, since it’s not possible to get a reasonable Mac to do that work.

                                                                      Anyway, what I actually wanted to say: on the hardware side I’ve had little issues with Linux, certainly not more than with Windows or OS X and at least with Linux (if I put the time into it) the issues can generally be fixed.

                                                                      The following anecdote is not data, but as a lecturer I see a lot of student presentations. Relatively frequently, students who run Linux on their laptops have problems getting projectors working with their laptops, often ending up borrowing a laptop from one of their colleagues. Whereas the Mac-wielding students often forget their {Mini DisplayPort, USB-C} -> VGA connectors, but have no problems otherwise.

                                                                  2. 2

                                                                    Same. I don’t use them every day, but I do need Adobe CS. I also want (from my desktop) solid support for many, many pixels of display output. Across multiple panels. And for this, Windows tends to be better than Mac these days.

                                                                    1. 1

                                                                      The Windows Subsystem for Linux is also surprisingly good. I would say that it offers just enough for most OS X users to be happy. Linux users, maybe not.

                                                                    2. 1

                                                                      One thing I’m finding is that a lot of Mac apps I rely on have increasingly capable iOS counterparts (Things, OmniOutliner, Reeder, etc.) so I could potentially get away with not having desktop versions of those. That gets me closer to cutting my dependency on macOS, though there’s still a few apps that keep me around (Sketch, Pixelmator) and the ever-present requirement of having access to Xcode for iOS development.

                                                                    1. 4

                                                                      I can’t speak for Go’s genesis within Google, but outside of Google, this underanalysed political stance dividing programmers into “trustworthy” and “not” underlies many arguments about the language.

                                                                      Correct, but not unique. Java is probably as popular as it is precisely because encapsulation is so strong, allowing you to protect your code from others who you worry will do harm.

                                                                      On the unskilled programmers side, the language forbids features considered “too advanced.”… This is the world in which Go programmers live - one which is, if anything, even more constrained than Java 1.4 was.

                                                                      My feeling about Go is that it’s defined more by what isn’t in there than by what is. What isn’t in there allows me to read the code in the stdlib and understand it. I can look at any piece of code from another developer and not worry that there are multiple abstraction levels I’m missing that alter the behavior. There isn’t enough in the language to support subset dialects like you’ll see with something like Perl or even Java (a Spring code base looks very different to a POJO code base, which looks very different to the code of someone obsessed with interfaces, which looks very different to…)

                                                                      I think Go succeeds very admirably along the dimension of “everyone at the company should be able to read everyone else’s code and modify it safely.” That necessarily means some blunting of the scissors. If you want the bleeding-edge tool, Rust is a great alternative, but there’s enough in it that I would be afraid to let a junior programmer at it without guidance, and people new to Rust will have a long ramp-up time to figure out how the code base works and be able to modify it.

                                                                      EDIT: This is not to say I don’t have my own axes to grind. In particular, I don’t like not being able to easily differentiate between types of errors. I don’t feel like interface{} is a good solution to anything, but I don’t necessarily have any desire for generics either. I don’t care for the way contexts work, although I understand why they exist (I like this proposal about this).
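
                                                                      To illustrate the error-differentiation complaint: without richer language support, the common pattern is to define a custom error type and have callers type-assert on it. This is a minimal sketch; the NotFoundError type and lookup function are hypothetical, invented for the example rather than taken from any real codebase.

                                                                      ```go
                                                                      package main

                                                                      import "fmt"

                                                                      // NotFoundError is a hypothetical error type for a missing key.
                                                                      type NotFoundError struct {
                                                                      	Key string
                                                                      }

                                                                      func (e *NotFoundError) Error() string {
                                                                      	return fmt.Sprintf("key %q not found", e.Key)
                                                                      }

                                                                      // lookup is a stand-in that always fails with a NotFoundError.
                                                                      func lookup(key string) (string, error) {
                                                                      	return "", &NotFoundError{Key: key}
                                                                      }

                                                                      func main() {
                                                                      	_, err := lookup("user:42")
                                                                      	// Differentiating error types requires a type assertion on the
                                                                      	// concrete type; nothing in lookup's signature advertises which
                                                                      	// error types it can return.
                                                                      	if nf, ok := err.(*NotFoundError); ok {
                                                                      		fmt.Println("not found:", nf.Key)
                                                                      	} else if err != nil {
                                                                      		fmt.Println("other error:", err)
                                                                      	}
                                                                      }
                                                                      ```

                                                                      The burden falls on the caller to know the concrete type, which is exactly the ergonomic gap being complained about.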

                                                                      1. 10

                                                                        I have found meaning by:

                                                                        1. Letting go of thinking that I, at 34, will do something that will meaningfully change the world. I am too late in my life to realistically expect to have one of those Big Ideas, and I came too late to the party of realizing I find meaning in making the planet better (I am really ambivalent about humanity, whereas when I grew up I felt like technology would improve everything for everyone, which is not a belief I hold anymore). I think you and I have had very similar feelings; I felt very much like the world wasn’t benefitting from what I did, so why bother?
                                                                        2. Assuming I won’t have that impact myself, my best bet is to act as an enabler for someone who can. So I work on foundational products like cloud systems, backend APIs, that sort of thing. The cheaper and more accessible foundational systems get, the more likely it is that I enable someone who will do something amazing. That way I don’t have to buy into the vision of a consumer product in order to get meaning. Consumer products come and go, and very few are ever going to make the world a better place (maybe the last one was Facebook, or even as far back as Google Maps). Two years ago I took a position to try to optimize for the day-to-day (working on smaller systems after burning out on a big one) that caused the enabling dimension to suffer, and I lost my sense of purpose and meaning. That has affected my excitement about my work much more negatively than the issues I left behind.

                                                                        I think of projects like Mozilla, cloud, even something like TensorFlow or Kubernetes, as being these kinds of enablers.

                                                                        Sure, the most meaningful thing in my life is my family. The most meaningful things in your life are almost certainly going to be outside of work. I’m trying to do more meaningful things like Hour of Code and encourage girls and minorities into STEM (again following the enabling track). But it is important, I think, to find some sort of thing you can extract from the day-to-day as worth it to you, otherwise you’re just going to stop getting out of bed.

                                                                        1. 3

                                                                          I think when I posed the question, your second bullet point is what I secretly wanted to hear the most.

                                                                          1. 3

                                                                            Last night, I saw @cflewis and @freddyb as two sides of the same coin in terms of doing meaningful work in IT. One is making sure it’s easy to create things that benefit humanity using technology; one is making sure it’s easy to consume them. In each case, you want what you make to be designed for high uptake. Usability, marketing/branding, and cost trump internal tech on either of these almost every time. Then, if dollars and/or code contributions are rolling in, you can use your influence to make sure whatever it is goes in public-benefiting rather than predatory directions.

                                                                            Firefox is actually a good example where they make money off ad-driven search but let you do private search easily. Always having an affordable, private version of anything ad-driven is another example. On the organizational side, you might charter or contract in basic protections/benefits for everyone from employees to users. On the technology stack, you might build on better foundations to shift more money or effort into quality tech that deserves it. A quick example from my field would be routers, which most vendors half-ass with shoddy tech, instead using OpenBSD, a secure admin interface, and automatic updates. On the web side, it might be companies using tech like Erlang to be efficient and highly reliable, with big success stories getting more people investing in its tooling.

                                                                            There’s a lot of possibilities that involve doing something people want to use or buy that’s just more effective and/or less evil than the norm. Sadly, the norm has so much of those two that there are plenty of ways to differentiate on them; the relief is that there are plenty of ways to differentiate on them. :)

                                                                        1. 25

                                                                          I used to do the things listed in this article, but very recently I’ve changed my mind.

                                                                          The answer to reviewing code you don’t understand is you say “I don’t understand this” and you send it back until the author makes you understand in the code.

                                                                          I’ve experienced too much pain from essentially rubber-stamping with a “I don’t understand this. I guess you know what you’re doing.” And then again. And again. And then I have to go and maintain that code and, guess what, I don’t understand it. I can’t fix it. I either have to have the original author help me, or I have to throw it out. This is not how a software engineering team can work in the long-term.

                                                                          More succinctly: any software engineering team is upper-bound architecturally by the single strongest team member (you only need one person to get the design right) and upper-bound code-wise by the single weakest/least experienced team member. If you can’t understand the code now, you can bet dollars to donuts that any new team member or new hire isn’t going to either (the whole team must be able to read the code, because you don’t know what the team churn is going to be). And that’s poison to your development velocity. The big mistake people make in code review is to think the team is bound by the strongest team member code-wise too, and defer to their experience, rather than digging in their heels and saying “I don’t understand this.”

                                                                          The solution to “I don’t understand this” is plain old code health. More functions with better names. More tests. Smaller diffs to review. Comments about the edge cases and gotchas that are being worked around but you wouldn’t know about. Not thinking that the code review is the place to convince the reviewer to accept the commit, because no-one will ever go back to the review if they don’t understand the code as an artifact that stands by itself. If you don’t understand it as a reviewer in less than 5 minutes, you punt it back and say “You gotta do this better.” And that’s hard. It’s a hard thing to say. I’m beginning to come into conflict about it with other team members who are used to getting their ungrokkable code rubber-stamped.
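
                                                                          As a small, hypothetical illustration of what “more functions with better names” buys a reviewer (the User type and CanPurchase helper below are invented for the example, not taken from any real codebase):

                                                                          ```go
                                                                          package main

                                                                          import "fmt"

                                                                          type User struct {
                                                                          	Age       int
                                                                          	Country   string
                                                                          	Suspended bool
                                                                          }

                                                                          // Before the extraction, the reviewer must decode an inline condition:
                                                                          //     if u.Age >= 18 && u.Country == "US" && !u.Suspended { ... }
                                                                          // After it, the intent travels with the name.
                                                                          func (u User) CanPurchase() bool {
                                                                          	return u.Age >= 18 && u.Country == "US" && !u.Suspended
                                                                          }

                                                                          func main() {
                                                                          	u := User{Age: 30, Country: "US"}
                                                                          	fmt.Println(u.CanPurchase()) // true
                                                                          }
                                                                          ```

                                                                          The behavior is identical; the difference is that the next reader reviews a name and a small, testable function instead of re-deriving the business rule.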

                                                                          But code that isn’t understandable is a failure of the author, not the reviewer.

                                                                          1. 7

                                                                            More succinctly: any software engineering team is upper-bound architecturally by the single strongest team member (you only need one person to get the design right) and upper-bound code-wise by the single weakest/least experienced team member.

                                                                            Well put – hearing you type that out loud makes it incredibly apparent.

                                                                            Anywhoo, I think your conclusion isn’t unreasonable (sometimes you gotta be the jerk), but the real problem is upstream. It’s a huge waste when bad code makes it all the way to review and then needs to be written again; much better would be to head it off at the pass. Pairing up the weaker/more junior software engineers with the more experienced ones works well, but is easier said than done.

                                                                            1. 4

                                                                              hmm, you make a good point and I don’t disagree. Do you think the mandate on the author to write understandable code becomes weaker when the confusing part is the domain, and not the code itself? (Although I do acknowledge that expressive, well-structured and well-commented code should strive to bring complicated aspects of the problem domain into the picture, and not leave it up to assumed understanding.)

                                                                              1. 3

                                                                                I think your point is very much applicable. Sometimes it takes a very long time to fully understand the domain, and until you do, the code will suffer. But you have competing interests. For example, at some point, you need to ship something.

                                                                                1. 2

                                                                                  Do you think the mandate on the author to write understandable code becomes weaker when the confusing part is the domain, and not the code itself?

                                                                                  That’s a good question.

                                                                                  In the very day-to-day, I don’t personally find that code reviews have a problem at the domain level. Usually I would expect/hope that there’s a design doc, or package doc, or something, that explains things. I don’t think we should expect software engineers to know how a carburetor works in order to create models for a car company; the onus is on the car company to provide the means to find out how the carburetor works.

                                                                                  I think it gets much trickier when the domain is actually computer-science based, as we’ve kind of all resolved that there are people who know how networks work and they write the networking code, and people who know how kernels work and they write the kernel code, etc. We don’t take the time to do the training and assume that if someone wants to know about it, they’ll learn it themselves. But in that instance, I would hope the reviewer is also a domain expert, which on small teams probably isn’t viable.

                                                                                  And like @burntsushi said, you gotta ship sometimes and trust people. But I think the pressure eases as the company grows.

                                                                                  1. 1

                                                                                    That makes sense. I think you’ve surfaced an assumption baked into the article which I wasn’t aware of, having only worked at small companies with lots of surface area. But I see how it comes across as particularly troublesome advice outside of that context.

                                                                                2. 4

                                                                                  I’m beginning to come into conflict about it with other team members

                                                                                  How do you resolve those conflicts? In my experience, everyone who opens a PR review finds their code to be obvious and self-documenting. It’s not uncommon to meet developers lacking the self-awareness required to improve their code along the lines of your objections. For those developers, I usually focus on quantifiable metrics like “it doesn’t break anything”, “it’s performant”, and “it does what it’s meant to do”. Submitting feedback about code quality often seems to regress to a debate over first principles. The result is that you burn social capital with the entire team, especially when working on teams without a junior-senior hierarchy, where no one is a clear authority.

                                                                                  1. 2

                                                                                    Not well. I don’t have a good answer for you. If someone knows, tell me how. If I knew how to simply resolve the conflicts I would. My hope is that after a while the entire team begins to internalize writing for the lowest common denominator, and it just happens and/or the team backs up the reviewer when there is further conflict.

                                                                                    But that’s a hope.

                                                                                    1. 2

                                                                                      It’s not uncommon to meet developers lacking the self-awareness required to improve their code along the lines of your objections. For those developers, I usually focus on quantifiable metrics like “it doesn’t break anything”, “it’s performant”, and “it does what it’s meant to do”. Submitting feedback about code quality often seems to regress to a debate over first principles.

                                                                                      Require sign-off from at least one other developer before they can merge, and don’t budge on it – readability and understandability are the most important issues. In 5 years people will give precisely no shits that it ran fast 5 years ago, and 100% care that the code can be read and modified by usually completely different authors to meet changing business needs. It requires a culture shift. You may well need to remove intransigent developers to establish a healthier culture.

                                                                                      The result is that you burn social capital with the entire team, especially when working on teams without a junior-senior hierarchy, where no one is a clear authority.

                                                                                      This is a bit beyond the topic at hand, but I’ve never had a good experience in that kind of environment. If the buck doesn’t stop somewhere, you end up burning a lot of time arguing and the end result is often very muddled code. Even if it’s completely arbitrary, for a given project somebody should have a final say.

                                                                                      1. 1

                                                                                        The result is that you burn social capital with the entire team, especially when working on teams without a junior-senior hierarchy, where no one is a clear authority.

                                                                                        This is a bit beyond the topic at hand, but I’ve never had a good experience in that kind of environment. If the buck doesn’t stop somewhere, you end up burning a lot of time arguing and the end result is often very muddled code. Even if it’s completely arbitrary, for a given project somebody should have a final say.

                                                                                        I’m not sure.

                                                                                        At the very least, when no agreement is reached, the authority should document very carefully and clearly why they made a given decision. When that happens, everything goes smoothly.

                                                                                        In a few cases, I’ve seen a really seasoned authority change his mind while writing up this kind of document, and end up choosing the most junior dev’s proposal. And I’ve also seen a younger authority tank a LARGE project just because he took any objection as a personal attack. When the doom came (with literally hundreds of thousands of euros wasted) he kindly left the company.

                                                                                        I’ve also seen a team of 5 people work very well together for a few years despite daily debates. All the debates were respectful and technically rooted. I was junior back then, but my opinions were treated on par with those of more senior colleagues. And we were always looking for syntheses, not compromises.

                                                                                    2. 2

                                                                                      I agree with the sentiment to an extent, but there’s something to be said for learning a language or domain’s idioms, and honestly some things just aren’t obvious at first sight.

                                                                                      There’s “ungrokkable” code as you put it (god knows I’ve written my share of that), but there’s also code you don’t understand because you have had less exposure to certain idioms, so at first glance it is ungrokkable, until it no longer is.

                                                                                      If the reviewer doesn’t know how to map over an array, no amount of them telling me they don’t understand will make me push to a new array inside a for-loop. I would rather spend the time sitting down with people and trying to level everyone up.
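
                                                                                      To make the idiom gap concrete, here’s a small sketch of the two styles (the variable names are made up for illustration):

                                                                                      ```javascript
                                                                                      // Two ways to double every price: the for-loop version a newer
                                                                                      // reader may find familiar, and the map version idiomatic JS.
                                                                                      const prices = [10, 20, 30];

                                                                                      // Imperative: push into a new array inside a for-loop.
                                                                                      const doubledLoop = [];
                                                                                      for (let i = 0; i < prices.length; i++) {
                                                                                        doubledLoop.push(prices[i] * 2);
                                                                                      }

                                                                                      // Declarative: map expresses "transform each element" directly.
                                                                                      const doubledMap = prices.map((p) => p * 2);

                                                                                      // Both produce [20, 40, 60].
                                                                                      ```

                                                                                      Same result either way; the map version just states the intent without the bookkeeping.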

                                                                                      To give a concrete personal example, there are still plenty of uses of spreading and destructuring in JavaScript that trip me up when I read them quickly. But I’ll build up a tolerance to them, and soon they won’t.
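
                                                                                      For instance, here’s a sketch of the kind of spread-plus-destructuring combination that can read oddly at first glance (the config shape here is invented for illustration):

                                                                                      ```javascript
                                                                                      // Object spread: later properties win, so overrides beat defaults.
                                                                                      const defaults = { retries: 3, timeout: 1000 };
                                                                                      const overrides = { timeout: 5000 };
                                                                                      const config = { ...defaults, ...overrides };

                                                                                      // Destructuring with a default value and a rest pattern in one
                                                                                      // line: retries comes from config, verbose falls back to false,
                                                                                      // and rest collects everything else.
                                                                                      const { retries, verbose = false, ...rest } = config;

                                                                                      // config is { retries: 3, timeout: 5000 }; rest is { timeout: 5000 }.
                                                                                      ```

                                                                                      Dense, but once the “later spreads win” and “rest collects the leftovers” rules are internalized, it stops being ungrokkable.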

                                                                                    1. 12

                                                                                      It is very, very unlikely that he was rejected purely over a binary tree question. He says in the OP he had 7 interviews. When a hiring committee looks at the data from 7 interviews, a single bad interview isn’t enough to torpedo a whole application. Sometimes an interviewer and an interviewee just don’t click. It might have been that he was just middling in the interviews, and middling is great for most companies.

                                                                                      From all I’ve ever read on this topic, it seems to me that Howell thinks his previous record should have counted in the process, and that’s a legitimate feeling, but that’s not the way Google interviews. A good record gets you through the door, but once you’re in the interview room, interviewers barely look at the resume past some warmup questions or a “Hey cool, you did Homebrew? That’s wild! I use that all the time!” and then move on to the questions they are used to asking and calibrating their recommendations against. Personally, I am pretty glad that external record isn’t counted; I like that a fresh university graduate, or someone who has been stuck working in secret at a company that doesn’t produce exciting software, has the same starting potential as Howell. But I get how that can burn.

                                                                                      1. 3

                                                                                        We find experience can actually work against candidates; if the interviewer reads too deeply into a resume and well known projects on it they can set unrealistically high expectations on how they’ll do in the interview.

                                                                                        Starting from a more or less blank slate is really the only fair way to do interviews in volume with lots of different interviewers.

                                                                                      1. 23

                                                                                        As a programmer, I read this and think, yes, that’s an unsurpassable moat. How will Apple–much less OpenStreetMap–ever catch up?

                                                                                        As someone who worked with Sanborn fire insurance maps in a previous career, I’m not so sure. Sanborn maps had even more detail than today’s Google Maps–they showed not only building footprints but building function, number of floors, construction materials, and other features of interest to fire insurers (sometimes interior structural walls were indicated). They existed for big cities but also for places like Circle City, Alaska and Adrian, Michigan. I have a hard time imagining the level of effort that went into creating and maintaining these maps (they were literally updated by mailing out sheets with bits of paper you would cut out and paste over your existing map, usually at the level of individual buildings). But people managed to do it without aerial or satellite imagery, or ML image recognition, or any of the other tools available to us today. It’s hard to imagine that Apple couldn’t–if it wanted to–replicate something a much smaller company (the Sanborn Map Company employed 700 people at its peak) was doing a hundred years ago.

                                                                                        1. 10

                                                                                          The Sanborn example is a really good one. As a fire insurance company, they invested in good maps because they were potentially liable for a lot of unexpected costs from inaccurately-estimated risks. The maps were literally Sanborn’s business. Apple sells phones and computers so they just need a good-enough map to keep people from jumping ship to Android or allowing GMaps an enclave in iOS territory. What does Google need this level of detail for? Ultimately they sell ads, and they’ve been very creative in figuring out ways to expand the potential surface area for ad sales and improve consumer data flowing back to them.

                                                                                          1. 3

                                                                                            I agree with you, it’s all about who the detail is for. It’s not unsurpassable. It really depends what you’re trying to do.

                                                                                            Mapping is one of the most subjective pieces of data you can offer: a map’s value is only what the reader gets from it. That’s why we have so many… hiking maps, road maps, the fire maps you point out. Is the information that the article notes others don’t have really that valuable? Not to me. I’m sure it’s valuable to some. I’ve tried Apple Maps again because it worked nicely with the iPhone X out the box (note to app developers: you don’t get second chances, you gotta be there at the beginning) and it seems fine for road maps, which is what I need it for. I also like the Yelp reviews that are embedded. I remain skeptical about the traffic information, though.

                                                                                            Waze is a really good example of a map that’s hyperfocused on a single use case: driving. You got roads. You got traffic. You got where you can get a donut. You don’t need to know the shape of a building.

                                                                                            I guess I just don’t find the idea of moats all that compelling. I think we’ve seen time and again in tech and elsewhere that when people see a moat, what you usually have is a very broad offering which leaves opportunities for very focused offerings to do better (even Google was this at the beginning, Yahoo had all the content out the wazoo, and everyone thought you couldn’t compete with that, and Larry and Sergey just built a very good search engine that rocked the one Yahoo had).

                                                                                          2. 3

                                                                                            I sat next to an Apple Maps engineer on a flight recently. Was told it’s a 1500-person org. Kind of shocked me.

                                                                                            1. 1

                                                                                              Most of those people are acquiring and processing data, similar to other mapping orgs.

                                                                                          1. 2

                                                                                            Test doubles for services is a real problem. I am glad this exists.

                                                                                            That said: conflating mocks, fakes, and stubs does not convey the authority necessary to show that you can help developers work better. It conveys that you don’t really know what you are doing. The blog post mentions fakes, which isn’t what this service offers. The docs themselves indicate to me that what is really being offered is stubs.

                                                                                            I would be really worried about investing time in this service when it’s not clear what I am being offered, and it’s not clear that the developers really understand what they are offering me either.

                                                                                            For background, this is a good rundown between fakes, mocks and stubs: https://stackoverflow.com/questions/346372/whats-the-difference-between-faking-mocking-and-stubbing

                                                                                            EDIT: Also, $50 a month for such a service sounds pretty high. I’m not usually one for “but I can spin up my own [x] on [y]” as it usually comes with the caveat of “but I didn’t do any of the number crunching about how to maintain this”, but you could build something pretty quickly to deploy on Heroku or App Engine and get the same result.

                                                                                            1. 5

                                                                                              Hi! Blog post author here. I appreciate the feedback that conflating mocks/fakes/stubs might make it less clear to some potential users what exactly the service is offering. Overall, I am less interested in using precisely Martin Fowler’s definitions of these overloaded terms than conveying in practical terms to the target audience what the service does, but I may revisit the language. The service presently offers “mock servers”, not “fakes”.

                                                                                              EDIT: the accepted answer for your linked stackoverflow post says “in fact, it doesn’t really matter what you call it” ;)

                                                                                            1. 2

                                                                                              For those looking for a TL;DR: my takeaway from the first 10 minutes is that Troy shows Face ID has UX problems in many situations he finds himself in, such as wearing sunglasses outdoors. Imma take a stab at guessing he makes the case that people might turn it off because of this, which is bad for security too.

                                                                                              I have bad, but forgivable, usability issues too, like wanting to read my phone when I have my CPAP machine on, or when part of my head is hidden by a pillow while I’m lying down.

                                                                                              I don’t understand why Apple doesn’t allow Face ID to train multiple faces, so you could train it with your sunglasses or your pillow or whatever. Maybe it’s a processing limitation, that it takes too much time to run through the options? I’d be happy if the fallback to PIN were quicker or otherwise accessible from the get-go too.