1. 24

    Half this article is out of date as of 2 days ago. GOPATH is mostly going to die with vgo, as is the complaint about deps.

    Go is kind of an example of what happens when you focus all effort on engineering and not research.

    Good things go has:

    • Go has imo the best std library of any language.
    • Go has the best backwards compatibility I have seen (I’m pretty sure code from Go version 1.0 still works today).
    • Go has the nicest code manipulation tools I have seen.
    • The best race condition detector tool around.
    • An incredibly useful in practice interface system. (I once used the standard library HTTP server over a serial port, because net.Listener is a simple interface.)
    • The fastest compiler to use, and to build from source.
    • Probably the best cross compilation story of any language, and uniformity across platforms, including ones you haven’t heard of.
    • One of the easiest to distribute binaries across platforms (this is why hashicorp, cockroachdb, ngrok etc choose go imo).
    • A very sophisticated garbage collector with low pause times.
    • One of the best runtime performance to ease of use ratios around.
    • One of the easier to learn languages around.
    • A compiler that produces byte for byte identical binaries.
    • Incredibly useful libraries maintained by Google (e.g. here’s a complete SSH client and server anyone can use: https://godoc.org/golang.org/x/crypto/ssh).
    • Lots of money invested in keeping it working well from many companies: Cloudflare, Google, Uber, HashiCorp and more.
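
    A minimal sketch of why that interface point matters: anything that can hand the server a net.Conn can host the stdlib HTTP server. Below, a hypothetical oneConnListener (the name and helper are made up for illustration) serves a single pre-established byte stream, with net.Pipe standing in for a serial port:

    ```go
    package main

    import (
        "bufio"
        "fmt"
        "io"
        "net"
        "net/http"
    )

    // oneConnListener is a hypothetical net.Listener that yields one
    // pre-established connection (e.g. a serial port wrapped in a net.Conn)
    // and then blocks forever on subsequent Accept calls.
    type oneConnListener struct{ conn chan net.Conn }

    func (l *oneConnListener) Accept() (net.Conn, error) {
        c, ok := <-l.conn
        if !ok {
            return nil, io.EOF
        }
        return c, nil
    }
    func (l *oneConnListener) Close() error   { return nil }
    func (l *oneConnListener) Addr() net.Addr { return &net.UnixAddr{Name: "one", Net: "unix"} }

    func main() {
        server, client := net.Pipe() // stand-in for any byte stream

        ln := &oneConnListener{conn: make(chan net.Conn, 1)}
        ln.conn <- server

        go http.Serve(ln, http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            fmt.Fprint(w, "hello over a pipe")
        }))

        // Speak HTTP/1.1 over the raw client end of the pipe.
        fmt.Fprint(client, "GET / HTTP/1.1\r\nHost: x\r\n\r\n")
        resp, _ := http.ReadResponse(bufio.NewReader(client), nil)
        body, _ := io.ReadAll(resp.Body)
        fmt.Println(resp.StatusCode, string(body))
    }
    ```

    That is the serial-port trick in spirit: net.Listener is just Accept/Close/Addr, so net/http never needs to know it isn’t speaking TCP.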

    Go is getting something that looks like a damn good versioning story, just way too late.

    Go should have, in my opinion and in order of importance:

    • Ways to express immutability as a concurrent language.
    • More advanced static analysis tools that can prove properties of your code (perhaps linked with the above).
    • Generics.
    • Some sort of slightly more sophisticated pattern matching.

    Go maybe should have:

    • More concise error handling?
    1. 27

      I have been involved with Go since the day of its first release, so almost a decade now, and it has been my primary language for almost as long. I have written the Solaris port, the ARM64 port, and the SPARC64 port (currently out of tree). I have also written much Go software for myself and for others.

      Go is my favorite language, despite everything I write below this line.

      Everything you say is true, so I will just add more to your list.

      My main problem with Go is that, as an operating system, it’s too primitive, it’s incomplete. Yes, Go is an operating system, almost. Almost, but not quite. Half an operating system. As an operating system it lacks things like memory isolation, process identifiers, and some kind of a distributed existence. Introspection exists somewhat, but it’s very weak. Let me explain.

      Go presents the programmer with abstractions traditionally presented by operating systems. Take concurrency, for example. Go gives you goroutines, but takes away threads, and takes away half of processes (you can fork+exec, but not fork). Go gives you the net package instead of the socket interface (the latter is not taken away, but it’s really not supposed to be used by the average program). Go gives you net/http, instead of leaving you searching for nginx, or whatever. Life is good when you use pure Go packages and bad when you use cgo.

      The idea is that Go not only has these rich features, but that when you are programming in Go, you don’t have to care about all the OS-level stuff underneath. Go is providing (almost) all abstractions. Go programming is (almost) the same on Windows, OpenBSD and Plan 9. That is why Go programs are generally portable.

      I love this. I am a Plan 9 person, so you might imagine my constant annoyance with Unix. Go isolates me from that, mostly, and it is great, it’s fantastic.

      But it doesn’t go deep enough.

      A single Go program instance is one operating system running some number of processes (goroutines), but two Go program instances are two operating systems, instead of one distributed operating system, and in my mind that is one too many operating systems.

      “Deploying” a goroutine is one go statement away, but deploying a Go program still requires init scripts, systemds, sshs, puppets, clouds, etc. Deploying a Go program is almost the same as deploying C, or PHP, or whatever. It’s out of scope for the Go operating system. Of course that’s a totally sensible option, it just doesn’t align with what I need.

      My understanding about Erlang (which I know little of, so forgive me if I misrepresent it) is that once you have an Erlang node running, starting a remote Erlang process is almost as easy as starting a local Erlang process. I like that. I don’t have to fuck with kubernetes, ansible, it’s just a single, uniform, virtual operating system.

      Goroutines inside a single process have very rich communication methods: Go channels, even mutexes if you desire them. But goroutines in different processes are handicapped. You have to think about how to marshal data and RPC protocols. The difficulty of getting two goroutines in different processes to talk to each other is about the same as getting some C, or Python code, to talk to Go. Since I only want Go to talk to Go, I don’t think that’s right. It should be easier, and it should feel native. Again, I think Erlang does better here.

      Goroutines have no process ids. This makes total sense if you restrict yourself to a single-process universe, but since I want a multi-process universe, and I want to avoid thinking about systemds and dockers, I want to supervise goroutines from Go. Which means goroutines should have process ids, and I should be able to kill and prioritize them. Erlang does this, of course.

      What I just described in the last two paragraphs would preclude shared memory. I’m willing to live with that in order to get network transparency.

      Go programs have ways to debug and profile themselves. Stack traces are one function call away, and there’s an easy-to-use profiler. But this is not enough. Sometimes you need a debugger. Debugging Go programs is an exercise in frustration. It’s much more difficult than debugging C programs.

      I am probably one of the very few people on planet Earth who knows how to profile/debug Go programs with a grown-up tool like DTrace or perf. And that’s because I know assembly programming and the Go runtime very well. This is unacceptable. Some people would hope that something would happen to Go so that it works better with these tools, but frankly, I love the “I am an operating system” aspect of Go, so I would want to use something Go-native. But I want something good.

      This post is getting too long, so I will stop now. Notice I didn’t feel a need for generics in these 9 years. I must also stress that I am a low-level programmer. I like working in the kernel. I like C and imperative programming. I am not one of those guys who prefers high-level languages (that do not have shared memory) and so naturally wants Go to be the same. On the contrary. I found out what I want only through a decade of Go experience. I have never used a language without shared memory before.

      I think Go is the best language for writing command-line applications. Shared memory is very useful in that case, and the flat, invisible goroutines prevent language abuse and “just work”. Lack of a debugger, etc., is not important for command-line applications, and command-line applications are run locally, so you don’t need dockers and chefs. But when it comes to distributed systems, I think we could do better.

      In case it’s not clear, I wouldn’t want to change Go, I just want a different language for distributed systems.

      1.  

        I’ve done some limited Erlang programming and it is very much a distributed OS, to the point where you are writing a system more than a program. You even start third-party code as “applications” from the Erlang shell before you can make calls to them. Erlang’s fail-fast error handling, where you let supervisors deal with problems, is also really fun to use.

        I haven’t used DTrace much either, but I have seen the power; something like that on running Go systems would also be neat.

        1.  

          Another thing that was interesting about Erlang is how the standard library heavily revolves around timers and state machines, because anything could fail at any point. For example, gen_server:call() (the way to call another process implementing the generic service interface) by default has a 5-second timeout that will crash your process.
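
          Not Erlang, but a rough Go analog of that call-with-default-timeout idea (the names and the short 50ms deadline are made up for the sketch; gen_server’s default is 5 seconds):

          ```go
          package main

          import (
              "errors"
              "fmt"
              "time"
          )

          type request struct {
              payload string
              reply   chan string
          }

          // call is a rough analog of Erlang's gen_server:call: send a request
          // to a server goroutine, then give up (here: return an error rather
          // than crash the caller) if no reply arrives before the deadline.
          func call(srv chan request, payload string, timeout time.Duration) (string, error) {
              req := request{payload: payload, reply: make(chan string, 1)}
              srv <- req
              select {
              case r := <-req.reply:
                  return r, nil
              case <-time.After(timeout):
                  return "", errors.New("timeout")
              }
          }

          func main() {
              srv := make(chan request)
              go func() { // a well-behaved server
                  for req := range srv {
                      req.reply <- "pong: " + req.payload
                  }
              }()

              r, err := call(srv, "hello", 50*time.Millisecond)
              fmt.Println(r, err)

              // Nobody ever services this channel, so the call times out.
              slow := make(chan request, 1)
              _, err = call(slow, "hello", 50*time.Millisecond)
              fmt.Println(err)
          }
          ```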

      2.  

        Perfect list (the good things, and the missing things).

        1.  

          The fixes the go team have finally made to GOROOT and GOPATH are great. I’m glad they finally saw the light.

          But PWD is not a “research concern” that they were putting off in favor of engineering. The Go team actively digs their heels in on any choice or concept they don’t publish first, and it’s why, in spite of simple engineering (checking PWD and install location first), they argued for years on mailing lists that environment variables (which Rob Pike supposedly hates, right?) are superior to simple heuristics.

          Your “good things go has” list is also very opinionated (code manipulation tools better than C# or Java? Distribution of binaries… do you just mean static binaries?? Backwards compatibility that requires recompilation???), but I definitely accept that’s your experience, and evidence I have to the contrary would be based on my experiences.

        1. 7

          Nearly everything he says about J is also true of DrRacket.

          1.  

            Which means that what’s not could be a roadmap for bridging the gap. I did say in another thread one might make a DSL out of these array languages in something like Racket. So, what does DrRacket lack that the author said the J environment has?

            1.  

              Probably some of the “Labs” features. There is the “gracket” format as a starting point.

          1. 12

            The Go project is absolutely fascinating to me.

            How they managed to not solve many hard problems of a language, its tooling, or its production workflow, but still solve enough of them to get a huge amount of developer mindshare, is something I think we should get historians to look into.

            I used Go professionally for ~2+ years, and so much of it was frustrating to me, but large swaths of our team found it largely pleasant.

            1. 12

              I’d guess there is a factor depending on what you want from a language. Sure, it doesn’t have generics and its versioning system leaves a lot to be wished for. But personally, if I have to write anything with networking and concurrency, usually my first choice is Go, because of its very nice standard library and a certain sense of being thought-through when it comes to concurrency/parallelism – at least so it appears when comparing it to other imperative languages like Java, C or Python. Another popular point is how the language, as compared to C-ish languages, doesn’t give you too much freedom when it comes to formatting – there isn’t a constant drive to use as few characters as possible (something I’m very prone to doing), or any debates like tabs vs. spaces, where to place the opening braces, etc. There’s really something relieving about this to me, that makes the language, as you put it, “pleasant” to use (even if you might not agree with it).

              And regarding the standard library, one thing I always find interesting is how far you can get by just using what’s already packaged in Go itself. Now I haven’t really worked on anything with more than 1500 LOC (which really isn’t much for Go), and most of the external packages I used were for the sake of convenience. Maybe this totally changes when you work in big teams or on big projects, but it is something I could understand people liking. Especially considering that the Go team has this Go 1.x compatibility promise, so that you don’t have to worry that much about versioning when it comes to the standard lib packages.

              I guess the worst mistake one can make is wanting to treat it like Haskell or Python, forcing a different paradigm onto it. Just like one might miss macros when one changes from C to Java, or currying when one switches from Haskell to Python, but learns to accept these things and think differently, so, I believe, one should approach Go: using its strengths, which it has, instead of lamenting its weaknesses (which undoubtedly exist too).

              1. 7

                I think their driving philosophy is that if you’re uncertain of something, always make the simpler choice. You sometimes go down the wrong path following this, but I’d say that in general this is a winning strategy. Complexity can always be bolted on later, but removing it is much more difficult.

                The whole IT industry would be a happier place if it followed this, but seems to me that we usually do the exact opposite.

                1.  

                  I think their driving philosophy is that if you’re uncertain of something, always make the simpler choice.

                  Nah - versioning & dependency management is not some new thing they couldn’t possibly understand until they waited 8 years. Same with generics.

                  Whereas for generics I can understand a complexity argument, versioning and dependency management are complexities everyone needed to deal with either way.

                  1.  

                    If you understand the complexity argument for generics, then I think you could accept it for dependency management too. For example, Python, Ruby and JavaScript have a chaotic history in terms of the solutions they adopted for dependency management, and even nowadays, the ecosystem is not fully stabilized. For example, in the JavaScript community, Facebook released yarn in October 2016, because the existing tooling was not adequate, and more and more developers have been adopting it since then. I would not say that dependency management is a fully solved problem.

                    1.  

                      I would not say that dependency management is a fully solved problem.

                      Yes it is: the answer is pinning all dependencies, including transitive dependencies. All this other stuff is just heuristics that end up failing later on, and people end up pinning anyway.

                      1.  

                        I agree about pinning. By the way, this is what vgo does. But what about the resolution algorithm used to add/upgrade/downgrade dependencies? Pinning doesn’t help with this. This is what makes Minimal Version Selection, the strategy adopted by vgo, original and interesting.

                        1.  

                          I’m not sure I understand what the selection algorithm is doing then. From my experience: you change the pin, run your tests, if it passes, you’re good, if not, you fix code or decide not to change the version. What is MVS doing for this process?

                          1.  

                            When you upgrade a dependency that has transitive dependencies, then changing the pin of the upgraded dependency is not enough. Quite often, you also have to update the pin of the transitive dependencies, which can have an impact on the whole program. When your project is large, it can be difficult to do manually. The Minimal Version Selection algorithm offers a new solution to this problem. The algorithm selects the oldest allowed version, which eliminates the redundancy of having two different files (manifest and lock) that both specify which modules versions to use.
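
                            A toy sketch of that idea, under heavy simplifying assumptions (versions as plain ints, and ignoring reachability pruning, which real MVS does perform): each module states the minimum version it requires of each dependency, and the build list takes, per dependency, the maximum of those minimums, i.e. the oldest version that satisfies everyone.

                            ```go
                            package main

                            import "fmt"

                            // mvs is a toy Minimal Version Selection: reqs maps each module to the
                            // minimum version it requires of each of its dependencies. The build
                            // list takes, per dependency, the maximum of the stated minimums,
                            // which is the oldest version satisfying every requirement. No solver
                            // and no lock file: the answer is deterministic from the requirements.
                            func mvs(reqs map[string]map[string]int) map[string]int {
                                build := map[string]int{}
                                for _, deps := range reqs {
                                    for dep, min := range deps {
                                        if min > build[dep] {
                                            build[dep] = min
                                        }
                                    }
                                }
                                return build
                            }

                            func main() {
                                reqs := map[string]map[string]int{
                                    "app": {"A": 1, "B": 2},
                                    "A":   {"C": 2},
                                    "B":   {"C": 3}, // B needs a newer C than A does
                                }
                                fmt.Println(mvs(reqs)) // → map[A:1 B:2 C:3]
                            }
                            ```

                            Note how upgrading B’s requirement on C automatically ripples into the build list, which is the transitive-dependency problem described above.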

                            1.  

                              Unless it wasn’t clear in my original comment, when I say pin dependencies I am referring to pinning all dependencies, including transitive dependencies. So is MVS applied during build or is it a curation tool to help discover the correct pin?

                              1.  

                                I’m not sure I understand your question. MVS is an algorithm that selects a version for each dependency in a project, according to a given set of constraints. The vgo tool runs the MVS algorithm before a build, when a dependency has been added/upgraded/downgraded/removed. If you have the time, I suggest you read Russ Cox article because it’s difficult to summarize in a comment ;-)

                                1.  

                                  I am saying that with pinned dependencies, no algorithm is needed at build time, as there is nothing to compute: every dependency version is known a priori.

                                  1.  

                                    I agree with this.

                2. 4

                  I had a similar experience with Elm. In my case, it seemed like some people weren’t in the habit of questioning the language or thinking critically about their experience. For example, debugging in Elm is very limited. Some people I worked with came to like the language less for this reason. Others simply discounted their need for better debugging. I guess this made the reality easier to accept. It seemed easiest for people whose identities were tied to the language, who identified as elm programmers or elm community members. Denying personal needs was an act of loyalty.

                  1.  

                    How they managed to not solve many hard problems of a language, its tooling, or its production workflow, but still solve enough of them to get a huge amount of developer mindshare, is something I think we should get historians to look into.

                    I think you’ll find they already have!

                  1. 2

                    Maybe a dumb question, but in semver, what is the point of the third digit? A change is either backwards compatible, or it is not. To me that means only the first two digits do anything useful? What am I missing?

                    It seems like the openbsd libc is versioned as major.minor for the same reason.

                    1. 9

                      Minor version is backwards compatible. Patch level is both forwards and backwards compatible.
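
                      That rule can be sketched as a toy classifier (just MAJOR.MINOR.PATCH, no pre-release or build-metadata handling):

                      ```go
                      package main

                      import (
                          "fmt"
                          "strconv"
                          "strings"
                      )

                      // parse splits "MAJOR.MINOR.PATCH" into ints (a toy, not a full
                      // semver parser).
                      func parse(v string) (maj, min, patch int) {
                          parts := strings.SplitN(v, ".", 3)
                          maj, _ = strconv.Atoi(parts[0])
                          min, _ = strconv.Atoi(parts[1])
                          patch, _ = strconv.Atoi(parts[2])
                          return
                      }

                      // classify reports what an upgrade promises under semver: a major
                      // bump may break callers, a minor bump adds features compatibly,
                      // and a patch bump is a compatible bug fix.
                      func classify(old, newer string) string {
                          omaj, omin, _ := parse(old)
                          nmaj, nmin, _ := parse(newer)
                          switch {
                          case nmaj != omaj:
                              return "breaking"
                          case nmin != omin:
                              return "feature"
                          default:
                              return "fix"
                          }
                      }

                      func main() {
                          fmt.Println(classify("1.3.0", "2.0.0")) // breaking
                          fmt.Println(classify("1.3.0", "1.4.0")) // feature
                          fmt.Println(classify("1.3.0", "1.3.1")) // fix
                      }
                      ```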

                      1. 2

                        Thanks! I somehow didn’t know this for years until I wrote a blog post airing my ignorance.

                      2. 1

                        “PATCH version when you make backwards-compatible bug fixes.” See: https://semver.org

                        1. 1

                          I still don’t understand what the purpose of the PATCH version is? If minor versions are backwards compatible, what is the point of adding a third version number?

                          1. 3

                            They want a difference between new functionality (that doesn’t break anything) and a bug fix.

                            I.e., if it were only X.Y, then when you add a new function but don’t break anything, do you change Y or do you change X? If you change X, then you are saying “I broke stuff”, so clearly changing X for a new feature is a bad idea. So you change Y, but if you look at just the Y change, you don’t know if it was a bug fix or some new function/feature they added. You have to go read the changelog/release notes, etc. to find out.

                            With the 3 levels, you know if a new feature was added or if it was only a bug fix.

                            Clearly just X.Y is enough. But the semver people clearly wanted that differentiation; they wanted to be able, by looking only at the version number, to know whether a new feature was added or not.

                            1. 1

                              To show that there was any change at all.

                              Imagine you don’t use sha1’s or git, this would show that there was a new release.

                              1. 1

                                But why can’t you just increment the minor version in that case? a bug fix is also backwards compatible.

                                1. 5

                                  Imagine you have authored a library, and have released two versions of it, 1.2.0 and 1.3.0. You find out there’s a security vulnerability. What do you do?

                                  You could release 1.4.0 to fix it. But, maybe you haven’t finished what you planned to be in 1.4.0 yet. Maybe that’s acceptable, maybe not.

                                  Some users using 1.2.0 may want the security fix, but also do not want to upgrade to 1.3.0 yet for various reasons. Maybe they only upgrade so often. Maybe they have another library that requires 1.2.0 explicitly, through poor constraints or for some other reason.

                                  In this scenario, releasing a 1.2.1 and a 1.3.1, containing the fixes for each release, is an option.

                                  1. 2

                                    It sort of makes sense, but if minor versions were truly backwards compatible I can’t see a reason why you would ever want to hold back. Minor and patch seem to me to be the same concept, just one has a higher risk level.

                                    1. 4

                                      Perhaps a better definition is: library minor version changes may expose functionality to end users that you, as an application author, did not intend.

                                      1. 2

                                        I think it’s exactly a risk management decision. More change means more risk, even if it was intended to be benign.

                                        1. 2

                                          Without the patch version it makes it much harder to plan future versions and the features included in those versions. For example, if I define a milestone saying that 1.4.0 will have new feature X, but I have to put a bug fix release out for 1.3.0, it makes more sense that the bug fix is 1.3.1 rather than 1.4.0 so I can continue to refer to the planned version as 1.4.0 and don’t have to change everything which refers to that version.

                                2.  

                                  I remember seeing a talk by Rich Hickey where he criticized the use of semantic versioning as fundamentally flawed. I don’t remember his exact arguments, but have sem ver proponents grappled effectively with them? Should the Go team be wary of adopting sem ver? Have they considered alternatives?

                                  1.  

                                    I didn’t watch the talk yet, but my understanding of his argument was “never break backwards compatibility.” This is basically the same as new major versions, except it requires you to give the thing a new name instead of bumping the major version. I don’t inherently disagree, but it doesn’t really seem like some grand deathblow to the idea of semver to me.

                                    1.  

                                      IME, semver itself is fundamentally flawed because humans are the deciders of the new version number and we are bad at it. I don’t know how many times I’ve gotten into a discussion with someone where they didn’t want to increase the major because they thought high majors looked bad. Maybe at some point it can be automated, but I’ve had plenty of minor version updates that were not backwards compatible, same for patch versions. Or, what’s happened to me in Rust multiple times: the minor version of a package is incremented, but the new feature depends on a newer version of the compiler, so it is backwards-breaking in terms of compiling. I like the idea of a versioning scheme that lets you tell the chronology of versions, but I’ve found semver to work right up until it doesn’t, and it’s always a pain. I advocate pinning all deps in a project.

                                      1.  

                                        It’s impossible for computers to automate. For one, semver doesn’t define what “breaking” means. For another, the only way that a computer could fully determine whether something is breaking would be to encode all behavior in the type system. Most languages aren’t equipped to do that.

                                        Elm has tools to do at least a minimal kind of check here. Rust has one too, though not as widely used.

                                        I advocate pinning all deps in a project.

                                        That’s what lockfiles give you, without the downsides of doing it manually.

                              1. 19

                                This is something I pushed against a lot at my last job. We wanted to hire Juniors and Associates, but every time we interviewed one we always rejected them as “not experienced enough”. The training is always someone else’s problem.

                                We’ve known for a long time how to fix this: train people. Companies don’t like it because they don’t “have the time or money”, but this is the exact opposite of the truth. Edwards Deming calls “emphasis on short term profits over long term consistency” one of the deadly diseases of modern industry.

                                One idea I had to make this more palatable to managers is to hire juniors as programming assistants, that spend part of their time doing training and another part doing helpful work for other developers.

                                The reality is that most software developers don’t stay in one place very long, so maybe it doesn’t make sense to invest a lot in training someone?

                                Good thing investing in training leads to higher retention!

                                1. 2

                                  Our industry’s inability to mentor and train people effectively in software engineering is due to the wizard myth that somehow keeps going and going, and is ruining everything from interviews and training to quality and process.

                                1. 1

                                  Walking around Tokyo, I often get the feeling of being stuck in a 1980’s vision of the future and in many ways it’s this contradiction which characterises the design landscape in Japan.

                                  Could this also be because many American films in the 80’s about the future used Japanese culture? Rewatching the original Blade Runner made me think about this.

                                  1. 3

                                    Japan is one of our favorite places to visit, but there is a definite retro-futuristic vibe going on. Cash everywhere, or single-purpose cash cards instead of credit cards, fax machines, high-speed Internet access on your feature phone, no air conditioning or central heat but a robot vending machine at 7/11.

                                    (We kept having children and so we haven’t gotten to travel internationally for a while now, but that’s our memory of it.)

                                    1. 2

                                      The feature phones have died – everybody on the train is staring at their iPhone or Android now. Contactless smart cards (Suica, PASMO, etc.), used for train fares, are gaining momentum as payment cards in 7/11 etc., but otherwise it’s still mostly cash-only.

                                      Otherwise it’s pretty much the same.

                                    2. 2

                                      Living in NYC, it feels like the 70’s version of the future!

                                    1. 3

                                      Critique of Everyday Life. I’ve almost completed my work through it, but I see reading anything that expands my model of my world as largely a positive.

                                      1. 5

                                        While I have personally not used it, is this not something orgmode (emacs) does?

                                        1. 4

                                          Org could be one component of a solution for this, but on its own it lacks: a way to edit via mobile/other devices, any means of uploading images, and a blessed rendering path (there are many ways to render/export org files into something for display).

                                          For instance, one solution might be to use Org’s “publish” feature. You could render to HTML, push that to some web host somewhere with rsync (that handles viewing on other/mobile devices). For editing you could sync your org source files (and any org-rendered images via things like plantuml, as well as static images) with something like syncthing/git/Dropbox/Box/iCloud/OneDrive etc. in combination with a non-Emacs editing app like Beorg (iOS) or Orgzly (Android).

                                          That would be a workable and powerful system, but I think we have to admit it’s not as simple to use as just clicking “edit” in a wiki page from something like dokuwiki/mediawiki :-)

                                          1. 2

                                            I’ve found I don’t do any significant note editing on the phone - just capture.

                                            So I use Google Photos + Orgzly + Syncthing + emacs. It used to be MobileOrg, and I started with org ~2005, so these files got bones.

                                            1. 2

                                              I have been looking for something like beorg for a long time. Thanks!!

                                            2. 1

                                              I love orgmode and use it on and off, but last I looked, sharing was read-only and meant exporting the static document or running something (node, ruby) that parses the format on the fly.

                                            1. 4

                                              This reminds me of a neat lecture by Greg Wilson about this type of hype killing research: Greg Wilson - What We Actually Know About Software Development, and Why We Believe It’s True.

                                              1. 5

                                                Along the same lines, his book Making Software is great also.

                                              1. 2

                                                GDPR is covered by trashing encryption keys.
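
                                                  A sketch of that idea (sometimes called crypto-shredding), using a hypothetical in-memory store: each user’s records are encrypted under a per-user key, and “erasing” the user only destroys the key, leaving the retained ciphertext unreadable. All names here are made up for illustration; a real system would keep keys in a KMS/HSM.

                                                  ```go
                                                  package main

                                                  import (
                                                      "crypto/aes"
                                                      "crypto/cipher"
                                                      "crypto/rand"
                                                      "fmt"
                                                  )

                                                  type store struct {
                                                      keys map[string][]byte // per-user AES keys (in real life: a KMS)
                                                      data map[string][]byte // nonce || ciphertext, kept forever
                                                  }

                                                  func (s *store) put(user, plaintext string) error {
                                                      key := make([]byte, 32) // AES-256
                                                      if _, err := rand.Read(key); err != nil {
                                                          return err
                                                      }
                                                      block, _ := aes.NewCipher(key)
                                                      gcm, _ := cipher.NewGCM(block)
                                                      nonce := make([]byte, gcm.NonceSize())
                                                      rand.Read(nonce)
                                                      s.keys[user] = key
                                                      s.data[user] = gcm.Seal(nonce, nonce, []byte(plaintext), nil)
                                                      return nil
                                                  }

                                                  func (s *store) get(user string) (string, error) {
                                                      key, ok := s.keys[user]
                                                      if !ok {
                                                          return "", fmt.Errorf("user erased: key destroyed")
                                                      }
                                                      block, _ := aes.NewCipher(key)
                                                      gcm, _ := cipher.NewGCM(block)
                                                      ns := gcm.NonceSize()
                                                      ct := s.data[user]
                                                      pt, err := gcm.Open(nil, ct[:ns], ct[ns:], nil)
                                                      return string(pt), err
                                                  }

                                                  // erase implements "right to be forgotten" by deleting only the key.
                                                  func (s *store) erase(user string) { delete(s.keys, user) }

                                                  func main() {
                                                      s := &store{keys: map[string][]byte{}, data: map[string][]byte{}}
                                                      s.put("alice", "alice@example.com")
                                                      pt, _ := s.get("alice")
                                                      fmt.Println(pt)
                                                      s.erase("alice")
                                                      _, err := s.get("alice")
                                                      fmt.Println(err)
                                                  }
                                                  ```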

                                                1. 2

                                                  I’d like trashable per-customer keys to be a good answer, but:

                                                  • You have to back up the keys (or risk losing everything), and those backups need to be mutable (so you’re back to square one with backups)
                                                    • Your marketing department still wants a spreadsheet of unencrypted customer data
                                                    • Your fraud department needs to be able to efficiently identify similar customer records (hard when they’re all encrypted with different keys)
                                                  • Your customer support department wants to use SAAS instead of a crufty in-house thing (and answer users who tweet/facebook at them)
                                                  1. 3

                                                    You have to back up the keys (or risk losing everything), and those backups need to be mutable (so you’re back to square one with backups)

                                                      Generally backups are done daily and expire over time. GDPR requires that a user’s deletion request take effect within 30 days, so this can be solved by expiring backups after 30 days.

                                                    Your marketing department still want a spreadsheet of unencrypted customer data

                                                    Depending on what marketing is doing, often aggregates are sufficient. I’m not sure how often marketing needs personally identifiable information.

                                                    Your fraud department need to be able to efficiently identify similar customer records (hard when they’re all encrypted with different keys)

                                                    Again, aggregates are usually sufficient here. But to do more one probably does need to build specialized data pipeline jobs that know how to decrypt the data for the job.

                                                    Your customer support department wants to use SaaS instead of a crufty in-house thing (and answer users who tweet/facebook at them)

                                                    I’m not quite sure what this means so I don’t have a response to it.

                                                    1. 1

                                                      you also have to make sure re-identification is not possible… This is quite challenging, and there are no guidelines as to the extent to which this should be achieved

                                                      1. 1

                                                        Generally backups are done daily and expire over time. GDPR requires that a user's deletion request take effect within 30 days, so this can be solved by expiring backups after 30 days.

                                                        Fair point - that’s really only a slight complication.

                                                        Depending on what marketing is doing, often aggregates are sufficient. I’m not sure how often marketing needs personally identifiable information.

                                                        Marketing don’t like being beholden to another team to produce their aggregates, but this is much more of an organizational problem than a technical one. Given the size of the fines I think the executive team will solve it.

                                                        Again, aggregates are usually sufficient here. But to do more one probably does need to build specialized data pipeline jobs that know how to decrypt the data for the job.

                                                        Fraud prevention is similar in difficulty to infosec, and it can hit margins pretty hard.

                                                        There are generally two phases: detecting likely targets, and gathering sufficient evidence.

                                                        For instance, I worked on a site where you could run a contest with a cash prize. Someone was laundering money through it by running lots of competitions and awarding their sockpuppets (which was bad for our community since they kept trying to enter the contests).

                                                        The first sign something was wrong came from complaints that obviously-bad entries were winning contests. We found similarities between the contest holder accounts and sockpuppet accounts by comparing their PII.

                                                        Then, we queried everyone's PII to find out how often they were doing this, and shut them down. I'm not clear how we could have done this without decrypting every record at once (I suppose we could have done it in an ephemeral DB and then shut it down after querying).

                                                        Customer support

                                                        For instance, lots of companies use (eg) ZenDesk to help keep track of their dealings with customers. This can end up holding information from emails, phone systems, twitter messages, facebook posts, letters, etc.

                                                        This stuff isn’t going to be encrypted per-user unless each of your third-party providers happen to also use the technique.

                                                        Summary: It’s not a complete technique, but you’ve gotten past my biggest objections and I could see it making the problem tractable.

                                                    2. 1

                                                      Lobsters is open source. Anybody want to make a patch to make it use per user keys? I’m curious to see what’s involved.

                                                      1. 1

                                                        Good question though: what happens if a citizen of the EU uses his right to be forgotten? Does the user have a shiny “permanently forget me” button? The account deletion feature seems to fall a bit short of that?

                                                        1. 1

                                                          I suspect it’s “the site admin writes a query”.

                                                      2. 1

                                                        Actually you are wrong… you have to make sure that the user's data is portable, meaning that it can be exported and transferred to someone else, and you cannot keep data if you do not need it… You also have to be able to show what data you have about the user… so if you cannot decrypt what you have to show the user… you are not compliant.

                                                        1. 1

                                                          Those are two separate requirements of GDPR, and being able to export a user’s data in a reusable format is only required if they haven’t asked for their data to be deleted.

                                                          I think you’re missing a key part. If a user asks for their account to be deleted, you don’t need to be able to make their data portable anymore, you just need to get rid of it. If you delete the encryption key for your user’s data, you can no longer decrypt any data you have on a user - which means legally you don’t have that data. There is nothing to show the user, or make portable.
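                                                          The mechanics can be sketched in Go: encrypt each user's record under its own key, and "forget" the user by discarding only the key. This is a toy in-memory sketch; the store type and its put/get/forget methods are invented for illustration, and a real system would hold keys in a KMS rather than a map.

                                                          ```go
                                                          package main

                                                          import (
                                                          	"crypto/aes"
                                                          	"crypto/cipher"
                                                          	"crypto/rand"
                                                          	"errors"
                                                          	"fmt"
                                                          )

                                                          // store keeps per-user keys and ciphertexts separately;
                                                          // deleting a key makes that user's data unrecoverable.
                                                          type store struct {
                                                          	keys map[string][]byte // per-user AES-256 keys
                                                          	data map[string][]byte // nonce || ciphertext per user
                                                          }

                                                          func (s *store) put(user string, plaintext []byte) error {
                                                          	key := make([]byte, 32)
                                                          	if _, err := rand.Read(key); err != nil {
                                                          		return err
                                                          	}
                                                          	block, err := aes.NewCipher(key)
                                                          	if err != nil {
                                                          		return err
                                                          	}
                                                          	gcm, err := cipher.NewGCM(block)
                                                          	if err != nil {
                                                          		return err
                                                          	}
                                                          	nonce := make([]byte, gcm.NonceSize())
                                                          	if _, err := rand.Read(nonce); err != nil {
                                                          		return err
                                                          	}
                                                          	s.keys[user] = key
                                                          	s.data[user] = gcm.Seal(nonce, nonce, plaintext, nil)
                                                          	return nil
                                                          }

                                                          func (s *store) get(user string) ([]byte, error) {
                                                          	key, ok := s.keys[user]
                                                          	if !ok {
                                                          		return nil, errors.New("key shredded: data is gone")
                                                          	}
                                                          	block, _ := aes.NewCipher(key)
                                                          	gcm, _ := cipher.NewGCM(block)
                                                          	ct := s.data[user]
                                                          	nonce, ct := ct[:gcm.NonceSize()], ct[gcm.NonceSize():]
                                                          	return gcm.Open(nil, nonce, ct, nil)
                                                          }

                                                          // forget implements "right to be forgotten" by deleting only the key.
                                                          func (s *store) forget(user string) { delete(s.keys, user) }

                                                          func main() {
                                                          	s := &store{keys: map[string][]byte{}, data: map[string][]byte{}}
                                                          	s.put("alice", []byte("alice@example.com"))
                                                          	pt, _ := s.get("alice")
                                                          	fmt.Println(string(pt)) // alice@example.com
                                                          	s.forget("alice")
                                                          	_, err := s.get("alice")
                                                          	fmt.Println(err != nil) // true: ciphertext remains but is unreadable
                                                          }
                                                          ```

                                                          The ciphertext can still sit in old backups indefinitely; once the key is gone, the data is practically (and, by this argument, legally) unreadable.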

                                                          1. 2

                                                            I see your point and that indeed works only for deletion requests.

                                                      1. 10

                                                        I used to use a receipt printer to print the weather, my agenda, and some todo items each morning. I’d just rip it off in the morning, and use it through the day.

                                                        Eventually, I recreated the workflow with my phone and org-mode (as opposed to printer & org-mode), but I truly think that trying out a prototype system is one of the most powerful ways of understanding what does and doesn’t work for you. Paper & cardboard are amazing ways to prototype these systems quickly, and sometimes they even become the system.

                                                        1. 3

                                                          That receipt printer thing sounds awesome. Do you have a write-up?

                                                          1. 7

                                                            Nope!

                                                            Most thermal receipt printers are serial devices, so I just used a dumb shell script that echo'd the output of ‘(org-agenda)’ and some weather scripts directly to the device file. They are quite easy to hack around with, which means pretty unrefined solutions.
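                                                            In the spirit of that script, a rough Go sketch: the device path is a placeholder for wherever your printer shows up, and taking an io.Writer means you can try it against a buffer instead of real hardware.

                                                            ```go
                                                            package main

                                                            import (
                                                            	"bytes"
                                                            	"fmt"
                                                            	"io"
                                                            )

                                                            // printReceipt writes the day's agenda to w, which in the real
                                                            // setup would be the printer's serial device file opened for
                                                            // writing (e.g. /dev/ttyUSB0; the path depends on your hardware).
                                                            func printReceipt(w io.Writer, agenda string) error {
                                                            	if _, err := io.WriteString(w, agenda); err != nil {
                                                            		return err
                                                            	}
                                                            	// Feed a few blank lines so the output clears the tear bar.
                                                            	_, err := io.WriteString(w, "\n\n\n")
                                                            	return err
                                                            }

                                                            func main() {
                                                            	var buf bytes.Buffer // stand-in for the serial device
                                                            	printReceipt(&buf, "09:00 standup\n12:30 lunch with Sam\n")
                                                            	fmt.Print(buf.String())
                                                            }
                                                            ```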

                                                        1. 5

                                                          If you’re scared how a growth-stage startup CTO is judging your work:

                                                          • Tribes - Is the company making a profit, or does it have piles of capital? If not, they will judge you on how committed you show you are to getting the company back to profit/capital, no matter your role. Another way of wording this: are you part of the groupthink or not? If you're not part of the tribe, doing a good job is going to be pointless unless you're high level enough to have the board back you up.

                                                          • Commitments - How “on-time” are your teams in accomplishing their goals? If they aren't, you will be judged on your failed commitments. Most likely your management has a severe lack of experience in executive roles, and being able to meet commitments is going to be magic to them. If you aren't meeting your commitments, you are out of their control.

                                                          • Bugs & PR - are your customers largely happy or disappointed in the quality of your product/solution? Could sales/bizdev blame you for lost accounts? This is where things start to get a little less political, this is about product quality control. How many bugs do you ship with, and how committed is the organization to handling any damage those bugs might cause?

                                                          So, how do you judge your architecture in the face of this?

                                                          • Tribal values - Your software should be as fungible in each direction as your company’s commitments are. If you are developing a backend for a mobile application - don’t worry about writing OS/kernel portable C++. But should you be able to give an estimate for adding Android support to your backend in a month? Maybe.

                                                          • Commitments - Can people add things to your system without having to understand all of it? Are teams' commitments independent, so that one failure doesn't impact your entire system? Being able to have teams that can solve common problems, rewrite entire subsystems, etc. without killing the productivity of the rest of your team is usually the sign of a relatively decent set of abstractions.

                                                          • Bugs & PR - if you're not tracking defects in any statistical manner, then start now. You have no idea how bad the situation is until you have some visibility into your defect rate. One of the most useful stats, which may be much harder to get in “modern” software engineering processes, is the defect removal rate of each part of your engineering process. If you can't get that, then you should at least have some visibility into how to associate parts of your product and parts of your process with the bugs that came out of them.

                                                          1. 3

                                                            Those are some good ways to consider the architecture, thank you. I’m not scared of how my manager is judging me - we both seem happy with what’s going on - my motivations are to provide meaningful measurement of progress/benefit to the company, and from the selfish/career-centric side to have something more concrete than “well we both feel OK about this” in career reviews.

                                                            1. 2

                                                              You are thinking the right things as far as I can tell, and this was an awesome thread.

                                                              My point, which was belabored I admit, was the most successful technical architectures for companies are not ones with the best technical output, but the best social output. I have many scars around that issue, because a lot of the time I thought I was solving a social issue that was technical, or vice versa. Being explicit about how they map to each other can help you navigate all kinds of issues, not just with management.

                                                          1. 16

                                                            I was just getting into GTD with Emacs org mode when I discovered Bullet Journals: http://bulletjournal.com/

                                                            With bullet journals, you keep everything in a small notebook in your pocket. It’s satisfyingly analogue, and less complex than GTD. I don’t do any of the fancy colouring or artistry. My journals are raw and scrawly, and don’t require batteries or a screen.

                                                            For everyday tech notes and writing, I still use org mode. But my personal and work stuff is now all tracked through bullet journals: a small pocket-sized Leuchtturm 1917 for personal stuff, and a lined Blueline record book for work. I've been doing it for four months now, and it's pretty decent. I think it's worth a look-in if you would like an easy system to start with.

                                                            1. 5

                                                              I can't say enough good things about tracking my work with a bullet journal. I've been at it for almost three years and really appreciate the monthly (or weekly, as desired/needed) culling of unnecessary tasks.

                                                              1. 4

                                                                I love the bullet journal approach, especially how it is specifically intended to be customized and improved upon. I discovered it about 3 months ago, and it’s the only productivity system I’ve ever used that I’ve managed to keep using for more than a couple of weeks.

                                                                I personally use a dotted Moleskine notebook that is just small enough to stick in my back pocket so I can keep it with me everywhere I go.

                                                                1. 2

                                                                  I use org-mode very heavily, but I don’t really like being tied to a computer 24/7. Given that you have experience with both, do you think there is a way to integrate Bullet Journals with org-mode? For now I have a pocket notebook that I will sometimes use to write lists of things that eventually just get transcribed to org-mode.

                                                                  1. 1

                                                                    After about a week of using a bullet journal I think org-mode serves a different but complementary purpose. I'm using the bullet journal for daily life tasks like dentist appointments and weekend plans with friends; org-mode for software, anything I do on the computer, etc.

                                                                  2. 2

                                                                    Those who like bullet journals, but dislike the daily rewriting ritual / table of contents focus, should check out “final version perfected” by Mark Forster.

                                                                    http://markforster.squarespace.com/blog/2015/5/21/the-final-version-perfected-fvp.html

                                                                    This really helped me get out of a rut, and reboot my GTD workflow. Mind you, that happened in ~2012 or so, and only for a short period. I'm a full-time GTD person, and have been for a while. And I use org-mode and emacs to manage it.

                                                                    1. 1

                                                                      Been using a bullet journal here now for about 5 months, and absolutely agree! Mine’s not pocket sized, and I’ve recently teetered between using it for only work, or for work and other personal things. Seems to work best for just work, and I hadn’t thought of just getting another yet. Might give that a go!

                                                                    1. 6

                                                                      Awesome! This is what makes emacs so useful for me, eshell, dired, ffap combined mean I use the same keypress for “find file” and it just goes wherever I’m pointed, including urls.

                                                                      Reminds me of plan9 by using the clipboard as your way of getting data from every application. I wonder what it would take to get the current cursor location & file descriptor, then you could launch things from a key press without having to highlight the right section.

                                                                      1. 1

                                                                        Out of curiosity, does anybody here use this “framework” on a daily basis? How long have you been doing that?

                                                                        1. 6

                                                                          I’m not quite answering your question but hopefully you find it useful:

                                                                          I’ve spent the last 3 - 4 years trying to get better at Getting Things Done. I have a few techniques I use:

                                                                          • OKRs
                                                                          • TODO List
                                                                          • Timers

                                                                          The way these break down is:

                                                                          With OKRs I specify long term goals and how I’ll measure success towards that goal. This works great for work, for home life not so great (turns out my ambitions are a lot bigger than my will when it comes to my personal life) but in both situations they at least give me clear direction.

                                                                          With the TODO List I use org-mode, which is great; work goes into it, gets prioritized, and gets acted on.

                                                                          I use Timers when I'm in a crunch/focus period, so not all the time, and I use them in a few related ways. For a crunch period, in the morning I'll plan out my day to the minute, with everything I do having a duration attached, including relaxation (but not bathroom breaks, because they are a bit more random). Then I follow the schedule blindly: I can only work on an item during its time period, and regardless of whether I'm done I move on to the next item and work on it for its duration. This is useful in that getting stuck on one item can't block the others. Also, since I know I'm committed to working on one thing for that period, I tend to power through blockers.

                                                                          The other use case for timers is more standard Pomodoro, where the day is not as tightly scheduled, but when I decide to do something I can only do that thing for some duration. This is just a great way to stop watching Netflix or dicking around on the internet, because you know when you'll be back to dick around again (when the timer goes off). For me this works well when I feel I need the extra focus.

                                                                          So GTD fits into the third component for me: TODO List. This is also one of my weakest points so I’m moving towards following GTD a bit better. My problem is I’m happy to put work into my todo list and not actually do it. I’m very bad at distinguishing work I really should do from work I’d just like to do. I think GTD will help with this in a few ways:

                                                                          • Differentiating between something that is just in the Todo list from Next. Right now everything is equal in my system so it’s really hard to know what I want to do next.
                                                                          • More liberally declaring things projects. Right now I only put really big things in my projects bin so larger things that are projects sort of fall between the cracks.
                                                                          • Only giving required things a deadline. Right now I'll say when I'd like something done by and set a deadline, and often it slips, so missing a deadline loses all meaning.
                                                                          • I have an Incoming list now that is on my phone so I won’t forget as many things. Before I only recorded things I remembered at my computer.
                                                                          • The weekly review will be valuable so things can be thrown out and reorganized.

                                                                          We'll see how it goes. Really, the problem I have is a lack of motivation to do a lot of things, rather than organizing them. But I think some of the tricks in GTD exist just to give some tasks a sense of urgency, even an artificial one (like distinguishing Todo from Next, and making sure every Project always has a Next).

                                                                          Hope some of this long comment was useful and thanks for reading.

                                                                          1. 1

                                                                            Hope some of this long comment was useful and thanks for reading.

                                                                            It is! thanks for taking the time writing it.

                                                                          2. 2

                                                                            I have a basic understanding of GTD (basically on the level described in this blog post) and tried to follow it on several occasions always failing.

                                                                            I then started using todoist but didn’t like a third party storing every task item I want to do, hence I went back to taskwarrior which I used in the past.

                                                                            Taskwarrior is great, but I started missing on-the-go notes, so I configured & self-hosted a taskd sync server and have the taskwarrior app on my Android. After doing this I decided to give GTD yet another, but this time ‘proper’, try and ordered the book (it should arrive today); I intend to implement GTD with taskwarrior as outlined in this article

                                                                            1. 2

                                                                              Been using it for ~10 years consistently, though have been attempting to use it since 2003.

                                                                              The only way it really becomes useful is if you have to read your lists to know what to do next. I see a lot of people using it as a “backup” system, which is a lot of work for little gain, IMO.

                                                                            1. 6

                                                                              This should be true everywhere, but it might not be:

                                                                              • You have the right to ask every single senior programmer on the team for time for them to explain the code, architecture, and production environment (i.e. how it runs) of any component they work on. DO THIS FIRST. If they push back because they have good documentation, great, read the docs and then ask them again with all your questions in tow.

                                                                              • Get a way to browse & search your code as fast as possible, with as much semantic support as possible. OpenGrok, sourcegraph, cscope, gnu global, language server protocol, or just an IDE that can parse everything all at once. Someone’s already written it, so go search for it. This is important because of the next point.

                                                                              • Find bugs to fix, and work until you understand everything you can about the bug you’re fixing. Go deep rather than wide on the bugs. Don’t wait for them to assign you some new small feature, go for the bugs. They teach you things no one thinks of, and will not be nearly as isolated. Usually you’ll also get to work with people interested in the bug who you wouldn’t have known about.

                                                                              Finally, go easy on yourself. It’s just ascii in some files, and you’ll learn a lot. Good luck and congrats!

                                                                              1. 2

                                                                                This is great advice, and I'd add that you shouldn't ask the senior engineers just once. The first time they will probably “explain” a bunch of stuff, forgetting that 2/3rds of it makes no sense without the context that they have and you don't. The other 1/3rd will be a grab bag of things that stick in your head, but don't form a coherent picture. You'll forget half of that 1/3rd in the first week, and it'll turn out that you thought you understood the other half, but when you look at the actual code there's a lot more going on than you thought.

                                                                                Don't stress about any of this. It can take months, even for experienced engineers, to get “up to speed” on even moderately complex systems. Putting that picture together in your head takes a long time. Don't worry about that. Just keep asking questions, and keep reading the code. I agree with others about working on bugs, and going deep, rather than wide. Wide will just confuse you. Become an expert in one corner of the system, and follow the connections outwards from there.

                                                                                Also, remember that the debugger is your friend. Set a breakpoint and patiently step through the code. You’ll end up going off on all kinds of tangents, but it’s a great way to learn how things hang together (especially in Javascript, where a lot of stuff isn’t really visible until runtime).

                                                                              1. 3

                                                                                Very interesting work. These tools can really help people take legacy infrastructure forward: you don't even need to know how to write the tests, since they're generated before you make changes.

                                                                                From the results, you can see they get worse coverage, but I think that’s an entirely fair trade-off. Write the hard to discern/annotate tests manually as you create the program, and have things like KLOVER generate as many other tests as possible.

                                                                                As for the symbolic annotations, though, I wish they had examples of how they are annotated in the original source; they just show what KLOVER expands the functions out to, unless I'm misreading the paper.

                                                                                1. 11

                                                                                  Go has a lot of weird semantics that require you to understand the implementation, from its type system to its function call behavior.

                                                                                  See the ugliness here: https://play.golang.org/p/yPxfK5VvLw

                                                                                  In this case it's that passing `s...` to a function with a `...type` parameter sends the slice directly; Go has no concept of “apply” from Lisp. So, if you read the implementation of how variadic calls are handled, you'll see they kind of grasp around with some heuristics to make some subset of things make sense.

                                                                                  Secondly, the types []string and []interface{} do not exist in a hierarchy, so you cannot convert []string to []interface{}. Single values do exist in a hierarchy with interface{}, as all are assignable to it. It's a top type for single values, but Go's type system doesn't interact well with its slice type because of all the magic added to function calls.

                                                                                  Go leaves a lot to be desired.
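                                                                                  The restriction is easy to demonstrate; `variadic` here is an invented helper, but the compile error and the manual copy are exactly what you run into:

                                                                                  ```go
                                                                                  package main

                                                                                  import "fmt"

                                                                                  func variadic(args ...interface{}) int { return len(args) }

                                                                                  func main() {
                                                                                  	words := []string{"a", "b", "c"}

                                                                                  	// variadic(words...) does not compile: []string cannot be used
                                                                                  	// as []interface{}, even though each string is assignable to
                                                                                  	// interface{} on its own.

                                                                                  	// Passing the elements one by one works:
                                                                                  	fmt.Println(variadic(words[0], words[1], words[2])) // 3

                                                                                  	// The usual workaround is an explicit O(n) copy:
                                                                                  	converted := make([]interface{}, len(words))
                                                                                  	for i, w := range words {
                                                                                  		converted[i] = w
                                                                                  	}
                                                                                  	fmt.Println(variadic(converted...)) // 3

                                                                                  	// With an []interface{} slice, s... forwards the slice directly,
                                                                                  	// so the callee sees one element per item, not one slice:
                                                                                  	mixed := []interface{}{1, "two"}
                                                                                  	fmt.Println(variadic(mixed...)) // 2
                                                                                  }
                                                                                  ```

                                                                                  The explicit copy is the accepted workaround, and it costs a new allocation plus O(n) time on every call site.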

                                                                                  1. 1

                                                                                    Naming is about communicating from one human to another with an extremely low amount of information (function name) with a high amount of meaning (that function’s behavior).

                                                                                    Really wish all this code cross-reference tooling focused on showing documentation & linking it, not code. Texinfo is mediocre, but supports the type of indexing that is useful.

                                                                                    1. 2

                                                                                      Agreed. I have compared this to the saying that “sometimes the only way to escape the fire is to run through it”.

                                                                                      I don’t mean this in a practical sense for doing today in your source code, but as a philosophical concept. It’s better to name something “oldPanda” than “findLastUserUnpaidInvoiceSomethingSomething”.

                                                                                      In the first instance you just assign a name, a symbol, to a concept. You are not fighting to pack lots of information into a tiny space. Because the symbol is meaningless it can precisely mean what it is representing.

                                                                                      In the second instance you make an attempt at packing information into somewhere that simply does not fit. Now this incomplete and inaccurate name will become one of your worst enemies for years to come.

                                                                                      The relation to the saying at the top: just as with running through fire, obscure symbols and names are something we naturally want to avoid, so we cram meaning into variable names. That is perhaps the “obvious” solution, like running away from a fire, but it's not necessarily always the best.

                                                                                    1. 1

                                                                                      The problem with the example implementation of String() is that it needs to be kept in sync with the set of values in the enum type.
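                                                                                      A minimal sketch of the sync problem: the Color type and its hand-written String() below are invented, and the switch is roughly what the stringer tool would generate for you. Add a fourth constant without touching String() and it silently falls through to the default case.

                                                                                      ```go
                                                                                      package main

                                                                                      import "fmt"

                                                                                      type Color int

                                                                                      const (
                                                                                      	Red Color = iota
                                                                                      	Green
                                                                                      	Blue
                                                                                      )

                                                                                      // String must list every constant above; nothing enforces that
                                                                                      // it stays in sync when a new value is added.
                                                                                      func (c Color) String() string {
                                                                                      	switch c {
                                                                                      	case Red:
                                                                                      		return "Red"
                                                                                      	case Green:
                                                                                      		return "Green"
                                                                                      	case Blue:
                                                                                      		return "Blue"
                                                                                      	default:
                                                                                      		// int(c) avoids infinite recursion through String().
                                                                                      		return fmt.Sprintf("Color(%d)", int(c))
                                                                                      	}
                                                                                      }

                                                                                      func main() {
                                                                                      	fmt.Println(Green)     // Green
                                                                                      	fmt.Println(Color(42)) // Color(42)
                                                                                      }
                                                                                      ```

                                                                                      Running `stringer -type=Color` via `go generate` regenerates this method from the const block, which is the usual way to keep the two in sync.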

                                                                                      1. 2

                                                                                        I don't know if this was updated after your post, but he does introduce the stringer tool, which I realize is a little ridiculous as a language tool, but I thought I'd point it out.

                                                                                        1. 1

                                                                                          No, the stringer recommendation was there.

                                                                                          In the post, first I'm trying to explain how enums work, rather than best practices. Then I widen it out to what to use for enums, including stringer and iota.