1. 4

    These vary pretty heavily in quality. Many seem to be missing proper quoting. Use with caution.

    1. 4

      Use bash with caution.

      1. 1

Yeah, but it's the same as any script you find online: don't run it if you don't understand it. The benefit here is that some of the better ones are explained or corrected by other users.

      1. 8

        I had been vaguely aware of Copperhead OS but never looked into it or used it (I used Cyanogenmod before they imploded, and Lineage OS thereafter). I don’t know anything about the context for this other than the reddit and hacker news links here. Everything I’ve seen so far makes me feel inclined to be sympathetic to this Daniel Micay fellow, so I can’t help but wonder if there’s any information from his former business partner’s side of the story that would make me feel less sympathetic.

        1. 12

          He’s a fellow Arch Linux Trusted User. He seemed like a pretty ok dude in my interactions.

          1. 8

I also chill in a few old irc channels with strncat from after my major arch days; he has a lot of people in the open source community who respect his contributions. My bet is he'll come out ahead of this if he can get untangled from the copperheados company.

          2. 16

Daniel Micay was a prolific Rust contributor. (In fact, he is still in the top 20 even though he has been inactive since 2015.) In his Rust work, I found him to be a straightforward person.

            1. 2

I have a good impression of Daniel Micay after talking with him on IRC. He's also an unusually knowledgeable programmer.

            1. 2

              Getting XOAUTH2 to work with isync / mbsync.

If I can't get it working, I'll be writing yet another program to scrape mail out of Google's email walled garden.

              1. 7

                Folks - take notes!

I am shocked at the number of developers/engineers I work with who debug extremely complex problems while forcing themselves to keep so much state in their heads. If you write out the debugging steps as a journal / record of every action you took, it's much easier to reinflate your subconscious state.

Make it refined enough that someone else could reasonably follow along, and you'll be able to as well. Lots of coworkers in other functions habitually take detailed daily notes to show their progress to management; software gets lucky in that there is an "output" at a small granularity of work.

                As I get more and more reprioritizations & interruptions in my work, I’ve found it’s helpful to have confidence that all but maybe the last 30 mins of work are recorded in a decent fashion (org-mode!).

                1. 2

I took notes on running a specific regression test at work. It's something like 50 steps just to set it up [1]. And even then, others who have tried running it have had to fill in information I neglected. It is hard to know at times what should be written down and what doesn't have to be. And that changes over time, unfortunately.

                  [1] Why not automate it? Not that easy when you have to set up the programs and associated data across four machines. And then when it’s automated, it’s an even harder issue to debug if the automation breaks down [2].

[2] About half the time the test fails anyway because the clocks are out of sync. I had to code a specific check for that into the regression test, and yes, for some reason, ntpd isn't working right. The other times, the test fails because the Protocol Stack From Hell [3] fell over because someone looked at it funny.

                  [3] Six figures to license a proprietary SS7 stack that isn’t worth the magnetic flux used to store it. This is the “best of breed” SS7 stack, sadly.

                1. 4

                  I hacked up a small tool the other day that would buffer output from a command into memory until it receives a signal to reconnect to stdout, when it would dump everything that was output in the interim. I want to integrate this into dtach so emacs can have resumable shell sessions on remote hosts for TRAMP workflows.

                  Let’s just say it’s a huge distraction from the work I actually need to do and I hope I don’t make too much progress on it.

                  1. 2

                    The link to your tool is currently 404ed

                    1. 1

                      Oops, had no http on it: https://github.com/codemac/sigbuffer

                      It’s a dumb tool, but it was just a proof of concept that I knew how to use dup2+pipe again.
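
For anyone curious, the core idea fits in a page of Go. This is just a minimal sketch of the buffer-until-signaled behavior, not the actual sigbuffer code (that plays the same trick at the fd level with dup2+pipe), and the SIGUSR1 trigger is my arbitrary choice here:

package main

import (
	"bytes"
	"io"
	"log"
	"os"
	"os/exec"
	"os/signal"
	"sync"
	"syscall"
)

// Buffer a child command's output in memory; dump whatever has
// accumulated when we get SIGUSR1, and again on exit.
func main() {
	if len(os.Args) < 2 {
		log.Fatal("usage: sigbuffer-sketch cmd [args...]")
	}
	cmd := exec.Command(os.Args[1], os.Args[2:]...)
	out, err := cmd.StdoutPipe()
	if err != nil {
		log.Fatal(err)
	}
	if err := cmd.Start(); err != nil {
		log.Fatal(err)
	}

	var mu sync.Mutex
	var buf bytes.Buffer

	// Flush everything buffered so far when signaled.
	sigs := make(chan os.Signal, 1)
	signal.Notify(sigs, syscall.SIGUSR1)
	go func() {
		for range sigs {
			mu.Lock()
			io.Copy(os.Stdout, &buf) // drains buf
			mu.Unlock()
		}
	}()

	// Accumulate the child's output until it exits.
	tmp := make([]byte, 4096)
	for {
		n, err := out.Read(tmp)
		if n > 0 {
			mu.Lock()
			buf.Write(tmp[:n])
			mu.Unlock()
		}
		if err != nil {
			break
		}
	}
	cmd.Wait()

	mu.Lock()
	io.Copy(os.Stdout, &buf) // final flush
	mu.Unlock()
}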

                  1. 3

                    I always love thesis dedications. It reminds me how much human life goes into each of these papers I tuck into ~/docs/pdf.

                    I dedicate this thesis to you, NH. Your continuous support and love throughout the writing of this thesis and also within my own life helped me in more ways than you probably realize. In the vastness of space and immensity of time, it is my joy to spend a planet and an epoch with you.

                    1. 2

                      I remember reading one dedication where it was obvious that the author was not pleased with the support of his advisors or something. Basically they said “my parents are awesome, my wife’s wonderful. My advisors were Bob, Sue, and Joe.”

                      1. 2

                        And don’t forget Olin Shivers’ acknowledgements section for Scsh (1994).

                        Who should I thank? My so-called “colleagues,” who laugh at me behind my back, all the while becoming famous on my work? My worthless graduate students, whose computer skills appear to be limited to downloading bitmaps off of netnews? My parents, who are still waiting for me to quit “fooling around with computers,” go to med school, and become a radiologist? My department chairman, a manager who gives one new insight into and sympathy for disgruntled postal workers?

                        My God, no one could blame me – no one! – if I went off the edge and just lost it completely one day. I couldn’t get through the day as it is without the Prozac and Jack Daniels I keep on the shelf, behind my Tops-20 JSYS manuals. I start getting the shakes real bad around 10am, right before my advisor meetings. A 10 oz. Jack ‘n Zac helps me get through the meetings without one of my students winding up with his severed head in a bowling-ball bag. They look at me funny; they think I twitch a lot. I’m not twitching. I’m controlling my impulse to snag my 9mm Sig-Sauer out from my day-pack and make a few strong points about the quality of undergraduate education in Amerika.

                        If I thought anyone cared, if I thought anyone would even be reading this, I’d probably make an effort to keep up appearances until the last possible moment. But no one does, and no one will. So I can pretty much say exactly what I think.

                        Oh, yes, the acknowledgements. I think not. I did it. I did it all, by myself.

                    1. 2

                      Thanks for posting this paper - really excited for what these types of tools could do for testing and verification.

nickpsecurity: do you regularly review publications? If so, which? As a storage nerd I read a smaller subset than I should.

                      1. 1

I just run through dozens of them at a time with my Google-fu (now DuckDuckGo-fu) to find the most interesting or practical across many areas of application. I submit some of those regularly to places where people enjoy or can use them. I also keep an eye out for folks building usable tools that might benefit from seeing specific papers, and I try to get those to them.

I also constantly look for connections between it all for new methods of doing things. I've spotted some decent ones recently that could simultaneously boost productivity and code confidence by about an equal amount; usually the two are inverses of each other. So, if not that valuable now, I hope to make something useful later that builds on the stuff you see me submit here.

                      1. 5

This is a fascinating case. It's very unfortunate that the cyclist had to die for it to come before us. However, had the car been driven by a human, nobody would be talking about it!

                        That said, the law does not currently hold autonomous vehicles to a higher standard than human drivers, even though it probably could do so given the much greater perceptiveness of LIDAR. But is there any precedent for doing something like this (having a higher bar for autonomous technology than humans)?

                        1. 13

                          Autonomous technology is not an entity in law, and if we are lucky, it never will be. Legal entities designed or licensed the technology, and those are the ones the law finds responsible. This is similar to the argument that some tech companies have made that “it’s not us, it’s the algorithm.” The law does not care. It will find a responsible legal entity.

                          This is a particularly tough thing for many of us in tech to understand.

                          1. 25

It's hard for me to understand why people in tech find it so hard to understand. Someone wrote the algorithm. Even in ML systems where we have no real way of explaining the decision process, someone designed the system, someone implemented it, and someone made the decision to deploy it in a given circumstance.

                            1. 11

Not only that, but there is one other huge aspect that nobody is probably thinking about: this incident is probably going to start the ball rolling on certification and liability for software.

Move fast and break things is probably not going to fly in the face of too many deaths from autonomous cars. Even if they're safer than humans, there are going to be repercussions.

                              1. 8

Even if they're safer than humans, there are going to be repercussions.

Even if they are safer than humans, a human must be held accountable for the deaths they will cause.

                                1. 2

                                  Indeed, and I believe those humans will be the programmers.

                                  1. 4

                                    Well… it depends.

When a bridge breaks down and kills people due to bad construction practices, do you put the bricklayers in jail?

                                    And what about a free software that you get from me “without warranty”?

                                    1. 4

No - but they do take the company that built the bridge to court.

                                      1. 5

                                        Indeed. The same would work for software.

                                        At the end of the day, who is accountable for the company’s products is accountable for the deaths that such products cause.

                                      2. 2

                                        Somewhat relevant article that raised an interesting point RE:VW cheating emissions tests. I think we should ask ourselves if there is a meaningful difference between these two cases that would require us to shift responsibility.

                                        1. 2

                                          Very interesting read.

                                          I agree that the AI experts’ troupe share a moral responsibility about this death, just like the developers at Volkswagen of America shared a moral responsibility about the fraud.

                                          But, at the end of the day, software developers and statisticians were working for a company that is accountable for the whole artifact they sell. So the legal accountability must be assigned at the company’s board of directors/CEO/stock holders… whoever is accountable for the activities of the company.

                                        2. 2

What I'm saying is that those "without warranty" provisions may be deemed invalid in situations like this.

                                        3. 1

                                          I don’t think it’ll ever be the programmers. It would be negligence either on the part of QA or management. Programmers just satisfy specs and pass QA standards.

                                    2. 2

It's hard to take responsibility for something evolving in such a dynamic environment, potentially used for billions of hours every day, for the next X years. I mean, knowing that, you would expect to have 99.99% of cases tested, but here it's impossible.

                                      1. 1

                                        It’s expensive, not impossible.

                                        It’s a business cost and an entrepreneurial risk.

If you can't take the risks and pay the costs, that business is not for you.

                                  2. 4

It's only a higher bar if you look at it from the perspective of "some entity replacing a human." If you look at it from the perspective of a tool created by a company, the focus should be on whether there was negligence in the implementation of the system.

                                    It might be acceptable and understandable for the average human to not be able to react that fast. It would not be acceptable and understandable for the engineers on a self-driving car project to write a system that can’t detect an unobstructed object straight ahead, for the management to sign off on testing, etc.

                                  1. 2

A lot of good stuff. Reading C++ errors is like learning an extra language on top of the C++ language itself; I can't wait to try it out.

                                    How do I get at some private field?

                                    That one is interesting! I wonder how smart it is.

                                    For instance in the following example:

#include <utility>  // for std::pair / std::make_pair

class foo
{
public:
  std::pair<int, int> get_coordinates() const { return std::make_pair(m_x, m_y); }

private:
  int m_x;
  int m_y;
};

void test(foo *ptr)
{
  if (ptr->m_x >= 3)
    ; // etc
}

I wonder if the compiler would be able to figure out that m_x is accessible via ptr->get_coordinates().first?

                                    1. 2

                                      Hah, you’re also cross-posting to HN as I am.

                                      1. 1

                                        :) yes, the author was able to reply on HN and even took time to open a suggestion on GCC’s bugzilla

                                      2. 1

                                        According to godbolt’s trunk gcc, it is not smart enough:

                                        <source>: In function 'void test(foo*)':
                                        <source>:20:12: error: 'int foo::m_x' is private within this context
                                           if (ptr->m_x >= 3)
                                                    ^~~
                                        <source>:13:7: note: declared private here
                                           int m_x;
                                               ^~~
                                        Compiler returned: 1
                                        
                                      1. 3

I've found when I'm initially developing something that it's hard to find 3 use cases for anything but very common things.

This has led me to do two things:

                                        • Find places in older / unrelated code where my new abstraction might be useful, because I like it so much. This has been a great motivation for me to refactor.

• Throw out abstractions with confidence. If I can't find 3 use cases, then I throw it away. The moment a third use case is added or discovered, it'll be obvious it should be abstracted.

                                        1. -2

                                          Here’s my issue: if you’re asking me for an estimate, you’re communicating that what I’m doing isn’t that important. If it were important, you’d get out of my way. A deadline is a resource limit; you’re saying, “This isn’t important enough to merit X, but if you can do it with <X, I guess you can go ahead”. If you ask me for an estimate, I have to guess your X (and slip under it if I want to continue the project, or exceed it if I want to do something else). If that seems self-serving or even dishonest, well let’s be honest about the dishonesty, at least here: estimates are bullshit anyway, so why not use the nonsense game for personal edge? Of course, if you’re the one being asked for an estimate, the system and odds are against you in nearly all ways, and you probably made some career-planning mistakes if you’re my age and still have to give estimates, but never mind that for now….

There are projects that are nice-to-have but not important and might be worth doing, but that aren't worth very much and therefore should be given resource/time limits from on high. I just don't want to work on those. If it's not worth doing with an open deadline, then assign someone at a lower skill level, who can still learn something from low-grade work. This isn't me being a prima donna; this is me being realistic and thinking about job security. If having it done cheaply is more important than having it done well, I can (and should) be replaced by someone else.

                                          1. 3

                                            Businesses regularly put resource limits on investments, I don’t see why software engineering salaries are exempt from this.

                                            1. 0

                                              I don’t see why software engineering salaries are exempt from this.

                                              It might have something to do with the fact that the top 5% of us, at least, are smart enough that we ought to be calling the shots, rather than being pawns on someone else’s board.

                                              1. 2

                                                Unless you are literally on the board of a privately held company, you are pawns on someone else’s board. This isn’t hopeless, it’s just being honest with where actual final financial votes are cast.

                                                How “smart” you are doesn’t mean you deserve to call any shots, as much as anyone who owns the company doesn’t deserve to either. Building relationships, managing expectations, cost analysis and collecting requirements are all part of making engineering estimates, and they are tools for you to exert influence over someone who has ownership/authority.

                                            2. 2

                                              What if all work is estimated? These inferences depend on selective estimation.

                                              1. 1

                                                I’ll disagree with you here a bit–I agree with your last paragraph’s approach, but I think you are leaving out a little bit.

                                                It’s worth it to send overqualified engineers into certain projects exactly because they are more likely to know how to fix problems preemptively and because they are more likely to have a narrower distribution on the time taken to achieve the task. If you want something with a known problemspace done correctly and to a guaranteed standard and in a timely fashion, you shouldn’t send people who are still learning.

                                                “This isn’t important enough to merit X, but if you can do it with <X, I guess you can go ahead”.

                                                Unfortunately, this is a lot of business, right? Like, scheduling and organizing coverage and resources for projects often means that, say, a full rewrite would take too many engineers off of customer-facing work, but incremental cleanups are possible.

                                                From the employee side, it is arbitrary, but there is at least a chance of method to the madness.

                                              1. 22

                                                Working on polishing up my new backup tool. I set out to solve a set of problems:

                                                • Client side encryption.
                                                • Deduplication of similar backups to save space.
                                                • Efficient server side pruning of backups.
                                                • Write only mode so a compromised device does not compromise your backups.
                                                • Work over ssh and use ssh keys for access control.
                                                • Trivial user interface that integrates well with unix: accept directories, arbitrary streams (e.g. pipe in pgdump) or tar streams.

I'm approaching something I'm happy to have people review and use, though there is lots of testing and stabilization that I want to do.
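
On the tar-stream bullet above: most of the work is the standard library's. Here is a rough sketch of the consuming side (not my final interface, just the shape of it):

package main

import (
	"archive/tar"
	"fmt"
	"io"
	"os"
)

// Read a tar stream from stdin and walk its entries, the way a
// backup tool invoked as `tar -c ~/docs | backuptool` might.
// A real tool would chunk and encrypt each entry instead of
// just counting bytes.
func main() {
	tr := tar.NewReader(os.Stdin)
	for {
		hdr, err := tr.Next()
		if err == io.EOF {
			break
		}
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		n, err := io.Copy(io.Discard, tr)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Printf("%s (%d bytes)\n", hdr.Name, n)
	}
}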

                                                1. 11

                                                  This sounds like the tool I’ve been looking for, plus some features I didn’t know I wanted :-D

                                                  1. 3

Sounds interesting! Did you, by chance, try borgbackup, and could you elaborate on the differences between borg and yours? I am not sure about your last point, but at least the others all seem to be supported by it, as far as I understand.

                                                    1. 4

I was unsatisfied with borg for a few reasons, which I will probably elaborate on in a post somewhere. In general I am focusing on ease of use, and I think I have a more user-friendly design. I will see if anyone agrees with me once I get it out there.

                                                    2. 3

                                                      Write only mode

                                                      Yes!! Thank you!

                                                      I have been so jealous of Borg users for so long, but can’t switch because only Duplicity has this feature.

                                                      1. 1

                                                        Isn’t borg serve --append-only what we are talking about here?

                                                        1. 4

                                                          No. Borg only supports symmetric encryption, and closed the public key encryption issue as wontfix: https://github.com/borgbackup/borg/issues/672

                                                          By implementing public key encryption, you allow data sources to operate in what @ac calls “write only mode”, because if a compromised device only has your public key, it cannot compromise your backups (there is also the issue of data destruction by overwriting, but even raw S3 can be used as an append only store if you enable object versioning).

                                                          My use case is installing backup software liberally on every device I use (and I use more devices than I have sole control over). For example, with Borg, you could not back up your home directory on a shared server without giving the administrator of that system the ability to decrypt your entire repository.

                                                          1. 3

                                                            My implementation is currently not exactly as you described, but perhaps I can accommodate this with not too much difficulty. edit: I am sitting in a cafe thinking carefully about how to do it without affecting usability for less advanced users right now.

                                                            1. 2

Good points, thanks for the explanation!

                                                              1. 2

If you trust the server not to leak data, the next best approach is to have a symmetric key per device and then use ssh access controls to prevent access.

                                                                1. 1

                                                                  If you trust the server just use TLS or SSH tunnels to encrypt in motion. If that’s really your threat model there is no need for additional complexity.

                                                                  1. 2

                                                                    For example, with Borg, you could not back up your home directory on a shared server without giving the administrator of that system the ability to decrypt your entire repository.

You have to back up to a different machine with a different administrator. It is true the first admin can decrypt your data, but he cannot fetch it, because the ssh key can be granted write-only access, even with borg via append-only. A random key that is encrypted with a public key and then discarded by the client is probably better though; I'm still thinking about how to do that well.
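
Concretely, that idea is more or less an ephemeral NaCl box. A sketch using golang.org/x/crypto/nacl/box; the function name and key handling below are illustrative, not a final design:

package main

import (
	"crypto/rand"
	"fmt"

	"golang.org/x/crypto/nacl/box"
)

// Encrypt a chunk to the backup owner's public key using a
// throwaway sender keypair. After sealing, the client discards
// the ephemeral private key, so even the client can no longer
// decrypt what it uploaded -- "write only" in practice.
func sealForBackup(chunk []byte, recipientPub *[32]byte) (ephPub [32]byte, nonce [24]byte, ct []byte, err error) {
	pub, priv, err := box.GenerateKey(rand.Reader)
	if err != nil {
		return
	}
	if _, err = rand.Read(nonce[:]); err != nil {
		return
	}
	ct = box.Seal(nil, chunk, &nonce, recipientPub, priv)
	ephPub = *pub
	// priv goes out of scope here and is never stored.
	return
}

func main() {
	// The real recipient key would belong to the backup owner.
	recipientPub, recipientPriv, _ := box.GenerateKey(rand.Reader)

	ephPub, nonce, ct, _ := sealForBackup([]byte("secret chunk"), recipientPub)

	// Only the holder of recipientPriv (e.g. the restore machine)
	// can open the box.
	pt, ok := box.Open(nil, ct, &nonce, &ephPub, recipientPriv)
	fmt.Println(ok, string(pt))
}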

                                                          2. 2

Looking forward to testing this! I'm really struggling with existing backup solutions!

                                                            1. 1

                                                              Awesome! Can’t wait to hear more about this.

                                                              1. 1

                                                                I’d really like to know about how you tackle the intersection of client-side encryption and de-duplication.

                                                                1. 2

It's relatively straightforward using a rolling hash function (https://en.wikipedia.org/wiki/Rolling_hash). The 'shape' or 'fingerprint' of the data guides you in finding split points, and each split chunk is encrypted independently. There is potential that the size of chunks may give some clues about their contents, but there are a few mitigations you can apply, such as random padding, keeping your hash function secret, and a few others.
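
As a toy illustration of split points falling out of the data's "shape", a Gear-style rolling hash chunker looks something like this (the table, mask, and size bounds here are arbitrary; a real chunker tunes all of them):

package main

import (
	"bufio"
	"fmt"
	"io"
	"strings"
)

// Gear-style rolling hash: shifting left each step means bytes more
// than 64 positions back have been shifted out of the 64-bit state,
// giving a sliding window for free.
var gear [256]uint64

func init() {
	s := uint64(0x9E3779B97F4A7C15) // deterministic xorshift-seeded table
	for i := range gear {
		s ^= s << 13
		s ^= s >> 7
		s ^= s << 17
		gear[i] = s
	}
}

const (
	mask     = (1 << 13) - 1 // ~8 KiB average chunks
	minChunk = 2 * 1024
	maxChunk = 64 * 1024
)

// chunks calls emit for each content-defined chunk of r.
func chunks(r io.Reader, emit func([]byte)) error {
	br := bufio.NewReader(r)
	var buf []byte
	var h uint64
	for {
		b, err := br.ReadByte()
		if err == io.EOF {
			if len(buf) > 0 {
				emit(buf)
			}
			return nil
		}
		if err != nil {
			return err
		}
		buf = append(buf, b)
		h = (h << 1) + gear[b]
		// Boundary wherever the hash's low bits are all zero: an
		// insertion early in a file shifts the data but leaves the
		// later boundaries, and hence most chunks, unchanged.
		if (len(buf) >= minChunk && h&mask == 0) || len(buf) >= maxChunk {
			emit(buf)
			buf, h = nil, 0
		}
	}
}

func main() {
	i := 0
	chunks(strings.NewReader(strings.Repeat("some sample data. ", 4096)), func(c []byte) {
		i++
		fmt.Printf("chunk %d: %d bytes\n", i, len(c))
	})
}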

                                                                  Another sticking point is allowing the server to do garbage collection of chunks that are no longer needed while at the same time not being able to read the user data. I came up with a solution I hope to get reviewed around layering trust.

                                                                  1. 1

                                                                    I know about splitting a file into chunks, but how do you derive a repeatable IV/key for a given chunk without leaking the contents of it, or opening yourself up to some form of chosen-plaintext attack?

                                                                    1. 3

I use a random IV and a random encryption key, but the content address (i.e. dedup key) is repeatable by the client as HMAC(DATA, CLIENT_SECRET). AFAIK the attacker cannot recover the secret or the decryption key even if he has a chosen plaintext, and he has no way to derive the data without the secret. An attacker also cannot forge chunks, because the HMAC address will be obviously wrong to a client.

There is also a write cache that prevents the same data from being uploaded twice with the same content address but a different IV, though that is more a performance thing than a security one; I could be wrong. I hope people can shoot down any flaws in my design, which is why I need to get it finalized a bit.
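
In code, the addressing side is tiny (names here are illustrative, not my actual API). The server sees only the HMAC and the ciphertext; without CLIENT_SECRET it cannot test guesses against the address the way it could against a plain hash:

package main

import (
	"crypto/hmac"
	"crypto/rand"
	"crypto/sha256"
	"fmt"
)

// contentAddress is the dedup key: repeatable by the client for
// identical data, opaque to anyone without the secret.
func contentAddress(data, clientSecret []byte) []byte {
	mac := hmac.New(sha256.New, clientSecret)
	mac.Write(data)
	return mac.Sum(nil)
}

func main() {
	secret := make([]byte, 32)
	if _, err := rand.Read(secret); err != nil {
		panic(err)
	}

	a1 := contentAddress([]byte("same chunk"), secret)
	a2 := contentAddress([]byte("same chunk"), secret)
	fmt.Printf("%x\n%x\n", a1, a2) // equal: the client can dedup

	// Verifying a fetched chunk: recompute the HMAC and compare in
	// constant time; a forged chunk won't match its claimed address.
	fmt.Println(hmac.Equal(a1, a2))
}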

                                                                2. 1

That sounds like a really useful tool! I'd seen a reference to Convergent Encryption today/yesterday, which is what "deduplication of similar backups to save space" sounded like. It seems there are fundamental security implications to using it, btw: https://en.wikipedia.org/wiki/Convergent_encryption. Deduping sounds pretty orthogonal to the rest of what it does, and I'd be excited to see a Show and Tell post :)

                                                                  1. 3

                                                                    Yeah, I rejected that specific approach for the reasons described. My keys are random, but some of the ideas are similar from a distance.

                                                                  2. 1

                                                                    This sounds similar to something I’ve had a couple of stabs at (one such stab “currently”). What language are you writing in?

                                                                    My approach is built around two basic “processes”.

                                                                    A “collection” phase during which a series of scripts do source specific “dumps” (eg dump mysql, ldap, etc, identify user generated file system data, etc) into service backup directories

                                                                    A “store” process which compares a hash of each raw file (created by the collection phase) to an existing hash (if found). If no match is made, the hash is stored and the file is gpg encrypted using the specified public key. Once this process finishes, the hash files and gpg files are stored remotely via rsync, using the --link-dest option to create “time machine style” snapshots.

                                                                    The heavy lifting is obviously all “written already” in terms of shashum, gpg and rsync. The glue layer is just shell.

                                                                    I’d be keen to see how our approaches differ and if we can take any ideas from each other’s solutions.

                                                                    1. 1

Mine is written in Go currently. A big difference is that it doesn't sound like your approach deduplicates within files or across similar but not identical files; a tool such as mine could easily be hooked into your store phase to deal with cross-file deduplication.

                                                                      There are similar tools currently out there such as ‘bup’, ‘borg’ and ‘restic’ you should look into. I feel like mine is superior, but those all work and are ready today.

                                                                      1. 1

                                                                        No, it doesn’t attempt any kind of de-dupe except for not storing dupes of the same file if it hasn’t changed.

                                                                        That’s part of why I’m not using those other tools - I want pubkey encryption (as mentioned elsewhere here, it means eg two+ devices can share backup space without leaking data they don’t already possess to the other) and I’d prefer if, when all else fails, I/someone can restore data from a backup by just running regular shell commands.

                                                                        This part can of course be built into a companion tool, but being able to do ssh backup-space -- cat backups/20180210-2300/database/users.sql | gpg | mysql prod-restore-copy is a huge bonus to me. No need for the remote end to support anything beyond basic file access, no worrying about recombining files. No worrying about whether I have the same version of the backup tool installed, and/or if the format/layout has changed.

                                                                        So possibly we have not as many overlapping goals as I originally thought, but it’s always nice to hear about activity in the same space.

                                                                        1. 1

Yeah, tbh I wouldn't release a backup tool without fully documenting the formats and having them re-implementable in a simple way, e.g. as a python script. You need a complexity cap to protect you from yourself. I agree using public/private key pairs is a good idea.

                                                                          Your system seems decent, though you don’t really have access controls protecting the machine from deleting its own backups (perhaps a worm that spreads via ssh). Do you deal with backup rotation?

                                                                          1. 2

So, the original version of this was built to store on an internally controlled file server, and the "store" process finished by touching a trigger file, which (via inotify) caused a daemon with root perms to run on the storage host and remove write access to the last backup from the ssh user rsync connected as.

                                                                            The same daemon also handled pruning of old backups.

                                                                            The new version is designed to work with offsite storage like rsync.net/similar so for now it relies on remote end functionality to protect previous versions (eg zfs snapshots).

                                                                  1. 46

Half this article is out of date as of 2 days ago. GOPATH is mostly going to die with vgo, as is the complaint about deps.

                                                                    Go is kind of an example of what happens when you focus all effort on engineering and not research.

                                                                    Good things go has:

                                                                    • Go has imo the best std library of any language.
                                                                    • Go has the best backwards compatibility I have seen (I’m pretty sure code from go version 1.0 still works today.).
                                                                    • Go has the nicest code manipulation tools I have seen.
                                                                    • The best race condition detector tool around.
• An interface system that is incredibly useful in practice. (I once used the standard library HTTP server over a serial port, because net.Listener is a simple interface; see the sketch just after this list.)
                                                                    • The fastest compiler to use, and to build from source.
                                                                    • Probably the best cross compilation story of any language, and uniformity across platforms, including ones you haven’t heard of.
                                                                    • One of the easiest to distribute binaries across platforms (this is why hashicorp, cockroachdb, ngrok etc choose go imo).
                                                                    • A very sophisticated garbage collector with low pause times.
                                                                    • One of the best runtime performance to ease of use ratios around.
                                                                    • One of the easier to learn languages around.
                                                                    • A compiler that produces byte for byte identical binaries.
• Incredibly useful libraries maintained by google (e.g. here's a complete ssh client and server anyone can use: https://godoc.org/golang.org/x/crypto/ssh).
• Lots of money invested in keeping it working well from many companies: cloudflare, google, uber, hashicorp and more.
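
To illustrate the interface point: http.Serve accepts any net.Listener, so the same handler code runs over TCP, a unix socket, or a hand-rolled Listener wrapping a serial port. A minimal sketch, with plain TCP standing in for the serial port:

package main

import (
	"fmt"
	"net"
	"net/http"
)

// Swap net.Listen for your own net.Listener implementation to
// change transports; the HTTP server neither knows nor cares.
func main() {
	ln, err := net.Listen("tcp", "127.0.0.1:8080")
	if err != nil {
		panic(err)
	}
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, "hello over any net.Listener")
	})
	http.Serve(ln, nil)
}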

Go is getting something that looks like a damn good versioning story, just way too late.

Go should have, in my opinion and in order of importance:

• Ways to express immutability, as it is a concurrent language.
                                                                    • More advanced static analysis tools that can prove properties of your code (perhaps linked with the above).
                                                                    • Generics.
• Some sort of slightly more sophisticated pattern matching.

                                                                    Go maybe should have:

                                                                    • More concise error handling?
                                                                    1. 53

                                                                      I have been involved with Go since the day of its first release, so almost a decade now, and it has been my primary language for almost as long. I have written the Solaris port, the ARM64 port, and the SPARC64 port (currently out of tree). I have also written much Go software for myself and for others.

                                                                      Go is my favorite language, despite everything I write below this line.

                                                                      Everything you say is true, so I will just add more to your list.

My main problem with Go is that, as an operating system, it's too primitive, it's incomplete. Yes, Go is an operating system, almost. Almost, but not quite. Half an operating system. As an operating system it lacks things like memory isolation, process identifiers, and some kind of a distributed existence. Introspection exists somewhat, but it's very weak. Let me explain.

Go presents the programmer with abstractions traditionally presented by operating systems. Take concurrency, for example. Go gives you goroutines, but takes away threads, and takes away half of processes (you can fork+exec, but not fork). Go gives you the net package instead of the socket interface (the latter is not taken away, but it's really not supposed to be used by the average program). Go gives you net/http, instead of leaving you searching for nginx, or whatever. Life is good when you use pure Go packages and bad when you use cgo.

                                                                      The idea is that Go not only has these rich features, but that when you are programming in Go, you don’t have to care about all the OS-level stuff underneath. Go is providing (almost) all abstractions. Go programming is (almost) the same on Windows, OpenBSD and Plan 9. That is why Go programs are generally portable.

                                                                      I love this. As a Plan 9 person, you might imagine my constant annoyance with Unix. Go isolates me from that, mostly, and it is great, it’s fantastic.

                                                                      But it doesn’t go deep enough.

                                                                      A single Go program instance is one operating system running some number of processes (goroutines), but two Go program instances are two operating systems, instead of one distributed operating system, and in my mind that is one too many operating systems.

“Deploying” a goroutine is one go statement away, but deploying a Go program still requires init scripts, systemds, sshs, puppets, clouds, etc. Deploying a Go program is almost the same as deploying C, or PHP, or whatever. It's out of scope for the Go operating system. Of course that's a totally sensible option, it just doesn't align with what I need.

                                                                      My understanding about Erlang (which I know little of, so forgive me if I misrepresent it) is that once you have an Erlang node running, starting a remote Erlang process is almost as easy as starting a local Erlang process. I like that. I don’t have to fuck with kubernetes, ansible, it’s just a single, uniform, virtual operating system.

Goroutines inside a single process have very rich communication methods, Go channels, even mutexes if you desire them. But goroutines in different processes are handicapped. You have to think about how to marshal data and RPC protocols. The difficulty of getting two goroutines in different processes to talk to each other is about the same as getting some C, or Python code, to talk to Go. Since I only want Go to talk to Go, I don't think that's right. It should be easier, and it should feel native. Again, I think Erlang does better here.
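
To make the friction concrete: even with the standard library's net/rpc, which is about the least ceremony available, talking across processes means exported types, registration, and dialing over a socket, nothing like a channel send. A sketch, with both ends in one process for brevity:

package main

import (
	"fmt"
	"log"
	"net"
	"net/rpc"
)

// Args and Calc are the exported types net/rpc requires; compare
// this ceremony with sending a value over an in-process channel.
type Args struct{ A, B int }

type Calc int

func (c *Calc) Add(args *Args, reply *int) error {
	*reply = args.A + args.B
	return nil
}

func main() {
	// "Server" side: register the service and serve over a socket.
	rpc.Register(new(Calc))
	ln, err := net.Listen("tcp", "127.0.0.1:9999")
	if err != nil {
		log.Fatal(err)
	}
	go rpc.Accept(ln)

	// "Client" side: dial, then marshal arguments across.
	client, err := rpc.Dial("tcp", "127.0.0.1:9999")
	if err != nil {
		log.Fatal(err)
	}
	var sum int
	if err := client.Call("Calc.Add", &Args{A: 2, B: 3}, &sum); err != nil {
		log.Fatal(err)
	}
	fmt.Println(sum) // 5
}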

                                                                      Goroutines have no process ids. This makes total sense if you restrict yourself to a single-process universe, but since I want a multi-process universe, and I want to avoid thinking about systemds and dockers, I want to supervise goroutines from Go. Which means goroutines should have process ids, and I should be able to kill and prioritize them. Erlang does this, of course.

What I just described in the last two paragraphs would preclude shared memory. I'm willing to live with that in order to get network transparency.

Go programs have ways to debug and profile themselves. Stack traces are one function call away, and there's an easy-to-use profiler. But this is not enough. Sometimes you need a debugger. Debugging Go programs is an exercise in frustration. It's much more difficult than debugging C programs.

                                                                      I am probably one of the very few people on planet Earth that knows how to profile/debug Go programs with a grown-up tool like DTrace or perf. And that’s because I know assembly programming and the Go runtime very well. This is unacceptable. Some people would hope that something would happen to Go so that it works better with these tools, but frankly, I love the “I am an operating system” aspect of Go, so I would want to use something Go-native. But I want something good.

This post is getting too long, so I will stop now. Notice that I didn't feel a need for generics in these 9 years. I must also stress that I am a low-level programmer. I like working in the kernel. I like C and imperative programming. I am not one of those guys who prefers high-level languages (that do not have shared memory) and so naturally wants Go to be the same. On the contrary. I found out what I want only through a decade of Go experience. I have never used a language without shared memory before.

I think Go is the best language for writing command-line applications. Shared memory is very useful in that case, and the flat, invisible goroutines prevent language abuse and “just work”. The lack of a debugger, etc., is not important for command-line applications, and command-line applications are run locally, so you don't need dockers and chefs. But when it comes to distributed systems, I think we could do better.

                                                                      In case it’s not clear, I wouldn’t want to change Go, I just want a different language for distributed systems.

                                                                      1. 11

I've done some limited erlang programming and it is very much a distributed OS, to the point where you are writing a system more than a program. You even start third-party code as “applications” from the erlang shell before you can make calls to them. erlang's fail-fast error handling, where you let supervisors deal with problems, is also really fun to use.

I haven't used dtrace much either, but I have seen the power; something like that on running go systems would also be neat.

                                                                        1. 5

                                                                          Another thing that was interesting about erlang is how the standard library heavily revolves around timers and state machines because anything could fail at any point. For example gen_server:call() (the way to call another process implementing the generic service interface) by default has a 5 second timeout that will crash your process.

                                                                        2. 2

Yes, Go is an operating system, almost. Almost, but not quite. Half an operating system. As an operating system it lacks things like memory isolation, process identifiers, and some kind of a distributed existence.

                                                                          This flipped a bit in my head:

                                                                          Go is CMS, the underlying operating system is VM. That is, Go is an API and a userspace, but doesn’t provide any security or way to access the outside world in and of itself. VM, the hypervisor, does that, and, historically, two different guests on the same hypervisor had to jump through some hoops to talk to each other. In IBM-land, there were virtual cardpunches and virtual cardreaders; these days, we have virtual Ethernet.

                                                                          So we could, and perhaps should, have a language and corresponding ecosystem which takes that idea as far as we can, implementation up, and maybe it would look more like Erlang than Go; the point is, it would be focused on the problem of building distributed systems which compile to hypervisor guests with a virtual LAN. Ideally, we’d be able to abstract away the difference between “hypervisor guest” and “separate hardware” and “virtual LAN” and “real LAN” by making programs as insensitive as possible to timing variation.

                                                                        3. 18

                                                                          How can vgo - announced just two days ago - already be the zeitgeist answer for “all of go’s dependency issues are finally solved forever”?

govendor, dep, glide - there have been many efforts, and people still create their own bespoke tools to deal with GOPATH, relative imports being broken by forks, and other annoying problems. Go has dependency management problems.

                                                                          1. 2

                                                                            We will see how it pans out.

                                                                          2. 15

                                                                            Go has the best backwards compatibility I have seen (I’m pretty sure code from go version 1.0 still works today.).

                                                                            A half-decade of code compatibility hardly seems remarkable. I can still compile C code written in the ’80s.

                                                                            1. 11

                                                                              I have compiled Fortran code from the mid-1970s without changing a line.

                                                                              1. 1

                                                                                Can you compile Netscape Communicator from 1998 on a modern Linux system without major hurdles?

                                                                                1. 13

You do understand that the major hurdles here are related to the external libraries and processes it interacts with, and that Go does not save you from such hurdles either (other than recommending that you vendor compatible versions where possible), I hope.

                                                                                2. 1

A valid point, not counting cross-platform portability and system facilities. Go has a good track record and trajectory, but you may be right.

                                                                                3. 5

                                                                                  Perfect list (the good things, and the missing things).

                                                                                  1. 3

                                                                                    The fixes the go team have finally made to GOROOT and GOPATH are great. I’m glad they finally saw the light.

But PWD is not a “research concern” that they were putting off in favor of engineering. The go team actively digs their heels in on any choice or concept they don't publish first, and it's why, in spite of simple engineering (checking PWD and install location first), they argued for years on mailing lists that environment variables (which rob pike supposedly hates, right?) are superior to simple heuristics.

Your “good things go has” list is also very opinionated (code manipulation tools better than C# or Java? Distribution of binaries.. do you just mean static binaries?? Backwards compatibility that requires recompilation???), but I definitely accept that's your experience, and evidence I have to the contrary would be based on my experiences.

                                                                                    1. 4

                                                                                      The fixes the go team have finally made to GOROOT and GOPATH are great.

You haven't had to set, and shouldn't have set, GOROOT since Go 1.0, released six years ago.

                                                                                      (which rob pike supposedly hates, right?)

                                                                                      Where did you get that idea?

                                                                                      1. 5

                                                                                        Yes, you do have to set GOROOT if you use a go command that is installed in a location different than what it was compiled for, which is dumb considering the go command could just find out where it exists and work from there. See: https://golang.org/doc/go1.9#goroot for the new changes that are sane.

                                                                                        And I got that idea from.. rob pike. Plan9 invented a whole new form of mounts just to avoid having a PATH variable.

                                                                                        1. 4

you did have to set GOROOT if you used a go command that was installed in a location different than what it was compiled for

                                                                                          So don’t do that… But yes, that’s also not required anymore. Also, if you move /usr/include you will find out that gcc won’t find include files anymore… unless you set $CPPFLAGS. Go was hardly unique. Somehow people didn’t think about moving /usr/include, but they did think about moving the Go toolchain.

                                                                                          Plan9 invented a whole new form of mounts just to avoid having a PATH variable.

No, Plan 9 invented a new form of mounts in order to implement a particular kind of distributed computing. One consequence of that is that $path is not needed in rc(1) anymore, though it is still there if you want to use it.

                                                                                          In Plan 9 environment variables play a crucial role, for example $objtype selects the toolchain to use and $cputype selects which binaries to run.

                                                                                          Claiming that Rob Pike doesn’t like environment variables is asinine.

                                                                                          1. 20

                                                                                            “So don’t do that…” is the best summary of what I dislike about Go.

                                                                                            Ok, apologies for being asinine.

                                                                                          2. 1

I always compiled my own go toolchain because it takes about 10 seconds on my PC and is two commands (cd src && ./make.bash). Then I could put it wherever I want. I have never used GOROOT in many years of using Go.

                                                                                        2. 0

                                                                                          C# and Java certainly have great tools, and C++ has some OK tools – all in the context of bloated IDEs that I dislike using (remind me again why compiling C++ code can crash my text editor?). But I will concede the point that perhaps C# refactoring tools are on par.

                                                                                          I was never of the opinion that GOPATH was objectively bad; it has some good properties and some bad ones.

                                                                                          Distribution of binaries… do you just mean static binaries? Backwards compatibility that requires recompilation???

                                                                                          Dynamic libraries have only ever caused me problems. I use operating systems that I compile from source, so I don’t really see any benefit from them.

                                                                                      1. 9

                                                                                        Nearly everything he says about J is also true of DrRacket.

                                                                                        1. 4

                                                                                          Which means that what’s not could be a roadmap for bridging the gap. I did say in another thread that one might make a DSL out of these array languages in something like Racket. So, what does DrRacket lack that the author said the J environment has?

                                                                                          1. 3

                                                                                            Probably some of the “Labs” features. There is the “gracket” format as a starting point.

                                                                                        1. 12

                                                                                          The Go project is absolutely fascinating to me.

                                                                                          How they managed not to solve many hard problems of a language, its tooling, or its production workflow, yet still solve enough of them to win a huge amount of developer mindshare, is something I think we should get historians to look into.

                                                                                          I used Go professionally for ~2+ years, and so much of it was frustrating to me, but large swaths of our team found it largely pleasant.

                                                                                          1. 12

                                                                                            I’d guess there is a factor depending on what you want from a language. Sure, it doesn’t have generics and its versioning system leaves a lot to be wished for. But personally, if I have to write anything with networking and concurrency, usually my first choice is Go, because of its very nice standard library and a certain sense of being thought-through when it comes to concurrency/parallelism – at least so it appears when comparing it to other imperative languages like Java, C or Python. Another popular point is how the language, as compared to C-ish languages, doesn’t give you too much freedom when it comes to formatting – there isn’t a constant drive to use as few characters as possible (something I’m very prone to doing), or any debates like tabs vs. spaces, where to place the opening braces, etc. There’s really something relieving about this to me, that makes the language, as you put it, “pleasant” to use (even if you might not agree with it).

                                                                                            And regarding the standard library, one thing I always find interesting is how far you can get by just using what’s already packaged in Go itself. Now, I haven’t really worked on anything with more than 1500 LOC (which really isn’t much for Go), and most of the external packages I used were for the sake of convenience. Maybe this totally changes when you work in big teams or on big projects, but it is something I can understand people liking. Especially considering that the Go team has this Go 1.x compatibility promise, so that you don’t have to worry that much about versioning when it comes to the standard library packages.

                                                                                            I guess the worst mistake one can make is wanting to treat it like Haskell or Python, forcing a different paradigm onto it. Just as one might miss macros when one changes from C to Java, or currying when one switches from Haskell to Python, but learns to accept these things and think differently, so, I believe, one should approach Go: using its strengths, which it has, instead of lamenting its weaknesses (which undoubtedly exist too).

                                                                                            1. 7

                                                                                              I think their driving philosophy is that if you’re uncertain of something, always make the simpler choice. You sometimes go down the wrong path following this, but I’d say that in general this is a winning strategy. Complexity can always be bolted on later, but removing it is much more difficult.

                                                                                              The whole IT industry would be a happier place if it followed this, but seems to me that we usually do the exact opposite.

                                                                                              1. 1

                                                                                                I think their driving philosophy is that if you’re uncertain of something, always make the simpler choice.

                                                                                                Nah - versioning & dependency management is not some new thing that they couldn’t possibly have understood until they had waited 8 years. Same with generics.

                                                                                                Whereas for generics I can understand a complexity argument for sure, versioning and dependency management are complexities everyone needed to deal with either way.

                                                                                                1. 3

                                                                                                  If you understand the complexity argument for generics, then I think you could accept it for dependency management too. Python, Ruby and JavaScript, for example, have a chaotic history in terms of the solutions they adopted for dependency management, and even nowadays the ecosystem is not fully stabilized. In the JavaScript community, Facebook released yarn in October 2016 because the existing tooling was not adequate, and more and more developers have been adopting it since then. I would not say that dependency management is a fully solved problem.

                                                                                                  1. 1

                                                                                                    I would not say that dependency management is a fully solved problem.

                                                                                                    Yes it is: the answer is pinning all dependencies, including transitive dependencies. All this other stuff is just heuristics that end up failing later on and people end up pinning anyways.
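
                                                                                                    For a concrete (and purely hypothetical) illustration, a fully pinned dependency set in the go.mod format that vgo proposes could look like this – the module path and versions are made up, and the // indirect marker denotes a pinned transitive dependency:

                                                                                                      // Hypothetical go.mod: module path and versions are invented.
                                                                                                      module example.com/app

                                                                                                      require (
                                                                                                          github.com/pkg/errors v0.8.0
                                                                                                          golang.org/x/text v0.3.0 // indirect
                                                                                                      )

                                                                                                    With every version fixed like this, a build has nothing left to resolve.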

                                                                                                    1. 1

                                                                                                      I agree about pinning. By the way, this is what vgo does. But what about the resolution algorithm used to add/upgrade/downgrade dependencies? Pinning doesn’t help with this. This is what makes Minimal Version Selection, the strategy adopted by vgo, original and interesting.

                                                                                                      1. 1

                                                                                                        I’m not sure I understand what the selection algorithm is doing, then. From my experience: you change the pin and run your tests; if they pass, you’re good; if not, you fix code or decide not to change the version. What is MVS doing for this process?

                                                                                                        1. 1

                                                                                                          When you upgrade a dependency that has transitive dependencies, changing the pin of the upgraded dependency is not enough. Quite often, you also have to update the pins of the transitive dependencies, which can have an impact on the whole program. When your project is large, this can be difficult to do manually. The Minimal Version Selection algorithm offers a new solution to this problem: for each module it selects the oldest version allowed by the combined requirements, which eliminates the redundancy of having two different files (manifest and lock) that both specify which module versions to use. (A toy sketch follows below.)
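
                                                                                                          To make that concrete, here is a toy sketch of Minimal Version Selection in Go, under heavy simplifying assumptions: versions are bare integers and the requirement graph is a hard-coded map with made-up module names. The real algorithm is described in Russ Cox’s articles; this only shows the core rule – each module ends up at the highest minimum version anyone asks for, which is exactly the lowest version that satisfies everyone:

                                                                                                            package main

                                                                                                            import "fmt"

                                                                                                            // reqs maps "module@version" to the minimum versions that release requires.
                                                                                                            // All names and versions here are invented for illustration.
                                                                                                            var reqs = map[string]map[string]int{
                                                                                                                "app@1":  {"lib": 2, "util": 1},
                                                                                                                "lib@2":  {"util": 3},
                                                                                                                "util@1": {},
                                                                                                                "util@3": {},
                                                                                                            }

                                                                                                            // mvs walks the requirement graph from the root, keeping for each
                                                                                                            // module the highest minimum version any participant demands.
                                                                                                            func mvs(root string) map[string]int {
                                                                                                                selected := map[string]int{}
                                                                                                                var visit func(node string)
                                                                                                                visit = func(node string) {
                                                                                                                    for mod, min := range reqs[node] {
                                                                                                                        if min > selected[mod] {
                                                                                                                            selected[mod] = min
                                                                                                                            visit(fmt.Sprintf("%s@%d", mod, min))
                                                                                                                        }
                                                                                                                    }
                                                                                                                }
                                                                                                                visit(root)
                                                                                                                return selected
                                                                                                            }

                                                                                                            func main() {
                                                                                                                fmt.Println(mvs("app@1")) // map[lib:2 util:3]
                                                                                                            }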

                                                                                                          1. 1

                                                                                                            Unless it wasn’t clear in my original comment, when I say pin dependencies I am referring to pinning all dependencies, including transitive dependencies. So is MVS applied during build or is it a curation tool to help discover the correct pin?

                                                                                                            1. 1

                                                                                                              I’m not sure I understand your question. MVS is an algorithm that selects a version for each dependency in a project, according to a given set of constraints. The vgo tool runs the MVS algorithm before a build, when a dependency has been added/upgraded/downgraded/removed. If you have the time, I suggest you read Russ Cox’s article, because it’s difficult to summarize in a comment ;-)

                                                                                                              1. 1

                                                                                                                I am saying that with pinned dependencies, no algorithm is needed during build time, as there is nothing to compute: every dependency version is known a priori.

                                                                                                                1. 1

                                                                                                                  I agree with this.

                                                                                              2. 4

                                                                                                I had a similar experience with Elm. In my case, it seemed like some people weren’t in the habit of questioning the language or thinking critically about their experience. For example, debugging in Elm is very limited. Some people I worked with came to like the language less for this reason. Others simply discounted their need for better debugging. I guess this made the reality easier to accept. It seemed easiest for people whose identities were tied to the language, who identified as elm programmers or elm community members. Denying personal needs was an act of loyalty.

                                                                                                1. 2

                                                                                                  How they managed not to solve many hard problems of a language, its tooling, or its production workflow, yet still solve enough of them to win a huge amount of developer mindshare, is something I think we should get historians to look into.

                                                                                                  I think you’ll find they already have!

                                                                                                1. 2

                                                                                                  Maybe a dumb question, but in semver what is the point of the third digit? A change is either backwards compatible, or it is not. To me that means only the first two digits do anything useful? What am I missing?

                                                                                                  It seems like the openbsd libc is versioned as major.minor for the same reason.

                                                                                                  1. 9

                                                                                                    Minor version is backwards compatible. Patch level is both forwards and backwards compatible.

                                                                                                    1. 2

                                                                                                      Thanks! I somehow didn’t know this for years until I wrote a blog post airing my ignorance.

                                                                                                    2. 1

                                                                                                      “PATCH version when you make backwards-compatible bug fixes.” See: https://semver.org

                                                                                                      1. 1

                                                                                                        I still don’t understand what the purpose of the PATCH version is. If minor versions are backwards compatible, what is the point of adding a third version number?

                                                                                                        1. 3

                                                                                                          They want a difference between new functionality (that doesn’t break anything) and a bug fix.

                                                                                                          I.e., if it was only X.Y, then when you add a new function but don’t break anything… do you change Y or do you change X? If you change X, then you are saying you broke stuff, so clearly changing X for a new feature is a bad idea. So you change Y – but if you look at just the Y change, you don’t know if it was a bug fix or some new function/feature they added. You have to go read the changelog/release notes, etc. to find out.

                                                                                                          With the 3 levels, you know if a new feature was added or if it was only a bug fix.

                                                                                                          Just X.Y would be enough for compatibility purposes. But the semver people clearly wanted that differentiation: to be able, by looking only at the version number, to know whether a new feature was added or not.
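
                                                                                                          Here is a minimal sketch of those bump rules, assuming the conventions from semver.org (the Change type and bump helper are hypothetical names, not a real library):

                                                                                                            package main

                                                                                                            import "fmt"

                                                                                                            type Version struct{ Major, Minor, Patch int }

                                                                                                            type Change int

                                                                                                            const (
                                                                                                                BugFix     Change = iota // backwards- and forwards-compatible fix
                                                                                                                NewFeature               // backwards-compatible addition
                                                                                                                Breaking                 // incompatible API change
                                                                                                            )

                                                                                                            // bump applies the semver rule for each kind of change,
                                                                                                            // resetting the lower components on every bump.
                                                                                                            func bump(v Version, c Change) Version {
                                                                                                                switch c {
                                                                                                                case Breaking:
                                                                                                                    return Version{v.Major + 1, 0, 0}
                                                                                                                case NewFeature:
                                                                                                                    return Version{v.Major, v.Minor + 1, 0}
                                                                                                                default: // BugFix
                                                                                                                    return Version{v.Major, v.Minor, v.Patch + 1}
                                                                                                                }
                                                                                                            }

                                                                                                            func main() {
                                                                                                                fmt.Println(bump(Version{1, 3, 0}, BugFix))     // {1 3 1}
                                                                                                                fmt.Println(bump(Version{1, 3, 0}, NewFeature)) // {1 4 0}
                                                                                                            }

                                                                                                          Note how the reset encodes the distinction: 1.3.0 → 1.3.1 says “fix only”, while 1.3.0 → 1.4.0 says “new feature, nothing broken”.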

                                                                                                          1. 1

                                                                                                            To show that there was any change at all.

                                                                                                            Imagine you don’t use SHA-1s or git; this would show that there was a new release.

                                                                                                            1. 1

                                                                                                              But why can’t you just increment the minor version in that case? A bug fix is also backwards compatible.

                                                                                                              1. 5

                                                                                                                Imagine you have authored a library, and have released two versions of it, 1.2.0 and 1.3.0. You find out there’s a security vulnerability. What do you do?

                                                                                                                You could release 1.4.0 to fix it. But, maybe you haven’t finished what you planned to be in 1.4.0 yet. Maybe that’s acceptable, maybe not.

                                                                                                                Some users using 1.2.0 may want the security fix, but also do not want to upgrade to 1.3.0 yet for various reasons. Maybe they only upgrade so often. Maybe they have another library that requires 1.2.0 explicitly, through poor constraints or for some other reason.

                                                                                                                In this scenario, releasing a 1.2.1 and a 1.3.1, containing the fixes for each release, is an option.

                                                                                                                1. 2

                                                                                                                  It sort of makes sense, but if minor versions were truly backwards compatible I can’t see a reason why you would ever want to hold back. Minor and patch seem to me to be the same concept, just one with a higher risk level.

                                                                                                                  1. 4

                                                                                                                    Perhaps a better definition is: library minor version changes may expose functionality to end users that you, as an application author, did not intend.

                                                                                                                    1. 2

                                                                                                                      I think it’s exactly a risk management decision. More change means more risk, even if it was intended to be benign.

                                                                                                                      1. 2

                                                                                                                        Without the patch version it makes it much harder to plan future versions and the features included in those versions. For example, if I define a milestone saying that 1.4.0 will have new feature X, but I have to put a bug fix release out for 1.3.0, it makes more sense that the bug fix is 1.3.1 rather than 1.4.0 so I can continue to refer to the planned version as 1.4.0 and don’t have to change everything which refers to that version.

                                                                                                              2. 1

                                                                                                                I remember seeing a talk by Rich Hickey where he criticized the use of semantic versioning as fundamentally flawed. I don’t remember his exact arguments, but have semver proponents grappled effectively with them? Should the Go team be wary of adopting semver? Have they considered alternatives?

                                                                                                                1. 3

                                                                                                                  I haven’t watched the talk yet, but my understanding of his argument was “never break backwards compatibility.” This is basically the same as new major versions, except it requires you to give a new name for each new major version. I don’t inherently disagree, but it doesn’t really seem like some grand deathblow to the idea of semver to me.

                                                                                                                  1. 2

                                                                                                                    IME, semver itself is fundamentally flawed because humans are the deciders of the new version number and we are bad at it. I don’t know how many times I’ve gotten into a discussion with someone where they didn’t want to increase the major because they thought high majors looked bad. Maybe at some point it can be automated, but I’ve had plenty of minor version updates that were not backwards compatible, and the same for patch versions. Or, what’s happened to me in Rust multiple times: the minor version of a package is incremented, but the new feature depends on a newer version of the compiler, so it is breaking in terms of compiling. I like the idea of a versioning scheme that lets you tell the chronology of versions, but I’ve found semver to work right up until it doesn’t, and it’s always a pain. I advocate pinning all deps in a project.

                                                                                                                    1. 2

                                                                                                                      It’s impossible for computers to automate. For one, semver doesn’t define what “breaking” means. For two, the only way that a computer could fully understand if something is breaking or not would be to encode all behavior in the type system. Most languages aren’t equipped to do that.

                                                                                                                      Elm has tools to do at least a minimal kind of check here. Rust has one too, though it’s not as widely used. (And even these can only catch type-level breaks – see the sketch below.)

                                                                                                                      I advocate pinning all deps in a project.

                                                                                                                      That’s what lockfiles give you, without the downsides of doing it manually.
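
                                                                                                                      To illustrate the point above about types not being enough, here is a tiny hypothetical Go example: the two versions of a function have identical signatures, so any signature-diffing tool would call the change compatible, yet callers silently break:

                                                                                                                        package main

                                                                                                                        import "fmt"

                                                                                                                        // v1: pct is a percentage, so Discount(100, 10) means “10% off”.
                                                                                                                        func DiscountV1(price, pct float64) float64 {
                                                                                                                            return price * (1 - pct/100)
                                                                                                                        }

                                                                                                                        // v2: same signature, but pct is now a fraction (0.1 means 10% off).
                                                                                                                        // No type checker can see this semantically breaking change.
                                                                                                                        func DiscountV2(price, pct float64) float64 {
                                                                                                                            return price * (1 - pct)
                                                                                                                        }

                                                                                                                        func main() {
                                                                                                                            fmt.Println(DiscountV1(100, 10)) // 90
                                                                                                                            fmt.Println(DiscountV2(100, 10)) // -900: existing callers silently break
                                                                                                                        }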

                                                                                                            1. 20

                                                                                                              This is something I pushed against a lot at my last job. We wanted to hire Juniors and Associates, but every time we interviewed one we always rejected them as “not experienced enough”. The training is always someone else’s problem.

                                                                                                              We’ve known for a long time how to fix this: train people. Companies don’t like it because they don’t “have the time or money”, but this is the exact opposite of the truth. Edwards Deming calls “emphasis on short term profits over long term consistency” one of the deadly diseases of modern industry.

                                                                                                              One idea I had to make this more palatable to managers is to hire juniors as programming assistants, that spend part of their time doing training and another part doing helpful work for other developers.

                                                                                                              The reality is that most software developers don’t stay in one place very long, so maybe it doesn’t make sense to invest a lot in training someone?

                                                                                                              Good thing investing in training leads to higher retention!

                                                                                                              1. 2

                                                                                                                Our industry’s inability to mentor and train people effectively in software engineering is due to the wizard myth that somehow keeps going and going, and is ruining everything: interviews, training, quality, and process.

                                                                                                              1. 1

                                                                                                                Walking around Tokyo, I often get the feeling of being stuck in a 1980’s vision of the future and in many ways it’s this contradiction which characterises the design landscape in Japan.

                                                                                                                Could this also be because many American films in the 80’s about the future used Japanese culture? Rewatching the original Blade Runner made me think about this.

                                                                                                                1. 3

                                                                                                                  Japan is one of our favorite places to visit, but there is a definite retro-futuristic vibe going on. Cash everywhere, or single-purpose cash cards instead of credit cards, fax machines, high-speed Internet access on your feature phone, no air conditioning or central heat but a robot vending machine at 7/11.

                                                                                                                  (We kept having children and so we haven’t gotten to travel internationally for a while now, but that’s our memory of it.)

                                                                                                                  1. 2

                                                                                                                    The feature phones have died – everybody on the train is staring at their iPhone or Android now. Contactless smart cards (Suica, Pasmo, etc.), used for train fares, are gaining momentum as payment cards in 7/11 etc., but otherwise it’s still mostly cash-only.

                                                                                                                    Otherwise it’s pretty much the same.

                                                                                                                  2. 2

                                                                                                                    Living in NYC, it feels like the 70’s version of the future!

                                                                                                                  1. 3

                                                                                                                    Critique of Everyday Life. I’ve almost completed my workthrough, but largely I see reading anything that expands my model of the world as a positive.

                                                                                                                    1. 5

                                                                                                                      While I have personally not used it, is this not something orgmode (emacs) does?

                                                                                                                      1. 4

                                                                                                                        Org could be one component of a solution for this, but on its own it lacks: a way to edit via mobile/other devices, any means of uploading images, a blessed rendering path (there are many ways to render/export org files into something for display).

                                                                                                                        For instance, one solution might be to use Org’s “publish” feature. You could render to HTML, push that to some web host somewhere with rsync (that handles viewing on other/mobile devices). For editing you could sync your org source files (and any org-rendered images via things like plantuml, as well as static images) with something like syncthing/git/Dropbox/Box/iCloud/OneDrive etc. in combination with a non-Emacs editing app like Beorg (iOS) or Orgzly (Android).

                                                                                                                        That would be a workable and powerful system, but I think we have to admit it’s not as simple to use as just clicking “edit” in a wiki page from something like dokuwiki/mediawiki :-)

                                                                                                                        1. 2

                                                                                                                          I’ve found I don’t do any significant note editing on the phone - just capture.

                                                                                                                          So I use Google Photos + Orgzly + Syncthing + emacs. It used to be MobileOrg, and I started with org ~2005, so these files got bones.

                                                                                                                          1. 2

                                                                                                                            I have been looking for something like beorg for a long time. Thanks!!

                                                                                                                          2. 1

                                                                                                                            I love orgmode and use it on and off, but last I looked, sharing was read-only and meant exporting a static document or running something (node, ruby) that parses the format on the fly.