1. 11

    i’ve been really digging lwn lately

    1. 6

      Same. Just subscribed!

      1. 6

        oh you know I just realized all the posts I’ve loved recently have been @benhoyt. Good job, Ben!

        1. 14

          You’re welcome. Do subscribe – that’s how the (very small) team makes their living and how the site keeps going.

    1. 3

      This is great! We wrote https://godoc.org/github.com/spacemonkeygo/tlshowdy (which takes a slightly different approach) to make it take even less code than 105 lines, if that helps anyone. See the Peek method, which returns the ClientHello and a new Conn with the handshake bytes restarted.
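
      For anyone curious, here’s a rough sketch of the peek-and-replay idea behind a Peek like that (illustrative only, not tlshowdy’s actual API): read the first bytes of the stream, then hand back a reader that replays them before the rest, so the handshake can proceed as if nothing was consumed.

      ```go
      package main

      import (
      	"bytes"
      	"fmt"
      	"io"
      )

      // peek reads up to n bytes from r, then returns those bytes plus a new
      // reader that replays them before the rest of the stream. tlshowdy's Peek
      // does the analogous thing with a net.Conn so the TLS handshake can be
      // restarted after inspecting the ClientHello. (Sketch, not the library's API.)
      func peek(r io.Reader, n int) ([]byte, io.Reader, error) {
      	buf := make([]byte, n)
      	m, err := io.ReadFull(r, buf)
      	if err != nil && err != io.ErrUnexpectedEOF {
      		return nil, nil, err
      	}
      	head := buf[:m]
      	return head, io.MultiReader(bytes.NewReader(head), r), nil
      }

      func main() {
      	// 0x16 is the TLS handshake record type, the first byte of a ClientHello.
      	src := bytes.NewReader([]byte("\x16\x03\x01...rest of handshake"))
      	head, replay, err := peek(src, 1)
      	if err != nil {
      		panic(err)
      	}
      	fmt.Printf("handshake record? %v\n", head[0] == 0x16)
      	all, _ := io.ReadAll(replay)
      	fmt.Printf("first replayed byte intact? %v\n", all[0] == 0x16)
      }
      ```

      The same wrapping works for a real net.Conn: the wrapper satisfies the interface, so downstream TLS code never knows the bytes were inspected.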

      1. 12

        This blog post exactly describes the workflow of Gerrit, except that Gerrit handles most of this workflow for you, especially in the case where part 1 of a series needs updates in response to code review.

        GitHub adopting the Gerrit model would be amazing. In the meantime, boy oh boy this is such an advertisement for Gerrit.

        1. 3

          So, what does this mean for Signal?

          1. 1

            Lots of discussion on this at the moment. It means that Secure Value Recovery could be used by a malicious Signal server to exfiltrate user data while attesting to some benign version of the code. That user data could be used to “recover” someone else’s Signal account.

          1. 12

            That’s interesting, however OpenRA is already great if you want to play Red Alert :)

            1. 11

              OpenRA is indeed super, amazingly great. One thing EA said is that they are releasing this code under the GPL so it will be compatible with OpenRA, so I assume the expectation is OpenRA can become even better with this.

              1. 4

                I’m pretty sure the original code will help with some edge cases, but still… it would have been remarkable if they had released that code back when OpenRA really needed it. Releasing it now, when OpenRA is already a better engine overall, sounds more like “since assets are all we can sell now…”. Even then, id Software used to open source its engines before they became retrogaming engines.

                1. 3

                  so I assume the expectation is OpenRA can become even better with this.

                  Unless there were secret ancient programming techniques locked away, I doubt this would be the case.

                  1. 11

                    Perhaps “better” means “a more exact rendition of the original”, and yeah, I do think the original code could help there.

                    1. 3

                      It can help understand some game behaviours.

                    2. 1

                      Will the assets be there too?

                  1. 9

                    I was really interested in IPFS a few years ago, but ultimately was disappointed that there seemed to be no passive way to host content. I’d like to have had the option to say something along the lines of “I’m going to donate 5GB for hosting IPFS data, and the software will take care of the rest”.

                    My understanding was that, one has to explicitly mark some file as something you’d like to serve too, and only then will it really be permanent. Unless it gets integrated into a browser-like bookmark system, I have the feeling that most content will be lost. Can anyone who has been following their development tell me if they have improved on this situation?

                    1. 3

                      I thought they were planning to use a cryptocurrency (“Filecoin”) to incentivize hosting. I’m not really sure how that works though. I guess you “mine” Filecoins by hosting other people’s files, and then spend Filecoins to get other people to host your files.

                      1. 2

                        This is a hard problem to solve, because you want to prevent people from flooding all the hosts; so there has to be either some kind of proof-of-work or money involved. And with money involved, there’s now an incentive for hosts to misbehave, so you have to deal with that too, which is hard; several projects that tried to address it have failed.

                        IPFS’ authors’ solution to this is Filecoin which, afaik, they had in mind since the beginning of IPFS, but it’s not complete yet.

                        1. 2

                          My understanding was that, one has to explicitly mark some file as something you’d like to serve too,

                          Sort of… my recollection is that when you run an IPFS node (which is just another peer on the network), you can host content on IPFS via your node, or you can pull content from the network through your node. If you publish content to your node, the content will always be available as long as your node is online. If another node on the network fetches your content, it will only be cached on the other node for some arbitrary length of time. So the only way to host something permanently on IPFS is to either run a node yourself or arrange for someone else’s node to keep your content in their cache (probably by paying them). It’s a novel protocol with interesting technology but from a practical standpoint, doesn’t seem to have much benefit over the traditional Internet in terms of content publishing and distribution, except for the fact that everything can be massively (and securely) cached.
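                          The pinned-vs-cached split described here can be modeled in a few lines (a toy sketch only, assuming nothing about IPFS’s real block store): pinned blocks survive garbage collection, merely-fetched blocks don’t.

                          ```go
                          package main

                          import "fmt"

                          // Toy model of the pin-vs-cache behaviour: a node keeps pinned blocks
                          // forever, while blocks it merely relayed for others can be
                          // garbage-collected at any time. (Illustrative sketch, not IPFS's
                          // real block store.)
                          type node struct {
                          	pinned map[string]bool
                          	cached map[string]bool
                          }

                          func (n *node) pin(cid string) { n.pinned[cid] = true }

                          func (n *node) fetch(cid string) {
                          	if !n.pinned[cid] {
                          		n.cached[cid] = true
                          	}
                          }

                          // gc models periodic garbage collection: everything not pinned goes.
                          func (n *node) gc() { n.cached = map[string]bool{} }

                          func (n *node) has(cid string) bool { return n.pinned[cid] || n.cached[cid] }

                          func main() {
                          	n := &node{pinned: map[string]bool{}, cached: map[string]bool{}}
                          	n.pin("QmMyBlog")        // content I publish: survives gc
                          	n.fetch("QmSomeoneElse") // content I only relayed: cache only
                          	n.gc()
                          	fmt.Println(n.has("QmMyBlog"), n.has("QmSomeoneElse")) // true false
                          }
                          ```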

                          There are networks where you hand over a certain amount of disk space to the network and are then supposedly able to store your content (distributed, replicated) on other nodes around the Internet. But IPFS isn’t one of those.

                          1. 1

                            There are networks where you hand over a certain amount of disk space to the network and are then supposedly able to store your content (distributed, replicated) on other nodes around the Internet.

                            What are some of them? Is Storj one of those?

                            1. 3

                              Freenet is one. You set aside an amount of disk space and encrypted chunks of files will be stored on your node. Another difference from IPFS is that when you add content to Freenet it pushes it out to other nodes immediately, so you can turn your node off and the content remains in the network through the other nodes.

                              1. 2

                                VP Eng of Storj here! Yes, Storj is (kinda) one of them, with money as an intermediary. Without getting into details, if you give data to Storj, as long as you have enough STORJ token escrowed (or a credit card on file), you and your computers could walk away and the network will keep your data alive. You can earn STORJ tokens by sharing your hard drive space.

                                The user experience actually mimics AWS much more than you’d guess for a decentralized cryptocurrency storage product. Feel free to email me (jt@storj.io) if some lobste.rs community members want some free storage to try it out: https://tardigrade.io/satellites/

                                1. 1

                                  Friend, I’ve been following your work for ages and have had no real incentive to try it. As a distributed systems nerd, I love what you’ve come up with. The thing which worries me is this bit:

                                  decentralized cryptocurrency storage product.

                                  I’m actually really worried about the cryptocurrency part of this, since it imbues an otherwise-interesting product with a high degree of sketchiness. Considering that cryptocurrency puts you in the same boat as Bitcoin (and the now-defunct art project Ponzicoin), why should I rethink things? Eager to learn more facts in this case. Thanks for taking the time to comment in the first place!

                                  1. 4

                                    Hi!

                                    I guess there’s a couple of things you might be saying here, and I’m not sure which, so I’ll respond to all of them!

                                    On the technical side:

                                    One thing that separates Storj (v3) from Sia, Maidsafe, Filecoin, etc, is that there really is no blockchain element whatsoever in the actual storage platform itself. The whitepaper I linked above is much more akin to a straight distributed systems pedigree sans blockchain than you’d imagine. Cryptocurrency is not used in the object storage hotpath at all (which I continue to maintain would be latency madness) - it’s only used for the economic system of background settlement. The architecture of the storage platform itself would continue to work fine (albeit less conveniently) if we swapped cryptocurrency for live goats.

                                    That said, it’s hard to subdivide goats in a way that retains many of the valuable properties of live goats. I think live goats make for a good example of why we went with cryptocurrency for the economic side of storage node operation - it’s really much more convenient to automate.

                                    As a user, though, our primary “Satellite” nodes will absolutely just take credit cards. If you look up “Tardigrade Cloud Storage”, you will be able to sign up and use the platform without learning one thing about cryptocurrency. In fact, that’s the very reason for the dual brands (tardigrade.io vs storj.io)

                                    On the adoption side:

                                    At a past cloud storage company I worked at before AWS existed, we spent a long time trying to convince companies it was okay to back up their most sensitive data offsite. It was a challenge! Now everyone takes it for granted. I think we are in a similar position at Storj, except now the challenge is decentralization and cryptocurrency.

                                    On the legal/compliance side:

                                    Yeah, cryptocurrency definitely has the feeling of a wild west saloon, in some good ways and some bad. To that end, Storj has made a significant investment in corporate governance. There are definitely a lot of bad or shady actors in the ecosystem, and it’s painfully obvious that by choosing cryptocurrency we exist within that ecosystem and are often judged by the actions of our neighbors. We’re not only doing everything we can to follow existing regulations around cryptocurrency tokens, we’re also trying to anticipate where the puck could move and follow those not-yet-existent laws as well. Not that it makes a difference to you if you’re averse to the ecosystem in general, but Storj has been cited as an example of how to deal with cryptocurrency compliance the right way. There’s definitely a lot of uncertainty in the ecosystem, but our legal and compliance team are some of the best in the business, and we’re making sure not only to walk on the right side of the line, but to stay far away from lines entirely.

                                    Without going into details I admit that’s a bit vague.

                                    Anyway, given the length of my response you can tell your point is something I think a lot about too. I think the cryptocurrency ecosystem desperately needs a complete shaking out of unscrupulous folks, and it seems like that’s about as unlikely to happen as a complete shaking out of unscrupulous folks from tons of other money-adjacent industries, but perhaps the bar doesn’t have to be raised very far to make things better.

                                    1. 2

                                      The lack of a blockchain is a selling point. Thanks for taking the time to respond. I’ll check out the whitepaper ASAP!

                                      1. 1

                                        if we swapped cryptocurrency for live goats.

                                        … I kinda want to live in this world

                              2. 1

                                You might want to check out Arweave.org.

                                1. 1

                                  I have the feeling that most content will be lost

                                  Only if the person hosting it turns off their server? IPFS isn’t a storage system like Freenet, but a protocol that allows you to fetch data from anywhere it is stored on the network (for CDN-style caching, bandwidth sharing, and being harder to block). The person making the content available is still expected to store/serve it somewhere themselves, just like with the normal web.

                                  1. 1

                                    If you want to donate some disk space you can start following some of the clusters here: https://collab.ipfscluster.io .

                                  1. 1

                                    ¯_(ツ)_/¯ https://www.jtolio.com/ (using some old hugo release with my own theme)

                                    1. 2

                                      You might enjoy https://shru.gg/r for shrug copypasta (you dropped an arm)

                                      1. 2

                                        lol! i love that it escapes it for you

                                        1. 2

                                          That’s what I made it for! Could never remember the sequence

                                      2. 1

                                        I really like your theme! That said, in my opinion the hyperlink underlines are a bit jarring - I’d get rid of them. And 120+ characters per line is a bit hard to chew through.

                                        1. 1

                                          yeah i’ve been thinking about narrowing it again. screens are so big though! maybe i can do something where when i float images out they’re allowed to go outside of the reading width to make it look less empty

                                      1. 4

                                        (I posted this to the HN thread but was late and missed the window of getting insight, so sorry for the x-post from HN)

                                        I’m very puzzled by the consensus group load balancing section. The article emphasizes that correctness of the Raft algorithm was super important (to the point that they skipped clear optimizations!!11), but then immediately follows up with (as far as I can tell) a load-balancer wrapper approach for rebalancing and scaling. My “this feels like consensus bug city” detectors immediately went off. Consensus algorithms (including Raft and Paxos) are notoriously picky and hard to get right around cluster membership changes. If you try to end-run around this by sharding to different clusters with a simple traffic director choosing the cluster, how does the traffic director achieve consensus with the clusters that the traffic is going to the right cluster? You haven’t solved any consensus problem, you’ve just moved it to your load balancers.

                                        A solution for this problem (to agree on which cluster the data is owned by) is 2-phase commit on top of the consensus clusters. It didn’t appear from the diagrams that that’s what they did here, so either I missed something, or this wouldn’t pass a Jepsen test.

                                        Did I miss something?

                                        (If you did build 2PC on top of these consensus clusters, you’d have built a significant portion of Spanner’s architecture inside of a secure enclave. That’s hilarious.)
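                                        For concreteness, here’s a toy sketch of the 2PC-over-consensus-groups layer I mean (hypothetical, not anything from the article): each cluster stages the ownership change, and the coordinator commits only if every cluster votes yes.

                                        ```go
                                        package main

                                        import "fmt"

                                        // cluster stands in for one consensus group; prepared holds staged
                                        // ownership changes, owner the committed ones.
                                        type cluster struct {
                                        	prepared map[string]string
                                        	owner    map[string]string
                                        }

                                        func newCluster() *cluster {
                                        	return &cluster{prepared: map[string]string{}, owner: map[string]string{}}
                                        }

                                        // prepare stages the change; a real participant could vote "no" here.
                                        func (c *cluster) prepare(key, newOwner string) bool {
                                        	c.prepared[key] = newOwner
                                        	return true
                                        }

                                        func (c *cluster) commit(key string) {
                                        	c.owner[key] = c.prepared[key]
                                        	delete(c.prepared, key)
                                        }

                                        func (c *cluster) abort(key string) {
                                        	delete(c.prepared, key)
                                        }

                                        // moveKey is the 2PC coordinator: phase 1 collects votes, phase 2
                                        // commits only on a unanimous yes, otherwise everyone aborts.
                                        func moveKey(key, newOwner string, clusters []*cluster) bool {
                                        	for _, c := range clusters {
                                        		if !c.prepare(key, newOwner) {
                                        			for _, d := range clusters {
                                        				d.abort(key)
                                        			}
                                        			return false
                                        		}
                                        	}
                                        	for _, c := range clusters {
                                        		c.commit(key)
                                        	}
                                        	return true
                                        }

                                        func main() {
                                        	a, b := newCluster(), newCluster()
                                        	ok := moveKey("user:42", "cluster-b", []*cluster{a, b})
                                        	fmt.Println(ok, a.owner["user:42"], b.owner["user:42"]) // true cluster-b cluster-b
                                        }
                                        ```

                                        A real version would also have to persist the prepared state and handle coordinator failure, which is exactly where the Spanner-in-an-enclave comparison comes from.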

                                        1. 3

                                          I once bought a Sharp Zaurus SL-C1000 to polish source code en route. The screen was good enough, but the keyboard wasn’t.

                                          1. 1

                                            I miss my Zaurus a lot. What a great little device.

                                          1. 4

                                            I am still undecided if async is something nice, or some sort of infectious disease that fragments code bases.

                                            (Though leaning towards nice)

                                            1. 11

                                              I’m firmly in the infection camp. http://journal.stuffwithstuff.com/2015/02/01/what-color-is-your-function/ remains my go-to explanation for why.

                                              1. 3

                                                Python is a language that had green threads via gevent and monkey patching, a surprisingly nice solution that lets you have it both ways… Though they still added an async keyword haha.

                                                1. 3

                                                  IMO the async keyword feels really hacky, but I get why: they have to differentiate to maintain compatibility.

                                                  The idea of gevent/monkey patching seems like a better approach. Ideally, the language runtime exposes an interface that low-level scheduling/IO libraries hook into, much like the Rust approach.

                                                  1. 2

                                                    gevent doesn’t really work with C modules (of which there are a lot), which means you still have a split codebase where you have to worry about what is blocking and what isn’t.

                                                    contrast to go as described in the above link, which just assumes all C packages will be blocking and transparently starts a threadpool for you so you don’t have to worry about it.
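                                                    From the caller’s side, that transparency looks like this: a blocking call is just a function, and running many of them concurrently needs no annotations (a minimal sketch, using sleeps to stand in for blocking C calls).

                                                    ```go
                                                    package main

                                                    import (
                                                    	"fmt"
                                                    	"sync"
                                                    	"time"
                                                    )

                                                    // In the Go model described above, a "blocking" call simply parks the
                                                    // goroutine (or, for C calls, gets its own OS thread), so callers never
                                                    // need to know whether a function blocks.
                                                    func blockingCall() { time.Sleep(100 * time.Millisecond) }

                                                    func main() {
                                                    	start := time.Now()
                                                    	var wg sync.WaitGroup
                                                    	for i := 0; i < 10; i++ {
                                                    		wg.Add(1)
                                                    		go func() { defer wg.Done(); blockingCall() }()
                                                    	}
                                                    	wg.Wait()
                                                    	// Ten 100ms blocking calls finish in roughly 100ms, not 1000ms.
                                                    	fmt.Println(time.Since(start) < 700*time.Millisecond) // true
                                                    }
                                                    ```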

                                                  2. 2

                                                    Same, but Rust seems like it has to go this route due to its unique design philosophy.

                                                    If this were any other language I’d argue that the runtime should provide it transparently and let users get at the details if they wish.

                                                  3. 4

                                                    In Rust specifically it feels non-native; something people are importing from more “managed” languages.

                                                    I find one std::thread per resource (and mpsc channels) rock solid, and at the right abstraction level for the Rust programs I write, so personally, I won’t be partaking in async.

                                                    1. 4

                                                      Threading is the 95% solution, it will almost always be just fine. Async is for the 5% of the time when a system really needs minimum overhead for I/O.

                                                      Really, I find that Rust is by default so fast that I seldom have to worry about performance. Even doing things The Dumb And Simple Way is often more than fast enough.

                                                  1. 6

                                                    I can’t form an opinion about the language, there’s too little information available. I don’t even know if it is garbage collected or requires manual allocation/deallocation. The syntax is a mashup of Go and Rust, closer to Go, and it is “safe”, so probably garbage collected?

                                                    Based solely on the web site information, this claim seems incorrect and/or misleading:

                                                    This tool supports the latest standard of notoriously complex C++ and allows full automatic conversion to human readable code.

                                                    It says “full automatic conversion (of C++) to human readable code”. But this language is missing a lot of features that would be needed to make this possible, such as classes with multiple inheritance, destructors, exceptions, and C++-equivalent templates. The language is “simple”, so it can’t have all the features of C++. You could translate C++ to V by inline expanding all of the C++ features with no V equivalent into lower-level C-like code. For example, inline expand destructors at each point where they would be called. But then the code is not maintainable, and I dispute that it is human readable, even if it is indented nicely, because many of the higher level abstractions have been destroyed. The translation might plausibly be done by using clang to translate C++ to LLVM IR, then converting the IR to V. The resulting V code will not be maintainable unless maybe you use a very restricted subset of C++ that corresponds to the features present in V.

                                                    1. 10

                                                      “No global state” means you can’t translate the full C++ language to V.

                                                      No GC, “safe”, and “fearless concurrency” (a feature of Rust) is a bold claim. How can this be done without the complexity of the Rust type system and borrow checker? Maybe that is enabled by “no global state”, which might mean a very restricted programming model compared to the competition.

                                                      1. 1

                                                        V allows globals for code translated from C/C++. Perhaps I’ll be able to remove this hack in the future. Not sure it’s possible.

                                                        1. 2

                                                          How do you handle undefined behaviour in the C++ sources when translating? Does V also suffer from undefined behaviour? For example, what would I get if I tried to translate this C++ to V:

                                                          unsigned int foo = -2.0;
                                                          
                                                          1. 3

                                                            It would translate to

                                                            foo := u32(-2.0)

                                                            It will not compile.

                                                      2. 5

                                                        I think the thing that made me mentally eject was this part:

                                                        V can translate your C/C++ code to human readable V code. Let’s create a simple program test.cpp first:

                                                        #include <vector>
                                                        #include <string>
                                                        #include <iostream>
                                                        
                                                        int main() {
                                                                std::vector<std::string> s;
                                                                s.push_back("V is ");
                                                                s.push_back("awesome");
                                                                std::cout << s.size() << std::endl;
                                                                return 0;
                                                        } 
                                                        

                                                        Run v translate test.cpp and V will generate test.v:

                                                            fn main {
                                                            	mut s := []string
                                                            	s << 'V is '
                                                            	s << 'awesome'
                                                            	println(s.len)
                                                            }
                                                        
                                                        1. 10

                                                          The combination of seemingly impossible claims with no source code, and not even an explanation of how those feats are accomplished, is concerning.

                                                          1. 2

                                                            Why?

                                                            1. 10

                                                              This is just an unbelievably ambitious project. If it works the way you’re implying here, there are either lots of special cases and this won’t generalize to real codebases, or you have solved an unfathomable amount of hard dependent subproblems. I’m definitely willing to wait and see!

                                                              1. 7

                                                                In about two weeks I’ll post that Doom article with the resulting .v code. I’ll send you the link once it’s live ;)

                                                                I spent most of December and January working on the translator.

                                                                1. 7

                                                                  Most people doing this spent months to years to decades, depending on how much they wanted to translate and whether the output had to be human readable. I want C++ experts to weigh in on the translator when you post it. For me, that’s your biggest claim and the one needing the most replication, across code you anticipated and code you didn’t. It’s also the biggest opportunity for improving major projects in terms of QA.

                                                                  1. 5

                                                                    You would come across as more trustworthy if you acknowledged that the translator has limitations, and explained what those limitations are. What you are proposing instead is to show only a handpicked example that happens to avoid those limitations. Doom uses a restricted subset of C++, so you can translate it into V without dealing with most of the parts of C++ that make a C++ translator challenging.

                                                                    1. 4

                                                                      I will cover everything in the article.

                                                                      The end goal is to support the entire C++ standard.

                                                                      1. 2

                                                                        You’re likely to find that the last 10% of the standard requires 3x the effort.

                                                                        Nothing wrong in that, and I think getting to 100% is actually not worth the effort. There’s a lot of value in a translator that does 90% of the translation and makes the remaining 10% easy to identify. I’d market it like that until I’m sure the final 10% is done.

                                                                        1. 1

                                                                          Agreed, hopefully I’ll get some help from the community. V will be open sourced before the C++ translator is complete.

                                                              2. 4

                                                                I don’t even know if it is garbage collected or requires manual allocation/deallocation

                                                                Over here it says “No GC”, but can’t find any details other than that.

                                                                1. 2

                                                                  The reference manual gives syntax for obtaining a pointer to a struct. But there is no mention of a pointer type, nor an explanation of what a pointer is or what you can do with pointers. Can you dynamically allocate a new object and get a pointer to it? If so, how does the object get freed?

                                                                1. 8

                                                                  My summary of most of the defensive comments I’ve seen to this article: C doesn’t kill people, people kill people!

                                                                  1. 2

                                                                    A single person can run many nodes, right? Can someone run multiple nodes with the same backing storage? Does this affect redundancy?

                                                                    1. 2

                                                                      The whitepaper describes mitigations for Sybil attacks. Original Storj designs had some mitigations for this IIRC though not this PoW/Kademlia tree scheme.

                                                                      The concern I would have is not Sybil attacks, but centralization related to Storj Labs’ satellites. It will be interesting to see whether other non-SL satellites become trusted by the network in practice.

                                                                      1. 1

                                                                        A single person can run many nodes, yes. You can choose to run multiple nodes with the same backing storage, but our node selection algorithm chooses nodes based on IP route, geographic, and identification redundancy. You may not receive more data just because you have more nodes. Our recommendation is a node per hardware failure domain (probably one node per hard drive).

                                                                      1. 2

                                                                        @pushcx, this is a duplicate of https://lobste.rs/s/5wp65s/protobuffers_are_wrong , I missed it when I posted this one. Could you please merge it?

                                                                        (sorry for the duplicate…)

                                                                        1. 2

                                                                          1. 1

                                                                            Not sure if it was intentional or if something else happened, but the merged story has fewer upvotes than the unmerged one did.

                                                                            1. 3

                                                                              The count left of the headline is always votes on the original story, but the ranking (“hotness”) of the merged story and its comments are taken into effect by Story#calculated_hotness.

                                                                              1. 1

                                                                                oh neat

                                                                        1. 5

                                                                          Category-theoretic thinking in products/sums is a good logical model, but I think it’s awful when your physical memory layout is the same thing as your logical model.

                                                                          For example, let’s take the list [1,2,3]. In the product/sum design, your representation for this structure is: 1:(2:(3:nil)).

                                                                          Imagine it costs you 1 byte to store a number, and 2 bytes to store a structure. If you take the literal in-memory interpretation for this structure, it is formed from pairs of references (total cost=10): 01 *> 02 *> 03 *> 00

                                                                          If you’re dealing with packed representations terminated by the empty structure, you end up with: 01 02 03 00. But if you didn’t treat the physical memory layout as the logical layout, you could also give it a different representation where the sequence is annotated with its length first: 03 01 02 03.
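
                                                                          The two flat encodings above can be sketched concretely. This is a toy model, not protobuf’s actual wire format; each element is a single byte, as in the hex strings:

```python
# Two flat layouts for the logical list [1, 2, 3], matching the hex
# strings above: packed and nil-terminated vs. length-prefixed.

def encode_terminated(xs):
    """Packed elements, terminated by the empty structure (00)."""
    return bytes(xs) + b"\x00"

def encode_length_prefixed(xs):
    """The sequence annotated with its length first."""
    return bytes([len(xs)]) + bytes(xs)

print(encode_terminated([1, 2, 3]).hex(" "))       # 01 02 03 00
print(encode_length_prefixed([1, 2, 3]).hex(" "))  # 03 01 02 03
```

                                                                          The length-prefixed form is what lets a decoder preallocate the whole sequence up front instead of chasing cons cells.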

                                                                          I think protobuffer sucks because the schemas are directly compiled into the user language. They would have a much better system if they first converted protobuffer schemas into protobuffer files, then had an interpreter for such files in each dynamic language, and compilers from the schema for the compiled languages.

                                                                          I also think the post illustrates a common error people tend to make: not recognizing that implementation details come and go. You really should not let your language be influenced by them, and if you force implementation details into your language, you open the barn door to exactly that.

                                                                          1. 1

                                                                            I think protobuffer sucks because the schemas are directly compiled into the user language. They would have a much better system if they first converted protobuffer schemas into protobuffer files, then had an interpreter for such files in each dynamic language, and compilers from the schema for the compiled languages.

                                                                            Just from a pragmatism perspective, that sounds like significantly more work for every language that wants to have a protobuf library. As it stands, having a straightforward translation from the object in memory to the wire format greatly assists implementation across all of the potential languages that need implementing. I think this is the key reason Lua, for example, has seen such broad adoption as a scripting language. It’s easy to embed because it has a very natural layout for interoperability (all calls just push and pop stuff on a stack). It’s very easy to write a Lua FFI.

                                                                            1. 1

                                                                              It’d be a bit more work in each dynamically typed language that you need to support. You’d need a wire format decoder and a script that decodes the schema file and uses it to translate between wire format objects and their legible counterparts in the client language. But that’d be nice to use when you need to read from or write into a protobuffer file, because you could just do the pip install protobuf equivalent in your scripting language and then start rolling:

                                                                              schema = protobuf.load("api_schema.pb")
                                                                              dataset_0 = schema.open("dataset_0.pb")
                                                                              print(dataset_0[0].user_data)
                                                                              

                                                                              It’s quite involved to get the proto3 compiler working. It’s almost like compiling a C project in complexity. It produces plain code that reserves its own directory in your project.

                                                                              1. 4

                                                                                I think protobuffer sucks because the schemas are directly compiled into the user language.

                                                                                IMO, this is an example of a tooling problem being perceived as a problem with protobuf because the prevailing implementations do it that way. If you want an interpreter-style proto library for C, check out nanopb. protoc will produce data and interfaces (struct definitions) instead of a full C implementation.

                                                                          1. 26

                                                                            No, this guy is wrong. Protocol buffers are endlessly pragmatic and many of the “bad decisions” he points out have concrete reasons.

                                                                            For instance - he suggests all of the fields should be required. required fields existed in at least proto2 and I assume proto1, but were discovered to be terrible for forwards compatibility. I agree with his footnote that there’s a debate, but one side of it decisively won. If a field is required in one release of your code, that code can never talk with protocol buffer serializations from future releases that stop setting that field - it just blows up. The most frequent internal advice I saw was “avoid required. required is forever.” As a result, most feedback encouraged everything to be optional or repeated, which was made official in proto3.
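
                                                                            A toy sketch of that forwards-compatibility hazard, with plain dicts standing in for decoded messages (this is not the real protobuf runtime):

```python
# An "old" reader built against a schema with a required user_id field
# must reject any message that lacks it -- including messages from a
# future writer that stopped sending the field.

REQUIRED_V1 = {"user_id"}

def parse_v1(msg):
    """Simulate a proto2-style reader enforcing required fields."""
    missing = REQUIRED_V1 - msg.keys()
    if missing:
        raise ValueError(f"missing required fields: {missing}")
    return msg

parse_v1({"user_id": 42})              # old traffic: fine
try:
    parse_v1({"display_name": "ada"})  # future traffic without user_id
except ValueError:
    print("old binary rejects otherwise-valid new traffic")
```

                                                                            Marking the field optional instead lets old readers skip it and new readers fall back to a default, which is why proto3 dropped required entirely.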

                                                                            Second, here’s how he wants to implement repeated:

                                                                            coproduct List<t> {
                                                                              Unit empty = 0;
                                                                              Pair<t, List<t>> cons = 1;
                                                                            }
                                                                            

                                                                            This just reeks of a complete ignorance of a couple of things -

                                                                            1. How is this going to look for serialization/deserialization? Sure, we’ve embedded a list into a data structure, but what matters is being fast. Protocol buffers pragmatically describe useful data structures that are also very close to their native wire format. This is not that, but he says “the actual serialization logic is allowed to do something smarter than pushing linked-lists across the network—after all, implementations and semantics don’t need to align one-to-one.”
                                                                            2. The protocol buffer implementation must be simple, straightforward, bug-free, and implemented in every language anyone wants to use. Static analysis to detect these patterns could work, but good luck maintaining that logic in every language of your lingua-franca interoperability system.

                                                                            Third, as an example of the designers of protobufs being amateurs, he says:

                                                                            It’s impossible to differentiate a field that was missing in a protobuffer from one that was assigned to the default value.

                                                                            headdesk proto2 definitely supported this functionality. It was stripped out in proto3 after literally decades of experience from thousands of engineers said that, on balance, the tradeoff wasn’t worth it. You can’t claim that a hard look at the tradeoffs is a result of being amateurs.

                                                                            Fourth:

                                                                            With the one exception of routing software, nothing wants to inspect only some bits of a message and then forward it on unchanged.

                                                                            This is almost entirely the predominant programming pattern at Google, and in many other places too. Protocol buffers sound… perfectly designed for their use case!

                                                                            What a frustrating read.

                                                                            1. 4

                                                                              Thanks for this critique, you’re right on. I do agree with one part though - you need to make application specific non-proto data structures that often mirror the protos themselves, which isn’t exactly DRY.

                                                                              Here’s an example that I’m struggling to find a “nice” solution for. Locally running application has a SQLite database managed via an ORM that it collects structured log entries into. Periodically, it bundles those log entries up into proto, removes them from the local database, and sends them (or an aggregated version of them) up to a collection server.

                                                                              The data structures are the exact same between the protos and the database, yet I need to define the data structures twice.

                                                                              1. 3

                                                                                Hmm, yeah, that’s a tough one. One thing that the protobuf compiler supports though is extensible plugins (e.g., check out all of the stuff gogoproto adds as extensions to the compiler: https://github.com/gogo/protobuf/blob/master/extensions.md)

                                                                                Perhaps the right thing in ORM situations at a certain scale (maybe you’re not at this scale yet) is to write a generator that generates the ORM models from the protobuf definitions?
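
                                                                                A minimal sketch of that generator idea. The field list and type mapping here are made up for illustration; a real generator would consume the descriptors that protoc emits:

```python
# Derive table DDL from a proto-like field description so the schema
# is written once. PROTO_TO_SQL and the example fields are hypothetical.

PROTO_TO_SQL = {"int64": "INTEGER", "string": "TEXT", "double": "REAL"}

def ddl_for(message, fields):
    """Render a CREATE TABLE statement for one message type."""
    cols = ", ".join(f"{name} {PROTO_TO_SQL[ptype]}" for name, ptype in fields)
    return f"CREATE TABLE {message.lower()} ({cols});"

print(ddl_for("LogEntry", [("id", "int64"), ("message", "string")]))
# CREATE TABLE logentry (id INTEGER, message TEXT);
```

                                                                                The same descriptor walk could just as easily emit ORM model classes instead of raw DDL.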

                                                                                1. 2

                                                                                  Yeah, that would seem like the right solution in this case. In any case, what I described isn’t even a problem with the proto way of thinking, it’s just a tooling issue.

                                                                              2. 4

                                                                                Nice critique, better than I could have enunciated. I worked with the author at a company that chose protocol buffers and assume in part that the article is directed at those of us (myself included) who chose to use protocol buffers as our wire serialization format. It was the most viable option given the constraints and the problems a prior system was exhibiting. That said, the author has predilections and they show in the article’s tone and focus.

                                                                                1. 1

                                                                                  Were you replacing XML?

                                                                                2. 3

                                                                                  This is the best critique of this rant I’ve read, and you didn’t even go into the author’s attitude. Kudos and thank you.

                                                                                1. 6

                                                                                  Overall there is a lot of depth in this post, and it contains a great amount of detail about terminal I/O. I can understand the author’s hatred of ncurses, but the sheer length of the post kind of proves the need for some level of abstraction when starting a TUI project, given the number of gotchas between different terminal types. Are there any alternative / more modern libraries in existence?

                                                                                  From a meta-post perspective, things I don’t like: sentences aren’t capitalized. The author uses proper code blocks and syntax highlighting, and most of it seems correctly proofread/edited, and yet she has chosen not to capitalize the first word of each sentence.

                                                                                  Maybe I’m just being nitpicky here, or maybe she’s trying to start a trend in the direction the language should go. After all, at one time English used to capitalize all nouns (like German still does), and we used to indent paragraphs (which has been replaced with block formatting, except in novels). So maybe this is just the next thing.

                                                                                  also, i’m a) a nobody and b) a woman. nothing i wrote would ever gain any traction; any project designed to supplant ncurses needs to come from someone who’s actually known to the FOSS community. and a maintainer who isn’t a cripple.

                                                                                  She shouldn’t shoot herself down here. I really don’t think there’s a lot of evidence to support this “open source is hostile to women” idea that has been gaining traction. A lot of projects, big and small, now have the Contributor Code of Conduct (or something based on it). Are we still seeing backlash against women for being women? Are there any specific examples? (I’m not trying to troll; I really want to see real examples that don’t involve simply trying to get a CoC added to a project or removal of words like master/slave.)

                                                                                  The fact is, a lot of projects never gain traction in the OSS community. It’s difficult to make something people would use and to get other people to use your shit. A lot of big OSS libraries and projects today are backed by huge investment by big industry, or are supported by people in academia who can work on them between research and classes. That’s a bigger meta problem in the way we think of open source today.

                                                                                  1. 3

                                                                                    “I’m a nobody” is of course not actually a reason to not do something. “I’m a woman” is especially not a reason to not do something. The idea that you need to be established for a project to gain traction is reasonable, and maybe she doesn’t like being directly in the limelight.

                                                                                    After all, if every woman had the attitude of “I’m a woman, therefore my project would not be well received” and that caused them not to start, then no woman could ever be successful. It’s definitely not the right frame of mind even if it were true (and I’m not saying it is or isn’t). I think it’s absurd to say that no discrimination happens in OSS, especially considering how recent the push to CoC has been and how much push back there has been. However, I also think it’s absurd to say that one could never have a successful project as a woman, as obviously women have led successful OSS projects.

                                                                                    I don’t think saying “She shouldn’t shoot herself down” is very effective. We should try to evaluate what pressures make people feel this way and how we can help them overcome them. Onboarding is something that is almost universally bad in OSS, and could be improved irrespective of gender. Saturn for F# is a good counterexample: https://github.com/SaturnFramework/Saturn . With it are words of encouragement, clear and direct expectations, and clear documentation. It’s missing a code of conduct, which I think could help someone feel a little more secure about contributing, but it’s otherwise pretty good. Those words of encouragement at the beginning help set a tone and an example of how you’ll be treated when you contribute, and if you’re sharing your hard work to see what this thing can become, it’s important that the culture around it is positive enough that you don’t feel punished for doing so.

                                                                                    1. 2

                                                                                      I don’t think saying “She shouldn’t shoot herself down” is very effective.

                                                                                      We know a lot about what pressures make people think that way. Her other essays, if some are about her, indicate she might face more pressures than most people. In any case, the negative attitude of “I shouldn’t even try because I believe X”, coupled with dropping that into a write-up she put a lot of time into, are both no-nos in general. It’s common in pep talks, therapy, self-help books, guides on entrepreneurship… everything about getting ahead despite challenges… to advise against thinking like that.

                                                                                      If the guidance involves others, they’ll also tell you not to whine at or accuse strangers by default, since the vast majority won’t be cool with that. I mean, let’s look at it. The communities most likely to be giving her that much shit will not care if she writes that in her blog post. They’ll laugh. The ones who wouldn’t would be anywhere from concerned to wondering if she’s overly negative or having psychological problems. In other words, even many of them might think she’s just a pain to work with, given she’s dropping stuff like that in the middle of tech write-ups and most women aren’t.

                                                                                      It doesn’t matter which light I look at it in. It’s some defeatist, over-projecting BS in her mind which isn’t going to help her, whether it’s true for the projects she deals with or false for others where she just looks like a downer. Showing some sympathy on top of discouraging such negative thinking or outbursts is a good piece of advice. It’s also common sense for a lot of people struggling in my area, esp. minorities. Got to keep your head up and clear, pressing forward, they’d say. Well, the ones that aren’t in a slump or wanting to just quit.

                                                                                      Btw, that said, I totally agree with you that a welcoming community with good onboarding and support is a good thing. It will definitely help in general. It can also be a relief for these types of people. I’m just saying it’s general advice of all kinds of people to combat these negative, self-reinforcing beliefs and practices. They’re bad habits.

                                                                                      1. 2

                                                                                        I guess what I was trying to say and be clear I didn’t mean it quite so strongly as it came off, is that individual advice isn’t a solution to the systemic problem. I know you weren’t claiming it was and if I phrased it like a refutation that’s my mistake. I just saw the moment as an opportunity to talk about broader strategies that can help accommodate large groups instead of focusing on individuals. You’re right though we do broadly speaking know the problem. I also agree that no matter what the reality is, believing you can’t is never in your favor. Helping a person move past that mindset of “I can’t so I shouldn’t” is very important.

                                                                                        1. 1

                                                                                          Thanks for clarifying. That makes sense. I think we’re in agreement.

                                                                                      2. 0

                                                                                        I think it’s absurd to say that no discrimination happens in OSS, especially considering how recent the push to CoC has been and how much push back there has been.

                                                                                        A lot of the push back on CoCs is that there doesn’t seem to be any evidence that they’re actually necessary - that there is any discrimination that needs to be addressed. I’m not sure I could discriminate against women even if I wanted to; I don’t know which ones are women! The only people in open source projects whose gender I’ve even noticed are people with obviously gendered names: if someone’s username is adam1423 (not a real example) then it’s obviously a guy, but otherwise I don’t even think about it; they’re just a person.

                                                                                        On boarding is something that is almost universally bad in OSS, and could be improved irrespective of gender. Saturn for F# is a good counterexample. https://github.com/SaturnFramework/Saturn . With it are words of encouragement, clear and direct expectations, and clear documentation.

                                                                                        I don’t think there’s much extra here that isn’t in most projects. A lot of projects I’ve seen (of decent size, at least) have some sort of CONTRIBUTING file. I think people mistake documentation existing for documentation being rendered as HTML on GitHub. GitHub is not the only website out there for open source. There are a lot of resources on the internet. Most of the things mentioned in that file for Saturn are common to basically every project anyway.

                                                                                        People often don’t care about onboarding because it doesn’t actually matter to them whether you start contributing to their project.

                                                                                      3. 4

                                                                                        “She shouldn’t shoot herself down here.”

                                                                                        With her saying that, she has a negative attitude that casts every project in the same discriminating light. She might be a pain in the ass to work with. That comment alone might get her ignored if a person reading a contribution saw it first. There are others who would try to pull her in to help her out. I just think people should avoid saying stuff like that by default, since politically focused negativity is an immediate turn-off for many potential collaborators. Person A has it, Person B doesn’t, and so the path of least effort and headaches is going with Person B.

                                                                                        “The fact is, a lot of projects never gain traction in the OSS community. It’s difficult to make something people would use and to get other people to use your shit.”

                                                                                        This right here is something to remember about introducing any new idea or practice. The default response is apathy. There’s an entire site dedicated to strategies for countering apathy toward new products. Most that make it talk about how hard it was. There are others that just took off, but that’s rare. So, if wondering about negative responses, the first variable to eliminate (somehow) should be apathy. Most people just won’t give a shit.

                                                                                        1. 1

                                                                                          From a meta-post perspective, things I don’t like: sentences aren’t capitalized. The author uses proper code blocks and syntax highlighting, and most of it seems correctly proofread/edited, and yet she has chosen not to capitalize the first word of each sentence.

                                                                                          Maybe I’m just being nitpicky here, or maybe she’s trying to start a trend in the direction the language should go. After all, at one time English used to capitalize all nouns (like German still does), and we used to indent paragraphs (which has been replaced with block formatting, except in novels). So maybe this is just the next thing.

                                                                                          Like the CSS feedback from @johnaj, you are being nitpicky.

                                                                                          1. 2

                                                                                            I don’t understand why you’re dragging me into whatever discussion you are having. I merely informed the author that her HTML wasn’t written properly – breaking popular things like reader mode – so that she could fix it.

                                                                                            I haven’t read the entire article yet, but I think it is interesting so far.

                                                                                            1. 2

                                                                                              You were actually informing Lobsters that her HTML was broken. Usually I’d say send it to their comment box, email, or whatever. I didn’t see anything listed. Perhaps they don’t want to be contacted.

                                                                                              Anyway, probably best to tell just the author that kind of stuff since we can’t change it. If there’s no contact, then I’d say don’t even mention it since you’re just griping to us about what they are doing with their own site. Goes nowhere.

                                                                                              1. 3

                                                                                                Yeah, that’s true. I just don’t think it was pedantic. The page was difficult for me to read. I hoped the author would be on Lobste.rs and see it, which isn’t unusual, but I guess that isn’t the case here.

                                                                                            2. 1

                                                                                              I don’t think it’s nitpicking to criticise an article for not using capital letters or being formatted in a way that makes it hard to read. When I opened the article, I immediately closed it again instinctively. I didn’t read any of it. I literally opened it, saw it was light-on-dark, low-contrast, nearly-unformatted text, and closed it, instinctively.

                                                                                              I’ve since read it, but I’ve found I do this a lot. If I open something and it’s badly formatted in this kind of way I unconsciously/instinctively close it within a few hundred milliseconds.

                                                                                          1. 1

                                                                                            I like Homebank. Gets the job done. Seems a bit more modern than GnuCash.

                                                                                            1. 1

                                                                                              Flagged for press release and advertising, and it isn’t even tagged properly.

                                                                                              1. 1

                                                                                                Fair enough. I did think it was interesting timing given the prior discussions around open source monetization with Redis.

                                                                                                What would the correct tags have been?

                                                                                                1. 1

                                                                                                  The nearest I could argue for would be release (since it’s a new program), but even then it’s a business thing and not a “hey, go download new software with changes X, Y, Z” thing.