1. 1

    ¯_(ツ)_/¯ https://www.jtolio.com/ (using some old hugo release with my own theme)

    1. 1

      You might enjoy https://shru.gg/r for shrug copypasta (you dropped an arm)

      1. 2

        lol! i love that it escapes it for you

        1. 2

          That’s what I made it for! Could never remember the sequence

      2. 1

        I really like your theme! That being said, in my opinion the hyperlink underlines are a bit jarring - I’d get rid of them. The 120+ characters per line are also a bit hard to chew through.

        1. 1

          yeah i’ve been thinking about narrowing it again. screens are so big though! maybe i can do something where when i float images out they’re allowed to go outside of the reading width to make it look less empty

      1. 4

        (I posted this to the HN thread but was late and missed the window of getting insight, so sorry for the x-post from HN)

        I’m very puzzled by the consensus group load balancing section. The article emphasizes correctness of the Raft algorithm was super important (to the point that they skipped clear optimizations!!11), but, then immediately follows up with (as far as I can tell) a load-balancer wrapper approach for rebalancing and scaling. My “this feels like consensus bug city” detectors immediately went off. Consensus algorithms (including Raft and Paxos) are notoriously picky and hard to get right around cluster membership changes. If you try to end run around this by sharding to different clusters with a simple traffic director to choose which cluster, how does the traffic director achieve consensus with the clusters that the traffic is going to the right cluster? You haven’t solved any consensus problem, you’ve just moved it to your load balancers.

        A solution for this problem (to agree on which cluster the data is owned by) is 2-phase commit on top of the consensus clusters. It didn’t appear from the diagrams that that’s what they did here, so either I missed something, or this wouldn’t pass a Jepsen test.
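
          For concreteness, here’s the shape I mean by 2PC on top of the consensus groups: a toy coordinator where each participant would itself be a Raft cluster. Everything here (the Participant interface, Run, the error strings) is invented for illustration - a sketch of the idea, not anyone’s actual implementation.

          package twopc

          import (
              "errors"
              "fmt"
          )

          // Participant stands in for one consensus cluster (e.g. a Raft group).
          // Prepare must durably record the pending change before voting yes.
          type Participant interface {
              Prepare(txID string) error
              Commit(txID string) error
              Abort(txID string) error
          }

          // Run is the classic two-phase coordinator: phase one collects votes from
          // every participant, phase two commits only if all of them voted yes.
          func Run(txID string, parts []Participant) error {
              for _, p := range parts {
                  if err := p.Prepare(txID); err != nil {
                      for _, q := range parts {
                          q.Abort(txID) // best effort; nobody has committed yet
                      }
                      return fmt.Errorf("tx %s aborted: %w", txID, err)
                  }
              }
              for _, p := range parts {
                  if err := p.Commit(txID); err != nil {
                      // A real coordinator must persist its decision and retry;
                      // once everyone is prepared, the commit has to happen eventually.
                      return errors.New("commit phase must be retried")
                  }
              }
              return nil
          }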

        Did I miss something?

        (If you did build 2PC on top of these consensus clusters, you’d have built a significant portion of Spanner’s architecture inside of a secure enclave. That’s hilarious.)

        1. 3

          I once bought a Sharp Zaurus SL-C1000 to polish source code en route. The screen was good enough, but the keyboard wasn’t.

          1. 1

            I miss my Zaurus a lot. What a great little device.

          1. 4

            I am still undecided if async is something nice, or some sort of infectious disease that fragments code bases.

            (Though leaning towards nice)

            1. 11

              I’m firmly in the infection camp. http://journal.stuffwithstuff.com/2015/02/01/what-color-is-your-function/ remains my go-to explanation for why.

              1. 3

                Python is a language that had green threads via gevent and monkey patching, a surprisingly nice solution that lets you have it both ways… Though they still added an async keyword haha.

                1. 3

                  IMO the async keyword feels really hacky, but I get why: they have to differentiate to maintain compatibility.

                  The idea of gevent/monkey patching seems like a better approach. Ideally, the language runtime exposes an interface that low-level scheduling/IO libraries hook into, much like the Rust approach.

                  1. 2

                    gevent doesn’t really work with C modules (of which there are a lot), which means you still have a split codebase where you have to worry about what is blocking and what isn’t.

                    contrast to go as described in the above link, which just assumes all C packages will be blocking and transparently starts a threadpool for you so you don’t have to worry about it.

                  2. 2

                      Same, but Rust seems like it has to go this route due to its unique design philosophy.

                    If this were any other language I’d argue that the runtime should provide it transparently and let users get at the details if they wish.

                  3. 4

                    In Rust specifically it feels non-native; something people are importing from more “managed” languages.

                    I find one std::thread per resource (and mpsc channels) rock solid, and at the right abstraction level for the Rust programs I write, so personally, I won’t be partaking in async.

                    1. 4

                      Threading is the 95% solution, it will almost always be just fine. Async is for the 5% of the time when a system really needs minimum overhead for I/O.

                      Really, I find that Rust is by default so fast that I seldom have to worry about performance. Even doing things The Dumb And Simple Way is often more than fast enough.

                  1. 6

                    I can’t form an opinion about the language, there’s too little information available. I don’t even know if it is garbage collected or requires manual allocation/deallocation. The syntax is a mashup of Go and Rust, closer to Go, and it is “safe”, so probably garbage collected?

                    Based solely on the web site information, this claim seems incorrect and/or misleading:

                    This tool supports the latest standard of notoriously complex C++ and allows full automatic conversion to human readable code.

                    It says “full automatic conversion (of C++) to human readable code”. But this language is missing a lot of features that would be needed to make this possible, such as classes with multiple inheritance, destructors, exceptions, and C++-equivalent templates. The language is “simple”, so it can’t have all the features of C++. You could translate C++ to V by inline expanding all of the C++ features with no V equivalent into lower-level C-like code. For example, inline expand destructors at each point where they would be called. But then the code is not maintainable, and I dispute that it is human readable, even if it is indented nicely, because many of the higher level abstractions have been destroyed. The translation might plausibly be done by using clang to translate C++ to LLVM IR, then converting the IR to V. The resulting V code will not be maintainable unless maybe you use a very restricted subset of C++ that corresponds to the features present in V.

                    1. 10

                      “No global state” means you can’t translate the full C++ language to V.

                      No GC, “safe”, and “fearless concurrency” (a feature of Rust) is a bold claim. How can this be done without the complexity of the Rust type system and borrow checker? Maybe that is enabled by “no global state”, which might mean a very restricted programming model compared to the competition.

                      1. 1

                        V allows globals for code translated from C/C++. Perhaps I’ll be able to remove this hack in the future. Not sure it’s possible.

                        1. 2

                          How do you handle undefined behaviour in the C++ sources when translating? Does V also suffer from undefined behaviour? For example, what would I get if I tried to translate this C++ to V:

                          unsigned int foo = -2.0;
                          
                          1. 3

                            It would translate to

                            foo := u32(-2.0)

                            It will not compile.

                      2. 5

                        I think the thing that made me mentally eject was this part:

                        V can translate your C/C++ code to human readable V code. Let’s create a simple program test.cpp first:

                        #include <vector>
                        #include <string>
                        #include <iostream>
                        
                        int main() {
                                std::vector<std::string> s;
                                s.push_back("V is ");
                                s.push_back("awesome");
                                std::cout << s.size() << std::endl;
                                return 0;
                        } 
                        

                        Run v translate test.cpp and V will generate test.v:

                          fn main {
                                  mut s := []string
                                  s << 'V is '
                                  s << 'awesome'
                                  println(s.len)
                          }
                        
                        1. 10

                          The combination of seemingly impossible claims with no source code, and not even an explanation of how those feats are accomplished, is concerning.

                          1. 2

                            Why?

                            1. 10

                              This is just an unbelievably ambitious project. If it works the way you’re implying here, there are either lots of special cases and this won’t generalize to real codebases, or you have solved an unfathomable amount of hard dependent subproblems. I’m definitely willing to wait and see!

                              1. 7

                                In about two weeks I’ll post that Doom article with the resulting .v code. I’ll send you the link once it’s live ;)

                                I spent most of December and January working on the translator.

                                1. 7

                                    Most people doing this spent months to years to decades, depending on how much they wanted to translate and whether it had to stay human readable. I want C++ experts to weigh in on the translator when you post it. For me, that’s your biggest claim and the one needing the most replication, across both the code you had in mind and the code you didn’t - the kind you wish never existed. It’s also the biggest opportunity for improving major projects in terms of QA.

                                  1. 5

                                      You would come across as more trustworthy if you acknowledged that the translator has limitations, and explained what those limitations are. What you are proposing to do instead is to only show a handpicked example that happens to avoid those limitations. Doom uses a restricted subset of C++, so you can translate it into V without dealing with most of the parts of C++ that make a C++ translator challenging.

                                    1. 4

                                      I will cover everything in the article.

                                      The end goal is to support the entire C++ standard.

                                      1. 2

                                        You’re likely to find that the last 10% of the standard requires 3x the effort.

                                        Nothing wrong in that, and I think getting to 100% is actually not worth the effort. There’s a lot of value in a translator that does 90% of the translation and makes the remaining 10% easy to identify. I’d market it like that until I’m sure the final 10% is done.

                                        1. 1

                                          Agreed, hopefully I’ll get some help from the community. V will be open sourced before the C++ translator is complete.

                              2. 4

                                I don’t even know if it is garbage collected or requires manual allocation/deallocation

                                Over here it says “No GC”, but can’t find any details other than that.

                                1. 2

                                  The reference manual gives syntax for obtaining a pointer to a struct. But there is no mention of a pointer type, nor an explanation of what a pointer is or what you can do with pointers. Can you dynamically allocate a new object and get a pointer to it? If so, how does the object get freed?

                                1. 8

                                  My summary of most of the defensive comments I’ve seen to this article: C doesn’t kill people, people kill people!

                                  1. 2

                                    A single person can run many nodes, right? Can someone run multiple nodes with the same backing storage? Does this affect redundancy?

                                    1. 2

                                      The whitepaper describes mitigations for Sybil attacks. Original Storj designs had some mitigations for this IIRC though not this PoW/Kademlia tree scheme.

                                      The concern I would have is not Sybil attacks, but centralization related to Storj Labs’ satellites. It will be interesting to see whether other non-SL satellites become trusted by the network in practice.

                                      1. 1

                                        A single person can run many nodes, yes. You can choose to run multiple nodes with the same backing storage, but our node selection algorithm chooses nodes based on IP route, geographic, and identification redundancy. You may not receive more data just because you have more nodes. Our recommendation is a node per hardware failure domain (probably one node per hard drive).
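
                                            To illustrate what that kind of selection can look like, here’s a toy sketch - not the real selection code, and every name and rule in it is invented:

                                            package selection

                                            import (
                                                "net"
                                                "strings"
                                            )

                                            // Node is a toy stand-in for a candidate storage node.
                                            type Node struct {
                                                OperatorID string
                                                Addr       string // "host:port", IPv4 only for this sketch
                                            }

                                            // pickDiverse keeps at most one candidate per operator and per /24-ish
                                            // network, a rough illustration of filtering for failure-domain diversity.
                                            func pickDiverse(candidates []Node, want int) []Node {
                                                seenOp := map[string]bool{}
                                                seenNet := map[string]bool{}
                                                var out []Node
                                                for _, n := range candidates {
                                                    host, _, err := net.SplitHostPort(n.Addr)
                                                    if err != nil {
                                                        continue
                                                    }
                                                    octets := strings.Split(host, ".")
                                                    if len(octets) != 4 {
                                                        continue
                                                    }
                                                    subnet := strings.Join(octets[:3], ".")
                                                    if seenOp[n.OperatorID] || seenNet[subnet] {
                                                        continue
                                                    }
                                                    seenOp[n.OperatorID] = true
                                                    seenNet[subnet] = true
                                                    out = append(out, n)
                                                    if len(out) == want {
                                                        break
                                                    }
                                                }
                                                return out
                                            }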

                                      1. 2

                                            @pushcx , this is a duplicate of https://lobste.rs/s/5wp65s/protobuffers_are_wrong , I missed it when I posted this one. Could you please merge it?

                                        (sorry for the duplicate…)

                                        1. 2

                                          1. 1

                                            Not sure if it was intentional or if something else happened but the merged story has less upvotes than the unmerged one did.

                                            1. 3

                                                  The count left of the headline is always votes on the original story, but the ranking (“hotness”) of the merged story and its comments is taken into account by Story#calculated_hotness.

                                              1. 1

                                                oh neat

                                        1. 5

                                          Category-theoretic thinking of products/sums is a good logical model, but I think it’s awful if your physical memory layout is the same thing as your logical model.

                                              For example, let’s take the list [1,2,3]. In the product/sum design, your representation for this structure is: 1:(2:(3:nil)).

                                          Imagine it costs you 1 byte to store a number, and 2 bytes to store a structure. If you take the literal in-memory interpretation for this structure, it is formed from pairs of references (total cost=10): 01 *> 02 *> 03 *> 00

                                              If you’re dealing with packed representations and terminate with the empty structure, you end up with: 01 02 03 00. But if you didn’t treat the physical memory layout as the logical layout, you could also give it a different representation where the sequence is prefixed with its length: 03 01 02 03.
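
                                              To make the two flat layouts concrete, here’s a tiny sketch (a toy byte layout invented for this example, nothing to do with protobuf’s actual wire format):

                                              package encodings

                                              // packedNilTerminated lays [1 2 3] out as 01 02 03 00: the elements in
                                              // order, terminated by the empty structure.
                                              func packedNilTerminated(xs []byte) []byte {
                                                  out := append([]byte{}, xs...)
                                                  return append(out, 0x00)
                                              }

                                              // lengthPrefixed lays [1 2 3] out as 03 01 02 03: the length first, then
                                              // the elements, so no terminator or per-cell structure bytes are needed.
                                              func lengthPrefixed(xs []byte) []byte {
                                                  return append([]byte{byte(len(xs))}, xs...)
                                              }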

                                          I think protobuffer sucks because the schemas are directly compiled into the user language. They would have a much better system if they first converted protobuffer schemas into protobuffer files, then have an interpreter for such files in each dynamic language, and compilers from the schema for the compiled languages.

                                              I also think the post illustrates a common error: not recognizing that implementation details come and go. You really shouldn’t let your language be influenced by them, and if you bake implementation details into your language, you open the barn door to exactly that.

                                          1. 1

                                            I think protobuffer sucks because the schemas are directly compiled into the user language. They would have a much better system if they first converted protobuffer schemas into protobuffer files, then have an interpreter for such files in each dynamic language, and compilers from the schema for the compiled languages.

                                            Just from a pragmatism perspective, that sounds like significantly more work for every language that wants to have a protobuf library. As it stands, having a straightforward translation from the object in memory to the wire format greatly assists implementation across all of the potential languages that need implementing. I think this is the key reason Lua, for example, has seen such broad adoption as a scripting language. It’s easy to embed because it has a very natural layout for interoperability (all calls just push and pop stuff on a stack). It’s very easy to write a Lua FFI.

                                            1. 1

                                                  It’d be a bit more work in each dynamically typed language that you need to support. You’d need a wire format decoder and a script that decodes the schema file and uses it to translate between wire format objects and their legible counterparts in the client language. But that’d be nice to use when you’ve got to read from or write into a protobuffer file, because you could just do the pip install protobuf equivalent of your scripting language and then start rolling:

                                              schema = protobuf.load("api_schema.pb")
                                              dataset_0 = schema.open("dataset_0.pb")
                                              print(dataset_0[0].user_data)
                                              

                                                  It’s quite involved to get the proto3 compiler to work. It’s almost like compiling a C project in complexity. It produces plain code that reserves its own directory in your project.

                                              1. 4

                                                I think protobuffer sucks because the schemas are directly compiled into the user language.

                                                IMO, this is an example of a tooling problem being perceived as a problem with protobuf because the prevailing implementations do it that way. If you want an interpreter-style proto library for C, check out nanopb. protoc will produce data and interfaces (struct definitions) instead of a full C implementation.

                                          1. 26

                                            No, this guy is wrong. Protocol buffers are endlessly pragmatic and many of the “bad decisions” he points out have concrete reasons.

                                                  For instance - he suggests all of the fields should be required. required fields existed in at least proto2 and I assume proto1, but were discovered to be terrible for forwards compatibility. I agree with his footnote that there’s a debate, but one side of it decisively won. If a field is required in one release of your code, that release can never accept protocol buffer serializations from future releases that stop setting the field - it will just blow up. The most frequent internal advice I saw was “avoid required. required is forever.” As a result, most feedback encouraged everything to be optional or repeated, which was made official in proto3.

                                            Second, here’s how he wants to implement repeated:

                                            coproduct List<t> {
                                              Unit empty = 0;
                                              Pair<t, List<t>> cons = 1;
                                            }
                                            

                                            This just reeks of a complete ignorance of a couple of things -

                                                  1. How is this going to look for serialization/deserialization? Sure, we’ve embedded a list into a data structure, but what matters is being fast. Protocol buffers pragmatically describe useful data structures that also are very close to their native wire format. This is not that, but he says “the actual serialization logic is allowed to do something smarter than pushing linked-lists across the network—after all, implementations and semantics don’t need to align one-to-one.”
                                                  2. The protocol buffer implementation must be simple, straightforward, bugfree, and implemented in every language anyone wants to use. Static analysis to detect these patterns could work, but good luck maintaining that logic in every language of your lingua franca language interoperability system.

                                            Third, as an example of the designers of protobufs being amateurs, he says:

                                            It’s impossible to differentiate a field that was missing in a protobuffer from one that was assigned to the default value.

                                            headdesk proto2 definitely supported this functionality. It was stripped out in proto3 after literally decades of experience from thousands of engineers said that on balance, the tradeoff wasn’t worth it. You can’t claim that a hard look of the tradeoffs is a result of being amateurs.
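
                                                  For anyone who hasn’t used both generations, the difference shows up directly in the generated code. Here’s a rough sketch in Go, with hand-written structs shaped like the classic codegen output (not actual protoc output):

                                                  package main

                                                  import "fmt"

                                                  // proto2 style: optional scalars become pointers, so nil means "never set".
                                                  type UserV2 struct {
                                                      Age *int32
                                                  }

                                                  // proto3 style: scalars are plain values, so 0 could mean "set to 0" or
                                                  // "never set" - you can't tell which.
                                                  type UserV3 struct {
                                                      Age int32
                                                  }

                                                  func main() {
                                                      var a UserV2
                                                      fmt.Println("proto2-style unset is detectable:", a.Age == nil) // true

                                                      var b UserV3
                                                      fmt.Println("proto3-style value:", b.Age) // 0, but was it ever set?
                                                  }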

                                            Fourth:

                                            With the one exception of routing software, nothing wants to inspect only some bits of a message and then forward it on unchanged.

                                            This is almost entirely the predominant programming pattern at Google, and in many other places too. Protocol buffers sound… perfectly designed for their use case!

                                            What a frustrating read.

                                            1. 4

                                              Thanks for this critique, you’re right on. I do agree with one part though - you need to make application specific non-proto data structures that often mirror the protos themselves, which isn’t exactly DRY.

                                                    Here’s an example that I’m struggling to find a “nice” solution for. A locally running application has a SQLite database, managed via an ORM, that it collects structured log entries into. Periodically, it bundles those log entries up into protos, removes them from the local database, and sends them (or an aggregated version of them) up to a collection server.

                                              The data structures are the exact same between the protos and the database, yet I need to define the data structures twice.
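
                                                    Concretely, the duplication looks something like this (hypothetical Go types with invented names, just to show the shape of the problem):

                                                    package logs

                                                    import "time"

                                                    // LogEntryModel is the shape the ORM persists locally (tags and keys omitted).
                                                    type LogEntryModel struct {
                                                        ID        int64
                                                        Message   string
                                                        CreatedAt time.Time
                                                    }

                                                    // LogEntryProto stands in for the generated protobuf message, which ends up
                                                    // repeating the same fields.
                                                    type LogEntryProto struct {
                                                        Id            int64
                                                        Message       string
                                                        CreatedAtUnix int64
                                                    }

                                                    // toProto is the glue you end up writing (and keeping in sync) by hand.
                                                    func toProto(m LogEntryModel) LogEntryProto {
                                                        return LogEntryProto{
                                                            Id:            m.ID,
                                                            Message:       m.Message,
                                                            CreatedAtUnix: m.CreatedAt.Unix(),
                                                        }
                                                    }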

                                              1. 3

                                                Hmm, yeah, that’s a tough one. One thing that the protobuf compiler supports though is extensible plugins (e.g., check out all of the stuff gogoproto adds as extensions to the compiler: https://github.com/gogo/protobuf/blob/master/extensions.md)

                                                Perhaps the right thing in ORM situations at a certain scale (maybe you’re not at this scale yet) is to write a generator that generates the ORM models from the protobuf definitions?

                                                1. 2

                                                  Yeah, that would seem like the right solution in this case. In any case, what I described isn’t even a problem with the proto way of thinking, it’s just a tooling issue.

                                              2. 4

                                                Nice critique, better than I could have enunciated. I worked with the author at a company that chose protocol buffers and assume in part that the article is directed at those of us (myself included) who chose to use protocol buffers as our wire serialization format. It was the most viable option given the constraints and the problems a prior system was exhibiting. That said, the author has predilections and they show in the article’s tone and focus.

                                                1. 1

                                                  Were you replacing XML?

                                                2. 3

                                                  This is the best critique of this rant I’ve read, and you didn’t even go into the author’s attitude. Kudos and thank you.

                                                1. 6

                                                        Overall there is a lot of depth in this post, and it contains a great amount of detail about terminal I/O. I can understand the author’s hatred of ncurses, but the sheer length of the post kinda shows the point of needing some level of abstraction when starting on a TUI project, just due to the sheer number of gotchas between different terminal types. Are there any alternative / more modern libraries in existence?

                                                        From a meta-post perspective, things I don’t like: sentences aren’t capitalized. The author uses correct code blocks, syntax highlighting, most of it seems correctly proofread/edited, and yet she has chosen to not capitalize the first word of each sentence.

                                                        Maybe I’m just nitpicky here, or maybe she’s trying to start a trend in the way language should be directed. After all, at one time English used to capitalize all nouns (like German still does), and we used to indent paragraphs (which has been replaced with block formatting, except in novels). So maybe this is just the next thing.

                                                  also, i’m a) a nobody and b) a woman. nothing i wrote would ever gain any traction; any project designed to supplant ncurses needs to come from someone who’s actually known to the FOSS community. and a maintainer who isn’t a cripple.

                                                        She shouldn’t shoot herself down here. I really don’t think there’s a lot of evidence to support this “open source is hostile to women” idea that has been gaining traction. A lot of projects big and small have that Contributors Code of Conduct (or something based off of it) on them now. Are we still seeing backlash against women for being women? Are there any specific examples? (I’m not trying to troll; I really want to see real examples that don’t involve simply trying to get a CoC added to a project or removal of words like master/slave).

                                                  The fact is, a lot of projects never gain traction in the OSS community. It’s difficult to make something people would use and to get other people to use your shit. A lot of big OSS libraries and projects today are backed by huge investment by big industry, or are supported by people in academia who can work on them between research and classes. That’s a bigger meta problem in the way we think of open source today.

                                                  1. 3

                                                          “I’m a nobody” is of course not actually a reason to not do something. “I’m a woman” is especially not a reason to not do something. The idea that you need to be established for a project to gain traction is reasonable and maybe she doesn’t like being directly in the limelight.

                                                          After all, if every woman had the attitude of “I’m a woman, therefore my project would not be well received” and that caused them not to start then no woman could ever be successful. It’s definitely not the right frame of mind even if it were true (and I’m not saying it’s not or that it is). I think it’s absurd to say that no discrimination happens in OSS, especially considering how recent the push to CoC has been and how much push back there has been. However I also think it’s absurd to say that one could never have a successful project as a woman as obviously women have led successful OSS projects.

                                                    I don’t think saying “She shouldn’t shoot herself down” is very effective. We should try and evaluate what pressures make people feel this way and how we can help them overcome them. On boarding is something that is almost universally bad in OSS, and could be improved irrespective of gender. Saturn for F# is a good counterexample. https://github.com/SaturnFramework/Saturn . With it are words of encouragement, clear and direct expectations, and clear documentation. It’s missing a code of conduct which I think could help someone feel a little bit more secure about contributing, but otherwise pretty good. Those words of encouragement at the beginning help set a tone and example about how you’ll be treated when you contribute, and if you’re sharing your hard work to see what this thing can become it’s important that the culture around it is positive enough that you don’t feel punished for doing so.

                                                    1. 2

                                                      I don’t think saying “She shouldn’t shoot herself down” is very effective.

                                                      We know a lot of what pressures make people think that way. Her other essays, if some are about her, indicate she might have more pressures than most people. In any case, the negative attitude of “I shouldn’t even try because I believe X” coupled with dropping that into a write-up she put a lot of time into are both No No’s in general. It’s common in pep talk, therapy, self-help books, guides on entrepreneurship… everything about getting ahead despite challenges… to avoid thinking like that.

                                                            If the guidance includes others, they’ll also tell you not to whine to or accuse strangers by default since the vast majority won’t be cool with that. I mean, let’s look at it. The communities most likely to be giving her that much shit will not care if she writes that in her blog post. They’ll laugh. The ones who wouldn’t would be anywhere from concerned to wondering if she’s overly negative or having psychological problems. In other words, even many of them might think she’s just a pain to work with given she’s dropping stuff like that in the middle of tech write-ups and most women aren’t.

                                                      It doesn’t matter which light I look at it. It’s some defeatist, over-projecting BS in her mind which isn’t going to help her if it’s true for projects she deals with or false for others where she looks like a downer. Showing some sympathy on top of discouraging such negative thinking or outbursts is a good piece of advice. It’s also common sense for a lot of people struggling in my area, esp minorities. Got to keep your head up and clear pressing forward they’d say. Well, the ones that aren’t in a slump or wanting to just quit.

                                                      Btw, that said, I totally agree with you that a welcoming community with good onboarding and support is a good thing. It will definitely help in general. It can also be a relief for these types of people. I’m just saying it’s general advice of all kinds of people to combat these negative, self-reinforcing beliefs and practices. They’re bad habits.

                                                      1. 2

                                                        I guess what I was trying to say and be clear I didn’t mean it quite so strongly as it came off, is that individual advice isn’t a solution to the systemic problem. I know you weren’t claiming it was and if I phrased it like a refutation that’s my mistake. I just saw the moment as an opportunity to talk about broader strategies that can help accommodate large groups instead of focusing on individuals. You’re right though we do broadly speaking know the problem. I also agree that no matter what the reality is, believing you can’t is never in your favor. Helping a person move past that mindset of “I can’t so I shouldn’t” is very important.

                                                        1. 1

                                                          Thanks for clarifying. That makes sense. I think we’re in agreement.

                                                      2. 0

                                                        think it’s absurd to say that no discrimination happens in OSS, especially considering how recent the push to CoC has been and how much push back there has been.

                                                              A lot of the push back on CoCs is that there doesn’t seem to be any evidence that they’re actually necessary, that there is any discrimination that needs to be addressed. I’m not sure I could discriminate against women even if I wanted to, I don’t know which ones are women! The only people in open source projects whose gender I’ve even noticed are people with obviously gendered names: if someone’s username is adam1423 (not a real example) then it’s obviously a guy, but otherwise I don’t even think about it, they’re just a person.

                                                        On boarding is something that is almost universally bad in OSS, and could be improved irrespective of gender. Saturn for F# is a good counterexample. https://github.com/SaturnFramework/Saturn . With it are words of encouragement, clear and direct expectations, and clear documentation.

                                                              I don’t think there’s much extra here that isn’t in most projects. A lot of projects I’ve seen (that are of decent size, at least) have some sort of ‘CONTRIBUTING’ file. I think people mistake documentation existing for documentation being rendered as HTML on GitHub. GitHub is not the only website out there for open source. There are a lot of resources on the internet. Most of the things mentioned in that file for Saturn are common to basically every project anyway.

                                                        People often don’t care about onboarding because it doesn’t actually matter to them whether you start contributing to their project.

                                                      3. 4

                                                        “She shouldn’t shoot herself down here.”

                                                        With her saying that, she has a negative attitude that casts every project in the same, discriminating light. She might be a pain in the ass to work with. That comment alone might get her ignored if a person reading a contribution saw it before. There’s others that would try to pull her in to help her out. I just think people should avoid saying stuff like that by default since politically-focused negativity is immediate turn off for many potential collaborators. Person A has it, Person B doesn’t, and so path of least effort and headaches is going with Person B.

                                                        “The fact is, a lot of projects never gain traction in the OSS community. It’s difficult to make something people would use and to get other people to use your shit.”

                                                        This right here is something to remember about introducing any new idea or practice. The default response is apathy. There’s an entire site dedicated to strategies for countering apathy to new products. Most that make it talk about how hard it was. There’s others that just took off but that’s rare. So, if wondering about negative responses, the first variable to eliminate (somehow) should be apathy. Most people just won’t give a shit.

                                                        1. 1

                                                              From a meta-post perspective, things I don’t like: sentences aren’t capitalized. The author uses correct code blocks, syntax highlighting, most of it seems correctly proofread/edited, and yet she has chosen to not capitalize the first word of each sentence.

                                                              Maybe I’m just nitpicky here, or maybe she’s trying to start a trend in the way language should be directed. After all, at one time English used to capitalize all nouns (like German still does), and we used to indent paragraphs (which has been replaced with block formatting, except in novels). So maybe this is just the next thing.

                                                          Like the CSS feedback from @johnaj, you are being nitpicky.

                                                          1. 2

                                                            I don’t understand why you’re dragging me into whatever discussion you are having. I merely informed the author that her HTML wasn’t written properly – breaking popular things like reader mode – so that she could fix it.

                                                            I haven’t read the entire article yet, but I think it is interesting so far.

                                                            1. 2

                                                              You were actually informing Lobsters that her HTML was broken. Usually I’d say send it to their comment box, email, or whatever. I didn’t see anything listed. Perhaps they don’t want to be contacted.

                                                              Anyway, probably best to tell just the author that kind of stuff since we can’t change it. If there’s no contact, then I’d say don’t even mention it since you’re just griping to us about what they are doing with their own site. Goes nowhere.

                                                              1. 3

                                                                Yeah, that’s true. I just don’t think it was pedantic. The page was difficult for me to read. I hoped the author would be on Lobste.rs and see it, which isn’t unusual, but I guess that isn’t the case here.

                                                            2. 1

                                                                  I don’t think it’s nitpicking to criticise an article for not using capital letters or being formatted in a way that makes it hard to read. When I opened the article, I immediately closed it again instinctively. I didn’t read any of it. I literally opened it, saw it was light-on-dark low-contrast nearly-unformatted text and closed it, instinctively.

                                                              I’ve since read it, but I’ve found I do this a lot. If I open something and it’s badly formatted in this kind of way I unconsciously/instinctively close it within a few hundred milliseconds.

                                                          1. 1

                                                            I like Homebank. Gets the job done. Seems a bit more modern than GnuCash

                                                            1. 1

                                                              Flagged for press release and advertising, and it isn’t even tagged properly.

                                                              1. 1

                                                                Fair enough. I did think it was interesting timing given the prior discussions around open source monetization with Redis.

                                                                What would the correct tags have been?

                                                                1. 1

                                                                  The nearest I could argue for would be release (since it’s a new program), but even then it’s a business thing and not a “hey, go download new software with changes X, Y, Z” thing.

                                                              1. 4

                                                                Storj Labs is (job description). We recently hired Ben Golub as CEO, which has been an injection of rocket fuel in terms of upcoming partnerships. We started rebuilding our platform in Go as of April of this year. Tons of interesting work in distributed storage! We’re very remote friendly.

                                                                  1. 1

                                                                    Can you imagine someone writing a blog post like “farewell, ball-peen hammers! I will be using framing hammers from now on!”

                                                                    1. 2

                                                                      nice, i wrote a suite of tools that interoperate with this exact file format description a few years ago. i think the csv to tsv conversions might replace tabs with spaces instead of erroring but otherwise this is exactly that

                                                                      https://github.com/jtolds/tsv-tools

                                                                      1. 2

                                                                        Very nice! I added a link to it from my doc

                                                                      1. 4

                                                                        Any example of a situation in which creating commits like this helps?

                                                                        1. 3

                                                                          Let’s say you have some complicated history pattern with merges and so on. Lots of different developers doing lots of different things. After a bunch of merges and merge conflict resolutions you have a history of sorts, but you want to clean it up. This allows you to make a single commit where the end result is the tree matches the complex history you’d like to throw away.

                                                                          Very useful for keeping dev history clean.

                                                                          1. 1

                                                                            Ah, that makes sense. So like a squashed merge, but without the merge. I guess it’s what git merge --squash --strategy=theirs would do if that merge strategy existed.

                                                                            1. 1

                                                                                  Well, now that I think about it, wouldn’t saying --strategy=theirs be specifying just how conflicts are handled? My tool is saying, forget about conflicts, merging, everything, take the entire tree from the other commit wholesale. Don’t even try and merge things together.

                                                                              1. 1

                                                                                No, that’s what --strategy=recursive -X theirs does. The existing “ours” strategy just throws away the other commit and takes the tree from the current one. A fictional “theirs” strategy would do the same with the other tree.

                                                                                Merge strategies and their options are pretty confusing.

                                                                            2. 1

                                                                              Wait… you use it to delete history? But… having that history around is the reason I use git?

                                                                              1. 1

                                                                                The last thing I want to do when fighting a production fire at 3am is be sorting through 12 merges of commits that look like:

                                                                                • add feature
                                                                                • whoops
                                                                                • small fix
                                                                                • review comments
                                                                                • doh maybe this time.

                                                                                squash that crap together! What commit broke the build is infinitely harder to figure out when the problem is in some chain of merges titled “whoops”

                                                                                I decidedly prefer having my git history serve as a neatly curated form of documentation about the evolution of the codebase, not chaos of immutable trial and error

                                                                                1. 2

                                                                                  I constantly bring this up in pull requests when I see shitty commit histories like that. Squash your damn commits! If you’ve already pushed a branch, create a new one with a new name, pick your commits on top of it, rebase -i and squash them into succinct relevant feature sets (or try to get as close as you can).

                                                                                      I realize this is one that’s already gone and it’s too late (history with a ton of “squishme: interum commit” bullshit in there) and that’s the purpose of tools like yours, but teaching people good code hygiene is pretty important too. :-P

                                                                                  1. 1

                                                                                    So I agree with you on this approach, but I think I’m still not grasping what your tool accomplishes. Couldn’t the situation you’re outlining here be accomplished by squashing?

                                                                                    1. 1

                                                                                      Yeah that last comment was really more of a discussion about why you might want to clean up git history. That’s a poor example for this tool.

                                                                                      This tool is useful when there’s multiple merges along two divergent branches of history and you want to make a commit that essentially contains the entire diff from your commit down to the merge-base of another commit combined with the diff from the merge-base back up to that other commit.

                                                                                      1. 1

                                                                                        Hmm, I guess I just can’t picture in what kind of situation that would happen. Could you explain the example chronologically?

                                                                                        1. 1

                                                                                          I think @jtolds is on significantly more complicated code bases than I’ve worked on. There was an earlier post about Octopus commits:

                                                                                          https://www.destroyallsoftware.com/blog/2017/the-biggest-and-weirdest-commits-in-linux-kernel-git-history

                                                                                          and here is a visual for what that would look like:

                                                                                          https://imgur.com/gallery/oiWeZmm

                                                                            1. 3

                                                                              A totally useful comment from Reddit: git commit-tree refspec^{tree} -p HEAD was exactly what I’ve always been looking for and does 95% of what my tool does.

                                                                              https://www.reddit.com/r/git/comments/8v3pjg/comment/e1kkmhm