1. 2

    Is anyone else holding off from using the new lua hotness because they’re hesitant to move their neovim config further away from what vim supports? I have no intention (right now) of moving off neovim and back to vim, but the loss of compatibility is leaving me with second thoughts about whether it’s worth it.

    1. 1

      I moved to neovim fulltime a couple years ago, and so far have never opened plain vim except by accident. A completely seamless transition, in my anecdotal experience. I do mostly web application programming with it, YMMV.

      1. 1

        Initially I had this same hesitation regarding many Neovim things, such as Neovim only plugins. But what’s the point of having nice things if you don’t actually use them? At this point I’ve been on Neovim for three or four years and haven’t had any regrets.

        If I go back to vim, or any other editor, I’ll view that as an opportunity to rebuild my config from scratch or near scratch - I find it a good way to clean up unused plugins, settings, themes etc and also a forcing function to discover new plugins and workflows.

      1. 2

ActiveSupport contains its own JSON serialization API, although it’s probably best you didn’t mention it

        1. 1

          Indeed. Somebody on Reddit mentioned panko_serializer which seems nice and fast as well.

        1. 6

          This is what early free and open people celebrated as “meritocracy”, before it became clear that the particular kind of lazy, blinkered discrimination actually practiced online, as distinct from the mythical fair holistic contest of achievement, potential, and brilliance hackers found so flattering, had terrible social consequences.

          This was in fact the entire point of the word “meritocracy” from the beginning. It was coined as satire.

          1. 2

            This was in fact the entire point of the word “meritocracy” from the beginning. It was coined as satire.

And yet, the meaning and connotation of a term can change: see “blacklist”, a term already in use in the 17th century that only recently became associated with racial tensions, so folks are now asking to retire it.

            While meritocracy is terribly hard to implement and easy to abuse as a gatekeeping device, I wouldn’t be too hung up on its “original” meaning to reject the notion altogether.

            1. 1

              The point of examining meritocracy is to realize that any choice of partition or gradient, regardless of its particular operational benefits, is going to lead to an unjust and harmful society which oppresses the bulk of its population. How far would you change the meaning, and where would you try to go with it? By any definition of “better” and “best”, meritocracy writes its own critique.

              1. 1

                That’s an actual argument. “It was coined as satire”, less so in my opinion (but commonly used as if it was).

            2. 1

              Yeah, it’s one of those tropes that’s better known in truncated form, which happens to give exactly the wrong impression. Like “rotten apple”, “information wants to be free”, and “Utopia”.

              I’ve basically just given up. Unless the misapprehension is really the point I want to make.

            1. 2

If you are concerned about the privacy of metadata in an E2EE messenger, the Cwtch project over at OpenPrivacy might be of interest.

              1. 4

Great article! One thing that might be worth adding is that ACME is not a secure protocol in cases where one client could potentially spoof another client’s address and perform the ACME challenge in its place. For network devices like load balancers and switches, this is definitely a concern since, depending on the network architecture, these devices may be able to spoof each other’s addresses easily. SCEP doesn’t solve this problem either, of course. It’s a challenging problem to solve and something I’ve been working on myself.

                1. 5

                  The security of ACME also depends a lot on DNS and related infrastructure. For example, I use dns-01 for a couple of my VMs, but my DNS host (Gandi) doesn’t provide a way of granting an API token that is locked down to a specific record or set of records, so either of the VMs could request Let’s Encrypt certs for any of my domains.

                  FWIW, I use acme.sh, which wasn’t on the list. Its default automation flow is a bit annoying because you want to run its cron task as an unprivileged user to get the certs and put them in a staging area and then run the deploy phase as root to move the certs into a location that the unprivileged user can’t write to. Unfortunately, it has a thing that can install the crontab entry in the unprivileged user’s crontab, which is completely unhelpful because that’s intrinsically racy with respect to the deploy step. The sane way of running it is from a periodic script that drops privileges to run acme.sh as the unprivileged user, blocks until it’s finished, and then runs the deploy step.
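
A minimal sketch of that periodic-script approach, written in Go purely for illustration rather than shell: the acme.sh flags, the paths, the uid/gid values, and the deploy command are all assumptions, not a documented interface.

```go
package main

import (
	"log"
	"os/exec"
	"syscall"
)

func main() {
	// Step 1: run acme.sh as an unprivileged user (uid/gid are placeholders),
	// so certificate issuance never runs with root privileges. This program is
	// assumed to be started as root from a periodic/cron entry.
	issue := exec.Command("/usr/local/sbin/acme.sh", "--cron", "--home", "/var/acme")
	issue.SysProcAttr = &syscall.SysProcAttr{
		Credential: &syscall.Credential{Uid: 1001, Gid: 1001},
	}
	if err := issue.Run(); err != nil {
		log.Fatalf("issuance step failed: %v", err)
	}

	// Step 2: only after the unprivileged step has fully finished, run the
	// deploy step as root to move the certs out of the staging area. Blocking
	// on step 1 is what avoids the race between two independent cron entries.
	deploy := exec.Command("/usr/local/sbin/deploy-certs")
	if err := deploy.Run(); err != nil {
		log.Fatalf("deploy step failed: %v", err)
	}
}
```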

                  I was a bit concerned that the authors of acme.sh seemed to struggle with the idea of privilege separation and their recommended deployment is to use sudo from the unprivileged user to elevate privileges to root to run the deploy script. That’s incredibly easy to get wrong (allowing shell-script invocation via privilege elevation is never a good idea) and brings sudo into the attack surface.

                  1. 2

                    Doesn’t matter if you can spoof the address, because it won’t work without the account’s private key. See RFC8555 6.2

                  1. 14

As someone who paid a fair bit of attention to the early Docker world, and who is now watching its commodification and wondering “what was it”, I think this article does a good job of explaining it. What it doesn’t explain is… I was around at that early Red Hat time, when it was small, when you could shake Bob Young’s hand at a Linux meetup. Heck, I remember when Google was a stanford.edu site… the question in my mind is… why did Red Hat and Google succeed (as corporate entities) and Docker not so much? Perhaps it was the tight coupling of the company name and the core tech? Perhaps the world of 2010-2020 was far harsher to smaller businesses, or perhaps they just overshot by trying to fight their competitors instead of partnering with them. That will probably have to wait for an HBR retrospective, but I’m not 100% psyched that the big incumbents won this.

                    1. 13

                      Docker lost, as I understand it, because of commoditisation. There’s a bunch of goo in Linux to try to emulate FreeBSD jails / Solaris Zones and Docker provided some tooling for configuring this (now fully subsumed by containerd / runc), for building tarballs (not really something that needs a big software stack), and for describing how different tarballs should be extracted and combined using overlay filesystems (useful, but should not be a large amount of code and now largely replaced by the OCI format and containerd). Their two valuable things were:

                      • A proprietary build of a project that they released as open source that provided tooling for building container images.
                      • A repository of published container images.

The first of these is not actually more valuable than the open source version, is now quite crufty, and so has a load of competitors. The second is something that they tried to monetise, leaving them open to competitors who get their money from other things. Any cloud provider has an incentive to provide cheap or free container registries, because a load of the people deploying the containers will be spending money on cloud resources to run them. Docker didn’t have any equivalent. Running a container registry is now a commodity offering, and Docker doesn’t have anything valuable to couple their specific registry to that would make it more attractive.

                      1. 9

                        I wrote a bit about that here – Docker also failed to compete with Heroku, under its former name dotCloud.

                        https://news.ycombinator.com/item?id=25330023

                        I don’t think the comparison to Google makes much sense. I mean Google has a totally different business that prints loads of money. If Docker were a subdivision of Google, it could lose money for 20 years and nobody would notice.

As for Red Hat, this article relates some interesting experiences:

                        Why There Will Never Be Another RedHat: The Economics Of Open Source

                        https://techcrunch.com/2014/02/13/please-dont-tell-me-you-want-to-be-the-next-red-hat/

                        To make matters worse, the more successful an open source project, the more large companies want to co-opt the code base. I experienced this first-hand as CEO at XenSource, where every major software and hardware company leveraged our code base with nearly zero revenue coming back to us. We had made the product so easy to use and so important, that we had out-engineered ourselves.

                        (Although I don’t think Docker did much engineering. It wasn’t that capable a product. It could have been 30 to 100 people at Google implementing it, etc. Previous thread: https://lobste.rs/s/kj6vtn/it_s_time_say_goodbye_docker)

                        1. 3

                          I appreciate the article on RedHat. It has certainly opened my eyes to the troubles with their business model, which I had admired in the past. (I suppose it is still admirable, but now at least I know why there aren’t more companies like it.)

The back half of the article is strange, though. I’m not sure what I’m supposed to learn about building a new business around open source by looking at Microsoft, Amazon, or Facebook. While they all contribute open source code now, they did not build their businesses by selling proprietary wrappers around open source products, as far as I know. And given the sheer size of those companies, it seems very hard to tell how feasible it would be to copy that behavior on a small scale. GitHub seems like a reasonable example of a company monetizing open source, however. It is at least clear that their primary business relies on maintaining Git tools. I just wish the article included a few more examples of companies to look up to. Perhaps some lobsters have ideas.

                          1. 5

                            I just wish the article included a few more examples of companies to look up to

                            To a first approximation, there are no companies to look up to.

                            1. 2

                              I feel like some of the companies acquired by RedHat might be valid examples. I expect that the ones that are still recognizable as products being sold had a working model, but I don’t know what their earnings were like.

                            2. 3

The biggest ones I can think of, not mentioned, are Mongo and Elastic… Redis may go public soon, and there are lots of corps around data storage and indexing that to some extent keep their core product free. There might be more. If you look at interesting failures, going back to the early days, LinuxCare was a large service-oriented company that had a giant flop, as did VA Linux (over a longer time scale):

LinuxCare: https://www.wsj.com/articles/SB955151887677940572

VA Linux: https://www.channelfutures.com/open-source/open-source-history-the-spectacular-rise-and-fall-of-va-linux

                              1. 2

                                Appreciate it, thanks.

                          2. 8

The same question, I think, could be asked about why Netflix succeeded but Blockbuster failed; both were doing a very similar thing. It seems that market success consists of chains / graphs of very small incremental decisions. The closer decisions are to the company’s ‘pivot time’, the more impactful they seem to be.

And, at least in my observation, paying well and listening to well-rounded, experienced, risk-taking folks who join your endeavor early pays huge dividends later on.

In my subjective view, Docker failed to visualize and execute on the overall ecosystem around their core technology. Folks who seem to have that vision (but perhaps not always the core technology) are the ones at HashiCorp. They are not Red Hat by any means, but any one of their OSS+freemium products seems to have a good, cohesive, ‘efficient’ vision around the ecosystem in this space (where by ‘efficient’ I mean that they do not make too many expensive and user-base-jarring missteps).

                            1. 1

could be asked about why Netflix succeeded but Blockbuster failed; both were doing a very similar thing

                              I’m not sure I agree. Coincidentally, there’s a YT channel that I follow that did a decent overview on both of them:

                            2. 3

My opinion on this is that both Google and Red Hat are much closer to the cloud and the target market than Docker is/was.

                              Also, I thought that Docker was continuously trying to figure out how to make a net income. They had Docker Enterprise before it was sold off, but imo I’m not sure how they were aiming to bring in income. And a startup without income is destined to eventually close up.

                              1. 3

the question in my mind is… why did Red Hat and Google succeed (as corporate entities) and Docker not so much?

                                Curating a Linux distribution and keeping the security patches flowing seamlessly is hard work, which made Red Hat valuable. Indexing the entire Internet is also clearly a lot of hard work.

                                By comparison, what Docker is doing as a runtime environment is just not that difficult to replace.

                                1. 1

                                  I kinda feel like this is the ding ding ding answer… when your project attempts to replicate a project going on inside of a BigCo, you will have a hard time preventing embrace and extend. Or perhaps, if you are doing that, keep your company small, w/ limited debt, because you may find a niche in the future, but you can’t beat the big teams at the enterprise game, let alone a federation of them.

                                2. 2

I think we all know our true desires; we are just left to discover them.

Let’s not forget the Docker timeline:

• Started in 2013.
• Got open-source recognition.
• Got increased public use in 2015/2016.
• In 2017, the project was renamed from Docker to Moby. Mistake 1.
• In 2018, Docker started requiring user registration on Docker Hub. Mistake 2.
• In 2019, the Docker Hub database was hacked, which exposed user data. Mistake 3.
• In 2020, Docker finally died and awaits rebirth. Goodbye.

When I think about it, I’m not even mad. Hail the death of Docker.

                                1. 1

                                  Firefox says the screencast is corrupt, for some reason.

                                  1. 6

                                    I disagree that the OVH to AWS comparison is apples to apples. You’re comparing a non-redundant SSD drive to EBS, for starters. You’ll need to price out a SAN to be fair.

                                    There are other price factors at play here that aren’t immediately apparent. I work for a modestly-sized managed hosting platform. We originally launched on Digital Ocean, but at the time the higher-end servers were too expensive, so we expanded onto Linode for those. That has widely been regarded as a bad move, as the difference in reliability of the two platforms makes Linode more expensive for us once you factor in all the support costs.

                                    1. 3

                                      I disagree that the OVH to AWS comparison is apples to apples. You’re comparing a non-redundant SSD drive to EBS, for starters. You’ll need to price out a SAN to be fair.

The “150% more expensive for less compute and RAM” quote, though, doesn’t include the EBS cost, for that exact reason. I wanted to be as charitable as possible. That’s why I chose not to include the EBS cost, to have the AWS machine be ~2/3 as good, and to compare reserved-for-one-year pricing against per-month pricing. AWS had leeway compared to OVH in every single metric, and the numbers still came out at AWS being 150% higher. If I hadn’t given that leeway, the numbers could arguably be 300-500% higher.

                                      We originally launched on Digital Ocean, but at the time the higher-end servers were too expensive, so we expanded onto Linode for those. That has widely been regarded as a bad move, as the difference in reliability of the two platforms makes Linode more expensive for us once you factor in all the support costs.

Availability is indeed an issue, but at the end of the day neither of the two providers has an amazing SLA for those machines.

                                    1. 45

                                      So here’s the thing: X works extremely well for what it is, but what it is is deeply flawed. There’s no shame in that, it’s 33 years old and still relevant, I wish more software worked so well on that kind of timeframe. But using it to drive your display hardware and multiplex your input devices is choosing to make your life worse.

                                      Man, this person really knows how to make their prose pack a punch.

It’s refreshing, almost to the point of being inspirational, to see someone who’s spent a large chunk of their life laboring to improve a given codebase make a case for its (XFree86’s) near-total obsolescence with such eloquence and authority.

                                      1. 4

                                        You can only apply so much thrust to the pig before you question why you’re trying to make it fly at all.

                                        That quote is gonna stick in my brain for sure

                                      1. 18

                                        Correct me if I’m wrong, but isn’t the notice-and-takedown provision of the DMCA exclusively relevant to copyrighted material per se, and not applicable to the anticircumvention provision? And it even seems a bit of a stretch to claim this is even a circumvention; youtube-dl merely requests the same data as a browser, I don’t think it has any functionality related to DRM.

                                        1. 2

                                          As I understand it, the root issue was that the source code had test cases that specifically linked to copyrighted material. I suspect it would have been otherwise ignored apart from that.

                                          1. 1

At least under German law, the distribution of copyrighted software and of software to circumvent copyright is illegal.

I’m sure other countries have similar laws.

                                            1. 28

But youtube-dl is not a tool built to circumvent copyright. Especially as YouTube does host CC-licensed material, and tons of their material is owned by its creators, who can give you a license to download and use it at any time. The DMCA notice makes a very careful point of referring only to the YouTube standard license. The problem there is that YT does not provide another method to exercise your right to copy.

Also, while the court in Hamburg is known for its… creativity and industry-friendliness, it is also not unusual for its decisions to be overturned on appeal.

                                              1. 3

Good point! Then the software would only go against YouTube’s end-user license agreement? But I guess that can’t be enforced with a DMCA notice. I hope the court takes this into account!

                                                1. 4

                                                  Especially as YouTube would have to be the party to go to court over this. Also, the Terms of Service are aimed at the user of youtube-dl.

                                                  1. 1

                                                    What’s next? “You use the computer, therefore you’re stealing our content? Even though it’s not our content at all we want you to stop and shut down your computer immediately”.

                                                    1. 1

                                                      I don’t understand your reply. The only thing I noted is that in the scenario the OP arrived at (ToS enforcement), the legal parties would be different and the RIAA cannot be in the picture. And YouTube has no interest in suing its users.

                                                      Nothing is next.

                                                      1. 1

Oh no, I meant to reply to your previous comment, and I was agreeing with it. Basically, the RIAA is trying to take down youtube-dl because it was used to download copyrighted content. But so was the entire computer. That’s what I meant when I asked “what’s next”.

I don’t know why I replied to this comment and not your previous one, though.

                                                        1. 3

Ah, yes! Thanks for clarifying. Yep, the problem is that we need fundamental reform, not escapism. IMHO, centralisation vs. decentralisation is a red herring. It will just lead to the situation we had years ago: going after the nodes and the creators, with a less clear battlefield.

                                                          IMHO, this situation is less bad. It’s visible and I’m sure we’ll read about some lawyer filing a counter-claim next week or so.

                                                  2. 3

Also, youtube-dl does not distribute the content. There are laws and enough court decisions in Germany establishing that private copies, and the tools to make them, are allowed. youtube-dl would probably be seen as that kind of tool.

                                                  3. 1

                                                    But youtube-dl is not a tool built to circumvent copyright.

That may not matter. It is a tool built to circumvent the inconvenience of not having an easy way to download videos from the website. That inconvenience almost certainly counts as a “technical measure” (there’s some obfuscation going on). It doesn’t matter whether the work behind it is protected; circumventing the technical measure does.

Now, while the technical measure does have to restrict access to protected work, that may not have to be its primary purpose. If YouTube obfuscated download capability primarily to get users to come back and see ads, the fact is that it restricts access to protected works, and that may be enough.

                                                    1. 3

The “may” is very load-bearing here, though. The problem is that every right and rule is subject to weighting in court. And, for example, Germany grants the right to create private copies for personal use/archival. Circumvention of copy protection is illegal, but copy protection is not only a technical measure; it also needs a clear marker on the source material. So, in Germany, both the algorithm must exist and the source must be marked as copyrighted and protected.

                                                      This is actually a reasonable rule: it avoids the situation where “open” things are wrapped to be closed.

If you look closely at what the RIAA quotes (I’m still trying to find the decision they quote): they talk about a “service”, so probably about an intermediary helping users. I have not yet found which decision they actually refer to; someone on Twitter assumed it was this one: http://www.rechtsprechung-hamburg.de/jportal/portal/page/bsharprod.psml?doc.id=JURE180006255&st=ent&doctyp=juris-r&showdoccase=1&paramfromHL=true#focuspoint

A very rough tl;dr: this describes a case of a server-based service which allows you to grab the audio track of a YT video (I assume to download albums from YT). This is commercial circumvention of copy protection. The case document even goes into a lot of detail to explain how the service is not just a proxy for the individual user in all cases: because it is an ad-based service, the defendant was not able to claim to be merely enabling easy private copies; it indeed monetises each copy. The question of whether the user was allowed to take this copy was expressly ruled out; the sticking point was that illegal copies are monetised. Sounds like a classic stream-ripping service to me, and those are indeed very damaging to video platforms (the classic move was to rip a stream from a player and put it in your own player, with your own ads: the platform pays the streaming cost, you get the ad value).

What the RIAA seems to rely on is that this case does mention that it assumes the protective measure is effective (interestingly, by describing how it is not usable through the non-developer functions of Mozilla Firefox; maybe that’s a good feature suggestion?), but that may still lose out in a weighting against the interest of the user in getting their own copy. But whether they are right does not matter here at this moment. I would not even assume that the RIAA has checked that this case fully applies to their claim: they don’t need to, they just have to present a 50% non-bullshit case to GitHub. GH is not obliged to check further than that.

                                                  4. 5

                                                    For all the complaints about the US DMCA, generally Europe has some of the harshest and most extreme copyright-regime rules, up to and including the disastrous new mandate for basically everyone to implement a YouTube-style pre-filter on all uploads.

                                                    1. 1

                                                      Is there a similar law or not? I think your comment is a little bit off-topic.

                                                      1. 10

The US DMCA is a huge act. It is all the rules around all things digital. What people usually refer to are DMCA takedowns, which I actually find reasonable, especially as they have a clear procedure. That’s section 512. It actually goes into the details of what platform providers are not liable for (caching, etc.). I’d actually love it if a German law were that direct.

                                                        Broken down, if you are a service provider hosting user content, you are not liable if the following procedure is in place:

                                                        • Someone can send you a “takedown notice”, in which they tell you that they are the copyright holder and that they believe this is their content, which you promptly respond to.
                                                        • As time is of the essence here, you don’t have to check this claim for validity, but instead have to forward this notice to the user, at the same time making their content inaccessible.
                                                        • The user can file a counter-claim, in which case the 2 parties can go to court and will notify you of the results. During this time, the claim is contested and you can continue serving the data.

In theory, fraudulent takedown notices can lead to the other side suing back, but that rarely happens, especially with groups like the RIAA, and that’s where the issue lies.

Now, whether you agree with copyright or not, if you run a public service, you will have to implement a procedure here. And the DMCA procedure is actually straightforward and easy to implement. It’s worth it, as it takes you out of the danger zone.

                                                        https://www.law.cornell.edu/uscode/text/17/512

                                                        Background: I was part of the legal review and setup for crates.io around GDPR and DMCA. I can tell you, both are equally often misinterpreted.

The problem here is that the RIAA does not invoke 512, but instead claims the illegality of the tool outright.

Finally, to be clear: I don’t support a lot of this stuff, but I don’t have the liberty to ignore it. Also, the RIAA is very much in the wrong here, in my opinion. That said, there are reasonable takedown requests. On code hosts, it’s usually someone ripping off the license, renaming the library, and publishing a copy. On other sites, it may be nude pictures someone took of his GF.

                                                        1. 6

Look up the recent EU Copyright Directive (originally known as “Article 13”) for starters. With the US political system mostly deadlocked these days, the copyright lobby has turned its attention – with much success – to Europe, and the regime which will soon be in place there makes the US DMCA system look almost reasonable by comparison.

                                                      2. 1

                                                        circumvent copyright

                                                        not DRM?

Ytdl simply extracts links.

                                                        1. 4

                                                          Well, not that simply. The takedown letter says it circumvents something called

                                                          YouTube’s “rolling cipher”

which was deemed an “effective technical measure” by the (copyright-mafia-adjacent, apparently, and not under US jurisdiction) Hamburg Regional Court.

Indeed, one of the test cases mentioned by the RIAA is described as ‘Test generic use_cipher_signature video (#897)’.

                                                          And apparently what that means is running some JS function (in a tiny interpreter of a tiny subset of JS) to deobfuscate the links.

                                                          This is absolutely not what we would perceive as “real” DRM, but it does technically attempt to ‘manage’ some ‘digital rights’, lol.

                                                          1. 1

                                                            And apparently what that means is running some JS function (in a tiny interpreter of a tiny subset of JS) to deobfuscate the links.

That “some JS function” is running the JavaScript sent by YouTube to the user in response to a request for a video, and it looks to be fetched each time a video is requested by the YouTube extractor. I could see a stronger argument for “circumvention” if they had re-implemented the logic in Python or saved the JavaScript into the repository. As it stands, this seems like a really big stretch.

                                                    1. 7

In scenarios that require more precise optimization of which tools are available, sure, let’s use containerd (for example) instead of Docker on our production machines running Kubernetes.

But, sometimes, “monolithic” tools make sense. I want to use containers in my development workflow, which has a lot of requirements (running, building, inspecting…); what do I need? Just Docker. It’s a no-brainer. And thanks to the OCI specification, that setup can generate images that run in production with a different set of tools.

People tend to call stuff “monolithic” as if it were an obviously bad thing, but such tools exist because, sometimes, it just makes sense to have a set of functionalities tied together in a single package that is easier to reason about than a multitude of different packages with their different versions.

                                                      1. 4

                                                        I would be more sympathetic to this argument if Docker wasn’t a gigantic pain in the ass to use for local development.

                                                        1. 3

                                                          I agree. Docker belongs on the CI server, not on your laptop. (Or in prod, for that matter.)

                                                          1. 1

                                                            how’s that?

                                                            1. 2
                                                              1. It’s slow
                                                              2. It’s a memory hog
                                                              3. It makes every single thing you want to do at least twice as complicated
                                                          2. 2

                                                            But, sometimes, “monolithic” tools make sense

I would even say that it’s the correct approach for innovation, right after R&D and through product differentiation. They went through that quite well. Docker’s problem is no longer an architecture or implementation problem; it’s more that their innovation has evolved into a commodity.

                                                          1. 19

Forgive me if this is gauche, but what is wrong with simply using bashisms? Outside of embedded contexts, where you want everything in Busybox, writing shell scripts in just POSIX shell seems like a tortured dialect. The extensions are legitimately useful, so it’s also a question of why other shells haven’t implemented them.

                                                            Also curious is not wanting to use Shellcheck, even if it’s just for (skippable) CI-side tests. Shellcheck was the first tool that made writing shell scripts tolerable for me.

                                                            1. 4

Some POSIX operating systems don’t come with Bash out of the box, notably the BSDs. As such, Bash is rarely used on them even when it is available. Even MacOS switched its default shell to ZSH.

Generally, though, I think dropping the dependency on Bash increases compatibility across the board and removes an unneeded dependency, both of which are always welcome.

                                                              1. 3

                                                                I don’t mind taking dependencies if it helps you reduce complexity elsewhere, especially if the cost is amortized elsewhere.

                                                                1. 4

                                                                  You’re not the one maintaining thousands of rc scripts or build scripts for a distro/flavor. Or at least I assume you’re not. The tradeoffs communities make usually have a reason and just because you don’t see it or understand it doesn’t mean it’s not there.

                                                                  1. 3

But this is not about thousands of rc scripts. This issue is about one script used during the Go build process. On the majority of systems, it will need a dependency that is immediately satisfied. On a small minority, it will require a single package.

                                                                    1. 4

The GitHub issue appears to be a troll issue, and I agree it doesn’t really matter much in the context of the Golang toolchain. However, I was responding to the thread, which was speaking more generally about dependencies and script maintenance.

                                                                      1. 3

                                                                        Well technically at least 3.

                                                                        But I agree in general. I fail to see why requiring bash is such a huge deal (I have read the comments here as well as the comments on the GitHub issue).

                                                                  2. 2

                                                                    Even MacOS switched its default shell to ZSH.

                                                                    For a while, not any more.

                                                                    1. 1

                                                                      What is it now?

                                                                      1. 1

                                                                        I’m fairly sure it’s back to bash.

                                                                  3. 3

                                                                    One of the comments from that issue:

                                                                    Just had my first experience with Go, which was nice. The one thing that surprised me a little bit was that bash was required to build. On OpenBSD the standard shell is a hardened ksh. bash is avoided everywhere in base so I had to install that, no biggie, but the question I would phrase in the spirit of portability and reducing dependencies is “why require more if technically all you need is POSIX shell”? I’m wondering if such a change would be desirable, aside from the question who’s going to make it happen.

Also, doesn’t macOS use zsh by default now? But probably, it’ll be compatible with the bash scripts used here?

                                                                    1. 3

doesn’t macOS use zsh by default now?

                                                                      Yes, but they’re not removing bash from the standard system AFAIK

                                                                      1. 3

It seems like just another dependency to me, though, and certainly a common enough one that most people will have it and it will run on most people’s systems.

                                                                        1. 2

Yes, that comment is from me. It was my first encounter with the Go community, so after discovering they don’t completely dismiss the idea itself, I thought I’d first get some broader opinions and have a discussion with a community I’m part of before I continue the discussion in the Go thread.

Also, doesn’t macOS use zsh by default now?

Indeed.

                                                                          But probably, it’ll be compatible with the bash scripts used here?

Good question. From a super quick check, I can say it doesn’t outright fail like it does on OpenBSD with ksh.

                                                                        2. 2

                                                                          Forgive me if this is gauche, but what is wrong with simply using bashisms?

Nothing; bash really makes our lives easier, and the extensions are really useful. Some people want to use only POSIX sh, just because it is a “standard”. Some people just hate bash because it is popular.

                                                                          1. 6

                                                                            I’ve used Go for some things, and enjoyed a lot of things about it, but my god is it hilariously bad at parsing JSON. I’m convinced that 50% of the reason Google created protobufs was to avoid it.

                                                                            Personally I would just use gjson in this case, because it’s helpful in a lot of additional situations.
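
To make the pain point concrete, here is a minimal, hypothetical sketch of what handling a field that may arrive as either a single value or an array (the kind of contract mentioned elsewhere in this thread) looks like with Go’s encoding/json; the Item type and field names are invented for illustration:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// StringOrList accepts both a bare string and an array of strings.
type StringOrList []string

func (s *StringOrList) UnmarshalJSON(data []byte) error {
	// Try the array form first.
	var list []string
	if err := json.Unmarshal(data, &list); err == nil {
		*s = list
		return nil
	}
	// Fall back to the single-value form.
	var one string
	if err := json.Unmarshal(data, &one); err != nil {
		return err
	}
	*s = StringOrList{one}
	return nil
}

// Item is a hypothetical payload whose "tags" field is sometimes a string
// and sometimes an array of strings.
type Item struct {
	Name string       `json:"name"`
	Tags StringOrList `json:"tags"`
}

func main() {
	for _, doc := range []string{`{"name":"a","tags":"x"}`, `{"name":"b","tags":["x","y"]}`} {
		var it Item
		if err := json.Unmarshal([]byte(doc), &it); err != nil {
			panic(err)
		}
		fmt.Printf("%+v\n", it)
	}
}
```

(gjson sidesteps that custom-unmarshaller boilerplate by letting you query paths in the raw document directly, which is why it’s handy in cases like this.)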

                                                                            1. 6

Well, part of this is JSON being hilariously bad as a schemaless format, too. As it happens, protobuf fixes that as well.

                                                                              1. 1

                                                                                At the cost of the openness of HTTP APIs.

Personally I’d still rather deal with brain-dead API contracts like this (a single item or an array, indeed!) than lose the openness that HTTP APIs afford.

                                                                              2. 2

                                                                                You’re not the only one; half the Go questions on Stack Overflow seem to be about parsing JSON. Okay, maybe not half, but certainly a lot.

                                                                                1. 1

If they actually do generics for 2.0, that’s going to go a long way toward fixing this pain point. Looking forward to it.

                                                                              1. 7

                                                                                I think you can read a thousand blog posts praising the merits of Vim editing, but you won’t really get how powerful it is until you see an advanced user working with it. It’s like a superpower.

                                                                                1. 17

                                                                                  Ask yourself: why is documentation of internal-facing decisions like what software licenses to use being published in a public place? The answer is straightforward: to influence the public. This is propaganda.

The site describes which open source projects Google shares, explains their approach, and sets out what contributors should expect. It’s marketing, but marketing by giving away a few hundred useful codebases seems pretty unobjectionable to me. Publicly explaining a policy that applies to all the projects in one place, so they can link to it rather than rehash a legal point every time it comes up, seems like a normal use of documentation. Calling it propaganda is uncharitable and unjustified.

                                                                                  (Also, odd title: the article only addresses a single anti-AGPL argument.)

                                                                                  1. 6

                                                                                    Calling it propaganda is uncharitable and unjustified.

                                                                                    The distinction between marketing and propaganda is pretty subjective.

                                                                                  1. 6

                                                                                    The only times I’ve gone through the trouble of mocking out someone else’s interface are cases where the library authors did not provide any testing capability, or wanted me to run a whole damn simulator process in addition to my test code. Google in particular has been guilty of this.

                                                                                    1. 3

Yes, I wrote an alternative implementation of their message bus once because the simulator barely worked. It was some fat Java app you had to run in the background, but managing its state was really hard, which made it tricky to use for testing. Sadly that code never got open sourced :(

                                                                                    1. 6

                                                                                      While I think this is a great article about the state of Rust’s ecosystem for writing HTTP backend, the name of the article seems to be quite a bit off.

As mentioned, this is about HTTP backends, but even there it’s not really about the language, as you’ll be able to write a Flask clone in most languages. Also, it depends a lot on use cases. I mean, there are C++ HTTP servers for reasons, even though I do not think it is generally the language that people would recommend for HTTP backends.

                                                                                      Again, great article, just not quite what I expected by the title.

                                                                                      1. 2

                                                                                        as you’ll be able to write a Flask clone in most languages

Flask has a surprising amount of functionality, and most new languages in fact can’t write a Flask clone. The very article discusses the Rust ecosystem’s inability to replicate Flask’s file upload functionality.

                                                                                        1. 1

Okay, that part was oversimplified on my side, and I think it also very much depends on what you consider a clone. Most languages out there have at least one “Flask-inspired” HTTP framework.

                                                                                          The very article also links to an issue which links to a crate, which does something very similar.

                                                                                          However this also wasn’t the main point of my comment. Just wanted to mention that this is about HTTP backends and not about servers at large, which I (wrongly) expected. Instead it’s about the HTTP ecosystem, like the file upload functionality of a web framework.

                                                                                          1. 4

Most languages out there have at least one “Flask-inspired” HTTP framework

                                                                                            Sinatra predates Flask by a few years, so one could say Flask is Sinatra-inspired. But I think the real truth is that they both map code to URL resources in as minimal a fashion as possible, so they all basically look the same because they’re solving the same problems.

                                                                                      1. 4

                                                                                        I’ve got mixed feelings about this. It’s yet another syntax for Ruby, which is already fairly complicated. It is very close to duplicating pre-existing functionality in the form of Struct and OStruct - neither of which I actually like using because there’s no built-in immutability. I don’t know that this proposal is HARMFUL, but neither do I know that it is a benefit. So, “meh”

                                                                                        1. 3

                                                                                          I’ve been using dry-struct for this reason, in concert with dry-types, and I really like it.

                                                                                          1. 3

                                                                                            I like using Structs in Ruby, especially as a form of documentation and, as the author said, ensuring that a key passed in doesn’t silently fail. It’s much easier to see the options available for a bit of code where the struct is defined in one place instead of hunting for accessors of a hash.

                                                                                            That being said, I don’t like this syntax at all. It just adds yet another way of doing something when, I feel, the current way is sufficient and easy enough already.

                                                                                          1. 4

                                                                                            Linking my comments from the last time this book came up.

                                                                                            I’ll agree that it’s certainly better than Clean Code, but I wouldn’t hand this to a new engineer without some caveats.