Threads for shazow

  1. 2

    Really excited about all of the cool stuff people are doing with neovim plugins. I just wish it were easier to onboard onto new plugins!

    1. 1

      Do you mean learning how to build new plugins?

      I’ve been spending a bit of time learning this myself, this thread was a good starting point: https://www.reddit.com/r/neovim/comments/wxbnk8/whats_the_best_practice_for_developing_neovim/

      Otherwise my approach is to find particularly simple plugins and read their code. :) I’m mainly focusing on lua-based plugins. It’s certainly not any harder than vimscript plugins, and it’s much easier in some cases.

    1. 12

      Once upon a time, we invented regular expressions as a comfortable midpoint between expressive power and efficiency on 1970s computers. We invented tools to search for regexes (grep), tools to automatically edit streams based on regexes (sed), programming languages designed to run code when a regex was matched in the input stream (awk) and even interactive editors with regexes as first-class UI primitives (vi).

      Here in the 21st century, text processing has advanced a long way. In particular, parsing context-free grammars no longer requires days of swearing and wrestling with shift/reduce errors; you can write grammars nearly as succinctly as regexes, and more readably. Using packrat parsing or Earley parsing or some other modern algorithm, I’d love to have a suite of text-processing tools like grep, sed, awk and vi based on grammars rather than regexes.

      1. 4

        you can write grammars nearly as succinctly as regexes, and more readably

        Do you have any links or examples? I struggle at this!

        1. 3

          In most cases, I end up doing text processing using (neo)vim macros rather than regexp. It can run surprisingly fast even on large datasets (in headless mode or with lazyredraw).

          Feels like a very modern/ergonomic/incremental/less abstract approach compared to regular expressions.

          I do like the premise of clearly defined grammars, however! Could compound nicely with macros, too.

          Made me think of this recent Structural Search & Replace plugin that uses treesitter (basically grammars under the hood).

          Now that I think of it, treesitter is essentially a database of grammars that can be used for data annotating and processing. 🙃 I guess the next step is to have a more on-the-fly way to describe and use these things.

          1. 2

            semgrep?

            1. 2

              I kind of got that feeling back when I played with turtle:

              You can see some examples in the comment here: https://github.com/Gabriella439/turtle/blob/main/src/Turtle/Pattern.hs

              hackage seems down right now, but there is a tutorial there: https://hackage.haskell.org/package/turtle/docs/Turtle-Tutorial.html

              1. 1

                That’s interesting. I think you’d be looking at matching/querying an AST-like structure?

                For matching elements of specific kinds of tree-like structure, we have jq and xpath.

                Is that the kind of thing you mean (but perhaps for general grammars?)

                If not, how do these differ from your thoughts/vision?

              1. 19

                  A linker is something of a special case, as it’s mostly used in ways where business use isn’t hindered by the (A)GPL at all. Typically the AGPL is great at getting companies to consider paying for a license.

                IANAL, but I think the author is making the right decision. I love FLOSS but I just don’t know of a FLOSS license that would work for a case like this.

                Should there be a more copylefty license that worked in a case like this? Is it even reasonably possible?

                1. 11

                  AGPL is also exceptionally good at making companies not consider the software at all, eg Google: https://opensource.google/documentation/reference/using/agpl-policy

                  1. 18

                    Yes, that’s the point. Google, like many others, will pay for an alternative license if they want to use the project:

                    In some cases, we may have alternative licenses available for AGPL licensed code.

                      Of course the software needs to be exceptionally good to be considered as an exception. I believe mold is.

                    1. 4

                      Google, like many others, will pay for an alternative license if they want to use the project

                        Are there actually many cases of companies like Google paying for alternative licenses? When I was working at Google, it was rare, and the cases I saw were old legacy projects that nobody wanted to touch. For new projects, it was an extremely uphill battle to even start procurement for a private licensing agreement. In practice, it’s often easier to just rewrite the [subset of the needed] software from scratch.

                    2. 3

                      That’s why the typical practice is for the software owner to license it to the business under a commercial license for a fee.

                    3. 3

                      Yes, if you’re trying to get someone to “pay for a license” you’re obviously not interested in open source, so I guess going nonfree is no surprise…

                      1. 9

                        That’s not obvious at all, plenty of people in open source sell licenses for their software, e.g. Red Hat Enterprise Linux, Oracle.

                        1. 2

                          I wouldn’t call either of RHEL or Oracle particularly “interested in open source”. They both do release some open source stuff, and also some not, and are also so huge and have such a mix of business models to go along with it that it’s not really comparable here.

                          That said, I was replying to my parent comment, not to the post itself. The mold maintainer here does not seem to have been attempting to sell licenses before now, and is considering it for the future (and looking at BSL so the code is still open source in the end).

                          1. 6

                            Well, RH sponsors development of a ton of things, from systemd to podman, so that would be an incorrect assumption. Also Oracle sponsors OpenJDK and MySQL so again, incorrect.

                            For the mold author, as someone else said, AGPL is a difficult license because of the software itself. So can’t blame them for looking for other options.

                            1. 1

                              For what it’s worth, they’ve explicitly been trying to sell licenses to businesses since at least May: https://github.com/rui314/mold/commit/9fbd71ec6bb315c6fd4bfefbfcde821a4737b9e0

                              1. 1

                                Interesting that in the comments there they had the idea to maybe weaken to an MIT license but are now considering the opposite.

                        2. 1

                          I’m not well versed in the legal world, but how does it work with the GPL? Can one build non-GPL/proprietary software using a GPL compiler (eg. GCC) and link it with a GPL linker (eg. GNU ld) and keep their license? Is it because of the GPL exception? An AGPL-licensed linker does not have this kind of exception, so a special license grant is required?

                          1. 1

                            The GCC runtime library exception is required because runtime support code is copied into the final binary. The same is not true of a linker.

                            1. 1

                              So if GNU ld is used for linking, the project must use the GPL? Assuming its binary is distributed.

                              1. 6

                                No, the linker does not need the license exception. GCC both inserts fragments of code (for example, for things like population count or count leading zeros on architectures without instructions for these operations) and also inserts calls to, and links, several libraries that are part of the GCC codebase. As such, without the exception, everything spat out by GCC would need to comply with the GPL (it would not be GPL’d, it would simply need to convey the same rights as the GPL). In contrast, the linker just copies and pastes things from the input into the output and executes a small amount of code from the inputs to resolve relocations. It does not embed any of itself in the output, and so it does not need the exception (in the same way EMACS does not need a GPL exception to prevent everything that you write in it from being GPL’d).

                        1. 6

                          The challenge with a distributed namespace is Zooko’s Triangle.

                            I found this to be an interesting read on the high-level challenges of this topic: https://www.varunsrinivasan.com/2022/01/11/sufficient-decentralization-for-social-networks

                            It inspired the Farcaster Protocol, which is focused on identity vs messaging; it has a lot of elements similar to Keybase’s design, too.

                            There’s also the Lens Protocol, which is more focused on the identity vs social graph aspect (and its ownership), and less on the messaging.

                          1. 5

                            We use nix very conservatively. We only use it for managing local developer environments, ie. build toolchain and other cli tools (ansible, terraform, etc). That has worked out amazingly for us.

                              I’m in general a lot more skeptical about nix for production. You clearly don’t get the kind of support you would from, for example, Ubuntu’s packages. There’s no “LTS” as far as I know for nix, merely the stable NixOS release. Though, that being said, nixpkgs tends to be way ahead of other package managers’ versions of software.

                              We’ve started messing around with using nixpkgs’ docker tools for some web projects. That would be the first time that we’d be using nix in our production environment.

                            In general, it’s really easy to go overboard with nix and start using it really inappropriately. But if you use some discipline, it can be an amazing tool. It’s completely solved our python problems related to installing ansible. That’s invaluable.

                            1. 6

                              LTS is something that comes up regularly, and I sincerely don’t know if it should exist or not.

                                On one hand, it seems like it’s something that corporations want to have. Looking at the reasons more deeply than “because that’s what other distros do”, it seems to be a mixed bag of reasons.

                              Upgrades are much less risky in NixOS. Most issues are generally caught at eval or build time. And if it fails at runtime, it’s easy to roll back. Something that was a milestone on another distro becomes a ticket.

                              The company has to pay the upgrade price more often, but it’s also a benefit to them not to be stuck behind old versions. In that regard, it might be possible that having NixOS LTS releases becomes a disservice to corporations.

                              1. 4

                                  This is a super interesting topic to me. I can give you some more context on the “corporations want it” part. We’re a federally regulated business. I need to be able to say to regulators, “yes, when this CVE comes out, I’ll be able to upgrade this package in no time.” Often that implies that I need to be able to point at another company that violated their SLA if they didn’t upgrade the package (eg. Canonical for Ubuntu LTS). I’m very confident in practice that nixpkgs will often get the pkg upgrades faster than Canonical can bust them out, at least for unstable, but it’s really that legal infrastructure that I need. There are companies like Tweag and such that provide support packages? But it still seems really shaky to me.

                                I hope that provides some more insight into what’s going on. Honestly, we’re still exploring it, and maybe it’s a solved problem, but I just don’t know.

                                  Also, things like rollbacks would imply rolling back security updates. If everything gets changed with a rollback, then you’re taking away important changes. I often want to roll back just the application code, but not dependent packages. This is pretty straightforward to set up with nix, afaik, but it’s still non-trivial.

                                1. 3

                                  Can you point to the capability and reality of applying overrides and patches as evidence of rapid response capability?

                                  1. 1

                                    Good question. That certainly helps! But we’d be doing everything ourselves. Also, you can eventually accumulate a lot of overrides/overlays such that it’s quite hairy to mess with stuff.

                                    1. 2

                                      It’s fairly ergonomic to pull some specific packages from a different release channel, this site gives convenient copypasta: https://lazamar.co.uk/nix-versions/

                                        One approach I’ve used in the past is to maintain a separate generated TOML/JSON file with overrides, and pull that in from nix. The overrides file can be managed by some other script/process. In this particular case, I’d mainly want an expiration time so an override gets removed once it’s no longer valid.

                                      Also you might like this recent post about all the various ways to override a package, with your own patches and such: https://bobvanderlinden.me/customizing-packages-in-nix/

                                      And one more note that might bring comfort: You probably know this, but the only distinction between unstable and stable nixpkgs channels is that unstable is rolling releases but stable is discrete releases. Aside from that, I don’t think there’s any increased “stability” guarantee – they both pass the same automated testing suites. (Someone please correct me if I’m mistaken.)

                                2. 3

                                    If a company says “we support rhel 7” (so effectively lts), that’s what they support, and they can stay on those versions of dependencies for ages. It doesn’t matter if the upgrades are risky or not. It’s not a technical issue.

                                  1. 2

                                    On the subject of LTS: Upgrades can be less risky in Nix but I have been burned several times by now when upstream introduces a bug in a release which I proceed to hit the next day when installing a new system. I hate rolling-release systems as a consequence.

                                    I work in robotics, so for my work the benefit of riding the bleeding edge of software is pretty low, the potential costs of failure are very high (expensive hardware destroyed), and the cost of upgrading is also high. Even if the actual upgrade doesn’t take much work, there’s lots of integration, simulation, and real-world testing that has to be repeated. We also end up having to use hardware that has fairly limited software support, not all of it open-source, so in that case we are really stuck with the particular OS release that a vendor provides. (Fuck you, NVidia.)

                                    That said, we could potentially still benefit from Nix quite a lot, and I should play with it more someday. But we’d still end up essentially cutting our own LTS releases.

                                1. 8

                                    I really dislike the cattle vs. pets analogy, because it reinforces a speciesist world view that sadly is very prevalent in almost all parts of the world.

                                  1. 17

                                      I like it because it refers to a perspective shared among those who see it. Even those who disagree understand the meanings behind it.

                                    1. 10

                                      Ehhhhhh… I mostly agree in this instance, but this is unfortunately not a great line of argument to take in general. You can apply the argument to any kind of human discrimination in history, and it’s just as true. If you describe something as “X for white’s, not spic’s” people will generally understand the meanings behind it, but that doesn’t mean you couldn’t do better.

                                    2. 7

                                      You could have a pet cow.

                                      1. 6

                                        What would you rather use instead?

                                        1. 10

                                          Plants vs. crops, maybe?

                                          1. 7

                                            roses and potatoes

                                            1. 7

                                              Calling my desktop a rose seems unfair to roses.

                                            2. 4

                                              I like that, maybe even garden vs farm?

                                              1. 2

                                                I really like this

                                              2. 3

                                                Reproducible and non-reproducible configurations. The key feature of cattle that we are trying to isolate is the ability to spawn new machines with a known-good configuration on a reliable basis; in other words, cattle are reproducible.

                                                1. 2

                                                I really like that term. It also doesn’t take size into account so much, and it leaves the question of how these goals are achieved wide open.

                                                  1. 1

                                                    I mean, the very point of this system is to make reproducibility easy…

                                                  2. 1

                                                    Clusters and individual machines?

                                                  3. 5

                                                Other than that, it’s a really bad analogy in general. In most situations it’s just “now your cluster is the pet”. It’s also bad because it somehow makes it sound like keeping cattle is easier than keeping a pet.

                                                On top of that, it’s in the same vein as tons of other lines meant to shut people up before arguing, so you don’t have to know what you are talking about.

                                                    A similar one is that complaining about the term serverless is like complaining about horseless carts, when in reality it’s more like calling a cab “carless”.

                                                I really think those statements and analogies do the industry a huge disservice and that they should be abandoned altogether. Not because analogies are always bad (they are pretty much always imprecise, though), but because they don’t even serve the purpose of analogies, which is explaining things well. We have good analogies in IT that mostly work, from cryptographic keys to files and directories (or folders, if you are into that). What they have in common is that they explain something better than any technical terms. Cattle and pets don’t. They at best make bold claims about how things work, but tend to easily break no matter what direction you go. Think about protecting your cattle or your pet: how does the analogy even work in terms of security, which is a big part? For files, again, it works. Putting a file into the trash bin, and even retrieving it or emptying the trash, all works pretty well. Also protecting your file works well with analogies.

                                                I think the difference is that some of these “bad” analogies are mostly being used for marketing and, like I mentioned, to bring points across when you lack good arguments.

                                                    1. 4

                                                      Backyard vegetable patch vs industrial farming.

                                                    1. 2

                                                      One more, which is pretty common:

                                                      type ApplicationLister struct {
                                                      	Limit  int
                                                      	Offset int
                                                      	Owner  string
                                                      }
                                                      
                                                      // Value receiver, so List can be called directly on a composite literal
                                                      // like ApplicationLister{...}.List() below.
                                                      func (a ApplicationLister) List() []Application { ... }
                                                      
                                                      res := ApplicationLister{Limit: 5, Offset: 0}.List()
                                                      

                                                    This is common with server configs, where the struct is the server instance itself (with additional hidden fields) rather than a separate config type/instance, and the configuration is read from its public fields on .Serve() or equivalent.

                                                      It’s also common for packages to offer default wrappers, like

                                                      func List() []Application {
                                                      	return ApplicationLister{}.List()
                                                      }
                                                      

                                                      So we can call pkg.List() or pkg.ApplicationLister{...}.List()

                                                      1. 12

                                                        Free software with restrictions on use isn’t free software. Aside from it being really hard to identify what does/doesn’t do harm (in essence making the Hippocratic license unenforceable - is such software prohibited in United Nations humanitarian operations, since they are staffed by military personnel?), trying to put our current morals into license form just doesn’t work.

                                                        50 years ago, homosexual and transgender people were considered harmful, and would be prohibited under such a license, and at the time people thought this was right, moral, and just. I think it would be arrogant to pretend that we’re at the pinnacle of moral judgement now.

                                                        So let’s leave licenses open to all use, rather than ruling out behaviour we think is immoral now at the cost of the progressive future.

                                                        1. 1

                                                          how do you feel about a code of conduct to enforce norms in a community? the same way, or differently? why?

                                                          1. 7

                                                            Codes of conduct describe practices for how people develop the software; they don’t change the way free software can be used by users, or who is even allowed to be a user.

                                                            If you want to set rules on how you work as a team, that’s just fine, but that’s different from then prohibiting certain uses by people or organisations outside your team because you don’t agree with those uses.

                                                            To send a question back at you, how do you view the morality of use for things like evacuation and disaster relief efforts that involve the military (for example, in the UK a lot of COVID support was done by the military), which under the Hippocratic license would be, at first blush, prohibited?

                                                            1. 1

                                                              If you want to set rules on how you work as a team, that’s just fine, but that’s different from then prohibiting certain uses by people or organisations outside your team because you don’t agree with those uses.

                                                              i’d say it’s exactly the same; setting the code of conduct enforces who is in and who is out; same with the license.

                                                              how do you view the morality … which under the Hippocratic license would be, at first blush, prohibited?

                                                              i’d say it’s within the spirit of acceptable usage; so it would be fine.

                                                              1. 7

                                                                Unfortunately licenses don’t operate on spirits or hopes, and the Hippocratic license says:

                                                                3.1. The Licensee SHALL NOT, whether directly or indirectly, through agents or assigns:
                                                                3.1.20. Military Activities: Be an entity or a representative, agent, affiliate, successor, attorney, or assign of an entity which conducts military activities;

                                                                This clearly sets out that if you, the licensee, are a representative of an org that conducts military activities, you cannot use the software. It’s clear cut and dry.

                                                                But the point that is being hit on here, by both of us, is that there is clearly nuance in use of the software. The intent of the relicensing is to limit it to peaceful and progressive, humanitarian use. The problem is that the legal wording of the license does actually prohibit this use if you happen to be from the wrong team while pursuing those progressive/humanitarian goals. The software could not be used in efforts such as military hospitals rapidly deployed to disaster zones, military helicopter search and rescue teams, the coast guard, etc.

                                                                But, I promise I’m not trying to say “military good” - the underlying point is that software ends up being used in all sorts of delicately nuanced and varied situations that we cannot possibly predict, and so by trying to suggest that we can predict all these nuanced cases ahead of time we will either be overly restrictive, or not restrictive enough. Given that the nature of progress is to improve upon ourselves, I would rather be less restrictive to allow for uses I couldn’t have predicted, rather than stifle them because we are relatively backwards compared to our progressive peers in the future.

                                                                1. 1

                                                                  licenses, like all legal agreements, are merely systems through which the world is interpreted; i.e. it’s the spirit of the intention.

                                                                  i’m not saying the hippocratic license is perfectly worded; and of course i didn’t design it; but it’s certainly possible to have different interpretations of a piece of legal writing.

                                                                  i think i agree with you that i don’t want to be overly specific, and i’d probably agree that the hippocratic license is a bit too specific; so i’m open to alternatives (hence this conversation)

                                                                  i’d hope there’s a middle ground between MIT and the Hippocratic license; and i think i’m arguing that i’d prefer to err towards hippocratic vs MIT, because at least that enables me to say something about what i want.

                                                                  1. 2

                                                                    certainly possible to have different interpretations of a piece of legal writing

                                                                    This is actually what lawyers try very hard to remove. They like things that are clear and settled.

                                                                    1. 2

                                                                      This is actually what lawyers try very hard to remove. They like things that are clear and settled.

                                                                      for what it’s worth, while i think this is a side issue to the central point - namely, how can we as programmers have some say on how software is used; and in particular try and push our industry towards positive applications of software; or at least not planet-destroying usages - i don’t think you’re right at all.

                                                                      law is all about interpreting the essence in certain settings; so while i’m sure the hippocratic license doesn’t get it perfect; i’m sure there is a way to make a best effort, that does not necessitate total prediction of the future.

                                                                      1. 3

                                                                        Licenses based in morality are guaranteed to be restrictive, since morality itself is relative. Same with “positive applications”, or “not planet-destroying usages” - relative topics, and licenses based on these are bound to be restrictive. As an example, software that cannot be used for deforestation applications (say, controlling the mechanical saw) cannot be used in locations where the primary source of fuel is wood and no alternative exists.

                                                                        In my experience, it is a futile effort to try and enforce some arbitrary definition of “positive applications”, “not planet-destroying” etc., without also restricting valid, legitimate, and moral use (moral as per the license author, who actually wishes to allow moral use).

                                                                2. 3

                                                                  A project with a proper FOSS license and a highly restrictive CoC can still be legitimately forked into a community with a different or even contradicting CoC.

                                                                  A project with a restrictive license can’t be legitimately forked away into a contradicting license.

                                                                  1. 1

                                                                    indeed!

                                                                    and that’s exactly what i’m going for :)

                                                                    1. 4

                                                                      My point is that this is what makes it not be “exactly the same” that you responded above. :)

                                                                      A license ties your moral judgements to the code, a CoC ties your moral judgements to your community.

                                                                      I understand that you want to tie moral judgements to code, but that’s where the disagreement lies. I, and I suspect other people you’re debating here, believe that we should be free to legitimately fork away from moral judgements. It’s less of a debate of whether the moral judgements are objectively correct or absolute or pious or whatever.

                                                                      1. 1

                                                                        i see

                                                                        i suppose what i’m getting at is, at what point do we as a tech community take a stand against various injustices? one way is through the companies we work for, and the communities we support. but what about the open-source work we do? are we doomed to just always be left open to abuse and misuse; or is there some avenue by which we can exercise personal judgement there as well? clearly there’s some level on which people are “okay” with this (i.e. GPL licenses, etc; which maybe while somewhat widely frustrating, also get traction). my interest lies in exploring that domain where we’re concerned with social good.

                                                                        it seems a shame to not at least attempt to explore this space, given how pervasive software is.

                                                                        1. 1

                                                                          I don’t think there’s much disagreement about the existence of injustices (no matter that our definitions of injustice change over time) and the need to take actions against them.

                                                                          The disagreement is more about whether action should be taken at all layers and aspects of life/society/technology, or whether there are some places where it’s more appropriate to encode restrictions vs others where it’s less appropriate.

                                                                          In my view, the community code of conduct is a very appropriate avenue for this. We can create or think of other avenues, too! I don’t feel that the code license is a good fit, for many reasons already expressed elsewhere in this debate. :)

                                                                          I understand the urge to be absolute and complete in sanctioning people we disagree with, and maybe it’s a political axis spectrum thing. I tend to land more on sanctions through voluntary relations (deplatforming, refusing to trade, etc) rather than through mechanical means (restricting access to technology, safety, food, oxygen, whatever extreme we can imagine). I’m sure it’s a varying spectrum for many people. I’ve seen some people express this as “higher level” (social) vs “lower level” (physical).

                                                          1. 8

                                                            This debate is like the ultimate bike shedding to me. You can invent as many theoretical issues as you want; the name you give to your receiver doesn’t matter at all in practice. Just name it however you want. No one will make assumptions based on the name you gave.

                                                            1. 5

                                                              In theory it might not matter, but I wrote this article 6 years ago to demonstrate that in practice it does make a difference, with outlined scenarios of when it matters. This has still been true to this day in my experience.

                                                              I’m not making the claim that your program won’t compile if you name your variables whatever you want, but people definitely make assumptions about variables depending on what they’re named. Regardless, the main failure mode is during refactors.

                                                              For what it’s worth, I’ve found this to be particularly true with Go due to the way receivers are designed and the data struct-centric nature of the language. This observation doesn’t necessarily hold in other languages.

                                                              1. 3

                                                                I can vouch that Rust’s self does make refactors more annoying! There’s a little-known rust-analyzer feature where you can just rename self (thus promoting the method to a function), but even with automation the extra noise at use sites is annoying.

                                                              2. 3

                                                                Honestly I think this is the best attitude to this. None of the issues I’ve seen outlined around consistent receiver names have ever affected my teams, past and present. It’s one of those things you see, realise for a couple of seconds, and then never think about again in the X years you’ll work on the codebase. Consistency is useful.

                                                                1. 2

                                                                  Quote from the article:

                                                                  By naming receivers as this or self, we’re actually making receivers special in a way that is counter-productive. Imagine naming every local variable with the same name, all the time, regardless of what it represents? A scary thought.

                                                                  Yes, consistency is useful. In Go, it’s valuable to treat receivers with the same consistency we treat scoped variables.
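
                                                                  To make that concrete, here’s a quick hypothetical sketch (the Queue type is made up, not from the article): the receiver is named like any other short scoped variable, so the code still reads naturally if a method body later moves into a plain function during a refactor.

                                                                    type Queue struct {
                                                                    	items []string
                                                                    }
                                                                    
                                                                    // Receiver named like a normal scoped variable: short and derived from the type.
                                                                    func (q *Queue) Push(item string) {
                                                                    	q.items = append(q.items, item)
                                                                    }
                                                                    
                                                                    // If this body later moves into a plain function during a refactor,
                                                                    // q still reads naturally; a body full of `this` or `self` would not.
                                                                    func drain(q *Queue) []string {
                                                                    	drained := q.items
                                                                    	q.items = nil
                                                                    	return drained
                                                                    }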

                                                              1. 5

                                                                I was using the internet in the early-to-mid-2000s era before centralized social media got big, and I don’t remember anyone using the term “Web 2.0” to talk about PHPBB forums and blogs with comments, I only remember people using it to talk about Facebook and Twitter and Youtube. I actually remember rather distinctly one time in college (2009 or so) when I was talking to this guy who was trying to hire a “Web 2.0 manager” for a student group. I thought he wanted a web designer to make a website for the group, which was a skillset I had. But in the process of talking to him about the job, I learned that what he had in mind was basically a social media manager - someone who would write posts and respond to comments on nascent Twitter/Facebook/Youtube - which wasn’t a thing I wanted to do, and I didn’t end up working with that student group.

                                                                But that’s just a debate over the boundaries of nomenclature. The eras of people interacting with the web this guy recognizes are real, regardless of which one you assign the label “Web 2.0” to.

                                                                I’m way less hostile to cryptocurrencies than this guy is (quite the opposite, in fact), but I do think that a lot of the currently-media-hyped web3 technologies basically do have the problems outlined in this post, won’t actually make web-based social media meaningfully decentralized, and aren’t trying to do that or for the most part claiming to do that. People who talk about implementing Minecraft and Fortnite items as NFTs aren’t idealistic cypherpunks - if they were, they would be talking about how to make it possible to play these games in ways that Microsoft and Epic disapprove of - they’re entrepreneurs who are amoral on the topic of decentralization.

                                                                There are ways in which smart contracts on Ethereum and similar blockchains can contribute to meaningful decentralization, but they’re not generally as mass-marketable as Fortnite NFTs, so they get talked about less in the media and contribute less to mindshare when people imagine what things constitute “web3”.

                                                                1. 18

                                                                    To me, Flickr is the quintessential “Web 2.0” example site: user-generated and user-organized content, RSS for easy integration back into your own site or for people to just subscribe for updates. And various factors led to Flickr absolutely getting its lunch eaten by Instagram, a service so hostile to the Web that I struggle to think of appropriately hyperbolic terms to describe it (see, for example, “link in bio”).

                                                                  “Web3” appears to consist of a combination of “everything everywhere must be financialized in ways users can never ever escape” and codified dual-tier society (if you’re wealthy and/or well-connected and something goes wrong, the system will bail you out or reverse the bad thing for you; if you’re not wealthy and/or well-connected, “code is law” and whatever you lost is irreversibly lost).

                                                                  1. 3

                                                                    “Web3” appears to consist of a combination of “everything everywhere must be financialized in ways users can never ever escape” and codified dual-tier society

                                                                    I think this describes the current state of Web 2.0, no?

                                                                    Everyone screaming “if you’re not paying then you’re the product” for decades as a bizarre rallying cry, while even paying users are squeezed in every way possible–why would a for-profit company leave money on the table just because you’ve already given them some?

                                                                    Twitter Checkmark users getting privileges to only see tweets from other people blessed with a checkmark, literally creating a dual-tier perception of reality.

                                                                    Meanwhile all of your data is owned and resold by the intermediaries, the rules are changed as they see fit, open APIs are closed off, free services are slowly migrated to paid after competitors die out.

                                                                    1. 3

                                                                      I don’t really think of Twitter, which was originally an SMS service, as “Web 2.0”; I think of it as the sort of thing that arose as “Web 2.0” was being killed off.

                                                                        Again, to me “Web 2.0” is mostly about user-generated content and organization, and offering the ability to integrate with other things and produce mashups via APIs, RSS, etc. In other words, things that were truly of the Web. It was a very brief period, and then it was over. So whenever you start on a “well what about Twitter/Facebook/Instagram/etc.” tangent, just know that I describe those as the things that came after and largely killed “Web 2.0”.

                                                                      Meanwhile “Web3” financialization is so extreme that it often feels like they’d charge me for breathing if they could (and of course would do so in a unique-per-service token that has to be bought up-front), and ultimately everything ends up centralized onto a handful of big exchanges and intermediaries. Which probably doesn’t matter because “DeFi” seems to consist entirely of a bunch of entities all loaning tokens to each other and using those loans as “backing” to justify minting more tokens that they loan to each other to claim as “backing” to justify minting more tokens… in an inflationary cycle that inevitably pops when one of them crashes. As has been going on for a little while now.

                                                                      1. 1

                                                                        I don’t have much disagreement about the idealized version of “Web 2.0” as it was once upon a time in our minds (I was there too, I remember being very optimistic! I built many API mashups that were inevitably killed), but clearly that’s not what it is today, right? Or do we need a different label just so we can talk about the present state of things without tarnishing our nostalgia for Web 2.0?

                                                                        It’s challenging talking to different groups about what Web2.0/Web3 mean to them, some people take the position that “Web 2.0 is just the good parts, and everything else just doesn’t fit under that label” while also taking the position of “Web3 is just the bad parts”, or the reverse.

                                                                        I also feel that a lot of the conflict is that Thought Leaders (like cdixon) writing up a thesis on what Web3 is comes off as “we just stole all the best original ideals of The Internet/Web 2.0/etc and rebranded as Web3” – which is not entirely untrue.

                                                                        1. 1

                                                                          I have yet to see any real use case for “web3” other than “It’s got what VCs want! It’s got monetization!”

                                                                          “Play to earn” turns out to be all the downsides of old-school gold farming and none of the upside of having an interesting game associated with it.

                                                                          “DeFi” is just a game of A lending to B lending to C lending to A in the inflationary loop I already described.

                                                                          And the whole thing keeps centralizing onto a handful of major players anyway.

                                                                          Anyway, if you want to continue having a discussion about “web3” I’ll begin charging you per character to post comments to me.

                                                                  2. 2

                                                                    I don’t remember anyone using the term “Web 2.0” to talk about PHPBB forums and blogs with comments

                                                                    Not those specifically, because those were using Web 1 tech (form POSTs and page reloading.) Web 2.0 was all about using JS, “dynamic HTML” (DOM manipulation) and XMLHTTPRequest. Discourse, for example, is an über-Web2 app even though it came about after the hype cycle.

                                                                    The Web2 hype was not universal so I’m not surprised everyone didn’t see it the same way. Its epicenter was the ‘blogosphere’ [sorry] and the WIRED / O’Reilly / etc media scene.

                                                                    1. 1

                                                                      We all experienced the logos and the missing vowels, though.

                                                                    2. 1

                                                                      People who talk about implementing Minecraft and Fortnite items as NFTs aren’t idealistic cypherpunks - if they were, they would be talking about how to make it possible to play these games in ways that Microsoft and Epic disapprove of - they’re entrepreneurs who are amoral on the topic of decentralization.

                                                                      I was ideating with my buddies the other day about how sweet it would be if you could sell your DLC when you were done. Obviously I doubt those NFT crypto-bros would ever dare…

                                                                      It sucks how not only is Blockchain super lame/ugly, it’s also bereft of imagination or courage.

                                                                    1. 26

                                                                      I find the complaints about Go sort of tedious. What is the difference between using go vet to statically catch errors and using the compiler to statically catch errors? For some reason, the author finds the first unacceptable but the second laudable, but practically speaking, why would I care? I write Go programs by stubbing stuff to the point that I can write a test, and then the tests automatically invoke both the compiler and go vet. Whether an error is caught by the one or the other is of theoretical interest only.

                                                                      Also, the premise of the article is that the compiler rejecting programs is good, but then the author complains that the compiler rejects programs that confuse uint64 with int.

                                                                      In general, the article is good and informative, but the anti-Go commentary is pretty tedious. The author is actually fairly kind to JavaScript (which is good!), but doesn’t have the same sense of “these design decisions make sense for a particular niche” when it comes to Go.

                                                                      1. 35

                                                                        What is the difference between using go vet to statically catch errors and using the compiler to statically catch errors?

                                                                            A big part of our recommendation of Rust over modern C++ for security boiled down to one simple thing: it is incredibly easy to persuade developers to not commit (or, failing that, to quickly revert) code that does not compile. It is much harder to persuade them to not commit code that static analysis tooling tells them is wrong. It’s easy for a programmer to say ‘this is a false positive, I’m just going to silence the warning’; it’s very difficult to patch the compiler to accept code that doesn’t type check.

                                                                        1. 34

                                                                          What is the difference between using go vet to statically catch errors and using the compiler to statically catch errors?

                                                                              One is optional, the other one is in your face. It’s similar to the C situation. You have asan, ubsan, valgrind, fuzzers, libcheck, pvs and many other things which raise the quality of C code significantly when used on every compilation or even every commit. Yet, if I choose a C project at random, I’d bet none of those are used. We’ll be lucky if there are even any tests.

                                                                          Being an optional addition that you need to spend time to engage with makes a huge difference in how often the tool is used. Even if it’s just one command.

                                                                          (According to the docs only a subset of the vet suite is used when running “go test”, not all of them - “high-confidence subset”)

                                                                          1. 15

                                                                            When go vet automatically runs on go test, it’s hard to call it optional. I don’t even know how to turn if off unless I dig into the documentation, and I’ve been doing Go for 12+ years now. Technically gofmt is optional too, yet it’s as pervasive as it can be in the Go ecosystem. Tooling ergonomics and conventions matter, as well as first party (go vet) vs 3rd party tooling (valgrind).

                                                                            1. 21

                                                                              That means people who don’t have tests need to run it explicitly. I know we should have tests - but many projects don’t and that means they have to run vet explicitly and in practice they just miss out on the warnings.

                                                                              1. 2

                                                                                Even in projects where I don’t have tests, I still run go test ./... when I want to check if the code compiles. If I used go build I would have an executable that I would need to throw away. Being lazy, I do go test instead.

                                                                            2. 13

                                                                              Separating the vet checks from the compilation procedure exempts those checks from Go’s compatibility promise, so they could evolve over time without breaking compilation of existing code. New vet checks have been introduced in almost every Go release.

                                                                              Compiler warnings are handy when you’re compiling a program on your own computer. But when you’re developing a more complex project, the compilation is more likely to happen in a remote CI environment and making sure that all the warnings are bubbled up is tedious and in practice usually overlooked. It is thus much simpler to just have separate workflows for compilation and (optional) checks. With compiler warnings you can certainly have a workflow that does -Werror; but once you treat CI to be as important as local development, the separate-workflow design is the simpler one - especially considering that most checks don’t need to perform a full compilation and is much faster that way.

                                                                              Being an optional addition that you need to spend time to engage with makes a huge difference in how often the tool is used. Even if it’s just one command.

                                                                              I feel that the Go team cares more about enabling organizational processes, rather than encouraging individual habits. The norm for well-run Go projects is definitely to have vet checks (and likely more optional linting, like staticcheck) as part of CI, so that’s perhaps good enough (for the Go team).

                                                                              All of this is quite consistent with Go’s design goal of facilitating maintenance of large codebases.

                                                                              1. 5

                                                                                Subjecting warnings to compatibility guarantees is something that C is coming to regret (prior discussion).

                                                                                And for a language with as… let’s politely call it opinionated a stance as Go, it feels a bit odd to take the approach of “oh yeah, tons of unsafe things you shouldn’t do, oh well, up to you to figure out how to catch them and if you don’t we’ll just say it was your fault for running your project badly”.

                                                                              2. 4

                                                                                The difference is one language brings the auditing into the tooling. In C, it’s all strapped on from outside.

                                                                                1. 19

                                                                                  Yeah, “similar” is doing some heavy lifting there. The scale is more like: default - included - separate - missing. But I stand by my position - Rust is more to the left the than Go and that’s a better place to be. The less friction, the more likely people will notice/fix issues.

                                                                                2. 2

                                                                                  I’ll be honest, I get this complaint about it being an extra command to run, but I haven’t ever run go vet explicitly because I use gopls. Maybe I’m in a small subset going the LSP route, but as far as I can tell gopls by default has good overlap with go vet.

                                                                                  But I tend to use LSPs whenever they’re available for the language I’m using. I’ve been pretty impressed with rust-analyzer too.

                                                                                3. 12

                                                                                On the thing about maps not being goroutine safe, it would be weird for the spec to specify that maps are unsafe. Everything is unsafe except for channels, mutexes, and atomics. It’s the TL;DR at the top of the memory model: https://go.dev/ref/mem
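
                                                                                For illustration, the usual pattern is just to guard a shared map with a mutex - a minimal sketch, with made-up names:

                                                                                  package main
                                                                                  
                                                                                  import (
                                                                                  	"fmt"
                                                                                  	"sync"
                                                                                  )
                                                                                  
                                                                                  var (
                                                                                  	mu     sync.Mutex
                                                                                  	counts = map[string]int{}
                                                                                  )
                                                                                  
                                                                                  // increment is safe to call from multiple goroutines: every access to the
                                                                                  // shared map happens while holding the mutex.
                                                                                  func increment(key string) {
                                                                                  	mu.Lock()
                                                                                  	defer mu.Unlock()
                                                                                  	counts[key]++
                                                                                  }
                                                                                  
                                                                                  func main() {
                                                                                  	var wg sync.WaitGroup
                                                                                  	for i := 0; i < 4; i++ {
                                                                                  		wg.Add(1)
                                                                                  		go func() {
                                                                                  			defer wg.Done()
                                                                                  			increment("hits")
                                                                                  		}()
                                                                                  	}
                                                                                  	wg.Wait()
                                                                                  	fmt.Println(counts["hits"]) // always 4
                                                                                  }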

                                                                                  1. 6

                                                                                    Agreed. Whenever people complain about the Rust community being toxic, this author is who I think they’re referring to. These posts are flame bait and do a disservice to the Rust community. They’re like the tabloid news of programming, focusing on the titillating bits that inflame division.

                                                                                    1. 5

I don’t know if I would use the word “toxic”, which is very loaded, but just to complain a little more :-) take this passage:

                                                                                        go log.Println(http.ListenAndServe("localhost:6060", nil))
                                                                                      

                                                                                      Jeeze, I keep making so many mistakes with such a simple language, I must really be dense or something.

                                                                                      Let’s see… ah! We have to wrap it all in a closure, otherwise it waits for http.ListenAndServe to return, so it can then spawn log.Println on its own goroutine.

                                                                                       go func() {
                                                                                           log.Println(http.ListenAndServe("localhost:6060", nil))
                                                                                       }()
                                                                                      

There are approximately 10,000 things in Rust that are subtler than this. Yes, it’s an easy mistake to make as a newcomer to Go. No, it doesn’t reflect even the slightest shortcoming in the language. It’s a very simple design: the go statement takes a function and its arguments. The arguments are evaluated in the current goroutine. Once evaluated, a new goroutine is created with the evaluated parameters passed into the function. Yes, that is slightly subtler than just evaluating the whole line in a new goroutine, but if you think about it for one second, you realize that evaluating the whole line in a new goroutine would be a race condition nightmare and no one would actually want it to work like that.
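A minimal sketch of that evaluation order (the function names here are made up for illustration):

    package main

    import (
        "fmt"
        "time"
    )

    // slowValue stands in for http.ListenAndServe above: as an argument to a
    // go statement, it is evaluated synchronously, in the calling goroutine.
    func slowValue() string {
        time.Sleep(100 * time.Millisecond)
        fmt.Println("argument evaluated in main's goroutine")
        return "done"
    }

    func main() {
        // Only fmt.Println runs on the new goroutine; slowValue() has already
        // returned by the time that goroutine starts.
        go fmt.Println(slowValue())

        fmt.Println("main continues only after slowValue has returned")
        time.Sleep(50 * time.Millisecond) // give the goroutine a chance to print
    }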

                                                                                      Like, I get it, it sucks that you made this mistake when you were working in a language you don’t normally use, but there’s no need for sarcasm or negativity. This is in fact a very “simple” design, and you just made a mistake because even simple things actually need to be learned before you can do them correctly.

                                                                                      1. 3

                                                                                        In practice, about 99% of uses of the go keyword are in the form go func() {}(). Maybe we should optimize for the more common case?

                                                                                        1. 1

                                                                                          I did a search of my code repo, and it was ⅔ go func() {}(), so you’re right that it’s the common case, but it’s not the 99% case.

                                                                                        2. 2

                                                                                          I agree that the article’s tone isn’t helpful. (Also, many of the things that the author finds questionable in Go can also be found in many other languages, so why pick on Go specifically?)

                                                                                          But could you elaborate on this?

                                                                                          evaluating the whole line in a new goroutine would be a race condition nightmare and no one would actually want it to work like that.

IMO this is less surprising than what Go does. The beautiful thing about “the evaluation of the whole expression is deferred” is precisely that you don’t need to remember a more complicated arbitrary rule for deciding which subexpressions are deferred (all of them are!), and you don’t need ugly tricks like wrapping the whole expression in a closure which is then applied to the empty argument list.

                                                                                          Go’s design makes sense in context, though. Go’s authors are culturally C programmers. In idiomatic C code, you don’t nest function calls within a single expression. Instead, you store the results of function calls into temporary variables and only then pass those variables to the next function call. Go’s design doesn’t cause problems if you don’t nest function calls.
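As a rough sketch of that flat style, applied to the profiling-server example quoted above (serveAndLog is a made-up helper, not anything from the article):

    package main

    import (
        "log"
        "net/http"
        "time"
    )

    // serveAndLog bundles the nested calls into one function body, so the go
    // statement below has nothing left to evaluate in the caller.
    func serveAndLog(addr string) {
        log.Println(http.ListenAndServe(addr, nil))
    }

    func main() {
        addr := "localhost:6060" // evaluated here, in the current goroutine
        go serveAndLog(addr)     // only this call runs concurrently

        log.Println("main keeps running while the server listens")
        time.Sleep(time.Second) // placeholder for the rest of the program
    }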

                                                                                      2. 3

At least they mention go vet, so even people like me who didn’t know about it can arrive at similar conclusions. And they also mention that they are somewhat biased.

But I think they should just calmly state, without ceremony like “And yet there are no compiler warnings”, that this is the compiler output and this is the output of go vet.

                                                                                        This also seems unnecessary:

                                                                                        Why we need to move it into a separate package to make that happen, or why the visibility of symbols is tied to the casing of their identifiers… your guess is as good as mine.

                                                                                        Subjectively, this reads as unnecessarily dismissive. There are more instances similar to this, so I get why you are annoyed. It makes their often valid criticism weaker.

I think it comes as a reaction to people agreeing that golang is so simple, when in their (biased but true) experience it is full of little traps.

Somewhat related: what I also dislike is that they use loops for creating the tasks in golang, discuss a resulting problem, and then don’t use loops in rust - probably to keep the code simple.

                                                                                        All in all, it is a good article though and mostly not ranty. I think we are setting the bar for fairness pretty high. I mean we are talking about a language fan…

                                                                                        1. 5

                                                                                          This also seems unnecessary: […]

                                                                                          Agree. The frustrating thing here is that there are cases where Rust does something not obvious, the response is “If we look at the docs, we find the rationale: …” but when Go does something that is not obvious, “your guess is as good as mine.” Doesn’t feel like a very generous take.

                                                                                          1. 6

The author has years of Go experience. He doesn’t want to be generous; he has an axe to grind.

                                                                                            1. 3

                                                                                              So where’s the relevant docs for why

                                                                                              we need to move it into a separate package to make that happen

                                                                                              or

                                                                                              the visibility of symbols is tied to the casing of their identifiers

                                                                                              1. 3

                                                                                                we need to move it into a separate package to make that happen

                                                                                                This is simply not true. I’m not sure why the author claims it is.

                                                                                                the visibility of symbols is tied to the casing of their identifiers

                                                                                                This is Go fundamental knowledge.

                                                                                                1. 3

                                                                                                  This is Go fundamental knowledge.

                                                                                                  Yes, I’m talking about the rationale.

                                                                                                  1. 3

                                                                                                    https://go.dev/tour/basics/3

                                                                                                    In Go, a name is exported if it begins with a capital letter.

                                                                                                    1. 1

                                                                                                      rationale, n.
                                                                                                      a set of reasons or a logical basis for a course of action or belief

                                                                                                    2. 2

Why func and not fn? Why are declarations var identifier type and not var type identifier? It’s just a design decision, I think.

                                                                                              2. 1

The information is useful but the tone is unhelpful. The difference in what’s checked/checkable and what’s not is an important difference between these platforms – as is the level of integration of the correctness guarantees with the language definition. Although a static analysis tool for JavaScript could, theoretically, find all the bugs that rustc does, this is not really how things play out. The article demonstrates bugs which go vet cannot find but which are precluded by Rust’s language definition – that is real and substantive information.

                                                                                                There is more to Go than just some design decisions that make sense for a particular niche. It has a peculiar, iconoclastic design. There are Go evangelists who, much more strenuously than this author and with much less foundation, criticize JavaScript, Python, Rust, &c, as not really good for anything. The author is too keen to poke fun at the Go design and philosophy; but the examples stand on their own.

                                                                                              1. 27

                                                                                                In languages with a big standard library the usual path is:

                                                                                                1. Standard library grows a feature.
                                                                                                2. Someone makes a faster/nicer implementation, but the standard library can’t adopt it.
                                                                                                3. Every novice needs to be reminded “don’t use that old/slow standard library thing, use the better one” forever.

                                                                                                Rust already has this problem with std::mpsc. crossbeam-channel is faster, more flexible, with features that std::mpsc is lacking, and has a more consistent API.

                                                                                                1. 9

                                                                                                  Or “standard library adopts yet another implementation of the same thing because the old one can’t be replaced” like Python’s infamous urllib & urllib2.

                                                                                                  1. 11

                                                                                                    Another case is getopt replaced by optparse replaced by argparse.

                                                                                                    1. 3

                                                                                                      Yeah. The urllib case is just the most infamous one because there’s literally a “2” in the name. (And then there was a urllib3 but that’s an external library.)

                                                                                                      1. 1

                                                                                                        Naming it urllib3 is my biggest regret about urllib3. :)

                                                                                                        But at least urllib/urllib2 have now been comfortably under http.client and urllib in Python 3 for quite a while.

                                                                                                        That said, as a beginner Rustacean, it is quite frustrating to pick the “correct” http library. If I search Cargo for ‘http’, the top results are messy and not helpful. After a lot of investigation, it seems “hyper” is one of the correct choices (Rust’s version of urllib3?), if not “reqwest”. Finding a correct server library to use is even harder–do I want async/tokio, or some other concurrency model? Seems “warp” is a more recent library that is a good fit, but it’s not 1.0 yet–will I regret using it? Will I need to migrate to something else altogether in another year or two? Makes me really appreciate Go’s standard http library.

                                                                                                        1. 1

                                                                                                          reqwest is built on top of hyper, it’s a convenience layer like the famous python requests. Which IIRC was built on top of urllib3? :)

                                                                                                          Looks like Axum is going to be the “blessed” web server convenience layer of the Tokio ecosystem.

                                                                                                          1. 2

How do I make a decision between Axum and Warp? (Or should I be interfacing directly with frameworks like actix, rocket, rouille?)

If it’s any help, the context was that I wanted to prototype a quick JSONRPC2 mitm server that does some extra stuff between requests (as an excuse to learn Rust better, rather than just reading about it). I ended up spending so much time trying to choose which http server library to use that I abandoned the idea altogether. -_-

                                                                                                  2. 2

                                                                                                    Do you think there’s any chance Rust will adopt that API in a newer edition?

                                                                                                    Obviously I have the benefit of hindsight, but it’s not clear to me that channels should even be in std at all. Unlike mutexes—a relatively simple and universal primitive—channels often have behavior tied to some kind of runtime. For example, tokio has its own channel type.

                                                                                                    1. 8

                                                                                                      There’s a possibility of replacing some of the implementation back-end (i.e. make it faster), but there’s no chance of changing the public API.

Rust editions are “shallow”. They affect syntax and compiler warnings, but intentionally don’t touch deeper compiler internals or libraries, to avoid complicating the compiler too much. Making std edition-dependent is difficult, because code from multiple crates with different editions could share the same object. Currently there are no plans to make std edition-dependent.

                                                                                                      1. 3

                                                                                                        Even Mutex has some subtlety to it. Generally people recommend using the parking_lot crate’s Mutex over std’s. The parking_lot docs say that it “provides implementations of Mutex, RwLock, Condvar and Once that are smaller, faster and more flexible than those in the Rust standard library”.

                                                                                                        1. 5

                                                                                                          Right but like smallvec and smartstring, that crate optimizes for something totally different than the common simple case. Unless you have a whole lot of strings, vecs, or mutexes, there’s no reason to pull in those crates. I think channels are different. If a channel isn’t in some kind of hot path, I’m probably just going to use a Condvar or Barrier from std, along with some other ordinary data structure.

                                                                                                          I recently wrote a program that uses exactly one std::sync::Mutex, that’s locked a couple hundred times per second at most. I guarantee you that using parking_lot would have a 0.0% impact on performance.

                                                                                                          In general I feel like stuff in std should be pretty fast without necessarily hyper-optimizing for huge scale. I don’t feel that way about channels.

                                                                                                          1. 5

                                                                                                            I recently wrote a program that uses exactly one std::sync::Mutex, that’s locked a couple hundred times per second at most. I guarantee you that using parking_lot would have a 0.0% impact on performance.

                                                                                                            And that’s fine but the thing that you’re really saying is that this mutex implementation is optimised for your use case, where ease of use of API is the most important thing and the performance of locking is less important. The design space for mutexes is surprisingly large. For example:

                                                                                                            • Are you optimising for the contended or uncontended case? This impacts how many times you poll before you put the calling thread to sleep. If you assume contended then you should sleep immediately, if you assume uncontended then you want to poll for longer.
                                                                                                            • Do you need to guarantee fairness? If so, then you need a mechanism that inserts waiters into a queue (which, with something like a ticket lock, doesn’t have to be a data structure but rather a side effect of the algorithm), if not then you can use a simpler design.
                                                                                                            • Does the size of the mutex matter and to what extent? The simplest mutex can fit in a single bit. The futex system call on Linux has a mode for storing a set of 32 mutexes in a single 32-bit integer but this means that you get false sharing if two things try to lock different mutexes in the same set together. A ticket lock needs two machine words. A queue lock needs each waiter to have a queue entry that it can add to a list. If size doesn’t matter at all, padding the mutex to a cache-line size will help a lot in avoiding false sharing (the FreeBSD kernel mutex, for example, requires cache-line [typically 64 byte] alignment so that you never get two different mutexes in the same cache line).
                                                                                                            • Do you need the mutex to be usable before the memory allocator is initialised? This is a pretty niche case, but it’s one that I’ve encountered and it means that the mutex implementation can’t allocate memory.
                                                                                                            • Do you need the mutex to be sharable between different processes? If so, then either all state must be inline or in the kernel, you can’t use an indirection (pthread_mutex_t is normally a pointer to the real mutex structure).
                                                                                                            • Do you need priority propagation? If a low-priority thread holds a mutex and a high-priority thread sleeps on it then what should happen? Similarly, if a low-priority thread is sleeping on the mutex and a high-priority thread joins the sleeping set, which should be woken up first?
                                                                                                            • Do you need robust behaviour? If a thread / process owning a mutex terminates abnormally, then should the mutex be unlocked?
                                                                                                            • Do you need interoperability with some other library? For example, C++’s std::mutex has an optional implementation-defined ‘native handle’ call, which lets you get whatever platform-specific mutex it’s using so you can, on *NIX systems, mix std::mutex and pthread_mutex calls on the same object.

                                                                                                            There are probably different dimensions that I’m missing but even this set gives a design space of over a dozen different approaches to building a mutex, any of which will be better in some situations and worse in others. By picking any mutex implementation, you are implicitly asserting that this point in the design space is the most common. That may even be true today but will it be true in 20 years? A lot of these decisions have ABI implications and, in the case of the process-shared decision, that ABI extends beyond a single program.

                                                                                                            1. 1

                                                                                                              The design space for mutexes is surprisingly large.

                                                                                                              And therefore the specialized cases should be relegated to crates.

                                                                                                              By picking any mutex implementation, you are implicitly asserting that this point in the design space is the most common.

                                                                                                              I don’t necessarily think the current std::sync::Mutex is the most common, especially in “heavy duty” applications. But I do think the basic case is the right fit for the standard library, same as String and Vec. If you’re reaching for a mutex and you just need a mutex without any other considerations, it’s good.

                                                                                                              For channels, I would argue you probably have specific considerations. I wouldn’t even consider channels a basic primitive at all. I think that idea was mainly popularized by Go, and mainly worth doing because of the Go runtime. Channels work as an abstraction for using Go’s built in facilities for concurrency.

                                                                                                              As Rust doesn’t have a built in runtime like that, I don’t think channels should be built in either. But I also admit this is little more than a personal opinion.

                                                                                                              1. 2

                                                                                                                If you’re reaching for a mutex and you just need a mutex without any other considerations, it’s good.

                                                                                                                What does that mean? You want a mutex that doesn’t impose too large a size penalty and is optimised for the uncontended case? You are asserting that, within a multi-dimensional tradeoff space, there is a ‘basic case’. Once it’s in the standard library, software is going to depend on its performance characteristics and you can’t change it.

                                                                                                                But I do think the basic case is the right fit for the standard library, same as String and Vec

                                                                                                                Vectors I sort-of agree on: a resizable contiguous memory allocation is a pretty basic thing and a default geometric growth policy (combined with the ability to explicitly resize the underlying space) is a general building block.

                                                                                                                Strings I’m far less certain about. The only standard library I’ve seen with a moderately good string design is OpenStep, which defines an efficient interface for strings and makes it easy to plug in different storage. Again, strings have a multidimensional design space and are one of the big problems with C++ performance at scale: the standard library defines a string type that bakes the representation in. For strings:

                                                                                                                • Do you need efficient insertion? If so, don’t mandate contiguous storage because now operations become linear complexity in the size of the string when they could be constant complexity.
• Do you need to efficiently iterate over unicode code points or grapheme clusters? This has some impact on encoding, but for efficient iteration over grapheme clusters you may want to cache the cluster breaks.
                                                                                                                • Do you need to be able to efficiently generate C strings in a particular encoding?
                                                                                                                • What size of string do you want to optimise for? Small-string optimisations add overhead for large strings but avoid allocation for small strings below a certain threshold.

                                                                                                                It’s possible to define a string interface that can support implementations in all of the design space and provide implementations for some common points (OpenStep did this, though the design isn’t perfect) but I consider defining a single concrete type for strings to be one of the biggest mistakes that a standard library can make.

                                                                                                                1. 1

                                                                                                                  On the other hand not defining a single concrete storage for strings leads to.. well, Haskell, where you have to convert between 5 string types all the time, and think about the performance of the conversions (Text is UTF-16 so you might end up converting between UTF-8 and UTF-16 way too much!)

                                                                                                                  It’s very practical to standardize on contiguous UTF-8 vectors. Efficient insertion can be relegated to specialist alternatives (call it StringBuilder like Java does, for example).

                                                                                                                  1. 3

                                                                                                                    On the other hand not defining a single concrete storage for strings leads to.. well, Haskell, where you have to convert between 5 string types all the time, and think about the performance of the conversions (Text is UTF-16 so you might end up converting between UTF-8 and UTF-16 way too much!)

                                                                                                                    This isn’t a problem if you define an efficient interface for strings, such as OpenStep’s NSString or ICU’s UText.

                                                                                                                    It’s very practical to standardize on contiguous UTF-8 vectors. Efficient insertion can be relegated to specialist alternatives (call it StringBuilder like Java does, for example).

                                                                                                                    That’s fine if your goal is to build a string once. It isn’t if you want to have a string that you modify after creation. Anything that’s exposed to modification by a user, or receives updates from a data source, needs efficient insertion in the middle. That’s easy to do with a twine-like data structure, but if your standard library string operations work only on contiguous buffers then you end up needing a lot of adaptors.

                                                                                                      2. 2

                                                                                                        Yeah but then things like this appear where the usage of crossbeam breaks tokio..

                                                                                                      1. 2

                                                                                                        One underestimated property of having a feature in a language’s standard library is: The language core team is committing to maintaining that feature.

                                                                                                        Having something in the standard library or not does not remove or add work, it just shifts the responsibility and commitment around.

This can be good and bad. In early Python’s case, the core team perhaps bit off more “batteries” than they could chew, and a lot of standard library code rotted very quickly. This burden shifted to the third-party ecosystem, and let me tell you: It’s painful. We’ve been maintaining urllib3 since 2008, and we’ve burned out several maintainers in the process. Occasionally we are lucky to have maintainers who are allowed to work on urllib3 as part of their day jobs, but that won’t last indefinitely (people switch jobs, or want a change in their life).

                                                                                                        When a language is funded through a foundation or a corporation, part of that funding is to maintain the standard library. Some features are so fundamental and core to day-to-day programming, that they should be maintained by the core team. They’re too important to leave it up to the altruism of random people on the internet, and their ephemeral circumstances that allow them to express it.

                                                                                                        I would argue that HTTP server/client is one of those things that is important enough to be maintained by the core language team. Common cryptographic primitives is another one (or at least good bindings to something like NaCl).

                                                                                                        I wish it was more common for language core teams to “acquire” third party projects and merge them into the standard library to provide more appropriate maintenance guarantees.

                                                                                                        Or another option is to merge some minimal subset of a third party project (the most useful 80%), while allowing the third party project to continue operating independently as an optional dependency.

                                                                                                        1. 11

                                                                                                          Hiya lobsters, if anyone is interested in working on open source, we have lots of “Contributor Friendly” tagged issues: https://github.com/urllib3/urllib3/issues

                                                                                                          We can even compensate for some kinds of issues! Pop into our Discord chat to discuss. :)

                                                                                                          1. 13

                                                                                                            Cool project but I really wish I didn’t have to use Discord.

                                                                                                            1. 12

                                                                                                              Maintaining a big open source project is hard enough as it is, this is the sweet spot for us right now. We’ve changed several chat platforms over the years (we used Gitter for a while, for example), who knows what will be next!

                                                                                                              1. 2

                                                                                                                Just out of curiosity, why is that?

                                                                                                                1. 11

                                                                                                                  Drew here has pretty much summed up my sentiments on this: https://drewdevault.com/2021/12/28/Dont-use-Discord-for-FOSS.html

                                                                                                                  1. 1

                                                                                                                    Thanks!

                                                                                                            1. 5

                                                                                                              A lot of people seem upset that this article is not specifically relevant to them, since it appears to be written for people who are building “Web3 DApps” (or merkle-trees in lobsters tag parlance) and the author realized that there’s no point in using JWT in that context. As someone who builds such things, I feel it’s an important realization for people who are getting into the space.

                                                                                                              If anyone wants to read more about the mentioned “Login with Ethereum” thing, the standard and working demo is here: https://login.xyz/ – it can be used with several dozen different wallets (not just MetaMask), including cross-device (mobile to desktop using WalletConnect), hardware wallets, and even Social Recovery Wallets. It’s pretty nifty.

                                                                                                              Not trying to get anyone to buy anything (yes, this can be used with an empty wallet). It’s just cool technology. If it’s not relevant to you then feel free to ignore. 🤷

                                                                                                              I’m not op but I’ve been building things in this space for a long time, happy to answer questions if you have any.

                                                                                                              1. 6

                                                                                                                Any idea why one should switch to Lua for their config?

                                                                                                                1. 11

                                                                                                                  As far as I know, the reasons are any/all of:

1. LuaJIT runs much faster than vimscript (vimscript is entirely unoptimized, although I’m not sure how much of a difference this will make in relatively small amounts of code like a config).
                                                                                                                  2. Lua is a full programming language, which could have utility for more complex configurations.
                                                                                                                  3. You prefer Lua syntax to vimscript.
                                                                                                                  1. 8
                                                                                                                    1. Lua is a mainstream programming language with lots of great tooling, libraries, etc. that continues to improve at a great pace.

A big part for me was the frustration of learning vimscript while knowing that I will never use this knowledge and code anywhere else. Now when I mess with my nvim configs or write a new plugin, I’m also practicing my Lua, which I can use in other projects. :)

                                                                                                                    1. 1

                                                                                                                      Yeah good point. I do know (some) vimscript but have converted (most) of my configs to lua specifically to learn lua.

                                                                                                                  2. 5

                                                                                                                    I would switch in a heartbeat to Lua for my configs in all software I use instead of the usual JSON, YAML, TOML.

                                                                                                                    Why? Because it allows me to script and abstract things I might want. For the cases I don’t want to abstract anything, I can simply use Lua table notation instead of JSON and serve basically the same purpose.

                                                                                                                    It also allows software to move from configuration files to initialization files. The change might seem subtle, but with initialization files, you don’t need to outsmart your user and provide all the features they might ever want or need as you do with dumb configuration files. Instead, you can provide a flexible API and let them build initialization files to serve their unique personal needs.

                                                                                                                    1. 3

                                                                                                                      In https://changelog.com/podcast/457 TJ (neovim maintainer) says don’t switch to lua just for the sake of it. Search for “to write your entire configuration”.

I took a snippet of mine that I thought was confusing (to me) in vimscript and ported it over. It didn’t take too long, and I had to learn the vim APIs for invoking Lua etc., which was new to me.

                                                                                                                      1. 2

This strikes me as accurate. I was able to get most of my existing init.vim ported to lua, with a couple of small bits not working correctly (or at least me not knowing how to make them work correctly yet). As I mentioned in my above comment, I could leave those small pieces in vim.cmd([[ ... ]]) blocks in init.lua and have them work the same as they did before, which is fine by me. This podcast was recorded in August 2021, and I’m looking forward to further improvements in the configuration APIs from the neovim contributors.

                                                                                                                        My own motivation for switching was some mix of wanting to try out this new Lua-in-Neovim thing I’d been hearing about, partially wanting to move to some fancy new post-Neovim-0.5 plugins that all had configuration documentation in Lua, and partially wanting to break up my lengthy init.vim into several smaller files. I don’t actually know if it’s impossible to do that in Vimscript, but it was certainly straightforward in Lua.

                                                                                                                      2. 2

                                                                                                                        Lua is simple and logical, vimscript seems to be the opposite of that.

                                                                                                                      1. 1

                                                                                                                        Is it viable to do the Terraform bits using NixOps at all?

                                                                                                                        1. 2

                                                                                                                          Probably? I’m personally liking Terraform for this a lot more than NixOps (if only because terraform is the lingua franca of frantically gluing clouds together) but you could likely get away with NixOps for this.

                                                                                                                          1. 1

                                                                                                                            Thanks. :) I still haven’t forced myself to play with NixOps, but Terraform has made me cry many times before, so it’s more of an “enemy of my enemy” type of situation.

                                                                                                                            1. 3
                                                                                                                              1. 1

                                                                                                                                Noted!

                                                                                                                        1. 2

                                                                                                                          Hi, creator here, thanks for submitting and please share any feedback or thoughts you might have!

                                                                                                                          1. 21

                                                                                                                            Without more context it’s difficult at a glance to know how to interpret “Harmful”.

                                                                                                                            It looks like it’s saying “Mozilla’s implementation of the Serial API is harmful” but it sounds like what it’s actually saying is “Mozilla considers the Serial API to be harmful” which is very different!

                                                                                                                            1. 1

                                                                                                                              Hmm, yeah, good point. The wording is taken from their own site which is linked to when one clicks on the status.

                                                                                                                              Suggestion on how it could be improved?

                                                                                                                              1. 8

                                                                                                                                Personally I have no idea what this website is about. Perhaps add a few lines on top that explain what it is?

                                                                                                                                1. 1

                                                                                                                                  Further down I’ve written:

                                                                                                                                  observations of APIs with controversy around them and where hard facts has often been hard to find

                                                                                                                                  Maybe replacing/extending the current “Background” in the top with something similar? Maybe like this:

A gathering of Web API specifications that have caused controversy among browser vendors, giving them relevant context

                                                                                                                                  1. 3

                                                                                                                                    I think you need even more context than that. What is Web API? Why is it controversial? The nice thing about the FAQ format is that you can spend the first 1–3 items answering questions like these and anyone who already has this context can just skip over them.

                                                                                                                                    A design note—the text in the FAQ expands to fill the full width of the screen (or at least the 1,280 pixels of my browser window), and there is also no margin between the text and the edge of the screen. Both of these things make the text harder to read. You might consider limiting the width of the text to 800 px or 40 em (very approximate numbers) and, on smaller screens, adding at least 10 px of whitespace on either side.

                                                                                                                                    1. 1

Suggestions on wording and such are much appreciated; this is just something I threw together quickly in an afternoon to try and gather references on these topics :)

                                                                                                                                2. 3

                                                                                                                                  Perhaps you’d consider changing the colour scheme? To me, green = GOOD and red = BAD, which makes it hard to understand what’s actually going on at first glance.

                                                                                                                                  1. 1

                                                                                                                                    In what way? Green = positive about the state of the spec, Red = negative about the state of the spec, isn’t that the correct way?

                                                                                                                                    1. 4

                                                                                                                                      I think the problem is that it’s not immediately obvious that this ‘judgement’ of good vs bad is about the spec. At first glance this just looks like chrome has everything green and is thus good, while firefox/safari have everything red and are thus bad.

                                                                                                                                      1. 1

                                                                                                                                        Yeah, good feedback, will try to find time to improve it asap

                                                                                                                                  2. 2

                                                                                                                                    “Harmful to Users” or “Deemed Harmful to Users” perhaps? The key point that needs communicating is that Mozilla has determined that implementing the spec would be harmful to its own users, e.g. someone might use the serial api to modify their insulin delivery device.

                                                                                                                                    1. 1

                                                                                                                                      Intentionally omitted.

                                                                                                                                      1. 1

                                                                                                                                        Well, Mozilla’s own description of their “Harmful” label is “Mozilla considers this specification to be harmful in its current state.”

That the focus is on the spec, not the implementation, should be clarified.

                                                                                                                                        1. 5

                                                                                                                                          “Harmful” doesn’t mean anything on its own, you have to tell a person what is being harmed.

                                                                                                                                          1. 1

                                                                                                                                            Totally, but that’s better explained by the ones considering it to be harmful than for me to try and summarize and maybe misinterpret

                                                                                                                                            1. 1

                                                                                                                                              By using the single word “Harmful” I’d argue that you have summarized. It’s just that that summary is ambiguous and prone to misinterpretation, as others in this thread have pointed out.

                                                                                                                                              1. 1

                                                                                                                                                How would you summarize it better? I would love to do it better

                                                                                                                                                1. 1

                                                                                                                                                  Maybe “Mozilla considers it harmful” or “No plans to implement”? I know these are more wordy than what you’ve got now, but I can’t think of a shorter bit of text that still conveys the right meaning.

                                                                                                                                      2. 2

                                                                                                                                        Added an issue for it to ensure it doesn’t get lost: https://github.com/voxpelli/webapicontroversy.com/issues/1

                                                                                                                                        1. 1

                                                                                                                                          Perhaps “Rejected” instead of “Harmful”?

                                                                                                                                          Though Mozilla themselves refer to it as “harmful”.

                                                                                                                                          1. 1

                                                                                                                                          They often haven’t rejected the specs, though; rather, they have found that the specs, in their current state, would be harmful to the web.

                                                                                                                                            Remember: All of these specs are drafts and still under discussion, even though Chrome has decided to ship them

                                                                                                                                            1. 1

                                                                                                                                              Yea, I get that, I just don’t feel that “HARMFUL” is representative of what’s going on.

                                                                                                                                              For example, the Safari side of things talks about anti-fingerprinting challenges, which is fair.

                                                                                                                                              I don’t get the sense that the Chrome side is actively trying to enact more ways to fingerprint, but rather they’re trying to build a browser environment that competes with OS functionality, which I think is also fair (ideology aside). I’m not sure how Safari feels about this, given that they’re a purveyor of iOS and macOS and probably don’t love the idea of browsers competing.

                                                                                                                                            Meanwhile I’m not sure what Mozilla’s agenda is. They’re no longer providing Firefox OS, it’s not clear that Firefox is interested in pushing browser functionality forward, and at the same time they’re experimenting with ads/sponsored content by default.

                                                                                                                                            My personal bias, as someone who uses Linux and benefits greatly from cross-platform applications like browser apps, is that I like the idea of these additional WebAPIs and it doesn’t sound intractable to make them robust against fingerprinting. The cost of not advertising the functionality by default and even requiring the user to manually enable them seems more than worth it (rough sketch of what I mean below).

                                                                                                                                              My wish for something like the Web API Controversy page (which I appreciate exists as it’s a handy dashboard to keep track of!) is that it didn’t make the premise of the proposals seem nefarious and intractable. :)
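
                                                                                                                                            Here’s that rough sketch, assuming Chromium’s current navigator.serial behaviour (the connectButton element and the baud rate are placeholders I made up): the site has to feature-detect the API, and nothing is exposed until the user picks a device in a browser-controlled prompt.

                                                                                                                                                // Hypothetical opt-in flow; assumes Chromium's Web Serial API.
                                                                                                                                                const connectButton = document.querySelector<HTMLButtonElement>('#connect')!;
                                                                                                                                                const serial = (navigator as any).serial; // not in every lib.dom.d.ts yet
                                                                                                                                                if (!serial) {
                                                                                                                                                  // Browser doesn't implement the spec (e.g. Firefox): degrade gracefully.
                                                                                                                                                  connectButton.hidden = true;
                                                                                                                                                } else {
                                                                                                                                                  connectButton.addEventListener('click', async () => {
                                                                                                                                                    // requestPort() must run inside a user gesture and shows a per-device
                                                                                                                                                    // chooser, so the page can't enumerate hardware silently.
                                                                                                                                                    const port = await serial.requestPort();
                                                                                                                                                    await port.open({ baudRate: 9600 }); // baud rate is just an example
                                                                                                                                                    // ...read/write via port.readable / port.writable streams...
                                                                                                                                                  });
                                                                                                                                                }

                                                                                                                                            The point is that the capability stays behind an explicit per-device prompt (and could additionally sit behind a browser setting) rather than being silently available to every page.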

                                                                                                                                      3. 2

                                                                                                                                        I think it would be nice to link to discussions directly in the details, e.g. https://github.com/mozilla/standards-positions/issues/336

                                                                                                                                        1. 2

                                                                                                                                          I prefer to link to the most official kind of reference and have it refer to the discussions they feel are relevant; that feels like it has a better chance of staying up to date and as objective as possible

                                                                                                                                      1. 2

                                                                                                                                        The vim integration off the bat looks great! Very compelling feature, thanks for sharing. :)

                                                                                                                                        1. 2

                                                                                                                                          Have you tried just using Syncthing, without the SMB share? There’s a native Android client. It works okay, though I need to figure out a better way to arrange the sync destination/layout.

                                                                                                                                          1. 1

                                                                                                                                            I used to have only Syncthing on all of my devices but have now configured an SMB share so that it’s a bit easier to browse the files. Also, my family can use it to access a family photo archive.

                                                                                                                                            1. 1

                                                                                                                                              Right, I use Syncthing to back up my phone’s photos to my NAS, which has its own sharing/display mechanisms. Feels like a fairly good substitute for Google Photos sync, at least.

                                                                                                                                              Do you feel that the SMB component is actually combining multiple solutions though? It sounds like the actual Dropbox-like selective sync is all done by Syncthing, but you’re augmenting the result with SMB for more use cases. It just so happens that Dropbox does all those things in addition to sync (sharing, album gallery, etc.), but I feel more comfortable knowing that Syncthing will only safely sync files and not risk being exposed to weird sharing vulnerabilities.

                                                                                                                                              1. 1

                                                                                                                                                Syncthing works well for me for the same use case: backing up files from the device to the NAS. The other way around is a bit more challenging. Let’s say I want to access my family photo archive (which is many gigs) from my phone. For this I was planning to use an SMB share because it lets me browse the remote folder and access the file I want. I guess it would be nice to have a GUI in Syncthing to stop ignoring a file/folder and download it to the device.
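
                                                                                                                                                If I’m reading the Syncthing ignore docs right, something like this in the phone’s .stignore might approximate that selective download (an untested sketch, and the folder name is made up): un-ignore the one album I want locally, then ignore everything else.

                                                                                                                                                    // .stignore on the phone; patterns are matched top to bottom.
                                                                                                                                                    // Pull down just the album I currently want on the device:
                                                                                                                                                    !/FamilyPhotos/2023-summer
                                                                                                                                                    // Ignore (i.e. don't download) everything else:
                                                                                                                                                    *

                                                                                                                                                Editing that by hand from a phone is the clunky part, which is why a GUI toggle for it would be so nice.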

                                                                                                                                                1. 1

                                                                                                                                                  Yea, you’re totally right about that. I also find Syncthing is much simpler to reason about as one-way backups rather than bidirectional data flow.