Threads for af

  1. 4

    How do I ssh -X with Wayland?

    1. 14

      waypipe will proxy Wayland messages similarly to ssh -X.
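
      For example, something like this (an untested sketch – the host and program names are placeholders):

      waypipe ssh user@remotehost some-wayland-program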

    1. 2

      https://support.mozilla.org/en-US/questions/1370494

      Oh dear God! They want to decrease market share even more.

      1. 0

        On ESR with blocked updates.

        1. 1

          Oh man, with C++ gaining more and more functionality at an ever-increasing pace, one thing that has been sort of left behind is ownership in threads.

          1. 2

            Interesting, how come grep hasn’t added OpenMP support yet?

            1. 13

              Like the author mentions, there have been many posts and discussions about V that are a bit harsh or antagonistic. These have left a bad taste in my mouth regarding V. I really appreciate this post being more objective, calm, and kind. I especially liked this:

              so long as the rest of the V community is enjoying working on V and no one is getting hurt, then whatever there are more important things to worry about.

              That’s a great way to look at it.

              1. -10

                Getting hurt in a git repo, new level of wokeness.

              1. 5

                explicit over implicit

                I see it has defer. I would instead be tempted by explicit destructors (a.k.a. “higher RAII” or whatever Vale calls it). Defer-style cleanup kind of has the wrong default: it works well for classic resources that the user expects, like memory, files and mutexes, but when making an API around some other resource, it is nice to be able to give the user an object that they can’t simply forget to do something about. If they want to ignore it, they just make a function that consumes it – destructors in this sense are just normal functions.

                1. 4

                  There is at least one person looking at ways to make forgetting to call a cleanup function a compile error.

                  1. 1

                    You mean to have something like:

                    def __defer__(self):
                       # this is called on defer
                    

                    ?

                    1. 4

                      No, not if you have to remember to call it. Not a special function name either. Forget defer (or maybe keep it for convenience’s sake). Think destructors, except that the calls are explicit – it’s an error to let such an object go out of scope alive.

                      // Let's say this means that HotPotato must not outlive its scope.
                      #[derive(Resource)]
                      type HotPotato = struct{};
                      
                      /// Classic destructor – takes nothing, returns nothing
                      fn foo(self: HotPotato) void = {
                          whatever_cleanup();
                          // Only in a destructor implementation, after all obligations are met:
                          self.forget(); // Actually go out of scope (derived from Resource).
                      };
                      
                      /// The typestate pattern – consumes the object, returns something else
                      fn bar(self: HotPotato, otherArgument: int) SomeOtherTypeState = {
                          // implementation defined
                      };
                      
                  1. 16

                    Who the fuck knows?!!

                    1. 5

                      Pub drinking!

                      1. 1

                        I have faint memories of that. Have fun.

                      1. 1

                        If I were the maintainer, I would just say that I don’t have time to maintain this, please fork away… but then there would be no drama =)

                        1. 4

                          You seem to be missing that the “contribution” mentioned in the article is the addition of a package to the ports tree, which the contributor would then be maintaining, not the Alpine developers.

                        1. 7

                          Congrats! And thanks for not changing the name!

                          1. 3

                            Drinking in pubs! Let’s go!

                            1. 14

                              I’ve never seen the bash pipefail option before, and what I’ve been able to read and try does not line up with what is in the blog post. Can someone clarify this for me?

                              As I understand it, pipefail is about setting the exit status of the overall pipeline:

                              $ false | true
                              $ echo $?
                              0
                              $ set -o pipefail
                              $ false | true
                              $ echo $?
                              1
                              $ 
                              

                              But now if I do

                              $ set -o pipefail
                              $ false | sleep 2
                              $ 
                              

                              That command runs for two seconds. In particular, the sleep does not seem to have been interrupted or have any indication that false failed. So if the problem was the command

                              dos-make-addr-conf | dosctl set template_vars -
                              

                              Then yes, pipefail is going to make that shell script exit 1 now instead of exiting 0. But I don’t see what stops dosctl set template_vars - from taking the empty output from dos-make-addr-conf and stuffing it into template_vars. Is the whole shell script running in some kind of transaction, such that the exit value from the shell script prevents the writes from hitting production?

                              Thanks for any clarifications. (I agree with the general rule here about never using shell to do these things in the first place, pipefail or not!)

                              1. 14

                                You’re absolutely right, pipefail is only about the return value of the entire pipeline and nothing else.

                                From the article:

                                Enabling this option [pipefail] changes the shell’s behavior so that, when any command in a pipeline series fails, the entire pipeline stops processing.

                                Nope, wrong, nothing stops earlier.

                                1. 5

                                  Author here – good catch! I tried to golf the example down to a one-liner for clarity, but it looks like I need to update the blog.

                                  Indeed, as @enpo mentioned in a sibling post, -e is also critical, and a more accurate reproduction would be something like…

                                  cat unformatted.json | jq . > formatted.json
                                  

                                  If unformatted.json does not exist, then, without -e and -o pipefail, you will clobber formatted.json.

                                  1. 8

                                    If unformatted.json does not exist, then, without -e and -o pipefail, you will clobber formatted.json.

                                    Even with errexit and pipefail, you will still clobber formatted.json

                                    $ bash -c '
                                     > set -o pipefail
                                     > set -o errexit
                                     > printf "{}\n" >formatted.json
                                     > cat unformatted.json | jq . >formatted.json
                                     > '
                                    cat: unformatted.json: No such file or directory
                                    $ cat formatted.json
                                    

                                    This is because bash starts each part of a pipeline in a subshell, and then waits for each part to finish.

                                    Each command in a pipeline is executed as a separate process (i.e., in a subshell).

                                    And before running the commands in the subshells, bash handles the redirections, so formatted.json is truncated immediately, before the commands are even run, which is why you get behavior like:

                                    $ cp /etc/motd .
                                    $ wc -l motd
                                    7 motd
                                    $ cat motd | wc -l > motd
                                    $ cat motd
                                    0
                                    
                                    
                                    1. 7

                                      Sigh. I’ve updated the post with a new (hopefully correct) contrived example:

                                      (dos-make-addr-conf | tee config.toml) && dosctl set template_vars config.toml
                                      
                                      1. 7

                                        Hm, no lobsters acknowledgments in the post? Kind of a shame.

                                        1. 4

                                          This does have the desired effect of not running dosctl if dos-make-addr-conf fails, but it is a bit hard to read. Why are you using tee? Do you want the config to go to stdout as well? One way to make the control flow easier to read is to use if/else/fi:

                                          if dos-make-addr-conf >config.toml; then
                                              dosctl set template_vars config.toml
                                          else
                                              printf 'Unable to create config.toml\n' >&2
                                              exit 1
                                          fi
                                          

                                          This way your intentions are clearer and you don’t even need to rely on pipefail being set.

                                    2. 4

                                      The article focused on set -o pipefail, but the fix presented also had set -e. According to the documentation, this makes all the difference.

                                      The article should probably have been more clear in that regard.

                                      1. 2

                                        I took the theory for a test drive, and @lollipopman is entirely correct. Today I learned something new about shell scripting :)

                                        $ cat failtest.bash
                                        #!/bin/bash
                                        set -euo pipefail
                                        
                                        false | sleep 2
                                        
                                        $ time ./failtest.bash
                                        real    0m2.010s
                                        user    0m0.000s
                                        sys     0m0.008s
                                        
                                      2. 2

                                        Just to get it out of the way first, pipefail in itself won’t stop the script from proceeding, so it only makes sense together with errexit or errtrace or an explicit check (à la if false | true; then …). As you say, it’s about the status of the pipeline.

                                        man 1 bash:

                                        If pipefail is set, the pipeline’s return status is the value of the rightmost command to exit with a non-zero status

                                        But you seem to be right: pipefail doesn’t propagate the error across the pipeline, which isn’t surprising given the description above. Firstly, the processes in the pipeline run concurrently, so of course they don’t wait for each other’s exit statuses. Secondly, it doesn’t kill the other processes either – not even with a SIGHUP, and your sleep command would evidently have died if it had received one.
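
                                        For example, a minimal sketch of the explicit-check variant (the commands are contrived, purely for illustration):

                                        set -o pipefail
                                        if ! false | true; then
                                            printf 'pipeline failed\n' >&2
                                        fi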

                                        1. 1

                                          @rsc, I am pretty sure you are correct that setting pipefail, errexit, or nounset has no effect on whether dosctl set template_vars - is run as part of the pipeline. Bash starts all the parts of the pipeline asynchronously, so whether dos-make-addr-conf produces an error or not, even with pipefail, has no effect on whether dosctl is run. I believe the correct solution is to break the pipeline apart into separate steps and check error codes appropriately.
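
                                          Roughly like this, as a sketch (the temporary file path is just for illustration):

                                          # Separate steps instead of one pipeline; check each exit status.
                                          dos-make-addr-conf > /tmp/addr.conf || exit 1
                                          dosctl set template_vars - < /tmp/addr.conf || exit 1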

                                        1. 8

                                          Why not just postgres?

                                          1. 7

                                            The claim is that they don’t want to do the operations work that requires. At this point it sounds like they’ve spent more time hacking random things together than they would have on just running PostgreSQL. But they also only have one giant table, apparently, so I think there may be deeper issues. I mean, if you only need a kv store, why not just start with a kv store like Tokyo Cabinet?

                                            1. 5

                                              If you read the article, it explains that they actually don’t want to keep using just a key-value store style, they want to migrate to more complex data and in fact are in the process of doing so.

                                              1. 3

                                                Long term support and viability, I’ve never heard of any projects using TC.

                                                1. 1

                                                  Considering what they’ve been doing wrapping etcd and the like, I’m not sure it’s that big a concern to them. Plus TokyoCabinet (or KyotoCabinet now) have been around longer than etcd, I think. They’re just pretty stable.

                                                  1. 1

                                                    TC is a building block, but using it directly you have to be careful. I’ve been bit by a corrupt TC more than once

                                              2. 3

                                                I consider this the most important question at the start of any system architecture discussion and it had better have an extremely detailed answer with considerable evidence. But I don’t work at an internet company

                                                1. 3

                                                  I mean this in a pretty judge-y way, but only as gossip: this story and the previous one sound like a case of the CTO wanting to play with their Legos.

                                                  Even if they want to move to a more complex data structure later, dismissing “just buy a managed SQL server with the money that companies give you every month” because of hand-wave-y ops work (it’s tailscale! They have money to hire someone who is competent at databases) feels really odd.

                                                  1. 3

                                                    Except as they’ve said, they don’t want to deal with the operational overhead of Postgres or MySQL. They have people who could deal with it but they don’t want to. They’re a fairly small company so they aren’t going to hire someone just to deal with making sure a DB is replicating properly. They seem to only have one writer process and maybe some distributed readers? Litestream with the upcoming read replicas seems like the perfect fit for their use case.

                                                    1. 3

                                                      Sorry, you’re right that their current SQLite setup is very reasonable. This is more about their bespoke etcd model.

                                                      The operational efforts of a DB exist but if they have one writer process and some distributed readers, it’s a known quantity and is … mostly a solved problem? Importantly, it’s also a solved problem in the “pay another company some money to deal with it for you” variety in my opinion.

                                                      They wrote a custom etcd client because they didn’t want to set up a primary/secondary replica setup (and it’s not like those other solutions don’t also have the same operational concerns about backups and the like!)

                                                      There are operational concerns about any database, nice thing about most of the classic relational DBs is that there are documented, “boring” solutions.

                                              1. 1

                                                I’m surprised no one forked PHP and cleaned it up. You can’t beat PHP for web pages.

                                                1. 7

                                                  That happened! https://hacklang.org/

                                                  It seems like it’s mainly used at Facebook though …

                                                  1. 4

                                                    PHP is bad for webpages because it’s not a suitable HTML templating language and it uses file based routing. A suitable templating language is safe by default. Even the cool new PHP things all end up using some other safer language for templating. File based routing can be okay if it’s smart routing, like there’s some parsing of route info, a la Next.js, but PHP file based routing is dumb URL → page and you need to use Apache or whatever to make the routing acceptable. I guess someone could fork PHP to fix those issues, but that would just be Laravel, no?

                                                    What PHP has going for it is that it is very easy to get something to production quickly. This is a very important quality, perhaps even the most important for many uses. Still, for most new users today my recommendation is to just make a static site and host it on Netlify so you can collect form inputs or use Airtable for forms.

                                                    1. 2

                                                      it’s not a suitable HTML templating language

                                                      In light of laravel and symfony I don’t think that is true at all.

                                                      1. 3

                                                        Laravel has Blade, which is a superset of PHP because PHP is not a suitable templating language.

                                                        Symfony has https://github.com/symfony/templating which I guess is closer to pure PHP, but it sucks because you have to manually escape things.

                                                        1. 3

                                                          FYI the templating URL you’re linking to is Symfony’s templating language framework, not something intended to be used as-is. The actual templating language symfony and Drupal use is Twig, which is escaped by default.

                                                          1. 1

                                                            Thanks. I thought it was something Mustache-like, but DuckDuckGo failed me.

                                                          2. 2

                                                          And Laravel is totally fine for me; it’s not like people started inventing Mustache, Jinja, and frameworks like Vue, React, etc. just for fun.

                                                        2. 1

                                                          it uses file based routing

                                                          Most serious projects use PHP frameworks that do not use file-based routing, it’s there at the lower level but that’s not really part of your development experience. You define routes very similar to the way you would in e.g. popular JavaScript frameworks.

                                                          1. 1

                                                            and you need to use Apache or whatever to make the routing acceptable.

                                                            That’s what I said.

                                                            1. 1

                                                              I guess I’m not understanding your point on how PHP’s native routing makes it bad for web pages. I’m guessing I disagree with you on something here but I’m not even sure what that is :) File-based routing is fine for very simple sites, and if you’re doing something more complex you have a framework that gives you more power. Isn’t this exactly the same as HTML+JS? Web servers use file-based routing too (unless you’re doing something fancy with JS) so it’s not really a weakness of PHP, that’s just how the web works.

                                                              1. 1

                                                                No, you want to separate routes from controllers in any kind of practical system. It doesn’t have to be a regex (although those are nice to have), but eg it should be easy to do /page/:id. In PHP, to do that, you have to leave PHP, go to your web server, configure that to send pages to index.php but include the original URL as a query parameter or something, go back to PHP and add code to unpack the information from the router, then do the actual routing. None of that is impossible, but it’s not the easy path, and the easy path should include /page/:id because that’s a minimum for a dynamic web server. If you only need static routing, just use a static site generator.

                                                                1. 1

                                                                  you want to separate routes from controllers in any kind of practical system

                                                                  I agree with you, and there is a case here that PHP could have some native routing support that would make the framework implementations of this simpler. But I would still say, the process you describe doesn’t really matter as a PHP dev because all of that is invisible and handled for you by any popular framework (routes and controllers are decoupled). Stuff like /page/:id I’m pretty sure I’ve been able to manage just through mod_rewrite e.g. /foo/id/3 gets matched to foo.php?id=3 – although forgive me if PHP did require something to make that work and I just forgot, I haven’t needed to set that up in a long time.
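
                                                                    For reference, the kind of rule I have in mind is roughly this (an .htaccess sketch from memory, so treat it as illustrative):

                                                                    RewriteEngine On
                                                                    # Map /foo/id/3 to foo.php?id=3
                                                                    RewriteRule ^foo/id/([0-9]+)$ foo.php?id=$1 [L,QSA]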

                                                                  1. 1

                                                                    Go back to the beginning:

                                                                    I’m surprised no one forked PHP and cleaned it up. You can’t beat PHP for web pages.

                                                                    The idea of this comment, as I understand it, is not “the PHP ecosystem is good” or “there are strong PHP frameworks”. It’s “PHP the language is really nice and convenient for making a dynamic website, and if someone could just go back and fix all the weird naming issues with the standard library, it would be perfect.” I disagree with that. My disagreement isn’t that good frameworks can’t be written in PHP. Laravel is very cool! It’s that the core language and standard library, even if you went back and fixed the inconsistent naming stuff, aren’t good enough to make a dynamic website for two reasons: 1. the templating is unsafe and you need Twig or something to fix it and 2. you need non-file based routing, or at least dynamic file routing, and to do that in PHP, you need to coordinate some hand off with your webserver.

                                                                    1. 1

                                                                      I think I understand the point better, thank you for clarifying.

                                                                      To that I would still say though, “good” must at some point be relative. Every language (that I’m aware of anyway) when used for the web, is primarily used with a framework. At least for any non-trivial project, which is arguably the use case that matters the most.

                                                                      By the standard and reasoning I’m taking away from this criticism, there are /no/ languages that are good at dynamic websites because if they were, none of them would need frameworks. Express.js and Django exist because JavaScript and Python aren’t perfect for web apps out of the box. In that sense this criticism of PHP seems tautological.

                                                                      1. 1

                                                                        Go’s standard library has an adequate templating library and router. It’s very common to use it without a “framework” although you will probably end up using some kind of third party packages for things like databases and sessions. I don’t think it’s unreasonable to think that “a cleaned up PHP” would come with a decent router and templating language.

                                                        3. 2

                                                          How so? Most web frameworks have a templating language that takes the same form as PHP (ERB, etc.).

                                                          1. 1

                                                            Hm, how many of those web frameworks have server modules for Apache or nginx, the way PHP does?

                                                          2. 1

                                                            It would be cool if someone forked it into a very small binary, aggressively stripped down, with things like autoloading or even OOP support removed – something that could be used to build quick and dirty web interfaces, perhaps even on microcontrollers.

                                                            The template capabilities are still, IMO, the easiest to use of any template engine out there. Because they just use the language constructs the programmer already knows.

                                                          1. 10

                                                              How many connections to Google does it make while compiling/booting?

                                                            1. 11

                                                              It’s a shame Google can’t run open-source projects. Fuchsia looks like one of the more interesting operating systems but as long as Google has complete control over what goes in and no open governance it’s not something I’d be interested in contributing to.

                                                              1. 11

                                                                To be fair to Google - they’re doing work in the open that other companies would do privately. While they say they welcome contributions they’re not (AFAIK) pretending that the governance is anything it’s not. On their governance page, “Google steers the direction of Fuchsia and makes platform decisions related to Fuchsia” – honest if not the Platonic ideal of FOSS governance.

                                                                To put it another way - they’re not aiming for something like the Linux kernel. They know how to run that kind of project, I’m sure, but the trade-off would be to (potentially) sacrifice their product roadmap for a more egalitarian governance.

                                                                Given that they seem to have some product goals in mind, it’s not surprising or wrong for them to take the approach they’re taking so long as they’re honest about that. At a later date they may decide the goals for the project require a more inclusive model.

                                                                If the road to Hell is paved with good intentions, the road to disappointment is likely paved with the expectation that single-vendor initiatives like this will be structured altruistically.

                                                                1. 6

                                                                  The governance model is pretty similar to Rust’s in terms of transparency: https://fuchsia.dev/fuchsia-src/contribute/governance/rfcs

                                                                    Imperfect in that currently almost all development is done by Google employees, but that’s a known bug. But (to evolve the animal metaphors) there’s a chicken-and-egg issue here: without significant external contributions it’s hard for external contributors to have a significant impact on major technical decisions.

                                                                  This same issue exists for other OSes like Debian, FreeBSD, etc - it’s the major contributors that have the biggest decision making impact. Fuchsia has the disadvantage that it’s been bootstrapped by a company so most of the contributors, initially, work for a single company.

                                                                  I’m optimistic that over time the diversity of contributors will improve to match that of other projects.

                                                                  1. 4

                                                                    A real shame indeed. Its design decisions seem very interesting.

                                                                    1. 1

                                                                        Yeah, I’d bet the moment they have what they wanted it’ll be closed down, because this is ultimately the everything-owned-without-GPL OS for Google.

                                                                    2. 7

                                                                      Probably zero. Or if you’re using 8.8.8.8 for your DNS probably less than Windows or macOS.

                                                                      1. 5

                                                                          They all start like this, but in the end it will be another Chrome.

                                                                        1. 5

                                                                          Co-developed with companies as diverse as Opera, Brave, Microsoft and Igalia, as well as many independent individuals? As a Fuchsia developer that’s a future I aspire to.

                                                                          1. 13

                                                                            Chrome, which refused to accept FreeBSD patches with a community willing to support them because of the maintenance burden relative to market share, yet, accepted Fuchsia patches passing the same maintenance burden on to the rest of the contributors, in spite of an even smaller market share? If I were an antitrust regulator looking at Google, their management of the Chromium project is one of the first places that I’d look. Good luck building an Android competitor if you’re not Google: you need Google to accept your patches upstream to be able to support the dominant web browser. Not, in my mind, a great example of Google running an inclusive open source project.

                                                                            1. 6

                                                                              It’s not just about whose labor goes into the project, but about who decides the project’s roadmap. That said, maybe it’s about time to get the capability-security community interested in forking Fuchsia for our own needs.

                                                                              1. 3

                                                                                You should be more worried about the “goma is required to build Chrome in under 5 hours” future, in my opinion.

                                                                                1. 0

                                                                                    Keep aspiring on a Google salary. It would be good to disclose the conflict of interest, btw.

                                                                                  1. 11

                                                                                    I mentioned that I’m a Fuchsia developer. I’m not sure what my conflict of interest here is. I’m interested in promoting user freedom by working on open source software across the stack and have managed to find people to pay me to do that some of the time, though generally less than I would have made had I focused on monetary reward rather than the impact of my work.

                                                                            2. 5

                                                                              The website doesn’t have working CSS without allowing gstatic.com, so I’d guess at least one?

                                                                              1. 1

                                                                                /me clutches pearls

                                                                            1. 1

                                                                              Very high assurance in unsafe blocks especially =)

                                                                              1. 1

                                                                                  You laugh, but there’s the Rudra project, which does static analysis of unsafe Rust code using extra information from the context of the safe Rust code interacting with it.

                                                                              1. 7

                                                                                Mozilla’s vision is what people find in librewolf.

                                                                                1. 2

                                                                                  37 GB torrent of fresh meat. Claimed…

                                                                                  1. 2

                                                                                      I would never transfer the rights to my abandoned repos. Don’t like it? Fork it!

                                                                                    1. 3

                                                                                      I’ve added contributors with full rights, enabled forks, and done transfers. The problem is in branding — lots of people assume a fork is secondary. Even having a README do the redirect is clunky, it doesn’t transfer “the space it occupies in people’s minds”. Full transfers do, mostly, transfer the branding. Adding contributors is probably the cleanest from a branding perspective, but doesn’t credit the new maintainers as much as I’d like. Forks can work, but it takes a lot of effort to restart the community around the new brand.

                                                                                      1. 1

                                                                                          You did; I wouldn’t.