1. 11

    why do people feel the need to use a framework for everything, like the BDD testing frameworks in this article? i really don’t see the value of it. it’s just another dependency to carry around, and i can’t just read and understand what is happening.

    what is gained by writing:

    Expect(resp.StatusCode).To(Equal(http.StatusOK))
    

    instead of

    if resp.StatusCode != http.StatusOK { 
        t.Fail() 
    }
    
    1. 11

      I don’t use that particular testing framework, but the thing I’d expect to gain by using it is better test failure messages. I use testify at work for almost precisely that reason. require.Equal(t, valueA, valueB) provides a lot of value, for example. I tried not to use any additional test helpers in the beginning, probably because we have similar sensibilities. But writing good tests that also have good messages when they fail got pretty old pretty fast.
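
      For instance (a minimal sketch, not from the article; the throwaway test server is just for illustration and the failure text is paraphrased), a failing assertion reports both values instead of a bare failure:

      package demo_test

      import (
          "net/http"
          "net/http/httptest"
          "testing"

          "github.com/stretchr/testify/require"
      )

      func TestStatus(t *testing.T) {
          // a server that always answers 404, so the assertion below fails
          srv := httptest.NewServer(http.NotFoundHandler())
          defer srv.Close()

          resp, err := http.Get(srv.URL)
          require.NoError(t, err)
          defer resp.Body.Close()

          // on failure, testify prints something like:
          //   Not equal: expected: 200, actual: 404
          require.Equal(t, http.StatusOK, resp.StatusCode)
      }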

      1. 3

        ok, i can see that good messages may help, though i’d still rather use t.Fatal/Fatalf/Error/Errorf, maybe paired with a custom type implementing error (admitting that it’s a bit more to type) if a custom DSL is the alternative :)
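
        e.g. hand-rolled with the stdlib (reusing the resp from the snippet above), where the helpful message just has to be spelled out manually:

        if resp.StatusCode != http.StatusOK {
            t.Fatalf("unexpected status: got %d, want %d", resp.StatusCode, http.StatusOK)
        }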

        testify looks interesting though!

        1. 4

          testify is nice because it isn’t really a framework, unless maybe you start using its “suite” functionality, which is admittedly pretty lightweight. But the rest of the library drops right into the normal Go unit testing harness, which I like.

          I did try your methods for a while, but it was just untenable. I eventually just stopped writing good failure messages, which I just regretted later when trying to debug test failures. :-)

          testify is a nice middle ground that doesn’t force you to play by their rules, but adds a lot of nice conveniences.

      2. 6

        The former probably gives a much better failure message (e.g. something like “expected value ‘200’ but got value ‘500’”, rather than “assertion failed”).

        That’s obviously not inherent to the complicated testing DSL, though. In general, I’m a fan of more expressive assert statements that can give better indications of what went wrong; I’m not a big fan of heavyweight testing frameworks or assertion DSLs because, like you, I generally find they badly obfuscate what’s actually going on in the test code.

        1. 4

          yeah, with the caveats listed by others, I sort of think this is a particularly egregious example of strange library usage/design. in theory, anyone (read: not just engineers) is supposed to be able to write a BDD spec. However, for that to be possible, it should be written in natural language. Behat specs are a good example of this: http://behat.org/en/latest/. But this one is just a DSL, which misses the point I think…

          1. 3

            However, for that to be possible, it should be written in natural language. Behat specs are a good example of this: http://behat.org/en/latest/. But this one is just a DSL, which misses the point I think…

            I’d say that the thing behat does is a real DSL (like, with a parser and stuff). The library from the article just has fancy named functions which are a bit of a black box to me.

            Just a thought: One could maybe write a compiler for a behat-like language which generates stdlib Go-tests, using type information found in the tested package, instead of using interface{} and reflect. That’d be a bit of work though ;)
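
            Purely to illustrate that thought (the spec syntax and every generated name here are invented), a step like “Then the response status should be 200” might compile down to a plain stdlib test:

            // generated from: "Then the response status should be 200"
            func TestThenTheResponseStatusShouldBe200(t *testing.T) {
                resp := fixtureResponse(t) // hypothetical helper the compiler would emit from the Given/When steps
                defer resp.Body.Close()
                if resp.StatusCode != 200 {
                    t.Fatalf("response status: got %d, want 200", resp.StatusCode)
                }
            }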

        1. 5

          this full-throttle tinfoily panic mode of some people right now: “move to hosted gitlab!!1 that will show ‘em!!11”. i’m not anti-gitlab, but hosted gitlab has the same set of problems as github. like, for example, being bought by $EVILCOMPANY

          if microsoft now decides there will be no more free repos, it’s ok! they can do with their property however they please (just like before that acquisition github could’ve done). don’t bitch about the free lunch not tasting right. that is the deal if you use resources of others for free.

          1. 3

            I think for most people, if gitlab took a similar turn, a self-hosted (or pay someone else to host it) OSS version of GitLab would be fine.

            People use gitlab.com because it’s hands-off, not because it’s the commercial version for free.

            1. 3

              It’s not “that will show em” at all. No idea where that is being quoted from.
              I can say my statement was: IF the MS acquisition bothered you, and there is enough historical precedent that it may reasonably do so for reasonable people, then note that Gitlab does currently have 1-click repository migration from GitHub. In addition, it is also a possibility that Github may unilaterally sever that capability IF the migration becomes a flood. Ergo if you are going to do it, then do so now and don’t wait.

              1. 1

                it was a purposely overstated made-up quote (easily spotted by the liberal use of “!!111”).

                microsoft is an actor on the market and as a result does things to maximize profits. one only has to take that into account when choosing to use their services. i’m not overly happy with it either, but gitlab is an actor too and plays by the same rules, including the possibility of being acquired. just self host, it’s not even hard, scaleway has prepared images for that for example.

                regarding the importing functionality: if they break the mechanisms to do that, i guess many other things won’t work either, like bots acting on issues, etc. i don’t think they will break the whole ecosystem, as effectively that’s what they’ve paid for. maybe they’ll do that in the distant future, like twitter breaking their api for clients.

              2. 2

                Imagine what would happen if MSFT, after buying GH, also gets TravisCI, which i believe they will do :)

                1. 2

                  It should also be quite a bit cheaper, afaik they never took VC money.

              1. 12

                Output should be simple to parse and compose

                No JSON, please.

                Yes, every tool should have a custom format that needs a badly cobbled together parser (in awk or whatever) that will break once the format is changed slightly or the output accidentally contains a space. No, jq doesn’t exist, can’t be fitted into Unix pipelines and we will be stuck with sed and awk until the end of times, occasionally trying to solve the worst failures with find -print0 and xargs -0.

                1. 11

                  JSON replaces these problems with different ones. Different tools will use different constructs inside JSON (named lists, unnamed ones, different layouts and nesting strategies).

                  In a JSON shell tool world you will have to spend time parsing and re-arranging JSON data between tools; as well as constructing it manually as inputs. I think that would end up being just as hacky as the horrid stuff we do today (let’s not mention IFS and quoting abuse :D).


                  Sidestory: several months back I had a co-worker who wanted me to make some code that parsed his data stream and did something with it (plotting related, IIRC).

                  Me: “Could I have these numbers in one-record-per-row plaintext format please?”

                  Co: “Can I send them to you in JSON instead?”

                  Me: “Sure. What will be the format inside the JSON?”

                  Co: “…. it’ll just be JSON.”

                  Me: “But in what form? Will there be a list? Name of the elements inside it?”

                  Co: “…”

                  Me: “Can you write me an example JSON message and send it to me, that might be easier.”

                  Co: “Why do you need that, it’ll be in JSON?”

                  Grrr :P


                  Anyway, JSON is a format, but you still need a format inside this format. Element names, overall structures. Using JSON does not make every tool use the same format, that’s strictly impossible. One tool’s stage1.input-file is different to another tool’s output-file.[5].filename; especially if those tools are for different tasks.
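
                  To make that concrete, here is a contrived Go sketch (both shapes and all field names are invented): the same file described by two tools, each needing its own traversal code:

                  package main

                  import (
                      "encoding/json"
                      "fmt"
                  )

                  // two invented tool outputs describing the very same file
                  const toolA = `{"stage1": {"input-file": "data.csv"}}`
                  const toolB = `{"output-file": [{"filename": "data.csv"}]}`

                  func main() {
                      var a struct {
                          Stage1 struct {
                              InputFile string `json:"input-file"`
                          } `json:"stage1"`
                      }
                      var b struct {
                          OutputFile []struct {
                              Filename string `json:"filename"`
                          } `json:"output-file"`
                      }
                      // both are “just JSON”, yet neither parser can read the other’s output
                      if err := json.Unmarshal([]byte(toolA), &a); err != nil {
                          panic(err)
                      }
                      if err := json.Unmarshal([]byte(toolB), &b); err != nil {
                          panic(err)
                      }
                      fmt.Println(a.Stage1.InputFile, b.OutputFile[0].Filename)
                  }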

                  1. 3

                    I think that would end up being just as hacky as the horrid stuff we do today (let’s not mention IFS and quoting abuse :D).

                    Except that standardized, popular formats like JSON come with the side benefit of tool ecosystems that solve most problems the format can bring. Autogenerators, transformers, and so on come with this if it’s a data format. We usually don’t get this when it’s random people creating formats for their own use. We have to fully customize the part handling the format rather than adapt an existing one.

                    1. 2

                      Still, even XML, which had the best tooling I have used so far for a general-purpose format (XSLT and XSD in primis), was unable to handle partial results.

                      The issue is probably due to their history, as a representation of a complete document / data structure.

                      Even s-expressions (the simplest format of the family) have the same issue.

                      Now we should also note that pipelines can be created on the fly, even from binary data manipulations. So a single dictated format would probably pose too many restrictions, if you want the system to actually enforce and validate it.

                      1. 2

                        “Still, even XML”

                        XML and its ecosystem were extremely complex. I used s-expressions with partial results in the past. You just have to structure the data to make it easy to get a piece at a time. I can’t recall the details right now. Another format I used, trying to balance efficiency, flexibility, and complexity, was XDR. Too bad it didn’t get more attention.

                        “So a single dictated format would probably pose too restrictions, if you want the system to actually enforce and validate it.”

                        The L4 family usually handles that by standardizing on an interface description language, with all of it auto-generated. Works well enough for them. Camkes is an example.

                        1. 3

                          XML and its ecosystem were extremely complex.

                          It is coherent, powerful and flexible.

                          One might argue that it’s too flexible or too powerful, so that you can solve any of the problems it solves with simpler custom languages. And I would agree to a large extent.

                          But, for example, XHTML was a perfect use case. Indeed, to do what I did back then with XSLT, now people use Javascript, which is less coherent and way more powerful, and in no way simpler.

                          The L4 family usually handles that by standardizing on an interface description language, with all of it auto-generated.

                          Yes but they generate OS modules that are composed at build time.

                          Pipelines are integrated on the fly.

                          I really like strongly typed and standard formats but the tradeoff here is about composability.

                          UNIX turned every communication into byte streams.

                          Bytes bite at times, but they are standard, after all! Their interpretation is not, but that’s what provides the flexibility.

                          1. 4

                            Indeed, to do what I did back then with XSLT, now people use Javascript, which is less coherent and way more powerful, and in no way simpler.

                            While I am definitely not a proponent of JavaScript, computations in XSLT are incredibly verbose and convoluted, mainly because XSLT for some reason needs to be XML and XML is just a poor syntax for actual programming.

                            That, and the fact that my transformations worked fine with xsltproc but did just nothing in browsers, with no decent way to debug the problem, made me put away XSLT as an esolang: lots of fun for an afternoon, not what I would use to actually get things done.

                            That said, I’d take XML output from Unix tools and some kind of jq-like processor any day over manually parsing text out of byte streams.

                            1. 2

                              I loved it back when I did HTML and wanted something more flexible that machines could handle. XHTML was my use case as well. Once I was a better programmer, I realized it was probably an overkill standard that could’ve been something simpler with a series of tools each doing their little job. Maybe even different formats for different kinds of things. W3C ended up creating a bunch of those anyway.

                              “Pipelines are integrated on the fly.”

                              Maybe put it in the OS like a JIT. As far as byte streams go, that’s mostly what XDR did: they were just minimally-structured byte streams. Just tie the data types, layouts, and so on to whatever language the OS or platform uses the most.

                      2. 3

                        JSON replaces these problems with different ones. Different tools will use different constructs inside JSON (named lists, unnamed ones, different layouts and nesting strategies).

                        This is true, but it does not mean having some kind of common interchange format does not improve things. So yes, it does not tell you what the data will contain (but “custom text format, possibly tab separated” is, again, not better). I know the problem, since I often work with JSON that contains or misses things. But the answer is not to avoid JSON but rather to have specifications. JSON has a number of possible schema formats, which puts it at a big advantage over most custom formats.

                        The other alternative is of course something like ProtoBuf, because it forces the use of proto files, which is at least some kind of specification. That throws away the human readability, which I didn’t want to suggest to a Unix crowd.

                        Thinking about it, an established binary interchange format with schemas and a transport is in some ways reminiscent of COM & CORBA in the nineties.

                      3. 7

                        will break once the format is changed slightly

                        Doesn’t this happen with json too?
                        A slight change in the key names or turning a string into a list of strings and the recipient won’t be able to handle the input anyway.

                        the output accidentally contains a space.

                        Or the output accidentally contains a comma: depending on the parser, the behaviour will change.

                        No, jq doesn’t exis…

                        Jq is great, but I would not say JSON should be the default output when you want composable programs.

                        For example, the JSON root is always a whole object, and this won’t work for streams that are produced slowly.

                        1. 5

                          will break once the format is changed slightly

                          Doesn’t this happen with json too?

                          Using a whitespace-separated table such as suggested in the article is somewhat vulnerable to continuing to appear to work after the format has changed while actually misinterpreting the data (e.g. if you inserted a new column at the beginning, your pipeline could happily continue, since all it needs is at least two columns with numbers in them). JSON is more likely to either continue working correctly and ignore the new column, or fail with an error. Arguably it is the key-value aspect that’s helpful here, not specifically JSON. As you point out, there are other issues with using JSON in a pipeline.
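
                          A tiny illustration of that failure mode (the columns are invented): the positional parser still “succeeds” after a new first column appears, silently returning the wrong field, while a key-based lookup either keeps working or fails loudly:

                          package main

                          import (
                              "fmt"
                              "strings"
                          )

                          func main() {
                              before := "1024 report.txt"      // yesterday: "<bytes> <name>"
                              after := "0644 1024 report.txt"  // today: a mode column was prepended

                              // positional parsing misreads the mode as the size, with no error
                              fmt.Println(strings.Fields(before)[0]) // 1024
                              fmt.Println(strings.Fields(after)[0])  // 0644, silently wrong
                          }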

                        2. 3

                          On the other hand, most Unix tools use tabular format or key value format. I do agree though that the lack of guidelines makes it annoying to compose.

                          1. 2

                            Hands up everybody that has to write parsers for zpool status and its load-bearing whitespace to do ZFS health monitoring.

                            1. 2

                              In my day-to-day work, there are times when I wish some tools would produce JSON and other times when I wish a JSON output was just textual (as recommended in the article). Ideally, tools should be able to produce different kinds of outputs, and I find libxo (mentioned by @apy) very interesting.

                              1. 2

                                I spent very little time thinking about this after reading your comment, and wonder how, for example, the core utils would look if they accepted/returned JSON as well as plain text.

                                A priori we have this awful problem of making everyone understand everyone else’s input and output schemas, but that might not be necessary. For any tool that expects a file as input, we make it accept any JSON object that contains the key-value pair "file": "something". For tools that expect multiple files, have them take an array of such objects. Tools that return files, like ls for example, can then return whatever they want in their JSON objects, as long as those objects contain "file": "something". Then we should get to keep chaining pipes of stuff together without having to write ungodly amounts of jq between them.

                                I have no idea how much people have tried doing this or anything similar. Is there prior art?
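
                                A sketch of what such a tool could look like in Go (the convention and the "file" key are of course the made-up part): read a stream of JSON objects from stdin, act on the "file" key, tolerate and ignore everything else:

                                package main

                                import (
                                    "encoding/json"
                                    "fmt"
                                    "io"
                                    "os"
                                )

                                func main() {
                                    dec := json.NewDecoder(os.Stdin)
                                    for {
                                        // any extra keys in the object are simply ignored
                                        var obj struct {
                                            File string `json:"file"`
                                        }
                                        if err := dec.Decode(&obj); err == io.EOF {
                                            break
                                        } else if err != nil {
                                            fmt.Fprintln(os.Stderr, err)
                                            os.Exit(1)
                                        }
                                        if obj.File == "" {
                                            continue // no "file" key: not meant for us
                                        }
                                        // a real tool would do its work on obj.File here; this one just echoes
                                        fmt.Printf("{\"file\": %q}\n", obj.File)
                                    }
                                }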

                                1. 9

                                  In FreeBSD we have libxo, which a lot of the CLI programs are getting support for. This lets the program print its output and have it translated to JSON, HTML, or other output forms automatically. So that would allow people to experiment with various formats (although it doesn’t handle reading in the output).

                                  But as @Shamar points out, one problem with JSON is that you need to parse the whole thing before you can do much with it. One can hack around it but then they are kind of abusing JSON.

                                  1. 2

                                    That looks like a fantastic tool, thanks for writing about it. Is there a concerted effort in FreeBSD (or other communities) to use libxo more?

                                    1. 1

                                      FreeBSD definitely has a concerted effort to use it, I’m not sure about elsewhere. For a simple example, you can check out wc:

                                      apy@bsdell ~> wc -l --libxo=dtrt dmesg.log
                                           238 dmesg.log
                                      apy@bsdell ~> wc -l --libxo=json dmesg.log
                                      {"wc": {"file": [{"lines":238,"filename":"dmesg.log"}]}
                                      }
                                      
                                2. 1

                                  powershell uses objects for its pipelines, i think it even runs on linux nowadays.

                                  i like json, but for shell pipelining it’s not ideal:

                                  • the unstructured nature of the classic output is a core feature. you can easily mangle it in ways the program’s author never assumed, and that makes it powerful.

                                  • with line-based records you can parse incomplete (as in: the process is not finished) data more easily. you just have to split after a newline. with json, technically you can’t begin using the data until a (sub)object is completely parsed. using half-parsed objects seems not so wise. (see the sketch after this list)

                                  • if you output json, you probably have to keep the structure of the object tree which you generated in memory, like “currently i’m in a list in an object in a list”. thats not ideal sometimes (one doesn’t have to use real serialization all the time, but it’s nicer than to just print the correct tokens at the right places).

                                  • json is “java script object notation”. not everything is ideally represented as an object. that’s why relational databases are still in use.
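
                                  a small sketch of the second point (the record shape is made up): with one json object per line, each record is usable the moment its newline arrives, while a single top-level array would force buffering until the closing bracket:

                                  package main

                                  import (
                                      "bufio"
                                      "encoding/json"
                                      "fmt"
                                      "os"
                                  )

                                  func main() {
                                      sc := bufio.NewScanner(os.Stdin)
                                      for sc.Scan() {
                                          // each line is a complete json object, e.g. {"name": "x", "size": 1}
                                          var rec struct {
                                              Name string `json:"name"`
                                              Size int    `json:"size"`
                                          }
                                          if err := json.Unmarshal(sc.Bytes(), &rec); err != nil {
                                              continue // one bad line doesn’t stall the rest of the stream
                                          }
                                          fmt.Println(rec.Name, rec.Size)
                                      }
                                  }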

                                  edit: be nicer ;)

                                1. 15

                                  world-open QA-less package ecosystems (NPM, go get)

                                  This is one I’m increasingly grumpy about. I wish more ecosystems would establish a gold set of packages that have complete test coverage, complete API documentation, and proper semantic versioning.

                                  1. 4

                                    world-open QA-less package ecosystems (NPM, go get)

                                      i’d argue that go get is not a package ecosystem. it’s just a (historic) convenience tool, which was good enough for the initial use (inside an organization). furthermore, i like the approach better than the centralized language package systems. nobody checks all the packages in pypi or rubygems. using a known-good git repo isn’t worse, maybe it’s even better, as there is not another link in the chain which could break, since the original repository is used instead of a somehow-packaged copy.

                                    I wish more ecosystems would establish a gold set of packages that have complete test coverage, complete API documentation, and proper semantic versioning.

                                      python has had the batteries included for ages, and go’s standard library isn’t bad either. both are well-tested and have good documentation. in my opinion the problem is that often another 3rd party dependency gets pulled in quickly, instead of giving a second thought to whether it is really required or could be done oneself, which may spare one trouble in the future (e.g. left-pad).

                                      in some cases there is even a bit of quality control for non-standard packages: some database drivers for go are externally tested: https://github.com/golang/go/wiki/SQLDrivers

                                    1. 2

                                      Then you get the curation (and censorship) of Google Play or Apple’s Store.

                                        Maybe you want more of the Linux package repo model, where you have the official repo (Debian, RedHat, Gentoo Portage), some optional non-oss or slightly less official repos (Fedora EPEL), and then always have the option to add 3rd party vendor repos with their own signing keys (PPA, opensuse build service, Gentoo Portage overlays).

                                      I really wish Google Play had the option of adding other package trees. I feel like Apple and Google took a great concept and totally fucked it up. Ubuntu Snap is going in the same (wrong) direction.

                                      1. 2

                                        On Android it’s certainly possible to install F-Droid, and get access to an alternate package management ecosystem. I think I had to sideload the F-Droid APK to get it to work though, which not every user would know how to do easily (I just checked, it doesn’t seem to be available in the play store).

                                    1. 1

                                      There is a typo in the title (decontructing -> deconstructing).

                                      1. 2

                                        it’s in the article too, so i’d keep it that way here

                                      1. 13

                                        I have other things going on in the pixel mines, but there are a couple parts of this that I don’t think illustrate the points the author wants to make.

                                        But this criticism largely misses the point. It might be nice to have very small and simple utilities, but once you’ve released them to the public, they’ve become system boundaries and now you can’t change them in backwards-incompatible ways.

                                        This is not an argument for making larger tools: is it better to have large and weird complicated system boundaries you can’t change, or small ones you can’t change?

                                        While Plan 9 can claim some kind of ideological purity because it used a /net file system to expose the network to applications, we’re perfectly capable of accomplishing some of the same things with netcat on any POSIX system today. It’s not as critical to making the shell useful.

                                        This is a gross oversimplification and glossing over of what Plan 9 enabled. It wasn’t mere “ideological purity”, but a comprehensive philosophy that enabled an environment with neat tricks.

                                        The author might as well have said something similar about the “ideological purity of using virtual memory”, since some of the same things can be accomplished with cooperative multitasking!

                                        1. 4

                                          This is a gross oversimplification and glossing over of what Plan 9 enabled. It wasn’t mere “ideological purity”, but a comprehensive philosophy that enabled an environment with neat tricks.

                                          Not only tricks, but a whole concept of how resources can be used: use the file storage on one system, the input/output (screen, mouse, etc.) of another, and run the programs somewhere with a strong cpu, all by composing filesystems. Meanwhile in 2018 we are stuck with ugly hacks and different protocols for everything, trying to fix problems by adding another layer on top of things (e.g. pulseaudio on top of alsa).

                                          And, from the article:

                                          And as a filesystem, you start having issues if you need to make atomic, transactional changes to multiple files at once. Good luck.

                                          That’s an issue with the design of the concrete filesystem, not with the filesystem abstraction. You could write settings to a bunch of files which live together in a directory and commit them with a write to a single control file.

                                          Going beyond text streams

                                          PowerShell is a good solution, but the problem we have with pipelines on current unix-style systems isn’t that the data is text, but that the text is ill-formatted. Many things return some cute markup. That makes it more difficult to parse than necessary.

                                          1. 3

                                            Actually Unix proposed the file as a universal interface before Plan 9 was a dream.
                                            The issue was that temporary convenience and the hope that “worse is better” put Unix in a local minimum where that interface was not universal at all (sockets, ioctl, fctl, signals…).
                                            Pike tried to escape that minimum with Plan 9, where almost every kernel and user service is provided as a filesystem and you can stack filesystems like you compose pipes in Unix.

                                            1. 10

                                              Put a quarter in the Plan9 file-vs-filesystem “well actually” jar ;)

                                            1. 2

                                              go sometimes has it in its docs, for example: https://golang.org/pkg/sort/#Sort

                                              1. 3

                                                 I hadn’t seen that. I know that Russ Cox is pretty algorithmically inclined - his analysis of the algorithms for the new dependency versioning mechanism is really thoughtful, but usually I see that as a comment in the implementation (which is generally more appropriate).

                                                1. 2

                                                  that’s one thing i like about go: they value the research that has been done and try to implement the best solution (within sane limits).

                                              1. 5

                                                2 cents: this is the reason why federated protocols make more sense, instead of centralizing, but moxie is against federation.

                                                the infrastructure should be owned by the users.

                                                i never quite got why signal is so hyped, you essentially just choose to trust them and not whatsapp/telegram/whatever with your metadata.

                                                1. 3

                                                   There’s always going to be a question of trust, and OWS is more independent than your examples. For something federated to be as secure and trustworthy, you’ve got to have easy-to-use clients and trust in the maintainership of the servers and the code base.

                                                  1. 3

                                                    While signal is open source, what should keep them from deploying not that version to their servers, but a slightly modified one? Even if that’s not a problem with the chats being e2e encrypted, why should i trust them with the metadata? With federation I (or a party I know and trust) can run a server, and I am still able to talk to people on other servers (the other party has to be trusted with metadata too, but that’s inherent to the problem).

                                                    I just don’t like the OWS cult. The classic advice “use signal and everything’s gonna be fine” is misguided. OWS is a single point of failure. People have to learn how technology works. Not the gory crypto details, but at least the 10000ft view. They use cloud resources. I’d expect that there are some parties that are more than interested in access to those servers. I know that this sounds a bit tin-foil-heady, but with the risk profile of signal, the first thing I’d do would be having my own infrastructure I can control. It’s just a compromise which doesn’t match the whole secure-communications idea.

                                                    Imagine: someone other than OWS gets access to the cloudy servers and deploys a version of the signal server which exploits a flaw in the signal client, maybe a protocol parsing bug. I don’t know how well the client sanitizes the communication with the server, but I’ll guess the expectation is that the server is well-behaved. Bingo, possibly all clients are pwnd. With federated services this seems to be much harder, as a) other parties should always expect malign behavior in such protocols and b) just the clients of this one instance are affected. Other servers are probably running a different OS, with a different setup, in different countries, which makes attacking every server much more complicated.

                                                    edit: fix b0rken english

                                                1. 4

                                                  Is there anyone who can review a distro without reviewing some desktop manager?

                                                  Is there anyone who understands that desktop managers are independent of distros?

                                                  1. 5

                                                     distros are mostly the same under the hood: linux, systemd and deb/rpm packages.

                                                    the interesting parts are things like “will it destroy itself during distro upgrades” but those are rarely included in reviews

                                                  1. 21

                                                    I detest paying for software except when it occupies certain specialized cases or represents something more akin to work of art, such as the video game Portal.

                                                    I detest this attitude. He probably also uses an ad blocker and complains about how companies sell his personal information. You can’t be an open source advocate if you detest supporting the engineers that build open source software.

                                                    But only when it’s on sale.

                                                    I’m literally disgusted.

                                                    1. 8

                                                      It’s reasonable to disagree with the quote about paying for software. But how on earth does this defense of the advertising industry come in?

                                                      Certainly it’s possible to be an open source advocate and use an ad blocker and oppose the selling of personal information.

                                                      1. 2

                                                         Certainly. Actually, I would describe myself that way. But you can’t believe that, and also believe you’re entitled to free-as-in-beer software. Especially the high quality “just works” software the author describes. It’s a contradiction.

                                                        Alternative revenue streams like advertising exist to power products people won’t pay for. I don’t know many software engineers that want to put advertising in their products, rather they have to in order to avoid losing money. That’s why I happily pay for quality software like Dash and Working Copy, and donate to open source projects.

                                                        1. 1

                                                          But you can’t believe that, and also believe you’re entitled to free-as-in-beer software.

                                                          I don’t get that sort of vibe from this article. He doesn’t seem to be entitled at all.

                                                      2. 4

                                                        “free as in free beer”!

                                                        1. 1

                                                          I can’t afford to have a different attitude.

                                                        1. 5

                                                          They claim that the gopher is still there but I didn’t see it anywhere…

                                                          https://mobile.twitter.com/golang/status/989622490719838210

                                                           “Rest easy, our beloved Gopher Mascot remains at the center of our brand.”

                                                          and why on earth is this downvoted off topic?

                                                          1. 4

                                                            https://twitter.com/rob_pike/status/989930843433979904

                                                            Rob Pike seconding this.

                                                             Also, this is pretty relevant because when people think “golang logo” they typically think of the gopher. I’m not sure people even realized there was a hand-drawn golang text logo before this announcement.

                                                            1. 3

                                                              It had two speed lines. Go got faster, so they added a 3rd. Presumably, there’s room for more speed lines as Go’s speed improves.

                                                          1. 3
                                                            1. The relative difficulty of running your own as an absolute beginner

                                                             yes, it’s difficult. that’s because one has to know how things work to make them work. we have to get away from this “computers are easy!” thing. they aren’t, and everything that pretends to be easy has a trade-off (privacy seems to be the current one). analogy: cars (even bikes) are seldom built by their owners; the trade-off for them being easy and comfortable to use is high repair costs, as things are more complex. technology isn’t easy, even if everybody wants one to believe that.

                                                            1. The eventual centralization on top of the most well-run versions (like Matrix)

                                                             there will always be bigger instances. reasons for running your own instance are either that you find it interesting or that you don’t trust any of the existing ones (or both). one doesn’t have to federate just because it’s possible. the important thing is that it is possible.

                                                            1. 1

                                                              i’ve always wondered if one could use gpus to speed up prolog?

                                                              1. 4

                                                                The way prolog is written in practice tends to lean pretty heavily on the order in which things are evaluated – you make sure predicates fail fast, and to do that, you take advantage of the knowledge that the order in which predicates are defined is the order in which they are evaluated (and then you use cuts to prevent other paths from being evaluated at all). A lot of real code would fail if it got implicitly parallelized in any way. (This is one of the reasons I didn’t want to keep compatibility.)

                                                                It’s pretty trivial to make a prolog-like language that implicitly parallelizes most back-tracking. (In fact, that’s most of what this is.) But, when used naturally, it would cease to have the same kind of operation ordering guarantees. (You could backtrack on the results after parallel execution, in order to figure out which path should have run first, but there aren’t enough prolog programmers to justify keeping compatibility IMO.)

                                                                I’m not sure GPUs would be more beneficial than having extra conventional CPU threads, since what we’re doing is basically a search. However, maybe there’s some way to model a tree search as matrix multiplication at compile time or something. (I don’t really have enough of a math background to say. They managed to do it with neural nets, so I guess anything’s possible.)

                                                                1. 1

                                                                   thanks for the reply! i don’t really know much about prolog, but when i last was doing some cuda stuff i thought about this. i didn’t know that evaluation order is used that much in practice.

                                                                  maybe tree searches could be somewhat parallelized when stored like a closure table for sql, but that’s a wild uneducated guess :)

                                                                1. 10

                                                                  I still can’t believe Go punted on providing a way to turn an enum value into a string without code generation. What a spectacular waste of an opportunity to do a better C.
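
                                                                   The usual workaround (hand-written here; the stringer tool run via go generate produces roughly this for you) looks like:

                                                                   package demo

                                                                   import "fmt"

                                                                   type Weekday int

                                                                   const (
                                                                       Sunday Weekday = iota
                                                                       Monday
                                                                   )

                                                                   // Go won’t derive this; it’s either hand-written or generated.
                                                                   func (d Weekday) String() string {
                                                                       switch d {
                                                                       case Sunday:
                                                                           return "Sunday"
                                                                       case Monday:
                                                                           return "Monday"
                                                                       default:
                                                                           return fmt.Sprintf("Weekday(%d)", int(d))
                                                                       }
                                                                   }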

                                                                  1. 5

                                                                    Dismissive comments like this are not constructive and do not belong in any environment that encourages the evolution of opinion. If you have a strong opinion about something (e.g. code-generation) then back up that opinion with a strong argument.

                                                                    1. 3

                                                                      please, not this discussion again.

                                                                    1. 3

                                                                      you can get free dns services (including just-slave) from hurricane electric: https://dns.he.net/

                                                                      1. 3

                                                                        Also, Cloudflare provides DNS in their free plan. Though it doesn’t cover all record types, it’s still pretty good.

                                                                        1. 2

                                                                          That’s very interesting (and quite rare), thanks! How did you hear about it?

                                                                          1. 2

                                                                            i run a he ipv6 tunnel for some years now, i guess it was recommended by a friend back then. i can only recommend the free hurricane electric services, never had any problems. they even sent me a new t-shirt when my free-ipv6-certification-sage-t-shirt got lost in international mail :)

                                                                        1. 1

                                                                          just came to my mind: in case there’s a security vulnerability in package parsing in apt (or one of the libraries it uses), an attacker could craft a package exploiting this vulnerability and inject it into an unencrypted http connection. this would likely go undetected if the exploit is sophisticated enough (which it will be, i guess).

                                                                          1. 3

                                                                            i remember mr. poettering saying that bsds aren’t relevant anymore in 2011: https://bsd.slashdot.org/story/11/07/16/0020243/lennart-poettering-bsd-isnt-relevant-anymore

                                                                            guess they are still here.

                                                                            1. 3

                                                                              “Lennart explains that he thinks BSD support is holding back a lot of Free Software development”

                                                                              I can think of something else which is holding back a lot of Free Software development.

                                                                              1. 1

                                                                                Poettering’s approach to software development seems to make it clear that he doesn’t see any value in the continued existence of the BSDs. I think that they are an important part of the larger open *Nix world/ecosystem and that Linux benefits from their existence so long as there remains some degree of compatibility. I will say that I think the BSDs’ use of a permissive rather than reciprocal licence has been bad for them in the long run.

                                                                                1. 1

                                                                                  I don’t think it’s about the *Nix world/ecosystem, or that Poettering just doesn’t care about BSDs. His attitude seems to be more that people and distros not wanting to buy in on systemd and/or pulseaudio, or in general his software, his designs, or approaches that aren’t compatible with his, are irrelevant. I think the wrong statements he made, disproved by uselessd and to a large extent by OpenRC, made that clear.

                                                                                  Now people have different opinions about systemd, but from my experience projects ignoring the rest of the world tend to turn out bad on multiple levels. Other than that, portability is often (not always) an indicator of code quality as well.

                                                                                  But going a bit off topic. What I want to say is that even though BSDs are mentioned the statement also targets every distribution not relying on systemd. It’s just that most of them aren’t exactly “mainstream”, which is why I think they are ignored and not mentioned.

                                                                              1. 16

                                                                                If folks actually read this story: Firefox is working pretty hard to make this a non-invasive, non-privacy-compromising feature change, and they’re also opening themselves up for public comment.

                                                                                Consider voicing your objections rather than simply jumping ship. Having a viable open source option is important for the web ecosystem IMO.

                                                                                1. 15

                                                                                  If folks actually read this story: Firefox is working pretty hard to make this a non-invasive, non-privacy-compromising feature change, and they’re also opening themselves up for public comment.

                                                                                  i just want a freaking browser engine with the possibility of enhancements via extensions. i don’t want to turn off magic features. i just want a browser which displays websites. the new firefox engine is really great, but i fear that now they’ll slowly fill firefox with crappy features until it’s slow again.

                                                                                  1. 3

                                                                                    What happens on the New Tab page has zero effect on page load times. If you don’t like how the New Tab page looks, customize it. Some of your options are:

                                                                                    • set it to blank
                                                                                    • install an extension that takes over
                                                                                    • customize the current experience

                                                                                    For the last option, click the little gear icon in the top right and you should see this https://imgur.com/a/1p47a where you can turn off any section that you don’t like.

                                                                                    1. 7

                                                                                      yes, i know. i still don’t want these features shipped and active by default. if i want pocket, i could install it as an extension. heck, i wouldn’t mind if they said “hey, we have this great extension named pocket, care to try it out?” on their default new page, with a link to install it. but not shipped by default.

                                                                                      1. 4

                                                                                        What happens on the New Tab page has zero effect on page load times.

                                                                                        I don’t care so much about page load times; sites which care are already fast (e.g. no JS BS), whilst those which don’t will soon bloat up to offset any increase in browser performance.

                                                                                        My main complaints with Pocket/Hello/Looking Glass/pdf.js/etc. are code bloat, install size, added complexity, attack surface, etc.

                                                                                        1. 1

                                                                                          You can’t do that on mobile.