1. 4

    My wife went to a thing called Rock Fest yesterday, to see Flogging Molly, Dropkick Murphys, and Rancid. I took the opportunity to figure out what all the fuss is about with this Fortnite thing I keep hearing about. Turns out you can install it on the Nintendo Switch, so I tried that out, and it turns out I suck massively. It’s kinda fun, in a way, but it’s not my usual cup of tea at all.

    I’ve also been meaning to do something with an old computer someone handed down to me; I’m going to add it to the roster of machines in my workshop, probably using it to host a Gitea instance and some other stuff. The BIOS firmware is a bit dated: it won’t boot from a regular USB stick and will only accept USB-FDD, which I do not own. So I had to dig up a CD-R that was not already written on, and I may be on my way to glory.

    I otherwise intend to play a bit of Minecraft with my son; he wants us to do some building challenge or other. I’d also like to finish setting up my kids’ (hand-me-down) laptops and start teaching them how to use them. Does anyone know of good open-source typing-tutor software?

    1. 2

      and it turns out I suck massively

      I think Fortnite was what two coworkers were talking about last night. One said he felt accomplished getting 5 or 6 kills in a game. You might not suck as badly as you think: it seems inherently more difficult than in some games to get kills.

      1. 4

        People should stop ranting about agile when they are in fact complaining about scrum. I’ve used scrum and kanban and there’s a big difference in workflow. I feel less stressed by the latter, and I feel it’s more realistic.

        1. 9

          The article is significant in that it comes from one of the original signatories of the Manifesto for Agile Software Development. I had a huge knee-jerk reaction when I read the title of the article, “Who the hell are you to tell me that I should abandon Agile?” and yeah it turns out the guy is actually pretty important in the story of “Agile”. Much more than I am, anyway.

          It’s also not just a rant. I forced myself to read the article before I commented, just to be sure I didn’t throw random anger at the internet. The author provides tentative solutions for getting out of the bad situation of being forced to do “Certified-Agile-as-a-product”, as sold by, well, businesses. I kind of begrudgingly agree with everything in the article, minus everything preachy about XP, of which I have no real-world experience.

        1. 2

          Outside of work, I’m trying really really hard to figure out what GUI toolkit I can use for Go that can also deploy to Windows, statically linked, that won’t be too much trouble to install. Surprisingly complicated, I’m almost tempted to go back to something like C# and Mono for the purposes of the app in question.

          Context: my father is a doctor, and they’re changing their “doctor app” (for lack of a better word in my vocabulary) to something else. The “something else” lacks a pregnancy calculator. Super easy: I need the date of last periods from the user, and from there I need to figure out 1) how many weeks along they are, 2) when weeks X, Y and Z fall from the date of last periods, to plan for follow-up appointments at weeks X, Y and Z, and finally 3) the due date.

          It’s super easy to compute in Go, because for date management I can just do something like dateOfLastPeriods.Add(280 * 24 * time.Hour) (Go has no time.Day, so you express days in hours) and I’m off to the races. I also thought it’d be cool to do it in Go and figure out how to wire a GUI to that, statically linking a library like Qt to it, but it’s less simple than anticipated.
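
          For reference, a minimal sketch of that date arithmetic in Go (the function names and the wiring are my own; the 280 days comes from the conventional 40-week count from the last period):

          ```go
          package main

          import (
              "fmt"
              "time"
          )

          const day = 24 * time.Hour // Go has no time.Day, so build it from hours

          // weeksAlong returns completed weeks of gestation as of now,
          // counted from the date of the last period.
          func weeksAlong(lastPeriod, now time.Time) int {
              return int(now.Sub(lastPeriod) / (7 * day))
          }

          // followUp returns the date falling the given number of weeks
          // after the last period, for planning appointments.
          func followUp(lastPeriod time.Time, weeks int) time.Time {
              return lastPeriod.Add(time.Duration(weeks) * 7 * day)
          }

          // dueDate applies the conventional 280 days (40 weeks).
          func dueDate(lastPeriod time.Time) time.Time {
              return lastPeriod.Add(280 * day)
          }

          func main() {
              lmp := time.Date(2018, time.January, 1, 0, 0, 0, 0, time.UTC)
              fmt.Println(weeksAlong(lmp, time.Date(2018, time.March, 5, 0, 0, 0, 0, time.UTC))) // 9
              fmt.Println(followUp(lmp, 12).Format("2006-01-02")) // 2018-03-26
              fmt.Println(dueDate(lmp).Format("2006-01-02"))      // 2018-10-08
          }
          ```

          Sticking to UTC sidesteps daylight-saving surprises with duration arithmetic; time.AddDate(0, 0, 280) is an alternative that works in calendar days.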

          1. 4

            what happens when disparate applications really do need to know about data in other applications? … Fire off a message saying “CustomerAddressUpdated” and any other application that is concerned can now listen for that message and deal with it as it sees fit.

            What happens if the message drops?

            1. 1

              PubSub should guarantee delivery.

              1. 3

                And/or you can make it so that the application maintains the events as part of its service. Then, if there’s an outage, as part of the recovery you can go read the event log and update the data as required. Any solution where event messages aren’t ephemeral will do, I think. I also think “not being able to emit an event message” should also be treated like a fairly critical incident, if you go down that path. I think many things.
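
                A bare-bones sketch of that idea in Go (the event shape and the file-based log are invented for illustration): every event is appended to a durable log before anything reacts to it, so after an outage recovery is just replaying the log and updating the data.

                ```go
                package main

                import (
                    "bufio"
                    "encoding/json"
                    "fmt"
                    "os"
                )

                // Event is a hypothetical domain event.
                type Event struct {
                    Name    string `json:"name"`
                    Payload string `json:"payload"`
                }

                // appendEvent durably records the event as one JSON line.
                func appendEvent(path string, e Event) error {
                    f, err := os.OpenFile(path, os.O_APPEND|os.O_CREATE|os.O_WRONLY, 0644)
                    if err != nil {
                        return err
                    }
                    defer f.Close()
                    b, err := json.Marshal(e)
                    if err != nil {
                        return err
                    }
                    _, err = f.Write(append(b, '\n'))
                    return err
                }

                // replay re-reads the whole log so a recovering consumer can
                // rebuild its state from the non-ephemeral event history.
                func replay(path string, apply func(Event)) error {
                    f, err := os.Open(path)
                    if err != nil {
                        return err
                    }
                    defer f.Close()
                    sc := bufio.NewScanner(f)
                    for sc.Scan() {
                        var e Event
                        if err := json.Unmarshal(sc.Bytes(), &e); err != nil {
                            return err
                        }
                        apply(e)
                    }
                    return sc.Err()
                }

                func main() {
                    const log = "events.log"
                    defer os.Remove(log)
                    appendEvent(log, Event{Name: "CustomerAddressUpdated", Payload: "customer 42"})
                    // After a crash, recovery is just a replay:
                    replay(log, func(e Event) { fmt.Println(e.Name, e.Payload) })
                }
                ```

                In this scheme a failed appendEvent is exactly the “fairly critical incident” case: if the event can’t be made durable, nothing downstream should proceed.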

                1. 2

                  How? The application can crash between updating the data and publishing the message.

                  1. 1

                    Ah, you mean the publishing app - I’d thought you meant the subscriber earlier. Treat messages the way an offline email client treats newly composed email: stick them in a queue to be sent and only remove items from the queue once read receipts have been received for them. This requires messages to be idempotent, of course.

                    1. 2

                      That doesn’t address my question, though: there are two actions happening: update the data in the DB and tell people about it. The app can fail between the first and second.

                      1. 2

                        The message creation (e.g. postgres queue) can be part of the data transaction. Otherwise you need 2PC to guarantee the operation between two subsystems. https://en.wikipedia.org/wiki/Two-phase_commit_protocol
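
                        A toy illustration of that outbox idea in Go (an in-memory stand-in; in a real system the outbox is a table written in the same database transaction as the data): the write and the pending message become visible in one atomic step, so there is no window where one exists without the other.

                        ```go
                        package main

                        import (
                            "fmt"
                            "sync"
                        )

                        // store is a toy stand-in for a database that commits a data
                        // change and an outbox message in one atomic step.
                        type store struct {
                            mu      sync.Mutex
                            address map[string]string // customer -> address ("the data")
                            outbox  []string          // pending messages, drained by a relay
                        }

                        // updateAddress applies the write and enqueues the event
                        // atomically: afterwards either both are visible, or neither.
                        func (s *store) updateAddress(customer, addr string) {
                            s.mu.Lock()
                            defer s.mu.Unlock()
                            s.address[customer] = addr
                            s.outbox = append(s.outbox, "CustomerAddressUpdated:"+customer)
                        }

                        // drain hands pending messages to the publisher; a real relay
                        // would delete rows only after the broker acknowledges them.
                        func (s *store) drain(publish func(string)) {
                            s.mu.Lock()
                            defer s.mu.Unlock()
                            for _, m := range s.outbox {
                                publish(m)
                            }
                            s.outbox = s.outbox[:0]
                        }

                        func main() {
                            s := &store{address: map[string]string{}}
                            s.updateAddress("alice", "42 Main St")
                            s.drain(func(m string) { fmt.Println("published:", m) })
                        }
                        ```

                        A crash before drain leaves the message sitting in the outbox to be picked up on restart, which is what removes the need for 2PC between the database and the broker.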

              1. 1

                This seems to assume the reader already knows many things. 800 words in, and many “characters” are introduced by name only. I have zero clue what Xanadu, Project Xanadu, ZigZag, OSMIC, xu92 or 88, or POOMfilade, or Udanax Gold or Green is. This might be a failure on my part; I scrolled a bit further in the article, grasping at a known technological term, and I saw IPFS and DHT! That got me briefly excited about the article, but at that point my interest was already lost. I’m sorry! Maybe the article would gain from a brief introduction to what Xanadu is/was?

                1. 3

                  I was assuming the audience was vaguely aware of Xanadu, although I did briefly touch on its historical importance at the beginning of the introduction. Most of the terms I dropped in the introduction have their own section later on. (The whole reason I’m writing this is that publicly accessible documentation on OSMIC and ZigZag is nearly non-existent and documentation on xu88 is hard to find.)

                  The TL;DR version, however, is: Xanadu is the project that originated most of the ideas that are now called ‘hypertext’, and hypertext systems like the World Wide Web are based on descriptions of Xanadu prototypes made in the late 1960s and early 1970s. Work continues on Xanadu to the current day, but releases are pretty rare, so the concepts haven’t been integrated into the rest of the community.

                  For a less technical introduction to Xanadu, I recommend Ted’s video series Xanadu Basics:

                1. 2

                  I’m honestly not sure I understand what this is about.

                  From what I gather, he could achieve most of his hardware goals with a Raspberry Pi, a fancy case with a battery pack, a few peripheral sensors, and a few days of coding.

                  As far as the software environment goes, Emacs is great and all, but some of his decisions seem a little forced. For example, using evil-mode is a weird choice for somebody trying to put everything into Emacs. I’m sure he has a good reason, but it’s odd.

                  1. 4

                    I think he’s describing how he does things; this is the ideal system from his perspective. I can totally appreciate that; I attack the same problem with a terminal window and pipes as the glue and Acme as the editor. <3 the plumber program.

                  1. 8

                    The solutions proposed in the article all swarm around the idea of finding a third-party source of randomness that everyone can agree on. Almost all the proposed solutions on the reddit thread do the same. (Props to this person for walking to the beat of a different drummer.)

                    I think they (or we) can do better! But, I don’t know how, yet.

                    I think the solution should be cryptographic in nature… So, I’ll try to get close to the answer by clicking anything in Wikipedia’s Cryptography Portal and connected nodes and sharing anything that looks remotely related.

                    These look really promising:

                    These look … interesting? Hopefully unnecessary.

                    1. 5

                      What about this?

                      1. Each party generates a secret string
                      2. Each party publishes the hash of their string to the entire group
                      3. Once all hashes have been published, each party publishes their original string to the entire group
                      4. The random number is the hash of the concatenated strings

                      There’s nothing in this protocol enforcing that the secret strings be random, but I believe that it’s game-theoretically in each party’s interest to do so, so as to avoid e.g. dictionary attacks. I can’t see how any party gains anything by generating a string that is anything except truly random, ensuring a random hash of the concatenated strings.

                      Am I thinking about this correctly?

                      EDIT: Ah, I see, this is basically the “commitment scheme” idea mentioned in the Wikipedia article you posted. Cool!
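
                      The four steps above can be sketched with a hash-based commitment in Go (SHA-256 here; the function names are mine):

                      ```go
                      package main

                      import (
                          "crypto/sha256"
                          "encoding/hex"
                          "fmt"
                          "sort"
                          "strings"
                      )

                      // commit is step 2: publish only the hash of your secret string.
                      func commit(secret string) string {
                          sum := sha256.Sum256([]byte(secret))
                          return hex.EncodeToString(sum[:])
                      }

                      // verify is step 3: check a revealed secret against its commitment.
                      func verify(secret, commitment string) bool {
                          return commit(secret) == commitment
                      }

                      // sharedRandom is step 4: hash the combined secrets. Sorting makes
                      // the result independent of the order in which secrets arrived.
                      func sharedRandom(secrets []string) string {
                          s := append([]string(nil), secrets...)
                          sort.Strings(s)
                          return commit(strings.Join(s, "|"))
                      }

                      func main() {
                          secrets := []string{"alice:qv3k1", "bob:zz90m", "carol:8hf2c"}
                          commitments := make([]string, len(secrets))
                          for i, s := range secrets {
                              commitments[i] = commit(s) // published first
                          }
                          for i, s := range secrets { // then everyone reveals
                              fmt.Println(verify(s, commitments[i]))
                          }
                          fmt.Println("shared:", sharedRandom(secrets))
                      }
                      ```

                      One practical caveat, matching the dictionary-attack point: short or guessable secrets can be brute-forced from their hashes before the reveal, which is why real commitment schemes include a long random nonce in the committed value.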

                      1. 1

                        I came up with a variant of this, but instead of strings, each person picks a number, which is then hashed; the resulting number is the sum of the chosen numbers, mod 20, plus 1.

                        Another thing you could do is send a message, and the millisecond it was received at, mod 20, plus 1, is your number. You would have to trust that the internet makes the timing random enough, and that you can trust your chat system, but usually you can.

                      2. 4

                        They don’t need to agree on the source of randomness; it just needs to be reasonably beyond manipulation by dev1. Like @asrp says, stuff like Bitcoin IDs will work. You could also hash the first fifty characters of the first article of a newspaper published the day of. As long as you announce the method before the event happens, and the data source is sufficiently out of your control, you’re good.

                        1. 2

                          It depends on what they mean by their constraint 2. If there’s no input from the other devs or any third party then the only remaining source of information is the first developer and so I think it cannot be done.

                        1. 7

                          This is something that has been concerning me for a while: tech companies abusing weaknesses in human behaviour to build addictions and shape emotions. One of the best things you can do for yourself is disable all notifications that aren’t urgent for you to act on right now. It stops you getting drawn away from the task you were doing every few minutes.

                          1. 4

                            Agreed, I have turned off as many notifications as I can so my phone never does anything with them. Anything important I push to my watch which just vibrates. To be sure, this is me mostly trying to give the stupid “smart” watch something to do that makes me not regret buying one.

                            That and having firewall rules to ban me from browsing the web for a while at a time. Or for the work laptop I use this: https://selfcontrolapp.com

                            To keep me from habitually clicking on sites like this one…. at least for a set duration.

                            1. 1

                              I’ve set facebook.com to localhost in my hosts file and on my home-run dnsmasq instance. Turns out the other thing I check often is this site. I don’t often participate in discussions, almost never post a thing, and have gone to page two once in what I think is months. Granted, the site does send me on a lot of wild goose chases, but the content quality is much higher than on Facebook (duh). I still go to Facebook every once in a while to see if my wife posted things, preferably pictures of my kids or dogs or herself (which is stalky, and a whole ’nother can of worms). I could do more complex stuff but I found that this keeps my digital demons at bay, most of the time.

                          1. 1

                            Don’t know what to think. Flogging a dead horse or resurrecting a dead horse?

                            1. 9

                              Definitely necrhorsemancy

                              1. 2

                                Equinecromancy? Perhaps?

                              2. 4

                                Resurrecting a horse corpse so it can then be flogged?

                              1. 3

                                What’s Core Erlang, relative to regular Erlang? I tried googling it, and it was not as useful as I thought it would be.

                                1. 4

                                  It’s a compiler IL. Several other BEAM languages use it as a target, and a lot of tooling (most notably the Dialyzer) works directly on it. In the context of the series, IIRC it’s the form that the compiler runs a lot of transformations on for optimization.

                                  1. 2

                                    I could be mistaken, but I believe I read in some white paper at some point that the Dialyzer project needed an intermediate format (one that would make their lives easier) in order to proceed. So Core Erlang sprang out of those efforts during the Dialyzer’s infancy at Uppsala University. Feel free to correct me where I’ve mis-remembered.

                                    1. 2

                                      I have since read a bit on Core Erlang; the Dialyzer works on the intermediate format directly. The intermediate format is also what Elixir (and other BEAM languages) compile down to, which is why the Dialyzer works on them at all. Really cool.

                                    2. 1

                                      If my understanding is correct, it’s basically an intermediate compilation target between normal Erlang code and what runs on the BEAM.

                                    1. 4

                                      I’ve been using Acme for the past few months and it works fine. It’s pretty bare, but it works. No completion, no syntax highlighting, no nothing. Still enjoy it. Also there’s like no updates, especially not automated. If you need modern features, I recommend Emacs. Spacemacs is pretty neat.

                                      1. 2

                                        The nice thing about Acme is that its command set is so minimal and quasi-irreducible (except for the window management stuff, which could be handled by another program).

                                        And “modernity” is a very relative term, so all I’ll say is that it encourages you to use Unix as your IDE and gives you a simple interface to do so with. So as long as your Unix-like system is secure and up to date, you’ll always be using top-of-the-line tools.

                                        1. 1

                                          Well. I’m really a big fan of the plumber. I’m definitely looking to use that more in my general workflow. I’m already a big fan of accumulating scripts in $HOME/bin, and that idiom has a nice synergy with Acme too. I also don’t miss syntax highlighting as much as I thought I would. Or completion.

                                        2. 1

                                          What mouse are you using with it? A lot of the Acme loyalists get good old three button mice.

                                          1. 1

                                            I use the contour mouse.

                                            1. 1

                                              I have a mouse that has a mouse wheel, which doubles as a middle-clicky-thing. My office one is a bit sensitive for my taste, and also does a weird web-page-navigation thingie when you press the wheel to the left or right, which I profoundly hate (and renders middle-clicking a bit harder).

                                              My home mouse is much better, but it’s also a run-of-the-mill wheely mouse. Less sensitive to scroll action, no left-or-right action. Works perfectly fine for my purposes. I’ve learned over time that Acme is also decently easy to manoeuvre with a trackpad, since you can simulate middle-clicks with Ctrl (even in chording, at least with the Left->middle chord (which is useful, because, like, copy-paste)). Doesn’t work with middle->left though.

                                          1. 2

                                            How does one license software so that the hobbyist or lone pro or small company can profit from it, while dissuading usage where the gained efficiency would only line the pockets of the lords (see the 100x-salary-vs-average-worker kind of nonsense)?

                                            1. 3

                                              AGPLv3 ;)

                                              1. 2

                                                I think the most successful model is that of open source + enterprise licenses/versions, with the latter usually including some consultancy hours and more that you buy as a package. Companies of the size where they need it can afford to pay for it, while everyone else can use the open-source version free of charge, like any other project.

                                                The “downside” is that you need to create a company around the software to monetize it, but I think that it can be quite rewarding to work, for pay, on an enterprise version where 90% of the code is shared as open source.

                                              1. 2

                                                My only realistic hope this year is local conferences, feels like, so I might attend ApacheCon in September. I initially wanted to do Gophercon Iceland, but that’s not happening (my going there I mean)

                                                1. 1

                                                  I am happy to hear that there is a European GopherCon in Iceland!

                                                1. 6
                                                  1. 4

                                                    Buffalo buffalo Buffalo buffalo buffalo buffalo Buffalo buffalo

                                                  1. 16


                                                    • In 2004, Apple, Mozilla and Opera were becoming increasingly concerned about the W3C’s direction with XHTML, its lack of interest in HTML, and its apparent disregard for the needs of real-world web developers, so they created the WHATWG as a way to get control over web standards
                                                    • they threw away a whole stack of powerful web technologies (XHTML, XSLT…) whose purpose was to make the web both machine-readable and useful to humans
                                                    • they invented Living Standards, a sort of ex-post standard: always-evolving documents, unstable by design, designed by their hands-on committee, that no one else can really implement fully, to establish a dynamic oligopoly
                                                    • in 2017, Google and Microsoft joined the WHATWG to form a Steering Group for “improving web standards”
                                                    • meanwhile the W3C realized that their core business is not to help lobbies spread broken DRM technologies, and started working on a new version of the DOM API.
                                                    • in 2018, after months of political negotiations, they proposed to move the working draft to recommendation
                                                    • in 2018, Google, Microsoft, Apple and Mozilla felt offended by this lack of lip service.

                                                    It’s worth noticing that both these groups are centered in the USA, but their decisions affect the whole world.

                                                    So we could further summarize that we have two groups, one controlled by USA lobbies and the other controlled by the most powerful companies in the world, fighting for the control of the most important infrastructure of the planet.

                                                    Under Trump’s Presidency.

                                                    Take this, science fiction! :-D

                                                    1. 27

                                                      This is somewhat disingenuous. A web browser’s HTML parser needs to be compatible with the existing web, but the W3C’s HTML4 specification couldn’t be used to build a web-compatible HTML parser, so reverse engineering was required for an independent implementation. With the WHATWG’s HTML5 specification, for the first time in history, web-compatible HTML parsing got specified, with its adoption agency algorithm and all. This was a great achievement in standards writing.

                                                      Servo is a beneficiary of this work. Servo’s HTML parser was written directly from the specification without any reverse engineering, and it worked! Contrary to your implication, the WHATWG lowered the barrier to entry for independent implementations of the web. Servo is struggling with CSS because CSS is still ill-specified in the manner of HTML4. For example, the only reasonable specification of table layout is an unofficial draft: https://dbaron.org/css/intrinsic/ For a laugh, count the number of times “does not specify” appears in CSS2’s table chapter.

                                                      1. 4

                                                        You say backwards compatibility is necessary, and yet Google managed to get all major sites to adopt AMP in a matter of months. AMP has even stricter validation rules than XHTML did.

                                                        XHTML could have easily been successful, if it hadn’t been torpedoed by the WHATWG.

                                                        1. 15

                                                          That has nothing to do with the AMP technology, but with Google providing a CDN and preloading (i.e., IMHO, abusing their market position)

                                                          1. -1

                                                            abusing their market position

                                                            Who? Google? The web AI champion?

                                                            No… they do no evil… they just want to protect their web!

                                                        2. 2

                                                          Disingenuous? Me? Really? :-D

                                                          Who was in the working group that wrote CSS2 specification?

                                                          I bet a coffee that each of those “does not specify” was the outcome of a political compromise.

                                                          But again, beyond the technical stuff, don’t you see a huge geopolitical issue?

                                                        3. 15

                                                          This is an interesting interpretation, but I’d call it incorrect.

                                                          • the reason to create the WHATWG wasn’t about control
                                                          • XHTML had little traction, because of developers
                                                          • HTML5 (a WHATWG standard, fwiw) was the first meaningful HTML spec because it actually finally explained how to parse it
                                                          • the W3C didn’t “start working on a new DOM”. They copy/backport changes from the WHATWG, hoping to provide stable releases of the living standards
                                                          • this has nothing to do with DRM (or EME). Those are completely different people!
                                                          • this isn’t about lobby groups, nor is it about influencing politics in the US or anywhere else.

                                                          I’m not speaking on behalf of my function in the w3c working group I’m in, nor for Mozilla. But those positions provided me with the understanding and background information to post this comment.

                                                          1. 8

                                                            XHTML had little traction, because of developers

                                                            I remember that in the early 2000s everyone started to write <br/> instead of <br> and it was considered cool and modern. There were 80x15 badges everywhere saying the website was in XHTML. My Motorola C380 phone’s built-in browser supported WAP and some XHTML websites, but not regular HTML. So I had the impression that XHTML was very popular.

                                                            1. 6

                                                              xhtml made testing much easier. For me it changed many tests from using regexps (qr#<title>foo</title>#) to using any old XML parser and XPath.

                                                              1. 3

                                                                Agreed. Worth noting that, after the HTML5 parsing algorithm was fully specified and libraries like html5lib became available, it became possible to apply exactly the same approach: an HTML5 parser outputs a DOM structure, which you then query with XPath expressions.
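
                                                                Go’s standard library has no XPath, but the same “parse, then query the tree” testing approach looks roughly like this with encoding/xml on an XHTML document (the title helper is my own stand-in for an //title query):

                                                                ```go
                                                                package main

                                                                import (
                                                                    "encoding/xml"
                                                                    "fmt"
                                                                    "strings"
                                                                )

                                                                // title walks the decoded document looking for the <title>
                                                                // element and returns its text content.
                                                                func title(doc string) (string, error) {
                                                                    dec := xml.NewDecoder(strings.NewReader(doc))
                                                                    for {
                                                                        tok, err := dec.Token()
                                                                        if err != nil {
                                                                            return "", err // io.EOF if no title was found
                                                                        }
                                                                        if se, ok := tok.(xml.StartElement); ok && se.Name.Local == "title" {
                                                                            var t string
                                                                            if err := dec.DecodeElement(&t, &se); err != nil {
                                                                                return "", err
                                                                            }
                                                                            return t, nil
                                                                        }
                                                                    }
                                                                }

                                                                func main() {
                                                                    page := `<html xmlns="http://www.w3.org/1999/xhtml"><head><title>foo</title></head><body></body></html>`
                                                                    t, err := title(page)
                                                                    fmt.Println(t, err)
                                                                }
                                                                ```

                                                                This only works on well-formed XML, of course; for tag-soup HTML you would swap in an HTML5 parser in front, exactly as described above.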

                                                            2. -1

                                                              This is an interesting interpretation, but I’d call it incorrect.

                                                              You are welcome. But given your arguments, I still stand by my political interpretation.

                                                              the reason to create whatwg wasn’t about control

                                                              I was 24 back then, and my reaction was “What? Why?”.

                                                              My boss commented: “wrong question. You should ask: who?”

                                                              XHTML had little traction, because of developers

                                                              Are you sure?

                                                              I wrote several web sites back then using XML, XSLT and XInclude server-side to produce XHTML and CSS.

                                                              It was a great technological stack for distributing content over the web.

                                                              w3c didn’t “start working on a new DOM”. They copy/backport changes from the WHATWG, hoping to provide stable releases of the living standards

                                                              Well, had I written a technical document about an alternative DOM for the whole planet, without anyone asking me to, I would be glad if the W3C had taken my work into account!

                                                              In what other way could they NOT waste the WHATWG’s hard work?
                                                              Well, except by saying: “guys, from now on do whatever Google, Apple, Microsoft and a few other companies from Silicon Valley tell you to do”.

                                                              But I do not want to take part for W3C: to me, they lost their technical authority with EME (different group, but same organisation).

                                                              The technical point is that we need stable, well-thought-out standards. What you call living standards are… working drafts?

                                                              The political point is that no oligopoly should be in condition to dictate the architecture of the web to the world.

                                                              And you know, in a state where strong cryptography is classified as munitions and is subject to export restrictions.

                                                              I’m not speaking on behalf of my function in the w3c working group I’m in, nor for Mozilla. But those positions provided me with the understanding and background information to post this comment.

                                                              I have no doubt about your good faith.

                                                              But probably your idealism is fooling you.

                                                              As you try to see these facts from a wider perspective, you will see the problem I describe.

                                                            3. 4

                                                              XHTML was fairly clearly a mistake and unworkable in the real world, as shown by how many nominally XHTML sites weren’t, and didn’t validate as XHTML if you forced them to be treated as such. In an ideal world where everyone used tools that always created 100% correct XHTML, maybe it would have worked out, but in this one it didn’t; there are too many people generating too much content in too many sloppy ways for draconian error handling to work well. The whole situation was not helped by the content-type issue, where if you served your ‘XHTML’ as anything other than application/xhtml+xml it wasn’t interpreted as XHTML by browsers (instead it was HTML tag soup). One result was that you could have non-validating ‘XHTML’ that still displayed in browsers because they weren’t interpreting it as XHTML and thus weren’t using strict error handling.

                                                              (This fact is vividly illustrated through syndication feeds and syndication feed handlers. In theory all syndication feed formats are strict and one of them is strongly XML based, so all syndication feeds should validate and you should be able to consume them with a strictly validating parser. In practice plenty of syndication feeds do not validate and anyone who wants to write a widely usable syndication feed parser that people will like cannot insist on strict error handling.)

                                                              1. 2

                                                                there are too many people generating too much content in too many sloppy ways for draconian error handling to work well.

                                                                I do remember this argument was pretty popular back then, but I have never understood why.

                                                                I had no issue generating XHTML Strict pages from user content. This real-world company had a couple hundred customers with pretty varied needs (from e-commerce to online magazines to institutional web sites) and thousands of daily visitors.

                                                                We used XHTML and CSS to distribute highly accessible content, and we had pretty good results with a prototype based on XSL-FO.

                                                                To me, back then, the appeal to real-world issues seemed like a pretext. We literally had no issues. The ones I remember all came from IE.

                                                                You are right that a lot of mediocre software was unable to produce proper XHTML. But is that an argument?

                                                                Do not fix the software, let’s break the specifications!

                                                                It seems a little childish!

                                                                XHTML was not perfect, but it was the right direction.

                                                                Look at what we have now instead: unparsable content, hundreds of incompatible JavaScript frameworks, subtle bugs, Bootstrap everywhere (i.e. much less creativity), and so on.

                                                                Who gains the most from this unstructured complexity?

                                                                The same who now propose the final solution lock-in: web assembly.

                                                                Seeing Linux running inside the browser is not funny anymore.

                                                                Going after incompetent developers was not democratization of the web, it was technological populism.

                                                                1. 2

                                                                  What is possible does not matter; what matters is what actually happens in the real world. With XHTML, the answer is clear. Quite a lot of people spent years pushing XHTML as the way of the future on the web, enough people listened to them to generate a fair amount of ‘XHTML’, and almost none of it was valid and most of it was not being served as XHTML (which conveniently hid this invalidity).

                                                                  Pragmatically, you can still write XHTML today. What you can’t do is force other people to write XHTML. The collective browser world has decided that one of the ways that people can’t force XHTML is by freezing the development of all other HTML standards, so XHTML is the only way forward and desirable new features appear only in XHTML. The philosophical reason for this decision is pretty clear; browsers ultimately serve users, and in the real world users are clearly not well served by a focus on fully valid XHTML only.

                                                                  (Users don’t care about validation, they care about seeing web pages, because seeing web pages is their goal. Preventing them from seeing web pages is not serving them well, and draconian XHTML error handling was thus always an unstable situation.)

                                                                  That the W3C has stopped developing XHTML and related standards is simply acknowledging this reality. There always have been and always will be a great deal of tag soup web pages and far fewer pages that validate, especially reliably (in XHTML or anything else). Handling these tag soup web pages is the reality of the web.

                                                                  (HTML5 is a step forward for handling tag soup because for the first time it standardizes how to handle errors, so that browsers will theoretically be consistent in the face of them. XHTML could never be this step forward because its entire premise was that invalid web pages wouldn’t exist and if they did exist, browsers would refuse to show them.)

                                                                  1. 0

                                                                    Users don’t care about validation, they care about seeing web pages, because seeing web pages is their goal.

                                                                    Users do not care about the quality of concrete because having a home is their goal.
                                                                    There will always be incompetent architects, thus let them work their way so that people get what they want.

                                                                    Users do not care about car safety because what they want is to move from point A to point B.
                                                                    There will always be incompetent manufacturers, thus let them work their way so that people get what they want.

                                                                    That’s not how engineering (should) work.

                                                                    Was XHTML flawless? No.
                                                                    Was it properly understood by the average web developer that most companies like to hire? No.

                                                                    Was it possible to improve it? Yes. Was it better than the current JavaScript-driven mess? Yes!

                                                                    The collective browser world has decided…

                                                                    Collective browser world? ROTFL!

                                                                    There are a huge number of browser implementors that nobody consulted.

                                                                    Among others, in 2004, the most widely used browser, IE, did not join WHATWG.

                                                                    Why didn’t the WHATWG use the IE design, if the goal was to free developers from the burden of well-designed tools?

                                                                    Why have we faced browser incompatibilities for years?

                                                                    WHATWG was turned into one of the weapons in a commercial war for the control of the web.

                                                                    Microsoft lost such war.

                                                                    As always, the winners write the history that everybody knows and celebrates.

                                                                    But anyone old enough to remember the facts can see the hypocrisy of these manoeuvres pretty well.

                                                                    There was no technical reason to throw away XHTML. The reasons were political and economic.

                                                                    How can you sell Ads if a tool can easily remove them from the XHTML code? How can you sell API access to data, if a program can easily consume the same XHTML that users consume? How can you lock users, if they can consume the web without a browser? Or with a custom one?

                                                                    The WHATWG did not serve users’ interests, whatever Mozilla’s intentions were in 2004.

                                                                    They served some businesses at the expense of the users and of all the high-quality web companies that didn’t have many issues with XHTML.

                                                                    Back then it was possible to disable JavaScript without losing access to the web’s functionality.

                                                                    Try it now.

                                                                    Back then people were exploring the concept of the semantic web with the passion people now reserve for the latest JS framework.

                                                                    I remember experiments with web readers for blind people that could never work with the modern JS-polluted web.

                                                                    You are right, W3C abandoned its leadership in the engineering of the web back then.

                                                                    But you can’t sell a web developer bullshit about HTML5.

                                                                    Beyond a few new elements and a slightly more structured page (which could have been done in XHTML too), all its exciting innovations were… more JavaScript.

                                                                    Users did not gain anything good from this, just less control over content, more ads, and a huge security hole worldwide.

                                                                    Because, you know, when you run JavaScript in Spain that was served to you from a server in the USA, who is responsible for that JavaScript running on your computer? Under which law?

                                                                    Do you really think such legal issues were not taken into account by the browser vendors that fueled this involution of the web?

                                                                    I cannot believe they were so incompetent.

                                                                    They knew what they were doing, and did it on purpose.

                                                                    Not to serve their users. To use those who trusted them.

                                                              2. 0

                                                                The mention of Trump is pure trolling—as you yourself point out, the dispute predates Trump.

                                                                1. 6

                                                                  I think it’s more about all of this sounding like a science fiction plot than just taking a jab at the Trump presidency; just a few years ago nobody would have predicted that would have happened. So, no, not pure trolling.

                                                                  1. 2

                                                                    Fair enough. I’m sorry for the accusation.

                                                                    Since the author is critical of Apple/Google/Mozilla here, I took it as a sort of guilt by association attack on them (I don’t mind jabs at Trump), but I see that it probably wasn’t that.

                                                                    1. 2

                                                                      No problem.

                                                                      I didn’t see that possible interpretation, or I wouldn’t have written that line. Sorry.

                                                                  2. 3

                                                                    After 20 years of Berlusconi and with our current impasse with the Government, no Italian could ever troll an American about his current President.

                                                                    It was not my intention in any way.

                                                                    As @olivier said, I was pointing to this surreal situation from an international perspective.

                                                                    The USA controls most of the internet: most root DNS servers, the most powerful web companies, the standards of the web, and so on.

                                                                    Whatever effect Cambridge Analytica had on the election of Trump, it has shown the world that the internet is a common infrastructure that we have to control and protect together, just like we should control the production of oxygen and global warming.

                                                                    If Cambridge Analytica was able to manipulate US elections (by manipulating Americans), what could Facebook itself do in Italy? Or in Germany?
                                                                    Or what could Google do in France?

                                                                    The Internet was a DARPA project. We can see it is a military success beyond any expectation.

                                                                    I tried to summarize the dispute between the W3C and the WHATWG with a bit of irony because, in itself, it shows a pretty scary aspect of this infrastructure.

                                                                    The fact that a group of companies dares to challenge the W3C (which, at least in theory, is an international organisation) is evidence that they do not feel the need to pretend they are working for everybody.

                                                                    They have too much power to care.

                                                                    1. 4

                                                                      The last point is the crux of the issue: are technologists willing to do the leg work of decentralizing power?

                                                                      Because regular people won’t do this. They don’t care. Thus, they should have less say in the issue, though still some, as they are deeply affected by it too.

                                                                      1. 0

                                                                        No. Most won’t.

                                                                        Technologists are a broad category that etymologically includes everyone who feels entitled to speak about how to do things.

                                                                        So we have technologists who mislead people into investing in the “blockchain revolution”, technologists who mislead politicians into allowing barely tested AI to kill people on the roads, technologists teaching in universities that neural network computations cannot be explained and thus must be trusted as superhuman oracles… and technologists who classify any criticism of mainstream wisdom as trolling.

                                                                        My hope is in hackers: all over the world they have a better understanding of their political role.

                                                                      2. 2

                                                                        If anyone wonders about Berlusconi, Cracked has a great article on him that had me calling Trump a pale imitation of Berlusconi and his exploits. Well, until Trump got into the US Presidency, which is a bigger achievement than Berlusconi’s. He did that somewhat by accident, though, and can’t last 20 years either. I still think Berlusconi has him beat as the biggest scumbag of that type.

                                                                        1. 2

                                                                          Yeah, the article is funny, but Berlusconi was not. Not for Italians.

                                                                          His problems with women did not much impress us, except when it became clear most of them were underage.

                                                                          But the damage he did to our laws and (worse) to our public ethics will last for decades.
                                                                          He did not just change the law to help himself: he destroyed most of the legal tools for fighting organized crime, bribery, and corruption.
                                                                          Worse, he helped a whole generation of younger people like him to be proud of their cleverness with legal workarounds.

                                                                          I pray for the US and the whole world that Trump is not like him.

                                                                  1. 2

                                                                    I can’t help but disagree with the “accept interfaces, return structs” thing. Especially if you’re exposing those functions publicly. I’m probably dumb and just doing this very wrong, but I’ve lived through many cases where mocking dependencies is harder than it needs to be because I need to provide a struct as the result of something. I mean, especially when it’s an external package, I don’t necessarily want to be tightly bound to the struct that is exposed and would generally much rather use an interface. Am I doing this wrong?

                                                                    1. 4

                                                                      If you mock dependencies, you’re creating a mock implementation of a narrowly-scoped interface as an input to a function, not a copy of a struct as the output of an e.g. constructor.

                                                                      Concretely, if you have a function

                                                                      func process(s3 *aws.S3) error {
                                                                          x, err := s3.activate()
                                                                          if err != nil {
                                                                              return errors.Wrap(err, "activation failed")
                                                                          }
                                                                          if err := x.signal(); err != nil {
                                                                              return errors.Wrap(err, "signal failed")
                                                                          }
                                                                          return nil
                                                                      }

                                                                      it should instead be

                                                                      +type activator interface{ activate() (xtype, error) }
                                                                      -func process(s3 *aws.S3) error {
                                                                      +func process(a activator) error {

                                                                      but aws.NewS3 can and should continue to return *aws.S3.

                                                                      1. 2

                                                                        I had the same impulse when starting Go and friends rightly warned me away from it.

                                                                        If you write functions that take interfaces and that return interfaces, your mock implementations start returning their own mocks, which gets brittle very fast. Instead, by having interfaces as inputs and concrete types as return values, your mock implementations can simply return a plain struct and all is well.

                                                                        But I think it’s worth not taking someone else’s word for it, and trying both approaches and seeing how it goes.

                                                                        1. 3

                                                                          I agree, but in effect most non-trivial libraries (stuff like Vault, for example) return structs that expose functions that return structs. Now, if I need to access that second layer of structure in my code, the appropriate design would seem to be, under those directions, to declare a function whose sole job is to accept the first level of structure as an interface and spit back the second layer (which still leaves me with a top-level function that is hard to test).

                                                                          1. 1

                                                                            Looking at Vault’s GoDoc, I see Auth.Token, which sounds like the kind of API you’re describing. Sometimes, it might be a matter of how to approach the test, like instead of mocking heavily, you run an instance of Vault in the test and ensure your code is interacting with Vault correctly.

                                                                            1. 1

                                                                              I’m not against this, technically, but this is not necessarily practical. Take databases, for example. Or external services? Vault is one thing, but what if the thing you depend on is an external service that you can’t mock cleanly, which depends on another service that it can’t mock cleanly? I don’t have a solution for this; I realize that exposing an object as an interface is not especially practical either, and that it’s weird to decide what to expose through that interface. The inverse, to me, is equally weird for other reasons.

                                                                        2. 2

                                                                          I’m probably dumb and just doing this very wrong

                                                                          If so, then I’m right there with you. If I followed @peterbourgon’s advice, then my code would be an unsalvageable dumpster fire. Defining a billion ad hoc interfaces everywhere very quickly becomes unmaintainable in my experience. I’ve certainly tried to do it.

                                                                          See also: https://lobste.rs/s/c984tz/note_on_using_go_interfaces_well#c_mye0mj

                                                                          1. 1

                                                                            All advice has a context, and the context for mine is that you’re dealing with mostly- or completely-opaque structs for the purposes of their behavior (methods) they implement, and not for the data (fields) they contain. If you’re doing mostly the latter, then interfaces don’t help you — GetFoo() Foo is an antipattern most of the time. In those cases, you can take the whole original struct, if you use a lot of the fields; or the specific fields you need, if you only take one or two; or, rarely, re-define your own subset of the struct, and copy fields over before making calls.

                                                                            But when I’m in the former situation, doing as I’ve illustrated above has almost always improved my code, making it less fragile, easier to understand, and much easier to test — really the opposite of a dumpster fire. And you certainly don’t want a billion ad-hoc interfaces, generally you define consumer interfaces (contracts) at the major behavior boundaries of your package, where package unit tests naturally make sense.
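                                                                            A small sketch of the data-vs-behavior distinction (all names here are made up for illustration):

```go
package main

import "fmt"

// Data case: Config is consumed for its fields, so callers take the
// fields (or the whole struct) directly; an interface adds nothing.
type Config struct {
	Host string
	Port int
}

func addr(host string, port int) string {
	return fmt.Sprintf("%s:%d", host, port)
}

// Behavior case: define a narrow consumer interface at the boundary
// where the behavior is used, so tests can substitute a fake.
type pinger interface{ Ping() error }

func healthy(p pinger) bool { return p.Ping() == nil }

// fakePinger is a trivial test double for the behavior case.
type fakePinger struct{ err error }

func (f fakePinger) Ping() error { return f.err }

func main() {
	cfg := Config{Host: "localhost", Port: 5432}
	fmt.Println(addr(cfg.Host, cfg.Port))
	fmt.Println(healthy(fakePinger{}))
}
```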

                                                                            1. 1

                                                                              I do suspect we are thinking of different things, but it is actually hard to tease it apart in a short Internet conversation. :-) I am indeed dealing with completely-opaque structs, and those structs generally represent a collection of methods that provide access to some external service (s3 being a decent example, but it could be PostgreSQL or Elasticsearch), where their underlying state is probably something like a connection pool. Defining consumer interfaces was completely unmaintainable because their number quickly becomes overwhelming. Some of these types get a lot of use in a lot of code, and tend to be built at program initialization and live for the life of the program. At a certain point, using consumer interfaces just felt like busy work. “Oh I need to use this other method with Elasticsearch but the interface defined a few layers up doesn’t include it, so let’s just add it.” What ends up happening is that the consumer interfaces are just an ad hoc collection of methods that wind up becoming a reflection of specific implementation strategies.

                                                                        1. 8

                                                                          I’m a little perplexed that salary is an afterthought here: in a field where what we do is so directly tied to outsized profits, it’s weird we aren’t pushing harder to get a cut of the pie.

                                                                          1. 3

                                                                            I unholstered my sarcasm gun, paused to think about it, and put it away again. I fully agree with you. I recently read this book which touches on the matter. I love the stability of being an employee and I hate dealing with clients for one reason or another. I also hate very many things about corporate life, and I wanna break out of that at some point, although it ain’t possible nor practical right now. Anyway, I digress; it was a good read.

                                                                            1. 1

                                                                              Because a higher salary won’t make you happier.

                                                                              A good example of this is the number of people unhappy at work, asking for a raise to justify their staying.

                                                                              Anyway, what you seem to want is a better distribution of income within a company, not specifically a higher salary, so the question would be: is income equality a factor in happiness? Maybe.

                                                                              1. 5

                                                                                No, I specifically want a better cut of the value that I produce.

                                                                                If I grow sales by 5x by implementing a feature, there are some options, right?

                                                                                • Others who failed to grow sales are penalized
                                                                                • Others who failed to grow sales are penalized and their rewards given to me instead
                                                                                • Everybody gets a cut of the sales, even Frank the sales guy who sucks and Ann the janitor who does the same amount of work day in and day out cleaning toilets
                                                                                • I get no compensation (compared to base case of not succeeding)
                                                                                • I get compensated by small bonus
                                                                                • I get a percent of the sales increase

                                                                                Only two of those aren’t terrible. Only one of them is fair.

                                                                                If you want to see a developer do work that can bring in 10 million, cut them in for 5%. I can’t think of any dev who wouldn’t move heaven and earth to make 500K in a year.

                                                                                Of course, the current system is more a deal of “You’re lucky to have a job here, here’s the salary, management/shareholders will skim off the profits that, by construction, they themselves could not have realized had you (or devs like you) agreed to it.”

                                                                                And honestly, while I respect the problems of folks who aren’t in our industry making our wages, I also work very hard not to have those problems. I have no desire to be holding the bag when market trends correct themselves and we’re paid the same as non-devs while the people we let fleece us are fucking off in their Teslas to Moneyland.

                                                                                1. 2

                                                                                  This sounds like a great idea in principle, but how do you attribute whose work produced what value? This seems like a hard question to answer in general (i.e. not just for engineering roles), with maybe the exception of direct sales roles (where a commission based on deal size is often the norm). Even in sales roles, I think attribution is a hard question to answer: are your sales great because your salespeople are great or your application engineer is great or the intern you hired produced a ton of value by fixing a bunch of stuff that nobody had bothered to?

                                                                                  When I worked in adtech, we had similar difficulty trying to attribute clicks to specific ads. The honest truth seems to be that, in both ads and work, it’s hard to do attribution “fairly” when you have a high-touch process involving many people.

                                                                                  1. 4

                                                                                    So, there’s a few different parts of that, but the one I’ll poke at is attribution.

                                                                                    A lot of sales folks work on commission: and yeah, that has pathologies, but it’s the case more often than not that a salesperson that puts in the work to seal a deal is pretty unquestionably the one that deserves a cut of the sale.

                                                                                    The idea that we can’t do basic accountability in engineering is something I disagree with. Some solutions:

                                                                                    • The entire engineering team gets a cut of engineering-related success that year split evenly, if for no other reason than they failed to fuck up growth.
                                                                                    • Individual contributors that actually lay hands on a feature and implement it get a cut of sales that touch that feature, or cost savings if it’s an efficiency improvement. If the company can’t track what features lead to sales or what features boost efficiencies, even in some rough way, I posit the company is poorly managed.
                                                                                    • At perhaps the most asinine solution, do a trace on what functions/features get called (same as we do for code coverage!), multiply it by uses/users/revenue, and do a weighted payout to the folks that wrote the code.

                                                                                    At least in some fields, say e-commerce, it’s pretty obvious how to break things down. If an engineer builds the product page, builds the order logic, and builds the persistence, they’re pretty obviously the ones that deserve the credit. If one team builds, say, product search, it’s pretty easy to track what generated a sale and how the customer got there (they’re tracking the customers, right?) and give them a cut.

                                                                                    And one immediate objection to this is “but how do engineers that don’t do customer-facing stuff get rewarded?” And my answer to that is basically: if an engineer doesn’t directly do stuff that puts money in the hands of the business, they don’t actually generate value for the company and as such shouldn’t be rewarded a cut of the spoils. From a business standpoint, a bad engineer that ships continuously and drives sales is worth infinitely more than a great engineer that refactors in pursuit of perfection.

                                                                                    I’m still debating internally how hard I believe this line of reasoning, but it’s opening up some interesting tangents in my head so I don’t think it should be dismissed outright.

                                                                                    1. 4

                                                                                      I am an SRE for an online retailer (I am not, but for the sake of the argument, let’s say I am).

                                                                                      If I screw up big time, I completely halt all sales. On the other hand, if I do my job decently, my work goes unnoticed.

                                                                                      So, according to your model, should I earn the entire company profit or should I earn nothing?

                                                                                      1. 1

                                                                                        Well, as I wrote, you don’t drive sales or make savings, so you haven’t created new value…you’ve acted to preserve existing value.

                                                                                        You should still be well compensated for your work! Just not with a cut of the growth.

                                                                                        1. 3

                                                                                          I disagree, a bit. A decent SRE is going to prevent an incredible number of screw-ups that would potentially cost any given company a lot of money. In a way, their presence is a form of risk mitigation; there’s definitely value in that, but it’s less obvious. For example: at some point in my career I was asked to implement, on an emergency basis, a feature that essentially mitigated a risk that, should it materialize, would have cost them in the tens of millions. Once mitigated, the risk didn’t exist anymore, and I’d bet that there’s SOME value in that risk having been permanently mitigated. Probably less than tens of millions, but probably more than a pat on the back.

                                                                                          [edit:] I mean, it’s less obvious as opposed to “See, since I tweaked this endpoint last week sales have increased by 20k per day, where’s my cut?”

                                                                                      2. 1

                                                                                        Individual contributors that actually lay hands on a feature and implement it get a cut of sales that touch that feature, or cost savings if it’s an efficiency improvement.

                                                                                        This might be combined with “competitive coder” schemes to get even more out of contributors, provided they are actually getting the results they claim.

                                                                              1. 5

                                                                                 I can’t say that I especially found what I was looking for in the article. For one, the author fails to show the immediate threat blockchain technology poses to the distributed nature of the internet. Well, okay, fine, maybe it’s a threat in the sense that it does not require the “central authority” that’s required for the internet to work (quotation marks because you do need a DNS, but in effect you can decide yourself what names resolve to, even though for all intents and purposes that’s impractical. I digress).

                                                                                 The only point the article makes is essentially that for end users, copying the whole of the Bitcoin blockchain is expensive in terms of hard drive space. I’ll agree that it’s definitely not space-efficient to copy the entire internet onto every computer (though, in a way, that makes it mega-distributed). The alternatives suggested by the writer revolve around software that cannot trivially exist on an end-user’s laptop.

                                                                                Probably a “cleaner” solution exists in things like the Dat Project and Rotonde, a blogging platform powered by the Dat Project.

                                                                                 Bonus points for readability. I can’t say that I enjoyed the structure of the content (the introduction lacked the appropriate guidance into the content of the article), nor did I find the arguments convincing. I don’t find the idea to make much sense at all. It would only be a threat if the fear of everything being copied everywhere were realized, which is not practical (or, for the most part, possible).

                                                                                1. 1

                                                                                   I cringe when I see the practice of putting cleartext passwords in any text file, especially the dreaded .netrc.

                                                                                   Supposedly secstore(1) could help with that, but I have never ventured further into those. Can somebody comment on the security aspects of these programs in plan9port?

                                                                                  1. 1

                                                                                    With msmtp(1) you should use the passwordeval option which will evaluate an expression and use whatever is returned on stdout as the password:

                                                                                    gpg2 --no-tty -q -d ~/.msmtp-password.gpg

                                                                                    Install pinentry-gtk2 and you’ll get a nice dialog box.
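For reference, a minimal ~/.msmtprc sketch using passwordeval might look like this (the account name, host, and addresses are placeholders, not from the article):

```
# minimal ~/.msmtprc sketch -- account, host, and addresses are placeholders
defaults
auth           on
tls            on

account        example
host           smtp.example.com
port           587
from           user@example.com
user           user@example.com
passwordeval   gpg2 --no-tty -q -d ~/.msmtp-password.gpg

account default : example
```

msmtp runs the passwordeval command and uses its stdout as the password, so the cleartext never has to sit in the config file.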

                                                                                    I intended to mention the passwordeval option, but the writing went into the wee hours and it was lost. :D I’ve updated the $HOME/.msmtprc example with a note referencing it.

                                                                                    As for secstore(1), that’s a backing store for factotum(4). I think you could use passwordeval with factotum(4).

                                                                                    1. 1

                                                                                      How does one set up factotum with secstore? Can I use it the same way I use pass? If I don’t explicitly use secstore, will I have to set the secret every time I start factotum?

                                                                                      1. 1

                                                                                        iirc, yeah, you’ll get prompted. Well, may get prompted. I don’t think things like auth/fgui(1) got brought over.