1. 7

    These are all good rules to follow. I worked on a project at IBM that was translated into six languages on day 0 of each release, and into another ~14 as translations were delivered to the team. We used gettext and had few problems with it. Hebrew and Arabic provided the greatest opportunities for improvement because no one on the team had experience with RTL languages.

    One recommendation to add is to have a language for testing that really stands out. It’s too easy to test against English strings, and strings in other languages may change over time. I tended to use Esperanto as my testing language because I know it (first IBM product to ship an eo translation, maybe?), but I had some fun with xx-pirate and xx-pittsburghese translations…

    1. 9

      The “fake language” idea is one I have used to superb effect when I’ve worked on internationalization. Usually I define several of them to exercise a few things that tend to break with English-focused UI designs:

      • A “fake Chinese” locale that replaces every chunk of ~8 characters in the English string with a random Chinese character. This drives actual Chinese speakers a bit batty at first because it is complete gibberish, but exposes hidden assumptions that a given piece of text will always be at least a certain length.
      • Similarly, a “fake Hebrew” or “fake Arabic” locale that shows you what your layouts look like with RTL text.
      • A “long English” locale that grows all English text by some amount (I’ve found 1/3 works well) by adding filler characters. This one was the bane of the UI designers who hadn’t kicked the habit of carefully sizing UI elements to precisely fit the English label. You’ll see which of your web developers really understand the CSS box model with this one. (Rough sketches of all three transformations follow.)
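
      These fake locales are mostly simple string munging. A rough sketch of what the transformations can look like (Kotlin here purely for illustration; the function names, chunk size, and padding character are made up, not the exact locales described above):

      import kotlin.random.Random

      // “Fake Chinese”: replace each ~8-character chunk of the English string with one
      // random CJK ideograph, so the result is much shorter than the source text.
      fun fakeChinese(s: String, chunkSize: Int = 8): String =
          s.chunked(chunkSize).joinToString("") {
              Random.nextInt(0x4E00, 0xA000).toChar().toString()
          }

      // “Fake RTL”: wrap the text in Unicode RLO/PDF control characters so layouts
      // render it right-to-left without needing a real Hebrew or Arabic translation.
      fun fakeRtl(s: String): String = "\u202E$s\u202C"

      // “Long English”: keep the text readable but pad it by ~1/3 so carefully sized
      // UI elements overflow.
      fun longEnglish(s: String, growth: Double = 1.0 / 3.0): String =
          s + "~".repeat((s.length * growth).toInt().coerceAtLeast(1))

      Wired in as extra pseudo-locales (behind fake language codes like the xx-* ones mentioned above), transformations like these make hidden length and direction assumptions fail during ordinary testing, rather than after a real translation lands.
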
      1.  

        ‘right-to-left’ (rtl) and bidirectional text come with their own challenges indeed.

        desktop applications using gtk+, the toolkit used by gnome software (and much other software), can be made to render in ‘right-to-left’ (rtl) mode even in a ‘left-to-right’ (ltr) locale. this works on any system and does not require a development environment: a simple way to temporarily enable this is by opening the gtk inspector and flipping the ‘text-direction’ property for an application window. this is useful to find bugs, which are usually solved by using ‘beginning’ and ‘end’ instead of left and right.

      1.  

        There’s definitely a need for tools to better manage a “sequence of related changes” workflow. (I’ve been trying out Graphite at work for that exact use case and it has saved me a ton of time.)

        But I’m not sure I’d want to work in a fully branchless setup. A pretty common scenario for me is that I’ll fix a few bugs in a row, each in its own branch, and send out a PR for each one. There is no ordering among them: any of them can be merged first, and any of them can be revised based on review feedback without holding up the others. Modeling that set of fixes as a single linear thread of history obscures the fact that they don’t have any dependencies on one another.

        1.  

          You can do that with git-branchless too. Just base each commit off main, or commit them sequentially and then use git move -s bugfix_sha -d main to move them later before uploading a PR. For example, here is part of my smartlog at work right now, with commit info removed:

          ◇ main
          ┣━┓
          ┃ ◯
          ┃ ┣━┓
          ┃ ┃ ◯
          ┃ ┃ ┃
          ┃ ┃ ◯
          ┃ ┃ ┃
          ┃ ┃ ◯
          ┃ ┣━┓
          ┃ ┃ ◯
          ┃ ┃ ┃
          ┃ ┃ ●
          ┃ ┣━┓
          ┃ ┃ ◯
          ┃ ┃
          ┃ ◯
          ┣━┓
          ┃ ◯
          ┃
          ◯
          

          I already give the commits descriptive titles, so managing branch names is just a chore. With git-branchless I don’t have to.

        1. 2

          Folks building new software (and not using hosted auth services like auth0 or Google accounts): are you still building your own auth system from scratch or are you using an existing open source system like keycloak or kanidm?

          1. 9

            As the main back-end guy at my company, I decided to use Keycloak for our system’s authentication. It has pluses and minuses.

            The minuses: Up-front development and setup cost was MUCH higher than a simple username/hashed-password table. I spent a fair bit of time fussing with Java OAuth libraries and Spring Security before I got it working the way I wanted. Stuff like “let users manage their own API keys for non-interactive clients” turns into a minor ordeal involving multiple OAuth tokens and fake Keycloak user identities. Customizing the registration flow is a hassle compared to implementing it as a couple extra pages in our main web app. The admin UI is a little clunky (though, it must be said, still far nicer than any of the other auth packages I evaluated). Its role-based access control setup is not flexible enough for our application and I ended up building my own authorization layer.

            On the plus side, adding a working “log in with Google” button to our app took me all of 5 minutes. Adding “log in with Apple” took longer but was still pretty quick (and most of the blame for it taking longer is on Apple, not on Keycloak). When we wanted to set up a Grafana instance and let employees log in with the same credentials they use for our web app, I just configured Grafana to use our Keycloak server as its OAuth provider, configured Keycloak to only allow access to users with our “employee” role, and it was done. Other internal tools sit behind an oauth2-proxy instance that uses Keycloak for login, and the tools don’t need to know anything about where the user data lives. My expectation is that we’ll be adding other such services over time, some end-user-facing and some not.

            On balance, I think it was the right choice for us, and the advantages will grow over time. But the minuses are significant enough that I can’t say that with absolute confidence; I could envision an alternate reality where it took me less time to do all the “easy in Keycloak” stuff from scratch than it took to mess with Keycloak.

            1. 1

              Thanks for the details!

              Up-front development and setup cost was MUCH higher than a simple username/hashed-password table.

              I think this is always the case with every separate auth system to be fair.

              The admin UI is a little clunky (though, it must be said, still far nicer than any of the other auth packages I evaluated).

              Since you mentioned you evaluated others… any info to share on specifically which other auth systems you ended up not going with?

              1. 2

                I think this is always the case with every separate auth system to be fair.

                Definitely, and going into it I expected it to take longer than a simpler system. I just didn’t appreciate how much more work it was going to be. There were lots of little fussy details that weren’t apparent when I did my initial proof-of-concept implementation as part of the evaluation process.

                Others I seriously evaluated were CAS (which seemed capable but just getting it up and running at all was a nigh-impenetrable configuration nightmare and it didn’t look like it’d get any easier afterwards), OpenAM (seemed okay-ish, but it seems like they started off with a commercial package, half-converted it to a community project, then got bored and released their work in progress, which didn’t give me lots of confidence about its long-term viability), and WSO2 (my second choice after Keycloak, but quite resource-hungry and forces you to muck around with XML to configure permissions).

                Those aren’t the only auth packages out there by a long shot, but they were the only ones I found that met our specific product requirements. Definitely worth figuring out what you really need out of such a system before looking for candidates.

            2. 4

              Building from scratch every time. But I do reuse code from project to project so it’s not entirely “from scratch” as that wouldn’t make any sense. Much auth code can be reused as it’s the same basic functionality with only minor things that change from project to project.

              1. 2

                the second

                1. 1

                  Which one are you using?

                  1. 2

                    Keycloak. Sorry for the dry reply back there.

                    I led the IdP/IAM implementation (customer-facing, both B2B and B2C types of UX) in my last gig and used Keycloak as its core. It has fairly impressive out-of-the-box completeness, and extensibility is achievable, albeit awkward, IMO.

                    In my current gig I found a Keycloak already in place. Honestly I would rather try something a bit less bulky and have been looking at Ory, but I haven’t done anything noteworthy yet.

                    My main issues with Keycloak have been mostly around extending it. It has that old-school enterprisey Java coding experience going, as it runs on JBoss and does a poor job of hiding that. Clustering on K8S was more hassle than it should have been, too. I think they’re “rewriting” it under Keycloak X, which is both a good and a bad thing: good because it is targeting a more lightweight “cloud-native” runtime, but breakage is sure to happen. It gave me zero issues in production for 3 years with low traffic (dozens of TPS tops) but very high criticality (all users and all “microservices” used it for auth).

                    I understand it depends on context, but rolling out a “user/pass” form like it’s the 00’s really rubs me the wrong way. OIDC comes with a lot of value, and I find it frustrating when I use something that doesn’t support auth federation: I’m thinking of self-hostable open source stuff but also commercial products.

                    It’s a lot of work to implement something like Keycloak from the ground up, even if you’re using some framework for it: registration/login/recovery/sso/browser support/email/mfa/“social login”/… In a professional, product-development setting, I would default against that.

              1. 4

                “Computer science” was the word that caused me some trouble here. I spent some time trying to understand what the article was saying, and I concluded that the article, at the same time, managed to be elitist and to be trivial.

                The statement that “computer science was originally invented to be taught to everyone, but not for economic advantage.” is loaded.

                Perhaps it is a reaction against the “We need to teach everyone to code.” craze, but then I found elsewhere in the article a note about how not teaching everyone computers was a threat to a democratic society, so I really could not place it.

                Computers are a tool. Some people build the tools, and a lot of other people use them to do what they actually want to do, like make art or run a business.

                I’d like to compare the use of computers with the use of cars. When cars first came out they were fiddly things and for a while you had to be mechanically inclined to use them. Then they became more and more user friendly, because for every one who liked to spend evenings under the hood there were a thousand for whom it was a tool to improve life quality.

                We got to the nice position in society where you didn’t have to know anything about internal combustion, lithium ions, gears or electric motors to use the car for business or pleasure.

                Through the efforts of those employing computer science we are approaching the state where you don’t have to know about bits and bytes to use computers for business or pleasure, and we have been at a reasonable spot with that for many decades now.

                It is not a threat to the free world. If we need to teach artists and entrepreneurs computer science so they can use computers we have failed in the same way as if we had to teach them thermodynamics and electrical engineering so they can drive a car.

                1. 4

                  Academics aren’t incentivized to create something like this, because doing so is just “applied” research which tends not to be as prestigious. You don’t get to write many groundbreaking papers by taking a bunch of existing ideas and putting them together nicely.

                  Consider electronic voting as a simple example. With a paper ballot, the set of people that can audit the process is huge: basically, anyone who is numerate and not too badly visually impaired. Any candidate who has enough support to stand a chance in a fair election can find people who can turn up in polling stations and monitor the ballots. Contrast that with an electronic scheme where (ignoring the difficulties accessing the code) the number of people who can audit the election is very small. There are a lot of examples like this where power is concentrated into the hands of a small number of people who understand a particular system.

                  I’d like to compare the use of computers with the use of cars.

                  I think that is an incredibly misleading analogy. 100 years ago, a car was a machine to get you from A to B. Today, a car is a machine to get you from A to B. The value of the car is directly related to how well it performs that specific task. The task is reasonably well defined and (aside from a few changes to traffic legislation) really hasn’t changed much over the last century. The most valuable car would be one that requires zero maintenance and drives itself. Having cars all do the same thing makes traffic management easier and improves efficiency in the system overall.

                  In contrast, the value of a computer comes from the fact that it can be made to do new things. The larger the space of new things you are able to make a computer do, the more valuable the computer is to you. If enough people need to do a specific thing then there may be some off-the-shelf software that does it already, but as soon as you want to do something more specialised then you need to make the computer do something new. This may be something simple, such as entering a new formula into a spreadsheet or writing a macro to automate a task in a word processor, but it is still fundamentally a specific thing that you are making the computer do beyond what everyone else does with it.

                  If we need to teach artists and entrepreneurs computer science so they can use computers we have failed in the same way as if we had to teach them thermodynamics and electrical engineering so they can drive a car.

                  If we are going to say that the only tasks that someone should do with a general-purpose computing device are the set of things that an elite of programmers have permitted that they do, then we have failed them in a far worse way.

                  1. 1

                    It is not a threat to the free world.

                    Events with social media, misinformation, and the like beg to differ.

                    as if we had to teach them thermodynamics and electrical engineering so they can drive a car.

                    Even mechanics don’t need to know either of those, so the comparison feels forced.

                    1. 11

                      I have a computer science degree from a fairly well-known college. I disagree pretty strongly with a lot of my friends and colleagues who studied computer science at the same time and place I did, about exactly what specific things are the problems with social media, what information constitutes misinformation, and what the correct political or technological responses to these issues are.

                      Expecting people who study computer science to magically have the right answers to these fundamentally-political questions is like expecting everyone who knows how to take apart a car engine to magically have the right answers to public policy questions about what the right road tolls should be and whether it’s a good or bad idea to build a highway in a given location.

                      1. 1

                        No, of course I don’t expect any X to magically Y in any context.

                        What I do expect is that better-educated populations are harder to control at scale. Would teaching person X that computers can process information in ways A, B, or C make them realise Facebook is dangerous? No, of course not, just as teaching person X to read would not have broken the stranglehold of the church by itself.

                        1. 1

                          I would agree that better educated populations are harder to control.

                          However, I lump any “practical” computer science degree in with the least educational vocational schools and schools which don’t take the liberal arts seriously. A liberal arts degree is an education… at some schools. But not others. It’s just as useless as computer science degrees that don’t head for theory land and get lost there.

                          To the point where I tell most people who want to code professionally and wish to go to university that they ought to study anything but computer science.

                          1. 1

                            Oh, of course, most of most current University systems is a shit show. But most people wouldn’t get CS exposure at University anyway because most people don’t go to university. Anyway, veering far out of the topic space now I fear…

                      2. 3

                        Events with social media, misinformation, and the like beg to differ.

                        How strong is the evidence that learning computer science makes one less likely to fall for non-computer-science-related misinformation?

                        To the extent there’s a correlation, it seems like it’d be pretty hard to disentangle “learning computer science” (which is what we’re talking about getting everyone to do) from “being predisposed to learn computer science.”

                        1. 1

                          More that knowing anything about the power of computing would make people less likely to blindly give over all their data and attention to a single black-box program.

                          1. 11

                            Is that actually true, though? I know tons of people with CS degrees who are totally fine with Facebook, etc. At the same time, I know a bunch of people who aren’t terribly technical who are concerned about that stuff. I think it actually has much more to do with general civic awareness than technical skills.

                        2. 3

                          But are social media and misinformation really caused by computers? I’ve been thinking so for a while. It’s easier to get into bubbles. It makes sense.

                          But lately I am not so sure anymore. Looking a few decades back, you still had crazy terrorists, but you also had really crazy cults, mass suicides, and terrorists with horribly obscure beliefs.

                          At the same time it’s easier than ever to get opposing opinions, even in circumstances where one is watched a lot of the time.

                          I think to some degree it’s easier to look professional, but then judging things purely by looks has always been wrong. Yes, I think people have to learn not to blindly trust everything, but people had to do that with conspiracy theories in books, newspapers, and TV too.

                          Besides that, people really should not forget how history also changed society over these decades. The Vietnam war, the cold war, the wars in the Middle East: huge amounts of lies were told by governments across the world. There are good reasons for people to distrust governments.

                          We have a situation now where we see different media telling different stories, but we have had that before: Catholics vs. Protestants, different parties having their own newspapers.

                          Of course today media spreads faster, maybe too fast, evoking emotions like the other things I’ve mentioned did. Globalisation, everyone knowing English, and information “warfare” by smaller groups or individuals all are part of that.

                          However, with that history in mind, one needs to put things into perspective. And in my opinion it’s not too different from terrorist organizations of any kind having better weapons, because there are better weapons.

                          I think social media and a culture that doesn’t care about making stupid ideas public just make things that used to be thought in private and talked about in homes, bars, etc. more public. We get a real-time mirror of society, rather than needing investigative journalists to infiltrate personal meetings. Now you know how your cousin, your uncle, etc. actually think about the world.

                          I think that being public about all things also led to other changes. People dare to talk about their sexuality, about diseases, and about many other former taboos for the same reasons.

                          I think a lot of this is two sided. I certainly think social media is to blame for all sorts of things, but I do think a lot of what they are criticized for is making symptoms visible and most likely enhancing them. I don’t think root causes are often found in social media.

                      1. 15

                        I’ve been using Fish since 2015, and it’s been great. Fish not being POSIX compatible hasn’t been an issue in practice, though I don’t do a lot of shell scripting to begin with. If somebody is curious about my Fish configuration, you can find it here.

                        1. 4

                          For me the lack of sh-compatible syntax has been a real problem, to the point where I switched to bash at work. Fish does have the best user experience I’ve seen, but the need for specific Fish scripts is a problem, in particular with Nix or any tool that you need to load from your profile. There are wrappers like bass, but they don’t always work and have overhead.

                          1. 30

                            Just because fish is your interactive shell doesn’t mean that you need to start shell scripts with #!/usr/bin/env fish.

                            1. 9

                              I never understood what people are doing all day in their prompt that needs POSIX compatibility. The syntax to call commands is the same.

                              I think it is mostly a meme or a simple matter of running copy pasted scripts from the web and not understanding how interpreter resolution works or that you can manually define it.

                              1. 1

                                Not necessarily whole scripts. Sometimes you want to paste just a couple commands into the interactive prompt.

                              2. 3

                                But for stuff like Nix, don’t you have to run the setup scripts in your interactive shell with source or equivalent, so they can set environment variables and such?

                                1. 3

                                  In Unix, all child processes of a process will inherit the parent’s environment. You should be able to write all your scripts as POSIX compliant (or bash compliant) and run them from inside fish without an issue, as long as you specify the interpreter like so: bash myscript.sh

                                  1. 8

                                    The problem, if I understood it right (I’ve never used things like Nix) is that these are not things you’re supposed to run, but things you’re supposed to source. I.e. you source whatever.sh so that you get the right environment to do everything else. Sort of like the decade-old env.sh hack for closed-source toolchains and the like, which you source so that you can get all the PATH and LD_LIBRARY_PATH hacks only for this interactive session instead of polluting your .profile with them.

                                    1. 1

                                      I see, that makes sense. I guess I didn’t consider that, wonder how the activate script generated with a Python virtual environment would work with Fish. Even a relatively fancy .profile file might be incompatible with Fish.

                              3. 8

                                I usually just switch to bash when I need to run a command this way. And honesty I’m more annoyed at commands like these that modify your shell environment and thus force you to use a POSIX-compatible shell, than I am at fish for deliberately trying something different that isn’t POSIX.

                                1. 1

                                  Fortunately some commands are designed to output a shell snippet to be used with eval

                                  eval $(foo) # foo prints 'export VAR=bar'
                                  

                                  In that case you can pipe output of foo to Fish’s source

                                  foo | source
                                  
                                  1. 2

                                    No, that’s exactly what you can’t do, the code won’t be valid for source-ing (unless those commands specifically output csh-style rather than posix-style script)

                                    Though apparently these days fish does support the most common POSIX-isms

                                    1. 1

                                      I mean only the case where you set env variables (like luarocks path)

                                2. 4

                                  I also had problems with bass. It was too slow to run in my config.fish. However, I switched to https://github.com/oh-my-fish/plugin-foreign-env and it’s worked perfectly for me. And you don’t need oh-my-fish to use it — I installed it with the plugin manager https://github.com/jorgebucaran/fisher.

                                  1. 2

                                    Ah, I hadn’t seen this one; if it succeeds in setting up Nix then it’s party time!

                                    1. 3

                                      Not a fish user, but since you’re a Nix user I would also recommend checking out direnv which has fish support. For nix in direnv specifically I would also recommend something like nix-direnv (and several others) which caches and also gcroots environments so overheads are next to negligible when things remain unchanged.

                                    2. 1

                                      That looks good enough to make me want to try fish again. I had not seen it last time I tried fish. Thanks for pointing it out.

                                1. 4

                                  I wonder if there’s enough Kotlin-related content to justify creating a kotlin tag. Gotta admit it makes me kind of sad to see this tagged as java (especially since a big focus of the last several Kotlin releases has been support for non-JVM environments) but there’s no better alternative at the moment.

                                  1. 23

                                    “Selecting a programming language can be a form of premature optimization, so select Python because it is the optimal choice.”

                                    I do think people (especially on the junior end of the experience spectrum) spend way more time and energy on language choice than is useful. And sometimes it is due to concerns about performance that don’t matter in context. There is a good article to be written about that.

                                    But this article is just a “Python is good and you should definitely use it” advocacy piece.

                                    1. 9

                                      This is unhelpfully driven, in part, by our industry’s stubborn clinginess when it comes to technology. Every recruiter and every hiring manager I’ve ever met has not only asked me what languages I prefer, but has been taken aback to the point of hostility by questions like “for what?” and answers like “it depends”. If choosing the right language is indeed a form of premature optimisation, it’s hard to blame the people who do it: oftentimes it’s a career choice.

                                      That, in turn, is driven by many other factors, not least the fact that language complexity these days is mind-boggling. Even Python, which has the reputation of a simple language, is not that easy – your average codebase doesn’t use just Python “the language”, but also a myriad of conventions about what is and isn’t Python, all sorts of decorators of questionable usefulness and so on.

                                      It’s like every general-purpose language out there is slowly evolving to encompass several DSLs, to the point where you can’t just “know” Python, or C++, or Rust – you have to use all of it, full time, on a permanent basis, and follow all the important community blogs, and watch the conference talks, because the way you wrote code two years ago is no longer idiomatic.

                                      I, for one, am pretty hesitant to say I know Python, even though I’ve actually used it for a very long time, since before 2.x, in fact: truth is, even though it’s the scripting language I am most familiar with (I buried Perl more than 10 years ago), if you stick me in front of a Python source file picked at random from a major project, there’s about a 50% chance that it’ll be basically incomprehensible to me unless I google the hell out of it.

                                      1. 1

                                        I kinda feel that keeping up with the shifting package managers is more troublesome than keeping up with shifting idioms. However, I have been working in Python and JS shops, so I recognise that I may be something of an extreme case.

                                    1. 7

                                      The original bug report as to why:

                                      With Schools using Google Forms as a testing platform, students are able to use this shortcut to search through the source of the page, and determine the correct answers.

                                      Use case: Admin wants to prevent and stop students from using View-Source as a way to cheat during exams, state testing and quizzes

                                      1. 32

                                        Clearly, modifying the browser is the only possible technical approach that can stop people from finding test answers in the page sources of Google Forms. Google is otherwise completely powerless to prevent test answers from appearing in the form’s HTML.

                                        1. 15

                                          That’s an interesting thread. I’m surprised to see so few people saying “instead how about you fix these stupid websites to not embed this stuff in the page source.”

                                          View Source is one of the emblematic features of the open Web. Countless people got started with HTML (or picked up new tricks) by viewing the source of websites.

                                          1. 6

                                            View Source is one of the emblematic features of the open Web.

                                            No surprise that Chrome* is open to blocking it, then.

                                          2. 8

                                            Wait, are they serious?! Why on Earth would the answers even be in the HTML? That literally doesn’t make sense…

                                            1. 2

                                              I ran into this a while ago, doing some compulsory training for an employer.

                                              After digging around, it seems the quiz sections were transpiled from Flash into JavaScript, leading to trivially exposed answers in the source code, because the Flash app itself was entirely client-side and wasn’t validating the answers against a server.

                                          1. 21

                                              This hits on a bugbear of mine where people say “just do X and then just do Y, simple enough”; emphasis on the word “just”. It’s almost never just a matter of “just”, and a lot of seemingly trivial tasks end up down rabbit holes where the bulk of the venturing could have been avoided by a slightly lengthier planning process, or a “thinking more about it” process if one is averse to the concept of planning.

                                              Reminds me of the (unattributed?) saying:

                                            Weeks of coding can save you hours of planning

                                            1. 17

                                              To be fair, sometimes, hours of coding can save you weeks of planning as well.

                                              1. 5

                                                Wholly agreed. My own experience has unfortunately gravitated more towards the “just this and just that” mentality.

                                              2. 4

                                                You’re not alone - I reference this piece often when people overuse “just”. https://alistapart.com/blog/post/the-most-dangerous-word-in-software-development/

                                                1. 6

                                                  At some point it occurred to me that whenever I used the word “just” in a code review comment, it was a sign that I wasn’t being as constructive and helpful as possible. Now when I catch myself using that word, I stop and consider whether my comment could come off as condescending or flippant even if that wasn’t my conscious intent. The answer is “yes” often enough to keep me vigilant about the word.

                                                2. 2

                                                  Ha ha ha - had not heard that one. As a marketer, I always roll my eyes when people say (and they often do!) “can’t they just build that in, like, 2 hours?”

                                                1. 3

                                                  This article didn’t have enough information about the data set for me to form an opinion about whether the fix would actually have been the right way to solve it. The shape of the data is a huge input into the shape of its representation.

                                                  If they weren’t seeing the same value more than once or twice in one of the columns (e.g., because the spammer was running through a list of recipients, so each TO value only appeared once in the data set) then they may have been better off leaving that mostly-unique column in the main table and covering it with the multi-column unique index.

                                                  The main thrust of the article is right: data modeling is easy to get wrong especially when you’re just starting out. But the conclusion really ought to be, “Analyze your data and your access patterns to determine how to model it,” not, “Normalization makes queries faster.”

                                                  1. 11

                                                    I think the conclusion of the article was not about any of that at all. It was about it being okay to make mistakes or something.

                                                  1. 3

                                                    I think this article undersells tests. The critiques of tests all make sense, but the conclusion that you should toss out most of your test suite in favor of properties doesn’t seem right to me.

                                                    Until we have automated systems for proving that an implementation precisely matches a formal spec (or alternately, a general-purpose way to generate complete implementations of complex systems from formal specs) there will be implementation bugs that no amount of spec analysis can possibly uncover. Tests are not the best way to prove that a specification is correct, but they’re still a pretty good way to increase confidence that the code does what you expect it to, sanity-check that you haven’t broken existing functionality in the course of making a change, expose places where your implementation is awkward to interact with, and identify components whose implementations are unnecessarily coupled. The fact that tests can also increase confidence that the high-level spec is what you want is a bonus, but it’s rarely the primary motivation for writing tests.

                                                    We also actually understood the desired requirements here, what if we don’t even have a firm grasp on that?

                                                    Glad this line was included. In a lot of environments, this is the biggest problem by an overwhelming margin, but sadly it’s mostly a people problem, not a technical one.

                                                    1. 2

                                                      If I’m being hopeful: Implementing a query API to pull arbitrary data from an application database with knowledge about the layout of the data that lets me translate the queries into highly performant SQL.

                                                      If I’m being cynical: Reinventing GraphQL.

                                                      If I’m being selfish: Taking the opportunity to build something substantial with jOOQ’s new multiset operator which seems pretty nifty so far.
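
                                                        For anyone who hasn’t seen multiset: it nests a correlated subquery as a collection inside a single SQL query, so you can fetch parents together with their children without N+1 queries or manual deduplication. A rough sketch in Kotlin (assuming hypothetical jOOQ-generated AUTHOR and BOOK tables and a DSLContext named ctx, not code from this project):

                                                        import org.jooq.impl.DSL.multiset
                                                        import org.jooq.impl.DSL.select

                                                        // One row per author, each carrying a nested collection of that author's books,
                                                        // all produced by a single query.
                                                        val authorsWithBooks = ctx
                                                            .select(
                                                                AUTHOR.NAME,
                                                                multiset(
                                                                    select(BOOK.TITLE)
                                                                        .from(BOOK)
                                                                        .where(BOOK.AUTHOR_ID.eq(AUTHOR.ID))
                                                                ).`as`("books")
                                                            )
                                                            .from(AUTHOR)
                                                            .fetch()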

                                                      1. 5

                                                        Good article. But there is a bit of a tease at the end: footnote 1 speaks in glowing terms about advice the author got from Bonnie Eisenman about keeping meetings productive, but doesn’t say what the advice actually was. I looked over her blog and didn’t see any posts on that topic. The curiosity is killing me!

                                                        1. 14

                                                          John Earnest says on reddit (I agree):

                                                          Alternatively, look to APL (k, j, q), Smalltalk, Forth (factor, postscript), and Lisp: all of these languages offer their own take on uniform operator precedence. It’s always obvious to a reader, without any need to memorize. It’s also simpler for the machine to parse. Solve the problem by removing it.

                                                          Despite having written a C parser, I frequently get its precedence confused.

                                                          1. 9

                                                            I’ve hit bugs caused by people getting it wrong and (much more often) I’ve had to consult a table to find out which way it works. It’s pretty easy to remember that multiply and divide are higher precedence than add and subtract, since we learned it at school and were tested on it over many years. It’s much harder to remember the relative precedence of pre-increment and post-increment, or of bitwise-and and left-shift. I consider a language trying to make me remember these things to be a bit of a usability mistake and generally just put brackets everywhere.

                                                            In Verona, we’re experimenting with no operator precedence. a op1 b op2 c is a parse error. You can chain sequences of the same operator but any pair of different operators needs brackets. You can write (a op1 b op1 c) op2 d or a op1 b op1 (c op2 d) and it’s obvious to the reader what the order of application is. This is a bit more important for us because we allow function names to include symbol characters and allow any function to be used infix, so statically defining rules for application order of a append b £$%£ c would lead to confusion.

                                                            1. 5

                                                              just put brackets everywhere.

                                                              This is my go-to approach as well not just because I can’t remember some of the less-frequently-used precedence rules, but also because I assume the person maintaining the code after me will be similarly fuzzy on those details and making it explicit will make my code easier to understand. For that reason, I often add parentheses even in cases where I am quite certain of the precedence rules.

                                                              1. 2

                                                                Not all binary operators are associative, though.

                                                                1. 2

                                                                  That’s true. For a single operator, the evaluation order is exactly the same as for other expressions (left to right) so it should be easy to remember the rule, even if not all evaluation orders would give the same result.

                                                            1. 3

                                                              An accessibility gripe: the author’s use of emojis made it quite hard to read the tables with comparisons of different communication mechanisms. At least on MacOS, the emoji glyphs without skin-color modifiers are identically-colored faces with just a few pixels’ difference in the mouths. As someone with less-than-perfect vision, I had to crank my browser’s zoom level up to 200% to read that part of the article.

                                                              For an article about clear and concise communication, it struck me as pretty ironic. That information would have been more clearly conveyed as text or, if the author really wanted emoji, as different numbers of stars.

                                                              1. 5

                                                                But even if [the added costs] fell wholly on the purchaser, consumers would, I suspect, be willing to pay a few dollars more for a gadget if that meant reliable access to software for it—indefinitely.

                                                                That seems completely at odds with reality to me. I mean, okay, some consumers would be willing to do that. Clearly there’s a market for more expensive devices with higher-quality support. But judging by the market dominance of low-end devices, it seems to me a lot of consumers are far more eager to pay a few dollars less for a device that’s good enough for the moment.

                                                                And it probably won’t be “a few dollars more.” The purchase price of the device will basically have to fund a perpetual annuity that will cover the expected cost of keeping an engineering team proficient enough on an increasingly obsolete platform that they can provide the mandatory updates. And you can’t get there by just charging a few dollars for an annual support subscription, either: as the device wanes in popularity, the number of subscribers will drop too low to support the maintenance costs, and you’ll have to raise the subscription price for the remaining users. This will probably not go over well with people who expect “a few dollars more” to get them a limitless amount of engineering time over the span of decades.

                                                                And finally, are companies permitted to go out of business under this scheme? I’ve accumulated a bunch of spare hardware over the years, and a lot of the older bits of gear are made by companies that no longer exist. Who do I sue when the updates stop coming to me?

                                                                1. 7

                                                                  are companies permitted to go out of business under this scheme? I’ve accumulated a bunch of spare hardware over the years, and a lot of the older bits of gear are made by companies that no longer exist. Who do I sue when the updates stop coming to me?

                                                                  This would become a strategy: each product is its own company, the company folds when the product goes off the market.

                                                                  1. 8

                                                                    This is already a strategy employed by some real-estate developers to avoid liabilities. A new subdivision will be developed by This Specific Subdivision LLC, which builds it and then goes out of business. When, 10 years later, someone wants to sue over misrepresentation and/or corners having been cut, the company no longer exists.

                                                                1. 11

                                                                  Various hot takes for discussion:

                                                                  • The majority of software (>99.9%) doesn’t significantly benefit from formal methods, especially with the reliance on both plumbing together bits of garbage and also mixing together business and display logic.
                                                                  • The majority of businesses don’t care or aren’t interested in limiting their domains/growth in ways amenable to formal methods. This is, incidentally, why Haskell will forever be a niche language.
                                                                  • We’ve had some of these methods for decades (design by contract dates back to the 80s with Eiffel) and they haven’t stuck. Beyond the advent of faster computers with which to run analyses, I’m curious what changes would suddenly make programmers eat their vegetables–we still don’t do that with testing, which is far simpler than trying to pick the right types or model system interactions.
                                                                  • For maximum benefit, new systems need to be built on either vetted new code (vetted via analysis), or old code (vetted via years of development and testing). Unfortunately, old code is oftentimes too gnarly/involved to just slather in formal methods–say, for example, old Fortran or C numerical code…and the business case for “well, it already works, let’s rewrite it so we can prove it works!” is hard to make (and that’s assuming you can even find people with the domain expertise to do a rewrite safely).
                                                                  • So much software just isn’t worth writing well, given its lifetime. This is especially obvious in webdev. There is a transactional cost to using good, sound formal methods, and for some minor tasks it just isn’t worth paying. As mentioned above, most of what we do are those minor tasks.
                                                                  1. 2

                                                                    Agreed, but DbC plays nice with such a scenario.

                                                                    I use asserts all over the place when working with, aaah, “legacy” software.

                                                                    I analyse what I think the big ball of mud is doing, and then I executably document the results of that analysis as asserts, and then if I’m wrong with my analysis, I find out reliably at the point where I stuffed up.
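
                                                                      A minimal sketch of the idea (Kotlin here; the Order/Price types and the legacy function are hypothetical stand-ins for whatever the ball of mud actually is):

                                                                      data class Order(val items: List<String>)
                                                                      data class Price(val amount: Double)

                                                                      // Stand-in for the legacy code being characterised.
                                                                      fun legacyPricingEngine(order: Order): Price = Price(order.items.size * 9.99)

                                                                      fun priceWithAnalysisChecks(order: Order): Price {
                                                                          // Executable documentation of what I believe the legacy code guarantees;
                                                                          // a wrong assumption fails loudly here instead of corrupting state later.
                                                                          require(order.items.isNotEmpty()) { "analysis: orders reaching pricing always have items" }
                                                                          val price = legacyPricingEngine(order)
                                                                          check(price.amount >= 0.0) { "analysis: the engine never returns a negative price" }
                                                                          return price
                                                                      }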

                                                                    1. 2

                                                                      I agree - there’s a huge class of software for which even integration tests are pointless because the person asking for the software doesn’t know what they actually want.

                                                                      1. 3

                                                                        I’ve found myself in that situation pretty often over the years, and I don’t exactly disagree, but I think there’s still value in integration tests. An integration test can’t tell you whether the software is useful, but it can at least tell you that the software isn’t behaving in ways the programmers didn’t intend. If your integration test starts spewing database errors, it’s probably a sign of a bug even if your customer doesn’t know what end results they want.

                                                                        But I agree broadly that the less you know about what you’re trying to accomplish, the less useful it becomes to spend a lot of time writing exhaustive tests or machine-readable formal specifications.

                                                                    1. 2

                                                                        I can’t wait to try Shenandoah and ZGC in production (one of my services does over 1.5K RPS with G1). I moved over to Kotlin a long time ago and am now seeing the same features land in Java, and more JVM improvements get me excited and confident in the future of the JVM ecosystem.

                                                                      1. 12

                                                                        It really is an exciting time to be working in a JVM language. I too moved over to Kotlin a while back, but I still closely follow what’s going on in Java.

                                                                        My hunch is that a lot of people who currently dismiss Java and the JVM as slow bulky dinosaur tech are going to be shocked when some of the major upcoming changes get released. Loom (virtual threads) in particular should drive a stake through the heart of async/await or reactive-style programming outside a small set of niche use cases, without sacrificing any of the scalability wins. Valhalla (user-defined types with the low overhead of primitive types) and Panama (lightweight interaction between Java and native code) will, I suspect, make JVM languages a competitive option for a lot of applications where people currently turn to Python with C extensions.
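
                                                                          To make the Loom point concrete, a sketch of what the virtual-thread style looks like from Kotlin (using the Executors.newVirtualThreadPerTaskExecutor() API from the Loom preview builds; the task body and counts are placeholders): plain blocking code, no async/await or reactive plumbing, yet it scales to tens of thousands of concurrent tasks.

                                                                          import java.util.concurrent.Executors

                                                                          fun main() {
                                                                              // Each submitted task gets its own cheap virtual thread; blocking calls such as
                                                                              // Thread.sleep park the virtual thread without tying up an OS thread.
                                                                              Executors.newVirtualThreadPerTaskExecutor().use { executor ->
                                                                                  val futures = (1..10_000).map { i ->
                                                                                      executor.submit<String> {
                                                                                          Thread.sleep(100)   // ordinary blocking code
                                                                                          "result $i"
                                                                                      }
                                                                                  }
                                                                                  futures.forEach { println(it.get()) }
                                                                              }
                                                                          }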

                                                                        1. 2

                                                                          My hunch is that a lot of people who currently dismiss Java and the JVM as slow bulky dinosaur tech are going to be shocked when some of the major upcoming changes get released

                                                                          I agree with this re the JVM, but isn’t Java mostly releasing language-level changes that are just catch-up with things that have been commonplace elsewhere for years?

                                                                          1. 4

                                                                            That’s a fair point, sure.

                                                                            Maybe a better way to frame it is that as language changes roll out, it’ll get harder to point to Java and say, “That’s such an obsolete, behind-the-times language. It doesn’t even have thing X like the other 9 of the top 10 languages have had for years.”

                                                                            Of course, Java will never (and should never, IMO) be on the bleeding edge of language design; its designers have made a deliberate choice to let other languages prove, or fail to prove, the value of new language features. By design, you’ll pretty much always be able to point to existing precedent for anything new in Java, and it’ll never look as modern as brand-new languages. My point is more that I think the perception will shift from, “Java is obsolete and stagnant” to, “Java is conservative but modern.”

                                                                          2. 1

                                                                              My Android client app shares code with the backend (both are in Java). Android’s Java is at about the JDK 8+ level (https://developer.android.com/studio/write/java8-support-table). The backend is currently using JDK 11.

                                                                            So sharing the code between client and backend is becoming more challenging.

                                                                              I think that if I move the backend to JDK 17, it will be harder to share code (if I take advantage of JDK 17 features on the backend).

                                                                            I guess the solution is to move both backend and frontend to Kotlin… but that’s a lot of work without significant business value.

                                                                            1. 3

                                                                                Nothing prevents you from using JDK 17 at the Java 8 language feature level, essentially making any use of new features a compile error and making sure your compiler produces Java 1.8-compatible bytecode. That’s what we do in our library that needs to be JDK 8+ (but we use JDK 8 on the CI side to compile the final JARs to be on the safe side). Then you can run that code on JVM 17 on the server and take advantage of the JVM improvements (but not the new language features). We have decided to add Kotlin where it makes sense gradually instead of a full rewrite (e.g. when we’d want to use coroutines).
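
                                                                                A minimal sketch of one way to express that in a build.gradle.kts (Gradle Kotlin DSL, assuming a toolchain-based setup; your build may differ). Note that --release 8 also limits the visible platform API to JDK 8, not just the syntax level:

                                                                                plugins {
                                                                                    `java-library`
                                                                                }

                                                                                java {
                                                                                    toolchain {
                                                                                        // compile with a JDK 17 toolchain
                                                                                        languageVersion.set(JavaLanguageVersion.of(17))
                                                                                    }
                                                                                }

                                                                                tasks.withType<JavaCompile>().configureEach {
                                                                                    // reject post-Java-8 language features, emit 1.8 bytecode, and
                                                                                    // restrict the compile-time platform API to JDK 8
                                                                                    options.release.set(8)
                                                                                }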

                                                                              1. 2

                                                                                You could also just stick to 11. It’ll be supported for years.

                                                                          1. 26

                                                                            There are a lot of extensions that automatically select the ‘reject all’ or walk the list and decline them all. Why push people towards one that makes them agree? The cookie pop-ups are part of wilful misinterpretation of the GDPR: you don’t need consent for cookies, you need consent for tracking and data sharing. If your site doesn’t track users or share data with third parties, you don’t need a pop up. See GitHub for an example of a complex web-app that manages this. Generally, a well-designed site shouldn’t need to keep PII about users unless they register an account, at which point you can ask permission for everything that you need to store and explain why you are storing it.

                                                                            Note also that the GDPR is very specific about requiring informed consent. It is not at all clear to me that most of these pop-ups actually meet this requirement. If a user of your site cannot explain exactly what PII handling they have agreed to then you are not in compliance.

                                                                            1. 4

                                                                              Can’t answer this for other people, but I want tracking cookies.

                                                                              When people try to articulate the harm, it seems to boil down to an intangible “creepy” feeling or a circular “Corporations tracking you is bad because it means corporations are tracking you” argument that begs the question.

                                                                              Tracking improves the quality of ad targeting; that’s the whole point of the exercise. Narrowly-targeted ads are more profitable, and more ad revenue means fewer sites have to support themselves with paywalls. Fewer paywalls mean more sites available to low-income users, especially ones in developing countries where even what seem like cheap microtransactions from a developed-world perspective would be prohibitively expensive.

                                                                              To me, the whole “I don’t care if it means I have to pay, just stop tracking me” argument is dripping with privilege. I think the ad-supported, free-for-all-comers web is possibly second only to universal literacy as the most egalitarian development in the history of information dissemination. Yes, Wikipedia exists and is wonderful and I donate to it annually, but anyone who has run a small online service that asks for donations knows that relying on the charity of random strangers to cover your costs is often not a reliable way to keep the bills paid. Ads are a more predictable revenue stream.

                                                                              Tracking cookies cost me nothing and benefit others. I always click “Agree” and I do it on purpose.

                                                                              1. 3

                                                                                ‘an intangible “creepy” feeling’ is a nice way of describing how it feels to find out that someone committed a serious crime using your identity. There are real serious consequences of unnecessary tracking, and it costs billions and destroys lives.

                                                                                Also, I don’t want ads at all, and I have no interest in targeted ads. If I want to buy things I know how to use a search bar, and if I don’t know I need something, do I really need it? On a website where I frequently shop I might even enable tracking cookies, but I don’t want to blanket-enable them on all sites.

                                                                                1. 4

                                                                                  How does it “costs billions and destroys lives”?

                                                                                  1. 2

                                                                                    https://www.ftc.gov/system/files/documents/reports/consumer-sentinel-network-data-book-2020/csn_annual_data_book_2020.pdf see page 8. This is in the US alone and does not take the other 7.7b people in the world into account. I will admit it is not clear what percentage of fraud and identity theft is due to leaked or hacked data from tracking cookies, so this data is hardly precise for the current discussion, but I think it covers the question of ‘how’. If you want more detail, just google the individual categories in the report under fraud and identity theft.

                                                                                    Also see this and this

                                                                                    But I covered criminal prosecution in the same sentence you just quoted from my reply above so clearly you meant ‘other than being put in prison’. Also, people sometimes die in prison, and they almost always lose their jobs.

                                                                                    1. 4

                                                                                      The first identity theft story doesn’t really detail what exactly happened surrounding the ID theft, and the second one is about a childhood acquaintance stealing the man’s ID. It doesn’t say how exactly either, and neither does that FTC report as far as I can see: it just lists ID theft as a problem. Well, okay, but colour me skeptical that this is caused by run-of-the-mill adtech/engagement tracking, which is what we’re talking about here. Not that I think it’s unproblematic, but it’s a different thing and I don’t see how they’re strongly connected.

                                                                                      The NSA will do what the NSA will do; if we had no Google then they would just do the same. I also don’t think it’s as problematic as often claimed, since agencies such as the NSA also do necessary work. It really depends on the details of who did what, and why (but the article doesn’t mention that, and it’s probably not public anyway; I’d argue lack of oversight and trust is the biggest issue here, rather than the actions themselves, but this is veering very off-topic).

                                                                                      In short, I feel there’s a sore lack of nuance here and confusion between things that are (mostly) unconnected.

                                                                                      1. 2

                                                                                        Nevertheless all this personal data is being collected, and sometimes it gets out of the data silos. To pretend that it never causes any harm just because some stranger on the internet failed to come up with a completely airtight example case in 5 minutes of web searching is either dishonest or naive. If you really want to know, you can do the research yourself and find real cases. If you would rather just feel comfortable with your choice to allow all tracking cookies that is also totally fine. You asked how, I believe my answer was sufficient and roughly correct. If you feel the need to prove me wrong that is also fine, and I will consider any evidence you present.

                                                                                        1. 2

                                                                                          The type of “personal data” required for identity theft is stuff like social security numbers, passport numbers, and that kind of stuff. That’s quite a different sort of “personal data” than your internet history/behaviour.

                                                                                          To pretend that it never causes any harm just because some stranger on the internet failed to come up with a completely airtight example case in 5 minutes of web searching is either dishonest or naive. If you really want to know, you can do the research yourself and find real cases.

                                                                                          C’mon man, if you’re making claims as large as “it costs billions and destroys lives” then you should be prepared to back them up. I’m not an expert, but I have spent over ten years paying close attention to this kind of thing, and I don’t see how these claims bear out; still, I’m always willing to learn something new, which is why I asked the question. Coming back with “do your own research” and “prove me wrong then!” is rather unimpressive.

                                                                                          If you would rather just feel comfortable with your choice to allow all tracking cookies that is also totally fine.

                                                                                          I don’t, and I never said anything which implied it.

                                                                                          If you feel the need to prove me wrong that is also fine, and I will consider any evidence you present.

                                                                                          I feel the need to understand reality to the best of my ability.

                                                                                          1. 1

                                                                                            I feel the need to understand reality to the best of my ability.

                                                                                            Sorry I was a bit rude in my wording. There is no call for that. I just felt like I was being asked to do a lot of online research for a discussion I have no real stake in.

                                                                                            GDPR Article 4 Paragraph 1 and GDPR Article 9 Paragraph 1 specify what kind of information requires asking permission to collect. It is all pretty serious stuff. There is no mention of ‘shopping preferences’. Social security numbers and passport numbers are included, as well as health data and things that are often grounds for discrimination, like sexuality, religion, or political affiliation. Also included is any data that can be used to uniquely identify you as an individual (without which aggregate data is much harder to abuse), which covers your IP address and your real name.

                                                                                            A lot of sites just ask permission to cover their asses and don’t need to; this, I agree, is annoying. But if a site is giving you a list of cookies to say yes or no to, they probably know what they are doing and are collecting the above information about you. If you are a white, heterosexual, English-speaking male then a lot of that information probably seems tame enough, but for many people having it collected online is dangerous in quite real and tangible ways.

                                                                                    2. 3

                                                                                      I am absolutely willing to have my view on this changed. Can you point me to some examples of serious identity theft crimes being committed using tracking cookies?

                                                                                      1. 2

                                                                                        See my reply to the other guy above. The FTC data does not specify where the hackers stole the identity information, so it is impossible for me to say what percentage is genuinely caused by tracking cookies. The law that mandates these banners refers to information that can be used to identify individuals. Even if it has never once happened that hacked or leaked cookie data was used for fraud or identity theft, it is a real danger. I would love to supply concrete examples, but I have a full-time job and a life, and if your claim is “Sure all this personal data is out there on the web, and yes sometimes it gets out of the data silos, but I don’t believe anyone ever used it for a crime”, then I don’t feel it’s worth my time spending hours digging out case studies and court records to prove you wrong. Having said that, if you do some searching to satisfy your own curiosity and find anything definitive, I would love to hear about it.

                                                                                      2. 2

                                                                                        someone committed a serious crime using your identity

                                                                                        because of cookies? that doesn’t follow

                                                                                      3. 1

                                                                                        Well this is weird. I think it’s easy to read that and forget that the industry you’re waxing lyrical about is worth hundreds of billions; it’s not an egalitarian development, it’s an empire. Those small online services that don’t want to rely on asking for donations aren’t billion-dollar companies, get a deal entirely on someone else’s terms, and are almost certainly taken advantage of for the privilege.

                                                                                        It also has its own agenda. The ability to mechanically assess “ad-friendliness” already restricts ad-supported content producers to what corporations are happy to see their name next to. I don’t want to get too speculative on the site, but there’s such a thing as an ad-friendly viewer too, and I expect that concept to become increasingly relevant.

                                                                                        So, tracking cookies. They support an industry I think is a social ill, so I’d be opposed to them on that alone. But I also think it’s extremely… optimistic… to think being spied on will only ever be good for you. Advertisers already leave content providers in the cold when it’s financially indicated—what happens when your tracking profile tells them you’re not worth advertising to?

                                                                                        I claim the cost to the individual is unknowable. The benefit to society is Cambridge Analytica.

                                                                                      4. 2

                                                                                        The cookie law is much older than GDPR. In the EU you do need consent for cookies. It is a dumb law.

                                                                                        1. 11

                                                                                          In the EU you do need consent for cookies. It is a dumb law.

                                                                                          This is not true. In the EU you need consent for tracking, whether or not you do that with cookies. It has to be informed consent, which means that the user must understand what they are agreeing to. As such, a lot of the cookie consent UIs are not GDPR compliant. Max Schrems’ company is filing complaints about non-compliant cookie banners.

                                                                                          If you only use functional cookies, you don’t need to ask for consent.

                                                                                          1. 3

                                                                                            https://eur-lex.europa.eu/legal-content/EN/TXT/HTML/?uri=CELEX:31995L0046 concerns consent for the processing of user data.

                                                                                            https://eur-lex.europa.eu/legal-content/EN/TXT/HTML/?uri=CELEX:32002L0058 from 2002 builds on the 1995 directive, bringing in “cookies” explicitly. Among other things it states “The methods for giving information, offering a right to refuse or requesting consent should be made as user-friendly as possible.”

                                                                                            In 2009 https://eur-lex.europa.eu/legal-content/EN/TXT/HTML/?uri=CELEX:32009L0136 updated the 2002 directive, closing a few loopholes.

                                                                                            The Do-Not-Track header should have been enough signal to cut down on cookie banners (and a few websites are sensible enough to interpret it as a universal rejection of unnecessary data storage), but apparently that was too easy on users? It went as quickly as it came after Microsoft defused it by enabling it by default, with parts of adtech then arguing that the header no longer signified an informed decision and could therefore be ignored.
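
                                                                                            For what it’s worth, honoring DNT was never technically hard; here is a rough sketch of the server-side check, assuming the javax.servlet API (the cookie name and helper class are invented for illustration):

                                                                                                import javax.servlet.http.Cookie;
                                                                                                import javax.servlet.http.HttpServletRequest;
                                                                                                import javax.servlet.http.HttpServletResponse;
                                                                                                import java.util.UUID;

                                                                                                // Sketch: treat "DNT: 1" as a blanket refusal of all non-essential storage.
                                                                                                public final class DntAwareCookies {
                                                                                                    static void maybeSetAnalyticsCookie(HttpServletRequest request, HttpServletResponse response) {
                                                                                                        boolean doNotTrack = "1".equals(request.getHeader("DNT"));
                                                                                                        if (!doNotTrack) {
                                                                                                            // Hypothetical analytics cookie; only set when the user has not opted out.
                                                                                                            response.addCookie(new Cookie("analytics_id", UUID.randomUUID().toString()));
                                                                                                        }
                                                                                                        // With DNT set: only strictly necessary cookies, nothing to ask consent for.
                                                                                                    }
                                                                                                }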

                                                                                            If banners are annoying it’s because they’re a deliberate dark pattern, see https://twitter.com/pixelscript/status/1436664488913215490 for a particularly egregious example: A direct breach of the 2002 directive that is typically brought up as “the cookie law” given how it mandates “as user-friendly as possible.”

                                                                                            1. 2

                                                                                              I don’t understand what you’re trying to say. Most cookie banners on EU sites are not at all what I’d call a dark pattern. They’re just trying to follow the law. It is a stupid law which only trained people to click agree on all website warnings, making GDPR less effective. Without the cookie law, dark patterns against GDPR would be less effective.

                                                                                              1. 3

                                                                                                The dark pattern pgeorgi refers to is that on many cookie banners, the “Refuse all” button requires more clicks and/or more careful looking than the “Accept all” button. People who have trained themselves to click “Accept” mostly chose “Accept” because it is easier — one click on a bright button, and done. If “Refuse all” were equally easy to choose, more people would train themselves to always click “Refuse”.

                                                                                                Let’s pretend for a moment the cookie law no longer exists. A website wants to set a tracking cookie. A tracking cookie, by definition, constitutes personally identifiable information (PII) – as long as the cookie is present, you can show an ad to that specific user. The GDPR recognizes 6 different conditions under which processing PII is lawful.

                                                                                                The only legal ground to set a tracking cookie for advertising purposes is (a) If the data subject has given consent to the processing of his or her personal data. I won’t go over every GDPR ground, but suffice it to say that tracking-for-advertising-purposes is not covered by

                                                                                                • (b) To fulfil contractual obligations with a data subject;
                                                                                                • nor is it covered by (f) For the legitimate interests of a data controller or a third party, unless these interests are overridden by interests of the data subject.

                                                                                                So even if there were no cookie law, GDPR ensures that if you want to set a tracking cookie, you have to ask the user.

                                                                                                Conversely, if you want to show ads without setting tracking cookies, you don’t need to get consent for anything.

                                                                                                1. 2

                                                                                                  I feel the mistake with the whole “cookie law” thing is that it focuses too much on the technology rather than what people/companies are actually doing. That is, there are many innocent non-tracking reasons to store information in a browser that’s not “strictly necessary”, and there are many ways to track people without storing information in the browser.

                                                                                                2. 1

                                                                                                  I’m not saying that dark patterns are employed on the banners. The banners themselves are dark patterns.

                                                                                                  1. 1

                                                                                                    The banners often come from freely available compliance packages… It’s not dark, it’s just lazy and badly thought out, like the law itself.

                                                                                                    1. 1

                                                                                                      What about the law do you think is badly thought out?

                                                                                                      1. 1

                                                                                                        The cookie part of the ePrivacy Directive is too technological. You don’t need consent, but you do have to inform the user of cookie storage (or localStorage, etc.) no matter what you use it for. It’s unnecessary information, and it doesn’t protect the user. These are the cookie banners that only let you choose “I understand”, because they only store strictly necessary cookies (or any kind of cookie before GDPR in 2016).

                                                                                                        GDPR is the right way to do it. The cookie part of EPR should have been scrapped with GDPR. That would make banners that do ask for PII storage consent stand out more. You can’t make your GDPR banner look like an EPR information banner if EPR banners aren’t a thing.

                                                                                            2. 2

                                                                                              Usually when I see the cookie consent popup I haven’t shared any personal information yet. There is what the site has from my browser and network connection, but I trust my browser, uBlock origin and DDG privacy tools to block various things and I use a VPN to somewhere random when I don’t want a site to know everything it can about my network location.

                                                                                              If I really do want to share personal info with a site, I’ll be very careful about what I provide and what I agree to, but I’m also realistic in that I know there are no guarantees.

                                                                                              1. 8

                                                                                                If you’re using a VPN and uBlock origin, then your anonymity set probably doesn’t contain more than a handful of people. Combined with browser fingerprinting, it probably contains just you.
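
                                                                                                To put rough numbers on that: each attribute a tracker can observe carries about −log2(p) bits of identifying information, and independent attributes add up. A back-of-envelope sketch with invented (illustrative, not measured) frequencies:

                                                                                                    public final class AnonymitySetEstimate {
                                                                                                        private static final double WEB_USERS = 4_000_000_000.0; // rough global figure

                                                                                                        public static void main(String[] args) {
                                                                                                            // Fraction of users sharing each observable trait (made-up numbers for illustration).
                                                                                                            double[] shareOfUsers = {
                                                                                                                1.0 / 10,       // exits from this particular VPN provider's address ranges
                                                                                                                1.0 / 50,       // runs uBlock Origin with this combination of filter lists
                                                                                                                1.0 / 30,       // this exact browser + OS version at this point in time
                                                                                                                1.0 / 200_000,  // full fingerprint: fonts, canvas, screen size, plugins…
                                                                                                            };
                                                                                                            double bits = 0;
                                                                                                            for (double p : shareOfUsers) {
                                                                                                                bits += -(Math.log(p) / Math.log(2)); // surprisal of this attribute, in bits
                                                                                                            }
                                                                                                            // Expected number of web users who look identical under all of these attributes.
                                                                                                            System.out.printf("≈ %.1f bits, anonymity set ≈ %.1f people%n",
                                                                                                                    bits, WEB_USERS / Math.pow(2, bits));
                                                                                                        }
                                                                                                    }

                                                                                                Anywhere in that neighbourhood the expected set size is roughly one browser, which is the point being made above.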

                                                                                                1. 2

                                                                                                  Should I be concerned about that? I’m really not sure I have properly thought through any threats from the unique identification that comes from that. Do you have any pointers to how to figure out what that might lead to?

                                                                                                  1. 9

                                                                                                    The point of things like the GDPR and so on is to prevent people assembling large databases of correlated knowledge that violate individual privacy. For example, if someone tracks which news articles you read, they have a good first approximation of your voting preferences. If they correlate it with your address, they can tell if you’re in a constituency where their candidate may have a chance. If you are, they know the issues that are important to you and so can target adverts towards you (including targeted postal adverts if they’re able to get your address, which they can if they share data with any company that’s shipped anything physical to you) that may influence the election.

                                                                                                    Personally, I consider automated propaganda engines backed by sophisticated psychological models to be an existential threat to a free society that can be addressed only by some quite aggressive regulation. Any unique identifier that allows you to be associated with the kind of profile that these things construct is a problem.

                                                                                                  2. 2

                                                                                                    Do you have a recommendation?

                                                                                                2. 2

                                                                                                  The problem with rejecting all the tracking is that without it most ad networks will serve you the worst/cheapest untargeted adverts which have a high chance of being a vector for malware.

                                                                                                  So if you reject the tracking you pretty much have to run an ad blocker as well to protect yourself. Of course, if you are running an ad blocker then the cookies aren’t going to make much difference either way.

                                                                                                  1. 1

                                                                                                    I don’t believe it makes any difference whether you agree or disagree? The goal is just to make the box go away.

                                                                                                    1. 2

                                                                                                      Yes. If I agree and they track me, they are legally covered. If I disagree and they track me then the regulator can impose a fine of up to 5% of their annual turnover. As a second-order effect: if aggregate statistics say 95% of people click ‘agree’ then they have no incentive to reduce their tracking, whereas if aggregate statistics say ‘10% leave the page without clicking either, 50% click disagree’ then they have a strong case that tracking will lose them business and this will impact their financial planning.

                                                                                                  1. 1

                                                                                                    Always good to see more options out there, but why would I choose this over SQLite3 for my “SQL database engine as a library” needs?

                                                                                                    1. 1

                                                                                                      Presumably you use this to implement a SQL frontend for X backend.

                                                                                                      1. 1

                                                                                                        I think this is a bit of a lower-level toolkit that would let you build a thing that uses SQL but (for example) queries off some random datastore. One example in the readme is “SQL database from your google sheets”.

                                                                                                        I could see a good use case in something like Stripe Sigma, where they have a SQL querying layer that probably isn’t directly interfacing with their primary DBs.

                                                                                                        I think in reality Stripe Sigma is actually copying data over to a read-only secondary DB though…
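
                                                                                                        To make the “SQL frontend over an arbitrary backend” idea above concrete, here is a very rough sketch; the interface names are invented for illustration and are not the library’s actual API:

                                                                                                            import java.util.Iterator;
                                                                                                            import java.util.List;
                                                                                                            import java.util.Map;

                                                                                                            // Invented interfaces, purely to show the shape of a pluggable SQL engine:
                                                                                                            // the engine handles parsing, planning and execution; the application supplies row sources.
                                                                                                            interface TableProvider {
                                                                                                                List<String> columns();               // schema the engine plans against
                                                                                                                Iterator<Map<String, Object>> scan(); // full scan; filters/joins happen in the engine
                                                                                                            }

                                                                                                            // The backend can be anything that yields rows: a REST API, a Google Sheet, a log file…
                                                                                                            final class SheetTable implements TableProvider {
                                                                                                                @Override public List<String> columns() { return List.of("id", "name", "amount"); }
                                                                                                                @Override public Iterator<Map<String, Object>> scan() {
                                                                                                                    // A real adapter would page through the Sheets API; one canned row keeps this self-contained.
                                                                                                                    return List.of(Map.<String, Object>of("id", 1, "name", "alice", "amount", 42)).iterator();
                                                                                                                }
                                                                                                            }

                                                                                                            // Hypothetical usage: register the table with the engine, then hand it SQL text.
                                                                                                            //   engine.register("payments", new SheetTable());
                                                                                                            //   engine.query("SELECT name, SUM(amount) FROM payments GROUP BY name");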