1. 2

    The current data points used for generating fingerprints are: user agent, screen print, color depth, current resolution, available resolution, device XDPI, device YDPI, plugin list, font list, local storage, session storage, timezone, language, system language, cookies, canvas print
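
    For a rough sense of how a page collects several of these values, here is a minimal JavaScript sketch; the property reads are standard browser APIs, and the final step of hashing them into a single fingerprint string is omitted:

    // Sketch: reading a few of the listed data points in the browser.
    const data = {
      userAgent: navigator.userAgent,
      language: navigator.language,
      colorDepth: screen.colorDepth,
      currentResolution: `${screen.width}x${screen.height}`,
      availableResolution: `${screen.availWidth}x${screen.availHeight}`,
      timezone: Intl.DateTimeFormat().resolvedOptions().timeZone,
      cookiesEnabled: navigator.cookieEnabled,
      hasLocalStorage: typeof window.localStorage !== "undefined",
    };

    // "Canvas print": draw fixed text and read the pixels back; small rendering
    // differences across GPUs, drivers, and installed fonts make the resulting
    // data URL surprisingly distinctive.
    const canvas = document.createElement("canvas");
    const ctx = canvas.getContext("2d");
    ctx.font = "14px Arial";
    ctx.fillText("fingerprint test", 2, 16);
    data.canvasPrint = canvas.toDataURL();

    console.log(data); // a real library would hash this object into one string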

    Curious if a browser plugin that randomizes or obfuscates these exists.

    1. 5

      The Tor browser (which is a set of Firefox configurations + extensions) blocked this successfully for me.

      1. 5

        The Tor browser does the best possible thing: it gives everyone the same UA, resolution, etc. And more importantly, it picks the most common values that are observed on the web for those. Every Tor browser user looks like the most statistically average web user in the world.

      2. 5

        Firefox has privacy.resistFingerprinting, which I’ve used reasonably successfully. Sometimes it breaks sites that display time (e.g. Gmail); other times it breaks in bigger ways, e.g. when writing to a <canvas> element. So it’s not uncommon for me to need to temporarily disable it on a one-off basis.
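
        If you want the pref to stick across profiles or machines, it can also be set from a user.js file in the profile directory; this is just the standard about:config preference written out:

        // user.js in the Firefox profile directory
        user_pref("privacy.resistFingerprinting", true);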

        1. 3

          I’m running Firefox from the Debian repos with essentially all the privacy settings enabled, as well as a bunch of extensions for fingerprint blocking, tracker blocking, etc., and it seems to have stopped this site from doing its tricks :)

          1. 1

            Brave has something builtin AFAIK

            1. 2

              I temporarily installed Brave just to test this, then removed it because I find other things about it worrisome. But it did successfully block this specific site from identifying me. Vanilla Firefox did not block it. Tor Browser blocked it successfully. So did Vivaldi.

              1. 1

                What were the worrisome parts? Maybe I can evaluate it too.

                1. 5

                  They have, in the past, decided it was OK to inject content into websites for their own financial gain. Here’s an example. This is related. Their administration of the “Brave Rewards” program (stripping ads from sites, showing their own stuff, and holding payments until the sites step up and ask for them) is also a little disturbing if less likely to be privacy-violating.

                  In short, if I want an alternate Blink-based thing, I think Vivaldi is less likely to have a profit motive where they benefit from compromising my interests. And if I want something really privacy-focused, I don’t think a Blink thing is likely the smart play anyway. So there’s no upside to make me want to keep Brave around given what they’ve shown me.

          1. 14

            Apparently the “Submit Story” form has eaten the .md extension from the link and I didn’t notice it. It should be readable in pretty-printed Markdown as well by re-adding it. Would some moderator edit the link, please? :)

            Edit: link to the pretty-printed version: https://write.as/bpsylevc6lliaspe.md

            1. 3

              That’s neat—didn’t know write.as had that feature.

            1. 5

              Similarly, I have a ~/.bash.local file which allows for machine-specific config (it’s .gitignored) and overrides by being sourced last: https://github.com/Pinjasaur/dotfiles/blob/193df781b46e1f7e7a556f386172b76f067adcd9/.bash_profile#L28-L32

              1. 2

                I have a slightly more complex setup. I put all of my bash config under ~/.config/bash, which is in a git repo. I have a helper function that sources files matching the patterns ${XDG_CONFIG_HOME}/bash/$1/$2 and ${XDG_CONFIG_HOME}/bash/$1/$2.d/*. I call this function with hosts plus the output of hostname, and with systems plus the output of uname. This lets me have either individual files or directories full of files inside ~/.config/bash/hosts for individual machines, and the same in systems for things that apply to every machine I have running a specific OS (e.g. FreeBSD, Linux, macOS, with a special case in Linux that tries again with Linux-WSL if it detects that it’s running in WSL). A rough sketch of the helper is below.

                This means all of my configs for all machines are in the same git repo and don’t get lost if anything happens to a particular machine.
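
                A minimal sketch of such a helper, assuming it is called source_bash_part (the function name and exact layout are illustrative):

                # Source ${XDG_CONFIG_HOME}/bash/$1/$2 and $1/$2.d/* if they exist.
                # The function name is made up for illustration.
                source_bash_part() {
                  local base="${XDG_CONFIG_HOME:-$HOME/.config}/bash/$1/$2"
                  [ -f "$base" ] && . "$base"
                  if [ -d "$base.d" ]; then
                    local f
                    for f in "$base.d"/*; do
                      [ -f "$f" ] && . "$f"
                    done
                  fi
                }

                # Per-host and per-OS config, as described above:
                source_bash_part hosts "$(hostname)"
                source_bash_part systems "$(uname)"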

                1. 2

                  I also have a .aliases-local alongside the local bash file, and check for the existence of both before sourcing them in my dotfiles’ bashrc.

                  1. 1

                    Same!

                    if test -e ~/local.sh
                      source ~/local.sh
                    end
                    
                  1. 4

                    I’ve been running a minimal setup with a MikroTik hAP AC2. Related to your Pi-hole comment, I added some NAT firewall config to redirect port 53 requests to the local Pi-hole. Full disclosure: it’s approaching 2 years since I dove down that rabbit hole, so the solution I found may be out of date now.
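
                    For anyone headed down the same hole, the usual approach is a dstnat rule along these lines (RouterOS syntax from memory; the Pi-hole address is a placeholder and newer RouterOS versions may need tweaking):

                    # Redirect LAN DNS queries to the Pi-hole at 192.168.88.10 (placeholder),
                    # skipping traffic from the Pi-hole itself to avoid a loop.
                    /ip firewall nat
                    add chain=dstnat protocol=udp dst-port=53 src-address=!192.168.88.10 action=dst-nat to-addresses=192.168.88.10 to-ports=53
                    add chain=dstnat protocol=tcp dst-port=53 src-address=!192.168.88.10 action=dst-nat to-addresses=192.168.88.10 to-ports=53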

                    1. 2

                      Shipping an update to my blog that I’ve been wanting to do for a while: better UI theming, including honoring the prefers-color-scheme CSS media query. I did a basic implementation a few years back, but the light/dark themes weren’t particularly well thought out, nor did it honor prefers-color-scheme. I wrote about it, for the curious.
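
                      The core of that kind of theming is pleasantly small; a minimal CSS sketch (the custom property names are made up):

                      /* Light theme by default, dark theme when the OS/browser asks for it. */
                      :root {
                        --bg: #ffffff;
                        --fg: #222222;
                      }

                      @media (prefers-color-scheme: dark) {
                        :root {
                          --bg: #1b1b1b;
                          --fg: #e6e6e6;
                        }
                      }

                      body {
                        background: var(--bg);
                        color: var(--fg);
                      }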

                      1. 2

                        I’m on week 2 of a sabbatical, so doing some of the following:

                        • driving school
                        • playing some worker placement board games (just got through a solitaire round of Viticulture, what a fun game)
                        • building out a digital version of Everdell for “practice” at doing a client-server thing that isn’t some CRUD app
                        • trying to fix my Emacs setup (somehow leading me to write a patch for lsp-mode at the moment...)

                        1. 1

                          Curious to hear more about driving school. Any particular topics? I’ve always thought it would be cool to attend one of those rally school courses.

                        1. 5

                          The + selector is one I found late, which is a shame because it’s very useful. Especially when trying to style prose.

                          1. 5

                            I remember hearing about it originally in the context of a “lobotomized owl” selector: * + *

                            Still brings a smile to my face. :^]
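
                            For anyone who hasn’t run into it, a small illustrative example: the adjacent-sibling selector only matches an element that directly follows another, which makes it handy for spacing prose without a stray margin on the first element.

                            /* Every paragraph that directly follows another paragraph gets a top margin. */
                            p + p {
                              margin-top: 1em;
                            }

                            /* The "lobotomized owl": every element that follows any sibling gets spacing,
                               regardless of tag. */
                            .prose * + * {
                              margin-top: 1.5em;
                            }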

                          1. 2

                            Semi-related: Is there anything that you’d recommend for unit testing bash scripts?

                            1. 5

                              Noah’s Mill, but I’m also a fan of ryes.

                              1. 3

                                I’ve used BATS and it’s been a good experience. If you install via Homebrew just make sure to do brew install bats-core and not bats as the latter is an older, unmaintained release.
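
                                For a feel of what a test looks like, a tiny made-up example in BATS syntax (assumes a ./greet.sh that echoes “hello, <name>”); run it with bats greet.bats:

                                #!/usr/bin/env bats

                                @test "greets by name" {
                                  run ./greet.sh world
                                  [ "$status" -eq 0 ]
                                  [ "$output" = "hello, world" ]
                                }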

                                1. 2

                                  shunit2

                                  1. 1

                                    (I’m the blog post author)

                                    I’ve not actually tried testing bash scripts - once they get past a fairly simple level I usually replace them with a small Go binary.

                                    1. 1

                                      I’ve looked into BATS, but the only time I bothered testing anything bash, I just ended up doing simple mocking like: https://github.com/adedomin/neo8ball-irc/blob/master/test/test.sh

                                      of course this is testing something which would probably be considered a strange, if not mental, use of bash.

                                    1. 24

                                      Since it’s a medium post with a clickbait title here’s a TLDR:

                                      While attempting to hack PayPal with me during the summer of 2020, Justin Gardner (@Rhynorater) shared an interesting bit of Node.js source code found on GitHub.

                                      The code was meant for internal PayPal use, and, in its package.json file, appeared to contain a mix of public and private dependencies — public packages from npm, as well as non-public package names, most likely hosted internally by PayPal. These names did not exist on the public npm registry at the time.

                                      The idea was to upload my own “malicious” Node packages to the npm registry under all the unclaimed names, which would “phone home” from each computer they were installed on.

                                      Apparently, it is quite common for internal package.json files, which contain the names of a javascript project’s dependencies, to become embedded into public script files during their build process, exposing internal package names. Similarly, leaked internal paths or require() calls within these files may also contain dependency names. Apple, Yelp, and Tesla are just a few examples of companies who had internal names exposed in this way.

                                      This type of vulnerability, which I have started calling dependency confusion, was detected inside more than 35 organizations to date, across all three tested programming languages.

                                      Feels weird and scary that this had always been possible! Another incident to add to the “package management is solved” meme. Great article.

                                      1. 10

                                        public packages from npm, as well as non-public package names, most likely hosted internally by PayPal.

                                        Even if you’re not using npm’s organization feature to host your modules, you probably want to use names scoped to an npm account or organization you control, so others can’t publish packages with matching names to the public registry.

                                        That said, dependency managers probably shouldn’t be running arbitrary code on users’ machines during installation, as in the case with the preinstall script used in this example. Unfortunately, this was reported back in 2016 (VU#319816) and nothing came of it.
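
                                        For anyone who hasn’t seen the mechanism, the hook is just an entry in package.json; a sketch like the following (package name, version, and script are made up) is enough to run arbitrary code the moment someone runs npm install, which is why --ignore-scripts keeps coming up as a partial mitigation:

                                        {
                                          "name": "acme-internal-utils",
                                          "version": "9000.0.1",
                                          "scripts": {
                                            "preinstall": "node phone-home.js"
                                          }
                                        }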

                                        1. 8

                                          I don’t really know how anything about npm dependency fetching works, but shouldn’t the logic be, “Do we have an internal package called ‘foo’? If not, look for public packages called ‘foo’.”? Based on the article description it sounds like it must be doing, “Is there a public package called ‘foo’? If not, look for an internal one”. Is this really how it works?

                                          1. 7

                                            npm has a limited concept of different registries. It fetches all packages from the one set in the global configuration file, an environment variable, or a CLI flag. The exception is scoped modules (modules whose names look like @mycompany/foobar), where each scope (the @mycompany part) can be assigned a registry.

                                            If you pay npm, you can set scoped packages published on their registry to only be installable by users logged into your organization.
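
                                            Concretely, that mapping lives in .npmrc, along these lines (the internal registry URL is a placeholder):

                                            ; everything under @mycompany comes from the internal registry,
                                            ; everything else from the public default
                                            @mycompany:registry=https://npm.internal.example.com/
                                            registry=https://registry.npmjs.org/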

                                            Before scoped modules were added to npm, the best you could do was use unscoped package names that didn’t exist on the public registry, and point npm at a proxy that decided which backend to fetch a package from based on the requested name. A common implementation checked an internal registry first and, if the package didn’t exist there, fetched it from the public registry.

                                            The author of this post provides examples of internal modules being unscoped, so I assume these companies are relying on developers connecting to a proxy to fetch the correct dependencies. I could easily envision scenarios where new developers, CI systems, or IDEs are improperly configured and fetch those names from the public registry instead, hence this vulnerability.

                                            1. 3

                                              If the package exists on both [the internal and public], it defaults to installing from the source with the higher version number.

                                              The kicker there being that you can publish an arbitrarily higher-versioned package, e.g. 9000.0.1, to force the public (malicious, in this context) dependency. The article also describes that same behavior in Artifactory, which is popular within companies for hosting various internal packages (including npm ones):

                                              Artifactory uses the exact same vulnerable algorithm described above to decide between serving an internal and an external package with the same name.

                                              I think for npm, using the save-exact feature would be a fix—and imho a sane default—but I’m not 100% certain.
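
                                              If anyone wants to try it, it’s a one-liner either way (this only changes how versions get saved to package.json, so treat it as a sketch of the idea rather than a complete fix):

                                              # per-project, in .npmrc
                                              save-exact=true

                                              # or globally
                                              npm config set save-exact true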

                                              1. 2

                                                I’m not sure this is accurate, or at least it wasn’t the implementation of any proxies I worked on or with back when I was still working on npm.

                                                npm would ask the proxy for information about a package name. All the proxies I used would query that metadata from the internal registry first, and only if it returned nothing did they fetch information from the public registry.

                                                This implementation choice was made in the proxies to allow teams to hold back or override open source modules they used (especially useful with deeply nested dependencies before lockfiles) and to avoid situations where someone else claimed the same name to try to get you to fetch it instead (this being before scoped modules).

                                                I haven’t been in the Node.js community for about 4 years now, and have never had access to Artifactory, so I can’t confirm or deny what implementation they’re using now. It would be a shame if they forged ahead without considering the security concerns that the open-source alternatives had long since addressed.

                                                1. 1

                                                  I’ll be honest: I’m not sure on the technical differences between how Artifactory works and the proxies you worked with. When I’ve previously used Artifactory (as a humble user) it’s effectively worked as a pull-through cache of sorts: serve a package that exists internally, then fall back to the public registry if necessary. What comes to mind recently is the change by Docker Hub that rate-limited requests.

                                                  Anyways, your reply made me think more specifically about the Node.js/npm vector from the article:

                                                  Unless specified otherwise (via --registry or in a .npmrc), the default (public) registry is used. Given that, I think it’s not out of the question for an npm install acme-co-internal-package to be blindly run, which would hit the public (malicious) package if there’s no internal registry specified. Just my $0.02.

                                                  1. 2

                                                    Yeah, that’s the conclusion I wrote upthread.

                                                    I could easily envision scenarios where new developers, CI systems, or IDEs are improperly configured and fetch those names from the public registry instead, hence this vulnerability.

                                                    1. 1

                                                      D’oh, I missed that. Just like the pesky step in a project’s README that tells the (hypothetical) you to set the internal registry. ;^]

                                                      I’m sure it’s a curious sight internally at npm to see all the 404ing requests for packages—many of which exist in an internal registry.

                                          2. 3

                                            The article is (intentionally, I believe) vague about it, but I’m curious how they came across all the dependency declaration files in the first place.

                                            common for internal package.json files, which contain the names of a javascript project’s dependencies, to become embedded into public script files

                                            I don’t quite follow. Anyone have insights on the semantics of “leak” in this context?

                                            1. 1

                                              I think they might be concatenated into the production minified js file due to a misconfigured js build pipeline, but that’s just a guess.

                                          1. 2

                                              My personal blog isn’t on Gatsby—it’s on Blot—but I recently implemented the same idea. I went with “Last modified” verbiage, though, as my implementation was tied directly to the file mtime. It also seemed more fitting (compared to “Last updated”) as my personal blog is relatively technical.

                                            1. 2

                                                Pretty cool. I might change it to “Last modified”; it makes more sense.