1. 8

    Already seeing several friends complaining that they can’t use it, because it’s restricted to people who have their phones set to use the German app store. https://github.com/corona-warn-app/cwa-app-android/issues/478 Here in Berlin immigrants make up something like 30% of the population, and there is generally no need to get a German phone when you move here. Add “app store location is where you physically live” to the list of myths programmers believe.

    1. 1

      I really wonder: what’s the goal of such a measure?

      Let’s say I live in a neighboring country, France for example, but work in Germany; it would make sense for me to have both apps, right? (Actually it would make even more sense to have one app for all countries, but that’s another story…) By tying each app to a given national store, they make things harder, which is counterproductive.

      1. 2

        My impression is that it was an easy way for Apple/Google to enforce the one app per country rule, but outside of that I’m not really sure.

        I had a similar weird experience when trying to purchase RainToday (en) while my phone was set to the US app store.¹ Many of MeteoGroup’s other apps are available in the US app store, but not RainToday, and when asked I simply got a response that “[t]he warnings and the radar in RainToday are not available for the USA. That’s why RainToday is not available in the US store.” I guess they didn’t want to deal with the support load of people buying the app and expecting it to work in the States.

        ¹ Dark Sky’s rain alerts don’t work in Europe, but that doesn’t even matter any more because the world sucks.

    1. 1

      I had always thought this was the reason the LOOP instruction was slow on Athlon and later processors: that it was artificially slowed down to support old timing routines. After reading this SO post, however, I’m not so sure anymore.

      1. 1

        I’ve found LWN to be consistently high-quality technical journalism, and well worth your (or your employer’s) money. If you’re working for a FAANG, it’s likely you can click a button and get it expensed. Older articles are available for free.

        One of the more fundamental aspects of my engineering philosophy with regard to optimization came from a quote in an LWN article from more than 10 years ago:

        “It’s not about booting faster, it’s about booting in 5 seconds.” Instead of saving a second here and there, set a time budget for the whole system, and make each step of the boot finish in its allotted time.

        That mentality has set me up for a lot of success in taming unruly systems.
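
        As a rough sketch of that mindset (the step names and budgets below are hypothetical, not from the article), a boot orchestrator might check each step against its allotment:

        import time

        # Hypothetical per-step budgets (seconds) that sum to the 5-second goal.
        BUDGETS = {"kernel": 1.0, "early_userspace": 0.5, "services": 2.0, "ui": 1.5}

        def run_step(name, step):
            # Run one boot step and complain if it exceeds its budget.
            start = time.monotonic()
            step()
            elapsed = time.monotonic() - start
            if elapsed > BUDGETS[name]:
                print(f"over budget: {name} took {elapsed:.2f}s, allotted {BUDGETS[name]:.2f}s")

        run_step("services", lambda: time.sleep(0.1))  # placeholder step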

        Outside of LWN, the asshole filter was a helpful post in dealing with cross-team communication. The tldr is: people want to get things done and if the official process is not the best way to do that, people are then incentivized to violate boundaries. I’ve seen this a lot in client-based teams that grow from ad-hoc email requests to a formal ticketing system: usually the ticketing system is where tickets go to die, while the clients know that to get real work done you just ping someone on the team directly. Eventually, the only way the ticket system survives is for people who get pinged to reject the ping: “Sorry, we’re trying to use the ticket system for this. Can you make a ticket and we’ll continue there?” By making the ticket system the best way to get something done, people are incentivized to do the right thing. Of course, this only works if the tickets are actually looked at and responded to, and only if the whole team makes a unified effort – otherwise the pings just get shifted somewhere else.

        1. 2

          Having used something like this in a previous life, I’m very excited to see a publicly available web-based VCS-backed IDE come to life. Congrats to the team!

          1. 1

            xmake looks really interesting and is something I’ll probably consider for my next project (especially if the FreeBSD package is updated). The documentation that I’ve seen so far all does in-tree builds though, which is probably the worst antipattern in build-system design. Are there any docs for driving the build from the build directory?

            1. 1

              The documentation that I’ve seen so far all does in-tree builds though, which is probably the worst antipattern in build-system design. Are there any docs for driving the build from the build directory?

              Sorry, I don’t understand. What do you mean by that?

              1. 1

                @david_chisnall is asking if it is possible to build in a different directory than the directory with the source code. A good test for this is: “can I mount the source on a read-only filesystem, and compile somewhere else?” This is often useful for people who work on multiple branches in a large codebase, or CI systems for large codebases which heavily rely on caching.

                1. 2

                  Oh, it is supported. By default xmake will not write any files to the source directories; it only writes some target and cache files to curdir/build, a temporary directory, curdir/.xmake, and ~/.xmake. We can also change the location of these directories.

                  Modify the global and local cache directories:

                  export XMAKE_GLOBAL_DIR=/xxx
                  export XMAKE_CONFIG_DIR=/yyy
                  

                  Modify the temporary directory:

                  export XMAKE_TMP_DIR=/ttt
                  

                  Modify the build output directory:

                  xmake f --buildir=xxx
                  xmake
                  
                  1. 3

                    ~/.xmake

                    Please use the XDG Base Directory specification instead.

                2. 1

                  All of the examples (unless I am misreading them, please correct me if I’m wrong) run xmake from the source directory. Modern build systems drive the build from the build directory. This has a number of advantages:

                  • The source directory can be on a read-only or write-is-expensive filesystem (e.g. an aggressively cached local copy of a slow network filesystem), whereas the build directory can be on fast throw-away storage (e.g. the local SSD on a cloud VM).
                  • It’s easy to have different backup policies for the source and build directories.
                  • Your per-build configuration is stored in the build directory, so you can easily rerun the build with the same options.
                  • A completely clean build, even in the presence of build-system bugs, is trivial to do by simply deleting the build directory.
                  • You don’t need to add a load of things to .gitignore for build outputs and it’s really easy to keep your source tree clean.
                  • If you have generated source files, it’s trivial to tell the generated ones apart from the others (the generated ones are only in the build directory, not the source tree).

                  With CMake (which has many other problems), the normal workflow is:

                  1. Create a new build directory.
                  2. Run ccmake to configure the build with all of the options that you want.
                  3. Run ninja to build it.
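
                  Concretely, that looks something like this (paths are placeholders; this assumes the Ninja generator was selected):

                  mkdir build && cd build
                  ccmake -G Ninja /path/to/source
                  ninja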

                  You can repeat these steps for multiple build configurations and you can build each independently. For example, for LLVM I have a release build and a debug+ASan build that I can link against, and when I’m writing something that depends on LLVM libraries I will do the same thing and have a release build and a debug build with ASan. When I want rapid edit-compile-test cycles, I use the release build, but as soon as something goes wrong I switch to the ASan version and get much more useful debugging information.

                  In contrast, the instructions that I read for xmake said to run xmake in the source directory. This makes me really nervous, because it reminds me of the bad old days when you ran make in your source directory and got a load of generated files interleaved with your sources.

                  The tool I would like is something that I can use in the same way that I use CMake but where the input language is not a horrible macro language that’s been gradually extended and where there is clean layering in the standard library with nothing hard-coded in the tool (for example, the way CMake hard codes the flags that it passes to cl.exe).

                  1. 2

                    Although xmake runs by default in the root directory of the project (the one with the xmake.lua file), it will not scatter files through the source tree. Everything it generates goes into dedicated output directories.

                    cd projectdir
                    xmake
                    

                    It will generate ‘build’ and ‘.xmake’ directories in projectdir.

                    But the build directory can also be changed:

                    cd projectdir
                    xmake f --buildir=/tmp/build
                    xmake
                    

                    But xmake will still generate a .xmake directory at projectdir/.xmake.

                    We can also avoid generating .xmake in projectdir:

                    export XMAKE_CONFIG_DIR=/tmp/build/cache
                    xmake f --buildir=/tmp/build
                    xmake
                    

                    Now, even though xmake still runs under the project directory, the project directory will always stay clean.

                    In addition, xmake does not have to be run from inside the project directory:

                    cd otherdir
                    xmake -P projectdir
                    
                    1. 1

                      Thanks, it sounds as if it can do what I want. It’s a shame that the documentation and default behaviour encourages bad style though.

              1. 2
                • Sublime Text - I’m still astonished at how fast it renders/scrolls.
                • Sublime Merge - my day job has a heavy rebase/commit split workflow, and Sublime Merge is a godsend when doing tricky interactive rebases.
                • autojump - as mentioned earlier. It’s one of the first things I install on a new workstation. I believe there are alternatives such as z and fasd which you may want to check out.
                • Pull Reminders - I absolutely hate email notifications, and things like review requests or responses to comments usually get buried in my inbox. I also find the GitHub notification workflow suboptimal – I still have to click to archive notifications, which means they pile up, which means things get lost. By hooking up Pull Reminders to Slack I get notifications that I don’t have to click to archive. This has dramatically reduced my response time to PRs.
                • Pull Assigner - Automatic PR assignment has made reviews in a team go much smoother. Without this, I often find teams fall into bimodal patterns where a few people review everything and the other people don’t respond due to bystander effect.
                • mtr - A fantastic way to diagnose latency and packet loss issues in your network. I feel like it’s not as well known as it should be.
                • A “percentile latency” graph - I’m not sure if there is an official term for it, but a graph where the X axis is a percentile and the Y axis is latency. I first saw these in Gil Tene’s How Not to Measure Latency (summarized here) and was blown away – not only are they very effective at describing how a system operates in normal cases and at the limits, but you can directly lay your SLAs on top of them and see how much wiggle room you have before you’re in violation. (There’s a small sketch of one after this list.)
                • black and isort - mentioned otherwise in this submission, but a great way to focus more on the code instead of the style.
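
                A minimal sketch of such a graph, assuming numpy and matplotlib (the synthetic data and the 50 ms SLA line are purely illustrative):

                import numpy as np
                import matplotlib.pyplot as plt

                # Synthetic stand-in for measured request latencies.
                latencies_ms = np.random.lognormal(mean=1.0, sigma=0.5, size=100_000)

                # X axis: percentile; Y axis: latency at that percentile.
                pcts = np.linspace(0, 99.99, 1000)
                plt.plot(pcts, np.percentile(latencies_ms, pcts))
                plt.axhline(50, linestyle="--", label="hypothetical 50 ms SLA")
                plt.xlabel("percentile")
                plt.ylabel("latency (ms)")
                plt.legend()
                plt.show()
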
                1. 2

                  I just wanted to circle back and say that I ended up going with @colindean’s recommendation and bought a Ubiquiti Dream Machine. So far I’m really happy with it, although it was confusing to set up at first.

                  For the record: for home use you probably want to set it up using “remote access” (formerly called “cloud access”), and access the controller via https://unifi.ui.com/. It’s possible to log in via the controller’s IP address directly, but then you’d need to accept its cert or bypass SSL cert verification. Additionally, read performance optimizations from the Ubiquiti forums with a grain of salt – some of those people are optimizing for business or conference installations with dozens or hundreds of devices, and what they recommend may not be appropriate for a home network with different use cases and constraints.

                  Other than that initial setup, it seems to do everything I want. Thanks again!

                  1. 1

                    Glad to hear it. Welcome to the Ubiquiti world!

                  1. 7

                    Avoid meshes if you can. You’ll want n access points, where n is an integer and depends on the area to cover. Connect those access points to the upstream using cabled ethernet.

                    Mesh is fine if you want coverage, not so fine if you want capacity in a saturated environment. Every packet has to be sent more than once across the air, and the impact of that is worse than a doubling because of the way radio time is shared.

                    Clueful friends of mine tend to use either Ubiquiti or Mikrotik. One has Cisco Aironets brought home from work when they replaced them with new radios. I have Mikrotik hardware myself, the oldest is 10+ years old at this point and still receiving OS upgrades. If you consider Mikrotik, look for metal hardware, not plastic. The metal is built to last.

                    My own policy is to use those slow old APs forever, and to say that if something wants 100Mbps or more then that device needs an ethernet cable. That’s been a good policy for me in practice. For example it’s kept the WLAN free of bandwidth hogs, even if those hogs (like a few giant rsync jobs I run) aren’t time sensitive.

                    1. 2

                      [I asked an extended version of this in a different reply in this thread]

                      Is there anything special you need to do to enable switching amongst the various access points as you wander around the house?

                      1. 1

                        Enable, no, but there are things you can do to improve quality of service. I use a Mikrotik feature called CAPsMAN; I understand that Ubiquiti and Cisco Aironet provide the same functionality. (I don’t know about other vendors, those three are just what friends of mine use.)

                        The only thing to watch out for is really that you have to purchase APs from one vendor if you want the nice roaming. If you mix brands, TCP connections will be interrupted when you move through the house, and a moving station may remain connected to an AP for as long as that’s possible, not just for as long as that AP is the best choice.

                      2. 1

                        You’ll want n access points, where n is an integer and depends on the area to cover. Connect those access points to the upstream using cabled ethernet.

                        If I could get ethernet to everywhere I want wifi, I wouldn’t need the wifi.

                        1. 1

                          That’s true of course, but isn’t it rather beside the point? The goal is to get ethernet to enough points that the entire area has a good WLAN signal.

                          When I installed my cables I strictly needed two APs, but I got three instead in order to simplify the cabling. More APs, but less work pulling cables.

                        2. 1

                          I don’t know if you’d call the environment here on an urban road saturated, but mesh is working nicely. No dropouts, fast, covers everything, cheap. What sort of environment would cause it trouble?

                          1. 2

                            At one point 27 different WLANs were visible in what’s now our kitchen, two of them often with a great deal of traffic, and intuitively I think there was some other noise source, not sure what. That was usually good, occasionally saturated, and bulk transfer throughput would drop sharply, even as low as 1 Mbps. I cabled and now I don’t need to pay attention to the spectral graph.

                            I’ve worked in an office where over 50 WLANs from various departments and companies in the same building were visible. Some people did >100-gigabyte file transfers over our WLAN, so I expect our WLAN bothered the other companies as much as theirs bothered us. The spectral graph was pure black.

                            1. 1

                              As of right now, I see 21 networks from a MacBook in my living room. Two of those are even hotspots from the street, run by competing phone companies. It doesn’t help that many ISPs offer “homespots,” where customers who rent their router broadcast several SSIDs – one for the user, and one for other customers of that ISP to use as a hotspot. So I guess mesh is not a good idea where I am.

                              1. 2

                                Well, most people don’t have a lot of guests who transmit a lot of traffic, so maybe?

                                Still, I haven’t regretted the cable I had installed. Remember that you can simplify the problem; you don’t have to install the cable along the most difficult route.

                        1. 2

                          It’s been mentioned in passing a few times in here, but I wanted to call it out specifically: @pytest.mark.parametrize is great. The syntax takes a little getting used to (you have to feed it a string of comma-separated parameter names) but other than that it works as expected. As a bonus, you can stack multiple @pytest.mark.parametrize decorators on a test to get a cartesian product of test cases.
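
                          A minimal example (the test bodies are placeholders):

                          import pytest

                          # Parameter names go in one comma-separated string,
                          # followed by a list of argument tuples.
                          @pytest.mark.parametrize("base,exp,expected", [(2, 3, 8), (5, 0, 1)])
                          def test_pow(base, exp, expected):
                              assert base ** exp == expected

                          # Stacked decorators yield the cartesian product: 2 x 2 = 4 cases.
                          @pytest.mark.parametrize("x", [0, 1])
                          @pytest.mark.parametrize("y", [2, 3])
                          def test_sum(x, y):
                              assert x + y >= 2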

                          1. 3

                            I’ve been using Nix (NixOS?) as a replacement for Homebrew and so far it’s been pretty good. Not the best learning curve but so far I’ve been able to install the packages I need (although getting git installed with shell completion took up the better part of an afternoon…). The main reason I like it over Homebrew is that every time I go to install a package I’m not rolling the dice on how long the operation will take. Homebrew operations seem to take 5 seconds or 15 minutes depending on whether it needs to recompile OpenSSL or not, and you can’t predict this before you run brew install… Good to know that from the packager’s side it’s a bit more tricky.

                            1. 1

                              iPhone Fits-In-Your-Hand (SE). I thought about giving in and buying a 7 over the holidays but decided to stick with this until it dies or they release another phone this size.

                              1. 6

                                I find it curious that the Blink team at Google takes this action in order to prevent various other teams at Google from doing harmful user-agent sniffing to block browsers they don’t like. Google certainly isn’t the only one, but they’re among the biggest user-agent-sniffing abusers.

                                FWIW, I think it’s a good step, nobody needs to know I’m on Ubuntu Linux using X11 on an x86_64 CPU running Firefox 74 with Gecko 20100101. At most, the Firefox/74 part is relevant, but even that has limited value.

                                1. 14

                                  They still want to know that. The mail contains a link to the proposed “user agent client hints” RFC, which splits the user agent into multiple more standardized headers the server has to request, making “user-agent sniffing” more effective.
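
                                  Roughly, the draft’s opt-in flow looks like this (a sketch; the exact header names varied across draft revisions, and these follow the ones quoted downthread):

                                  Server response:
                                      Accept-CH: Sec-CH-Platform, Sec-CH-Arch

                                  Subsequent browser requests:
                                      Sec-CH-Platform: "Linux"
                                      Sec-CH-Arch: "x86_64"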

                                  1. 4

                                    Oh. That’s sad. I read through a bit of the RFC now, and yeah, I don’t see why corporations wouldn’t just ask for everything and have slightly more reliable fingerprinting while still blocking browsers they don’t like. I don’t see how the proposed replacement isn’t also “an abundant source of compatibility issues … resulting in browsers lying about themselves … and sites (including Google properties) being broken in some browsers for no good reason”.

                                    What possible use case could a website have for knowing whether I’m on ARM or RISC-V or x86 or x86_64 other than fingerprinting? How is it responsible to let the server ask for the exact model of device you’re using?

                                    The spec even contains wording like “To set the Sec-CH-Platform header for a request, given a request (r), user agents MUST: […] Let value be a Structured Header object whose value is the user agent’s platform brand and version”, so there’s not even any space for a browser to offer an anti-fingerprinting setting and still claim to be compliant.

                                    1. 4

                                      What possible use case could a website have for knowing whether I’m on ARM or RISC-V or x86 or x86_64 other than fingerprinting?

                                      Software download links.

                                      How is it responsible to let the server ask for the exact model of device you’re using?

                                      … Okay, I’ve got nothing. At least the W3C has the presence of mind to ask the same question. This is literally “Issue 1” in the spec.

                                      1. 3

                                        Okay, I’ve got nothing.

                                        I have a use case for it. I’ve a server which users run on an intranet (typically either just an access point, or a mobile phone hotspot), with web browsers running on random personal tablets/mobile devices. Given that the users are generally not technical, they’d probably be able to identify a connected device as “iPad” versus “Samsung S10” if I can show that in the web app (or at least ask around to figure out whose device it is), but will not be able to do much with e.g. an IP address.

                                        Obviously pretty niche. I have more secure solutions planned for this; however, I’d like to keep the low barrier to entry that knowing the hardware type from the user agent provides, in addition to those.

                                      2. 2

                                        What possible use case could a website have for knowing whether I’m on ARM or RISC-V or x86 or x86_64 other than fingerprinting?

                                        Benchmarking and profiling. If your site performance starts tanking on one kind of processor on phones in the Philippines, you probably want to know that to see what you can do about it.

                                        Additionally, you can build a website with a certain performance budget when you know what your market minimally has. See the Steam Hardware and Software Survey for an example of this in the desktop videogame world.

                                        Finally, if you generally know what kinds of devices your customers are using, you can buy a bunch of those for your QA lab to make sure users are getting good real-world performance.

                                    2. 7

                                      Gecko 20100101

                                      Amusingly, this date is a static string — it is already frozen for compatibility reasons.

                                      1. 2

                                        Any site that offers you/administrators a “login history” view benefits from somewhat accurate information. Knowing the CPU type or window system probably doesn’t help much, but knowing it’s Firefox on Ubuntu combined with a location lookup from your IP is certainly a reasonable description to identify if it’s you or someone else using the account.

                                        1. 2

                                          There are times I’d certainly like sites to know I’m using a minority browser or a minority platform, though. Yes, there are downsides because of the risk of fingerprinting, but it’s good to remind sites that people like me exist.

                                          1. 1

                                            Though the audience here will play the world’s tiniest violin regarding those affected, the technical impact aspect may be of interest.

                                            Version numbering is a useful low-hanging-fruit method in the ad-tech industry for catching fraud. A lot of bad actors either just use old browsers[1] or skew browser usage ratios; though of course most ‘fraud’ detection methods are naive and just assume anything older than two major releases is fraud, ignoring details such as LTS releases.

                                            [1] Persuade the user to install a ‘useful’ tool and it sits as a background task burning ads, or serves as a replacement for the user’s regular browser (never updated).

                                          1. 2

                                            I was happy to see GoatCounter come up recently, as I keep thinking “you know, all these cookie warnings are only needed if you use third-party analytics” and was wondering if someone offered analytics without that. I hope this gives them a boost!

                                            1. 4

                                              Sublime may not have the tight integration you’d get from other IDEs, but it does have the advantage of being fast. It’s been effectively jank-free since I switched to it several years ago. If VS Code isn’t powerful enough then Sublime + plugins probably won’t be either, but I did want to at least give it a mention.

                                              1. 1

                                                Sublime (with anaconda, terminus) and use of pudb (in a separate terminal window) for stepping through code.

                                                  Bias: this is my spare-time coding setup on a 5-6 year old laptop; I feel like I’d invest in an IDE if I were writing Python more regularly.

                                              1. 7

                                                I feel like the original post (and some of the subsequent discussion here) is full of armchair quarterbacking and general smugness, even though I do sympathize with the fact that computers are ridiculously faster than 10 years ago but somehow seem slower (Dan Luu dives into that a bit if you’re interested). That said, I don’t think the problem is frameworks-upon-frameworks or leaky abstractions or that modern developers don’t have a Mel-like total understanding of every transistor on their system.

                                                My view of the problem is that developers are reluctant to acknowledge that, as a complex system, software will always run in a degraded mode. If you acknowledge that things can’t be perfect, you can shift the discussion to how much you want to spend to approach perfection. Conversely, you can then think about where you want to spend your time with that budget.

                                                1. 1

                                                  I’m going for the first time but don’t have any specific plans. Hoping to wander around and check out cool stuff.

                                                  1. 2

                                                        Looking forward to it! I’ll be doing Rust again after starting with it on AoC last year. One annoyance I found with AoC 2018 (maybe it’s every year, I didn’t check) was its heavy reliance on fiddly text parsing. I don’t design puzzles, let alone language-agnostic programming puzzles, so it might be that this is just how it has to be done. Still, I just wish there was less emphasis on text parsing and more on problem solving.

                                                    1. 9

                                                      Every time I read Paul Graham I think back to Dabblers and Blowhards from 2005 (which unfortunately has some sexist overtones, but a lot of the essay still holds). This quote in particular sticks out:

                                                      [A]fter a while, you begin to notice that all the essays are an elaborate set of mirrors set up to reflect different facets of the author, in a big distributed act of participatory narcissism.

                                                          The whole genre reminds me of the wooly business books one comes across at airports (“Management Secrets of Genghis Khan”, “The Lexus and the Olive Tree”) that milk a bad analogy for two hundred pages to arrive at the conclusion that people just like the author are pretty great.

                                                      1. 5

                                                        I don’t know why the author is writing in hypotheticals, rad-hard ICs have been in production for decades now. A quick search yields an ARM core that offers reasonable performance for a thousand bucks.

                                                        1. 6

                                                          I keep telling you! Like the Phoenix, FORTH will rise from the ashes, and take us into the skies!

                                                          1. 4

                                                                In particular, there is, right this moment, a working CPU rolling over Martian terrain: the RAD750 (PowerPC ISA) inside the Curiosity rover. It has worked on Mars for more than 7 years, since 2012.

                                                          1. 3

                                                            A long time ago I set up Chrome Remote Desktop on a GCP VM so I could browse Facebook without it tracking me. It was kind of fun, but the biggest annoyance was having to log into the GCP console to start the VMs when I needed them. Cool to find out about Apache Guacamole.