1. 1

    Using “git cu” instead of “git clone” will create a gitlab.com/3point2/git-cu directory structure inside your current working directory.

    I really like using git worktree and using it alongside this tool would feel weird. You’d have to put your worktrees somewhere else or come up with some naming convention like gitlab.com/3point2/git-cu_my-branch-name.

    1. 1

      I agree, it’s not a neat fit for worktrees in its current state. A friend of mine who uses worktrees a lot said the same! I’d be interested in doing something to include support for worktrees somehow, so I’ve created https://gitlab.com/3point2/git-cu/-/issues/1 - if you have any thoughts / ideas on what would feel ergonomic to you please leave a comment.

    1. 22

      I hesitate to post a Twitter thread in response to a submission of a Twitter thread (I wish foone would do this on a blog), but this is worth pointing out since the teardown makes it seem like the electronics version is a joke.

      https://twitter.com/V_Saggiomo/status/1301809747042217984

      That said, this device is incredibly wasteful and irresponsible, and the marketing around it is highly questionable.

      1. 4

        Thanks for sharing the rebuttal, that is a good perspective. I hesitated for a few days before posting that thread as I wasn’t sure how kosher Twitter was here. I always find these sorts of teardowns fascinating because it makes me realize how much “magic” we assume in things we see around us.

        And, yes, I wish this was posted on a blog and not just a Twitter roll-up.

        1. 9

          In general, submissions of Twitter threads are discouraged, with the occasional exception, mainly because Twitter posts tend to be low content, high impact/drama info tidbits (read: news headlines). In this case I consider it an exception since foone tends to post long threads with lots of information. Still, I wouldn’t make a habit of it.

          1. 2

            Thank you, I appreciate the explanation and will certainly be sparing.

        2. 2

          The problem I see here is that the original read to me as purely technical: “Oh wow, it seems unguessably complicated and in the end it’s an LED and a photo sensor.” (I myself would’ve expected measuring a change of current or resistance in the test material, but I would’ve been very wrong.)

          But this “rebuttal” is more condescending, like “look at this idiot dissecting this thing when it’s COMMON KNOWLEDGE how it works”, etc.

          The post by Naomi Wu is insightful, but I don’t get how people can be mad at foone, because I didn’t see any “omg the people who buy this are so stupid”. And that it’s wasteful to be thrown away after one use is a simple fact; the debate over whether it’s worth it is something completely different.

          1. 2

            What is it about this device you think is irresponsible?

            1. 1

              That it is single use and mass produced.

              1. 0

                So is the manual test strip inside of it?

                1. 4

                  Single-use electronics is much worse than a test strip, I think. Beyond being overkill, it’s polluting to produce, generates long lasting garbage, and consumes rare materials.

                  1. 1

                    If it was such a waste of “rare” materials, it would be more expensive.

                    1. 3

                      Of course, markets are efficient and always account for externalities 🙄

                      1. 2

                        Ah yes, the market is perfect, of course.

            1. 2

              If a user inadvertently visited homebrew.sh, after various redirects an update for “Adobe Flash Player” would be aggressively recommended

              Heh. This must be a very effective way of convincing people to install your payload. I think I remember seeing this as early as the mid 2000s. And to think it’s still used!

              1. 26

                It is not recommended to write if __name__ == ‘__main__’

                Man, I hate to see bad advice, and that is really bad advice … there is a reason why the name guard has been a standard idiom of Python for years, and no one should be throwing about recommendations to break that. The name guard exists specifically to keep side effects from happening when the file is imported, which includes when you use the help builtin. Which you should do, a lot.
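
                As a minimal sketch of what the guard buys you (module and function names here are invented):

                # mymodule.py - a hypothetical single-file module
                def main():
                    print("running as a script")

                # runs only on "python mymodule.py", never on "import mymodule"
                # (so help(mymodule) stays free of side effects)
                if __name__ == "__main__":
                    main()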

                1. 1

                  I’d argue it’s not bad advice, depending on the reasoning behind it. I can assume that, if you’re using this idiom, you’re doing it to control whether a “main” function executes, and that you’re probably also using the same file as an executable.

                  Setuptools’ entry_points option should be used instead. Doing this removes the need for the above idiom.
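
                  For reference, a sketch of how that looks in a hypothetical setup.py (package, module, and command names are all invented):

                  from setuptools import setup

                  setup(
                      name="mypkg",
                      packages=["mypkg"],
                      entry_points={
                          "console_scripts": [
                              # installs a "mytool" executable that calls mypkg.cli:main()
                              "mytool = mypkg.cli:main",
                          ],
                      },
                  )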

                  1. 2

                    A blanket recommendation not to use any standard and long established language idiom is inherently bad advice, and in Python the name guard remains the idiomatic way to isolate code that is only meant to run if the individual module is executed, rather than imported.

                    A command line entry point (as with setuptools) or a main function is just one such use. So is a self test or demonstration code for a single module library, or for a single module within a larger package. If you’re using unittest then the unittest.main() call at the bottom of the file should always be inside a name guard.

                    Even saying that setuptools’ entry_points “should” be used instead is, arguably, bad advice… that’s only true if you’re packaging with setuptools. Sure, that’s the de facto (but still optional) toolchain for packaging, but that could change, and then you’d be left with a bunch of code that has hewn to the idiom of a dependency rather than to the idioms of the language itself. So does it remove the functional need for the name guard? Sure, for your program’s publicly facing entry points… but dropping the idiom doesn’t make your code any better.

                1. 2

                  Nice! This reminds me of VDE - Virtual Distributed Ethernet.

                  1. 3

                    In the demo repo linked from this post, I was able to extract the secret using just strings(1) . No decompilation required!
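
                    Presumably something along these lines - the binary name is made up; strings(1) dumps the printable sequences and grep(1) narrows them down:

                    strings ./demo-app | grep -i secret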

                    1. 1

                      I don’t agree with the viewpoint I’m about to describe, but it’s a counterpoint I’ve heard to using git this way. I used to work with some engineers who were against this method because it changes the meaning of a commit. In their eyes, a “commit” was interpreted quite literally. Meaning, it should leave the code in a reasonably presentable state, and definitely compilable. By using git add -p, you are committing a set of code that has never been tested.

                      1. 5

                        I am the kind of person who prefers commits to be atomic and ideally in a compilable state (not that I’m fanatical about it, but that’s what I’m striving for). I think add -p combined with interactive rebase only helps with that – if you discover a typo or need some amendment, you can surgically squash it into one of your previous commits, instead of having a ‘broken’ commit and a useless separate commit with an amendment.
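
                        For the curious, one way to do that surgical squash (the commit references are placeholders):

                        git add -p                            # stage just the amendment
                        git commit --fixup <broken-commit>    # record it as a fixup of that commit
                        git rebase -i --autosquash <base>     # the rebase folds it into place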

                        1. 3

                          You’re right in theory, but I always use git add -p, and the only things I usually leave out are single debug log lines, or some CSS changes when I work on the app itself, etc. - not huge swaths of code. The number of times this has bitten me can be counted on one hand. So yes, technically it’s “never tested”, but if you at any point push before running tests locally… I don’t think it matters most of the time; that’s what CI is for.

                          1. 2

                            This counterpoint (which I understand isn’t yours) isn’t even one: always work in tiny increments, commit anytime the tests are green.

                            Your tests aren’t green for long stretches of time? Fix that.

                            All green tests don’t mean that code can be committed? Fix that.

                            1. 1

                              Sorry for the late (months late) comment but I think this is quite a narrow view that applies really well to easily tested things and awfully to everything else.

                          1. 11

                            That was far more interesting than I’d have hoped. Especially because it was more about operating this at scale. For my non-petabyte-scale stuff I’ve always felt like mysql is easier to use as a developer. The permissions system, for example, is confusing. But I was also bitten by things like having to use utf8mb4 instead of utf8 in mysql. (and I always recommend mariadb)

                            1. 9

                              I’m a little stunned to hear anyone say they prefer MySQL over PostgreSQL as a developer. A couple things I’ve really disliked about MySQL over the years:

                              • silent coercion of values (where PG, in contrast, would noisily raise a fatal error to help you as a developer) – it makes it a lot harder to debug things when what you thought you stored in a column is not what you get out of it (see the sketch after this list)
                              • the MySQL CLI had significantly poorer UX and feature set than psql. My favourite pet peeve (of MySQL): Pressing Ctrl-C completely exited the CLI (by default), whereas, in psql, it just cancels the current command or query.
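
                              To illustrate the first point, here’s a sketch of the classic truncation case - the table name is invented, and the MySQL side assumes the historical non-strict sql_mode:

                              CREATE TABLE t (s VARCHAR(3));
                              INSERT INTO t VALUES ('abcdef');
                              -- MySQL (non-strict): stores 'abc' and emits only a warning
                              -- PostgreSQL: ERROR:  value too long for type character varying(3)
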
                              1. 4

                                After spending three years trying to make MySQL 5.6.32 on Galera Cluster work well, being bitten by A5A SQL anomalies, data type coercion silliness, et al., I’ve found Postgres to be a breath of fresh air and I never want to go back to the insane world of MySQL.

                                Postgres has its warts, incredible warts, but when they’re fixed, they’re usually fixed comprehensively. I’m interested in logical replication for zero-downtime database upgrades, but being the only girl on the team who manages the backend and the database mostly by herself, I’m less than inclined to hunt that white whale.

                                1. 2

                                  the MySQL CLI had significantly poorer UX and feature set than psql

                                  Hmm, I’ve always felt the opposite way. The psql client has lots of single-letter backslash commands to remember to inspect your database. What’s the difference between the various combinations of \d, \dS, \dS+, \da, \daS, \dC+, and \ds? It’s all very confusing, and for the same reason we don’t use single-letter variables. I find MySQL’s SHOW TABLES, SHOW DATABASES, and DESCRIBE x a lot easier to use.
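
                                  For what it’s worth, the day-to-day equivalences are few (table name hypothetical):

                                  SHOW DATABASES;     -- psql: \l
                                  SHOW TABLES;        -- psql: \dt
                                  DESCRIBE mytable;   -- psql: \d mytable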

                                  1. 1

                                    Yeah, this is also bugging me. Sure, “show databases” is long and something like “db” would be nice, but I know it and (for once at least) it’s consistent with “show tables” etc.

                                    1. 1

                                      I grant you that, but \? and \h are 3 keystrokes away, and the ones I use most frequently I’ve memorized by now. But I just couldn’t stand the ^C behaviour, because I use that in pretty much every other shell interface of any kind without it blowing up on me. MySQL was the one, glaring exception.

                                  2. 1

                                    Totally agree, this is almost exactly my situation too. I had always used mysql and knew it pretty well, but got burned a few times trying to deal with utf8, then got hit with a few huge table structure changes (something I think has improved since). Ended up moving to Postgres for most new stuff and have been pretty happy, but I do miss MySQL once in a while.

                                  1. 4

                                    I’m curious about how this is implemented. It seems to delegate the heavy lifting to some sort of policy engine. The logic seems to be a DSL embedded within the Go file: https://github.com/mhausenblas/cidrchk/blob/master/main.go#L20

                                    1. 1

                                      It seems to use the evaluation module from https://github.com/open-policy-agent/opa

                                      1. 2

                                        Seems like an odd dependency. Go’s stdlib has IPNet.Contains.
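
                                        A minimal sketch of that check using only the stdlib:

                                        package main

                                        import (
                                                "fmt"
                                                "net"
                                        )

                                        func main() {
                                                // does 10.1.2.3 fall inside 10.0.0.0/8?
                                                _, cidr, err := net.ParseCIDR("10.0.0.0/8")
                                                if err != nil {
                                                        panic(err)
                                                }
                                                fmt.Println(cidr.Contains(net.ParseIP("10.1.2.3"))) // true
                                        }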

                                    1. 2

                                      Do any platforms enforce that binary signatures match the package’s signature? This feels like a common security feature that would cause problems with this technique.

                                      1. 10

                                        I wonder how much security review maintainers actually do. Reviews are difficult and incredibly boring. I have reviewed a bunch of crates for cargo-crev, and it was very tempting to gloss over the code and conclude “LGTM”.

                                        Especially in traditional distros that bundle tons of C code, a backdoor doesn’t have to look like if (user == "hax0r") RUN_BACKDOOR(). It could be something super subtle like an integer overflow when calculating lengths of buffers. I don’t expect a volunteer maintainer looking at a diff to spot something like that.
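
                                        A contrived sketch of that flavour of bug (function and names invented): if the length sum wraps around, the allocation comes out tiny and the copies write far past it.

                                        #include <stdlib.h>
                                        #include <string.h>

                                        /* concatenate two buffers; subtly broken when len_a + len_b overflows */
                                        char *join(const char *a, size_t len_a, const char *b, size_t len_b) {
                                                char *buf = malloc(len_a + len_b);  /* may wrap to a small value */
                                                if (buf == NULL)
                                                        return NULL;
                                                memcpy(buf, a, len_a);              /* heap overflow once wrapped */
                                                memcpy(buf + len_a, b, len_b);
                                                return buf;
                                        }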

                                        1. 4

                                          a backdoor doesn’t have to look like if (user == “hax0r”) RUN_BACKDOOR()

                                          Reminded me of this attempted backdoor: https://freedom-to-tinker.com/2013/10/09/the-linux-backdoor-attempt-of-2003/

                                          1. 3

                                            I assume that as long as we keep insisting on designing complex software in languages that can introduce vulnerabilities, such software will contain such vulnerabilities (introduced intentionally or unintentionally).

                                            I think integer overflow is a great example. In C & C++ unsigned integers wrap on overflow and signed integers exhibit UB on overflow. Rust could have fixed this, but overflow is unchecked by default in release builds for both signed and unsigned integers! ~~Zig doubles down on the UB by making signed and unsigned integer overflow both undefined unless you use the overflow-specific operators!~~ How is it that ~~none of these languages~~ Rust doesn’t handle overflow safely by default?! (edit: My information on Zig was inaccurate and false, it actually does have checks built into the type-system (error system?) of the language!)

                                            Are the performance gains really worth letting every addition operation be a potential source of uncaught bugs and vulnerabilities? I certainly don’t think so.

                                            1. 6

                                              Rust did fix it. In release builds, overflow is not UB. The current situation is that overflow panics in debug mode but wraps in release mode. However, it is possible for implementations to panic on overflow in release mode. See: https://github.com/rust-lang/rfcs/blob/master/text/0560-integer-overflow.md
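
                                              A quick sketch of the current defaults, for illustration:

                                              fn bump(x: u8) -> u8 {
                                                  // debug profile: panics with "attempt to add with overflow"
                                                  // release profile (default): silently wraps to 0
                                                  x + 1
                                              }

                                              fn main() {
                                                  let x = u8::MAX;
                                                  // explicit opt-ins behave the same in every profile:
                                                  assert_eq!(x.checked_add(1), None); // overflow surfaced as None
                                                  assert_eq!(x.wrapping_add(1), 0);   // wrapping asked for by name
                                                  println!("{}", bump(x));
                                              }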

                                              Obviously, there is a reason for this nuanced stance: overflow checks presumably inhibit performance or other optimizations in the surrounding code.

                                              1. 2

                                                To clarify what I was saying: I consider overflow and UB to both be unacceptable behavior by my standard of safety. Saying that it is an error to have integer overflow or underflow, and then not enforcing that error by default for all projects, feels similar to how C has an assert statement that is only enabled for debug builds (well, really just when NDEBUG isn’t defined). So far the only language that I have seen which can perform these error checks at compile time (without using something like Result) is ATS, but that language is a bit beyond my abilities.

                                                If I remember correctly, some measurements were taken and it was found that always enabling arithmetic checks in the Rust compiler led to something like a 5% performance decrease overall. The Rust team decided that this hit was not worth it, especially since Rust is aiming for performance parity with C++. I respect the team’s decision, but it does not align with the ideals that I would strive for in a safety-first language (although Rust’s primary goal is memory safety, not everything-under-the-sun safety).

                                                1. 4

                                                  That’s fine to consider it unacceptable, but it sounded like you thought overflow was UB in Rust based on your comment. And even then, you might consider it unacceptable, but surely the lack of UB is an improvement. Of course, you can opt into checked arithmetic, but it’s quite a bit more verbose.

                                                  But yes, it seems like you understand the issue and the trade off here. I guess I’m just perplexed by your original comment where you act surprised at how Rust arrived at its current state. From what I recall, the intent was always to turn overflow checks on in release mode once technology erased the performance difference. (Or rather, at least that was the hope as I remember it at the time the RFC was written.)

                                                  1. 2

                                                    Oh no I actually am glad that at least Rust has it defined. It is more like Ada in that way, where it is something that you can build static analysis tools around instead of the UB-optimization-wild-west situation like in C/C++.

                                                    From what I recall, the intent was always to turn overflow checks on in release mode once technology erased the performance difference.

                                                    Yes, I believe that was in the original RFC, or at least it was discussed a bunch in the GitHub comments, but “once technology erased the performance difference” is different than “right now in shipping code after the language’s 1.0 release”. I would say it is less of a surprise that I feel and more a grumpy frustration - one of the reasons I picked up (or at least tried to pick up, RIP) Rust in the first place was because I wanted a more fault-tolerant systems programming language as my daily driver (I am primarily a C developer). But I remember being severely disappointed when I learned that overflow checks were disabled by default in release builds, because that ends up being only marginally better than everyone using -fwrapv in GCC/Clang. I like having the option to enable the checks myself, but I just wish it was universal, because that would eliminate a whole class of errors from the mental model of a program (by default).

                                                    :(

                                              2. 3

                                                  With the cost, it depends. Even though overflow checks are cheap themselves, they have a cost of preventing autovectorization and other optimizations that combine or reorder operations, because abort/exception is an observable side effect. The branches of the checks are trivially predictable, but they crowd other branches out of branch predictors, reducing their overall efficiency.

                                                OTOH overflows aren’t that dangerous in Rust. Rust doesn’t deal with bare malloc + memcpy. Array access is checked, so you may get a panic or a higher-level domain-specific problem, but not the usual buffer overflow. Rust doesn’t implicitly cast to a 32-bit int anywhere, so most of the time you have to overflow 64-bit values.

                                                1. 3

                                                  How is it that none of these languages handle overflow safely by default?!

                                                    The fact that both signed and unsigned integer overflow is (detectable) UB in Zig actually makes Zig’s --release-safe build mode safer than Rust’s --release with regards to integer overflow. (Note that Rust is much safer however with regards to memory aliasing)

                                                  1. 1

                                                    I stand corrected. I just tested this out and it appears that Zig forces me to check an error!

                                                    I’ll cross out the inaccuracy above…and I think I’m going to give Zig a more rigorous look… I’m quite stunned and impressed actually. Sorry for getting grumpy about something I was wrong about.

                                                  2. 2

                                                      I don’t think so either. I did a benchmark of overflow detection in expressions and it wasn’t that much of a time overhead, but it would bloat the code a bit, as you have to not only check after each instruction, but also restrict yourself to instructions that set the overflow bit.

                                                1. 0

                                                    I come from the Windows world (where IIS is the only web server that matters), so the idea of having to use a (reverse) proxy to (easily) support HTTPS is ludicrous to me.

                                                  1. 4

                                                    IIS fills the same place as nginx in this design.

                                                    1. 2

                                                      You don’t have to, but it is convenient.

                                                      1. 2

                                                        It’s really easy. Here’s a trivial nginx conf to enable that:

                                                        server {
                                                                server_name api.myhost.com;
                                                                listen 80;
                                                        
                                                                location / {
                                                                        include proxy_params;
                                                                        proxy_pass http://localhost:5000/;
                                                                }
                                                        
                                                               # Certbot will put in SSL for you
                                                        }
                                                        

                                                        And then you can easily get SNI-based multiple hosts on the same ‘node’ if you’d like. This lets you easily handle SSL stuff and still have whatever web-server you want bound to the local host:port to actually do the action. You can also do all the fun URL rewrite stuff up there if you’d like.

                                                      1. 2

                                                        A whole lot! On my kubernetes cluster:

                                                        Elsewhere at home, mostly in VMs:

                                                        • Supporting services for Kubernetes like storage, mysql, postgres
                                                        • Plex (self-hosted Netflix)
                                                        • Syncthing (self-hosted Dropbox)
                                                        • Minio (Self-hosted S3)
                                                        • Subsonic (a self-hosted music streaming server)
                                                        • Pysonic (a custom drop-in replacement for Subsonic’s XML API since I found the stock server to be problematic)
                                                        • A custom (and unnamed!) podcast generator (records, on schedule, segments from live streams and exports them as a podcast feed)
                                                        • Gitea, of course
                                                        • Various provisioning/monitoring/alerting tools to help maintain everything else (puppet, influxdb, grafana, custom monitoring, etc)

                                                        As you can tell, I very much like writing tools and apps for myself. Some of these are new (and I would call good) and others haven’t been updated in years.

                                                        1. 1

                                                          The gist I’m getting from this is that both implementations have some pretty serious flaws and that someone should create a 3rd standard that mixes the best of both.

                                                          1. 3

                                                            Why put the commands in a separate file, copy it to the container, and run it? In the Dockerfile you can write:

                                                              # note: pipefail needs bash; e.g. add SHELL ["/bin/bash", "-c"] first
                                                              RUN set -euo pipefail && \
                                                                export DEBIAN_FRONTEND=noninteractive && \
                                                                apt-get update && \
                                                                apt-get -y upgrade && \
                                                                apt-get -y install --no-install-recommends syslog-ng && \
                                                                apt-get clean && \
                                                                rm -rf /var/lib/apt/lists/*
                                                            

                                                              Which saves you the extra layer that the additional copy step would add.

                                                            I’ve been under the impression that base images are respun often enough that there is really no point to running package upgrades on build. Is this not the case?

                                                            Also, if you’re REALLY concerned about overhead from layers, you can squash your image into a single layer.
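
                                                              If memory serves, that’s a one-flag affair with the classic builder, though it needs the daemon’s experimental mode (image name invented):

                                                              docker build --squash -t myimage .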

                                                            1. 3

                                                                If you have set -e, your && becomes redundant and can be replaced with ;.

                                                              1. 2

                                                                  I’d say it’s best to stick with the && idiom, lest someone come along and copy the ; without the set. Shell tends to propagate via clipboard, particularly in Dockerfiles.

                                                                1. 1

                                                                    I strongly disagree. If the premise of your code is that people are going to copy/paste/break it anyway, you have a lot of other struggles too.

                                                                  1. 1

                                                                    Well yeah, we’re starting from a baseline of using shell. Of course there are problems! My perspective is based on supporting Docker in a large organization. I think that it is better to set folks up for (partial) success based on what they will likely do in practice, rather than pretending that domain specialists are going to learn shell properly.

                                                              2. 2

                                                                  The centos:8 image hasn’t been updated for two months. To be fair, none of the updates appear to be security fixes, but I would not rely on the base image being up-to-date.

                                                                1. 2

                                                                  Wow, that’s very strange. Every centos-based image out there having a slightly different layer containing the same updates seems like a missed opportunity for space and bandwidth savings because of layer re-use.

                                                              1. 1

                                                                A local date and time plus a UTC offset can be converted to a UTC date and time, but not the other way around.

                                                                This seems incorrect. Why can’t it? UTC itself is local if you’re in the right place. It’s no more difficult than converting times between different timezones anyway.

                                                                1. 3

                                                                  Things that share the same UTC offset can have different political boundaries, and that in turn means you may have different DST rules (and hence different UTC offsets at other times of year). Merely having an offset loses that information.
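
                                                                  A sketch of that with Python’s zoneinfo (3.9+): Phoenix and Denver share UTC-7 in January, but only Denver observes DST, so the January offset alone can’t tell you July’s local time.

                                                                  from datetime import datetime
                                                                  from zoneinfo import ZoneInfo

                                                                  for tz in ("America/Phoenix", "America/Denver"):
                                                                      z = ZoneInfo(tz)
                                                                      print(tz,
                                                                            datetime(2021, 1, 1, 12, tzinfo=z).utcoffset(),  # both UTC-7
                                                                            datetime(2021, 7, 1, 12, tzinfo=z).utcoffset())  # -7 vs -6 (DST)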

                                                                  1. 1

                                                                    That’s true for a naked timestamp, but a timestamp with no related information isn’t really useful. I don’t think the timezone field is a good place to store the location of whatever the date belongs to.

                                                                    1. 2

                                                                      Oh no I totally agree–that’s why I specified that.

                                                                1. 2

                                                                  Scratching an itch, and it’s great, but one should be cautious with Tampermonkey.

                                                                  1. 4

                                                                    This seems clickbaity considering the article is about licensing rather than any technical danger, like a vulnerability or something.

                                                                  1. 18

                                                                    The entire debate about systemd fascinates me as an expression of FLOSS culture. The way I see it, systemd is a boon for two classes of users - the few who manage huge amounts of “machines” (i.e. “the cloud”), and the many who use Linux as a workstation or on a laptop, where they seldom have any reason to bother about how a service starts, or is scheduled.

                                                                    The class of users who don’t like systemd seem to be the Mittelstand - the sysadmin of a few dozen machines, the user of a VPS who manages a lot of their own services to participate in the open web. They’re reliant on the “folk wisdom” of their own experience, and the substrate of blog and forum posts explaining how to set stuff up. Having to learn a new way of dealing with this stuff feels unnecessary - it worked fine before! The issues that systemd is set up to deal with have never impacted them.

                                                                    1. 16

                                                                      Disagree. I am a user of all 3 groups, and:

                                                                      • had a few problems with systemd in the cloud that wouldn’t have happened with other init systems
                                                                      • I don’t care about startup time anywhere, but I also don’t use autoscaling
                                                                      • my laptop shuts down slower now
                                                                      • if you have bare metal, the 20s saved with systemd don’t compare to the 3min RAM check…

                                                                      Overall systemd solved all the problems I never had. Not as a laptop user, not as a small-scale system admin, but ok - I didn’t work with cloud stuff before it happened.

                                                                      1. 4

                                                                        You shut your laptop down? I haven’t turned a laptop off (until I sell it) in a decade. They crash from time to time, and restart for updates, but I can’t think of a reason to turn one off unless you’re selling it or putting it into storage.

                                                                        1. 2

                                                                          work laptop: true, I usually don’t shut it down

                                                                          my private laptops: I use them once or twice a month, why would I keep all 3 (one is very old and only used every few months) of them running all the time?

                                                                        2. 1

                                                                          Who runs a RAM check in the modern world? Seems like a complete waste of time to me.

                                                                          (Feel free to convince me otherwise - I’m always open to persuasive arguments!)

                                                                          1. 10

                                                                            Even without a RAM check, a typical HP server still takes a good minute or two to POST and then prepare all of the firmwares for things like HBAs, NICs, sensors, RAID arrays etc before you even get as far as the operating system bootloader. I imagine the same to be true on most “server-grade” tin.

                                                                        3. 13

                                                                          I am in “the user of a VPS” category myself and I use Debian. I had some experience of writing init scripts and I am glad I don’t have to do this anymore.

                                                                          I also happen to SRE a huge Linux fleet at work, which uses a custom task supervisor written before systemd.

                                                                          I am all for more diversity and experimentation in init systems (e.g. one written in Rust would be great to experiment with, given a well-defined format for service files).

                                                                          1. 3

                                                                            I also happen to SRE a huge Linux fleet at work, which uses a custom task supervisor written before systemd.

                                                                          I see this once in a while and always wonder why. Enterprise distros since RHEL 6 (released in 2011!) ship with upstart as an init system, which is a great process manager. I don’t understand the need for installing yet another process manager.

                                                                        1. 2

                                                                          I haven’t been paying attention to Calibre ever since the author asserted the project will stay on Python 2, and that he’ll personally maintain python 2. Is that still the case?

                                                                          1. 2

                                                                            This wasn’t even really true as of the link you posted (did you read past the first comment?), and it certainly isn’t the case now. Calibre runs under Python 3 and has for quite a few versions now.

                                                                            1. 1
                                                                              1. 1

                                                                                Sorry, are you saying somewhere in my link the author changed his mind? I must have missed that. But I don’t see any other comments by the project’s author so I’m a little confused what you’re referring to?

                                                                            1. 11

                                                                              More people than you realize will take a URL, go to their favorite search engine, and type the URL into the search engine’s search field, never realizing they can actually edit the contents of the address bar above, [snip]

                                                                              I paint this bleak picture primarily for the benefit of Internet veterans [snip]

                                                                              If my description of “normal” users above surprised, shocked or disappointed you, you’re the target audience.

                                                                              Hmm. I don’t see this as bleak at all. I see this as a great advancement in tech, as now even non-technical users have no problem navigating to any site they wish. What would they have done before search engines? I suspect they would have been locked out because it was too hard to use at the time.

                                                                              1. 12

                                                                                now even non-technical users have no problem navigating to any site they wish.

                                                                          That doesn’t sound like the situation described. The situation described in the text you quoted says that non-technical users are only capable of visiting sites their search engine allows them to visit. It’s adding an additional layer of tracking and another opportunity for censorship, but only for non-technical users.

                                                                                1. 4

                                                                                  says that non-technical users are only capable of visiting sites their search engine allows them to visit.

                                                                                  Sure. What would those users have done before search engines?

                                                                                  1. 7

                                                                              In the described case, and many I’ve seen over others’ shoulders, they are typing the URL they want to visit into a search engine. Without a search engine, they would do what I do when I don’t have a bookmark for a site I want to visit: type it into the URL bar or browser start page. They might get a character wrong, in which case they are no more susceptible to phishing and other problems than if they were typing the same thing into a search engine.

                                                                              I don’t think this in itself is much worse, but it does teach non-technical users to forget that search engines are web sites; instead they think the search engine is their browser. But it seems inevitable.

                                                                                    1. 2

                                                                                      I don’t think they’re typing the URL into the search engine. They’re typing the name of what they want into the search engine, like “facebook”. Google trends comparison. Edit: another common variation is “[service_name] login”.

                                                                                      they would … type it in the URL bar or browser start page.

                                                                                      Hmm, that’s exactly what the article’s author is complaining about - people not doing this because they don’t realize it’s a feature. I think the root cause of this is that users don’t understand what URLs are.

                                                                                  And why should they? They can just type “facebook” into whatever input field is focused when the browser opens and eventually end up where they want. Had the user typed 4 more characters (“.com”) into the browser’s URL bar instead, they’d end up at the same place and save the step of clicking a search result. Even an average person understands saving time and effort. So why don’t they do it?

                                                                                      I think we’re assuming the average user is way more savvy than reality.

                                                                                      1. 1

                                                                                    I think users will do the least that they have to. If we don’t require them to know the difference between a website and their computer, then the two should not be confusable!

                                                                                        1. 1

                                                                                          The article is saying they are going to a search page and entering the URL into the search. While a start page or address bar may also go to a search page, the browser will first attempt the text as a URL! (and in my case, neither is enabled to search - I have a search bar for that.)

                                                                                  2. 4

                                                                                    Maybe I’m being dense but I don’t see any difference between

                                                                                    “type the exact digits into the phone application on your mobile”

                                                                                    “type the exact website address into the browser address bar”

                                                                              from a usability point of view. Imagine browsers had never added the omnibar.

                                                                                    1. 3

                                                                                      I would guess it’s something between the learning curve and level of standardization. Phones give you clear feedback that you did something wrong but provide zero help when you dial incorrectly, besides telling you that you did so. Omnibars provide suggestions that, with today’s very smart search engines, are almost always what you wanted. Browsers come in a variety of shapes and sizes, but phone dial pads are always the same. Phone numbers (at least, when dialing domestically) are always the same length and format. Web addresses have much more variation.

                                                                                    2. 4

                                                                                      What would they have done before search engines?

                                                                                What my parents and my grandparents eventually had to do, they would have learnt.

                                                                                That being said, as an internet “veteran” of 25 years, I still find it more convenient to type the name of a company into the nav bar and have my search engine of choice display a series of links, of which the first one is usually what I am after, rather than type in the whole URL.

                                                                                      All I can say is my teenage self would have been very disappointed if they saw how I navigate the internet today, what can I say? The lowest common denominator won out; in technology you either adapt or you die.