1. 1

    my website’s https://superbaud.org/ and i’m curious what people have to say and/or offer as suggestions since i’m new to doing this sort of thing

    1. 2

      Nice, simple, and clean. Maybe “Home” is a bit empty. I would make either “About” or “Blog” the landing page instead.

    1. 10

      Along these lines, see David Reed’s memories of UDP “design”, where he notes that he and Steven Kent argued unsuccessfully for mandatory end-to-end encryption at the protocol layer circa 1977.

      1. 3

        also the “bubba” and “skeeter” TCP options; a 1991-era proposal for opportunistic encryption of TCP connections (https://simson.net/thesis/pki2.pdf, https://www.ietf.org/mail-archive/web/tcpm/current/msg05424.html, http://mailman.postel.org/pipermail/internet-history/2001-November/000073.html)

      1. 6

        Ahhh nice, it actually addresses the UEFI “runtime services” (does it talk about all the ACPI horrors that Linux runs in the ACPI bytecode interpreter? Idk) as well as SMM/ME. Frankly, this level of complexity in platform firmware is terrifying, how can anyone have a hope of building a secure/trustworthy system with x86 if all this is in play?

        1. 21

          The fundamental problem with USB-C is also seemingly its selling point: USB-C is a connector shape, not a bus. It’s impossible to communicate that intelligibly to the average consumer, so now people are expecting external GPUs (which run on Intel’s Thunderbolt bus) for their Nintendo Switch (which supports only USB 3 and DisplayPort external busses) because hey, the Switch has USB-C and the eGPU connects with USB-C, so it must work, right? And hey why can I charge with this port but not that port, they’re “exactly the same”?

          This “one connector to rule them all, with opaque and hard to explain incompatibilities hidden behind them” movement seems like a very foolish consistency.

          1. 7

            It’s not even a particularly good connector. This is anecdotal, of course, but I have been using USB Type-A connectors since around the year 2000. In that time not a single connector has physically failed for me. In the year that I’ve had a device with Type-C ports (current Macbook Pro), both ports have become loose enough that simply bumping the cable will cause the charging state to flap. The Type-A connector may only connect in one orientation but damn if it isn’t resilient.

            1. 9

              Might be crappy hardware. My phone and Thinkpad have been holding up just fine. USB-C seems a lot more robust than micro-B.

              1. 3

                It is much better, but it’s still quite delicate with the “tongue” in the device port and all. It’s also very easy to bend the metal sheeting around the USB-C plug by stepping on it etc.

              2. 6

                The perfect connector has already been invented, and it’s the 3.5mm audio jack. It is:

                • Orientation-free
                • Positively-locking (not just friction-fit)
                • Sturdy
                • Durable

                Every time someone announces a new connector and it’s not a cylindrical plug, I give up a little more on ever seeing a new connector introduced that’s not a fragile and/or obnoxious piece of crap.

                1. 6

                  Audio jacks are horrible from a durability perspective. I have had many plugs become bent and jacks damaged over the years, resulting in crossover or nothing playing at all. I have never had a USB cable fail on me because I stood up with it plugged in.

                  1. 1

                    Not been my experience. I’ve never had either USB-A or 3.5mm audio fail. (Even if they are in practice fragile, it’s totally possible to reinforce the connection basically as much as you want, which is not true of micro USB or USB-C.) Micro USB, on the other hand, is quite fragile, and USB-C perpetuates its most fragile feature (the contact-loaded “tongue”—also, both of them unforgivably put the fragile feature on the device—i.e., expensive—side of the connection).

                  2. 4

                    You can’t feasibly fit enough pins for high-bandwidth data into a TR(RRRR…)S plug.

                    1. 1

                      You could potentially go optical with a cylindrical plug, I suppose.

                      1. 3

                        Until the cable breaks because it gets squished in your bag.

                    2. 3

                      3.5mm connectors are not durable and are absolutely unfit for any sort of high-speed data.

                      They easily get bent and any sort of imperfection translates to small interruptions in the connection when the connector turns. If I – after my hearing’s been demolished by recurring ear infections, loud eurobeat, and gunshots – can notice those tiny interruptions while listening to music, a multigigabit SerDes PHY absolutely will too.

                    3. 3

                      This. USB-A is the only type of USB connector that has never failed for me. All the B types (normal, mini, micro) and now C have failed for me in some situation (breaking off, getting wobbly, loose connections, etc.).

                      That said, Apple displays their iPhones in Apple Stores resting solely on their plug. That alone speaks to some sort of good reliability design on their ports. Plus the holes in devices don’t need some sort of “tongue” that might break off at some point - the Lightning plug itself doesn’t have any intricate holes or similar and is made (mostly) of a solid piece of metal.

                      As much as I despise Apple, I really love the feeling and robustness of the Lightning plug.

                      1. 1

                        I’m having the same problem: the slightest bump will knock it out of charging mode. I’ve been listening to music a lot recently and it gets really annoying.

                        1. 2

                          Have you tried to clean the port you are using for charging?

                          I have noticed that Type-C seems to suffer a lot more from lint in the ports than Type-A.

                      2. 6

                        “It’s impossible to communicate that intelligibly to the average consumer …”

                        That’s an optimistic view of things. It’s not just “average consumer[s]” who’ll be affected by this; there will almost certainly be security issues originating from the Alternate Mode thing – because different protocols (like thunderbolt / displayport / PCIe / USB 3) have extremely different semantics and attack surfaces.

                        It’s an understandable thing to do, given how “every data link standard converges to serial point-to-point links connected in a tiered-star topology and transporting packets”, and there’s indeed lots in common between all these standards and their PHYs and cable preferences; but melding them all into one connector is a bit dangerous.

                        I don’t want a USB device of unknown provenance to be able to talk with my GPU and I certainly don’t want it to even think of speaking PCIe to me! It speaking USB is frankly, scary enough. What if it lies about its PCIe Requester ID and my PCIe switch is fooled? How scary and uncouth!

                        1. 3

                          Another complication is that making every port do everything is expensive, so you end up with fewer ports total – Thunderbolt in particular. Laptops with 4 USB-A ports, HDMI, DisplayPort, Ethernet, and power are easy to find; I doubt you’ll ever see a laptop with 8 full-featured USB-C ports.

                        1. 24

                          I wonder if Lenovo will also go retro on the spyware. Maybe Reader Rabbit or WeatherBug.

                          1. 6

                            I’d chip in a few satoshis to see a modern-day BonziBuddy.

                            1. 6

                              Well, they probably ship with Windows, and that includes Cortana

                              1. 3

                                With HDR and lens flare.

                              2. 2

                                I simply insist on it being “Back Orifice”.

                                1. 1

                                  I wonder what’s inside – is it spinning rust hard drives or is it flash?

                                  1. 2

                                    God, it’s depressing seeing something I wrote in there and realising how little I’ve been able to get done since then :\

                                    1. 26

                                      NIHing everything but nevertheless making mistakes that had been understood and fixed decades ago – this is why many people have negative attitudes towards systemd. I’ve been using it for many years but still the insistence on reimplementing things in shitty and incomplete ways (journald is a shoddy half-baked reimplementation of sqlite, say) frustrates me to no end.

                                      1. 20

                                        Is Spinning Disk Going Extinct?

                                        15K RPM drives (and probably laptop hard drives) are getting killed off ruthlessly by flash. Nobody will mourn 15K drives but I confess that the end of laptop hard drives will make me sad and nostalgic.

                                        Spinning rust has a dim future outside the data centre, but when $/GB matters, it reigns supreme, and that’s why in 2017 the state of the art still involves literal tubs of hard drives: https://code.facebook.com/posts/1869788206569924/introducing-bryce-canyon-our-next-generation-storage-platform/

                                        Google presented a paper at FAST16 about fundamentally redesigning hard drives to specifically target drives that are operated exclusively as part of a very large collection of disks (where individual errors are not such a big deal as in other applications), in order to further reduce $/GB and increase IOPS: https://static.googleusercontent.com/media/research.google.com/en//pubs/archive/44830.pdf

                                        Possible changes mentioned in the paper include:

                                        • New (non-compatible) physical form factor[s] in order to freely change the dimensions of the heads and platters

                                        • adding another actuator arm / voice coil with its own set of heads

                                        • accepting higher error rates and “flexible” (this is a euphemism for “degrades over time”) capacity in exchange for higher areal density, lower cost, and better latencies

                                        • exposing more lower-level details of the spinning rust to the host, such as host-managed retries and exposing APIs that let the host control when the drive schedules its internal management tasks

                                        • better profiling data (time spent seeking, time spent waiting for the disk to spin, time spent reading and processing data) for reads/writes

                                        • Caching improvements, such as ability to mark data as not to be cached (for streaming reads) or using PCIe to use the host’s memory for more cache

                                        • Read-ahead or read-behind once the head is settled costs nothing (there’s no seek involved!). If the host could annotate its read commands with its optional desires for nearby blocks, the hard drive could do some free read-ahead (if it was possible without delaying other queued commands) – there’s a sketch of what such an annotated read command might look like after this list.

                                        • better management of queuing – there’s a lot more detail on page 15 of that PDF about queuing/prioritisation/reordering, including the need for the drive’s command scheduler to be hard real-time and be aware of the current positioning of the heads and of the media. Fun stuff! I sorta wish I could be involved in making this sort of thing happen.
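
                                        Here is the purely hypothetical sketch mentioned in the read-ahead bullet above – my own illustration, not an interface from the paper – of what an annotated read command could carry:

                                        ```python
                                        from dataclasses import dataclass, field
                                        from typing import List, Tuple

                                        @dataclass
                                        class AnnotatedRead:
                                            """Hypothetical read command: one required extent plus optional
                                            "nice to have" nearby extents that the drive may return for free
                                            while the head is already settled, but must never delay other
                                            queued commands for."""
                                            lba: int            # first logical block of the required read
                                            length: int         # number of blocks that must be returned
                                            deadline_ms: float  # latency target for the required extent
                                            # (lba, length) extents the host would like if they cost nothing extra
                                            optional_nearby: List[Tuple[int, int]] = field(default_factory=list)

                                        cmd = AnnotatedRead(lba=1_000_000, length=256, deadline_ms=50.0,
                                                            optional_nearby=[(1_000_256, 1024)])
                                        ```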

                                        tl;dr there is a lot of room for improvement if you’re willing to throw tradition to the wind and focus on the single application (very large scale bulk storage) where spinning rust won’t get killed off by flash in a decade.

                                        http://www.ewh.ieee.org/r6/scv/mag/MtgSum/Meeting2010_10_Presentation.pdf is a fun set of technically-focused slides about future reading/writing methodologies and is very much worth a read. Also TDMR is literally just MIMO for hard drives.

                                        1. 4

                                          Is it literally rust in some sense, or is that a joke? The platters I’ve seen don’t look like rust, although they don’t look like anything else from the everyday world, either.

                                          1. 10

                                            The magnetic layer (the part of the sandwich of platter materials/coatings that actually stores the data) used to indeed be iron oxide up to the 1980s but anything after that doesn’t have iron oxides – just a fancy cobalt alloy.

                                            Nowadays it’s just a facetious term (unless you own vintage hard drives, in which case you actually are spinning literal rust).

                                        1. 1

                                          AFAIK there is still a cost/density argument for spinning disks. That will go away as transistor sizes get smaller, but afaik we’re approaching the limit for how physically small transistors can be, so it may never truly go away.

                                          1. 5

                                            There’s hardcore lithography involved in making the read/write heads of hard drives (http://life.lithoguru.com/?p=249) but making the platters doesn’t involve doing actual lithography (which helps hard drives stay cost-effective). Modern hard drives have bit cells a little larger than those of flash (it took until 2015 for flash bit cells to get smaller than those on spinning rust! http://www.digitalpreservation.gov/meetings/documents/storage14/Fontana_Volumetric%20Density%20Trends%20for%20Storage%20Components%20--%20LOC%2009222014.pdf).

                                            With flash, AFAIK, shrinking things is hard because the number of electrons you can store decreases with size which, combined with the fact that electrons leak out over time, makes the electron-counting business even hairier.

                                          1. 1

                                            Besides the “this is the browser’s job to display the top-level domain differently” or whatever comments, how difficult would it be for Let’s Encrypt to be given a list of commonly-phished websites and delay issuance (and notify the real paypal so they can take legal action or quash the domain registration)?

                                            1. 5

                                              It’s a one line perl script to see if a domain name matches a ban list, but who decides what’s in the ban list? (Who decides who decides what’s in the ban list?)
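
                                                A hypothetical sketch of just that matching step – the target list here is made up, and deciding what actually goes on it is exactly the hard part:

                                                ```python
                                                # the easy part of the job; PHISHING_TARGETS is a made-up stand-in
                                                # for whatever list someone would have to curate and maintain
                                                PHISHING_TARGETS = {"paypal", "appleid", "bankofamerica"}

                                                def needs_manual_review(domain: str) -> bool:
                                                    """Flag a requested domain if any of its labels contains a
                                                    well-known phishing target as a substring."""
                                                    labels = domain.lower().rstrip(".").split(".")
                                                    return any(target in label
                                                               for label in labels
                                                               for target in PHISHING_TARGETS)

                                                # needs_manual_review("paypal.com.secure-login.example") -> True
                                                # needs_manual_review("lobste.rs")                       -> False
                                                ```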

                                              1. 1

                                                  LE already uses the Google Safe Browsing list, so using the inverse of it (the legitimate websites that are imitated by the entries on the Safe Browsing list) isn’t really superbly controversial.

                                                Creating an audit log of all certificates that have been delayed/quashed due to this procedure (along with the legal entity responsible) seems also completely doable.

                                              2. 3

                                                Thanks to certificate transparency, that is unnecessary. Paypal can watch the CT log and “take legal action or quash the domain registration” whenever they feel like it :)

                                              1. 4

                                                making wireless networks not suck kinda depends on managing airtime properly – this is why wifi sucks and LTE has thousands of pages of specs about a whole menagerie of control channels with heinous acronyms (that even the spec writers sometimes typo) that allocate who transmits what and when (and at what frequency).

                                                1. 4

                                                  Given what you’ve said, it’s surprising LTE works in practice, because I’d expect implementations to be buggy and screw everything up if the standard is hard to follow. Or are they abnormally well-tested in practice or something? :)

                                                  1. 8

                                                      The standard is hard to follow only in that there are plenty of moving parts and many different control channels, because shared resources – such as airtime and radio bandwidth (and backhaul/network bandwidth) – need to be allocated and shared precisely among many different devices.

                                                    If you want to avoid getting buggy implementations you can make formal models for all the layers – the RF layers, the bits that get modulated onto RF, and the protocols that get run over those bits. Formal models let you write software that can output valid protocol transcripts (that you can send to a radio) or validate a transcript (that you received from a radio) – all to make sure that every device is sending/receiving the right bits modulated the right way at the right times/frequencies.

                                                      Once you somehow obtain software that implements a formal protocol model, you (or someone who makes or certifies LTE equipment) can verify/fuzz code that runs on UEs and eNBs – both when it’s just harmless source code and also (if you have some SDRs) when it’s transmitting real RF in real devices in some test lab. So yes, implementations are indeed well-tested (and indeed, are required to be tested before they can be sold and be allowed on real LTE networks).
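
                                                      To make “validate a transcript” a bit more concrete, here’s a toy sketch that only checks the ordering of the RRC connection-establishment messages – a real conformance model also covers timing, frequencies, and every field of every message:

                                                      ```python
                                                      # toy illustration only: a real formal model goes far beyond message names
                                                      ALLOWED_NEXT = {
                                                          "IDLE":       {"RRCConnectionRequest": "REQUESTED"},
                                                          "REQUESTED":  {"RRCConnectionSetup": "SETUP_SENT"},
                                                          "SETUP_SENT": {"RRCConnectionSetupComplete": "CONNECTED"},
                                                          "CONNECTED":  {},
                                                      }

                                                      def transcript_valid(messages):
                                                          """Walk the transcript through the allowed state machine."""
                                                          state = "IDLE"
                                                          for msg in messages:
                                                              if msg not in ALLOWED_NEXT[state]:
                                                                  return False, f"unexpected {msg} while in state {state}"
                                                              state = ALLOWED_NEXT[state][msg]
                                                          return state == "CONNECTED", state

                                                      # transcript_valid(["RRCConnectionRequest", "RRCConnectionSetup",
                                                      #                   "RRCConnectionSetupComplete"]) -> (True, "CONNECTED")
                                                      ```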

                                                1. 6

                                                  hello, thx C-Keen, i’m the creator of this project, thx for inviting me here. any questions from anyone feel free to ask.

                                                  1. 3

                                                    Are there any plans to make EOMA68 cards with a lot more than 2GB of RAM? I like the EOMA68 idea but 2GB of RAM is painfully, painfully small for the sorts of things I do (like “have lots of tabs open in chromium” or “compile stuff with ghc”) – it’s mostly tolerable on a recentish i5 laptop with 8GB of memory but I cannot find the masochism within me to buy a computer with 2GB of memory and use it like I use my laptop.

                                                    I would utterly love a non-horribly-expensive AArch64 machine with a comfy amount of memory (like, say, 8 or 16GB) and some SATA/SAS ports – if you can make that happen or if I can help make that happen I am willing to contribute my time and money.

                                                    I really do want some decent aarch64 hardware that isn’t violently expensive and that i wouldn’t mind using as my primary machine, but the situation is…frankly bleak.

                                                    1. 1

                                                      hiya zkms, yes there are… but the SoC fabless semi companies have to actually come up with the goods… or we simply have to raise between $5m and $10m and get one made. i replied on that post for you, to explain a bit about what’s involved. 2GB RAM is the max you’ll ever likely see on memory-mapped 32-bit processors because they can only address up to 4GB RAM as it is!

                                                      we’ll get there. the project’s got another 10 years ahead of it at least.

                                                      1. 1

                                                        nods – it’s weird that there aren’t any available SoCs that use aarch64 (or 32 bit SoCs that support LPAE) and expose enough address lines to connect a reasonable amount of RAM, tbh

                                                    2. 3

                                                      Neat project. I posted the same idea years ago on Schneier’s blog for subversion concerns. Idea was that physical possession of one’s hardware usually leads to compromise. Most stuff on a computer doesn’t have to be trusted either. High-assurance security also teaches to make trusted part as tiny & reusable as possible. I think I was screwing around with ARTIGO’s, smartcards or old-school game cartridges when I got the idea of a PC Card or ARTIGO-like module featuring secure processor, RAM, and I/O mediation. This would plug into desktops, servers, monitors, laptops if low-power, and so on. Ecosystem could show up like with iPod accessories probably starting in China where it’s cheaper. Later, I noted a private company came up with something similar and probably had patents on it so I backed off temporarily. Can’t recall its name but brochure had one or two in there that looked just like my marketing.

                                                      Projects like yours are a nice testing ground for this concept that I keep shelved but not forgotten. Interesting to see which decisions will work in market and which won’t. Important before an expensive, security-oriented product is attempted. The project is nice except for one little problem it shares with others: the ARM processor. ARM Inc is about as opposite of protecting freedom as they can be. MIPS isn’t much better. Both have sued open-source and startup competition for patent infringement. “Open” POWER is an unknown. RISC-V isn’t on market yet. The only FOSS ISA that’s production-grade right now is Cobham Gaisler’s Leon3 SPARC CPUs. They’re GPL’d, have fabbed parts at who knows what price, SPARC ISA is open, Open Firmware exists, & products only need sub-$100 fee for trademark.

                                                      http://www.gaisler.com/index.php/products/processors/leon3

                                                      Note: OpenSparc T1 and T2 processors were GPL’d, too. FOSS workstations, servers and embedded should be all over these in terms of getting them fabbed and in real systems. They stay ignored for x86 and ARM mainly even if not performance-critical.

                                                      1. 3

                                                        totally cool man. i’m familiar with gaisler research stuff, i looked at it years ago. ooooOoooo nice, nice, nice: LEON4 goes up to 1.7ghz in 32nm, is 64-bit and is SMP ready right now. niiiiice. oo, that’s really exciting. and there’s a simplified developer board that runs at 150mhz (good enough for testing, i bet it’s like 180nm or something)

                                                        having found the GPLGPU and the MIAOU project i think we have enough to put something together that would kick ass.

                                                        awww darnit, LEON4 is still only 32-bit. aw poop :)

                                                        1. 2

                                                          OpenSPARC T2 is 64-bit. Some smaller projects just knock some cores and stuff out of it to simplify it.

                                                          http://www.oracle.com/technetwork/systems/opensparc/opensparc-t2-page-1446157.html

                                                          Gaisler is still best for embedded and customizable. Wonder how hard it would be to make it 64-bit.

                                                          1. 2

                                                            the crucial bit is the SMP support, to be able to combine those…. opensparc… oracle… we’re not a huuge fan of oracle.. hmm interesting: just underneath the popup that i refuse to click which prevents and prohibits access to their web site, i can just about make out that the opensparc engine is GPLv2…. mostly. haha i bet they were expecting that to be a roadblock to prevent commercial SoCs being made around it…. :)

                                                            1. 1

                                                              Probably haha. Yeah, I avoid Oracle wherever possible too. Just that these are supposedly GPL v2. Either a last resort for OSS CPU or a top contender if you need one with performance. A T2 on 28nm would probably be no joke.

                                                      2. 2

                                                        Very cool project!

                                                        I hope these questions aren’t too basic, but I’m not familiar with small ARM computers like this and I couldn’t find the info on the Crowdsupply page or updates:

                                                        1) When you say Linux 3.4 is supported, does that mean just 3.4 or 3.4 and all later versions? I saw in one update you mentioned 4.7 (I think) working but crashing frequently… What does future support likely look like: i.e. is everything getting into the mainline kernels and do you expect future versions to work even better, or should we expect to stay on 3.4 forever?

                                                        2) How close is the environment to “stock” distributions? I.e. when you say it has “Debian” on it, does that really mean it’s using totally standard Debian packages, tracking the official repositories, getting all the security updates from the Debian Security team, etc? Or is it more of a custom Debian-based environment tweaked for this hardware specifically? If the latter, how much does it differ from base Debian and is there anyone actively maintaining/updating it for the foreseeable future?

                                                        3) What does the installation/update procedure look like; is it as simple as on an x86 desktop where I’d just grab a bootable USB installer?

                                                        Thank you!

                                                        1. 1

                                                          thx felix.

                                                          (1) no, it’s precisely and specifically the 3.4.104+ version, which you can find is maintained by the sunxi community. this kernel has support for dual-screens, stable NAND flash (albeit odd and quirky), accelerated 2D GPU provision, hardware-accelerated 1080p60 video playback/encode provision and much more. it’s a stable continuation of what allwinner released. i’m currently bisecting git tags on linux mainline; so far i have: v3.4 works, v3.15 works, v4.0 works, v4.2 lots of segfaults, v4.4 failed, v4.7 failed. so it’s a work-in-progress to find at least one mainline stable kernel.

                                                          (2) yes completely “normal” - exception being the kernel - there’s a huge active community behind the A20 but i will not be “holding anybody’s hand” - you’ll have to take responsibility amongst yourselves as i am working on delivering hardware to people and, as i’m only one person, i simply don’t have time. i’m anticipating that people will help each other out on the mailing list.

                                                          (3) sigh the standard process should be to have an initrd installer (debian-installer netboot) but that’s actually too complex for developers to cope with, so instead what they do is create “pre-built” images. i REALLY don’t like this practice but for convenience i’m “going with the flow” for now.

                                                          feel free to ask more :)

                                                      1. 19

                                                          Talking about “overdiagnosis” in a vacuum without talking about the devastating costs of underdiagnosis is horrible practice (especially given how much ADHD is underdiagnosed in women). Please read the “The Impact of ADHD During Adulthood” section in that article. I’ll wait for you.

                                                        Self-medication with effective medication isn’t even really possible because amphetamines aren’t over-the-counter, so diagnosis is kinda a necessary condition to get access to effective meds.

                                                        The whole “overdiagnosis” meme leads to parents not being OK with the idea of their children being diagnosed with ADHD or *shudder* be on effective medication like adderall or ritalin, or people thinking/internalizing the idea that they can’t ~*~really~*~ have ADHD and that they’re just “lazy” or “flaky” or “apathetic” or whatever other shitty terms are used for people with executive dysfunction.

                                                          It also makes doctors more suspicious of people seeking treatment (after all, if it’s overdiagnosed it must be rare, so some of the people I see as a doctor have to be faking it!), which actually causes harm to people who need access to medical care (finding a doctor who’s ok prescribing can fucking suck when you’re ADHD and have run out of meds).

                                                        I’m super critical of psychiatry, and there definitely are people who are diagnosed with it who don’t have it / don’t benefit from medication, but the “overdiagnosis” moral panic causes real harm to people who actually end up needing medical access (often to controlled substances which are impossible to legally get without the appropriate diagnosis).

                                                        1. 17

                                                          the most spiteful thing about this isn’t just that it destroyed usability (especially if you need to use a keyboard and can’t use a mouse, or use a screen reader) and makes websites unusable and slow – it’s that there are many applications where javascript in web pages is actually useful.

                                                          for example, checking the checksum on a credit card number (so you don’t need to submit a typo’d credit card number and have to reenter everything on the page) (also you literally afaict never need to click on the “this is a visa” radio button, the card type can be identified with the number)
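
                                                            (a minimal sketch of that checksum – the Luhn algorithm – plus the prefix-based card-type detection; Python here just for brevity since a page would obviously do this in JavaScript, and the issuer prefixes are deliberately incomplete:)

                                                            ```python
                                                            def luhn_valid(card_number: str) -> bool:
                                                                """Luhn checksum: double every second digit from the right and
                                                                check that the digit sum is a multiple of 10."""
                                                                digits = [int(c) for c in card_number if c.isdigit()]
                                                                total = 0
                                                                for i, d in enumerate(reversed(digits)):
                                                                    if i % 2 == 1:   # every second digit from the right
                                                                        d *= 2
                                                                        if d > 9:
                                                                            d -= 9
                                                                    total += d
                                                                return len(digits) > 1 and total % 10 == 0

                                                            def card_type(card_number: str) -> str:
                                                                """Very rough issuer detection from the leading digits."""
                                                                n = "".join(c for c in card_number if c.isdigit())
                                                                if n.startswith("4"):
                                                                    return "visa"
                                                                if n[:2] in {"51", "52", "53", "54", "55"}:
                                                                    return "mastercard"
                                                                if n[:2] in {"34", "37"}:
                                                                    return "amex"
                                                                return "unknown"

                                                            # luhn_valid("4111111111111111") -> True
                                                            # card_type("4111111111111111") -> "visa"
                                                            ```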

                                                          also it was cool when twitter let you look at threaded tweets on a timeline/userpage without having to change URL or open up a horrid cringey lightbox, it was fast and useful and wasn’t too onerous, unlike the new lightboxes.

                                                          i hope whoever invented those lightboxes for tweets has to use twitter on a computer that doesn’t have the latest CPU and doesn’t have 24 GB of memory and isn’t connected with a gigabit link to twitter.com. it is sufficient punishment.

                                                          1. 21

                                                              “We owe them a thorough and professional investigation under law. That’s what this is. The American people should expect nothing less from the FBI.”

                                                              Oh, that’s why the Feeb did “a reckless and forensically unsound password change” on the phone’s iCloud account? That’s what “professional” means?

                                                            1. 2

                                                              What’s the point of the password change? Why can’t they just subpoena all the info on the servers?

                                                              1. 3

                                                                The purpose elucidated by their letter justifying the action (linked from the above, here for convenience) was that it allowed the phone’s owners (not the shooters - it was a work phone) to access the existing backups. The letter does not raise the possibility of a subpoena to Apple.

                                                                1. 1

                                                                  That letter makes it sound like they reset the iCloud password to get at the backups without judicial oversight. I suppose if the employer / owner of the AppleId email was cooperating, that’s not too bad? Still, an interesting precedent.

                                                                  1. 2

                                                                    Yeah, they did nothing wrong in asking to have it reset, but it leaves them in the position of having to argue that they’re incompetent, because the alternative is that they’re completely disingenuous (which is not actually proven as a matter of law, after all).

                                                            1. 6

                                                              I know they should be using actual PGP signatures or whatever instead of just hashes; but:

                                                              Choosing to use MD5 at all in 2016 is a sign of negligence and incompetence when it comes to crypto.

                                                              There’s no excuse for choosing to use MD5 or SHA1 for anything today.

                                                              There’s no excuse for coming up with probably-incorrect handwaving arguments about how “MD5 is broken, but not broken in how we use it” or “we also provide sha1 hashes” instead of replacing it with an actually secure hash (there are faster and more secure hashes out there like blake2b).
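
                                                                For what it’s worth, a modern hash is no harder to reach for than MD5 – a minimal sketch with Python’s hashlib, which has shipped blake2b in the standard library since 3.6 (the .iso filename is just an example):

                                                                ```python
                                                                import hashlib

                                                                def file_digest(path: str) -> str:
                                                                    """Hash a file with blake2b, streaming it in 1 MiB chunks."""
                                                                    h = hashlib.blake2b()  # 64-byte digest by default
                                                                    with open(path, "rb") as f:
                                                                        for chunk in iter(lambda: f.read(1 << 20), b""):
                                                                            h.update(chunk)
                                                                    return h.hexdigest()

                                                                # e.g. file_digest("linuxmint-18.1-cinnamon-64bit.iso")
                                                                ```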

                                                                Choosing to use MD5 or SHA1 (or other such hallmarks of bad 90s-era civilian cryptography) is engineering as cringeworthy and negligent as the safety engineering in 1960s American cars, which topped out at “padded dashboards” and 2-point seatbelts and “recessed hub steering wheels”.

                                                              1. 1

                                                                If I’ve understood this correctly, in this case it wouldn’t really help? I mean would a regular linux mint user verify it after download?

                                                                My take is that the installer should verify itself, something akin to signify on openbsd but for the whole image.

                                                              1. 5

                                                                Is there any desire to have Lobsters support 2FA?

                                                                1. 3

                                                                  I’d use it; I’m not sure I’d advocate for spending effort on it over other features, but I’m not aware of any recent feature proposals. Halfhearted yes?

                                                                  1. 2

                                                                      I already got a U2F USB thing for GitHub purposes, so definitely – any website where I need to manually log on is a website I’d like to see support U2F.

                                                                    1. 1

                                                                      i’d use it!