1. 2

    It occurs to me that if you do not have the right to benchmark then you do not have the right to test that the product works as advertised. This cannot be legal.

    1. 2

      This license forbids systems integrators from publishing benchmarks related to this microcode. Presumably because Intel reserves that right to themselves. If you are not a systems integrator it doesn’t apply to you. If you are a systems integrator not only can you benchmark, clause 4 makes it clear you are under no obligation to share those results, even with Intel.

      1. 1

        We don’t want to get submissions for every CVE and, if we do get CVEs, we probably want them tagged security.

        1. 16

          while I agree with you in this case, I don’t particularly like the “I speak for everyone” stance you seem to be taking here.

          1. 9

            This one is somewhat notable for being the first (?) RCE in Rust, a very safety-focused language. However, the CVE entry itself is almost useless, and the previously-linked blog post (mentioned by @Freaky) is a much better article to link and discuss.

            1. 4

              Second. There was a security vulnerability affecting rustdoc plugins.

          2. 4

Do you think an additional CVE tag would make sense? Given the upvotes, some people seem to be interested.

            1. 2

              That’d be a good meta tag proposal thread.

            2. 4

Yeah, I’d rather not have them at all. Maybe a detailed, technical write-up of the discovery, implementation, and mitigation of a new class of vulnerability with wide impact – Meltdown/Spectre or return-oriented programming are examples. Then we’d see only the deep stuff here, while vulnerability-listing sites carry the regular stuff for the people who use them.

              1. 5

seems like a CVE, especially arbitrary code execution, is worth posting. my 2 cents

                1. 5

There are a lot of potentially-RCE bugs (type confusion, use-after-free, buffer-overflow writes); if there were a lobsters thread for each of them, there’d be no room for anything else.

Here’s a short list from the past year or two, from one source: https://bugs.chromium.org/p/oss-fuzz/issues/list?can=1&q=Type%3DBug-Security+label%3AStability-Memory-AddressSanitizer&sort=-modified&colspec=ID+Type+Component+Status+Library+Reported+Owner+Summary+Modified&cells=ids

                  1. 2

i’m fully aware of that. What I was commenting on was Rust having one of these RCE-type bugs, which, to me, is worthy of discussion. I think it’s weird to police these like they’re some kind of existential threat to the community, especially given how much enlightenment can be gained by discussing their individual circumstances.

                    1. -1

                      But that’s not Rust, the perfect language that is supposed to save the world from security vulnerabilities.

                      1. 3

                        Rust is not and never claimed to be perfect. On the other hand, Rust is and claims to be better than C++ with respect to security vulnerabilities.

                        1. 0

                          It claims few things - from the rustlang website:

                          Rust is a systems programming language that runs blazingly fast, prevents segfaults, and guarantees thread safety.

                          None of those claims are really true.

It’s clearly not fast enough if you need unsafe to get real performance - which is the reason this CVE was possible.

It’s clearly not preventing segfaults - which this CVE shows.

                          It also can’t prevent deadlocks so it is not guaranteeing thread safety.

                          I like rustlang but the claims it makes are mostly incorrect or overblown.

                          1. 2

                            Unsafe Rust is part of Rust. I grant you that “safe Rust is blazingly fast” may not be “really true”.

                            Rust prevents segfaults. It just does not prevent all segfaults. For example, a DOM fuzzer was run on Chrome and Firefox and found segfaults, but the same fuzzer run for the same time on Servo found none.

I grant you deadlocks. But “Rust prevents data races” is true.

                        2. 2

                          I’m just going to link my previous commentary: https://lobste.rs/s/7b0gab/how_rust_s_standard_library_was#c_njpoza

                  1. 2

                    Yet another reason to try for BSD jails and ansible.

                    1. 2

If only any of the BSDs had an init system with declarative units, instead of the hack that is shell scripts.

                      1. 1

                        Nobody is preventing you from installing and using one

                        1. 2

Yes, and nobody is preventing me from using Linux with systemd either, which I’d rather do until they fix this. If they never fix it, that’s fine too.

                          1. 1

How many service units are you writing on a daily basis that make systemd a necessity for your use case? Do Linux packages typically ship without service units and force you to write them yourself?

                            1. 1

Well, none of the fun parts even come from the official repos. Plus there’s of course all the internally developed stuff – somebody needs to write init scripts or unit files for those. Getting a unit file 95% correct on the first try is possible.

                              You may be right that systemd is not necessary for anything I do. It’s just a whole lot more convenient than the alternatives.
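(For contrast with init scripts, a minimal unit file really is just a few declarative lines. A sketch – `myapp` and its path are invented names:)

```ini
[Unit]
Description=My internal app (hypothetical example)
After=network.target

[Service]
ExecStart=/usr/local/bin/myapp
Restart=on-failure

[Install]
WantedBy=multi-user.target
```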

                    1. 1

                      Is this still vulnerable to watermark attacks?

                      I’ll just stick to GELI encrypting all my drives…

                      edit: looks like yes https://lists.freebsd.org/pipermail/freebsd-current/2018-August/070860.html

                      1. 2

                        “I’ve had voters who have overnighted to our jurisdiction and paid over $50 to do so, and it still didn’t get back to us by voting day.”

                        I’d like to see more than just this anecdote as a reason that absentee ballots are not a solution.

                        1. 3

This sounds like rather a big deal; why is this in these old Intel CPUs?

                          1. 12

VIA is an independent manufacturer of computers that sold low-power, crypto-accelerated x86 chips designed by the third x86 vendor that’s still around: Centaur. Here’s a video about them. They worked on processor verification with ACL2. Jared Davis, who contributed to that work, later did a “self-verifying” prover called Milawa that bootstrappers should find inspiring.

So, an interesting company and people. Being low-wattage with x86 compatibility got them used in a lot of embedded applications. The VIA Artigos were also among the only boxes you could get for $300 with a tiny form factor and a crypto accelerator (incl. TRNG). VIA stayed a struggling also-ran in x86, but with many users.

                            1. 6

                              VIA is not Intel

                            1. 1

                              Do they still refuse to accept patches for the BSDs?

                              1. 2

                                Does anyone here use Bitwarden? I didn’t know about it, but it looks really attractive.

                                1. 3

                                  Yes, it’s awesome. It’s also the only password manager that has a Firefox for Android extension (to my knowledge).

                                  1. 3

                                    Yes. It has some rough edges – I wish syncing was better – but it’s working great.

My syncing issue has to do with the fact that everything has its own copy of the data: desktop app, mobile app, browser plugins, etc. When you make a change, they do not all sync immediately. You can have a Bitwarden app or plugin that is days behind, so you have to go to settings and do a manual sync. Very annoying, but not a deal breaker.

                                    1. 2

                                      I use the venerable pass. It has none of this mobile mumbojumbo or autosync frills the kids today are talking about.

                                      It’s so simple and lean, I never thought pass git pull would be annoying.

                                      I would appreciate a mobile UI sometimes, though. A Sailfish client. But that’s not a dealbreaker either.

                                      Maybe I could hook the missus up with Rubywarden, though. Pass would be too much for her.

Addendum: There appears to be a QML frontend on OpenRepos, found through Storeman. Not a complete client, but I have to give it a spin :)

                                      1. 1

                                        There is definitely a pass app for android. I’m not sure about iOS.

                                        1. 1

                                          As someone who uses a mobile and two desktops, having passwords being synced across devices is a must-have. It’s just too much of a pain to remember to copy new passwords from my phone to machine A, then B, and vice-versa.

                                          1. 1

                                            Home desktop, work desktop, work laptop, work macOS laptop and hopefully soon two Sailfish mobiles running pass.

                                            Made git pull a habit, not a chore, but ymmv.

                                      2. 2

                                        yeah, it’s open source and possible to run self-hosted as well.

check out the discussion from a topic a few days ago; I’d just be copying from there:

                                      1. 10

                                        The security researcher also recommended we consider using GPG signing for Homebrew/homebrew-core. The Homebrew project leadership committee took a vote on this and it was rejected non-unanimously due to workflow concerns.

This is incredibly sad and makes me wonder what part of the workflow would have been impacted. Git signs my commits automatically once I’ve entered my passphrase a single time, thanks to gpg-agent.
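(For anyone who hasn’t set this up: it’s a one-time git configuration. A sketch – the key ID `ABCD1234` is a placeholder, not a real key:)

```shell
# Tell git which GPG key to sign with (ABCD1234 is a placeholder key ID)
git config --global user.signingkey ABCD1234

# Sign every commit automatically from now on
git config --global commit.gpgsign true

# gpg-agent then caches the passphrase after the first prompt,
# so subsequent commits are signed without interaction
```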

                                        1. 3

                                          They have a bot which commits hashes for updated binary artifacts. If all commits needed to be signed, it’d need an active key, and now you have a GPG key on the Jenkins server, leaving you no better off.

                                          1. 2

But gpg cannot work with multiple smartcards at the same time, so maybe that’s a reason for some people. Either way, there are simpler ways to deal with signing than gpg.

                                            1. 1

                                              GPG signing wouldn’t have fixed this vulnerability as such, since presumably the same people not thinking about the visibility of the bot’s token would have equally failed to think about the visibility of the bot’s hypothetical private key

                                            1. 1

                                              Will the new DNS over HTTPS lose the hosts file records? I also use a feature of systemd which makes any subdomain of localhost point to localhost.

                                              1. 3

                                                I assume that Firefox will ignore the local system resolver entirely, so this feature would no longer work for you unless you turn this off in Firefox.
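(For reference, Firefox exposes this as the Trusted Recursive Resolver prefs in about:config. Roughly – values as I understand them, worth double-checking against Mozilla’s docs:)

```
network.trr.mode = 0  # off (default): use the OS resolver, hosts file included
network.trr.mode = 2  # DoH first, fall back to the OS resolver on failure
network.trr.mode = 3  # DoH only: hosts-file and local-resolver tricks stop working
```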

                                              1. 17

                                                An interesting aspect of this: their employees’ credentials were compromised by intercepting two-factor authentication that used SMS. Security folks have been complaining about SMS-based 2FA for a while, but it’s still a common configuration on big cloud providers.

                                                1. 11

                                                  What’s especially bugging me is platforms like twitter that do provide alternatives to SMS for 2FA, but still require SMS to be enabled even if you want to use safer means. The moment you remove your phone number from twitter, all of 2FA is disabled.

                                                  The problem is that if SMS is an option, that’s going to be what an attacker uses. It doesn’t matter that I myself always use a Yubikey.

                                                  But the worst are services that also use that 2FA phone number they got for password recovery. Forgot your password? No problem. Just type the code we just sent you via SMS.

                                                  This effectively reduces the strength of your overall account security to the ability of your phone company to resist social engineering. Your phone company who has trained their call center agents to handle „customer“ requests as quickly and efficiently as possible.

                                                  update: I just noticed that twitter has fixed this and you can now disable SMS while keeping TOTP and U2F enabled.

                                                  1. 2

                                                    But the worst are services that also use that 2FA phone number they got for password recovery. Forgot your password? No problem. Just type the code we just sent you via SMS.

                                                    I get why they do this from a convenience perspective, but it bugs me to call the result 2FA. If you can change the password through the SMS recovery method, password and SMS aren’t two separate authentication factors, it’s just 1FA!

                                                    1. 1

                                                      Have sites been keeping SMS given the cost of supporting locked out users? Lost phones are a frequent occurrence. I wonder if sites have thought about implementing really slow, but automated recovery processes to avoid this issue. Going through support with Google after losing your phone is painful, but smaller sites don’t have a support staff at all, so they are likely to keep allowing SMS since your mobile phone number is pretty recoverable.

                                                      1. 1

In the case of the many accounts that are now de facto protected by nothing but a single, easily hackable SMS, I’d much rather lose access than risk somebody else getting it.

If there were a way to tell these services and my phone company that I absolutely never want to recover my account, I would do it in a heartbeat.

                                                      2. 1

                                                        This effectively reduces the strength of your overall account security to the ability of your phone company to resist social engineering. Your phone company who has trained their call center agents to handle „customer“ requests as quickly and efficiently as possible.

                                                        True. Also, if you have the target’s phone number, you can skip the social engineering, and go directly for SS7 hacks.

                                                      3. 1

                                                        I don’t remember the details but there is a specific carrier (tmobile I think?) that is extremely susceptible to SMS interception and its people on their network that have been getting targeted for attacks like this.

                                                        1. 4

                                                          Your mobile phone number can relatively easily be stolen (more specifically: ported out to another network by an attacker). This happened to me on T-Mobile, but I believe it is possible on other networks too. In my case my phone number was used to setup Zelle and transfer money out of my bank account.

                                                          This article actually provides more detail on the method attackers have used to port your number: https://motherboard.vice.com/en_us/article/vbqax3/hackers-sim-swapping-steal-phone-numbers-instagram-bitcoin

                                                          1. 1

                                                            T-Mobile sent a text message blast to all customers many months ago urging users to setup a security code on their account to prevent this. Did you do it?

                                                            Feb 1, 2018: “T-Mobile Alert: We have identified an industry-wide phone number port out scam and encourage you to add account security. Learn more: t-mo.co/secure”

                                                            1. 1

                                                              Yeah I did after recovering my number. Sadly this action was taken in response to myself and others having been attacked already :)

                                                      1. 2

                                                        I stopped using iTerm2 a long time ago because of the latency. There is a drastic difference between iTerm2 and stock Terminal. If this fixes it, I’m back on board.

                                                        edit: WOW this is crazy fast!!

                                                        1. 1

                                                          Most users will see improved latency

                                                          Sounds like it - can’t say I’ve had any problems with latency though…

                                                        1. 4

                                                          The HipChat MacOS client is only using 40MB of RAM. I don’t want Slack’s resource usage. 👎

                                                          1. 5

                                                            Might I suggest https://github.com/wee-slack/wee-slack. At $WORK, we have a bunch of folks using it for Slack integration and it’s been a pretty good UX.

                                                          1. 1

                                                            No. Python interpreter startup time is too slow for these tools. The amount of wasted CPU time worldwide from scripts, monitoring tools, etc executing these commands rewritten as python is simply unforgivable.

                                                            1. 1

I install glances on every physical host and most of the VMs I manage. It does have quite a few dependencies, but almost all of them are optional depending on what you need. It works great. It’s one of the first things I go to when troubleshooting a problem. Speed is literally not an issue.

                                                            1. 3

                                                              I have been waiting for this native support for a long time.

                                                              1. 6

                                                                I want to know where Microsoft and Apple stand on AV1. I remember when all the major players were duking it out over WebM or H.264; H.264 won (and Mozilla and Opera, who were pushing WebM, got pressured into adding patent-encumbered H.264 into their browsers by market forces).

                                                                AFAICT, that happened for three big reasons:

                                                                1. Apple and Microsoft implemented H.264 and refused to implement WebM. In retrospect I guess that made more sense for Microsoft since they were still in “we blindly hate anything with the word ‘open’ in it” mode. Apple made less sense to me.
                                                                2. Google promised that Chrome would drop H.264 support, but never followed through. At the time <video> was new enough, and Chrome had enough market share, that I really think they would have been able to turn the tide and score a victory for WebM if they had been serious. But apparently they weren’t.
                                                                3. H.264 had hardware partnerships which meant decoding was often hardware-accelerated - especially important for mobile performance. But I have no idea where I know that from so Citation Needed™.

                                                                I dunno, I think there’s hope for AV1 but that a lot could still go wrong. Apple I am particularly worried about due to iOS’ market share. If they refuse to implement the standard, it could seriously harm or even kill widespread adoption. But OTOH, maybe I’m just a pessimist :P

                                                                1. 6

A few months ago, Apple announced that they had joined the AV1 group, and Microsoft was a founding member. That makes me much more optimistic than for previous open formats.

                                                                  I think the MPEG-LA really fucked things up with the minefield they set up for H.265.

                                                                  https://www.cnet.com/google-amp/news/apple-online-video-compression-av1/

                                                                  https://en.m.wikipedia.org/wiki/Alliance_for_Open_Media

                                                                  1. 5

                                                                    Apple and Microsoft implemented H.264 and refused to implement WebM. In retrospect I guess that made more sense for Microsoft since they were still in “we blindly hate anything with the word ‘open’ in it” mode. Apple made less sense to me.

Apple and Microsoft are both large corporations, and thus hydras; what one head says doesn’t necessarily reflect another. Still, they both have a stake in three awful games: trying to be a monopoly without appearing to be one to regulators; heavy investment in software patents (a lose-lose game for everyone, but there’s a sunk-cost-fallacy problem here); and heavy investment in, and affiliation with, proprietary media companies.

I think the rest of your analysis on why h.264 made it in is right in general. Also, Cisco did the “here’s an open source h.264 implementation except if you modify it we might sue you for patent violations, so it’s not free software in practice” thing, and that was enough for various parties to check a box on their end, sadly.

                                                                    BTW, I sat in on some of the RTCWeb IETF meetings where the battle over whether or not we would move to a royalty free default video codec on the web would happen then. I watched as a room mostly full of web activists not wanting patent-encumbered video to overtake the web were steamrolled by a variety of corporate representatives (Apple especially). A real bummer.

                                                                    I’d like AV1 to do better… maybe it can by being actually better technology, and reducing a company’s bottom line by having a smaller bandwidth footprint, as it looks like they’re aiming for here. Dunno. Would love to hear more about strategy there.

                                                                    1. 1

                                                                      Also, Cisco did the “here’s an open source h.264 implementation except if you modify it we might sue you for patent violations, so it’s not free software in practice” thing, and that was enough for various parties to check a box on their end, sadly.

                                                                      What exactly was happening there? IIRC Cisco basically said “we’ll eat the licensing costs on this particular implementation to fix this problem” so Mozilla/Opera(?) ended up using that to avoid the fees. Is that not what happened?

                                                                      I definitely remember Mozilla attempting to hold out for as long as possible. Eventually it became clear that Firefox couldn’t compete in the market without H.264 and that’s when the Cisco plugin went in.

                                                                      I watched as a room mostly full of web activists not wanting patent-encumbered video to overtake the web were steamrolled by a variety of corporate representatives (Apple especially).

                                                                      This is super gross.

                                                                    2. 3

                                                                      Apple made less sense to me

                                                                      Apple is extremely sensitive to things that affect battery life of iOS devices. H.264 can be decoded in hardware on their devices. WebM would have to be decoded in software, so supporting it would be a worse experience for device reliability (battery would drain really fast on sites with lots of WebM content).

                                                                    1. 9

                                                                      There are arguments for and against HTTPS for static sites, but what I’ve seen is Troy (and to some extent Scott) making valid points, then talking past other people online (who in turn talk past them). Neither side budges and it descends into the usual online bickering.

                                                                      There are good reasons to implement HTTPS on a static website, as illustrated by Troy. HTTPS isn’t the only way to secure a transport layer, nor does HTTPS magically stop 100% of man-in-the-middle attacks. There are plenty of attacks on TLS (which is why we use TLS 1.3 and not 1.0), plenty of problems with the openssl monoculture and a web of trust broken by companies and governments to contend with.

It makes sense to advocate for people to use HTTPS where practical, to protect their sites and users from casual interception. It does not make sense to resort to dark UX patterns, nor to mandate HTTPS. If browsers mandate HTTPS, or warn users that anything short of HTTPS with the full WoT is insecure, then they are working against decentralization. The WoT is a centralized trust layer on top of a decentralized protocol.

                                                                      In the 90s, there were many that advocated for mandatory IPSEC. Indeed, IPSEC is integrated into the IPv6 spec. Better solutions came along, and now IPSEC is losing ground to TLS VPNs.

Protecting user data is a good idea. Pushing users into a fully centralized web, open to abuse by governments and corporations, is a bad idea. There are alternatives to TLS out there for people who want them, and we can even build better decentralized alternatives, but if we mandate a move to a centralized web, there will be too many incentives to stop anyone moving off it.

                                                                      1. 7

                                                                        IPSEC is integrated into the IPv6 spec.

                                                                        IPSEC was removed from the IPv6 spec a long time ago. Around 2011 it changed from a MUST to a SHOULD, and now it isn’t mentioned at all anymore in the latest RFC that combined all of the various RFCs comprising IPv6: https://tools.ietf.org/html/rfc8200

                                                                        I have been using IPv6 in some form since at least 2004 and have never once seen it coupled with IPSEC.

                                                                        RFC 4294 - IPv6 Node Requirements: IPsec MUST

                                                                        RFC 6434 - IPv6 Node Requirements: IPsec SHOULD

                                                                        1. 1

                                                                          Thanks for pointing that out. I didn’t know that it’d been removed.

                                                                      1. 2

My issue with most implementations of 2FA is that they rely on phones and MMS/SMS, which is beyond terrible and often less secure than no 2FA at all, as well as placing you at the mercy of a third-party provider of which you are a mere customer. Fail to pay your bill because of hard times, or, worse yet, face an adversary inside the provider or a government with influence over the provider, and all bets are off: your password is going to get reset or your account ‘recovered’, and there isn’t much you can do.

                                                                        For these reasons, the best 2FA, IMO, is a combination of “something you have” - a crypto key - and “something you know” - the password to that key. Then you can backup your own encrypted key, without being at the mercy of third parties.

Of course, if you lose the key or forget the password then all bets are off - but that’s much more acceptable to me than the alternative.

                                                                        (FYI - I don’t use Github and I’m not familiar with their 2FA scheme, but commenting generally that most 2FA is done poorly and sometimes it’s better not to use it at all, depending on how it’s implemented.)

                                                                        1. 4

                                                                          (FYI - I don’t use Github and I’m not familiar with their 2FA scheme, but commenting generally that most 2FA is done poorly and sometimes it’s better not to use it at all, depending on how it’s implemented.)

                                                                          GitHub has a very extensive 2FA implementation and prefers Google Authenticator or similar apps as a second factor.

                                                                          https://help.github.com/articles/securing-your-account-with-two-factor-authentication-2fa/

                                                                          1. 2

                                                                            I don’t use Google’s search engine or any of their products nor do I have a Google account, and I don’t use social media - I have no Facebook or Twitter or MySpace or similar (that includes GitHub because I consider it social networking). Lobste.rs is about as far into ‘social networking’ as I go. Sadly, it appears that the GitHub 2FA requires using Google or a Google product - quite unfortunate.

                                                                            1. 9

                                                                              You can use any app implementing the appropriate TOTP mechanisms. Authenticator is just an example.

                                                                              https://help.github.com/articles/configuring-two-factor-authentication-via-a-totp-mobile-app/
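                                                                              To illustrate the point, TOTP (RFC 6238) is just an HMAC-SHA1 over a time counter, so any compliant app interoperates; here’s a minimal sketch in Python using only the standard library (the secret shown is the RFC’s published test value, not anything GitHub-specific):

                                                                              ```python
                                                                              import base64
                                                                              import hashlib
                                                                              import hmac
                                                                              import struct
                                                                              import time

                                                                              def totp(secret_b32, t=None, step=30, digits=6):
                                                                                  """RFC 6238 TOTP: HOTP (RFC 4226) applied to floor(unix_time / step)."""
                                                                                  key = base64.b32decode(secret_b32.upper())
                                                                                  counter = int((time.time() if t is None else t) // step)
                                                                                  digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
                                                                                  offset = digest[-1] & 0x0F  # dynamic truncation (RFC 4226 section 5.3)
                                                                                  code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
                                                                                  return str(code % 10 ** digits).zfill(digits)

                                                                              # RFC 6238 test vector: ASCII secret "12345678901234567890" at T=59
                                                                              print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", t=59))  # -> "287082"
                                                                              ```

                                                                              Since the whole scheme is this small, nothing about it ties you to Google - Authenticator, FreeOTP, a password manager, or a hardware token will all produce the same codes from the same shared secret.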

                                                                              1. 5

                                                                                Google Authenticator does not require a Google account, nor does it connect with one in any way so far as I am aware.

                                                                                Github also offers U2F (Security Key) support, which provides the highest level of protection, including against phishing.

                                                                                1. 3

                                                                                  This is very good to know - thank you for educating me. I only wish every service gave these sort of options.

                                                                                2. 1

                                                                                  You can also use a U2F/FIDO dongle as a second factor (with Chrome or Firefox, or the safari extension if you use macOS). Yubikey is an example, but GitHub has also released and open sourced a software U2F app

                                                                              2. 0

                                                                                My issue with most implementations of 2FA is that they rely on phones and MMS/SMS which is beyond terrible and is often less secure than no-2FA at all

                                                                                A second factor is never less secure than one factor. Please stop spreading lies and FUD. The insecurity of MMS/SMS is only a concern if you are being targeted by someone with the resources to physically locate you and intercept your messages with dedicated equipment, or to socially engineer your cellular provider into transferring your service to their phone/SIM card.

                                                                                2FA with SMS is plenty secure to stop script kiddies or anyone with compromised passwords from accessing your account.

                                                                                1. 1

                                                                                  I happen to disagree completely. This is not lies nor FUD. This is simple reality.

                                                                                  When the second factor is something that is easily recreated by a third party, it does not enhance security. Since many common “two-factor” methods allow resetting a password using only SMS/MMS, the issue should be quite apparent.

                                                                                  If you either do not believe or simply choose to ignore this risk, you do so at your own peril - but to accuse me of lying or spreading FUD only shows your shortsightedness here, especially with all of the recent exploits which have occurred in the wild.

                                                                                  1. 1

                                                                                    Give me an example of such a vulnerable service with SMS 2FA. I will create an account and enable 2FA. I will give you my username and password and one year to compromise my account. If you succeed I will pay you $100 USD.

                                                                                    1. 1

                                                                                      We both know $100 doesn’t even come close to covering the necessary expenses or risks of such an attack - $10,000 or $100,000 is a much different story - and it’s happened over and over and over.

                                                                                      For example, see:

                                                                                      Just because I’m not immediately able to exploit your account does not mean that it’s wise to throw best-practices to the wind.

                                                                                      This is like deprecating MD5 or moving away from 512-bit keys - while you might not be able to immediately crack such a key or find a collision, there were warnings in place for years which were ignored - until the attacks become trivial, and then it’s a scramble to replace vulnerable practices and replace exploitable systems.

                                                                                      I’m not sure what there is to gain in trying to downplay the risk and advising against best practices. Be part of the solution, not the problem.

                                                                                      Edit: Your challenge is similar to: “I use remote access to my home computer extensively - I’ll switch to using Telnet for a month and pay you $100 when you’ve compromised my account.”

                                                                                      Even if you can’t, that doesn’t justify promoting insecure authentication and communication methods. Instead of arguing about the adequacy of SMS 2FA long after it’s been exposed as weak, we should instead be pushing for secure solutions (as GitHub already has, and as was mentioned in the threads above).

                                                                                      I also wanted to apologize for the condescending attitude in my previous response to you.

                                                                                      1. 1

                                                                                        So you’re admitting that SMS 2FA is perfectly fine for the average person unless they’ve been specifically targeted by someone who has a lot of money and resources.

                                                                                        Got it.

                                                                                        1. 1

                                                                                          DES, MD5, and unencrypted Telnet connections are perfectly fine for the average person too - until they are targeted by someone with modest resources or motivation.

                                                                                          So, yes, I admit that. It still is no excuse to refuse best practices and use insecure tech because it’s “usually fine”.

                                                                                          1. 1

                                                                                            Please study up on Threat Models. Grandma has a different Threat Model than Edward Snowden. Sure, Grandma should be using a very secure password with a hardware token for 2FA, but that is not a user friendly or accessible technology for Grandma. Her bank account is significantly more secure with SMS 2FA than nothing.

                                                                                            1. 1

                                                                                              That actually depends on how much money is in Grandma’s bank account. And if SMS can be used for a password reset, I’d highly recommend grandma avoid it - it simply is not safer than using a strong unique password. With the prevalence of password managers, this is now trivial.

                                                                                              While I don’t have any grandmas left, I still have a mother in her 80’s, and, bless her heart, she uses 2FA with her bank - it is integrated into the banking application itself that runs on the tablet I bought her, and does not rely on SMS. At the onset of her forgetful old age she started using the open-source “pwsafe” program to generate and manage her passwords. She also understands phishing and similar risks better than most of the kids these days simply because she’s been using technology for many years. She knows more of the basics, because schools seem to no longer teach them outside of a computer science curriculum.

                                                                                              These days, being born in the 1930s or 1940s means that you would have entered college right at the first big tech boom and the introduction of widescale computing - I find that many “grandma/grandpa” types actually have a better understanding of technology and its risks than millennials.

                                                                                              I do understand Threat Models, but this argument falls apart when it’s actually easier to use strong unique passwords than weak ones - and the archetype of the technology-oblivious senior, clinging to their fountain pens and their wall-mounted rotary phones, is, as of about ten years ago, a thing of the past.

                                                                                              1. 1

                                                                                                More posts on SMS 2FA:

                                                                                                https://pages.nist.gov/800-63-3/sp800-63b.html#pstnOOB

                                                                                                https://www.schneier.com/blog/archives/2016/08/nist_is_no_long.html

                                                                                                NIST is no longer recommending two-factor authentication systems that use SMS, because of their many insecurities. In the latest draft of its Digital Authentication Guideline, there’s the line: [Out of band verification] using SMS is deprecated, and will no longer be allowed in future releases of this guidance.

                                                                                                Since NIST came out strongly against SMS 2FA years ago, it should be fairly straightforward to cease any recommendations for its use at this point.

                                                                              1. 1

                                                                                Why is there a bandwidth limit on the outq? That shouldn’t be necessary. Maybe it’s just the implementation in pf, but it’s totally not required in FreeBSD’s IPFW.

                                                                                The point is that for incoming traffic you cannot control the flow of packets, so you have to fake a lower-bandwidth link by artificially dropping packets to slow down the sender. For outbound traffic you have full control of the sending rate and the ability to detect congestion early, so limiting your max outbound bandwidth should not be required.

                                                                                1. 1

                                                                                  Your home router doesn’t know the uplink bandwidth of your cable modem connection to your ISP. So you have to dial it in for the FQ-CoDel algorithm to know how to achieve the right send rate to flush the buffers quickly and fairly enough.
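
                                                                                  For reference, on OpenBSD it’s the `flows` keyword in a pf.conf queue definition that turns on FQ-CoDel; a sketch with made-up numbers (the interface name and rates are assumptions - you dial the bandwidth in just under your measured uplink):

                                                                                  ```
                                                                                  # /etc/pf.conf sketch - em0 and the rates are hypothetical
                                                                                  # "flows" enables FQ-CoDel; bandwidth sits just below the real uplink rate
                                                                                  queue outq on em0 flows 1024 bandwidth 18M max 20M qlimit 1024 default
                                                                                  ```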

                                                                                  1. 1

                                                                                    Your home router doesn’t know the uplink bandwidth of your cable modem connection to your ISP

                                                                                    That doesn’t matter. The firewall has the ability to detect congestion of the outbound traffic and apply queueing/shaping to immediately control the sending rate and prevent buffer bloat. It shouldn’t need a bandwidth limitation. That’s only required for inbound traffic where you cannot control the sending rate, so you fake a smaller pipe with reasonable overhead (~10%) to drop packets early to slow down the sender and prevent severe congestion/buffer bloat.

                                                                                    All I can tell you is that I am not restricting any bandwidth for outbound with FQ-CoDel via DummyNet+IPFW and I get passing test results every time. I only have to restrict on the incoming. So something is different between your pf implementation and my DummyNet implementation. Does the OpenBSD pf implementation include ECN (Explicit Congestion Notification)?

                                                                                    # dummynet pipes: pipe 1 (outbound) is unshaped, pipe 2 (inbound) is capped at 220 Mbit/s
                                                                                    ipfw pipe 1 config delay 0
                                                                                    ipfw pipe 2 config bw 220Mbit/s delay 0
                                                                                    # attach an fq_codel scheduler to each pipe
                                                                                    ipfw sched 1 config pipe 1 type fq_codel
                                                                                    ipfw sched 2 config pipe 2 type fq_codel
                                                                                    ipfw queue 1 config sched 1
                                                                                    ipfw queue 2 config sched 2
                                                                                    # outbound traffic through queue 1, inbound through queue 2
                                                                                    $cmd 00100 queue 1 ip from any to any out via $pif
                                                                                    $cmd 00101 queue 2 ip from any to any in via $pif
                                                                                    

                                                                                    Here is my test result: http://www.dslreports.com/speedtest/35535303