1. 5

    Some devs are not very happy with that new notary thing.

    1. 4

      It only applies to binaries downloaded with a browser. Anything from a game launcher or similar is already unaffected.

      1. 1

        Isn’t it related to every executable (and kext) that is being built by every developer?

        1. 3

          No, it’s only related to binaries marked as quarantined. It’s up to the transferring application to set that extended attribute. Compilers don’t. Browsers do.

          Stuff you build for yourself is unaffected. Same goes for whatever pre-built binary brew downloads.
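
          You can see this for yourself: the quarantine flag is just an extended attribute on the downloaded file. A minimal sketch (the path is a made-up example):

              # show the quarantine attribute a browser attached to a download
              xattr -p com.apple.quarantine ~/Downloads/SomeApp.app

              # strip it; Gatekeeper then treats the binary like a local build
              xattr -r -d com.apple.quarantine ~/Downloads/SomeApp.app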

          1. 1

            Compilers don’t. Browsers do.

            The Notary service does it. Not the browser. The browser (well, only Safari AFAIK) leaves a note so that the OS can tell you where a document, file or binary came from. But that is not the notarization process. That happens on Apple’s servers. You have to send your app to the notarization API and you will get back a binary that has some special signed metadata attached.

            If it were as simple as the browser adding this then every piece of malware would be doing that.

            Note that this does not cost money. You don’t need a $100 developer subscription. All you need is an Apple ID.
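
            You can also ask Gatekeeper directly what it thinks of a binary; a quick sketch (the path again is made up):

                # assess an app the way Gatekeeper would on launch; prints the
                # verdict and its source (e.g. "Notarized Developer ID")
                spctl --assess --verbose ~/Downloads/SomeApp.app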

            1. 1

              I was talking about setting the quarantine xattr. Unless an executable has that flag set, the OS will execute it even when it’s not signed.

              Of course notarization has to be done by Apple and not each user individually. That was the main goal of the change.

        2. 1

          I don’t get it, how is downloading a binary with a browser different than a game launcher?

          1. 3

              A game launcher (e.g., Steam) is verified. It’s now Steam’s job to police the contents of their platform. If they fail, Apple can blacklist Steam for everyone at a moment’s notice, so Valve is incentivized not to ship malware through Steam.

            1. 2

              Browsers set the gatekeeper flag, game launchers don’t. It sounds stupid.

              1. 0

                Browsers don’t set any flags.

                1. 2

                  http://ix.io/1Y1C

                  I beg to differ

                  1. 1

                    Yes, but this is not used by Gatekeeper to decide whether or not to run a binary. This is just for the notification you see in the Finder when you open the app. Notarization is a signing process. If it were as simple as adding some metadata to a file, then every piece of malware would be doing that.

        1. 1

            This is the first I’ve heard of the signing requirement. If it will actually cost $99/yr to distribute software for macOS starting in January 2020 (two months!!!), I’ll need to be looking into a new laptop very soon. :/

          1. 1

              It already costs $99/yr to be able to sign for distribution, and it has been like that for a few years now (it sucks, but it’s nothing new). If you don’t sign, users get scary warnings and need to perform a couple of unobvious clicks to bypass them.

              The only thing that changed recently is the technicalities of the signing method. It’s always been a PITA, now it’s PITA++ (instead of offline-ish binary signing with a cert, you now need extra compiler flags and the signing tool uploads the binary to Apple).

            1. 1

                I have read that notarization is only required if the app is signed. Unsigned apps will still run like in 10.14, apparently.

              1. 3

                  The only change compared to how it worked before is that now Apple is involved in signing the binary at the time the signing happens. Before this change, Apple was only involved when you got or renewed your paid membership.

                  In order to sign in a way that keeps Gatekeeper happy, you needed a paid dev program membership before this change, and after this change you still do.

                  If you are opposed to this rule (and you do not want to tell Gatekeeper to still execute the application despite it not being signed) to the point where you want to leave the platform, then you should have done so when Gatekeeper was introduced back in OS X 10.7 Lion.

                1. 1

                    Unsigned apps will always run, but only for users who went into the settings and changed the gatekeeper mode…

                  1. 1

                      Or right-clicked on the file and chose “open”. Which is a really small barrier that stops only the most clueless users, who are also the most likely to fall prey to malware.

              1. 13

                So a dev who never developed for macOS in the first place has decided to continue not doing so, citing the following reasons:

                • Apple dropping 32-bit support in a forthcoming OS release
                • Apple requiring a $99 yearly fee for app signing (e.g. it requires a developer account)
                • A Steam report (from 4 years ago!) that at that time macOS users submitted more support issues than Windows users [1]

                Um.. Ok?
                For sure, definitely do whatever you need to do for your business and personal circumstances.


                [1]: No reasoning behind these numbers is presented. Who knows, maybe the cause of the higher percentage of support tickets 4 years ago was people who bought a game on Steam thinking it was playable on macOS, but it turned out it wasn’t? Most macOS Steam games at that time were ports from Windows (not sure this is even true, but maybe?), and framework support (Unity, etc.) for macOS was poor back then. Also strange that Linux users accounted for a 1% share, but submitted… 30% of support requests? That’s pretty wild!

                1. 6

                  Apple requiring a $99 yearly fee for app signing (e.g. it requires a developer account)

                  If $99 is too expensive to cover with sales of the game, then the game might indeed not be worth releasing, much less worth the time spent developing it (which probably costs 10,000s of times the developer program fee)

                  Also, those $99 give you access to the iOS App Store, which nobody seems to have a problem developing for (and whose review policy is much worse than notarization, which is fully automated and takes less than 10 minutes)

                  The original post feels like it’s trying to justify a business decision by giving every possible reason except the one that’s actually true: the Mac gaming market is too small and the game’s sales will likely not be enough to recoup the development cost (which of course dwarfs that membership fee)

                  1. 3

                    If $99 is too expensive to cover with sales of the game, then the game might indeed not be worth releasing, much less worth the time spent developing it (which probably costs 10,000s of times the developer program fee)

                    It’s not about $99 being too expensive. It’s about $99 being too expensive for a relatively small platform that requires a lot more attention (and therefore has a much smaller profit margin) than the other platforms. This means that releasing for Apple products only becomes profitable when your strategy shifts from “surviving on” to “dominating” the market. These are two completely different things, and from a business perspective the author is certainly right.

                    It’s also not just about the $99 a year. It’s also about having to buy specific Apple hardware to test and sign your code on, which quickly blows up to a monthly salary for a small company. However, this in no way means that the game the author is releasing isn’t worth it. That is simply a logical fallacy.

                    Also, those $99 give you access to the iOS App Store, which nobody seems to have a problem developing for (and whose review policy is much worse than notarization, which is fully automated and takes less than 10 minutes)

                    The author’s main complaint with notarization seems to be that it limits people’s freedom. The author is clearly serving a niche market (roguelike games) and is simply frustrated by the barriers to entry that Apple has erected. Because of them, some games will not be available to everyone. This flies directly against what the internet promised us in the ’90s and is just one extra point that makes the author think “nope, too risky, not doing that”.

                    I get where the author is coming from, because I have been forced to make the same choice in the past, albeit in a very different market.

                    The original post feels like it’s trying to justify a business decision by giving every possible reason except the one that’s actually true: the Mac gaming market is too small and the game’s sales will likely not be enough to recoup the development cost (which of course dwarfs that membership fee)

                    If there is anything you should take from this post, it is not that it is a way to justify a business decision, but rather that business decisions are not always taken based on fully rational arguments.

                  2. 4

                    Unity worked fine way before 4 years ago. I remember playing Gone Home when it came out in 2013 on a MacBook.


                    Signing fees are definitely one of the worst aspects of proprietary ecosystems, especially when they apply to non-commercial projects, which they always do: the fees aren’t even per project, they’re per “developer account”.

                    1. 1

                      Thanks for the data point. I personally couldn’t remember any games I played on macOS back then that may have used Unity.

                      Somewhat related – as I recall, macOS “Mountain Lion”/10.8 was released in 2012, and I found early versions of 10.8, as well as the previous release (“Lion”/10.7), to be pretty damned buggy.
                      Eventually 10.8 got pretty solid late into the “dot releases”, but I recall it being /pretty rough/ for a while. ugh. :/

                      1. 1

                        I honestly don’t remember any bugginess, from Leopard to whatever it was in 2016-ish when I stopped using macOS.

                  1. 24

                    I see too many people rolling PHP-FPM only to show an IP address to the client. So, I wanted to share a simpler method which, I hope, can save you some time.
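
                    In short, the idea is just a few lines of nginx config, something like this (the server name and port are placeholders):

                        server {
                            listen 80;
                            server_name ip.example.com;

                            location / {
                                # reply with the client address as plain text
                                default_type text/plain;
                                return 200 "$remote_addr\n";
                            }
                        }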

                    1. 3

                      This seems very elegant, but just to be thorough are there any drawbacks/tradeoffs?

                      1. 4

                        The same T&Cs apply as when using nginx for standard stuff. This will/might be wrong if this nginx sits behind another nginx; then you should look at the X-Forwarded-For header (or whatever it’s called exactly).

                        1. 2

                          Beware if you use another public-facing server in front of nginx. For example, if you have a reverse proxy (HAProxy, for example), the variable $remote_addr can hold the IP address of the proxy, not of the initial HTTP client.

                          1. 4

                            Have a look at the realip module, which allows nginx to set the remote address based on a header set by the frontend proxy, provided the proxy is one you have decided to trust to set correct headers.

                            Doing this over a custom solution in the application has the advantage that all remote-address-based features continue to work unaltered, like GeoIP detection or logging addresses to web log files using the built-in standard formats.
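
                            A minimal sketch of that setup, assuming the frontend proxy sits at 10.0.0.1:

                                # replace $remote_addr with the client address forwarded by
                                # the proxy, but only for requests arriving from that proxy
                                set_real_ip_from  10.0.0.1;
                                real_ip_header    X-Forwarded-For;
                                real_ip_recursive on;

                            After that, $remote_addr (and everything built on it) reflects the original client.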

                        2. 2

                          Yes, this method also has my preference and we have used it for years now. Before that we used PHP (without PHP-FPM) for this, more or less like this:

                              <?php
                              // print the client address as seen by the web server
                              echo $_SERVER['REMOTE_ADDR'] . PHP_EOL;
                          

                          But I was wondering: do you have suggestions for making it output both IPv4 and IPv6 addresses (like https://ip6.nl and others do) without adding additional complexity/dependencies like php (preferably with stock nginx or apache).

                          1. 4

                            To show both IPv4 and IPv6, the client needs to make two separate requests, to two separate domains that are configured differently, one with only an A record and one with only an AAAA record. Any given HTTP request is only going to be coming in on one or the other IP version.

                            ip6.nl makes XHR requests to https://4only.ip6.nl/myip.plp and https://6only.ip6.nl/myip.plp and displays the results on the page, again with JavaScript. While those servers could very well be running the nginx config from the linked article, the ability to show both on the same page is much more complicated, tech-wise.

                            1. 2

                              You might be able to do it with redirects. Have the IPv4 server redirect to the IPv6 server with ?v4=a.b.c.d, and vice versa. Both servers would display both addresses once available.

                              It falls apart if you only have one type of address, since the redirect would be broken, but there’s probably a way around that. Maybe include the single address in the body of the 303, so if the redirect fails to connect you still have the initial IP address you used?

                              1. 3

                                The case where the caller can only connect on one protocol is probably very, very common still.

                            2. 3

                              But I was wondering: do you have suggestions for making it output both IPv4 and IPv6 addresses (like https://ip6.nl and others do) without adding additional complexity/dependencies like php (preferably with stock nginx or apache).

                              The TCP/IP stack of the client decides whether to try to connect using v4 or v6 first. I’ve added two extra DNS entries, one with only a v4 address and one with only a v6 address, on top of one that has both: http://ip.netsend.nl http://ip4.netsend.nl http://ip6.netsend.nl
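
                              So you can query each name separately and combine the answers yourself:

                                  # dual-stack name: answers via whichever family your stack prefers
                                  curl http://ip.netsend.nl

                                  # the single-family names force the choice per request
                                  curl http://ip4.netsend.nl   # A record only: shows your v4 address
                                  curl http://ip6.netsend.nl   # AAAA record only: shows your v6 address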

                            3. 2

                              Nice trick! Thanks!

                              However, you could add links to the relevant nginx documentation pages to your blog post as well.

                            1. 3

                              The arguments given in the article to support the headline claim seem to be:

                              1. JWTs are bigger than minimally-sized cookies.
                              2. JWTs might need similar handling to cookies in one aspect of their operation: ‘You’re going to hit the database (sic) anyway’.

                              1: I’ve never tracked down a performance problem to the increased size of a Cookie header when it is an encoded JWT.

                              2: Hitting a database once (Here this is Redis, but whatever) with a lookup by a guaranteed unique key, in order to check expiry or blacklist, is very cheap. JWTs allow the opportunity to embed more information about the user/session - so there could be many more ‘database’ queries (or microservice calls, or…) saved that would otherwise have been required.

                              Fear of performance problems without measurement isn’t very good engineering, and deciding you ‘might as well’ do something entirely different because of a single point of similarity also seems a poor argument.

                              1. 5

                                Hitting a database once (Here this is Redis, but whatever) with a lookup by a guaranteed unique key, in order to check expiry or blacklist, is very cheap

                                It’s just as expensive as hitting the database (or redis) once to get to the session information. You can aggregate that session information from multiple sources too before storing the session information.

                                The thing is: once you need a central place to check tokens for validity, there is zero benefit over classic sessions with the actual data stored as part of a session record, but now you also carry the responsibility of getting the crypto right in addition to the problem of scaling a central token-validation service.

                                With traditional sessions you only have the scaling problem.
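
                                Either way you pay one round trip per request; a sketch of the two lookups side by side (the key names are made up):

                                    # JWT with a revocation list: verify the signature locally,
                                    # then still ask Redis whether the token was revoked
                                    redis-cli EXISTS jwt:revoked:9f3a2c

                                    # classic session: the same single lookup already returns
                                    # everything you would otherwise embed in the token
                                    redis-cli HGETALL sess:9f3a2c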

                              1. 1

                                So anything which can open the TCP port to PHP-FPM can execute arbitrary code as it? This seems like it might be awkward if you had a setup where multiple different accounts have PHP-FPM processes running on the same machine, binding different TCP ports on localhost.

                                I hope PHP-FPM at least defaults to binding to localhost rather than 0.0.0.0 (I think this is the case, but it’s been a while since I looked). And wouldn’t it be nice if it would bind a unix domain socket rather than a TCP socket, eh?
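
                                For the record, the pool config does accept a socket path in the same directive; a sketch of both options in www.conf (paths and ownership are assumptions):

                                    ; TCP on loopback (the default)
                                    listen = 127.0.0.1:9000

                                    ; or a unix domain socket, gated by filesystem permissions
                                    ;listen = /run/php-fpm/www.sock
                                    ;listen.owner = www-data
                                    ;listen.group = www-data
                                    ;listen.mode = 0660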

                                1. 2

                                  It defaults to binding to 127.0.0.1:9000. I don’t see who would change this to be a public interface. But I guess it’s possible.

                                  https://github.com/php/php-src/blob/master/sapi/fpm/www.conf.in#L36

                                  1. 1

                                    Thanks! That’s pleasingly sensible. :)

                                1. 16

                                  Amusingly the site won’t load for me.

                                  1. 13

                                    ButtCloudFlare literally gatekeeping me with a captcha for using Tor :(

                                    1. 1

                                      You really blame them when their operating requirements include minimizing liability?

                                      1. 6

                                        It’s a terrible default. If someone gets a lot of e.g. bot registrations from Tor, they should have that option, but it’s really stupid for a static document site that cannot receive any interaction from the outside world.

                                        1. 1

                                          Do you think it should scan and interpret all the content on all the pages it serves to decide which get Tor filtering? Or what’s your alternative implementation that achieves the same level of protection with the labor cost of adding some firewall rules? Gotta be something their management would agree with.

                                          1. 7

                                            A more reasonable default would be to not show CAPTCHA until a POST request has happened.

                                            1. 4

                                              Bam! There it is! That could be a great sell, since they’d spend fewer resources on the CAPTCHAs in the first place. Maybe (depends on implementation). I’ll try to remember to mention it when I run into Cloudflare employees. :)

                                    2. 3

                                      It has no A or AAAA records. No MX record either.

                                      $ dig any  stop-gatekeeping.email
                                      
                                      ; <<>> DiG 9.11.5-P4-RedHat-9.11.5-4.P4.fc29 <<>> any stop-gatekeeping.email
                                      ;; global options: +cmd
                                      ;; Got answer:
                                      ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 45133
                                      ;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1
                                      
                                      ;; OPT PSEUDOSECTION:
                                      ; EDNS: version: 0, flags:; udp: 512
                                      ;; QUESTION SECTION:
                                      ;stop-gatekeeping.email.		IN	ANY
                                      
                                      ;; ANSWER SECTION:
                                      stop-gatekeeping.email.	3788	IN	HINFO	"RFC8482" ""
                                      
                                      ;; Query time: 22 msec
                                      ;; SERVER: 8.8.8.8#53(8.8.8.8)
                                      ;; WHEN: Thu Jul 25 03:31:29 EDT 2019
                                      ;; MSG SIZE  rcvd: 72
                                      
                                        1. 2

                                          It’s a domain that was just purchased on the cheap; I guess this will be fixed soon if there really is a bug.

                                          Given how DNS works, there can be delays between the moment a record is published and the moment it becomes visible everywhere.

                                          Maybe it is a cache issue…

                                          To accelerate domain changes on the user’s side, I use a local (dq) cache that points at the root servers, so I can flush my cache myself.

                                          1. 1

                                            It does have A (and AAAA) records:

                                            $ dig +short A stop-gatekeeping.email
                                            104.31.77.194
                                            104.31.76.194
                                            
                                            $ dig +short AAAA stop-gatekeeping.email
                                            2606:4700:30::681f:4cc2
                                            2606:4700:30::681f:4dc2
                                            

                                            it just doesn’t respond to ANY queries by following RFC 8482

                                            1. 1

                                              Now it does. When I tested, it wasn’t responding to either A, AAAA, or ANY.

                                        1. 1

                                          The behavior exhibited by that BitRock installer is completely unacceptable for 2019. With encryption so readily available in all OSes, doing a simple “does the given password match the copy I have in plain text” check is pure craziness.

                                          This was all about offering a bullet point feature and nothing about actually offering a usable feature. Worse: by providing it they left their users with a false sense of security.

                                          1. 4

                                            C: 0.73 new features per year, measured by the number of bullet points in the C11 article on Wikipedia which summarizes the changes from C99, adjusted to account for the fact that C18 introduced no new features.

                                            adjusted to account for the fact that C18 introduced no new features.

                                            And that is why I love C. Yes, it has its problems (of which there are many), but it’s a much smaller, bounded set of problems. The devil I know. Many other languages are so large, I couldn’t even know all of the devils if I tried.

                                            1. 25

                                              The devil I know

                                              if the devil you know is an omnipotent being of unlimited power that generations of warriors have tried to fight and never succeeded because it’s just too powerful, then I would argue that it might be worth trying to choose a different evil to fight.

                                              Even in 2019 70% of security vulnerabilities are caused by memory-safety issues that would just not happen if the world wasn’t running on languages without memory-safety.

                                              1. 1

                                                I don’t think being memory safe is enough for a programming language to be a good C replacement.

                                                1. 18

                                                  No. It’s not enough. But IMHO it’s required.

                                                  1. 4

                                                    … a requirement that C, incidentally, does not fulfil. Now that memory-safe low-level languages have swum into our ken, C is no longer a good C replacement. ;-)


                                                    Edited to add a wink. I meant this no more seriously than pub talk – though I believe it has a kernel of truth, I phrased it that way mainly because it was fun to phrase it that way. There are many good reasons to use C, and and I also appreciate those. (And acknowledge that the joke does not acknowledge them.)

                                                    1. 3

                                                      that is my point.

                                                      1. 2

                                                        Hi, sorry, I spent a lot of time on my edit – everything below the line plus the smiley above it wasn’t there when you replied. Sorry to readers for making this look confusing.

                                                        It is indeed your point, and I agree with it.

                                                  2. 6

                                                    Nobody is arguing that it’s sufficient, but it is necessary.

                                                    If I were to develop a new language today, a language that was as unsafe as C but had lots of shiny new features like ADTs and a nice package manager and stuff, I’d never get traction. It would be ridiculous.

                                                    1. 1

                                                      I don’t know. PHP and C are still pretty popular. You just target those markets with selective enhancements of a language that fits their style closely. ;)

                                                  3. 1

                                                    Doesn’t WebAssembly allow unchanged C to be memory safe?

                                                    1. 2

                                                      Sort of, but not really. Unmanaged C isn’t allowed to escape the sandbox it’s assigned, but there are still plenty of opportunities for undefined behavior. Process-level isolation in OSes provides similar guarantees. In the context of WebAssembly, even if the TLS stack were segregated into its own module, that would do nothing to mitigate a Heartbleed-style vulnerability.

                                                      1. 2

                                                        There are other environments where the C standard is vague enough to allow C to compile to a managed and safe environment. Speaking as the local AS/400 expert: C there compiles to managed bytecode, which is then compiled again by a trusted translator.

                                                        1. 1

                                                          I try to keep up with folks’ skills in case opportunities arise. Do you know both AS/400 and z/OS? Or just AS/400?

                                                          Also interested in you elaborating on it making C safer.

                                                          1. 3

                                                            No, z is a separate thing I don’t know much about.

                                                            Because C on the AS/400 (or i, whatever IBM marketing calls it this week) is managed code, it does things like checking the validity of pointers to prevent buffer overflows. It does that by injecting hardware-enforced tagging. To prevent you from cheating it, the trusted translator is the only program allowed to generate native code. (AIX programs in the syscall emulator, however, can generate native code, but are then subject to normal Unix process boundaries and a kernel very paranoid about code running in a sandbox.) The tags are also used as capabilities to objects in the same address space, which it uses in place of a traditional filesystem.

                                                            1. 1

                                                              Thanks. That makes sense except for one thing: hardware-enforced tagging. I thought System/38’s hardware enforcement was taken out with things just type- or runtime-checked or something at firmware/software level. That’s at least how some folks were talking. Do you have any references that show what hardware checking the current systems use?

                                                              1. 1

                                                                No, tags and capabilities are still there, contrary to rumours otherwise.

                                                                The tagging apparatus on modern systems is undocumented and as a result I know little about it, but it’s definitely in the CPU, from what I’ve heard.

                                                                1. 1

                                                                  Ok. So, I guess I gotta press POWER CPU experts at some point to figure it out, or just look at the ISA references. At least I know there wasn’t something obvious I overlooked.

                                                                  EDIT: Just downloaded and searched the POWER ISA 3 PDF for “capabilities” and “pointer” to see what showed up. Nothing about this. They’re either wrong or it’s undocumented, as they told you. If it’s still there, that’s a solid reason for building critical services on top of IBM i, even if commodity stuff had the same reliability: security would be higher. You still gotta harden them, of course.

                                                    2. 1

                                                      Sort of. It has a much larger set of problems than the safe systems languages that compete with it. There’s less to know with them, unless you choose to dance with the devil in a specific module. Some people were more productive with them thanks to faster debugging, too. So it seems like C programmers force themselves to know and do more unnecessarily, at least on the language level.

                                                      Now, pragmatically, the ecosystem is so large and mature that using or at least outputting C might make sense in a lot of projects.

                                                    1. 89

                                                      In light of all of these problems, I’ll take my segfaults and buffer overflows

                                                        but I won’t take your buffer overflows. I have spent enough time doing emergency security patching of RCEs left by C programmers who were “good enough” not to need a memory-safe language.

                                                      1. 9

                                                        “I have the best taxi service in town because my car is so fast! Yeah, it has no seat belts, but don’t worry — I’m a very safe driver.”

                                                        1. 1

                                                          This is a beautiful way to put that.

                                                        2. 7

                                                            I was thinking about this actually. I was pretty sure that those were security issues rather than just annoyances. Maybe he doesn’t mean for the finished product? I hope that’s what he means.

                                                          1. 3

                                                            I’m sure he meant that, but it is still very unlikely that he or anyone else will iron out all segfaults and buffer overflows.

                                                          2. -9

                                                            So since PHP is memory-safe, it’s a good C replacement?

                                                            1. 23

                                                              Yes, if you have to choose between C and PHP, and both can do the job, you definitely should choose PHP, no question. The problem is that PHP often can’t do the job.

                                                              1. 17

                                                                 That’s a very uncharitable interpretation. Brainfuck is also memory safe and is obviously not a good C replacement; memory safety obviously isn’t the only requirement.

                                                                1. 3

                                                                  Yes. I chose PHP because it’s an extreme case.

                                                                  The point is: yes, Rust is safe, but the article explains why (besides memory safety) Rust isn’t a good C replacement. In other words: there are other programming languages that are better suited than Rust to replace C.

                                                                  1. 5

                                                                     I think everyone gets your point, but it’s banal noise that pollutes the discussion, hence the downvotes, which are actually pretty rare here.

                                                            1. 11

                                                              What happened to “do not break user space”?

                                                              1. 15

                                                                The mailing list is worth a read.

                                                                1. 3

                                                                  It won’t break any recent user space, only a really, really old one. And in that case you can revert the patch and build your own kernel.

                                                                  1. 9

                                                                    Well, you can always revert the patch and build your own kernel. That’s true for all the other proposed and shut-down user space breakages too.

                                                                    What is the barrier after which breaking user space becomes acceptable? Until now, I felt like the mantra of the kernel was that no such barrier exists and that user space should never be broken. Now we learn that “never” is a somewhat fluid term, even in the kernel.

                                                                    1. 1

                                                                      Because I think the likelihood that anybody cares about a.out core dumps is basically zero. While the likelihood that we have some odd old binary that is still a.out is slightly above zero.

                                                                      From the mailing list you were recommended to read. They believe they are breaking basically zero users.

                                                                      considering how even the toolchains cannot create a.out executables in its default configuration

                                                                      It’s even more unlikely that anyone using ancient toolchains still capable of generating a.out are also running a mainline kernel. If you’re using an ancient binary… same thing. You probably aren’t using a recent kernel.

                                                                  2. 2

                                                                    The commit message kind of implies that user space broke itself, and that the kernel has been ‘supporting’ something that hasn’t worked for quite a while:

                                                                    […] but considering how even the toolchains cannot create a.out executables in its default configuration […]

                                                                    1. 3

                                                                      Sure. Tools can’t create them any more. But if I still had a binary around from way back when, I could still run it today. Until the kernel removes support, that is.

                                                                      Don’t get me wrong, I get it: a.out binaries are likely nowhere to be found any more. But some might be. And this highlights the absurdity of the claim to never break user space. At some point, you have to deprecate and remove things or you cannot ever move forward.

                                                                      1. 3

                                                                        They drop old architectures sometimes - that breaks all of userspace for those platforms…

                                                                        1. 3

                                                                          And this highlights he absurdity of the claim to never break userspace.

                                                                          The mailing list comment from Linus literally says that they won’t do it if anyone actually says ‘hey that will break my workflow’ and that if they do it and it does break someone’s workflow they’ll revert it.

                                                                    1. 1

                                                                      I don’t really get what arguments there are for not having DNSSEC. But then, the world is complex and has quite a few people.

                                                                      Good to see that there is an actual working revocation mechanism, though - including … well done and very clear.

                                                                      1. 4

                                                                        don’t really get what arguments there are for not having DNSSEC

                                                                        The benefits don’t outweigh the issues, so it’s not worth it.

                                                                          • DNSSEC’s security ultimately is in the hands of whoever runs a top-level domain. Not all domains are run with the same level of, let’s say, competency, so depending on the top-level domain, DNSSEC might provide little to no additional security while pretending otherwise.
                                                                          • DNS resolution sits very low in the OS stack, to the point where there’s little room for error reporting. Any certificate-chain validation error will be seen by client applications as a failure to resolve a name, and there’s no API to provide more details. This leads to hard-to-debug issues.
                                                                          • DNSSEC uses late-’90s levels of key strengths and algorithms, and upgrading those is very hard because it requires cooperation between all of the DNSSEC users at once (read: is unlikely to happen).
                                                                          • DNSSEC provides no encryption, so it comes with zero privacy benefits for users.

                                                                          The increased maintenance burden and the non-existent error reporting for end users are bad enough that they can’t be compensated for by what practically amounts to little more than security theatre.
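
                                                                          To check whether a given zone is signed at all, you can ask for the signatures directly; a quick sketch (example.org happens to be a signed zone, last I checked):

                                                                              # request DNSSEC records along with the answer; a signed zone
                                                                              # returns RRSIG records next to the A records
                                                                              dig +dnssec A example.org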

                                                                        1. 2

                                                                            So the argument is that because there are theoretical attacks at the state-actor level and it is unattractive work at the other end, you don’t change the default at the client/resolver side for the 99.99% of use cases, to protect against all kinds of local hijacks by random script kiddies? Anything that breaks because of DNSSEC is supposed to break hard, and intentionally so. Mistakes are extremely uncommon compared to the daily exposure to risks (any untrusted network…). Any mistake at the server end of some individual would be fixed soon enough, and would not contaminate others. It is not mandatory, so any service can just not implement DNSSEC server-side and the risk disappears completely. Debugging is not so hard either, because it will consistently hard-fail for everyone. As a user, I posit there is no risk and all benefits.

                                                                            Because you disregard things like DANE, SSHFP and PGP records that require DNSSEC and actually do have significant privacy benefits…

                                                                      1. 5

                                                                        Title is slightly wrong. You can boot it but you can’t install it because the OS is blocked from seeing the internal storage.

                                                                        1. 15

                                                                          I don’t think “blocked from seeing the internal storage” is quite the correct characterization. The T2 chip is acting as an SSD controller, I bet if somebody takes the time to write a T2 driver for Linux everything will work just fine. The difficulty there will likely be that there is no datasheet available for the chip so the driver will have to be reverse engineered from mac OS which is certainly not trivial.

                                                                          1. 5

                                                                            This has shades of the “Lenovo is blocking Linux support” “incident” where Lenovo just forced the storage controller into a RAID mode Linux didn’t have a driver for.

                                                                            1. 2

                                                                              At least from what the system report tool says the drive appears as an NVME SSD and just an iteration on the one from previous generations (AP0512J vs AP0512M in the 2018 Air). So it might just work with the Linux NVME drivers once there’s a working UEFI shim that’s trusted. At that point this tutorial seems plausible.

                                                                              1. 3

                                                                                Trust is not an issue because secure boot can be completely disabled.

                                                                                As the article mentions, people who tried live USBs found out that the internal storage is not recognized. So looks like T2 is indeed actually acting as an SSD controller. (And of course macOS would report the actual underlying SSD even if there is no direct connection to it. The T2 could be reporting that info to the OS.)

                                                                            2. 8

                                                                              The difficulty there will likely be that there is no datasheet available for the chip

                                                                              Unless they completely and utterly butchered the initialization, no amount of datasheets will save you. From the T2 documentation:

                                                                              By default, Mac computers supporting secure boot only trust content signed by Apple. However, in order to improve the security of Boot Camp installations, support for secure booting Windows is also provided. The UEFI firmware includes a copy of the Microsoft Windows Production CA 2011 certificate used to authenticate Microsoft bootloaders.

                                                                              NOTE: There is currently no trust provided for the Microsoft Corporation UEFI CA 2011, which would allow verification of code signed by Microsoft partners. This UEFI CA is commonly used to verify the authenticity of bootloaders for other operating systems such as Linux variants.

                                                                              To bypass the check of the cryptographic signature, you’d probably have to find some kind of exploitable vulnerability in the verification code (or even earlier in the boot process so that you get code execution in the bootloader before the actual check).

                                                                              1. 8

                                                                                As the article says, you can disable the T2 Secure Boot so the code signature verification is not the problem at that point. The problem then is that the T2 acts as the SSD controller, and nobody has taught Linux yet how to talk to a T2 chip. The article incorrectly conflates the two issues.

                                                                                1. 5

                                                                                  Doesn’t look like it’s conflating them. You might have to scroll down further :) but there’s a screenshot of the Startup Security Utility and this text:

                                                                                  However, reports have come in that even with it disabled, users are still unable to boot a Linux OS as the hardware won’t recognize the internal storage device. Using the External Boot option (pictured above), you may be able to run Linux from a Live USB, but that certainly defeats the purpose of having an expensive machine with bleeding-edge hardware.

                                                                                2. 2

                                                                                  Secure boot can be disabled. Then the machine will boot anything you tell it to boot, bringing the security inline with machines predating the T2.

                                                                                  Source: I tried it out on my iMac pro which is a T2 machine.

                                                                                  1. 1

                                                                                    edit: mis-read that. Yeah until they add partner support you’re probably pretty stuck. Although somebody like RedHat or Canonical that have relationships with Microsoft might be able to have them cross-sign their shim to support booting on the new Air. Either that or we’re stuck waiting for Apple to support the UEFI CA.

                                                                              1. 8

                                                                                If you need a freely licensed font, Google’s Noto has you covered. Fedora has it as google-noto-sans-egyptian-hieroglyphs-fonts.noarch, for example.

                                                                                (This must be a fairly new addition to Noto, since I couldn’t find it the last time I ‘researched’ this exact same topic.)

                                                                                1. 4

                                                                                  The sans hieroglyphs in the name makes it sound like the censored version of the font.

                                                                                  1. 5

                                                                                    sans stands for sans-serif.

                                                                                    1. 2

                                                                                      If you don’t know what it actually means

                                                                                  1. 7

                                                                                    CTEs are great, but it’s important to understand their implementation characteristics, as these differ between databases. Some RDBMSs, like PostgreSQL, treat a CTE as an optimization fence, while others (Greenplum, for example) plan them as subqueries.
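
                                                                                    A quick way to see the fence in (pre-12) PostgreSQL; the table and column names are made up:

                                                                                        # the CTE variant materializes the inner query first, so the outer
                                                                                        # filter can't be pushed down into it; compare the two plans
                                                                                        psql -c "EXPLAIN WITH t AS (SELECT * FROM users) SELECT * FROM t WHERE id = 42"
                                                                                        psql -c "EXPLAIN SELECT * FROM (SELECT * FROM users) s WHERE id = 42"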

                                                                                    1. 2

                                                                                      The article mentions offhand they use SQL Server, which AFAIK does a pretty good job of using them in plans. I believe (not 100% sure) its optimiser can see right through CTEs.

                                                                                      1. 2

                                                                                      … and then you have RDBMSs like Oracle, whose support for CTEs is a complete and utter disgrace.

                                                                                      I’m praying for the day Oracle’s DB falls out of use, because I imagine that will happen sooner than them managing to properly implement SQL standards from 20 years ago.

                                                                                        1. 2

                                                                                        At university we had to use Oracle, via the iSQL web interface, for all the SQL-related parts of our database courses. It was the slowest, most painful experience: executing a simple SELECT could take several minutes, and navigating the interface or paginating results would take at least a minute per operation.

                                                                                        I would always change it to show all results on one page (no pagination), but the environment would do a full reset every few hours, requiring me to spend probably 15-30 minutes changing the settings back to my slightly saner defaults. Every lab would take at least twice as long because of the pain of using this system. I loved the course and the lecturer, it was probably one of the best courses I took during my time at university, but I did not want to use Oracle again after that point.

                                                                                        I’ve heard that they have nowadays moved the course to PostgreSQL, which seems like a much saner approach. What I would have given to be able to run the code locally on my own computer at that time!

                                                                                        2. 1

                                                                                      I didn’t know this; so using a CTE in current Postgres would be at a disadvantage compared to subqueries?

                                                                                      I haven’t really used CTEs in Postgres much yet, but I’ve looked at them and considered them. Are there any plans to enable optimization through CTEs in pg? Or is there a deeper, more fundamental underlying problem?

                                                                                          1. 5

                                                                                            would be at a disadvantage compared to subqueries

                                                                                        it depends. I have successfully used CTEs to circumvent shortcomings in the planner, which was mis-estimating row counts no matter what I set the stats target to (this was also before CREATE STATISTICS existed).

                                                                                            Is there any plans on enabling optimization through CTE’s in pg

                                                                                            it’s on the table for version 12
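
                                                                                        The patch under discussion adds an explicit per-CTE keyword so you can opt out of the fence; a sketch of that syntax (this is what eventually shipped in PostgreSQL 12):

                                                                                            # let the planner inline the CTE like a subquery
                                                                                            psql -c "EXPLAIN WITH t AS NOT MATERIALIZED (SELECT * FROM users) SELECT * FROM t WHERE id = 42"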

                                                                                            1. 2

                                                                                              It’s not necessarily less efficient due to the optimization fence, it all depends on your workload. The underlying reason is a conscious design decision, not a technical issue. There have been lots of discussions around changing it, or at least to provide the option per CTE on how to plan/optimize it. There are patches on the -hackers mailing list but so far nothing has made it in.

                                                                                            2. 1

                                                                                              Does anyone know if CTEs are an optimization fence in DB2 as well?

                                                                                            1. 2

                                                                                              Can someone ELI5 why Firefox is not to be trusted anymore?

                                                                                              1. 4

                                                                                                 They’ve done some questionable things. They did this weird tie-in with Mr. Robot or some TV show, where they auto-installed a plugin (but disabled, thankfully) for basically everyone as part of an update. It wasn’t enabled by default, if I remember right, but it got installed everywhere.

                                                                                                 Their income stream, according to Wikipedia, is funded by donations and “search royalties”. But really their entire revenue stream comes directly from Google. Also, in 2012 they failed an IRS audit, having to pay 1.5 million dollars. Hopefully they learned their lesson; time will tell.

                                                                                                They bought pocket and said it would be open sourced, but it’s been over a year now, and so far only the FF plugin is OSS.

                                                                                                1. 4

                                                                                                  Some of this isn’t true.

                                                                                                   1. Mr. Robot was like a promotion, but not a paid thing like an ad. Someone thought this was a good idea and managed to bypass code review. This won’t happen again.
                                                                                                   2. Money comes from a variety of search providers, depending on locale. Money goes directly into the people, the engineers, the product. There are no stakeholders we need to make happy. No corporations we have to answer to. Search providers come to us to get our users.
                                                                                                   3. Pocket. Still not everything, but much more than the add-on: https://github.com/Pocket?tab=repositories
                                                                                                  1. 3
                                                                                                    1. OK, fair enough, but I never used the word “ad”. Glad it won’t happen again.

                                                                                                     2. When like 80 or 90% of their funding is directly from Google… it at the very least raises questions. So I wouldn’t say untrue; perhaps I over-simplified. Fair enough.

                                                                                                    3. YAY! Good to know. I hadn’t checked in a while, happy to be wrong here. Hopefully this will continue.

                                                                                                    But overall thank you for elaborating. I was trying to keep it simple, but I don’t disagree with anything you said here. Also, I still use FF as my default browser. It’s the best of the options.

                                                                                                  2. 4

                                                                                                    But really their entire revenue stream comes directly from Google.

                                                                                                    To put this part another way: the majority of their income comes from auctioning off the default search slot. That happens to be worth somewhere in the hundreds of millions of dollars to Google, but Microsoft also bid (as did other search engines in other parts of the world; IIRC the choice is localised) - Google just bid higher. There’s a meta-level criticism that Mozilla can’t afford to challenge /all/ the possible corporate bidders for that search placement, but they aren’t directly beholden to Google in the way the previous poster suggests.

                                                                                                    1. 1

                                                                                                      Agreed, except it’s well over half of their income; I think Google accounts for something like 80 or 90% of their funding.

                                                                                                      1. 2

                                                                                                        And if they diversify and, say, sell tiles on the new-tab screen? Or integrate read-it-later services? That also doesn’t fly, as recent history has shown.

                                                                                                        People ask Mozilla not to sell ads, not to take money for search-engine integration, not to partner with media properties, and still to keep up their investment in developing the platform.

                                                                                                        People offer no explanation of how Mozilla can do all that while rejecting every means it has of making money.

                                                                                                        1. 2

                                                                                                          Agreed. I assume this wasn’t an attack on me personally, just a comment on the sad state of FF’s diversification woes. They definitely need to diversify; I don’t have any awesome suggestions for how. Having all your income controlled by one source is almost always a terrible idea long-term.

                                                                                                          I don’t have problems, personally, with their selling of search integration; I have problems with Google essentially being their only income stream. I think it’s great they are trying to diversify, and I like that they do search integration by region/area, so at least it’s not 100% Google. I hope they continue testing the waters and finding new ways to diversify. I’m sure some will be mistakes, but hopefully with time they can get Google (or anyone else) down around the 40-50% range.

                                                                                                        2. 1

                                                                                                          That’s what “majority of their income” means. Or at least that’s what I intended it to mean!

                                                                                                    2. 2

                                                                                                      There’s also the fact that they are based in the USA, which means following American law. American law is not very protective of personal data, and even less so if you are not an American citizen.

                                                                                                      Moreover, they are testing, in Nightly, using Cloudflare as the DNS resolver even when the operating system is configured to use another one. A DNS resolver sees every name you look up, which means it knows which websites you visit. You can disable this in about:config, but putting it there rather than in the Firefox preferences menu is a clear sign it’s not meant to be easy.

                                                                                                      You can also add the fact that it is not easy to self-host the data your browser stores. Can I be sure that data isn’t sold, when their primary financial support is Google, a company whose revenue is based on data?

                                                                                                      1. 3

                                                                                                        Mozilla does not have your personal data. Whatever they have for sync is encrypted in such a way that it cannot be tied to an account or decrypted.

                                                                                                        1. 1

                                                                                                          They have my sync data; sync data is personal data, so they have my personal data. How do they encrypt it? Do you have any link about how they manage it? In which country is it stored? What law applies to it?

                                                                                                          1. 4

                                                                                                            Mozilla has your encrypted sync data. They do not have the key to decrypt that data. Your key never leaves your computer. All data is encrypted and decrypted locally in Firefox with a key that only you have.

                                                                                                            Your data is encrypted with very strong crypto and the encryption key is derived from your password with a very strong key derivation algorithm. All locally.

                                                                                                            The encrypted data is copied to and from Mozilla’s servers. The servers are dumb and do not actually know or do crypto. They just store blobs. The servers are in the USA and on AWS.

                                                                                                            The worst that can happen is that Mozilla has to hand over data to some three letter organization, which can then run their supercomputer for a thousand years to brute force the decryption of your data. Firefox Sync is designed with this scenario in mind.

                                                                                                            This is of course assuming that your password is not ‘hunter2’.

                                                                                                            It is starting to sound like you went through this effort because you don’t trust Mozilla with your data. That is totally fair, but I think that if you had understood the architecture a bit better, you may actually not have decided to self host. This is all put together really well, and with privacy and data breaches in mind. IMO there is very little reason to self host.
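                                                                                                            To make the “dumb server” idea concrete, here is a minimal Python sketch. It is not Mozilla’s actual Sync protocol or key schedule, just an illustration of deriving the key locally from the password so that only opaque blobs ever leave the machine; it assumes the third-party cryptography package:

                                                                                                                # A sketch of client-side encryption, NOT Mozilla's real Sync crypto.
                                                                                                                import base64
                                                                                                                import os

                                                                                                                from cryptography.fernet import Fernet
                                                                                                                from cryptography.hazmat.primitives import hashes
                                                                                                                from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

                                                                                                                def key_from_password(password: bytes, salt: bytes) -> bytes:
                                                                                                                    # Derived locally; the password itself never leaves the client.
                                                                                                                    kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32,
                                                                                                                                     salt=salt, iterations=600_000)
                                                                                                                    return base64.urlsafe_b64encode(kdf.derive(password))

                                                                                                                salt = os.urandom(16)                          # public, stored next to the blob
                                                                                                                key = key_from_password(b"not-hunter2", salt)  # stays on the client
                                                                                                                blob = Fernet(key).encrypt(b'{"bookmarks": ["..."]}')

                                                                                                                # Only `salt` and `blob` get uploaded; the server just stores bytes
                                                                                                                # it cannot decrypt.
                                                                                                                assert Fernet(key).decrypt(blob) == b'{"bookmarks": ["..."]}'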

                                                                                                            1. 1

                                                                                                              “The worst that can happen is that Mozilla has to hand over data to some three letter organization, which can then run their supercomputer for a thousand years to brute force the decryption of your data. Firefox Sync is designed with this scenario in mind.”

                                                                                                              That’s not the worst by far. The Core Secrets leak indicated they were compelling suppliers, via the FBI, to put in backdoors. So they’d either pay or force a developer to insert a weakness that looks accidental, push malware in during an update, or (most likely) just use a browser exploit on the target.

                                                                                                              1. 1

                                                                                                                In all of those cases, it’s game over for your browser data regardless of whether you use Firefox Sync, Mozilla-hosted or otherwise.

                                                                                                                1. 1

                                                                                                                  That’s true! Unless they rewrite it all in Rust with overflow checking on. And in a form that an info-flow analyzer can check for leaks. ;)

                                                                                                              2. 1

                                                                                                                As you said, it’s totally fair not to trust Mozilla with data. As part of that, it should always be possible and supported to “self-host”, as a means of keeping that option open. Enough said on that point.

                                                                                                                As for “understanding the architecture”, that also comes with appreciating the business practices, the ethics, and the willingness to work within the privacy laws of a given jurisdiction. None of this is being conveyed well by any of the major players, so with the minor ones having to cater to those “big guys”, it’s no surprise that mistrust is present here.

                                                                                                              3. 2

                                                                                                                How do they encrypt it?

                                                                                                                On the client, of course. (Even Chrome does this the same way.) Firefox is open source; you can find out for yourself exactly how everything is done. I found this keys module; if you really care, you can find where the encrypt operation is invoked, what data goes into it, etc.

                                                                                                                1. 2

                                                                                                                  You don’t have to give it to them. Firefox Sync is totally optional; I for one don’t use it.

                                                                                                                  Country: almost certainly the USA. Encryption: looks like this is what they use: https://wiki.mozilla.org/Labs/Weave/Developer/Crypto

                                                                                                              4. 2

                                                                                                                The move to Cloudflare for DNS over HTTPS is annoying enough to make me consider other browsers.

                                                                                                                You can also add the fact that it is not easy to self-host the data your browser stores. Can I be sure that data isn’t sold, when their primary financial support is Google, a company whose revenue is based on data?

                                                                                                                Please, no FUD. :)

                                                                                                                1. 3

                                                                                                                  move to Cloudflare

                                                                                                                  It’s an experiment, not a permanent “move”. Right now you can manually set your own resolver and enable/disable DoH in about:config.
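                                                                                                                  For reference, the two main about:config prefs involved are roughly the ones below; the mode values are from memory and may change while this is still an experiment:

                                                                                                                      network.trr.mode   0 = off (default), 2 = DoH first with system-DNS fallback,
                                                                                                                                         3 = DoH only, 5 = explicitly off
                                                                                                                      network.trr.uri    https://mozilla.cloudflare-dns.com/dns-query (or any DoH endpoint)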

                                                                                                            1. 8

                                                                                                              I’d honestly much rather have software update itself using the official OS-level updating process than some home-grown mechanism. Point is: once something runs on your machine, it has the ability to alter your machine as it sees fit.

                                                                                                              Sure, some changes require elevated privileges, but whether it’s Skype asking for sudo to install the update it has downloaded and then abusing that privilege to alter your system in undesired ways, or Skype’s repository containing undesired packages, makes no difference.

                                                                                                              To the contrary: apt can be configured to ask before installing anything and normally even does so by default.

                                                                                                              The only change that could possibly placate the author would be to remove all auto-updating capability, but that would be much worse for everybody if there ever were a remotely exploitable vulnerability in Skype, because then the attack vector shifts from „Microsoft can compromise your machine“ to „everybody can compromise your machine“, and many users have no obvious way to do anything about that, or even the understanding that they should.

                                                                                                              1. 4

                                                                                                                It’s a bit of a tough choice. With the current state of things, most users would see a massive improvement switching from ISP DNS servers, which admit to collecting and selling your data, to Cloudflare, which has agreed to protect privacy.

                                                                                                                In the end, you have to trust someone with your DNS. Mozilla could probably host it themselves, but they don’t have the wide spread of server locations that a CDN company has.

                                                                                                                1. 5

                                                                                                                  While I agree that you need to trust someone with your DNS, it shouldn’t be a specific app making that choice for you. A household, or even a single user with multiple devices, benefits from their router caching DNS results for all of them; every app on every device doing this independently is foolish. If Mozilla wants to help users, they can run an informational campaign. Setting a precedent of apps each using their own DNS, circumventing what users have set for themselves, is the worst solution.

                                                                                                                  1. 1

                                                                                                                    It isn’t ideal that Firefox is doing DNS in-app, but it’s the most realistic solution. They could try to get Microsoft, Apple and all the Linux distros to change to DNS over HTTPS, and maybe in 5 years we might all have it; or they can just do it themselves and we all have it in a few months. Once Firefox has proven it works really well, OS vendors will start adding it, and Firefox can remove their own version, or distros will patch it to use the system DoH.

                                                                                                                    1. 6

                                                                                                                      They could try to get Microsoft, Apple and all the Linux distros to change to DNS over HTTPS

                                                                                                                      I don’t WANT DNS over HTTPS. I especially don’t want DNS over HTTP/2.0. There’s a lot of value in having protocols that are easy to implement, debug, and understand at a low level, and none of those families of protocols are that.

                                                                                                                      Add TLS, maybe – it’s also a horrendous mess, but since DNSCurve seems to be dead, it may get enough traction. Cloudflare, if they really want, can do protocol sniffing on port 443. But please, let’s not make the house-of-cards protocol stack that is the internet even more complex.

                                                                                                                      1. 8

                                                                                                                        DNS is “easy to implement, debug, and understand”? That’s news to me.

                                                                                                                        1. 5

                                                                                                                          It’s for sure easier than when tunneled over HTTP/2 > TLS > TCP, because that’s how DoH works. The payload being transmitted over HTTP is actual binary DNS packets, so all this does is add complexity overhead.

                                                                                                                          I’m not a big fan of DoH because of that, and also because it means that, by default, intranet and development sites won’t be available to users and developers any more, invalidating the age-old concept of private DNS.

                                                                                                                          So either you now need to deploy customized browser packages, or tweak browsers’ configs via group policy or equivalent functionality (if available), or expose your intranet names to public DNS, which is a security downgrade from the status quo.
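                                                                                                                          To make the layering concrete, here is roughly what an RFC 8484 DoH lookup looks like in Python (the third-party dnspython and requests packages are assumed). Note the raw binary DNS message riding inside the HTTP POST, and the two separate error layers:

                                                                                                                              # Sketch of a DoH lookup, assuming `dnspython` and `requests`.
                                                                                                                              import dns.message
                                                                                                                              import requests

                                                                                                                              query = dns.message.make_query("example.com", "A")  # plain binary DNS message

                                                                                                                              resp = requests.post(
                                                                                                                                  "https://cloudflare-dns.com/dns-query",
                                                                                                                                  data=query.to_wire(),  # the HTTP body *is* the raw DNS packet
                                                                                                                                  headers={"Content-Type": "application/dns-message"},
                                                                                                                              )
                                                                                                                              resp.raise_for_status()  # error layer 1: HTTP (e.g. a 502 from the server)

                                                                                                                              answer = dns.message.from_wire(resp.content)
                                                                                                                              print(answer.rcode())    # error layer 2: the DNS RCODE inside the payload
                                                                                                                              print(answer.answer)     # the actual records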

                                                                                                                          1. 3

                                                                                                                            It is when you have a decent library to encode/decode DNS packets, and UDP is nearly trivial to deal with compared to TCP (much less TLS).
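                                                                                                                            For comparison, the same lookup over classic DNS/UDP with such a library (again dnspython, assumed) is just:

                                                                                                                                # One datagram out, one datagram back.
                                                                                                                                import dns.message
                                                                                                                                import dns.query

                                                                                                                                query = dns.message.make_query("example.com", "A")
                                                                                                                                answer = dns.query.udp(query, "1.1.1.1", timeout=2.0)
                                                                                                                                print(answer.answer)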

                                                                                                                          2. 0

                                                                                                                            Stacking protocols makes things simpler. Instead of having to understand one massive protocol that sits on its own, you only have to understand the layer you’re interested in. I haven’t looked into DNS, but I can’t imagine it’s all that simple. It’s incredibly trivial for me to experiment and develop with applications running on top of HTTP, because all the tools already exist for it and aren’t specific to DoH. You can also share software and libraries, so you only need one HTTP library for a lot of protocols instead of each of them managing sending data over TCP itself.

                                                                                                                            1. 6

                                                                                                                              But the thing transmitted over HTTP is binary DNS packets. So when debugging you still need to know how DNS packets are built, but now you also have to deal with HTTP on top. Your HTTP libraries only give you a view into the HTTP part of the protocol stack, not into the DNS part; when you need to debug that, you’re back to square one, except you now also need your HTTP libraries.

                                                                                                                              1. 6

                                                                                                                                And don’t forget that HTTP/2 is basically a binary version of HTTP, so now you have two translation steps! Also, because DoH is basically just the original DNS encoding, it only adds complexity. For instance, the spec itself points out that you have two levels of error handling: one for HTTP errors (say, a 502 because the server is overloaded) and one for DNS errors.

                                                                                                                                It makes more sense to just encode DNS over TLS (without the unnecessary HTTP/2 stuff), or to completely ditch the regular DNS spec and use a different wire format based on JSON or XML over HTTP.

                                                                                                                                1. 4

                                                                                                                                  And don’t forget that HTTP/2 is basically a binary version of HTTP

                                                                                                                                  If only it was that simple. There’s server push, multi-streaming, flow control, and a huge amount of other stuff on top of HTTP/2, which gives it a relatively huge attack surface compared to just using (potentially encrypted) UDP packets.

                                                                                                                                  1. 3

                                                                                                                                    Yeah, I forgot about all that extra stuff. It’s there (and thus can be exploited), even if it’s not strictly needed for DoH (I really like that acronym for this, BTW :P)

                                                                                                                        2. 1

                                                                                                                          it shouldn’t be a specific app making that choice for you

                                                                                                                          I think there is a disconnect here between what security researchers know to be true vs what most people / IT professionals think is true.

                                                                                                                          Security, in this case privacy and data integrity, is best handled with the awareness of the application, not by trying to make it part of the network or infrastructure layers. That mostly doesn’t work.

                                                                                                                          You can’t get any reasonable security guarantees from the vast majority of local network equipment / CPE. To provide any kind of privacy, the application is the right security barrier, not your local network or ISP.

                                                                                                                          1. 3

                                                                                                                            I agree that sensible defaults will increase security for the majority of users, and there is something to be said for one’s browser being the single most DNS-hungry app for that same majority.

                                                                                                                            If it’s an option that one can simply override (which appears to be the case), then why not? It will improve things for lots of people, and those who choose to have the same type of security (dnscrypt/DNSSEC/future DNS improvements) on their host or router can still do so.

                                                                                                                            But I can’t help thinking it’s a bit of a duct-tape solution to bigger issues with DNS overall as a technology and the privacy concerns it represents.

                                                                                                                      1. 2

                                                                                                                        IMHO, copying a file to a local machine should have no side effects aside from the file existing on the machine.

                                                                                                                        It would be a totally fine compromise to first having to explicitly launch the application before macOS registers the URL- or filetype handler. Once you trick users into launching your malware, all bets are off anyways and it doesn’t matter whether your malware then also registers a URL handler or not.

                                                                                                                        1. 17

                                                                                                                          An interesting aspect of this: their employees’ credentials were compromised by intercepting two-factor authentication that used SMS. Security folks have been complaining about SMS-based 2FA for a while, but it’s still a common configuration on big cloud providers.

                                                                                                                          1. 11

                                                                                                                            What’s especially bugging me is platforms like Twitter that do provide alternatives to SMS for 2FA, but still require SMS to be enabled even if you want to use safer means. The moment you remove your phone number from Twitter, all 2FA is disabled.

                                                                                                                            The problem is that if SMS is an option, that’s going to be what an attacker uses. It doesn’t matter that I myself always use a Yubikey.

                                                                                                                            But the worst are services that also use that 2FA phone number they got for password recovery. Forgot your password? No problem. Just type the code we just sent you via SMS.

                                                                                                                            This effectively reduces the strength of your overall account security to the ability of your phone company to resist social engineering. Your phone company who has trained their call center agents to handle „customer“ requests as quickly and efficiently as possible.

                                                                                                                            update: I just noticed that Twitter has fixed this, and you can now disable SMS while keeping TOTP and U2F enabled.
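                                                                                                                            For contrast, TOTP needs no phone number at all; the only shared state is a secret, so there is nothing for a phone company to socially engineer away. A minimal RFC 6238 sketch in stdlib Python (illustration only, not a hardened implementation; the secret below is a made-up example):

                                                                                                                                import base64
                                                                                                                                import hashlib
                                                                                                                                import hmac
                                                                                                                                import struct
                                                                                                                                import time

                                                                                                                                def totp(secret_b32: str, period: int = 30, digits: int = 6) -> str:
                                                                                                                                    key = base64.b32decode(secret_b32, casefold=True)
                                                                                                                                    counter = int(time.time()) // period           # time-based counter
                                                                                                                                    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
                                                                                                                                    offset = mac[-1] & 0x0F                        # dynamic truncation
                                                                                                                                    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
                                                                                                                                    return str(code % 10 ** digits).zfill(digits)

                                                                                                                                # Site and authenticator share the secret; a fresh code every 30 s,
                                                                                                                                # no phone network involved.
                                                                                                                                print(totp("JBSWY3DPEHPK3PXP"))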

                                                                                                                            1. 2

                                                                                                                              But the worst are services that also use that 2FA phone number they got for password recovery. Forgot your password? No problem. Just type the code we just sent you via SMS.

                                                                                                                              I get why they do this from a convenience perspective, but it bugs me to call the result 2FA. If you can change the password through the SMS recovery method, password and SMS aren’t two separate authentication factors; it’s just 1FA!

                                                                                                                              1. 1

                                                                                                                                Have sites been keeping SMS because of the cost of supporting locked-out users? Lost phones are a frequent occurrence. I wonder if sites have thought about implementing really slow but automated recovery processes to avoid this issue. Going through support with Google after losing your phone is painful, but smaller sites don’t have support staff at all, so they are likely to keep allowing SMS, since your mobile phone number is pretty recoverable.

                                                                                                                                1. 1

                                                                                                                                  In the case of the many accounts that are now de facto protected by nothing but a single, easily hijacked SMS, I’d much rather lose access than risk somebody else getting in.

                                                                                                                                  If there were a way to tell these services and my phone company that I absolutely never want to recover my account, I would do it in a heartbeat.

                                                                                                                                2. 1

                                                                                                                                  This effectively reduces the strength of your overall account security to the ability of your phone company to resist social engineering. Your phone company who has trained their call center agents to handle „customer“ requests as quickly and efficiently as possible.

                                                                                                                                  True. Also, if you have the target’s phone number, you can skip the social engineering, and go directly for SS7 hacks.

                                                                                                                                3. 1

                                                                                                                                  I don’t remember the details, but there is a specific carrier (T-Mobile, I think?) that is extremely susceptible to SMS interception, and it’s people on their network who have been getting targeted by attacks like this.

                                                                                                                                  1. 4

                                                                                                                                    Your mobile phone number can relatively easily be stolen (more specifically: ported out to another network by an attacker). This happened to me on T-Mobile, but I believe it is possible on other networks too. In my case my phone number was used to set up Zelle and transfer money out of my bank account.

                                                                                                                                    This article actually provides more detail on the method attackers have used to port your number: https://motherboard.vice.com/en_us/article/vbqax3/hackers-sim-swapping-steal-phone-numbers-instagram-bitcoin

                                                                                                                                    1. 1

                                                                                                                                      T-Mobile sent a text message blast to all customers many months ago urging users to set up a security code on their account to prevent this. Did you do it?

                                                                                                                                      Feb 1, 2018: “T-Mobile Alert: We have identified an industry-wide phone number port out scam and encourage you to add account security. Learn more: t-mo.co/secure”

                                                                                                                                      1. 1

                                                                                                                                        Yeah, I did, after recovering my number. Sadly, this action was taken only in response to me and others having already been attacked :)