1. 5

    Title is slightly wrong. You can boot it but you can’t install it because the OS is blocked from seeing the internal storage.

    1. 15

      I don’t think “blocked from seeing the internal storage” is quite the correct characterization. The T2 chip is acting as an SSD controller; I bet that if somebody takes the time to write a T2 driver for Linux, everything will work just fine. The difficulty there will likely be that there is no datasheet available for the chip, so the driver will have to be reverse-engineered from macOS, which is certainly not trivial.

      1. 5

        This has shades of the “Lenovo is blocking Linux support” “incident” where Lenovo just forced the storage controller into a RAID mode Linux didn’t have a driver for.

        1. 2

          At least from what the system report tool says, the drive appears as an NVMe SSD and just an iteration on the one from previous generations (AP0512J vs AP0512M in the 2018 Air). So it might just work with the Linux NVMe drivers once there’s a working UEFI shim that’s trusted. At that point this tutorial seems plausible.

          1. 3

            Trust is not an issue because secure boot can be completely disabled.

            As the article mentions, people who tried live USBs found out that the internal storage is not recognized. So looks like T2 is indeed actually acting as an SSD controller. (And of course macOS would report the actual underlying SSD even if there is no direct connection to it. The T2 could be reporting that info to the OS.)

        2. 8

          The difficulty there will likely be that there is no datasheet available for the chip

          Unless they completely and utterly butchered the initialization, no amount of datasheets will save you. From the T2 documentation:

          By default, Mac computers supporting secure boot only trust content signed by Apple. However, in order to improve the security of Boot Camp installations, support for secure booting Windows is also provided. The UEFI firmware includes a copy of the Microsoft Windows Production CA 2011 certificate used to authenticate Microsoft bootloaders.

          NOTE: There is currently no trust provided for the Microsoft Corporation UEFI CA 2011, which would allow verification of code signed by Microsoft partners. This UEFI CA is commonly used to verify the authenticity of bootloaders for other operating systems such as Linux variants.

          To bypass the check of the cryptographic signature, you’d probably have to find some kind of exploitable vulnerability in the verification code (or even earlier in the boot process so that you get code execution in the bootloader before the actual check).

          1. 8

            As the article says, you can disable the T2 Secure Boot so the code signature verification is not the problem at that point. The problem then is that the T2 acts as the SSD controller, and nobody has taught Linux yet how to talk to a T2 chip. The article incorrectly conflates the two issues.

            1. 5

              Doesn’t look like it’s conflating them. You might have to scroll down further :) but there’s a screenshot of the Startup Security Utility and this text:

              However, reports have come in that even with it disabled, users are still unable to boot a Linux OS as the hardware won’t recognize the internal storage device. Using the External Boot option (pictured above), you may be able to run Linux from a Live USB, but that certainly defeats the purpose of having an expensive machine with bleeding-edge hardware.

            2. 2

              Secure boot can be disabled. Then the machine will boot anything you tell it to boot, bringing the security in line with machines predating the T2.

              Source: I tried it out on my iMac Pro, which is a T2 machine.

              1. 1

                edit: mis-read that. Yeah, until they add partner support you’re probably pretty stuck. Although somebody like Red Hat or Canonical that has relationships with Microsoft might be able to have them cross-sign their shim to support booting on the new Air. Either that or we’re stuck waiting for Apple to support the UEFI CA.

          1. 8

            If you need a freely licensed font, Google’s Noto has you covered. Fedora has it as google-noto-sans-egyptian-hieroglyphs-fonts.noarch, for example.

            (this must be a fairly new addition to Noto, since I couldn’t find it last time I ‘researched’ this exact same topic)

            1. 3

              The “sans hieroglyphs” in the name makes it sound like the censored version of the font.

              1. 5

                sans stands for sans-serif.

                1. 2

                  If you don’t know what it actually means

              1. 7

                CTEs are great, but it’s important to understand the implementation characteristics as they differ between databases. Some RDBMSs, like PostgreSQL, treat a CTE like an optimization fence while others (Greenplum for example) plan them as subqueries.
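                A rough sketch of the two query shapes being compared, using SQLite via Python’s sqlite3 (the table name and data are made up for illustration). Both forms return the same rows; the PostgreSQL-vs-others difference described above is purely in how the planner treats the CTE form:

```python
import sqlite3

# The same query written as a CTE and as a plain subquery.
# Pre-v12 PostgreSQL materializes the CTE first (an "optimization fence"),
# while engines such as Greenplum plan it like the subquery form.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (id INTEGER, amount REAL);
    INSERT INTO orders VALUES (1, 10.0), (2, 250.0), (3, 99.0);
""")

cte_sql = """
    WITH big_orders AS (
        SELECT id, amount FROM orders WHERE amount > 100
    )
    SELECT COUNT(*) FROM big_orders;
"""
subquery_sql = """
    SELECT COUNT(*) FROM (
        SELECT id, amount FROM orders WHERE amount > 100
    );
"""

# Results are identical; only the query plan may differ between databases.
print(conn.execute(cte_sql).fetchone()[0])       # prints 1
print(conn.execute(subquery_sql).fetchone()[0])  # prints 1
```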

                1. 2

                  The article mentions offhand they use SQL Server, which AFAIK does a pretty good job of using them in plans. I believe (not 100% sure) its optimiser can see right through CTEs.

                  1. 2

                    … and then you have RDBMSs like Oracle whose support for CTE is a complete and utter disgrace.

                    I’m praying for the day Oracle’s DB falls out of use, because I imagine that will happen sooner than them managing to properly implement SQL standards from 20 years ago.

                    1. 2

                      At university we had to use Oracle, via the iSQL web interface, for all the SQL-related parts of our database courses. It was the slowest, most painful experience: executing a simple select could take several minutes, and navigating the interface/paginating results would take at least a minute per operation.

                      I would always change it to show all results on one page (no pagination), but the environment would do a full reset every few hours, requiring me to spend probably 15–30 minutes changing the settings back to my slightly saner defaults. Every lab would take at least twice as long because of the pain of using this system. I loved the course and the lecturer, it was probably one of the best courses I took during my time at university, but I did not want to use Oracle again after that point.

                      I’ve heard that they have nowadays moved the course to PostgreSQL instead, which seems like a much saner approach. What I would have given to be able to run the code locally on my computer at that time!

                    2. 1

                      I didn’t know this, so using a CTE in Postgres currently would be at a disadvantage compared to subqueries?

                      Haven’t really used CTEs in Postgres much yet, but I’ve looked at them and considered them. Are there any plans to enable optimization through CTEs in pg? Or is there a deeper, more fundamental underlying problem?

                      1. 5

                        would be at a disadvantage compared to subqueries

                        it depends. I have successfully used CTEs to circumvent shortcomings in the planner, which was mis-estimating row counts no matter what I set the stats target to (this was also before CREATE STATISTICS).

                        Are there any plans to enable optimization through CTEs in pg

                        it’s on the table for version 12

                        1. 2

                          It’s not necessarily less efficient due to the optimization fence, it all depends on your workload. The underlying reason is a conscious design decision, not a technical issue. There have been lots of discussions around changing it, or at least to provide the option per CTE on how to plan/optimize it. There are patches on the -hackers mailing list but so far nothing has made it in.

                        2. 1

                          Does anyone know if CTEs are an optimization fence in DB2 as well?

                        1. 2

                          Can someone ELI5 why Firefox is not to be trusted anymore?

                          1. 4

                            They’ve done some questionable things. They did this weird tie-in with Mr. Robot, a TV show, where they auto-installed a plugin (though thankfully disabled) for basically everyone as part of an update. It wasn’t enabled by default if I remember right, but it got installed everywhere.

                            Their income stream, according to Wikipedia, is funded by donations and “search royalties”. But really their entire revenue stream comes directly from Google. Also, in 2012 they failed an IRS audit, having to pay 1.5 million dollars. Hopefully they learned their lesson; time will tell.

                            They bought Pocket and said it would be open-sourced, but it’s been over a year now, and so far only the FF plugin is OSS.

                            1. 4

                              Some of this isn’t true.

                              1. Mr. Robot was like a promotion, but not a paid thing, like an ad. Someone thought this was a good idea and managed to bypass code review. This won’t happen again.
                              2. Money comes from a variety of search providers, depending on locale. Money goes directly into the people, the engineers, the product. There are no stakeholders we need to make happy. No corporations we got to talk to. Search providers come to us to get our users.
                              3. Pocket. Still not everything, but much more than the add-on: https://github.com/Pocket?tab=repositories
                              1. 3
                                1. OK, fair enough, but I never used the word “ad”. Glad it won’t happen again.

                                2. When like 80 or 90% of their funding comes directly from Google… it at the very least raises questions. So I wouldn’t say it’s not true; perhaps I over-simplified, and fair enough.

                                3. YAY! Good to know. I hadn’t checked in a while, happy to be wrong here. Hopefully this will continue.

                                But overall thank you for elaborating. I was trying to keep it simple, but I don’t disagree with anything you said here. Also, I still use FF as my default browser. It’s the best of the options.

                              2. 4

                                But really their entire revenue stream comes directly from Google.

                                To put this part another way: the majority of their income comes from auctioning off being the default search bar target. That happens to be worth somewhere in the hundreds of millions of dollars to Google, but Microsoft also bid (as did other search engines in other parts of the world; IIRC the choice is localised) - Google just bid higher. There’s a meta-level criticism that Mozilla can’t afford to challenge /all/ the possible corporate bidders for that search placement, but they aren’t directly beholden to Google in the way the previous poster suggests.

                                1. 1

                                  Agreed. Except it’s well over half of their income, I think it’s up in the 80% or 90% range of how much of their funding comes from Google.

                                  1. 2

                                    And if they diversify and, say, sell out tiles on the new tab screen? Or integrate read-it-later services? That also doesn’t fly as recent history has shown.

                                    People ask Mozilla not to sell ads, not to take money for search engine integration, not to partner with media properties, and still to keep up their investment in the development of the platform.

                                    People don’t offer any explanation of how Mozilla can do that while also rejecting all of their means of making money.

                                    1. 2

                                      Agreed. I assume this wasn’t an attack on me personally, and just as a comment of the sad state of FF’s diversification woes. They definitely need diversification. I don’t have any awesome suggestions here, except I think they need to diversify. Having all your income controlled by one source is almost always a terrible idea long-term.

                                      I don’t have problems, personally, with their selling of search integration, I have problems with Google essentially being their only income stream. I think it’s great they are trying to diversify, and I like that they do search integration by region/area, so at least it’s not 100% Google. I hope they continue testing the waters and finding new ways to diversify. I’m sure some will be mistakes, but hopefully with time, they can get Google(or anyone else) down around the 40-50% range.

                                    2. 1

                                      That’s what “majority of their income” means. Or at least that’s what I intended it to mean!

                                2. 2

                                  You also have the fact that they are based in the USA, which means following American laws. Regarding personal data, those laws are not very protective, and even less so if you are not an American citizen.

                                  Moreover, they are testing in Nightly the use of Cloudflare’s DNS as the resolver, even if the operating system configures another one. A DNS resolver sees every domain name you resolve, which means it knows which websites you visit. You should be able to disable this in about:config, but putting it there and not in the Firefox preferences menu is a clear indication that it is not meant to be easily done.

                                  You can also add the fact that it is not easy to self-host the data stored in your browser. Can I be sure that data is not sold, when their primary financial supporter is Google, whose revenue is based on data?

                                  1. 3

                                    Mozilla does not have your personal data. Whatever they have for sync is encrypted in such a way that it cannot be tied to an account or decrypted.

                                    1. 1

                                      They have my sync data; sync data is personal data, so they have my personal data. How do they encrypt it? Do you have any link about how they manage it? In which country is it stored? What law applies to it?

                                      1. 4

                                        Mozilla has your encrypted sync data. They do not have the key to decrypt that data. Your key never leaves your computer. All data is encrypted and decrypted locally in Firefox with a key that only you have.

                                        Your data is encrypted with very strong crypto and the encryption key is derived from your password with a very strong key derivation algorithm. All locally.

                                        The encrypted data is copied to and from Mozilla’s servers. The servers are dumb and do not actually know or do crypto. They just store blobs. The servers are in the USA and on AWS.

                                        The worst that can happen is that Mozilla has to hand over data to some three-letter organization, which can then run their supercomputer for a thousand years to brute-force the decryption of your data. Firefox Sync is designed with this scenario in mind.

                                        This of course assuming that your password is not ‘hunter2’.

                                        It is starting to sound like you went through this effort because you don’t trust Mozilla with your data. That is totally fair, but I think that if you had understood the architecture a bit better, you may actually not have decided to self host. This is all put together really well, and with privacy and data breaches in mind. IMO there is very little reason to self host.
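                                        To make the “key never leaves your computer” point concrete, here is a minimal sketch of client-side key derivation in Python. This is not the actual Firefox Sync protocol (which uses its own onepw/HKDF-based scheme); the function name, salt, and iteration count are illustrative only:

```python
import hashlib

# Illustrative sketch, NOT the real Firefox Sync scheme: the point is
# only that the key is derived locally and the server never sees it.
def derive_key(password: str, salt: bytes) -> bytes:
    # A slow, strong KDF: PBKDF2-HMAC-SHA256 with 200k iterations.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)

salt = b"per-account-salt"  # hypothetical; real schemes use a random salt
key = derive_key("correct horse battery staple", salt)

# The server only ever stores blobs encrypted under `key`; brute-forcing
# them means grinding through this KDF for every password guess.
assert len(key) == 32                      # a 256-bit key
assert key != derive_key("hunter2", salt)  # different password, different key
```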

                                        1. 1

                                          “The worst that can happen is that Mozilla has to hand over data to some three-letter organization, which can then run their supercomputer for a thousand years to brute-force the decryption of your data. Firefox Sync is designed with this scenario in mind.”

                                          That’s not the worst by far. The Core Secrets leak indicated they were compelling suppliers, via the FBI, to put in backdoors. So, they’d either pay/force a developer to insert a weakness that looks accidental, push malware in during an update, or (most likely) just use a browser sploit on the target.

                                          1. 1

                                            In all of those cases, it’s game over for your browser data regardless of whether you use Firefox Sync, Mozilla-hosted or otherwise.

                                            1. 1

                                              That’s true! Unless they rewrite it all in Rust with overflow checking on. And in a form that an info-flow analyzer can check for leaks. ;)

                                          2. 1

                                            As you said, it’s totally fair to not trust Mozilla with data. As part of that, it should always be possible/supported to “self-host”, as a means to keep that as an option. Enough said to that point.

                                            As to “understanding the architecture”, it also comes down to appreciating the business practices, ethics, and willingness to work within the privacy laws of a given jurisdiction. This isn’t being conveyed well by any of the major players, so with the minor ones having to cater to those “big guys”, it’s no surprise that mistrust would be present here.

                                          3. 2

                                            How do they encrypt it?

                                            On the client, of course. (Even Chrome does this the same way.) Firefox is open source, you can find out yourself how exactly everything is done. I found this keys module, if you really care, you can find where the encrypt operation is invoked and what data is there, etc.

                                            1. 2

                                              You don’t have to give it to them. Firefox sync is totally optional, I for one don’t use it.

                                              Country: almost certainly the USA. Encryption: looks like this is what they use: https://wiki.mozilla.org/Labs/Weave/Developer/Crypto

                                          4. 2

                                            The move to Cloudflare as DNS over HTTPS is annoying enough to make me consider other browsers.

                                            You can also add the fact that it is not easy to self-host the data stored in your browser. Can I be sure that data is not sold, when their primary financial supporter is Google, whose revenue is based on data?

                                            Please, no FUD. :)

                                            1. 3

                                              move to Cloudflare

                                              It’s an experiment, not a permanent “move”. Right now you can manually set your own resolver and enable or disable DoH in about:config.
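                                              For anyone looking for the switches mentioned: to the best of my knowledge, the relevant TRR (Trusted Recursive Resolver) prefs in about:config are the following (values may change between Firefox versions):

```
network.trr.mode: 0 = off (default), 2 = DoH first with fallback, 3 = DoH only, 5 = explicitly disabled
network.trr.uri:  the DoH endpoint, e.g. https://mozilla.cloudflare-dns.com/dns-query
```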

                                        1. 8

                                          I’d honestly much rather have software update itself using the official OS-level updating process rather than using some home-grown mechanism. Point is: once something runs on your machine, it has the ability to alter your machine as it sees fit.

                                          Sure, some changes require elevated privileges. But whether it’s Skype asking for sudo to install the update it has downloaded and then abusing its privilege to alter your system in undesired ways, or whether it’s Skype’s repository containing undesired packages, makes no difference.

                                          To the contrary: apt can be configured to ask before installing anything and normally even does so by default.

                                          The only change that could possibly placate the author would be to remove all auto-updating capability, but that would be much worse for everybody if there ever was a remotely exploitable vulnerability in Skype, because then the attack vector shifts from “Microsoft can compromise your machine” to “everybody can compromise your machine”, and for many users there is no obvious way, or even the understanding, to do something about this.

                                          1. 4

                                            It’s a bit of a tough choice. With the current state of things, most users would see a massive improvement from switching away from ISP DNS servers that admit to collecting and selling your data, and over to Cloudflare, which has agreed to protect privacy.

                                            In the end, you have to trust someone for your DNS. Mozilla could probably host it themselves, but they also don’t have the wide spread of server locations that a CDN company has.

                                            1. 5

                                              While I agree that you need to trust someone with your DNS, it shouldn’t be a specific app making that choice for you. A household, or even a user with multiple devices, benefits from their router caching DNS results for multiple devices; every app on every device doing this independently is foolish. If Mozilla wants to help users, they can run an informational campaign. Setting a precedent of apps each using their own DNS and circumventing what users have set for themselves is the worst solution.

                                              1. 1

                                                It isn’t ideal that Firefox is doing DNS in-app, but it’s the most realistic solution. They could try to get Microsoft, Apple and all the Linux distros to change to DNS over HTTPS and maybe in 5 years we might all have it, or they could just do it themselves and we all have it in a few months. Once Firefox has proven it works really well, OS vendors will start adding it, and Firefox can remove their own version, or distros will patch it to use the system DoH.

                                                1. 6

                                                  They could try and get microsoft, apple and all linux distros to change to DNS over HTTPS

                                                  I don’t WANT DNS over HTTPS. I especially don’t want DNS over HTTP/2.0. There’s a lot of value in having protocols that are easy to implement, debug, and understand at a low level, and none of those families of protocols are that.

                                                  Add TLS, maybe – it’s also a horrendous mess, but since DNSCurve seems to be dead, it may get enough traction. Cloudflare, if they really want, can do protocol sniffing on port 443. But please, let’s not make the house-of-cards protocol stack that is the internet even more complex.

                                                  1. 8

                                                    DNS is “easy to implement, debug, and understand”? That’s news to me.

                                                    1. 5

                                                      it’s for sure easier than when tunneled over HTTP/2 > SSL > TCP, because that’s how DoH works. The payload being transmitted over HTTP is actual binary DNS packets, so all this does is add complexity overhead.

                                                      I’m not a big fan of DoH because of that and also because this means that by default intranet and development sites won’t be available any more to users and developers, invalidating an age-old concept of having private DNS.

                                                      So either you now need to deploy customized browser packages, or tweak browser’s configs via group policy or equivalent functionality (if available), or expose your intranet names to public DNS which is a security downgrade from the status quo.
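                                                      To illustrate “binary DNS packets inside HTTP”: the sketch below hand-builds a classic RFC 1035 query in Python. A DoH client POSTs exactly these bytes as the request body (Content-Type: application/dns-message), so the binary format never goes away; the hostname here is just an example:

```python
import struct

# Build a raw DNS query packet (RFC 1035). DoH (RFC 8484) sends these
# same bytes as an HTTP body, layered over HTTP/2 over TLS over TCP.
def build_query(name: str, qtype: int = 1, qid: int = 0x1234) -> bytes:
    # Header: ID, flags (RD=1), QDCOUNT=1, AN/NS/ARCOUNT=0
    header = struct.pack(">HHHHHH", qid, 0x0100, 1, 0, 0, 0)
    # QNAME: length-prefixed labels, terminated by a zero byte
    qname = b"".join(
        bytes([len(label)]) + label.encode() for label in name.split(".")
    ) + b"\x00"
    # Question: QNAME + QTYPE (1 = A record) + QCLASS (1 = IN)
    return header + qname + struct.pack(">HH", qtype, 1)

pkt = build_query("intranet.example")
print(len(pkt))  # prints 34: a 12-byte header plus the encoded question
```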

                                                      1. 3

                                                        It is when you have a decent library to encode/decode DNS packets and UDP is nearly trivial to deal with compared to TCP (much less TLS).

                                                      2. 0

                                                        Stacking protocols makes things simpler. Instead of having to understand a massive protocol that sits on its own, you now only have to understand the layer that you are interested in. I haven’t looked into DNS, but I can’t imagine it’s too simple. It’s incredibly trivial for me to experiment and develop with applications running on top of HTTP because all of the tools already exist for it and aren’t specific to DoH. You can also share software and libraries, so you only need one HTTP library for a lot of protocols instead of them all managing sending data over TCP.

                                                        1. 6

                                                          But the thing transmitted over HTTP is binary DNS packets. So when debugging you still need to know how DNS packets are built, but you now also have to deal with HTTP on top. Your HTTP libraries only give you a view into the HTTP part of the protocol stack, not the DNS part, so when you need to debug that, you’re back to square one, but now you also need your HTTP libraries.

                                                          1. 6

                                                            And don’t forget that HTTP/2 is basically a binary version of HTTP, so now you have to do two translation steps! Also, because DoH is basically just the original DNS encoding, it only adds complexity. For instance, the spec itself points out that you have two levels of error handling: One of HTTP errors (let’s say a 502 because the server is overloaded) and one of DNS errors.

                                                            It makes more sense to just encode DNS over TLS (without the unnecessary HTTP/2 stuff), or to completely ditch the regular DNS spec and use a different wire format based on JSON or XML over HTTP.

                                                            1. 4

                                                              And don’t forget that HTTP/2 is basically a binary version of HTTP

                                                              If only it was that simple. There’s server push, multi-streaming, flow control, and a huge amount of other stuff on top of HTTP/2, which gives it a relatively huge attack surface compared to just using (potentially encrypted) UDP packets.

                                                              1. 3

                                                                Yeah, I forgot about all that extra stuff. It’s there (and thus can be exploited), even if it’s not strictly needed for DoH (I really like that acronym for this, BTW :P)

                                                    2. 1

                                                      it shouldn’t be a specific app making that choice for you

                                                      I think there is a disconnect here between what security researchers know to be true vs what most people / IT professionals think is true.

                                                      Security, in this case privacy and data integrity, is best handled with the awareness of the application, not by trying to make it part of the network or infrastructure levels. That mostly doesn’t work.

                                                      You can’t get any reasonable security guarantees from the vast majority of local network equipment / CPE. To provide any kind of privacy, the application is the right security barrier, not your local network or ISP.

                                                      1. 3

                                                        I agree that sensible defaults will increase security for the majority of users, and there is something to be said for one’s browser being the single most DNS-hungry app for that same majority.

                                                        If it’s an option that one can simply override (which appears to be the case), then why not? It will improve things for lots of people, and those who choose to have the same type of security (dnscrypt/dnssec/future DNS improvements) on their host or router can do so.

                                                        But I can’t help thinking it’s a bit of a duct-tape solution to bigger issues with DNS overall as a technology and the privacy concerns that it represents.

                                                  1. 2

                                                    IMHO, copying a file to a local machine should have no side effects aside from the file existing on the machine.

                                                    It would be a totally fine compromise to have to explicitly launch the application before macOS registers the URL or filetype handler. Once you trick users into launching your malware, all bets are off anyway and it doesn’t matter whether your malware then also registers a URL handler or not.

                                                    1. 17

                                                      An interesting aspect of this: their employees’ credentials were compromised by intercepting two-factor authentication that used SMS. Security folks have been complaining about SMS-based 2FA for a while, but it’s still a common configuration on big cloud providers.

                                                      1. 11

                                                      What’s especially bugging me is platforms like Twitter that do provide alternatives to SMS for 2FA, but still require SMS to be enabled even if you want to use safer means. The moment you remove your phone number from Twitter, all of 2FA is disabled.

                                                        The problem is that if SMS is an option, that’s going to be what an attacker uses. It doesn’t matter that I myself always use a Yubikey.

                                                        But the worst are services that also use that 2FA phone number they got for password recovery. Forgot your password? No problem. Just type the code we just sent you via SMS.

                                                      This effectively reduces the strength of your overall account security to the ability of your phone company to resist social engineering. Your phone company, who has trained their call center agents to handle “customer” requests as quickly and efficiently as possible.

                                                      update: I just noticed that Twitter has fixed this and you can now disable SMS while keeping TOTP and U2F enabled.

                                                        1. 2

                                                          But the worst are services that also use that 2FA phone number they got for password recovery. Forgot your password? No problem. Just type the code we just sent you via SMS.

                                                          I get why they do this from a convenience perspective, but it bugs me to call the result 2FA. If you can change the password through the SMS recovery method, password and SMS aren’t two separate authentication factors, it’s just 1FA!

                                                          1. 1

                                                            Have sites been keeping SMS given the cost of supporting locked out users? Lost phones are a frequent occurrence. I wonder if sites have thought about implementing really slow, but automated recovery processes to avoid this issue. Going through support with Google after losing your phone is painful, but smaller sites don’t have a support staff at all, so they are likely to keep allowing SMS since your mobile phone number is pretty recoverable.

                                                            1. 1

In the case of the many accounts that are now de facto protected by nothing but a single, easily hackable SMS, I'd much rather lose access than risk somebody else getting in.

If there were a way to tell these services and my phone company that I absolutely never want to recover my account, I would do that in a heartbeat.

                                                            2. 1

                                                              This effectively reduces the strength of your overall account security to the ability of your phone company to resist social engineering. Your phone company who has trained their call center agents to handle „customer“ requests as quickly and efficiently as possible.

                                                              True. Also, if you have the target’s phone number, you can skip the social engineering, and go directly for SS7 hacks.

                                                            3. 1

I don’t remember the details, but there is a specific carrier (T-Mobile, I think?) that is extremely susceptible to SMS interception, and it’s people on their network that have been getting targeted for attacks like this.

                                                              1. 4

Your mobile phone number can relatively easily be stolen (more specifically: ported out to another network by an attacker). This happened to me on T-Mobile, but I believe it is possible on other networks too. In my case my phone number was used to set up Zelle and transfer money out of my bank account.

                                                                This article actually provides more detail on the method attackers have used to port your number: https://motherboard.vice.com/en_us/article/vbqax3/hackers-sim-swapping-steal-phone-numbers-instagram-bitcoin

                                                                1. 1

T-Mobile sent a text message blast to all customers many months ago urging users to set up a security code on their account to prevent this. Did you do it?

                                                                  Feb 1, 2018: “T-Mobile Alert: We have identified an industry-wide phone number port out scam and encourage you to add account security. Learn more: t-mo.co/secure”

                                                                  1. 1

                                                                    Yeah I did after recovering my number. Sadly this action was taken in response to myself and others having been attacked already :)

                                                            1. 6

                                                              I want to know where Microsoft and Apple stand on AV1. I remember when all the major players were duking it out over WebM or H.264; H.264 won (and Mozilla and Opera, who were pushing WebM, got pressured into adding patent-encumbered H.264 into their browsers by market forces).

                                                              AFAICT, that happened for three big reasons:

                                                              1. Apple and Microsoft implemented H.264 and refused to implement WebM. In retrospect I guess that made more sense for Microsoft since they were still in “we blindly hate anything with the word ‘open’ in it” mode. Apple made less sense to me.
                                                              2. Google promised that Chrome would drop H.264 support, but never followed through. At the time <video> was new enough, and Chrome had enough market share, that I really think they would have been able to turn the tide and score a victory for WebM if they had been serious. But apparently they weren’t.
                                                              3. H.264 had hardware partnerships which meant decoding was often hardware-accelerated - especially important for mobile performance. But I have no idea where I know that from so Citation Needed™.

                                                              I dunno, I think there’s hope for AV1 but that a lot could still go wrong. Apple I am particularly worried about due to iOS’ market share. If they refuse to implement the standard, it could seriously harm or even kill widespread adoption. But OTOH, maybe I’m just a pessimist :P

                                                              1. 6

A few months ago, Apple announced that they had joined the AV1 group, and Microsoft was a founding member. That makes me much more optimistic than I was about previous open formats.

                                                                I think the MPEG-LA really fucked things up with the minefield they set up for H.265.

                                                                https://www.cnet.com/google-amp/news/apple-online-video-compression-av1/

                                                                https://en.m.wikipedia.org/wiki/Alliance_for_Open_Media

                                                                1. 5

                                                                  Apple and Microsoft implemented H.264 and refused to implement WebM. In retrospect I guess that made more sense for Microsoft since they were still in “we blindly hate anything with the word ‘open’ in it” mode. Apple made less sense to me.

Apple and Microsoft are both large corporations, and thus hydras; what one head says doesn’t necessarily reflect another. Still, they both have a foot in three awful races: trying to be a monopoly without appearing to be one to regulators; heavy investment in software patents (a lose-lose game for everyone, but there’s a sunk-cost problem here); and heavy investment in, and affiliation with, proprietary media companies.

I think the rest of your analysis on why h.264 made it in is right in general. Also, Cisco did the “here’s an open source h.264 implementation except if you modify it we might sue you for patent violations, so it’s not free software in practice” thing, and that was enough for various parties to check a box on their end, sadly.

BTW, I sat in on some of the RTCWeb IETF meetings where the battle over whether or not we would move to a royalty-free default video codec on the web played out. I watched as a room mostly full of web activists not wanting patent-encumbered video to overtake the web were steamrolled by a variety of corporate representatives (Apple especially). A real bummer.

                                                                  I’d like AV1 to do better… maybe it can by being actually better technology, and reducing a company’s bottom line by having a smaller bandwidth footprint, as it looks like they’re aiming for here. Dunno. Would love to hear more about strategy there.

                                                                  1. 1

                                                                    Also, Cisco did the “here’s an open source h.264 implementation except if you modify it we might sue you for patent violations, so it’s not free software in practice” thing, and that was enough for various parties to check a box on their end, sadly.

                                                                    What exactly was happening there? IIRC Cisco basically said “we’ll eat the licensing costs on this particular implementation to fix this problem” so Mozilla/Opera(?) ended up using that to avoid the fees. Is that not what happened?

                                                                    I definitely remember Mozilla attempting to hold out for as long as possible. Eventually it became clear that Firefox couldn’t compete in the market without H.264 and that’s when the Cisco plugin went in.

                                                                    I watched as a room mostly full of web activists not wanting patent-encumbered video to overtake the web were steamrolled by a variety of corporate representatives (Apple especially).

                                                                    This is super gross.

                                                                  2. 3

                                                                    Apple made less sense to me

                                                                    Apple is extremely sensitive to things that affect battery life of iOS devices. H.264 can be decoded in hardware on their devices. WebM would have to be decoded in software, so supporting it would be a worse experience for device reliability (battery would drain really fast on sites with lots of WebM content).

                                                                  1. 3

                                                                    Oh IE - I had no idea about this one:

                                                                    Internet Explorer treats ` as an attribute delimiter

                                                                    I’m tempted to leave IE users totally exposed to this one. This security issue IMHO is up to the browser vendor to fix, not the individual pages on the internet.
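For illustration, here’s a sketch (Python, with a hypothetical payload): a standard attribute escaper handles quotes and angle brackets but leaves backticks alone, so an escaper that also wants to cover old IE has to neutralize them explicitly:

```python
import html

def escape_attr_ie_safe(value: str) -> str:
    # html.escape covers & < > " '; old IE additionally treats ` as an
    # attribute delimiter, so neutralize it as well.
    return html.escape(value, quote=True).replace("`", "&#96;")

payload = "x` onmouseover=alert(1) `"
# Standard escaping passes the backticks through untouched:
assert "`" in html.escape(payload, quote=True)
# The hardened variant does not:
assert "`" not in escape_attr_ie_safe(payload)
```

Whether defending against a single vendor’s quirk is the page author’s job is, as you say, debatable.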

                                                                    1. 23

I think that ARM does not realize what they just did.

Besides being a stupid idea akin to Microsoft’s ‘Get the Facts’ campaign, it means people are now starting to learn what RISC-V is and that it’s an alternative to ARM.

Before ARM made that site, people did not even know RISC-V existed :)

                                                                      1. 9

Before ARM made that site, people did not even know RISC-V existed :)

I’m just one data point and I’m more of a software rather than hardware person, so I don’t really matter, but yes. I had no idea about RISC-V before Matthew Garrett tweeted about this page. Nice to see an open design. This would definitely be something to consider if I ever have to deal with hardware at this level.

                                                                        1. 4

                                                                          I’m a little new to RISC-V but I see a whole lot of very familiar names up on this wall: https://pbs.twimg.com/media/DgyJOMwX0AAeSgx.jpg:large

                                                                          So while it might not be as mainstream as ARM, my impression is that the industry knows about RISC-V and is watching it very carefully.

                                                                        1. 8

                                                                          Given that most popular email clients these days are awful and can’t handle basic tasks like “sending email” properly

                                                                          I agree with the sentiment in general, but once you’re in the position where everybody else does it wrong and you’re the last person on the planet that does it right, then maybe it’s time to acknowledge that the times have changed and that the old way has been replaced by the new way and that maybe it is you who is wrong and not everybody else.

                                                                          And I’m saying this as a huge fan of plain-text only email, message threading and inline quotes using nested > to define the quote level.

                                                                          It’s just that I acknowledge that I have become a fossil as the times have changed.

                                                                          1. 3

                                                                            once you’re in the position where everybody else does it wrong and you’re the last person on the planet that does it right

                                                                            Thankfully we haven’t reached this position for email usage on technical projects. Operating systems, browsers, and databases still use developer mailing lists, and system programmers know how to format emails properly for the benefit of precise line-oriented tools.

                                                                            I acknowledge that I have become a fossil as the times have changed

                                                                            If the technology and processes you prefer have intrinsic merit, then why regretfully and silently abandon them? I’m not saying we should refuse to cooperate on interesting new projects simply because they use slightly worse development processes. But we should let people know about the existence of other tools and ways to collaborate, and explain the pros and cons.

                                                                            1. 2

                                                                              If the technology and processes you prefer have intrinsic merit, then why regretfully and silently abandon them?

                                                                              Because when I didn’t, people were complaining about my quoting style, not understanding which part of the message was mine and which wasn’t and complaining that me stripping off all the useless bottom quote caused them to lose context.

                                                                              This was a fight it didn’t feel worth fighting.

                                                                              I can still use my old usenet quoting habits when talking to other old people on mailing lists (which is another technology on the way out it seems), but I wouldn’t say that the other platforms and quoting styles the majority of internet users use these days are wrong.

After all, if the majority uses them, they might well be the thing that finally helped the “other” people get online and do their work, so it might very well be time for our “antiquated” ways to die off.

                                                                            2. 1

I’d like to try to convince you that it’s *good* that plain text email is no longer the norm.

                                                                              First, let’s dispense with a false dichotomy: I’m not a fan of HTML emails that are heavy on layout tables and (especially) images with no text equivalents. Given my passion for accessibility (see my profile), that should come as no surprise.

                                                                              But HTML emails are good for one thing: providing hyperlinks without exposing URLs to people. As much as good web developers aim for elegant URLs, the fact remains that URLs are for machines, not people. A hyperlink with descriptive text, where the URL is available if and only if the reader really wants it, is more humane.

For longer emails, HTML is also good for conveying the structure of the text, e.g. headings and lists.

                                                                              Granted, Markdown could accomplish the same things. But HTML email actually took off. Of course, you could hack together a system that would let you compose an email in Markdown and send it in both plain text and HTML. For folks like us that don’t prefer WYSIWYG editors, that might be the best of all worlds.
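A sketch of that last idea using only Python’s standard library (the Markdown-to-HTML step is omitted; both bodies are assumed to already exist):

```python
from email.message import EmailMessage

def build_email(subject: str, plain: str, html_body: str) -> EmailMessage:
    # multipart/alternative: text-mode clients render the plain part,
    # everything else prefers the HTML part.
    msg = EmailMessage()
    msg["Subject"] = subject
    msg.set_content(plain)                          # text/plain
    msg.add_alternative(html_body, subtype="html")  # text/html
    return msg

msg = build_email(
    "Docs link",
    "Read the docs: https://example.com/docs\n",
    '<p>Read <a href="https://example.com/docs">the docs</a>.</p>',
)
```

Clients that can’t or won’t render HTML fall back to the plain part, so nobody is locked out.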

                                                                              1. 2

                                                                                But HTML emails are good for one thing: providing hyperlinks without exposing URLs to people.

                                                                                That doesn’t come without a huge cost. People don’t realize that they need to know the underlying URL and don’t care to pay attention to it. That leads to people going places they didn’t expect or getting phished and the like.

Those same people probably wouldn’t notice the difference between login.youremail.com and login.yourema.il.com either, though. So I’m not saying the URL is the solution, but at least putting it in front of you gives you a chance.

                                                                                1. 2

                                                                                  As much as good web developers aim for elegant URLs, the fact remains that URLs are for machines, not people.

                                                                                  I’m not sure about this… at least the whole point of DNS is to allow humans to understand URLs. Unreadable URLs seem to be a relatively recent development in the war against users.

                                                                                  1. 2

                                                                                    Not only do I completely agree with you but you are also absolutely right about that.

                                                                                    Excerpt from section 4.5 of the RFC3986 - Uniform Resource Identifier (URI): Generic Syntax:

                                                                                    Such references are primarily intended for human interpretation
                                                                                    rather than for machines, with the assumption that context-based
                                                                                    heuristics are sufficient to complete the URI [...]
                                                                                    

BTW, the above URL is a perfect example of what one should look like.

                                                                                  2. 1

                                                                                    Personally, I hate HTML in email - I don’t think it belongs there. Mainly, for the very reasons you had just mentioned.

Let’s take phishing, for example - and spear phishing in particular. At the institution where I work, people - especially those at the top - are being targeted. And it’s no longer the “click here” type of email - institutional HTML layouts are being used to great effect to collect people’s personal data (passwords, mainly). With the whole abstraction, people cannot distinguish whether an email, or even a particular link, is genuine.

When it comes to the structure itself, all of that can be achieved with plain text email - the conventions used predate Markdown, BTW, and are just as readable as they were several decades ago.

                                                                                    1. 1

Are these conventions well-defined? Is there some document which describes conventions for stuff like delimiting sections of plain-text emails?

                                                                                    2. 1

                                                                                      It’s just that I acknowledge that I have become a fossil as the times have changed.

                                                                                      Well, there are just too many of us fossils to acknowledge this just yet.

                                                                                    1. 23

I believe we are beginning to see the downfall of YouTube as we know it. They are really going above and beyond to ruin their own platform and reputation.

                                                                                      1. 8

That has been happening for a couple of years now. All the content that made YouTube popular is nowadays shunned and banned by the recommendation algos. In short, if it cannot be monetized by US linear-TV standards, it cannot be found in search or recommendations. So unless you already have several hundred thousand followers (and ads enabled), your content is family friendly, and you have used thousands of dollars worth of equipment, there are no new viewers.

This hit people filming motorcycle-related videos pretty hard, as apparently that is very media-unsexy content in the US. That describes most of my YouTube subscriptions, and from most of them I watch every video they produce. Yet my YouTube “home”/“recommended” section is full of things that are not related in any way to my most-watched content.

                                                                                        1. 7

Yes. This is the straw that breaks the camel’s back. The blocking of help videos for a 3D modeller is going to be the downfall of YouTube. Unable to learn how to use their 3D modelling software, the masses will wander off to different venues in droves.

                                                                                          /s

(without snark: nobody outside of our little circle here cares about this. Not the advertisers, not YouTube, not the general audience, not the press. This is entirely inconsequential to YouTube’s future)

                                                                                          1. 4

                                                                                            You might compare it to gentrification. You cater to the middle ground, the cool stuff around the edges is pushed out, the really creative people abandon the platform, you’re left with the most generic content. Blender is just the latest victim of a broad trend.

                                                                                            Most people may not “care” about Blender specifically, but they should care about an opaque platform that caters to the IP needs of multinationals in overly broad ways and incentivizes some really messed up behavior.

                                                                                          2. 4

                                                                                            It will be awesome to see what the video hosting landscape will be like when PeerTube reaches its height of popularity!

                                                                                            1. 3

I was checking PeerTube yesterday and it’s a huge change from the YouTube user experience. A lot more involved, and a lot less intuitive. I have a hard time imagining mass adoption with what I saw. Are there any good beginner-friendly tutorials/intros to PeerTube out there?

                                                                                              1. 3

                                                                                                Take a look at https://d.tube/ too. It’s much closer to the youtube experience.

                                                                                                1. 1

You can always check this out, I guess: https://joinpeertube.org/en/#how-it-works

                                                                                            1. 14

                                                                                              I really hate browser notifications. I never click yes ever. It feels like preventing browsers from going down this hole is just yet another hack. The Spammers and the CAPTCHAers are fighting a continuous war, all because of the 2% of people who actually click on SPAM.

                                                                                              1. 7

                                                                                                I’m amazed there is no “deny all” setting for this

                                                                                                1. 5

                                                                                                  My firefox has that in the settings somewhere:

                                                                                                  [X] Block new requests asking to allow notifications

                                                                                                  This will prevent any websites not listed above from requesting permission to send notifications. Blocking notifications may break some website features.

                                                                                                  help links here: https://support.mozilla.org/en-US/kb/push-notifications-firefox?as=u&utm_source=inproduct

                                                                                                  1. 2

Did anyone find the about:config setting for this, to put in one’s user.js? I am aware of dom.webnotifications.enabled, but I don’t want to disable it completely because there are 3 websites whose notifications I want.

                                                                                                    1. 3

                                                                                                      permissions.default.desktop-notification = 2
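In a user.js file, that pref would look like the line below (0 = always ask, 1 = allow, 2 = block; double-check the values on your Firefox version). Per-site grants you’ve already made live in the profile’s permissions database, not in prefs, so your 3 allowed sites should keep working:

```js
// Deny new desktop-notification permission requests by default
user_pref("permissions.default.desktop-notification", 2);
```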

                                                                                                  2. 1

There always has been one in Chrome and Safari, and since very recently there’s also one in Firefox. Notifications are the first thing I turn off whenever I configure a new browser. I can’t possibly think of anybody actually actively wanting notifications to be delivered to them.

                                                                                                    Sure, there’s some web apps like gmail, but even there - I’d rather use a native app for this.

                                                                                                    1. 3

                                                                                                      I can’t possibly think of anybody actually actively wanting notifications to be delivered to them.

                                                                                                      Users of web-based chat software. I primarily use native apps for that, but occasionally I need to use a chat system that I don’t want to bother installing locally. And it’s nice to have a web backup for when the native app breaks. (I’m looking at you, HipChat for Windows.)

                                                                                                  3. 5

                                                                                                    There is a default deny option in Chrome, takes a little digging to find though. But I agree that it’s crazy how widespread sites trying to use notification are. There’s like 1 or 2 sites that I actually want them from, but it seems like every single news site and random blog wants to be able to send notifications. And they usually do it immediately upon loading the page, before you’ve even read the article, much less clicked something about wanting to be notified of future posts or something.

                                                                                                    1. 1

                                                                                                      The only time I have clicked “yes” for notifications is for forums (Discourse only at this point) that offer notifications of replies and DMs. I don’t see a need for any other websites to need to notify me.

                                                                                                    1. 2

Still makes me sad that even in UTF-8 there are invalid code points, i.e. you have to double-inspect every damn byte if you’re doing data mining.

Typically in data mining you are presented with source material. It’s not your material; it’s whatever is given to you.

If somebody has screwed up the Unicode encoding, you can’t fix it. You have to work with whatever hits the fan, and everything else in your ecosystem is going to barf if you throw an invalid code point at it, even if it was just going to ignore it anyway.

So you first have to inspect every byte to see if it’s a valid code point and then squash invalid ones on the fly to the special invalid thingy, i.e. double work for each byte, and you can’t just mmap the file.

                                                                                                      Ah for The Good Old Bad Old Days of 8bit ascii.
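
                                                                                                      That scrub-on-decode step can be sketched in a few lines of Python; the sample bytes are invented for illustration:

                                                                                                      ```python
                                                                                                      # Sample input containing both valid UTF-8 and junk bytes
                                                                                                      # (a stray 0xFF and an encoded surrogate, both ill-formed UTF-8).
                                                                                                      raw = b"valid text \xff then junk \xed\xa0\x80 surrogate"

                                                                                                      # errors="replace" squashes every ill-formed sequence to U+FFFD,
                                                                                                      # the Unicode replacement character.
                                                                                                      clean = raw.decode("utf-8", errors="replace")
                                                                                                      assert "\ufffd" in clean
                                                                                                      ```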

                                                                                                      1. 6

                                                                                                        Still makes me sad that even in UTF-8 there are invalid code points. ie. You have to double inspect every damn byte if you’re doing data mining.

                                                                                                        I disagree. It’s an amazing feature of UTF-8 because it allows me to be certain to exclude utf-8 from a list of possible encodings a body of text might have. No other 8-bit encoding has that feature. A blob of bytes that happens to be text encoded in ISO-8859-1 looks exactly the same as a blob of bytes that is encoded in ISO-8859-3, but it can’t be utf-8 (at least when it’s using anything outside of the ASCII range).

                                                                                                        Ah for The Good Old Bad Old Days of 8bit ascii.

                                                                                                        If you need to make sense of the data you have mined, the Old Days were as bad as the new days: you’re still stuck guessing the encoding by interpreting the blob of bytes as different encodings and then checking whether the text makes sense in any of the languages that could have been used with your candidate encoding.

                                                                                                        This is incredibly hard and error-prone.
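
                                                                                                        That exclusion test is mechanical: a strict decode attempt settles it. A minimal sketch (the sample strings are arbitrary):

                                                                                                        ```python
                                                                                                        def could_be_utf8(data: bytes) -> bool:
                                                                                                            """True iff data is well-formed UTF-8.

                                                                                                            UTF-8's strict structural rules mean a strict decode attempt
                                                                                                            settles the question; no legacy 8-bit encoding can be ruled
                                                                                                            out this way.
                                                                                                            """
                                                                                                            try:
                                                                                                                data.decode("utf-8")
                                                                                                                return True
                                                                                                            except UnicodeDecodeError:
                                                                                                                return False

                                                                                                        # Valid ISO-8859-1 bytes that cannot be UTF-8:
                                                                                                        assert not could_be_utf8("café".encode("latin-1"))
                                                                                                        # The same text encoded as UTF-8 passes:
                                                                                                        assert could_be_utf8("café".encode("utf-8"))
                                                                                                        ```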

                                                                                                        1. 1

                                                                                                          I guess I’d like a Shiny New Future where nobody tries to guess encoding, because standards bodies and software manufacturers insist on making it explicit, and all software by default splats bad code points to invalid without doing something really stupid like throwing an exception….

                                                                                                          Sigh.

                                                                                                          I guess for decades to come I’ll still fondly remember the Good Old Bad Old Days of everything-is-ASCII (and if it wasn’t, we carried on anyway)… I’m not going to hold my breath waiting for a sane future.

                                                                                                        2. 2

                                                                                                          Ah for The Good Old Bad Old Days of 8bit ascii.

                                                                                                          It wasn’t ASCII, and that’s the point: There was no way to verify what encoding you had, even if you knew the file was uncorrupted and you had a substantial corpus. You could, at best, guess at it, but since there was no way to disprove any of your guesses conclusively, that wasn’t hugely helpful.

                                                                                                          I remember playing with the encoding functionality in web browsers to try to figure out what a page was written in, operating on the somewhat-optimistic premise that it had a single, consistent text encoding which didn’t change partway through. I didn’t always succeed.

                                                                                                          UTF-8 is great because absolutely nothing looks like UTF-8. UTF-16 is fairly good because you can usually detect it with a high confidence, too, even without a BOM. UCS-4 is good because absolutely nobody uses it to store or ship text across the Internet, as far as I can tell.
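
                                                                                                          A rough sketch of that detection ordering (a hypothetical helper, not any particular library’s API; real detectors like chardet add statistical models on top):

                                                                                                          ```python
                                                                                                          def sniff_encoding(data: bytes) -> str:
                                                                                                              # BOM check first: the "utf-16" codec always writes one.
                                                                                                              if data[:2] in (b"\xff\xfe", b"\xfe\xff"):
                                                                                                                  return "utf-16"
                                                                                                              # ASCII-heavy UTF-16 without a BOM shows up as text riddled
                                                                                                              # with NUL bytes (and NULs *are* valid UTF-8, so test this
                                                                                                              # before the strict UTF-8 decode).
                                                                                                              if data and data.count(0) / len(data) > 0.3:
                                                                                                                  return "utf-16 (guessed, no BOM)"
                                                                                                              try:
                                                                                                                  data.decode("utf-8")
                                                                                                                  return "utf-8 (or plain ASCII)"
                                                                                                              except UnicodeDecodeError:
                                                                                                                  return "some legacy 8-bit encoding"
                                                                                                          ```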

                                                                                                        1. 8

                                                                                                          Now that we’ve passed $1K - can we beat the $5K?

                                                                                                          1. 6

                                                                                                            If we’re trying to shoot for 5k we should at least let Maine know.

                                                                                                            1. 6

                                                                                                              As a European, I honestly prefer us to have it :)

                                                                                                              1. 5

                                                                                                                I’d say we go for it, and offer to let them take back the gold spot for a 10k donation to Unicode instead. :-)

                                                                                                            1. [Comment removed by author]

                                                                                                              1. 3

                                                                                                                You have to ensure the software supports that.

                                                                                                                If you plug that into an iPhone with a 3.5mm jack, you’ll get an error saying you can’t use it. Given the general feel of this post, and the way Apple has deprecated things in the past, it’s not unthinkable that future iterations of iOS might not allow the use of the Lightning-to-headphone adapter.

                                                                                                                1. 1

                                                                                                                  If you plug that into an iphone with a 3.5mm jack you’ll get an error saying you can’t use it

                                                                                                                  That’s not true. It works on various iPads and on an iPhone 6s, all of which predate the existence of the adapter. The only requirement is iOS 10 or later, which, I think, runs on all Lightning-port-equipped devices.

                                                                                                              1. 2

                                                                                                                It sounds to me like they are deprecating all server services and probably preparing to merge macOS Server into macOS so they’ll have just one computer OS. Am I missing anything?

                                                                                                                1. 5

                                                                                                                  There hasn’t been a separate macOS Server version since Lion. That’s when the Server app first appeared, which installed what had previously been part of the server OS.

                                                                                                                  Over time, though, they removed more and more features from it, so now all that’s left is Open Directory (their LDAP/Kerberos equivalent of AD) and their MDM solution.

                                                                                                                  1. 2

                                                                                                                    It now makes sense why this seems like such a dramatic change to me: the last time I worked with macOS Server was back in the Tiger days. Thank you for clearing things up!

                                                                                                                1. 7

                                                                                                                  I’m not convinced that the current trend of putting authentication info in local storage is entirely driven by the desire to bypass the EU cookie-banner rules. I think it’s more related to the fact that a lot of people are jumping on the JWT bandwagon, and that you send that JWT in an Authorization header rather than the Cookie header.

                                                                                                                  Also, often, the domain serving the API isn’t the domain the user connects to (nor even a single service in many cases), so you might not even have access to a cookie to send to the API.

                                                                                                                  However, I totally agree with the article that storing security-sensitive things in local storage is a very bad idea and that HttpOnly cookies would be better. But current architectural best practices (stateless JWT tokens, microservices across domains) make them impractical.
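
                                                                                                                  The header-based flow described above can be sketched with the standard library; the endpoint URL and token value are placeholders:

                                                                                                                  ```python
                                                                                                                  import urllib.request

                                                                                                                  # Sketch of the JWT-over-Authorization-header pattern; the URL
                                                                                                                  # and token are placeholders, not a real API.
                                                                                                                  def build_authenticated_request(token: str) -> urllib.request.Request:
                                                                                                                      return urllib.request.Request(
                                                                                                                          "https://api.example.com/me",
                                                                                                                          headers={"Authorization": f"Bearer {token}"},
                                                                                                                      )

                                                                                                                  # Unlike a cookie, nothing attaches this header automatically:
                                                                                                                  # client code must add it to every call, which is why SPAs end
                                                                                                                  # up keeping the token somewhere scripts can read it.
                                                                                                                  req = build_authenticated_request("eyJhbGciOiJIUzI1NiJ9.example.signature")
                                                                                                                  ```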

                                                                                                                  1. 4

                                                                                                                    Hey! You are correct in that this isn’t the main reason people are doing this – but I’ve spoken to numerous people who are doing this as a workaround because of the legislation which is why I wrote the article =/

                                                                                                                    I think one way of solving the issue you mention (cross-domain style stuff) is to use redirect based cookie auth. I’ve recently put together a talk which covers this in more details, but have yet to write up a proper article about it. It’s on my todo list: https://speakerdeck.com/rdegges/jwts-suck-and-are-stupid

                                                                                                                    1. 2

                                                                                                                      Ha! I absolutely agree with that slide deck of yours. It’s very hard to convince people though.

                                                                                                                      One more for your list: having JWTs valid for a relatively short time while also providing a way to refresh them (like an OAuth refresh token) is tricky and practically requires a blacklist on the server, reintroducing state and defeating the single advantage of JWTs (their statelessness, though of course you can have that with cookies too).

                                                                                                                      JWTs to me feel like an overarchitectured solution to an already solved problem.
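
                                                                                                                      A minimal sketch of that reintroduced state; the claim names (jti, exp) follow standard JWT claims, everything else is illustrative and not from any particular library:

                                                                                                                      ```python
                                                                                                                      import time

                                                                                                                      # Server-side state again -- the thing stateless JWTs promised
                                                                                                                      # to avoid.
                                                                                                                      revoked_token_ids: set = set()

                                                                                                                      def revoke(jti: str) -> None:
                                                                                                                          revoked_token_ids.add(jti)

                                                                                                                      def is_token_usable(claims: dict) -> bool:
                                                                                                                          if claims["exp"] < time.time():
                                                                                                                              return False  # expired: client must hit the refresh endpoint
                                                                                                                          return claims["jti"] not in revoked_token_ids
                                                                                                                      ```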

                                                                                                                      1. 1

                                                                                                                        There’s a third use case: services that are behind an authentication gateway (like Kong) and whenever a user is doing an authenticated request then the JWT is injected by the gateway into the request headers and passed forward to the corresponding service.

                                                                                                                        But yes, a lot of people are using $TECHNOLOGY just because it’s the latest trend and discard “older” approaches just because they are no longer new which is quite interesting because we today see a resurgence of functional languages which are quite old, but I digress.

                                                                                                                      2. 2

                                                                                                                        you need to send that JWT over an Authorization header rather than the cookie header.

                                                                                                                        Well, you don’t need to, but many systems require you to. It’s completely possible — although it breaks certain HTTP expectations — to put a JWT in a cookie; using cookies for auth is, after all, quite an old technique.

                                                                                                                        1. 1

                                                                                                                          This is true – you could definitely store it in a cookie – but there’s basically no incentive to do so. E.g., just use a cryptographically signed session ID instead and get the same benefits with less overhead.

                                                                                                                          The other issue with storing JWTs in cookies is that cookies are limited to about 4 KB of data, and JWTs often exceed that by their stateless nature (trying to shove as much data into the token as possible to remove server-side state).
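
                                                                                                                          A sketch of that signed-session-ID approach using only the standard library; the secret and formats here are illustrative:

                                                                                                                          ```python
                                                                                                                          import hashlib, hmac, secrets

                                                                                                                          SECRET_KEY = b"server-side secret"  # placeholder

                                                                                                                          def make_session_cookie() -> str:
                                                                                                                              """Tiny (well under the ~4 KB cookie limit), tamper-evident,
                                                                                                                              with all session data kept server-side."""
                                                                                                                              session_id = secrets.token_urlsafe(16)
                                                                                                                              sig = hmac.new(SECRET_KEY, session_id.encode(), hashlib.sha256).hexdigest()
                                                                                                                              return f"{session_id}.{sig}"

                                                                                                                          def verify_session_cookie(value: str):
                                                                                                                              """Return the session ID if the signature checks out, else None."""
                                                                                                                              session_id, _, sig = value.rpartition(".")
                                                                                                                              expected = hmac.new(SECRET_KEY, session_id.encode(), hashlib.sha256).hexdigest()
                                                                                                                              return session_id if hmac.compare_digest(sig, expected) else None
                                                                                                                          ```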

                                                                                                                        2. 1

                                                                                                                          Could you point me to some sort of explanation of why using localStorage is bad for security? Last time I looked at it, it seemed that there was no clear advantage to cookie based storage: http://blog.portswigger.net/2016/05/web-storage-lesser-evil-for-session.html

                                                                                                                          1. 2

                                                                                                                            Just as the article says: if you mark the session cookie as HttpOnly, then an XSS vulnerability will not allow the token to be exfiltrated by injected script code.
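
                                                                                                                            For concreteness, here is what setting that flag looks like with Python’s standard library (the cookie value is a placeholder):

                                                                                                                            ```python
                                                                                                                            from http import cookies

                                                                                                                            c = cookies.SimpleCookie()
                                                                                                                            c["session"] = "opaque-session-id"
                                                                                                                            c["session"]["httponly"] = True   # hidden from document.cookie
                                                                                                                            c["session"]["secure"] = True     # only sent over HTTPS
                                                                                                                            c["session"]["samesite"] = "Lax"
                                                                                                                            print(c.output())  # emits a Set-Cookie header carrying those flags
                                                                                                                            ```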

                                                                                                                            1. 1

                                                                                                                              Are we reading the same article? What I see is:

                                                                                                                              • “The HttpOnly flag is an almost useless XSS mitigation.”
                                                                                                                              • “[Web storage] conveys a huge security benefit, because it means the session tokens don’t act as an ambient authority”
                                                                                                                              • “This post is intended to argue that Web Storage is often a viable and secure alternative to cookies”

                                                                                                                              Anyway, I was just wondering if you have another source with a different conclusion, but if not, it’s OK.

                                                                                                                              1. 3

                                                                                                                                I disagree with the author of that article linked above. I’m currently typing out a full article to explain in more depth – far too long for comments.

                                                                                                                                The gist of it is: HttpOnly works fine at preventing session-token theft via XSS. The risk of storing session data in a cookie is far less than storing it in local storage, where the attack surface is greater. There are a number of smaller reasons as well.

                                                                                                                                1. 1

                                                                                                                                  Great, I would appreciate a link (or a Lobsters submission) when you’ve written it.

                                                                                                                        1. 2

                                                                                                                          What do we learn from this?

                                                                                                                          READ THE DOCUMENTATION!!!

                                                                                                                          In a case like this where the framework is practically priming a landmine for you to step on, I would say you’d rather fix the framework than read the docs. If your ORM has, for all intents and purposes, completely broken transactions, you’re not allowed to hide behind the docs.