1. 39
    They squandered the holy grail ai mac xeiaso.net
    1. 32

      I think that Apple Intelligence is a failure of a product from an implementation standpoint. This is frustrating because the foundation they are building on top of is nearly invincible.

      Meredith Whittaker, CEO of Signal, recently gave a presentation at 38c3 where she dives deeper into this issue from a privacy perspective: https://media.ccc.de/v/38c3-feelings-are-facts-love-privacy-and-the-politics-of-intellectual-shame

      She argues, convincingly, that privacy is not about who owns what data, but about the right to be who you are, without shame or judgement, and that features like “on device” processing do nothing for privacy if you are not the one who decides how this “processing” works.

      1. 5

        She makes this point at the very end (at 37:00, emphasis mine):

        Homomorphic encryption guarantees privacy, the claims go, because data processing happens on your device, not on Apple’s or Google’s servers, because data is never sent off of your device, and because of the presence of technically guaranteed plausible or provable deniability. But— butterfly meme— is this privacy?

        Under the old data-centric understanding, yeah. Sort of. Probably. It’s not worth joining a movement over, but I guess it’s privacy. It leverages encryption, after all, and isn’t that private?

        But with a version of privacy that is animated by a rejection of epistemic authority, by the protection of spaces for love, transformation, and radical reimagining that cannot accept such imposed classifications, the answer to the question of whether this is privacy changes. As I promised, it gets a lot clearer:

        No, it’s not private. It may be better than the alternative in some way or another. There may be reasons it represents harm reduction over the status quo. That’s a tactical call. But no, it’s not private. Under this rehydrated privacy definition, it doesn’t matter so much where the data is processed if the AI models, the logics and probabilities by which it classifies me and my activity, remain firmly outside of my control. Whether or not the model is on device, it’s nevertheless imposing classifications and categories solely determined by Apple, or Google, or whoever. Categories that work to slot me into one or another cell in a metaphorical and literal database and that go on to produce more classifications and judgments about me even as the companies producing these [classifications] claim plausible deniability when it comes to what specifically these classifications mean and their potential errors.

      2. 18

        About the “bicycles for the mind” thing… here are some properties of bicycles that might be useful for those interested in extending the metaphor.

        • You can build a bicycle from spare parts, because the parts are standardized
        • You can learn to build and maintain many bicycles by simply observing the mechanisms
        • Cheap bikes work nearly as well as expensive ones for non-competitive riding
        • Bicycles don’t attempt to do your work for you, rather they channel and direct your energy
        • You need to learn to ride a bike; most able-bodied people can do it, but it’s not trivial
        • You need to learn additional skills, including some discretion, to ride safely in traffic

        I think what Apple’s been up to lately is at best “Hyperloops for the mind” and veering dangerously into “self-driving cars for the mind” territory.

        1. 2

          I think what Apple’s been up to lately is at best “Hyperloops for the mind” and veering dangerously into “self-driving cars for the mind” territory.

          That hyperloops comparison is a really interesting reflection. I was feeling more generous and have been thinking that Apple has lately been building e-assist bikes for the mind, the kind that people around me tend to ride so inconsiderately and unsafely.

        2. 9

          I have to admit I was skeptical (I’m an idiot and bugged the author for references… which were already in the article and I missed ¬_¬U), but the materials that Apple has released are much better than I expected. They’re certainly worth reading.

          Trusted computing is a fascinating topic, and Apple’s materials address many of my concerns about it. There are plenty of opportunities for independent auditing, although I didn’t see any independent audit that validates Apple is running the code they say they are running. Hopefully more knowledgeable people will research and validate this.

          I think trusted computing is way harder than, say, E2EE. I think E2EE is doable, but I’m not even sure trusted computing has a lot to do with “technical” details. But I enjoy reading these materials and I’m glad big companies are working on this.

          But then I think the main point of the article is that Apple made this huge effort to do trusted computing… just to run LLMs, a premise I agree with. Actually, despite all the issues I have with LLMs, I think they can provide value, and I trust Apple much more than other players to find valid applications of the technology. And I think it’s highly unlikely that they will not use their private cloud compute for other, more useful, non-LLM stuff.

          1. 5

            Don’t feel bad, it’s an easy thing to miss. It’s also a pretty outrageous claim, so you were right to be skeptical. I was too; I planned on writing something about it but haven’t had the time yet.

            1. 2

              Check out Confidential Computing if Apple’s private computing is interesting to you.

                1. 1

                  Super! Thanks for sharing. I wish CC had taken off by now, and I’m glad that’s what Apple seems to be using, even if with their own marketing name attached to it.

                  1. 3

                    There are some very hard problems, depending on your threat model. If you assume that an attacker can modify DRAM, you need a Merkle tree over anything that leaves the cache, which is really slow, or you need the DRAM plus controllers to be in a sealed unit with end-to-end authenticated encryption with the machine (this is the better solution, but requires modifying memory interfaces).

                    Apple is currently punting on this by relying on intrusion detection systems, but since these are hidden in their data centres, you can’t audit them. It’s probably fine, but there are quite a few ways that Apple would be able to break their own systems. The check for that is probably random third-party audits.
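
                    To make the cost concrete, here is a minimal sketch (purely illustrative, nothing like how a real memory controller would implement it) of a Merkle tree over fixed-size memory blocks. Verifying or updating any one block touches O(log n) hashes, which is where the slowdown comes from.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def build_tree(blocks):
    """Build a Merkle tree over memory blocks (count must be a power of two).
    Returns a list of levels: leaves first, root level last."""
    level = [h(b) for b in blocks]
    levels = [level]
    while len(level) > 1:
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        levels.append(level)
    return levels

def auth_path(levels, index):
    """Collect the sibling hashes needed to recompute the root for one block."""
    path = []
    for level in levels[:-1]:
        path.append(level[index ^ 1])  # sibling at this level
        index //= 2
    return path

def verify(block, index, path, root):
    """Recompute the root from one block plus its authentication path."""
    digest = h(block)
    for sibling in path:
        if index % 2 == 0:
            digest = h(digest + sibling)
        else:
            digest = h(sibling + digest)
        index //= 2
    return digest == root
```

                    The point of the sketch: a read can be checked against a trusted on-chip root, but every write must also rehash one node per tree level, so memory traffic multiplies by the tree depth.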

            2. 7

              They X-ray the hardware at every step of the assembly process and compare that to reference images

              The day I can do this will be the day I can even consider the possibility of absolutely trusting my own devices. It’s an interesting and serious look at what it really takes to accomplish this, end to end. Of course, as an outsider to Apple, you’d have to take their word for what they say they are doing, but if it is honestly the case and they are able to keep at it, it is really impressive.

              1. 10

                We struggled a lot with these problems when I was at Microsoft (several of the folks I worked with moved to Apple). There are two related problems:

                • How do we ensure that there are technical guarantees that we cannot look inside a customer’s workload?
                • How do we convince customers of these properties?

                For the Arm chips and ML accelerators going into Azure, we were the chip vendor, the system integrator, and the operator. That provides a lot of places where we could maliciously introduce things that violate the confidentiality and integrity guarantees. There were some places where we could put independent audits, but the main way I think we ended up providing these guarantees was with lawyers. We would claim we hadn’t done the bad things in contracts with big customers that came with very large penalty clauses if we had.

                I don’t think that would work so well for Apple, because the users are individuals and there’s no contract in place. Apple could, potentially, make it part of the license for iOS that you are entitled to $10,000 if they have intentionally introduced a backdoor into their back end that lets them violate your confidentiality guarantees. If they did, every iPhone owner would get enough money that it would bankrupt the company. That’s a pretty strong incentive not to.

                1. 3

                  How do we convince customers of these properties? … I think we ended up providing these guarantees was with lawyers.

                  I agree that liability and the legal processes built around liabilities are a big part of the solution. E.g., my personal comfort with Signal’s claims re: end-to-end encryption is much greater than my comfort with Apple’s claims re: iMessage, due to the lengths the government has had to go to while pursuing criminals.

              2. 5

                I agree the current generation of AI features in Apple OSes is fairly useless. But I think those were the easy bits to rush out the door first, because Apple had to rush something out to counter all those headlines about how they’re “losing the AI race.”

                Apple likes to keep iterating & improving features over years. Often they start out frustratingly simplistic but then mature. (Compare the Notes and Reminders apps with their early versions.) I’m looking forward to features due out in iOS 18.4, especially LLM-enhanced Siri with the ability to reach inside apps via intents — that should make those early demos into reality.

                1. 14

                  But I think those were the easy bits to rush out the door first, because Apple had to rush something out, to counter all those headlines about how they’re “losing the AI race.”

                  This makes sense right up until you read the consumer reports showing that customers are actively avoiding products marketed as AI and the advice from marketing companies telling people not to mention AI when selling their AI features.

                  1. 2

                    There’s an interesting disconnect there, to be sure. But inside tech companies (the one I work at, anyway) the AI hype is still deafening.

                    1. 1

                      Time to trot out the old famous Steve Jobs quote:

                      Some people say, “Give the customers what they want.” But that’s not my approach. Our job is to figure out what they’re going to want before they do.

                      For “figure out” you can read “dictate”, because the design process has always been extremely secret. Whether you find this attitude bold and visionary or patronizingly haughty, you gotta admit that it has permeated Apple leadership.

                    2. 2

                      especially LLM-enhanced Siri with the ability to reach inside apps via intents — that should make those early demos into reality.

                      This could be the thing that actually makes Siri useful. At least half the time when I ask Siri something, she gives me a page of search results instead of answering the question. But when she does answer the question, THAT feels like the future to me. And that’s exactly what an LLM, presumably, would facilitate.

                    3. 4

                      The best real-world trusted personal computing platform I know of remains the Precursor.

                      Apple deserves accolades for its new process, and I hope it sets a standard for other companies to follow, but I’m not holding my breath on that.

                      1. 2

                        It’s only the first iteration. Give them time.

                        To me the biggest challenge is the long-term financials. It looks like, for all the rage, no one is making money, other than the ones selling the shovels, that is.

                        1. 2

                          If anything, the human cost seems like it would outweigh any process gains from being able to draw a cat on the moon faster. Generative AI is completely useless as a product unto itself, but could be part of a larger product in some way. It should never be the selling point.

                          AI is a very impactful tool for creative work. Technologies like turntables and computers have revolutionized how music is composed. I believe the same thing is going to happen with AI: it can accelerate and facilitate the creative process in many ways.

                          1. 2

                            There was a term for this, Jobs’s “reality distortion field” or something like that. Reading the opening of this article reminded me of it.

                            Apple’s thing was to control the hardware as well as the software, so that the system worked a bit more reliably than PCs, which had heterogeneous hardware and firmware.

                            I use Ubuntu on a ThinkPad and my spouse uses an older Mac. I can’t say I would go out of my way to buy a Mac.

                            1. 2

                              The thing that sucks about it is that they made the holy grail of remotely attested trusted compute and then made the end result so much worse to use than manually making your own integrations with Ollama on the same device.

                              I laughed at this, because it’s so true.
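
                              For anyone curious what “manually making your own integrations” looks like, here is a rough sketch, assuming Ollama’s default local REST endpoint on port 11434 (the model name is a placeholder for whatever you have pulled locally):

```python
import json
import urllib.request

# Ollama's default local generation endpoint (assumption: default install/port).
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_generate_request(prompt: str, model: str = "llama3.2"):
    """Build the URL and JSON body for one non-streaming generation call."""
    payload = {"model": model, "prompt": prompt, "stream": False}
    return OLLAMA_URL, json.dumps(payload).encode()

def ask_local_model(prompt: str, model: str = "llama3.2") -> str:
    """Send the request to the local Ollama server and return the text response."""
    url, body = build_generate_request(prompt, model)
    req = urllib.request.Request(
        url, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

                              A dozen lines of glue like this is the bar the comment is setting: everything runs on the same device, and you decide which model and which prompt.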

                              1. 3

                                I think that Apple Intelligence is a failure of a product from an implementation standpoint. This is frustrating because the foundation they are building on top of is nearly invincible.

                                Apple Intelligence is brand new; it’s been out for less than six months! Seriously, go back and see what ChatGPT, Copilot, or any other consumer-branded AI system looked like at just six months. Go back and look at the original iPhone, at how limited it was, and how incredibly far Apple has come since then.

                                What’s frustrating is that everyone complaining here has lost perspective. Not everyone is the target audience. Not everything is perfect at launch, and AI is hard (not just the integration, but getting good training data while hopefully not infringing on someone else’s copyright, and making an attempt to preserve user privacy). And Apple Intelligence is clearly marked as a beta product.

                                Another beta product is Tesla’s Autopilot. It’s not perfect, and, surprise, it too is a beta. So you can’t sleep at the wheel, watch a movie, or go sit in the back seat. You have to sit in the driver’s seat with your hands on the wheel.

                                Wait and see what Apple has done with Apple Intelligence by iOS 20/macOS 17, then get out your judgy blog posts. I think you’ll find that Apple has done well and avoided pitfalls and controversies that others encountered.

                                1. 12

                                  Tesla’s “beta” has been running long enough to kill people and change laws. I’m not sure this is a favorable comparison.

                                  1. 1

                                    Tesla’s “beta” really was (is) a beta, even if people don’t treat it as such.

                                    1. 13

                                      It’s pretty irresponsible to put such a beta in a car of all things.