1. 28

  2. 2

    IIRC the UI was scripted in JS but I can’t find a reference to that online. I probably heard it at the time from people who had been at Pixo.

    1. 1

      Why is the case so large but empty?

      1. 4

        There’s a lot of reasons. You need something big enough to hold both the innards and the various probes and wires for a logic analyser. Some modules start out as breadboard prototypes before being integrated on the main PCB, so you may need to house both the innards and a breadboard. During early development, you need buttons that are easy to reach and easy to press, with contacts that are easy to connect to signal generators, even if the actual product has small buttons, because you are going to press them damn buttons thousands of times and stress-test everything with a signal generator that presses buttons hundreds of times per second before you get IRQ handling right. That’s why the wheel is about as big as the whole iPod. Boards slip out easily when they’re held in place by duct tape (you can’t screw them in place because sometimes you need to flip them over for legit purposes, like reaching a test pad), and you want something that’s big enough to allow for some slippage without falling onto something metallic on someone’s desk and accidentally shorting the board.

        This is a pretty late prototype, so some of these reasons are less obvious here, but – depending on what route each development team takes and so on – earlier prototypes tend to look far less empty. I’ve worked on things that ended up slightly larger than a VHS tape but started out in a box big enough to take up half my desk.

        Edit: a few additional notes, because I just know these are gonna pop up :-D.

        1. Dogfooding: yes, it’s a valid concern, but if you start doing that too early, all you get is angry developers who have to press tiny buttons while being careful not to accidentally short teeny-weeny 0402 SMDs with their fingers. Some development has to happen before you can dogfood some things. Managers who insist on doing it as early as technically possible aren’t visionary, just clueless.

        2. Obscuring hardware details from software teams and vice versa: first of all, there’s only so much you can obscure before making development literally impossible – the people doing the UI need to know how big the screen is, for example, and the people doing the hardware need to know where the screen and buttons go. If you give the former a box that has a spinwheel, four buttons, and a screen yay big, it’s not that hard to put two and two together and get a basic set of specs.

        What the author most likely meant to say by this:

        It also has the Jobsian side-benefit of keeping the engineers in the dark about what the final device will look like.

        is IMHO that it had the side-benefit of allowing people to work on the software while minimising the chances of a leak about the case design. That’s obviously important for a company which uses leaks as a marketing tool.

        Second, while you can technically do this obscuring thing (albeit not too early in a project), it’s usually a very bad idea – it’s one of the ways you get hardware that’s ill-suited for the software it’s supposed to run, and slow software (not even inefficient software, just software optimised for the wrong hardware).

        A prototype like this one has the far more mundane benefit of letting people start working on the software way before the hardware is finished in all its details. Depending on what CPU you use, you can often start working on the software on a development board, long before the first hardware design draft, as long as you stick to the correct parts. This isn’t some Jobsian vision; it’s how embedded software has been written practically forever.

        1. 1

          It also has the Jobsian side-benefit of keeping the engineers in the dark about what the final device will look like.

          I understood this to mean the engineers could work on the iPod’s hardware and software without the risk of leaking what the final device would look like. It’s not clear whether that was the intention of the large case or a natural side effect.

          I also expect that developing on a prototype like this is more productive and cheaper. You could swap components more easily than if everything were glued together in a tiny metal case.

        2. 1

          I don’t see the original iPod as a shift at Apple. The original iPod was very much a Mac-only device. Apple had been making Mac peripherals for ages. The iPod was a peripheral that let you take your music with you when you went away from your Mac. It could be used with only one computer at a time (I don’t know if they ever changed that) and it existed purely to increase the value of the Mac ecosystem.

          The second generation was the start of the shift. A bunch of people reverse-engineered the iPod interface and made it work with other operating systems and, rather than locking down the iPod, Apple added official Windows support. When it became more popular, they ported iTunes to Windows to give people a taste of the OS X experience. They also ported Safari. Even then, the iPod was primarily a device intended to get people to buy Macs. The idea was that Windows users would use the iPod, iTunes, and Safari and be more tempted to buy a Mac. This didn’t work so well, in part because (from what I could tell) iTunes for Windows was a continuation of Yellow Box: it shipped a whole load of macOS frameworks built for Windows NT and so didn’t use any native services (not even text rendering, so it was very jarring to see two different antialiasing models in Apple vs. native applications).

          The first iPhones were a continuation of this. There was a big philosophical split between Apple and Google’s view of mobile devices. Google saw them as computers, Apple as peripherals. The iPhone and iPad weren’t intended to replace your laptop, they were intended to work with them. Once Apple allowed third-party app development on iOS, they pitched a lot of it at folks making mobile companions to their desktop apps, rather than mobile-first applications.

          It wasn’t until Apple’s revenue from iOS grew to a size comparable to their Mac revenue that this changed and the iOS products became independent systems, rather than products that existed to try to sell Macs. Now Apple is trying to reinvent itself as a services company, and the iOS (/ watchOS / tvOS / whateverOS) and macOS lines are all being folded into this as channels for selling Apple services (Apple TV, Apple Music, iCloud, the Apple App Store, and so on).

          1. 1

            they ported iTunes to Windows to give people a taste of the OS X experience

            Not sure if it didn’t do more harm than good…

            1. 1

              I also disliked iTunes for Windows when I tried it, but you might be underestimating the number of people who only had iPods (and later iPhones) and never had a Mac, especially at work. Not that this is a great pro for iTunes or makes it more enticing, but not having it would probably have made those users’ lives harder when trying to use their iDevices for music.