Regarding Poqet PC:
“Using advanced technology such as the unique Power Management software, these batteries will provide several weeks of computing use. To conserve power, the Poqet PC actually ‘sleeps’ between key presses and other actions.”
Actually turns off the CPU between key presses. Runs 50–100 hours on the batteries, or 10–20 hours with heavy use. Has an office suite and BASIC. Decent-sized keyboard. Enough hardware to run a Forth OS, maybe Oberon, or secure messaging w/ seL4 or a mini-Ada runtime. This thing is worth an article of its own. Lots of potential to be recreated if anybody deploys human-verifiable chips. They’ll be forced to use old designs & techniques to be usable. This one could be great for keeping secrets in one form or another. Even easy to hide.
To conserve power, the Poqet PC actually ‘sleeps’ between key presses and other actions.
Doesn’t everything these days? If no processes want any CPU time, my laptop will put CPU cores into low-power modes and may even turn some off. (It is conservative about turning things off but this is because reactivating them also takes power and time.) All the GUI programs on my laptop are event driven and many will only demand CPU time in response to user input.
Admittedly in practice my laptop doesn’t hit 0% CPU utilisation in between keypresses, but this is just because I’m running a bunch of software that does various kinds of busy polling in the background. Since they mention it running DOS, I suppose the Poqet PC is probably not even doing preemptive multitasking, right? It’s way easier to keep CPU time from being used in the background if there is literally no concept of a “background process” other than a TSR, which can only be jumped into by a hardware interrupt (of which there is a known, finite set) or by a software interrupt, which is under the control of the foreground process.
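For the curious, the TSR mechanism alluded to boils down to hooking an interrupt vector and then exiting without freeing your memory. A hedged DOS-style pseudo-assembly sketch — the INT 21h function numbers are real DOS calls, but the handler body and the `PARAGRAPHS` constant are invented for illustration:

```asm
; Hook the keyboard hardware interrupt (INT 9) and terminate-and-stay-
; resident: our handler now runs only when that interrupt fires.
        mov   ax, 3509h          ; DOS: get current INT 9 vector (returned in ES:BX)
        int   21h
        mov   [old_isr], bx      ; save the old handler so we can chain to it
        mov   [old_isr+2], es
        mov   ax, 2509h          ; DOS: set INT 9 vector to ours (DS:DX)
        mov   dx, our_handler
        int   21h
        mov   ax, 3100h          ; DOS: terminate and stay resident
        mov   dx, PARAGRAPHS     ; resident size in paragraphs (invented constant)
        int   21h

our_handler:
        ; do a tiny bit of work, then chain to the original handler
        jmp   far [cs:old_isr]
```

Note there is no scheduler involved at all: the TSR consumes zero CPU time until the hardware interrupt jumps into it, which is exactly why idle DOS machines have no background load.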
A fun tidbit is that there’s no HLT instruction on early x86, so there’s a spin loop taking up 100% utilization in DOS. I presume they’ve modified the ROM software to use halt instruction equivalents though.
Could you implement an ersatz HLT in glue logic with something that listens to the ISA bus and the interrupt lines, switching the CPU’s power off if it sees a write to its ISA port, and switching the CPU’s power back on if any of the interrupt lines are activated? I’m not sure what you’d do about DRAM refresh, maybe use only SRAM instead?
So then the “wait for next keypress” subroutine has a loop around an OUT instruction that targets the switch-me-off doohickey and the interrupt handler patches that loop to break out of it.
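That patching scheme could be sketched roughly like this (hedged pseudo-assembly; `POWER_OFF_PORT` and the patch offsets are invented for illustration, not taken from any real design):

```asm
wait_key:
        mov   dx, POWER_OFF_PORT ; hypothetical glue-logic port
park:
        out   dx, al             ; glue logic gates the CPU's clock/power here
        jmp   park               ; woke for some other reason: park again

irq1_handler:                    ; keyboard interrupt (IRQ1)
        ; patch the 'jmp park' into two NOPs so that, after IRET,
        ; execution falls out of the loop instead of re-parking
        mov   word [park+1], 9090h
        ; ... read the scancode, send EOI, restore the jmp, iret
```

The self-modifying-code part is ugly but era-appropriate; on a machine with no caches and a single foreground process, patching an instruction out from under yourself is perfectly well defined.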
I think some mobile-focused x86 chipset, from before HLT power savings were standardized, had something like what you’re describing.
The 8088 (used in the original IBM PC) supported a HLT instruction. Of course, HLT only stops the CPU until the next interrupt happens, so maybe that’s what you are thinking of.
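As a hedged sketch (generic 8086-style pseudo-assembly, not any particular BIOS or DOS version), the idle loop that behavior implies looks like:

```asm
; HLT parks the CPU until any hardware interrupt fires (timer tick,
; keyboard, etc.); the handler runs, IRET returns here, and the loop
; checks whether the wakeup was the event we actually care about.
idle:
        hlt                  ; stop executing until the next IRQ
        call  check_for_key  ; hypothetical routine: did the keyboard IRQ fire?
        jz    idle           ; no key yet — park again
        ; fall through: a key is ready
```

Whether HLT saves any power is a separate question from whether it stops execution — on the earliest parts it was purely an architectural wait-for-interrupt, which is the distinction being worked out in this subthread.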
I was mistaken: early x86 did have HLT, but it didn’t reduce power consumption until the 486DX4.
I had a Poqet PC (long after it was obsolete), and the quality of the keyboard on that tiny thing was incredible.
I can’t imagine how fun Oberon and friends would be on an XT clone…
This article seems to be written by someone young enough not to remember what the reality of PCMCIA was. The idea of PCMCIA sounds great, but the reality was miserable: drivers were always an issue, ports would often break, and the cards would often break. At least that was my experience.
As for why it died, that’s simple: manufacturers started including everything you needed in the laptop itself. I’ve never once pined for a PCMCIA slot in the last 10 years.
Back in the early 2000s the only thing I usually used one for was a wireless card on old Thinkpads. Once wireless networking became a standard feature, it had only niche uses left, and none of them were strong enough to be worth giving up all that space, which could be devoted to a bigger battery or something else useful to more users.
Oh and also the whole title of the article is baloney. PCMCIA slots were a standard feature on laptops for quite a number of years. It did take off. Then it died. Rightfully. Just like so many other outdated technologies.
Yeah, great analysis - it’s only obvious if you were there, so it’s useful to say it.
One thing I’d add is the observation that the only reason expandable hardware is ever a thing is when either it’s something not everyone has a use for, or it’s expensive enough that people want to do without it to save money. For network hardware in the 90s, both were true. And it wasn’t as simple as having wifi and that being everything you needed - depending on where you habitually used your computer, you’d need a dialup modem, a wired Ethernet card, or later a wifi card. So there was real market pressure to leave it out of the base machine.
To some extent, also, USB took over the role of PCMCIA. And we should be happy that it did, because it’s dramatically more secure, although that concern was barely on anybody’s radar at the time.
USB becoming used for general-purpose expansion couldn’t really have happened before it did; older serial ports weren’t fast enough. Also, older serial ports were a horrible experience mechanically, electrically, and with regard to software, but that applies to everything from that era, as you noted. :)
People Can’t Memorize Computer Industry Acronyms. The best of all expansion card formats, except for all the others. I can’t recall how many PCMCIA cards I broke the fiddly little pop-out or hard-attached external connector off of, or how many proprietary dongles I lost. I do remember that the hotplug process was horribly broken on Linux for the technology’s entire lifetime. Good riddance.
The EOMA68 is delivering exactly this with a complete mainboard in the PCMCIA form factor.
Already mentioned in the article.