1. 28
  1.  

  2. 3

    This wasn’t an issue on AmigaOS, despite Amiga having similar planar bitmap graphics. That was thanks to hardware-accelerated window blitting.

    (the title is misleading — the article says it was a thing mainly in Windows 3.x)

    1. 8

      And most PC video cards by the time Windows 95 was a thing (and quite a while before) had 8-bit minimum graphics and a GDI accelerator capable of blitting. The Amiga was nice for the time, but it was outmoded quickly (and Commodore’s incompetence made successors disappointing), only special for niche things that turned out not to matter in the end. As such, I get really tired when Amiga users jump into a thread whenever an old PC video limitation is mentioned - as if anyone cares about rasterbars and HAM.
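
      For what it’s worth, the kind of blitting GDI exposes (and that those accelerators offloaded) is just a rectangle copy with a raster op. A minimal Win32 sketch using the standard BitBlt call; the helper function and the DC setup are only for illustration:

          #include <windows.h>

          /* Copy a 64x64 tile from an off-screen memory DC to a window DC.
             SRCCOPY is the plain "copy source" raster op; a GDI accelerator
             of that era would execute this rectangle copy in hardware.      */
          void copy_tile(HDC hdcWindow, HDC hdcOffscreen)
          {
              BitBlt(hdcWindow, 0, 0,       /* destination DC and x, y */
                     64, 64,                /* width, height           */
                     hdcOffscreen, 0, 0,    /* source DC and x, y      */
                     SRCCOPY);              /* raster operation        */
          }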

      1. 2

        and a GDI accelerator capable of blitting.

        There’s blitting and then there’s Blitting. Look into the Amiga Blitter’s actual feature set if you care to understand the difference.
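
        To put the difference in software terms: a plain blit is a masked rectangle copy, while the Amiga Blitter combines up to three DMA sources (A, B, C) through an arbitrary boolean minterm function, with per-word shifting and first/last-word masks on top. A rough C sketch of the core idea, emulated on one word of bitplane data rather than real hardware register setup:

            #include <stdint.h>

            /* "Cookie cut" minterm D = (A & B) | (~A & C):
               A = mask/stencil, B = object (bob) data, C = background.
               The real Blitter can evaluate any of the 256 possible boolean
               functions of A, B, C per 16-bit word, selected by the BLTCON0
               minterm bits; this is just one popular choice, in software.   */
            static uint16_t blit_word(uint16_t a, uint16_t b, uint16_t c)
            {
                return (uint16_t)((a & b) | (~a & c));
            }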

        as if anyone cares about rasterbars and HAM.

        I, for one, do. And talking about rasterbars is also very dismissive of what the copper is actually capable of.

        but it was outmoded quickly

        For some definition of quickly. Amiga was released in 1985. It took other mainstream platforms until the 90s to be competitive.

        I really am talking about the PC getting 3D accelerator cards and Windows 2000 (NT for the masses) there.

        Anything before that? Well, you could play Doom and Quake with CPU rendering better than an Amiga could without a graphics card, due to the cost of chunky-to-planar (c2p) conversion. But that’s about it in terms of practical advantages on the PC.
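
        For anyone unfamiliar with why c2p hurts: a software renderer like Doom’s writes one byte per pixel (“chunky”), but the Amiga display wants each pixel scattered as one bit across each of N bitplanes, so every frame has to be reshuffled bit by bit before it can be shown. A naive sketch of that conversion (real implementations use much cleverer merge tricks, but the extra work per frame is the point):

            #include <stdint.h>
            #include <string.h>

            /* Naive chunky-to-planar: scatter bit p of each pixel's colour
               index into bitplane p. width must be a multiple of 8.        */
            void c2p_naive(const uint8_t *chunky, uint8_t **planes,
                           int width, int height, int nplanes)
            {
                for (int p = 0; p < nplanes; p++)
                    memset(planes[p], 0, (size_t)(width / 8) * height);

                for (int y = 0; y < height; y++) {
                    for (int x = 0; x < width; x++) {
                        uint8_t pix = chunky[y * width + x];
                        for (int p = 0; p < nplanes; p++) {
                            if (pix & (1u << p))   /* leftmost pixel is the MSB */
                                planes[p][(y * width + x) / 8] |= (uint8_t)(0x80 >> (x & 7));
                        }
                    }
                }
            }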

        Everything else, the Amiga did better and cheaper. Particularly, AmigaOS ran laps around Windows 9x. I would know, as I experienced both.

        Commodore’s incompetence made successors disappointing

        AGA was much less than it could have been if Commodore hadn’t cancelled every chipset (already designed or not) that the engineers proposed, on the grounds that they were supposedly “too expensive”, while dismissing the value of having indisputable technical superiority.

        But AGA still was amazing in 1992.

        But that was indeed the end, with Commodore’s incompetence to thank for it.

        1. 4

          There’s blitting and then there’s Blitting. Look into the Amiga Blitter’s actual feature set if you care to understand the difference.

          I know what Copper can do.

          as if anyone cares about rasterbars and HAM.

          I think HAM’s usefulness is overstated because it’s hard to do anything in motion. It’s nice that DPaint can use it, but once 16-bit graphics became common in the PC world, the appeal was a lot less.

          Anything before that? Well, you could play Doom and Quake with CPU rendering better than an Amiga could without a graphics card, due to the cost of chunky-to-planar (c2p) conversion. But that’s about it in terms of practical advantages on the PC.

          This is a great example of dismissing something huge because the Amiga couldn’t do it - everyone wanted Doom in 1993/1994. I assure you many teenage boys in Europe with an Amiga were itching to get a PC for Doom - why else would there be a glut of mediocre Doom clones for the Amiga in 1994?

          Everything else, the Amiga did better and cheaper. Particularly, AmigaOS ran laps around Windows 9x. I would know, as I experienced both.

          This is dubious to me, having used it without nostalgia for it. The Windows 95 interface isn’t my favourite, but I’d use it any day over Workbench. I can concede the RTOS (not a microkernel) design can be good in some situations, but I’d rather use something like NT over it.

          AGA was much less than it could have been if Commodore hadn’t cancelled every chipset (already designed or not) that the engineers proposed, on the grounds that they were supposedly “too expensive”, while dismissing the value of having indisputable technical superiority.

          Sure, but we’re dealing with the timeline with Amiga that shipped and failed to achieve a market segment big enough to sustain itself, not the one where Gould and Mehdi weren’t inept.

          tl;dr: I think the Amiga is mostly good at parlour tricks (a rasterbar is nothing but a parlour trick) with sprite-based graphics - something less needed when systems could just do everything that mattered, all in software, better. In fact, the Archimedes at the time could do Amiga-quality graphics software-rendered by an ARM CPU. It’d be only a matter of time before PCs caught up, and they did.

          1. 1

            I think HAM’s usefulness is overstated because it’s hard to do anything in motion.

            It’s well known. Still, if anything, it is less motion-impaired than it is believed to be, as demonstrated here.

            Put in perspective, 320x512 (PAL w/o overscan) with 4096 colors, subject to some restrictions (which can partly be worked around by having the copper dynamically change palette entries to ones that minimize fringing), was amazing on what was, relative to a PC or Mac, a very cheap computer in 1985.
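
            For readers who never touched HAM: each 6-bit HAM pixel either sets a color from the 16-entry palette or holds the previous pixel’s RGB and modifies just one component, which is exactly where the fringing comes from. A rough decoder sketch of the OCS HAM6 rule (4-bit components; the palette and the start-of-line convention here are assumptions for illustration):

                #include <stdint.h>

                typedef struct { uint8_t r, g, b; } Rgb4;   /* 4-bit components, 0..15 */

                /* Decode one scanline of HAM6 pixel values (6 bits each: 2 control
                   bits + 4 data bits). Control 00 = set from palette, 01 = modify
                   blue, 10 = modify red, 11 = modify green; the other two
                   components are held from the previous pixel.                    */
                void ham6_decode_line(const uint8_t *pix, int n,
                                      const Rgb4 palette[16], Rgb4 *out)
                {
                    Rgb4 cur = palette[0];            /* convention: start from color 0 */
                    for (int i = 0; i < n; i++) {
                        uint8_t ctl  = (pix[i] >> 4) & 0x3;
                        uint8_t data = pix[i] & 0xF;
                        switch (ctl) {
                        case 0: cur = palette[data]; break;   /* set                 */
                        case 1: cur.b = data;        break;   /* hold R,G; modify B  */
                        case 2: cur.r = data;        break;   /* hold G,B; modify R  */
                        case 3: cur.g = data;        break;   /* hold R,B; modify G  */
                        }
                        out[i] = cur;
                    }
                }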

            This contrast between what expensive PCs and cheap Amiga could do was repeated later on AGA, with HAM8.

            At that time, 18-bit color (technically 24-bit, but the low bits get dragged along until a pixel is set from the palette instead of modified), at higher resolutions and with a 64-color palette for the “set” operation… was amazing.

            SVGA cards that could compete with that only became common later, around Windows 95 times.

            1994

            CD32 had hardware accelerated conversion to planar and was released in 1993.

            A clever workaround was the Graffiti card… which would take the Amiga video output and do c2p after the fact. Never mind RTG.

            Doom was very much possible on a dirt cheap A1200; it just wasn’t done. And that was mostly because the Amiga was seen as a dead end… thanks to CBM’s failures.

            something less needed when systems could just do everything that mattered, all in software, better.

            Absolutely. But we’re talking much later and/or much more expensive. It’s comparably easy to brute-force problems with complex hardware, high clocks and large power usage.

            The reason the Amiga is and will always be so loved is the cleverness of the relatively low complexity (thus cheap to make and low power draw) hardware.

            Copper + blitter + sprites were a mighty combination that could do a lot on their own without the CPU having to do any work, which in turn meant the CPU was free to do other work.
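
            To make “the CPU was free to do other work” concrete: a rasterbar, for example, is nothing more than a tiny Copper program (WAIT for a scanline, MOVE a new value into the background color register) that the Copper runs by itself every frame. A hedged sketch of building such a list in C (register encoding per the hardware reference, COLOR00 being custom-chip offset 0x180; the finished list would still have to be pointed to by COP1LC):

                #include <stdint.h>

                #define COLOR00 0x180   /* background color register, custom-chip offset */

                /* Emit a MOVE: write 'data' into custom register 'reg' (bit 0 clear). */
                static uint16_t *cop_move(uint16_t *cl, uint16_t reg, uint16_t data)
                {
                    *cl++ = reg;
                    *cl++ = data;
                    return cl;
                }

                /* Emit a WAIT: stall until the beam reaches scanline 'vp' (lines 0-255). */
                static uint16_t *cop_wait(uint16_t *cl, uint8_t vp)
                {
                    *cl++ = (uint16_t)((vp << 8) | 0x01);  /* VP, HP = 0, bit 0 set = WAIT */
                    *cl++ = 0xFFFE;                        /* compare all position bits    */
                    return cl;
                }

                /* Build a copper list that paints a 16-line rasterbar: a different
                   background color on each scanline, with zero CPU work per frame. */
                uint16_t *build_rasterbar(uint16_t *cl, uint8_t first_line)
                {
                    for (int i = 0; i < 16; i++) {
                        cl = cop_wait(cl, (uint8_t)(first_line + i));
                        cl = cop_move(cl, COLOR00, (uint16_t)(i * 0x111)); /* 12-bit RGB ramp */
                    }
                    *cl++ = 0xFFFF;   /* conventional terminator: WAIT for an */
                    *cl++ = 0xFFFE;   /* impossible beam position             */
                    return cl;
                }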

            It’d be only a matter of time before PCs caught up, and they did.

            Only because of Commodore. If the Amiga didn’t move forward (it barely did… and even then they still managed AGA, which was awesome when it was introduced in 1992, at a fraction of a PC clone’s price), something else was bound to catch up.

            I think the Amiga is mostly good at parlour tricks

            People who actively used the Amiga back in the late 80s and throughout the 90s (I only switched at the very end, and kept my precious hardware) absolutely leveraged the hardware capabilities, be it for productivity or games; “parlour tricks” really doesn’t do justice to the extremely flexible custom hardware, or to the OS that was made to take full advantage of it.

      2. 4

        There were a load of different ‘Windows accelerators’ on the market in the early to mid ‘90s. Some of these just did bit blitting, but most of them did at least some form of 2D drawing and sprite acceleration. One of the problems with x86 *NIX desktops at the time was that the X server could often use them, but the X drawing interfaces were quite primitive. Anything that did more complex graphics ended up doing client-side rendering and shipping pixmaps to the server, and so couldn’t take advantage of any of these cards.
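
        A concrete way to see that split in plain Xlib terms (error handling omitted; dpy, win, gc and the image are assumed to be set up already):

            #include <X11/Xlib.h>

            /* Server-side drawing: the request only names a rectangle, so an
               accelerated X server can hand it to the card's 2D engine.      */
            void draw_server_side(Display *dpy, Window win, GC gc)
            {
                XFillRectangle(dpy, win, gc, 10, 10, 200, 100);
                /* Similarly XCopyArea() for blits between drawables. */
            }

            /* Client-side drawing: the client renders into its own buffer and
               ships every pixel to the server with XPutImage, so the
               "Windows accelerator" style hardware never gets involved.      */
            void draw_client_side(Display *dpy, Window win, GC gc, XImage *img)
            {
                /* ...client code has already rendered into img->data... */
                XPutImage(dpy, win, gc, img, 0, 0, 10, 10, 200, 100);
            }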

        By the late ‘90s, CPUs were fast enough (and PCI / AGP were slow enough) that CPU-side rendering was faster for most things. Oddly enough, text rendering was the biggest bottleneck on windowing systems in the early 2000s. Each glyph is a set of Bezier paths that need to be rendered with sub-pixel anti-aliasing and composited onto the background. Quartz Extreme on macOS accelerated this by rendering each glyph to a texture and compositing them on the GPU. I’m not sure what DirectText does, but I recall a paper from MSR around 2005ish that stored each Bezier in a font glyph as a pair of triangles and then used a pixel shader to render the real curve with the relevant antialiasing and so I hope that’s what’s used.
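
        That sounds like the Loop-Blinn technique (Loop and Blinn, SIGGRAPH 2005, at MSR): each quadratic Bezier segment is drawn as a triangle whose vertices carry (u, v) coordinates (0, 0), (1/2, 0), (1, 1), and the pixel shader keeps a fragment depending on the sign of the interpolated u² − v. A tiny CPU-side sketch of that per-pixel test, with the paper’s antialiasing term left out:

            #include <stdbool.h>

            /* Loop-Blinn style inside test for one quadratic Bezier segment.
               (u, v) are the interpolated "texture" coordinates assigned as
               (0,0), (0.5,0), (1,1) at the segment's three control points;
               the fragment is on the filled side when u*u - v <= 0 (the sign
               is flipped per triangle for convex vs concave segments in the
               full technique).                                               */
            static bool quad_curve_inside(float u, float v)
            {
                return u * u - v <= 0.0f;
            }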

        1. 1

          I’m not sure what DirectText does, but I recall a paper from MSR around 2005ish that stored each Bezier in a font glyph as a pair of triangles and then used a pixel shader to render the real curve with the relevant antialiasing and so I hope that’s what’s used.

          That’s correct for DirectWrite, and I think for contemporary GDI+ [edit: this is 100% wrong; GDI+ is still 100% on the CPU], but GDI, last I looked, doesn’t run on the GPU for compatibility reasons.

      3. 3

        ³ I guess you could force your video card into a 16-color mode and try to relive the world where pixels don’t occupy full bytes, and some x-coordinates perform better than others.

        No need to guess: this bit me back in the day (late 1990s) when I made real-time graphic effects plugins for Macromedia Director. We got a very angry customer berating us for having the gall to ship such a defective product: in 15 or 16bpp windowed modes, sometimes there were glitches at the left/right borders. But it did work well on my machine (I made sure I was running the tests at 16bpp), and I was using the documented GDI functions, no hacks anywhere…

        …but my OS was Windows NT 4, not Windows 95, and it turns out that the graphics subsystem did have some differences like this. When blitting 16-bit buffers, the 95 function just rounded up the coordinates, which I guess must have got them a few extra points in benchmarks. I did not notice any slowdown in NT, which was doing the right thing, BTW.

        What I did was to write my own blit function, which ended up being as fast as the official one but was always correct no matter the graphics mode or OS. And then, since I had both the source and the destination pixels, I wrote a variant, almost as fast, that did a simple motion blur and looked great for things like cube and flip effects.
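
        The motion-blur variant is essentially a 50/50 blend of the new frame with whatever is already at the destination; in 16bpp that can be done with the classic mask-and-shift averaging trick, something like this sketch (RGB565 assumed; the function name and parameters are just for illustration):

            #include <stdint.h>
            #include <stddef.h>

            /* Blit a w x h block of RGB565 pixels, blending 50/50 with what is
               already at the destination - a cheap motion blur. Pitches are in
               pixels, not bytes. For 15bpp (RGB555) the mask would be 0x7BDE.  */
            void blit16_blend(const uint16_t *src, size_t src_pitch,
                              uint16_t *dst, size_t dst_pitch, int w, int h)
            {
                for (int y = 0; y < h; y++) {
                    const uint16_t *s = src + (size_t)y * src_pitch;
                    uint16_t       *d = dst + (size_t)y * dst_pitch;
                    for (int x = 0; x < w; x++) {
                        /* Clear the low bit of each 5/6-bit field before halving
                           so the fields cannot borrow into each other.           */
                        d[x] = (uint16_t)(((s[x] & 0xF7DE) >> 1) + ((d[x] & 0xF7DE) >> 1));
                    }
                }
            }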

        1. 1

          Amazing that they’ve kept that capability for 30 years!