*** PE modules
- Most modules are built in PE format (Portable Executable, the
Windows binary format) instead of ELF when the MinGW compiler is
available. This helps various copy protection schemes that check
that the on-disk and in-memory contents of system modules are
identical.
- The actual PE binaries are copied into the Wine prefix instead of
the fake DLL files. This makes the prefix look more like a real
Windows installation, at the cost of some extra disk space.
- Modules that have been converted to PE can use standard wide-char C
functions, as well as wide-char character constants like L"abc".
This makes the code easier to read.
- Not all modules have been converted to PE yet; this is an ongoing
process that will continue during the Wine 5.x development series.
- The Wine C runtime is updated to support linking to MinGW-compiled
binaries; it is used by default instead of the MinGW runtime when
building DLLs.
Wine is, quite honestly, one of the most impressive open source projects in existence. It’s even more impressive when you realize it’s built entirely through black-box reverse engineering - which of course makes sense, since they have to steer clear of legally questionable territory, but that doesn’t make it any less impressive.
Is there any way I can support the main developers? I can’t really do much right now, but I’d love to support developers once I have a stable source of income again.
Maybe you can contact one of the people here.
https://wiki.winehq.org/Project_Organization
I’m excited about WoW64 support coming soon! Wine is the only reason I have 32-bit libraries installed on any of my systems, and with WoW64 I’ll be able to remove those. It should also make running GPU-accelerated games easier, since you won’t have to manually install the 32-bit libraries for your GPU drivers.
There are still some games that GOG claims run on macOS, but which are bundled with a 32-bit WINE binary that doesn’t launch on any supported version of macOS. It will be great for these to start working again.
It blows my mind whenever I remember that macOS dropped 32-bit user space.
It blows my mind that they ever had it. IIRC, it was only the very first line of Intel Macs that had 32-bit processors; all the rest had 64-bit ones.
PowerPC64 was never well supported. The OS could run 64-bit processes, but a lot of the libraries (including AppKit) were available only as 32-bit binaries. This meant that, even if Apple had gone directly to x86-64, they’d have needed 32-bit compat libraries for Rosetta to use, because there were no graphical 64-bit OS X apps. Given that history, it made sense to ship the 32-bit x86 stack. It didn’t make sense to buy the Core 1 machines, though, and I put off replacing my G4 PowerBook for a while because they were obviously going to be short-lifetime devices.
Why’s that? It seems not significantly harder to target 64-bit x86 than 32-bit x86. (Need MAP_32BIT, but w/e.) I guess if you keep all the other details of ABI and layout the same, you can skip having a translation layer for them, but that’s a big if; and if you’re supporting 32-bit userspace, you need that translation layer anyway at least for your kernel interfaces.
(One of my maybe-get-to-it-someday projects is a compiler from 32-bit to 64-bit x86; motivation, obviously, being to play portal on my arm mac. Of course, I expect wrangling the binary format and translating the interfaces to be the hard part; actually compiling should be a breeze.)
Apple has tried very hard to do this because it lets them rely a lot less on emulation than otherwise. This was one of the big wins for Rosetta: it integrated with the run-time linker so that all of the public API functions that were called from emulated code went to thunks that jumped to the native version. Often, quite a small proportion of your code was running in emulation because all of Quartz, AppKit, and so on were native versions. By keeping structure layouts the same between different architectures, Apple was able to just pass pointers between native and emulated worlds in their thunks, making it very cheap to exit from emulation mode. The Arm EC ABI on Windows does the same thing.
Well, yes - that they jumped from PPC and just missed x86-64 is kind of amazing. But wasting 4 bytes for every pointer, for the vast number of programs that quite happily fit in 4 GB, is grotesque. I guess they’re a hardware company after all: software consuming more RAM means more hardware purchases. (I don’t see a conspiracy, just an alignment of incentives.)
It’s a question of software maintenance cost more than anything else. A lot of Apple things use PAC or stash things in the high bits of pointers, so having alternative code paths for 32-bit is painful. In Cocoa, the 64-bit switch also bumped CGFloat and NSInteger to 64 bits, which adds a lot more overhead than doubling the pointer size did, but also eliminates floating point rounding errors from a load of places that need workarounds in 32-bit mode (try scrolling in an NSTableView with a million rows on OS X 10.6).
Yeah. I work on an OS that’s 64-bit only (Fuchsia), and every time someone suggests maybe we could support just this one 32-bit use case, I have a strong case of the nopes.
I was going to say that you have no idea how jealous that makes me, but judging from the rest of your post, you probably do…
I work on the IPC system which we end up using for a lot (see: https://www.youtube.com/watch?v=ApHpmA1k73k) and being able to say “everything’s 64 bit, little endian” makes it almost tractable.
One of the biggest challenges is testing. Plenty of stuff will compile just fine across bit widths and endiannesses. It’ll even appear to work, but there can be deep bugs only seen in exceptional conditions.
Of course, there is one 32-bit architecture that is alive and well and growing, and that everyone seems so excited about: WASM.
Wine is getting more and more solid by the day. Many people still think it’s like it was 5 years ago, but the development pace has been incredible in the last few years. I presume they finally had enough features to have time to do refactoring, bringing much greater stability in the process.
Valve has paid for a lot of dev time over the last few years through their Proton project.
The most hilarious thing about Wine these days is that it’s getting so stable, and its support so broad, that it’s becoming one of the most stable and feature-complete GUI toolkits for Linux. I used to feel bad about running my old Windows software on Wine, but the honest truth is it just works - and, since Wine 7 and the redo of Common Controls to support theming, honestly often runs better than e.g. older versions of Gtk on a contemporary system. I’ve wondered whether, if I wrote brand-new Linux desktop software in 2023, targeting Wine wouldn’t honestly be one of the best options (even for “Linux-focused” software, just to get the benefits of the stable API and so on).
In the same vein, I have encountered some Windows programs that work happily on WINE on macOS but don’t work on modern Windows. I have more chance of running 20-25-year-old software with WINE than with any other toolkit, including the native toolkit for any of the major platforms.
Since we have WSL2 and the Windows Subsystem for Android, is there any chance we can get LSW2 and LSA (Linux subsystem for Windows / Android)? Shouldn’t it be easier to hack together with QEMU?
I don’t really understand what the new “PE modules” feature means. It sounds intriguing. Anyone have any links to explanations of what this means?
See the “PE modules” section of What’s new in Wine 5.0, quoted above; it gives a bit more context.