The problem isn’t (just) the desktop environment, it’s also the apps you’re running. This was why Jef Raskin’s designs, for example, were so radical. You can’t graft a new WM onto old programs; you’ll still have various sharp edges where the paradigms collide.
I’m more surprised that Windows NT had an official PowerPC port in the first place
Not only did it have one, it was supported for several versions, along with classic MIPS. Alpha even had Win2K builds right up until release.
This is the same guy who “ported” (the HAL and drivers, at least) Windows NT to Power Macs. Previously only the weird-ass IBM PReP hardware could run it, and we all know how much of a dent that made in the market. For that matter, it couldn’t run on SGI MIPS either. No wonder Microsoft jettisoned those ports.
The G4 is still my favourite iMac design. I have a 15” 1.25GHz still trucking along. It’s good enough to use as a terminal and for Classic apps, and it occasionally plays DVDs and CDs for the kitchen with its set of Pro Speakers. Plus, this size display wasn’t too heavy for the arm; the springs are still in great shape.
Maybe their “robotics” thing is a new take on this design? I’d buy it!
For me a debugger is most useful in those circumstances where you don’t know what code you have to instrument or put debug printfs in. In that case, throw some hardware watchpoints around and see what falls out, look at the backtrace, etc.
I like the flexibility, but from a retrocomputing viewpoint, there are some uncanny valley problems with some of these (fonts that aren’t quite right, layout that doesn’t quite match). The Amiga one seems pretty good but the Mac ones set off my inner pedant. I should dust off my old System 7 X11 WM where I tried very hard to be pixel exact.
I’d love to see X11 in full System 7 drag, but it’s probably not what they’re going for here. It’s like the difference between classic car enthusiasts and hot-rodders. Historical accuracy vs heritage-inspired aesthetics.
Yeah, the Aqua one looks like a mixture of Aqua, System 7, and early iOS.
Did Dikus do copyover? My academic performance was ruined by LPmuds instead, which have a VM. I always thought that was a more comprehensive approach, though the admins were perpetually worried about memory usage.
I dunno about the broader Diku family, but CircleMUD (a Diku derivative) definitely had a copyover patch available for it. LPMuds passed me by, but the driver/mudlib split with a VM does sound like a good thing.
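For anyone curious, the trick, as I remember the CircleMUD patch (a hedged sketch with approximated names, not the literal code): file descriptors survive an execv(), so the server writes each player’s fd to a file, re-execs its own binary in place, and the new process reads the file back and reattaches the sockets.

    /* sketch only -- names approximate the CircleMUD copyover patch */
    #include <stdio.h>
    #include <unistd.h>

    struct descriptor { int fd; const char *host; struct descriptor *next; };

    void copyover(struct descriptor *list, char **argv) {
        FILE *f = fopen("copyover.dat", "w");
        for (struct descriptor *d = list; d; d = d->next)
            fprintf(f, "%d %s\n", d->fd, d->host); /* fds stay open across exec */
        fclose(f);
        execv(argv[0], argv); /* same PID, new binary; if this returns, it failed */
    }

On startup, the new binary notices copyover.dat, re-adopts the listed sockets, and greets everyone instead of disconnecting them.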
I am so looking forward to the point where this kind of thing is easy and straightforward, and not a giant mess.
I want to run pretty much everything in a WASM sandbox. It’s 2025; why run untrusted code using anything else?
Clearly the JVM missed an opportunity here.
I think about this all the time. I’m not sure whether I want stuff that’s already out there cross-compiled to WASM or a whole new stack purpose-built for it, but yes, I do think it makes sense, or something like gVisor.
This is basically exactly what I am working on, using a sandboxing approach called LFI, which is more similar to NaCl than Wasm. I think the LFI approach is better than Wasm for native code sandboxing because it trades off architectural portability for better performance and higher compatibility (it supports exceptions, setjmp/longjmp, programs with hand-written assembly, multithreading/atomics, etc.). I’m especially excited about this for library sandboxing (for libraries written in unsafe languages), since in-process sandboxing techniques like LFI/Wasm have very low “IPC” costs for calling functions across the sandbox boundary.
Super interesting! How does the memory sandboxing work? Are all reads/writes modified to be offsets from the sandbox base?
Yes exactly – the compiler applies a transformation while assembling that turns loads/stores into a 64-bit base added to a 32-bit offset, guaranteeing that all loads/stores can only access up to 4GiB from the base. For example, on Arm64 it transforms ldr x0, [x1] into ldr x0, [x21, w1, uxtw], where x21 stores the base and is a reserved register. At runtime, a verifier analyzes the machine code and ensures that all loads/stores have this form [1] and that x21 is never modified. Since Arm64 has fixed-width instructions, the analysis is sound (it’s impossible to jump into the middle of an instruction). The x86-64 implementation is a bit more complex, since it requires a technique known as “instruction bundling” to solve this issue [2, 3].

[1]: Certain loads/stores can’t be encoded with the addressing mode used above. In those cases, a separate reserved register is used as the target, and the verifier ensures that this reserved register is only ever loaded with values computed as x21+wN (64-bit plus 32-bit). Similarly, the stack pointer can only be loaded with sandbox addresses, so accesses to the stack don’t have to be rewritten.
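To make footnote [1] concrete, here’s a hedged sketch of the rewrite for an instruction with no base+32-bit-offset encoding (x21 is the base register from above; x22 standing in for the extra reserved register is my placeholder):

    // original: ldp has no [base, wN, uxtw] addressing mode
    ldp x0, x1, [x2]

    // rewritten: the address is materialized via the one allowed pattern
    add x22, x21, w2, uxtw   // x22 = sandbox base + zero-extended 32-bit addr
    ldp x0, x1, [x22]        // verifier permits loads through x22 only

Since the verifier only ever sees x22 defined as x21 plus a zero-extended 32-bit value, the pair load is confined to the sandbox just like the single-register form.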
Thanks for the links; definitely curious how you dealt with the addressing modes on x86.
I’ve been toying with the idea of embeddable workloads on nommu linux + usermode linux, which has interesting characteristics, but a shared address space is one of the main problems. Something like LFI will be interesting to research.
On x86-64 you can get a similar effect (64-bit base + 32-bit offset) by using a segment register, for example %gs:(%edi). Often you can get away with just adding a gs segment selector and switching registers to 32-bit versions, but sometimes you have to rewrite into an lea followed by a simple %gs:(%edi)-style access.
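For instance (illustrative AT&T syntax, assuming the gs base has been pointed at the sandbox base via something like wrgsbase or arch_prctl; a real rewriter would use a scratch register rather than clobbering %edi):

    # original
    mov 16(%rdi,%rsi,4), %rax

    # rewritten: lea computes the 32-bit offset, gs supplies the base
    lea 16(%rdi,%rsi,4), %edi    # result truncated to 32 bits
    mov %gs:(%edi), %rax         # access = gs base + zero-extend(%edi)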
I think combining this with usermode virtualization approaches like UML would be very cool. Last time I looked at UML though it didn’t support SMP, which was disappointing. NoMMU+UML sounds like Nabla Linux – not sure if it still works though.
I still have a LocalTalk via PhoneNet segment here for some of the very old machines where other options like SCSI Ethernet are nearly impossible (as opposed to merely impractical). My first home network was PhoneNet, though I didn’t try running it over the apartment wiring, which was likely to have been pretty crappy anyway.
Me too. I have one classic mac with an ethernet dongle, and a little LocalTalk daisychain for the other one and the eMate, just like in the olden days. They don’t get out much. Ironically, my c64 can do ethernet directly.
I’ve used ethernet-over-coax to repurpose some of the old TV cable that was already in my house when I moved in, but eventually just got dirty and ran new cat6e where I wanted it. I recall staring wistfully at the old POTS phone cable switch block in my basement, knowing perfectly well that I can’t run ethernet over those wires.
Heroic. However, I’m surprised about the failure to build on recent macOS. That seems like a bug someone would have noticed.
When in doubt, go check the nixpkgs build status. macOS perl builds for 5.40 seem to be doing fine
It’s cool and all. This is a huge contrast to Substack, which nags on scroll.
But one of the features of HTML is hypertext (aka “linkability”). text/plain loses that ability; I had to select the URL you referenced, right-click, and “Go to https://…”.

For the record, HTML 4 from 1998 still works in today’s browsers.
At one time (not sure when this changed) Firefox would linkify URLs in plain text. This didn’t always work for obvious reasons but I imagine it was a recognition of a lot more non-HTML content with links in it, at least back then.
While the text itself is not linkified, if you select it, Firefox’s context menu has an “Open Link” option if the selection is link-shaped.
Yeah, clickable links may be the missing feature of raw text files viewed in a browser. But I don’t think it’s such a big problem:
At $199, that is very reasonably priced! – I wonder what the software support landscape for RISC-V looks like presently.
The software support isn’t bad, but the price is largely because this particular chip is no barnburner.
The Debian trixie effort is doing well.
https://buildd.debian.org/stats/graph-week-big.png
Neat, thanks for the stats – just what I was looking for!
I’ve been watching since RISC-V became a first tier Debian architecture and the big build started.
We’re only weeks (possibly just days!) away from surpassing ppc64 and becoming the 3rd largest.
Ubuntu works better than Debian, in my experience on the VisionFive 2.
Is it usable, speed-wise? For, say, Firefox? @deivid above mentions it’s slow, but that could also be IO or whatever.
I haven’t tried it for the desktop because:

A simple update of about 200MB for the server version of the OS takes about 1-2 hours.

The default Debian version detects only 4GB of the 8GB RAM installed, so I tried Ubuntu, and it works much better and faster. However, as far as I can see, it is only for IoT.

Switching to an NVMe SSD is not described well, so I used Google and more than one guide to do it.
You’d need a DeviceTree overlay, as there are multiple versions of the board. A minimal one is roughly a skeleton like this (illustrative; the exact compatible string and fragment contents depend on your board revision):
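    /* illustrative overlay skeleton -- match compatible to your board rev */
    /dts-v1/;
    /plugin/;

    / {
        compatible = "starfive,visionfive-2-v1.3b", "starfive,jh7110";

        fragment@0 {
            target-path = "/";
            __overlay__ {
                /* board-revision-specific nodes go here */
            };
        };
    };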
Build the overlay with dtc -@. In u-boot’s extlinux.conf, use the fdtoverlays command to specify your overlay.
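Concretely, something like this (filenames and paths are placeholders):

    dtc -@ -I dts -O dtb -o vf2.dtbo vf2-overlay.dts

and then in /boot/extlinux/extlinux.conf:

    label debian
        kernel /vmlinuz
        initrd /initrd.img
        fdtdir /dtbs
        fdtoverlays /vf2.dtbo
        append root=/dev/nvme0n1p2 ro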
Maybe I had to google it more, but it’s not described in the official documentation or the getting started guide. I know this will sound strange, but I believe the documentation could have been improved in the past two years, considering that this question has been answered several times on the forum.
TL/DR: my experience is not as cool as that of some of the other hardware nerds here, sorry about that =)
PS: I really like the Radxa community and their hardware; maybe https://forum.radxa.com/t/radxa-rock-5-itx-vs-milk-v-oasis/20960 will get more traction with users.
The Milk-V Oasis is currently suspended, as Sophgo, the company behind the SoC, was rejected by TSMC, citing US trade restrictions.
That doesn’t mean there won’t be similar SoCs from other companies (there are likely many in the pipeline), but that specific SoC is probably not happening at all.
The best recent news is that the SG2044 exists and has been benchmarked (results have been posted on Geekbench). That’s a follow-up to the SG2042, a 64-core monster, but using the C930 or whatever the name of the newer core is.
The C910 was RVV 0.7 plus proprietary extensions; the new one is standard-compliant RVV 1.0 and thus a big deal.
It’s about 3-4x slower than a Raspberry Pi CPU-wise. The I/O is not great either: even with an NVMe SSD, less than 100MB/s.
Which model of Raspberry Pi? They vary in performance by maybe 100x https://magpi.raspberrypi.com/articles/raspberry-pi-specs-benchmarks (that article predates the 5)
Considering the JH7110’s CPU performance falls between the rpi3’s and the rpi4’s, the stated figure suggests a comparison with the rpi5.
I’ve had a VisionFive 2 since early 2023. As of a few weeks ago, it is a Debian Trixie server with ZFS root. It runs Debian’s standard kernel, which is a much better situation than with most non-x86 boards out there.
But before that, I played with it plenty as a desktop. It’s more or less like this:

CPU is faster than the rpi3’s, but slower than the rpi4’s.

IO is much better than the rpi4’s.

GPU is fast, 2-3x the rpi4’s. An open driver is being worked on, with Imagination Technologies funding the effort, but it isn’t ready. Running the proprietary driver is a major pain.
Firefox and Chrome are both fine. With experimental video decode support, YouTube is smooth.
Almost everything is already upstreamed from a server POV, but desktop use needs patches for at least HDMI support, or the vendor’s old kernel for GPU or codec acceleration.
It’s amazing what kind of hardware we have available nowadays :)
Thanks for the update!
Nice restoration. Well done. I bet the CPU-socket accelerator problem shows up on other machines too, so I’ll add it to my list of things to check if I run across a DOA system with an accelerator replacing the original processor.
That said, my “daily driver” Amiga is an A4000T with a QuikPak ’060 …
It’s notable, but not entirely surprising, that the breadth of undocumented behaviour in the 68K family has yet to be fully elucidated. I suspect there are other corner cases lurking that something, somewhere, depends on.
If memory serves, didn’t early Macs use illegal instructions for what would probably be considered a syscall interface today? Obviously that’s not little-known, at all, but may be little-remembered.
I can’t remember the official description well enough to explain it thoroughly, and I think we called them “A Trap” instructions, which is not a great term for web searches. I get mostly Admiral Ackbar memes. There was even a bomb dialog that said “Unimplemented Trap”. The OS relied on them, and OS extension authors would hook in ahead of the OS reliance on them using “INIT” resources.
A bit of searchengineering led to [macintosh 68000 interrupt] and [macintosh 68000 vector] and this non-mac reference card has a terse paragraph or two explaining traps https://os9projects.com/CD_Archive/TUTORIAL/REF/CARD/68000_Ref_Card.pdf
!!! DO NOT XEROX <— That, coupled with the tech content of the card, brought back a flood of memories.
Thank you for finding that! I used to have that exact reference card hanging next to my desk in the early 2000s. It was up there to help me troubleshoot builds for Handspring PDAs. Macs had, of course, long since moved on from 68k. But there was a window of time when I looked at that card frequently. I’m pretty sure I bought it at SoftPro, in Burlington, MA. Where I also used to buy OpenBSD CDs.
I would be hesitant to call A-line traps “illegal instructions” - the construct is perfectly valid and documented, even if not every system defines every possible one of them (the “Unimplemented trap” message you saw just indicates that the exception handler doesn’t know that trap number). The same general mechanism is used for FPU instructions, which would be diverted to software using a similar method if an FPU were not present (i.e., F-line instructions).
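To make that concrete: a Toolbox call was just a word in the $Axxx range dropped into the instruction stream, which the 68k refuses to execute natively, so it raises the line-1010 exception and the trap dispatcher takes over. This is from memory, so treat the exact trap word as approximate:

    _SysBeep  EQU   $A9C8       ; Toolbox trap word (bit 11 set = Toolbox)

              DC.W  _SysBeep    ; CPU raises the line-1010 exception; the
                                ; dispatcher decodes the low bits as a trap
                                ; number and jumps through the trap table

Patching an INIT in ahead of the OS was just a matter of replacing the trap table entry for that number before anything else looked it up.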
That is a very reasonable perspective.
When I had to learn assembly in school, we learned it on a weird simulated CPU, then 6502, then 68000. Without comment on the validity of the technique, my understanding of what was going on under the hood was that the 68k CPU was treating it as an illegal instruction, then the software was trapping it and keeping things moving. But it’s been a really long time, and may be just an artifact of how the professor who explained it to me understood things.
Hopefully wiring up some serial cables for my DEC Professional 380.
When I was in school I used Metrowerks CodeWarrior a bit. I guess NXP owns it now but I don’t know if they kept the toolchain or switched to something more standard.
I suppose this only counts as commercial, not obscure - it seemed to be the default in certain circles 20-25 years ago, afaik among them Mac OS 9 and/or early OS X. I want to say I tried it on Windows, but I might be misremembering.
*nod* After the 680x0-to-PowerPC switch, CodeWarrior ate up the market formerly occupied by Apple’s Macintosh Programmer’s Workshop and Symantec’s THINK C and THINK Pascal and stayed in that role until Apple released XCode for OS X and killed off the market for paid toolchains.
(IIRC, it was because CodeWarrior was better than MPW or THINK at producing optimized PPC machine code but my memory is hazy so don’t take my word for it. I was on Windows during that period.)
I want to say I tried it on Windows, but I might be misremembering.
Not only was it available for Windows, some versions included the Mac OS and Windows versions in the same SKU, and the ones I collected for my retro-hobby Macs advertise support for Windows→Mac OS cross-compilation.
I used to wear a t-shirt for this compiler in high school.
I believe Nintendo’s GameCube also used CodeWarrior for its SDK. I guess because it was a good PPC code optimizer and the GameCube used a modified PPC 750.
It did.
If I’m remembering my history right, CodeWarrior was the IDE to use for Mac OS (not OS X) back in the day, which would have been targeting the Motorola 68k. It looks like Motorola bought Metrowerks in 1999. Motorola spun out their semiconductor group into Freescale in 2004. Meanwhile, NXP spun off from Philips in 2006. Both IPO’d, went through a bunch of weird stuff with private equity, and eventually in 2015 decided to merge. Seems that CodeWarrior just came along for the ride!
Still use it on BeOS/powerpc and of course classic Mac OS.
The problem is that everything else is calling ?etenv() too, not just your own code. It’s hard to avoid the footgun when everything seems to have a trigger.

The beauty of Perl lies in its backward compatibility. Despite our codebase spanning over two decades, upgrading Perl across versions has never broken our code. In contrast, with other languages, we’ve observed issues like API changes causing extensive refactoring or even rewrites.

I think that’s a worthwhile feature in a programming language. I find it interesting how there are different approaches to it. IIRC Perl has that “use vX.Y.Z;” pragma for this.
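A minimal sketch of how that looks, assuming a reasonably recent perl:

    use v5.36;    # refuses to compile on anything older than perl 5.36,
                  # and enables that version's feature bundle

Older code without the declaration keeps running on new perls; the pragma mainly guards new code against old interpreters.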
I ran into an edge case once with signal handling that occurred around 5.14 or so. But that’s about the only breaking change I can remember hitting my own scripts since I started using Perl 5 at 5.003 (though I started with Perl at 4.036, and even most of those scripts still generally worked when we migrated to 5).
Another +1 for the M1 MBA (this one has 16GB of RAM). I’ve been very happy with it and it doesn’t have dat notch. It was definitely a big step up from the Haswell MBA I had before, though I did like the 11” form factor. We were in the Apple store the other day to get my wife an iPad Pro for work and I played with the current MBAs and MBPs a little, but for my typical daily driver tasks it doesn’t seem I’d get a lot of improvement.
This is a nice 6502 algorithm and impressively efficient. For some fun alternatives, there’s also the Mandelbrot Construction Set for the C64 (auf Deutsch: https://www.c64-wiki.de/wiki/Mandelbrot-Construction-Set ), which uses the 1541 disk drive as a coprocessor, and of course my favourite implementation is the distributed Mandelbrot plotter that uses a Mac front end and an AIX Apple Network Server as backend, sent over Apple events (disclosure, my article: https://oldvcr.blogspot.com/2023/11/the-apple-network-servers-all-too.html ).
This list of deficiencies reads like a slightly obscured, writ-large version of “don’t roll your own crypto”.
Ah, but don’t you know, they “aren’t” rolling their own crypto, per their FAQ:

Is Session rolling its own cryptography?

No, Session does not roll its own cryptography. Session uses Libsodium, a highly tested, widely used, and highly-regarded crypto library.

Libsodium is completely open-source.
heavily rolls eyes
I like libsodium. It’s a great library of cryptography algorithms.
It doesn’t come, batteries included, with a full protocol for end-to-end encryption built-in. And so anyone who uses libsodium for e2ee is necessarily rolling their own crypto.
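To make that concrete, here’s a minimal sketch (the calls are libsodium’s real API; the final comment is the part left entirely to you):

    #include <sodium.h>

    int main(void) {
        if (sodium_init() < 0) return 1;

        unsigned char pk[crypto_box_PUBLICKEYBYTES], sk[crypto_box_SECRETKEYBYTES];
        crypto_box_keypair(pk, sk);

        /* libsodium gives you a correct one-shot encryption primitive... */
        unsigned char c[crypto_box_SEALBYTES + 5];
        crypto_box_seal(c, (const unsigned char *)"hello", 5, pk);

        /* ...and nothing else: session establishment, ratcheting and
           forward secrecy, key rotation, multi-device handling, identity
           verification -- the actual e2ee protocol -- are all on you. */
        return 0;
    }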
I’ve only heard of Session in passing over the years. This result is not surprising.
i initially thought you were being overzealous, until i read the first paragraph of the article
using libsodium to implement the documented signal protocol, i think would be fine. it does have risks, and should have some review before relying on it, but i wouldn’t really call that “rolling your own crypto”. and having a clean-room re-implementation would probably be good for the ecosystem
…but that’s not what they’re doing. they’re implementing their own protocol, and a cursory glance at their reasoning suggests that they want a decentralized messenger and care about security as an afterthought. which would be fine if they branded it that way, and not as an alternative to signal
This may be a little off topic, but I dislike the phrase “don’t roll your own crypto”.
“Don’t roll your own crypto” is a phrase that is in itself very ambiguous.

I’ve seen this phrase thrown around both at people just using GnuTLS in C and at people implementing some hash algorithm themselves. One I find very valid, while the other is an area where I would just use libsodium.
There are so many layers in crypto where you can apply the phrase that I find refuting (their claims with) this phrase in itself is meaningless unless you know what the authors intended. In this case it may as well be claims regarding the resistance against certain side channel attacks.
I’ve always asked myself how I can identify the moment I arrive at a skill level where I’m allowed to “roll my own crypto” depending on each possible interpretation people are using.
edit: added intended missing meaning in (…)
There are so many layers in crypto where you can apply the phrase that I find refuting this phrase in itself is meaningless unless you know what the authors intended.
Absolutely. And the advice, taken to its logical extreme, would result in zero cryptography ever being developed.
It’s supposed to be along the same lines as the advice lawyers give their kids who are considering a career in law. They say, “Don’t be a lawyer,” and if the kid isn’t dissuaded and can argue why they’d succeed as a lawyer, then maybe they should be one.
“Don’t roll your own crypto” is along the same lines. I currently do it professionally, but I also have a lot of oversight into the work I do to ensure it’s correct. Detailed specifications, machine-assisted proofs, peer review, you name it. I never start with code; I always start with “here’s why I’m doing this at all” (which includes closely related ideas and why they do not work for the problem I’m solving) and a threat model for my specific solution.
It can take months, or even years, to get a new cryptography design vetted and released with the appropriate amount of safety.
When it comes to cryptography protocol design, the greatest adversary is often your own ego.
I always read the advice as an incomplete sentence, which ends with “unless you know what you’re doing”, which is coincidentally like other safety advice, right? “This thing that you’re about to do is risky and dangerous unless you know how to do it, and in some cases, even if you do. Avoid doing it if you can. Exercise caution and care otherwise.” No?
I always viewed it as “don’t ship your own” - feel free to roll your own to play around with, but be cautious and get some review before putting it into production.
One piece of advice I’ve heard is: Before trying to build crypto, learn how to break crypto. Cryptopals is a good resource for that. It’s mindbending to learn about all the weird ways that crypto can fall apart.
At least one of my online friends agrees.
I think it’s more like, don’t roll your own crypto: don’t do it by yourself, collaborate with other experts, get lots of review from many angles.
I remember that many moons ago an expert in security and crypto published a list of cryptographic choices that should be your defaults. I wonder if this rings a bell for someone; it would be nice to recover that document, post it here, and see what this community would say in terms of freshening it up.
I might be wrong, but I think in the beginning the phrase “don’t roll your own crypto” meant “do not try to come up with cryptographic algorithms on your own; use something tested, made by someone who knows what they are doing”. I think the best way to describe what Soatok is putting forward is “don’t skip the default practices of security” or “don’t wave away cryptographic defaults in the name of a watered-down threat model”.
But maybe I am too off?
You might be thinking of “cryptographic right answers” from ’tptacek (2018 version, 2024 post-quantum version)
YES!!! You found it! Thank you @zimpenfish!
There’s also What To Use Instead of PGP from the very blog this Lobsters thread is about.
It was also posted on Lobsters.
Maybe I’m paranoid, but it reads to me like a plausibly deniable honeypot.
I think that’s a very reasonable concern. Particularly in light of the very first issue @soatok cites: the removal of PFS from the protocol. I’m on record as being skeptical of the “just use Signal” advice that seems frequently deployed as a thought-terminating measure in discussions about encrypted communication, but if I wanted to make something that was like Signal but really a deniable honeypot, Session makes the same choices I would. It seems like a downgrade from Signal in every important way.
Unrelated: the “Ad-blocker not detected” message at the bottom of the post made me laugh quite a bit. I use some tracker-blocking extensions (and browser configs) but I don’t run a real ad blocker in this browser. But many sites give me an “ad blocker detected” message and claim I need to turn off my non-existent ad blocker to support them. This site telling me I’m not running enough of one struck me as very funny.
Sure, it’s plausible.
But I find that basically every time Soatok (or any security researcher) exposes an application that advertises itself as “secure/private” on the box for its glaring bad practices, people (myself included) immediately go to “this is so stupid it has to be a honeypot”.
Are they all honeypots? (genuinely, maybe yes), or is it just stupidity?
i used to think that people sending a knockoff paypal payment link from a TLD i’ve never heard of was an obvious scam
then i tried to buy a monitor from hewlett packard via customer support, and i found out who these scammers are imitating
I would posit stupidity. Previous honeypots that weren’t takeovers of server operators have been somewhat targeted: Anom required a buy-in of the phone (as evidence you’re a criminal), Playpen required you be a pedophile (or at least, hanging out online with pedophiles) to be caught in the net, Hansa was a drug market, etc. Creating a general-purpose backdoored app to en masse catch criminals seems to cast quite a wide net when the same arrest portfolio can probably be gathered by doing the same thing to Twitter DMs with a backdoor and a secret court order. I wouldn’t put it past law enforcement but it seems like a mega hassle vs. targeted honeypots and backdoors.
If it were a honeypot (or backdoor), it’s certainly too much hassle for legitimate law enforcement purposes like the ones you described. You’d want this for someone you couldn’t reach through normal court (even a secret rubberstamp like FISA) channels.
This would be more like something you’d use for getting information from a group operating under some legal regime that’s not friendly to you gathering that information. Getting it in place, then convincing the group you were interested in to migrate off, say, Telegram, might be one approach.
The interesting thing in this case (IMO) is that the fork removes things that were:

Already implemented by an easy-to-reuse library

Not costly in terms of performance or cognitive overhead

Seemingly beneficial to the stated goals of their product
and without articulating the upside to their removal very persuasively. Sure, stupidity is always a possibility. But it feels more like they want to add some features that they don’t want to talk about. On the less-nefarious end of that spectrum, I could imagine that it is as simple as supporting devices that don’t work with the upstream, but that they don’t want to discuss in public. It’s also easy to imagine wanting to support some middle scanner-type box on a corporate network that PFS would break. But it could also be something like being able to read messages from a device where you can maintain passive traffic capture/analysis but can’t (or would prefer not to) maintain an ongoing endpoint compromise without detection. e.g. You have compromised a foreign telco and can pull long term key material off a device when its owner stays in your hotel, but you can’t or won’t leave anything running on there because the risk of detection would then be too high.
That’s just all speculation about when it might serve someone with an effectively unlimited budget to do something like this. Stupidity is certainly more common than such scenarios.
Hence “plausibly deniable.”
Only the first bit could charitably be attributed to “don’t roll your own crypto”. The rest was just obtuse idiocy or malevolence. Calling the library-provided “encrypt this chunk with symmetric encryption using this key” and then providing a public key… that’s not about rolling your own crypto.
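For anyone who hasn’t seen that pattern, a minimal sketch of the kind of misuse described, using libsodium’s real API (illustrative only, not Session’s actual code):

    #include <sodium.h>

    int main(void) {
        if (sodium_init() < 0) return 1;

        unsigned char pk[crypto_box_PUBLICKEYBYTES];   /* 32 bytes, PUBLIC */
        unsigned char sk[crypto_box_SECRETKEYBYTES];
        crypto_box_keypair(pk, sk);

        unsigned char n[crypto_secretbox_NONCEBYTES];
        randombytes_buf(n, sizeof n);

        unsigned char m[] = "hello";
        unsigned char c[crypto_secretbox_MACBYTES + sizeof m];

        /* crypto_secretbox_easy() wants a 32-byte SECRET key; a public key
           happens to be 32 bytes too, so this compiles and runs fine --
           while "encrypting" under data an attacker already knows. */
        crypto_secretbox_easy(c, m, sizeof m, n, pk);
        return 0;
    }

The library can’t stop you from handing it the wrong kind of key; no amount of “we use libsodium” fixes that.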