Lossless compression has high potential in the digital archiving context where you might want to have fully reversible compression and be able to give back the same file that was deposited (same checksum).
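The round-trip property described above can be checked directly: compress, decompress, and compare checksums. A minimal Python sketch using the stdlib `zlib` (lossless DEFLATE) as a stand-in for an image codec like lossless JPEG 2000 or JPEG XL:

```python
import hashlib
import zlib

def checksum(data: bytes) -> str:
    """SHA-256 digest used to verify bit-for-bit identity."""
    return hashlib.sha256(data).hexdigest()

original = b"TIFF-like payload " * 1000  # stand-in for a deposited file
compressed = zlib.compress(original, level=9)
restored = zlib.decompress(compressed)

# Lossless: the restored file has the same checksum as the deposit.
assert checksum(restored) == checksum(original)
```

The same checksum comparison is what lets an archive prove it can hand back exactly the file that was deposited.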
JPEG 2000 might be the only lossless option currently in use. Usually uncompressed files are still being used (mostly TIFF). If JPEG XL could gain some traction, it could be a good solution.
I don’t know how to feel about yet another RAW photo format. As a digital archivist, I fear years of conversion nightmares to open and sustainable formats ahead.
But most existing camera companies have their own proprietary RAW format. This is just adding “Apple iPhone camera RAW” to the list.
I shoot RAW just for convenience - I get slightly better chances to adjust stuff like exposure and white balance - but I publish in JPG and will not assume that anything else is viable long-term.
I do not challenge the usefulness of the RAW format. The plethora of proprietary RAW formats is not something we should be happy about. There have been efforts to find a common RAW-like format to please everyone (photographers, graphic designers, archivists) but they have not been successful yet.
I agree that RAW should not be considered a long-term preservation compatible format, but some content creators are not aware and digital archivists do get RAW files. With Apple’s firepower, I am afraid of the potential explosion of RAW files that will need to be taken care of.
I’ve been photographing RAW since… 2008? and I honestly can’t remember the format that Adobe promoted and that was used by a few manufacturers. Maybe the new Apple format will be so popular that it will sweep all before it and become the one true standard, or maybe the different manufacturers will continue using their own proprietary formats out of inertia.
My point is, one more entry in the long list of incompatible formats is neither here nor there. The ship sailed long ago.
I don’t agree. This isn’t like supporting documents written in WordPerfect for classic Macintosh. There’s simply no reason to think that compatibility with old RAW formats will ever go away in future versions of major software. Of course it’s possible that in 50 years Adobe will fade into obscurity. But at the very least you’ll always have LibRaw. Or if in 50 years the project is abandoned and the new hotness fails to support older cameras, the source code for LibRaw will still be in your digital archive—and I dare say archivists will be able to keep that final release compiling on future systems forever.
Existing RAW formats are a fact and digital archivists have to deal with them on a daily basis: should we keep them as they are? Convert them? My concern is this additional format from Apple, which could pose a new challenge, given Apple’s less-than-great track record on openness and compatibility with its file formats.
Sorry for the slight delay in replying but my answer to archivists is simple: always keep originals. Optionally keep translated files as well, if you like.
By keeping the original RAW, you maintain the ability to take advantage of the very latest RAW processing technology. I’ve recently updated to the latest version of DXO PhotoLab with DeepPRIME RAW de-noising and it’s utterly magical. Because I have kept RAW files I can go through decades old photo collections and re-process them. It’s like going back in time and re-shooting with two or three more stops of light.
Thank you for sharing your point of view. I agree that original RAW files should be kept, but as you know, some file formats do become increasingly difficult to interpret, and converted files may become, at one point, the only “understandable” version of the information.
According to various sources, ProRAW seems to be a 12-bit Linear DNG, which is good news and surprising since Apple is usually known for pushing new proprietary formats. In the end, this could prove to be a good move by Apple for long-term preservation.
I had no idea of the diversity of scancodes. I want one of those keyboards with the “coffee cup” key.
Would have been nice to credit all the photos you have used.
It would’ve been, yes, but for two problems:
I’m doing what I can within the constraints of Substack’s platform. That’s why pretty much wherever I’ve posted this I’ve said that I would do additional attributions, corrections and further resources in the next issue - I can then edit the post and link to the next issue with all the attributions. It’s also why I’ve posted further links to resources I found helpful in this submission.
So why post on substack?
Because it’s the least-worst environment I’ve used that does both newsletters and regular posting in a fairly seamless way. Yes, Substack has constraints, but some of those constraints are quite good at keeping me on a regular schedule and keeping the stuff I write under the point where emails would get cut off or rejected anyway.
Skeuomorphism is only part of the blog post. The author also talks about the font used on keyboard keys and how he is trying to revive it as a usable digital font.
I think that counts as skeuomorphism too, since his intention is to replicate the style of a particular physical object.
A nice personal story that covers a lot of issues with how we use and manage credentials. Authentication is a hard problem.
Authentication is a moderately difficult problem but with some good existing solutions. Revocation, however, is an incredibly hard problem. The only solutions that work well involve going via an intermediary.
With WebAuthn, I can have keys stored in a TPM or other security element processor and protected by biometrics. My OS never sees them; the coprocessor signs a nonce from the server, so even a full compromise can only log in live — it can’t exfiltrate my credentials. Unfortunately, the biometrics are not completely unhackable, so if I sell or lose a machine, I need to remove that as an authorised key on every service. For things that I sign into with GitHub, I can just invalidate the key with GitHub, but now GitHub is a mediator for these things and can see all of the things that I log into. I don’t mind this since I only use GitHub sign-in for things that are related to open source projects and that GitHub could track anyway, but it’s not a general solution.
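The nonce-signing flow above can be sketched in a few lines. This is a simplification: real WebAuthn uses an asymmetric key pair (the server stores only the public half), whereas this stand-in uses a symmetric HMAC purely to illustrate the challenge/response shape and why a fresh nonce defeats replay. All names here are hypothetical.

```python
import hashlib
import hmac
import secrets

# Stand-in for the key held in the security element; in real WebAuthn it is
# the private half of a key pair and never leaves the coprocessor.
device_key = secrets.token_bytes(32)

def server_issue_challenge() -> bytes:
    """A fresh random nonce per login attempt."""
    return secrets.token_bytes(32)

def device_sign(challenge: bytes) -> bytes:
    """The authenticator signs the server's nonce."""
    return hmac.new(device_key, challenge, hashlib.sha256).digest()

def server_verify(challenge: bytes, signature: bytes) -> bool:
    expected = hmac.new(device_key, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, signature)

challenge = server_issue_challenge()
signature = device_sign(challenge)
assert server_verify(challenge, signature)
# A captured signature is useless on the next attempt: the nonce changes.
assert not server_verify(server_issue_challenge(), signature)
```

Because each login signs a different nonce, even an attacker who records a full session gains nothing reusable — which is exactly why a compromise “can only log in live”.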
I have been living without any adblocker other than the one provided by Firefox for at least a year now. The positive side is that you quickly identify websites that abuse ads and slowly stop using them. Adblocking might not solve the root cause: the business model of many websites is based on ads. Start using websites that do not rely on ads, and the business model changes.
Such an end-of-’90s/beginning-of-’00s vibe in those virtual aquariums. Reminds me of Fish Life, the virtual aquarium released by SEGA: https://segafish.museebolo.ch/
I have recently been through a quest to consume all my content legally: no torrent, no streaming, no cracking… And I started to wonder if watching YouTube or even reading news with an adblocker could be considered illegal, so I started to surf without it.
What an eye-opener! The web is full of ads breaking the content into unreadable chunks or even breaking the layout (particularly on mobile), and I quickly identified the websites with minimal/smart/no ads, such as public news websites (the BBC in the UK, RTS for French-speaking Switzerland…) which you may already pay for through your taxes anyway. Surprisingly, some social networks are also not too bad, such as Twitter. Obviously, subscribing to some services also gives you an ad-free experience.
Although nice to watch, I would argue that it’s only the display part that’s at 300 baud, not the actual surfing. He’s logging into a remote Debian shell, and surfing from there.
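For a sense of what 300 baud means for the display alone, a quick back-of-the-envelope calculation (assuming standard 8N1 serial framing, i.e. 10 bits on the wire per character):

```python
BAUD = 300
BITS_PER_CHAR = 10  # 8N1 framing: 1 start bit + 8 data bits + 1 stop bit

chars_per_second = BAUD / BITS_PER_CHAR  # 30 characters per second
screen_chars = 80 * 24                   # one full text-mode screen
seconds_per_screen = screen_chars / chars_per_second

print(seconds_per_screen)  # 64.0 — over a minute to repaint one screen
```

So even with the actual surfing happening on the remote shell, every full screen of output crawls in over about a minute.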
You are right to point out this detail. To fully clarify:
Someone should make an equivalent that allows communicating with axolotls through your screen, Cortazar style.
We are working with the community to make the game and platform (very close to a Dreamcast) run in an emulator. The touch screen functionalities will probably be the most challenging.
That would be pretty awesome and a great way to preserve that history. You might want to consider contacting The Living Computer Museum in Seattle to see if they’d be willing to put up an exhibit around a simulated version.
Indeed. Actually, we will try to have it in our own exhibition. But obviously, our preservation work can be reused by other institutions.
Is it on a Naomi board?
It is closer to a regular Dreamcast I would say.
Seriously though, I understand its importance for computer history, but I’m not sure I understand who would have bought it at the time. Is it a very advanced platform for its day made into a single-purpose computer? What’s unique about it other than the early use of a touchscreen? Does anyone know what the list price was?
I searched for “Sega Fish Life” on duckduckgo and found Fish Life at Sega Retro which says ¥498000.
Software was sold at ¥19,800.
The platform was, as indicated on SEGA website, targeting public places:
“Perfect for use in the following locations:
¥498000 sounds like an obscene price that could hardly justify the purchase even for businesses. The software sounds affordable though, and if it was meant to be a platform for interactive displays, I definitely can see the appeal.
There’s one on eBay for $10k without the monitor or software. There are no completed ones on eBay so who knows how much one of these would actually go for. I had never heard of this until now.
We (Musée Bolo) have one of the 5 pieces of software that were released and the main unit. We do not have the touch screen however.
Quality and cost-effectiveness are significant issues for companies developing their own hardware. The GameBoy meets both of those compared to the alternatives.
Reminds me of this patent https://patents.google.com/patent/US5876351 relating to the use of the GameBoy in ECGs (something I believe was actually applied in the German market). There is even a presentation citing ECG software with custom hardware here https://webpages.uncc.edu/~jmconrad/ECGR6185-2013-01/Presentations/Chitale_Paper_Presentation.pdf
I often see people disregard these sorts of things because it was sold as an entertainment system for children, but the long and short of it is that the GameBoy was/is a decent ARM based machine with great battery life and built like a god damned tank!
I was just at the Louvre and they use a Nintendo 3DS for their entertainment guide. Cheap to replace, easy to program for, durable since it’s designed for kids to drop, built-in wifi and update mechanisms; probably the best choice you could make in proprietary hardware.
Actually, in 2011, Nintendo and Satoru Iwata (RIP) were really into the 3DS and interactive museums. Nintendo gifted 5,000 units to the Louvre and helped develop the software for its use as an audio guide. As a volunteer in a computer and video game museum, that sounds like the perfect solution for us. However, I am not sure we can convince Nintendo to be that generous to us.
I figured it had to be some kind of partnership but didn’t realize it was that intense. Maybe not Nintendo, but I’m sure you could find homebrew game developers who could set you up with a system (and used Nintendo DSes to hopefully keep the spend down) that would work for your museum.
Game Boy and Game Boy Colour were not ARM-based, they used a Z80 clone with some operations removed built by Sharp. The Game Boy Advance was the first ARM-based Nintendo handheld, based on the ARM7TDMI, and included a full Z80 chip for Game Boy backwards compatibility.
gbcpu is not “a Z80 clone with some operations removed”. This is a mistake that has propagated forever.
As far as we know, we (#gbdev) think it was based on some “core” Sharp has for many custom jobs. The actual chip name is LR35902 and should always be referenced as this, or as “gbcpu” or similar.
included a full Z80 chip
No. They included the gbcpu, actually some gbc revision. There is a good chance it will not play some original Game Boy games (I say “good chance” because I have no references, but I’m pretty sure this is the case).
I will be happy to answer any more questions. I love this little device for the nostalgia factor, its history, simple architecture, cheap price point, and retrocoding.
If people made software today like they did during those times, we would have blazing fast applications and way better battery life. It is sad in a way. Our phones could probably run so much longer.
I’m waiting for the day someone also creates an avr-like handheld.
Sorry for oversimplifying. As far as I understood though, the LR35902 is a Z80 derivative with certain operations missing (and a few others added). If this is wrong, could you point to some documentation of how exactly an LR35902 differs from the Z80/8080?
I recently got an ODROID GO, a Gameboy-like handheld with a backlit color LCD and an ESP32 (a really nice MCU with WiFi, Bluetooth and a bunch of GPIO), neatly packed in a sturdy enclosure.
If you google “The Ultimate Game Boy Talk”, there is a diagram in it that shows exactly what is missing and what is extra :)
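For one concrete difference: the LR35902 has a `SWAP r` instruction (CB-prefixed) that exchanges the upper and lower nibbles of an 8-bit register, which the Z80 lacks, while Z80 staples like the IX/IY index registers and the shadow register set are absent on the Game Boy CPU. A minimal Python sketch of what `SWAP` computes:

```python
def swap(value: int) -> int:
    """LR35902 SWAP: exchange the high and low nibbles of an 8-bit value."""
    value &= 0xFF  # registers are 8 bits wide
    return ((value & 0x0F) << 4) | (value >> 4)

assert swap(0xAB) == 0xBA
assert swap(0xF0) == 0x0F
```

Handy on real hardware for things like unpacking two 4-bit values stored in one byte without a chain of shifts.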
It is not a derivative though. It’s simply based on some internal core Sharp re-uses. As I said, this is misinformation that has been casually spread for a long time now.
Another extremely similar chip from Sharp is SM8521. http://pdf.datasheetcatalog.com/datasheet/Sharp/mXuyzuq.pdf
You’re right. I was thinking of the GameBoy SP with its ARMv7 :)
Not trying to be a smart ass, but ARM7TDMI is (confusingly) an ARMv4 core, ARMv7 is a newer architecture used by ARM Cortex cores.
I will never understand why ARM chose this confusing naming for their cores & architectures, but there it is.
I did not know that. It’s quite interesting to see the internal workings of these devices; especially with what they were able to achieve with them at the time.
Way off topic.
I feel like this is interesting opsec article, even though not technical. What would be the best topic/category for it?
Best topic/category? There isn’t, it’s not lobste.rs material.
That said, I thought it was interesting too. I even linked it to a few friends.
Computer and video game history museum in Switzerland: