I first published about owi here about two years ago. Back then it was zapashcanon’s sandbox for his PhD on OCaml GC implementation for Wasm. owi has since evolved into a symbolic execution platform for Wasm.
This is especially interesting as it lets you perform cross-language symbolic execution. It has already been used to identify a bug in Rust’s core library (see https://hal.science/hal-04627413 for more details).
Looking at today’s instant messaging solutions, I think IRC is very underrated. The functionality of IRC clients made years ago still surpasses what “modern” protocols like Matrix have to offer. I think re-adoption of IRC is very much possible simply by introducing a good UI, nothing more.
About a year ago I moved my family/friends chat network to IRC. Thanks to modern clients like Goguma and Gamja, and the v3 chathistory support and other features of Ergo, this gives a nice modern-feeling chat experience even without a bouncer. All of my users other than myself are at basic computer literacy level; they can muddle along with mobile and web apps, not much more. So it’s definitely possible.
I went this route because I wanted something that I can fully own, understand and debug if needed.
You could bolt on E2EE, but decentralization is missing—you have to create accounts on that server. Built for the ’10s, XMPP + MUCs can do these things without the storage & resource bloat of Matrix + eventual consistency. That said, for a lot of communities IRC is a serviceable, lightweight, accessible solution that I agree is underrated for text chat (even if client adoption of IRCv3 is still not where one might expect relative to server adoption)—& I would 100% rather see it over some Slack/Telegram/Discord chatroom exclusivity.
I dunno. The collapse of Freenode 3 years ago showed that a lot of the accounts there were either inactive or bots (because the number of accounts on Libera after the migration was significantly lower). I don’t see any newer software projects using IRC (a depressingly large number of them still point to Freenode, which just reinforces my point).
I like IRC and I still use it but it’s not a growth area.
There’s an ongoing effort to modernize IRC with https://ircv3.net. I would agree that most of these evolutions are just IRC catching up with features of modern chat platforms.
Calling IRCv3 an “ongoing effort” is technically correct, but it’s been ongoing for around 8 to 9 years at this point and barely anything has come out of it - and definitely nothing groundbreaking that IRC would need to catch up to the current times (e.g. message history).
The collapse of Freenode 3 years ago showed that a lot of the accounts there were either inactive or bots (because the number of accounts on Libera after the migration was significantly lower).
I don’t know if that’s really the right conclusion. A bunch of communities that were on Freenode never moved to Libera because they migrated to XMPP, Slack, Matrix, Discord, OFTC, and many more alternatives. I went from being on about 20 channels on Freenode to about 5 on Libera right after Freenode’s death, and today that number is closer to 1 (which I’m accessing via a Matrix bridge…).
I guess it just depends what channels you were in; every single one I was using at the time made the jump from Freenode to Libera, tho there were a couple that had already moved off to Slack several years earlier.
It’s “opt-in” in the sense that if you send an OTR message to someone without a plugin, they see garbage, yes. OTR is the predecessor to “signal” and back then (assuming you meant “chats” above), E2EE meant “one-to-one”: https://en.wikipedia.org/wiki/Off-the-record_messaging – but it does support end-to-end encrypted messages, and from my memory of using it on AIM in the zeros, it was pretty easy to set up and use. (At one point, we quietly added support to the hiptop, for example.)
Someone could probably write a modern double-ratchet replacement, using the same transport concepts as OTR, but I bet the people interested in working on that are more interested in implementing some form of RFC 9420 these days.
I’m a bit surprised to see a Bus Pirate comeback after all these years. I’m impressed by the attention to detail (hydro-dipped connectors, custom plastic injection, etc.).
I’ve been really curious to try out a Glasgow but they’re definitely in a different price category ($37 vs $199). I’ve kept a BPv4 in my field toolbox for years. I think I did manage to smoke one and also lost one somewhere along the way. At that price point I’m not too upset if that happens. I’m also not so sure about what the UX is like comparatively; one of the things that’s awesome about the BP is that you basically only need a terminal emulator installed and you’re ready to roll for easy debugging activities. On Linux and OSX I pretty much always have minicom installed, and on Windows it’s easy enough to get the 200 kB or whatever it is PuTTY binary (if it’s not already installed) and have a serviceable serial terminal as well.
To me, that’s how big the TKey is: its main idea (deriving an application-specific secret) obsoleted the TPM. Before this I could kind of forgive the TPM’s complexity, but now any justification for this pile of bloat is gone: we have a better way.
FYI, we’re aiming to ship the first CHERIoT chips in 2024. I think it would be a much better platform for your ideas because you can properly compartmentalise access to keys and so on (we’re most likely using the GF 22nm process, so will have non-volatile storage on the die, which will let you build requirements about persistent key storage into your code signing rules). If you’ve got an Arty A7, you can play with our prototyping platform now, but I hope we can get you one of the chips once they’re packaged.
I think it would be a much better platform for your ideas because you can properly compartmentalise access to keys and so on
Yes, I remember the long thread where you eventually sold me on compartmentalisation, which the TKey as such cannot do. I may be able to implement that on the unlocked version, but (i) the tiny FPGA it runs on is already packed full, and (ii) I have yet to write a single line of Verilog. Not to mention the other safety features of CHERIoT, so, yeah, colour me enthused.
If you’ve got an Arty A7, you can play with our prototyping platform now,
I don’t, though maybe I’ll purchase one if (once?) we have a decent free software toolchain for it. But first, I need to write some hello-world blinking LED on my TKey unlocked and learn how to FPGA.
but I hope we can get you one of the chips once they’re packaged.
I don’t, though maybe I’ll purchase one if (once?) we have a decent free software toolchain for it.
We have a F/OSS toolchain for the software bits (which are the only bits shared between the prototyping platform and the final version).
It looks as if openFPGALoader supports the board, which is a really useful discovery because I’d been wondering how we’d distribute updated bit files to partners (installing Vivado is a huge amount of pain and suffering, since it requires waiting for a human to verify your export compliance status and does not give helpful error messages that this is the reason for the problem).
I am currently using Vivado in a Docker container with Rosetta. This is fine for building, since that can run from the command line, but programming the FPGA requires using X11 to display the (awful Java) GUI and running a little program on the Mac that exposes the USB programming interface via their remote cable protocol. This is a lot of string and duct tape. Being able to just run openFPGALoader on my Mac will be a huge improvement.
F4PGA supports the board, but I don’t know how much integration work is needed to make our prototyping platform build with it. It looks like it should support our existing constraints files. I’d love to make that a supported flow.
Hard to compare timing exactly (we’re not yet forcing a fixed seed, so the timing is pretty variable across runs), but it seems to take me about as long on my laptop as it takes Kunyan on the (x86-64) build server that he’s using to build his bitfiles. Producing a 20 MHz bitfile for the CHERIoT Ibex took me <10 minutes. The 33 MHz one takes about 45 (we’re pretty close to the edge for timing at 33 MHz), but took 15 the first time I ran it. It’s single-threaded for almost the entire run, which is annoying (11 cores on my laptop sitting idle, even with max threads set to 12, and the wall-clock time is more than half the CPU time).
After @Loup-Vaillant’s comment, I played a bit with the open source FPGA tools. A lot of the design is in Tcl files and so I couldn’t work out how to translate them into something that F4PGA could understand (it seems to assume that all of your build is either Verilog or constraints. Possibly the Tcl is setting things that could be expressed in the constraints file somehow?), but loading with openFPGALoader was much faster than using the Vivado GUI, so I can now throw that away and just build and load from the command line.
Oooh, CHERIoT chips are coming next year? Any idea about pricing, either for just the chips or for devboards? I would really like to get my hands on a real CHERI system.
Pricing isn’t finalised, we’re working on the exact feature set (driven by customer demands, if you know anyone who might want to buy a lot of them then let me know!). I’m aiming to get close to $1 for v2 in bulk, but v1 will be more expensive. Much cheaper than the FPGA dev boards though. We’re aiming to sell both bare chips and M.2 MicroModules, and probably use an existing dev board that can house the M.2 (there are a bunch of nice off-the-shelf ones).
We’re using the Arty A7 as a prototyping platform. It currently runs the CHERIoT Ibex at 33 MHz and has a working Ethernet interface (I’ll be open sourcing the compartmentalised network stack in January, on my desk it connects to my home network and happily works with IPv4 and v6 but currently has almost everything shoved into one big compartment). The ASIC should be 200-300 MHz, somewhat dependent on the power envelope. The A7 is only about $300, which is fairly cheap for a dev board, but more than an order of magnitude more than an ASIC for final deployment.
Are M.2 MicroModules the same as the SparkFun MicroMod system? Are there other compatible suppliers?
I am vaguely interested in higher-density connections for MCU dev boards than the usual 0.1 in pitch pads/pins, especially if there are existing ecosystems I can use. (Tho right now I am more interested in FPC ribbon cables than direct board-to-board connections.)
Yup. I think that’s the system the hardware folks have been looking at (I stop at digital logic. Anything that involves physics is someone else’s problem).
driven by customer demands, if you know anyone who might want to buy a lot of them then let me know!
Alas, I only know hobbyists and small scale makers who might want to buy tens of chips on average. I know a few people who work at large companies that could conceivably ship large volumes, but that’s probably a bit too indirect :p
I’m aiming to get close to $1 for v2 in bulk
That’s very reasonable. Once V2 is available I’ll pester some of the local electronics distributors to stock a few reels. (ordering small quantities from them tends to be much cheaper than small quantities from international distributors, IME)
We’re aiming to sell both bare chips and M.2 MicroModules, and probably use an existing dev board that can house the M.2 (there are a bunch of nice off-the-shelf ones).
I hadn’t heard of this before. Is it the same as what SparkFun calls MicroMod? (that was what I found when googling, anyway). Do you have any specific recs for a nice one?
I wonder if anyone might want to produce boards in the RPi Pico form factor, which I’ve found quite convenient.
I’ll be open sourcing the compartmentalised network stack in January, on my desk it connects to my home network and happily works with IPv4 and v6 but currently has almost everything shoved into one big compartment
I’m looking forward to seeing it!
The A7 is only about $300, which is fairly cheap for a dev board, but more than an order of magnitude more than an ASIC for final deployment.
That’s not so bad, I think I’ll get one of those if I find a job anytime soon.
so will have non-volatile storage on the die
How much, if you don’t mind me asking? Is it enough to reasonably store firmware, or smaller and suitable just for application data?
Edit: final question. How good is it at generating entropy on-chip?
Once V2 is available I’ll pester some of the local electronics distributors to stock a few reels.
Note, v2 will not exist unless we sell enough of v1, though hopefully most of those can go to military and critical infrastructure providers, who are willing to pay a (modest) premium for security features that they can’t get anywhere else. My goal has always been to approach no-security microcontrollers in price though.
How much, if you don’t mind me asking? Is it enough to reasonably store firmware, or smaller and suitable just for application data?
Still finalising that a bit. It looks as if we have quite a bit of area to play with because we’re pad-limited (we need area along the edge of the chip to solder wires to; the smallest chip we can make that has space for all of the external connections we need leaves loads of space in the middle for logic). I really hope we can get enough NVRAM for A/B firmware with execute in place, since that eliminates the need for most secure boot complexity (you validate signatures when writing the B firmware, and grant write access to it and the boot toggle only to the compartment that will do that), which also gives us more crypto agility since we can move to quantum-safe signature algorithms when we need to.
How good is it at generating entropy on-chip?
There will be an on-chip entropy source, which should be adequate for crypto operations (not sure what its sample rate will be yet).
Note, v2 will not exist unless we sell enough of v1,
Good luck!
I really hope we can get enough NVRAM for A/B firmware with execute in place, since that eliminates the need for most secure boot complexity
That’d be really great.
There will be an on-chip entropy source, which should be adequate for crypto operations (not sure what its sample rate will be yet).
Even a not so good sample rate should be enough to seed a CSPRNG at boot, and depending on the threat model and how hard it is to read NVRAM externally, perhaps saving a seed until the next boot?
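The seed-persistence idea can be sketched in a few lines of Python. Everything here is an assumption for illustration: `SEED_FILE` stands in for on-die NVRAM, and the caller supplies whatever hardware entropy is available at boot:

```python
import hashlib

SEED_FILE = "nvram_seed.bin"  # stand-in for on-die NVRAM

def boot_seed(hw_entropy: bytes) -> bytes:
    # Mix fresh hardware entropy with the seed saved at the previous
    # boot, so even a slow entropy source yields a strong seed early.
    try:
        with open(SEED_FILE, "rb") as f:
            saved = f.read()
    except FileNotFoundError:
        saved = b""  # first boot: hardware entropy only
    seed = hashlib.sha256(saved + hw_entropy).digest()
    # Persist a *derived* value, never the working seed itself, so
    # reading NVRAM after the fact doesn't reveal past outputs.
    with open(SEED_FILE, "wb") as f:
        f.write(hashlib.sha256(seed + b"next-boot").digest())
    return seed
```

Whether the saved value needs further protection depends, as noted, on how hard the NVRAM is to read externally.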
Even a not so good sample rate should be enough to seed a CSPRNG at boot
Yup, that’s my expectation. It gets a little bit interesting with multiple compartments having to trust the random number generator but that’s no different from multiple processes on a conventional OS trusting /dev/random (actually, better, since they know exactly the code in the CSPRNG compartment and know nothing else in the OS can tamper with its internal state).
I think their website is missing a tl;dr description of the hardware, so let me try it: the main component is a modified PicoRV32-based SoC running on an iCE40 FPGA. That FPGA is interfaced to USB via a CH552 micro-controller (cheap 8051 that natively supports USB). They seem to have a custom hardware RNG.
Betrusted (FPGA-based secure device) took a different path: they both have an avalanche noise generator and an in-FPGA TRNG similar to the one found in Tillitis (see https://www.bunniestudios.com/blog/?p=6097).
I’m not sure what should be considered acceptable in that domain.
I would regard that as a hardware entropy source, rather than a hardware random number generator. It looks great as an input into Fortuna (or Yarrow if you enjoy doing difficult maths), not as a replacement.
I confess I was not convinced by their exact technique: if there’s a bias, even very slight, in the RNG, it is liable to affect every single bit the same way. So instead of using it directly I would rather accumulate somewhere between 256 and 512 bits from that source, then hash it with BLAKE2s to obtain 256 bits I’ll be pretty sure will be close enough to uniformly random.
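The accumulate-then-hash step described above is easy to sketch in Python, with `os.urandom` standing in for the raw (possibly biased) hardware source:

```python
import hashlib
import os

def condition(raw: bytes) -> bytes:
    # Accumulate 256-512 raw bits, then hash them down to 256 bits.
    # A slight per-bit bias in the source gets diffused by the hash,
    # so the output is close to uniform as long as the input carries
    # enough total entropy.
    assert 32 <= len(raw) <= 64
    return hashlib.blake2s(raw).digest()

seed = condition(os.urandom(64))  # 512 raw bits in, 256 bits out
```

Python's `hashlib` happens to ship BLAKE2s, which keeps the sketch to a one-liner.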
I spent a little time trying to find a guaranteed good enough procedure for sampling the RP2040 randombit, to feed into Gimli, but I put it on the back burner a while back. I had really hoped that the RPi engineers would actually characterize it, but instead they just merged a really crappy way of using it for low quality random number into their SDK.
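For what it's worth, one classic (if wasteful) procedure for turning a biased-but-independent bit source into unbiased bits is von Neumann debiasing; a quick Python illustration against a simulated 70%-biased source:

```python
import random

def von_neumann(bits):
    # Non-overlapping pairs: 01 -> 0, 10 -> 1, discard 00/11.
    # This removes bias, but only if successive samples are
    # independent - exactly the property an uncharacterised ring
    # oscillator bit may not guarantee.
    it = iter(bits)
    for a, b in zip(it, it):
        if a != b:
            yield a

random.seed(0)  # deterministic simulation of a biased source
biased = (1 if random.random() < 0.7 else 0 for _ in range(100_000))
out = list(von_neumann(biased))
```

The catch is exactly the characterization problem mentioned above: without knowing the source's correlation structure, no simple extractor comes with a guarantee.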
This project has been purchased by Beeper, please contact them with any questions about licensing.
I’m only just now seeing Beeper on another site, so the first questions I have are: is Beeper based on this? Or, did Beeper independently RE iMessage and then buy this competitor project…?
OCamlPro is currently working on COBOL-related projects: they help companies migrate their COBOL applications away from legacy mainframe environments. As part of this venture, they contribute to GnuCOBOL, but they also released some modern tooling for working with COBOL codebases.
I’m curious if people are actually using GNU COBOL. Pretty much all of the COBOL I hear people talk about is on z (edit: and that implies things like Db2, CICS, etc. - it’s not just COBOL, but the ecosystem). At least in my world, COBOL is a bit of a thing, but massively dwarfed by RPG.
I wonder how they’re handling everything else around the machine. Business logic in COBOL is one thing, but the transaction engine (that your code usually runs within the context of), database (so DB migration plus probably rewriting vendor-specific SQL), and frontends like 3270/web stuff are just as important. I’m not familiar enough with GCOS to compare with z on that front though.
There are some weird corners of the world outside mainframes where COBOL can be found: My wife used to be a software developer for an Oracle PeopleSoft installation, and she would moan whenever some bug led her to dig deep enough into the guts of PeopleSoft that she found herself reading COBOL.
I see folks are suggesting alternative open source solutions - have you looked at Sourcehut? Fully open source and features (IMO) quite a nice CI system.
The main selling point is being able to SSH into CI nodes so you can muck around until the command succeeds, which I think would solve most of this post’s complaints. I agree the iteration time of developing a CI by pushing commits then waiting for it to run is brutal and makes it all take 10x longer than it should.
Aye this is my favourite feature on CircleCI, that it’ll just drop me into a shell on a failed build step is gold, and the SSH auth is magic.
Combined with putting the “meat” of the build definitions in Make or similar, so you can do most work locally before pushing, and then any final bits of debugging in the CI shell, it’s not bad.
I’m very intrigued by Nix tho, all these people here are giving me FOMO
It is. And frankly it feels embarrassing. You sit there crafting commits to fix the issue and if anyone is getting notifications on the PR you are peppering them with your failures. Would not recommend.
I’m a customer, and it’s been on my list to figure it out for a while. The way it works feels just different enough from other stuff in the space that I haven’t gotten ‘round to it yet. Do you know if there’s a write-up of something like running a bunch of tests on a linux image, then pushing a container to a remote VPS after they pass?
The docs seem good, but more reference-style, and I’d really be curious to just see how people use it for something like that before I put in the labor to make my way through the reference.
Indeed, there is no tutorial in the documentation, but starting from their synapse-bt example and evolving it is sufficient, in my experience.
The cool thing about SourceHut is that you don’t need a Git (or Mercurial) project to run a CI pipeline. You can directly feed a YAML manifest to the web interface and have it executed. That plus the SSH access to a failed pipeline makes it quite easy to debug.
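For illustration, a minimal builds.sr.ht manifest looks something like this (the repository URL and task contents here are made up):

```yaml
image: alpine/latest
packages:
  - go
sources:
  - https://git.sr.ht/~user/project
tasks:
  - build: |
      cd project
      go build ./...
  - test: |
      cd project
      go test ./...
```

Pasting a manifest like this straight into the web submission form runs it with no repository attached, which is handy for iterating on the manifest itself.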
Why this fixation on imitating a messenger app? For me, the style of UI is an obvious mismatch for IRC. The traditional compact text with the nickname inline, uses up the limited screen real estate much more efficiently.
This chat-bubble visual is suited to mobile one-to-one communication because traffic is low and mostly one-to-one. But I fail to understand why anyone would think it would be a good idea for an IRC client.
There’s no fixation: you can enable the denser ‘compact mode’ which matches your description of a traditional compact text with inline nickname: https://i.imgur.com/VIQjXBt.png
Nice to see these new-ish developments in the IRC world (even though I’m still using good old Quassel for all my IRCing)! Bit surprised by the “bubble” styling of the chat views, for IRC that somehow feels really odd to me.
Bubble chat views feel more natural when you enable link preview (disabled by default for obvious privacy reasons). It also works better with users who aren’t really geeky but still want to take part in conversation with geeky friends :)
This being said you can enable the compact mode if you prefer the “traditional” look of IRC clients: https://i.imgur.com/VIQjXBt.png
I’m in charge of the publication of Goguma on iOS. We have an issue tracking all the features not available yet on iOS: https://todo.sr.ht/~emersion/goguma/138
We are still working on notification support, as iOS has strict requirements for background tasks. Besides that, it works decently well and has good accessibility.
Had a look at it: their extension is unfortunately just sending requests to a closed source API that they host, which is supposedly doing the actual APNS requests :(
I owned one of these as my first work laptop and I cannot agree; it’s a decent laptop but not the best one by far. What I disliked the most was its abysmal display: dark, low resolution, bad color reproduction. As usual with Lenovo, it’s a lottery with the screen, and from the model number you cannot infer which manufacturer the screen is from. The keyboard was pretty good, even though it had a lot of flex and feels pretty cheap compared to what you get nowadays. Also, I don’t get the point of carrying another battery pack; to swap it out you need to power down the machine. HP’s EliteBook 8460[w/p] models could be configured with a 9-cell battery and an optional battery slice, which gave them almost a full day of battery life. Those EliteBooks were built like a tank but at the same time very heavy. Compared to the X220 they’re the better laptops in my opinion.
However, the best laptop is an Apple silicon MacBook Air. It’s so much better than what else is available that it’s almost unfair. No fan noise, all day battery life, instant power on and very powerful. It would be great if it could run any Linux distribution though, but macOS just works and is good enough for me.
I totally disagree, and I have both an X220 and an M1 MacBook Air.
I much prefer the X220. In fact, I have 2 of them, and I only have the MBA because work bought me one. I would not pay for it myself.
I do use the MBA for travel sometimes, because at a conference it’s more important to have something very portable, but it is a less useful tool in general.
I am a writer. The keyboard matters more than almost anything else. The X220 has a wonderful keyboard and the MBA has a terrible keyboard, one of the worst on any premium laptop.
Both my X220s have more RAM, 1 or 2 aftermarket SSDs, and so on. That is impossible with the MBA.
My X220s have multiple USB 2, multiple USB 3, plus DisplayPort plus VGA. I can have it plugged in and still run 3 screens, a keyboard, a mouse, and still have a spare port. On the MBA this means carrying a hub, and thus its thinness and lightness goes away.
I am 6’2”. I cannot work on a laptop in a normal plane seat. I do not want to have to carry my laptop on board. But you cannot check in a laptop battery. The X220 solves this: I can just unplug its battery in seconds, and take only the battery on board. I can also carry a charged spare, or several.
The X220 screen is fine. I am 55. I owned 1990s laptops. I remember 1980s laptops. I remember greyscale passive-matrix LCDs and I know why OSes have options to help you find the mouse cursor. The X220 screen is fine. A bit higher-res would be fine but I cannot see 200 or 300ppi at laptop screen range so I do not need a bulky GPU trying to render invisibly small pixels. It is a work tool; I do not want to watch movies on it.
I have recently reviewed the X13S Arm Thinkpad, and the Z13 AMD Thinkpad, and the X1 Carbon gen 12.
My X220 is better than all of them, and I prefer it to all of those and to the MacBook Air.
I say all this not to say YOU ARE WRONG because you are entitled to your own opinions and choices. I am merely trying to clearly explain why I do not agree with them.
… And why it really annoys me that you and your choices have so limited the market that I have to use a decade-old laptop to get what I want, because your choices apparently outweigh mine and nobody makes a laptop that does what I want any more, including the makers of my X220.
Probably because your requirements are very specific and “developer” laptops are a niche market.
I don’t think my requirements are very specific.
I am not a developer, and I don’t know what a “developer” laptop is meant to be.
I don’t think it’s that niche:
a. Mine is a widely-held view
b. The fact there is such a large aftermarket in classic Thinkpads and parts for them, even upgrade motherboards, falsifies this claim.
It was not a specialist tool when new; it was a typical pro-grade machine. It’s not a niche product.
This change in marketing is not about ignoring niche markets. It’s about two things: reducing cost, and thus increasing margin; and about following trends and not doing customer research.
Comparison: I want a phone with a removable battery, a headphone socket, physical buttons I can use with gloves on, and at least 2 SIM slots plus a card slot. These are all simple easy requirements which were ubiquitous a decade ago, but are gone now, because everyone copies the market leaders, without understanding what makes them the market leader.
Whereas ISTM that your argument amounts to “if people wanted that they’d buy it, so if they don’t, they mustn’t want it”. Which is trivially falsified: this does not work if there is no such product to buy.
But there used to be, same as I used to have a wide choice of phones with physical buttons, headphone sockets, easily augmented storage, etc.
In other markets, companies are thriving by supplying products that go counter to industry trends. For instance, the Royal Enfield company supplies inexpensive, low-powered motorcycles that are easily maintained by their owners, which goes directly counter to the trend among Japanese motorcycles of constantly increasing power, lowering weight, and removing customer-maintainability by making highly-integrated devices with sealed, proprietary electronics controlling them.
Framework laptops are demonstrating some of this for laptops.
When I say major brands are lacking innovation, derivative, and copy one another, this is hardly even a controversial statement. Calling it a conspiracy theory is borderline offensive and I am not happy with that.
Margins in the laptop business are razor-thin. Laptops are seen as a commodity. The biggest buyers are businesses who simply want to provide their employees with a tool to do their jobs.
These economic facts do tend to converge available options towards a market-leader sameness, but that’s simply how the market works.
Motorcycles are different. They’re consumer/lifestyle products. You don’t ride a Royal Enfield because you need to, you do it because you want to, and you want to signal within the biker community what kind of person you are.
This is the core point. For instance, my work machine, which I am not especially fond of, is described in reviews as being a standard corporate fleet box.
I checked the price when reviewing the newer Lenovos, and it was about £800 in bulk.
These are, or were when new, all ~£2000 premium devices, some significantly more.
And yet, my budget-priced commodity fleet Dell has more ports than any of them, even the flagship X1C – that has 4 USB ports, but the Dell, at about a third of the price, has all those and HDMI and Ethernet.
This is not a cost-cutting thing at the budget end of the market. These are premium devices.
And FWIW I think you’re wrong about the Enfields, too. The company is Indian, and survived decades after the UK parent company died, outcompeted by cheaper, better-engineered Japanese machines.
Enfield faded from world view, making cheap robust low-spec bikes for a billion Indian people who couldn’t afford cars. Then some people in the UK noticed that they still existed, started importing them, and the company made it official, applied for and regained the “Royal” prefix and now exports its machines.
But the core point that I was making was that in both cases, it is the budget machines at the bottom of the market which preserve the ports. It is the expensive premium models which are the highly-integrated, locked-down sealed units.
This is not cost-cutting; it is fashion-led. Like women’s skirts and dresses without pockets, it is designed for looks not practicality, and sold for premium prices.
Basically, what I am reading from your comments is that Royal Enfield motorcycles (I knew about the Indian connection, btw, but didn’t know they’d made a comeback in the UK) and chunky black laptops with a lot of ports are for people with not a lot of money, or who prefer to not spend a lot of money on bikes or laptops.
Why there are not more products aimed at this segment of the market is left as an exercise to the reader.
ISTM that you are adamantly refusing to admit that there is a point here.
Point Number 1:
This is not some exotic new requirement. It is exactly how most products used to be, in laptops, in phones, in other sectors. Some manufacturers cut costs, sold it as part of a “fashionable” or “stylish” premium thing, everyone else followed along like sheep… And now it is ubiquitous, and some spectators, unable to follow the logic of cause and effect, say “ah well, it is like that because nobody wants those features any more.”
And no matter how many of us stand up and say “BUT WE WANT THEM!” apparently we do not count for some reason.
Point Number 2:
more products aimed at this segment of the market
That’s the problem. Please, I beg you, give me links to any such device available in the laptop market today, please.
I don’t doubt there are people who want these features. They’re vocal enough.
But there are not enough of them (either self-declared, or found via market research) for a manufacturer to make the bet that they will make money making products for this market.
It’s quite possible that a new X220-like laptop would cost around $5,000. Would such a laptop sell enough to make money back for the manufacturer?
“Probably because your requirements are very specific and “developer” laptops are niche market.”
I’d suggest an alternate reason. Yes, developer laptops are a niche market. But I’d propose that laptops moving away from the X220 is a result of chasing “thinner and lighter” above all else, plus lowering costs. And when the majority of manufacturers all chase the same targets, you get skewed results.
Plus: User choice only influences laptop sales so much. I’m not sure what the split is, but many laptops are purchased by corporations for their workforce. You get the option of a select few laptops that business services / IT has procured, approved, and will support. If they are a Lenovo shop or a Dell shop and the next generation or three suck, it has little impact on sales because it takes years before a business will offer an alternative. If they even listen to user complaints.
And if I buy my own laptop, new, all the options look alike - so there’s no meaningful way to buy my preference and have that influence product direction.
“Neither I, nor anyone else who bought an Apple product is responsible for your choice of a laptop.”
Mostly true. The popularity of Apple products has caused the effect I described above. When Apple started pulling ahead of the pack, instead of (say) Lenovo saying “we’ll go the opposite direction” the manufacturers chased the Apple model. In part due to repeated feedback that users want laptops like the Air, so we get the X1 Carbons. And ultimately all the Lenovo models get crappy chiclet keyboards, many get soldered RAM, fewer ports, etc. (As well as Dell, etc.)
(Note I’m making some pretty sweeping generalizations here, but my main point is that the market is limited not so much because the OP’s choices are “niche” but because the market embraces trends way too eagerly and blindly.)
This reminds me a great deal of my recurring complaint that it’s hard to find a car with a manual transmission anymore. Even down to the point that, last time I was shopping, I looked at German-designed/manufactured vehicles, knowing that the prevailing sentiment last time I visited Germany was that automatic transmissions were for people who were elderly and/or disabled.
This, but Asahi still has a long, long way to go before it can be considered stable enough to be a viable replacement for macOS.
For the time being, you’re pretty much limited to running macOS as the host OS and virtualizing Linux on top of it, which is good enough for 90% of use cases anyway. That’s what I do and it works just fine, most of the time.
Out of curiosity, what are you using for virtualization? The options for arm64 virtualization seemed slim last I checked (UTM “works” but was buggy. VMWare Fusion only has a tech preview, which I tried once and also ran into problems). Though this was a year or two ago, so maybe things have improved.
VMware and Parallels have full versions out supporting Arm now, and there are literally dozens of “light” VM runners out now, using Apple’s Virtualisation framework (not to be confused with the older, lower level Hypervisor.framework)
I’m using UTM to run FreeBSD and also have Podman set up to run FreeBSD containers (with a VM that it manages). Both Podman (open source) and Docker Desktop (free for orgs with fewer than, I think, 250 employees) can manage a Linux VM for running containers. Apple exposes a Linux binary for Rosetta 2 that Docker Desktop uses, so it can run x86 Linux containers.
I’m not speaking for @petar, but I use UTM when I need full fat Linux. (For example, to mount an external LUKS-encrypted drive and copy files.) That said, I probably don’t push it hard enough to run into real bugs. But the happy path for doing something quick on a Ubuntu or Fedora VM has not caused me any real headaches.
It feels like most of the other things I used to use a Linux VM for work well in Docker desktop. I still have my ThinkPad around (with a bare metal install) in case I need it, but I haven’t reached for it very often in the past year.
It’s in a closed beta stage. If you’re friends with Jonathan Blow or somehow catch his attention, he might give you access, entirely at his discretion.
I get why he did this for the first couple years. But continuing with this development model for over 10 years made me realize Jai is and will likely always be closed source software.
I’m having trouble understanding what the AMD SMU is. From the context I guess it is somewhat like the Intel ME? Though AMD PSP is the direct equivalent for that. I am confused.
A cloud exit makes sense if you have an established long-term viable product with predictable & stable traffic patterns.
Cloud gives you the flexibility to establish those parameters for your product, without gambling on expensive one-off purchases for resources you may not need, as you experiment with which hardware resources are best suited for your load.
I’m starting to feel that “cloud allows you to scale up” has become a meme by now…
You can get 32 GB of RAM with an Intel Xeon E5-1620 on kimsufi dot com (I don’t want people to think that I’m advertising) for $40/m plus a $40 installation fee.
The equivalent in AWS is the t3.2xlarge at $240/m. Even in the first month, installation fee included, the dedicated server is cheaper than AWS. And AWS will kill you on bandwidth, which is 100 Mbps unmetered with Kimsufi.
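To make the comparison concrete, here is the arithmetic using the figures quoted above (the AWS number is the commenter’s t3.2xlarge estimate, not a live quote):

```python
# First-year cost comparison, figures from the comment above.
dedicated_monthly = 40   # Kimsufi dedicated server, per month
setup_fee = 40           # one-time installation fee
aws_monthly = 240        # t3.2xlarge, per month

dedicated_first_year = setup_fee + 12 * dedicated_monthly
aws_first_year = 12 * aws_monthly

print(dedicated_first_year)  # 520
print(aws_first_year)        # 2880
```

Roughly a 5.5x difference over the first year, before counting bandwidth overage.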
Also, with one of these, a well-written application and good caching, you can easily handle ~1k req/s. That’s a lot of users!
I sometimes think that cloud companies managed to sell outrageously over-priced products to gullible users who never needed it in the first place. :(
There’s a lot more to the word “flexibility” besides “scale up”. Clouds allow you to experiment with (managed) blob store, queues, DBs, caches, search indexes, CDNs, networking, etc; to figure out what best suits your product. And then easily move the product to another continent if it turns out it is more popular in EU than in the US—or wherever else your hardware is.
Most of this can be experimented on very cheaply, one command away with apt install varnish-cache glusterfs postgresql-server rabbitmq-server haproxy. The default package configuration will be more than good enough to experiment.
Also, this will put you much more in control of things instead of debugging C++ stack traces (I had ton of these when trying to use AWS Redshift), or weird HTTP error messages returned by proprietary APIs. (Try to debug authorization errors in AWS, good luck with this)
As the wikipedia page mentions, production services have achieved over 1 million concurrent connections on a single machine over 10 years ago – e.g. WhatsApp, using Erlang, not even a native language like C.
Granted those are probably tiny requests, and I guess keeping the connections open probably allows many more requests per second, since setting up the connections is expensive.
But it’s still orders of magnitude more than 1K / second (even though the units aren’t the same; I’d be interested in any pointers that elaborate on the relation)
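One way to relate the two units is Little’s law: concurrent connections ≈ arrival rate × how long each connection stays open. A sketch with purely illustrative numbers:

```python
# Little's law: L = lambda * W
# lambda: arrival rate (new connections or requests per second)
# W: how long each connection is held open, in seconds
def concurrent_connections(rate_per_s: float, hold_time_s: float) -> float:
    return rate_per_s * hold_time_s

# 1k new connections/s, each held ~1000 s (long-lived chat sessions),
# already yields WhatsApp-scale concurrency on paper:
print(concurrent_connections(1_000, 1_000))  # 1000000.0
```

So a chat service holding millions of mostly idle connections and a CRUD app serving 1k short requests per second are not directly comparable workloads.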
Maybe 1K page loads, since modern web apps seem to make like 500 requests per page load :-P Or they download 10-50 MB of Javascript routinely.
Maybe a more comparable site is StackOverflow, they are a self-hosted monolith and seem have large amounts of optimization at the .NET layer.
The “cacheless” point is important. I’ve seen a lot of bad architectures papered over by caches, which make things better in some cases and worse in others. They also introduce a lot of operational expense.
If $40 vs $240 is the debate, you should definitely go with kimsufi. For a lot of other cases you should use the cloud. And then after a certain scale it might indeed pay off to be off the cloud. It really depends on the services you would use.
Seems to just resell OVH? As i understand it, the servers all have ecc ram? Still 100 Mbps is abyssimal - would much prefer minimum Gbps. You get that unmetered from Hetzner (or 10Gbps with 30TB/month) - but at closer to 100 USD/month if you disregard pre-owned boxes.
Still, try and price 30 (or just 2!) TB egress from AWS…
That aside, you would probably want at least two boxes and a floating ip to have your meaningful risks in a similar ballpark to AWS (worse risk, but possibly similar in actual business terms even if you trade 30 minutes downtime/year for a day of downtime).
The 100mbit isn’t much, that’s true. But on the other side: if you actually have that many requests you probably want something like a CDN or a different host in front for all the static assets. So for a typical crud app it is probably enough bandwidth. Then again there are definitely other vendors which also give you a fixed amount of compute, storage, memory and bandwidth with > 1Gbit, that do not charge you for the bandwidth. They are bookable monthly too..
Obviously you will need someone who has knowledge about Linux, to at least install the base and for example docker. But you will need someone equally for AWS (and then probably Linux on top).
I think this is again a part of “no one has ever been fired for buying IBM”, but now we’re doing the same with AWS. For most people it probably isn’t actually reasonable to use AWS, apart from using the trodden path and not having to search for different vendors. Which is totally fine if you want to spend that money.
I personally wouldn’t run anything business related off kimsufi, OVH doesn’t really care all that much about it and will leave you with dead hardware. It’s not that much more expensive (for a business at least) to go up to an actual OVH or use something like Hetzner if you’re only concerned about Europe. Even if you want to go for “cloud” machines then OVH or Hetzner might be a better bet anyway.
Right, and furthermore, with hardware being as powerful as it is, much more powerful than needed for most use-cases, you may not need more than a small fraction of that expensive resource for a very long time, or essentially forever.
Depending on the MCU model you can have very different pull-up strengths (impedance if you will). The author doesn’t specify which MCU they were using (maybe STSPIN32F0A from STEVAL-ESC002V1, but I’m not sure if it’s acting as an I2C slave or master?), so it’s kinda complicated to draw meaningful conclusions from this article.
But the biggest issue is that it’s written in SystemVerilog.
I’m neither a big fan of SystemVerilog (using Verilog 05 for design and C++ or Python for verification instead), but there is also a UVM version for SystemC (which is C++ actually); see https://systemc.org/about/systemc-verification/overview/
It’s just a programming language
Yes for verification; no for digital design, which is not the same as “programming”.
I believe cocotb will be one of the important technologies in the coming years
SystemC is, in my opinion, the one thing worse than SystemVerilog.
I’ve used cocostb. Evaluated it to see if maybe it was an option for a team of new verification engineers. I liked it quite a lot. But SV for verification is a lot more than classes.
When I checked out, it was missing constraints (and corresponding randomization), functional coverage (groups, points, properties), and SystemVerilog Assertions. Functional coverage is a must have in my domain.
SystemC is, in my opinion, the one thing worse than SystemVerilog.
Well, it’s the C++ programming language plus a library; they didn’t change the language. In contrast SystemVerilog is - from my point of view - a rather unfortunate amalgation of at least four different languages, causing unnecessary complexity, which is an impediment for people who have to learn it and tool implementers who have to maintain compatibility. It would not have been so serious if IEEE had subsequently withdrawn the Verilog standard. Thus, people were forced into SystemVerilog against their wishes, despite its continued relatively low acceptance. Most engineers I know including me still use Verilog for design and something else - like cocotb - for verification. SystemC has still a rather low adoption rate, but if it has a UVM implementation then people at least are not forced into SV.
I first published about owi here about two years ago. Back then it was zapashcanon’s sandbox for his PhD on Ocaml GC implementation for Wasm. owi has since evolved into becoming a symbolic execution platform for Wasm.
This is especially interesting as it lets you perform cross-language symbolic execution. It has been already used to identify a bug in Rust’s core library (see https://hal.science/hal-04627413 for more details).
Looking at today’s instant messaging solutions, I think IRC is very underrated. The functionality of clients for IRC made years ago still surpass what “modern” protocols like Matrix have to offer. I think re-adoption of IRC is very much possible only by introducing a good UI, nothing more.
aka drawing the rest of the owl
More like upscaling an image drawn before the average web developer was born.
no UI will add offline message delivery to IRC
Doesn’t the “IRCToday” service linked in this post solve that? (and other IRC bouncers)
sure but that’s more than just a UI
Specs and implementations on the other hand…
I think “The Lounge” is a really decent web-based UI.
About a year ago I moved my family/friends chat network to IRC. Thanks to modern clients like Goguma and Gamja and the v3 chathistory support and other features of Ergo this gives a nice modern feeling chat experience even without a bouncer. All of my users other than myself are at basic computer literacy level, they can muddle along with mobile and web apps not much more. So it’s definitely possible.
I went this route because I wanted something that I can fully own, understand and debug if needed.
You could bolt-on E2EE, but decentralization is missing—you have to create accounts on that server. Built for the ’10s, XMPP + MUCs can do these things without the storage & resource bloat of Matrix + eventual consistency. That said, for a lot of communites IRC is a serviceable, lightweight, accessible solution that I agree is underrated for text chat (even if client adoption of IRCv3 is still not where one might expect relative to server adoption)—& I would 100% rather see it over some Slack/Telegram/Discord chatroom exclusivity.
I dunno. The collapse of Freenode 3 years ago showed that a lot of the accounts there were either inactive or bots (because the number of accounts on Libera after the migration was significantly lower). I don’t see any newer software projects using IRC (a depressingly large number of them still point to Freenode, which just reinforces my point).
I like IRC and I still use it but it’s not a growth area.
There’s an ongoing effort to modernize IRC with https://ircv3.net. I would agree that most of these evolutions are just IRC catching up with features of modern chat platforms.
The IRC software landscape is also evolving with https://lobste.rs/s/wy2jgl/goguma_irc_client_for_mobile_devices and https://lobste.rs/s/0dnybw/soju_user_friendly_irc_bouncer.
Calling IRCv3 an “ongoing effort” is technically correct, but it’s been ongoing for around 8 to 9 years at this point and barely anything came out of it - and definitely nothing groundbreaking that IRC would need to catch up to the current times (e.g. message history).
Message history is provided by this thing (IRC Today), and it does it through means of IRC v3 support.
I don’t know if that’s really the right conclusion. A bunch of communities that were on Freenode never moved to Libera because they migrated to XMPP, Slack, Matrix, Discord, OFTC, and many more alternatives. I went from being on about 20 channels on Freenode to about 5 on Libera right after Freenode’s death, and today that number is closer to 1 (which I’m accessing via a Matrix bridge…).
I guess it just depends what channels you were in; every single one I was using at the time made the jump from Freenode to Libera, tho there were a couple that had already moved off to Slack several years earlier.
IRC really needs end-to-end encrypted messages.
Isn’t that what OTR does?
Not really. It’s opt-in and it only works for 1:1 chats, doesn’t it?
It’s “opt-in” in the sense that if you send an OTR message to someone without a plugin, they see garbage, yes. OTR is the predecessor to “signal” and back then (assuming you meant “chats” above), E2EE meant “one-to-one”: https://en.wikipedia.org/wiki/Off-the-record_messaging – but it does support end-to-end encrypted messages, and from my memory of using it on AIM in the zeros, it was pretty easy to setup and use. (At one point, we quietly added support to the hiptop, for example.)
Someone could probably write a modern double-ratchet replacement, using the same transport concepts as OTR, but I bet the people interested in working on that are more interested in implementing some form of RFC 9420 these days.
I’m a bit surprised to see a Bus Pirate comeback after all these years. I’m impressed by the attention to detail (hydro-dipped connectors, custom plastic injection, etc.).
However, I wonder how much traction it will get; Glasgow (https://github.com/GlasgowEmbedded/glasgow) looks much more capable (FPGA-based, USB 2.0 HS transfer speeds).
I’ve been really curious to try out a Glasgow but they’re definitely in a different price category ($37 vs $199). I’ve kept a BPv4 in my field toolbox for years. I think I did manage to smoke one and also lost one somewhere along the way. At the price point I’m not too upset if that happens. I’m also not so sure about what the UX is like comparatively; one of the things that’s awesome about the BP is that you basically only need a terminal emulator installed and you’re ready to roll for easy debugging activities. On Linux and OSX I pretty much always have minicom installed and on Windows it’s easy enough to get the 200kB or whatever it is Putty binary (if it’s not already installed) and have a serviceable serial terminal as well.
Did the Glasgow ever ship? Can it actually be bought somewhere?
See also: https://github.com/kormax/apple-home-key
I have to plug my article about this marvellous little piece of hardware: Fixing the TPM: Hardware Security Modules Done Right.
To me, that’s how big the TKey is: its main idea (deriving an application-specific secret) obsoleted the TPM. Before this I could kind of forgive the TPM’s complexity, but now any justification for this pile of bloat is gone: we have a better way.
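The derivation idea can be sketched roughly like this (a simplification: the real TKey hashes a Unique Device Secret together with a measurement of the loaded application using BLAKE2s; the function and parameter names here are illustrative):

```python
import hashlib

def derive_app_secret(unique_device_secret: bytes, app_binary: bytes) -> bytes:
    # Measure the application, then mix the measurement with the
    # device-unique secret: each app gets its own stable 256-bit key.
    app_digest = hashlib.blake2s(app_binary).digest()
    return hashlib.blake2s(unique_device_secret + app_digest).digest()

uds = b"\x01" * 32  # stands in for the per-device secret burned at manufacture
key_a = derive_app_secret(uds, b"signer app v1")
key_b = derive_app_secret(uds, b"signer app v2")
assert key_a != key_b                                      # new app, new key
assert key_a == derive_app_secret(uds, b"signer app v1")   # same app, same key
```

The consequence is that a tampered application simply derives a different key, with no need for a separate secure-boot signature check just to protect the secret.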
FYI, we’re aiming to ship the first CHERIoT chips in 2024. I think it would be a much better platform for your ideas because you can properly compartmentalise access to keys and so on (we’re most likely using the GF 22nm process, so will have non-volatile storage on the die, which will let you build requirements about persistent key storage into your code signing rules). If you’ve got an Arty A7, you can play with our prototyping platform now, but I hope we can get you one of the chips once they’re packaged.
Yes, I remember the long thread where you eventually sold me on compartmentalisation, which the TKey as such cannot do. I may be able to implement that on the unlocked version, but (i) the tiny FPGA it runs on is already packed full, and (ii) I have yet to write a single line of Verilog. Not to mention the other safety features of CHERIoT, so, yeah, colour me enthused.
I don’t, though maybe I’ll purchase one if (once?) we have a decent free software toolchain for it. But first, I need to write some hello-world blinking LED on my TKey unlocked and learn how to FPGA.
That would be beyond awesome.
We have a F/OSS toolchain for the software bits (which are the only bits shared between the prototyping platform and the final version).
It looks as if openFPGALoader supports the board, which is a really useful discovery because I’d been wondering how we’d distribute updated bit files to partners (installing Vivado is a huge amount of pain and suffering, since it requires waiting for a human to verify your export compliance status and does not give helpful error messages that this is the reason for the problem).
I am currently using Vivado in a Docker container with Rosetta. This is fine for building, since that can run from the command line, but programming the FPGA requires using X11 to display the (awful Java) GUI and running a little program on the Mac that exposes the USB programming interface via their remote cable protocol. This is a lot of string and duck tape. Being able to just run openFPGALoader on my Mac will be a huge improvement.
F4PGA supports the board, but I don’t know how much integration work is needed to make our prototyping platform build with it. It looks like it should support our existing constraints files. I’d love to make that a supported flow.
How’s the performance of Vivado in Rosetta?
Hard to compare timing exactly (we’re not yet forcing a fixed seed, so the timing is pretty variable across runs), but it seems to take me about as long on my laptop as it takes Kunyan on to the (x86-64) build server that he’s using to build his bitfiles. Producing a 20 MHz bitfile for the CHERIoT Ibex took me <10 minutes. The 33 MHz one takes about 45 (we’re pretty close to the edge for timing at 33 MHz), but took 15 the first time I ran it. It’s single threaded for almost the entire run, which is annoying (11 cores on my laptop sitting idle, even with max threads set to 12, and the wall clock time is more than half the CPU time).
After @Loup-Vaillant’s comment, I played a bit with the open source FPGA tools. A lot of the design is in tcl files and so I couldn’t work out how to translate them into something that F4PGA could understand (it seems to assume that all of your build is either Verilog or constraints. Possibly the TCL is setting things that could be expressed in the constraints file somehow?), but loading with openFPGALoader was much faster than using the Vivado GUI, so I can now throw that away and just build and load from the command line.
Oooh, CHERIoT chips are coming next year? Any idea about pricing, either for just the chips or for devboards? I would really like to get my hands on a real CHERI system.
Pricing isn’t finalised, we’re working on the exact feature set (driven by customer demands, if you know anyone who might want to buy a lot of them then let me know!). I’m aiming to get close to $1 for v2 in bulk, but v1 will be more expensive. Much cheaper than the FPGA dev boards though. We’re aiming to sell both bare chips and M.2 MicroModules, and probably use an existing dev board that can house the M.2 (there are a bunch of nice off-the-shelf ones).
We’re using the Arty A7 a prototyping platform. It currently runs the CHERIoT Ibex at 33 MHz and has a working Ethernet interface (I’ll be open sourcing the compartmentalised network stack in January, on my desk it connects to my home network and happily works with IPv4 and v6 but currently has almost everything shoved into one big compartment). The ASIC should be 200-300 MHz, somewhat dependent on the power envelope. The A7 is only about $300, which is fairly cheap for a dev board, but more than an order of magnitude more than an ASIC for final deployment.
Are M.2 MicroModules the same as the SparkFun MicroMod system? Are there other compatible suppliers?
I am vaguely interested in higher-density connections for MCU dev boards than the usual 0.1 in pitch pads/pins, especially if there are existing ecosystems I can use. (Tho right now I am more interested in FPC ribbon cables than direct board-to-board connections.)
Yup. I think that’s the system the hardware folks have been looking at (I stop at digital logic. Anything that involves physics is someone else’s problem).
Alas, I only know hobbyists and small scale makers who might want to buy tens of chips on average. I know a few people who work at large companies that could conceivably ship large volumes, but that’s probably a bit too indirect :p
That’s very reasonable. Once V2 is available I’ll pester some of the local electronics distributors to stock a few reels. (ordering small quantities from them tends to be much cheaper than small quantities from international distributors, IME)
I hadn’t heard of this before. Is it the same as what SparkFun calls MicroMod? (that was what I found when googling, anyway). Do you have any specific recs for a nice one?
I wonder if anyone might want to produce boards in the RPi Pico form factor, which I’ve found quite convenient.
I’m looking forward to seeing it!
That’s not so bad, I think I’ll get one of those if I find a job anytime soon.
How much, if you don’t mind me asking? Is it enough to reasonably store firmware, or smaller and suitable just for application data?
Edit: final question. How good is it at generating entropy on-chip?
Note, v2 will not exist unless we sell enough of v1, though hopefully most of those can go to military and critical infrastructure providers, who are willing to pay a (modest) premium for security features that they can’t get anywhere else. My goal has always been to approach no-security microcontrollers in price though.
Still finalising that a bit. It looks as if we have quite a bit of area to play with because we’re pad-limited (we need area along the edge of the chip to solder wires to, the smallest chip we can make that has space for all of the external connections we need leaves loads of space in the middle for logic). I really hope we can get enough NVRAM for A/B firmware with execute in place, since that eliminates the need for most secure boot complexity (you validate signatures writing to the B firmware and grant write access to it and the boot toggle only to the compartment that will do that), which also gives us more crypto agility since we can move to quantum-safe signature algorithms when we need to.
There will be an on-chip entropy source, which should be adequate for crypto operations (not sure what its sample rate will be yet).
Good luck!
That’d be really great.
Even a not so good sample rate should be enough to seed a CSPRNG at boot, and depending on how threat model and how hard it is to read NVRAM externally, perhaps saving a seed until next boot?
Yup, that’s my expectation. It gets a little bit interesting with multiple compartments having to trust the random number generator but that’s no different from multiple processes on a conventional OS trusting /dev/random (actually, better, since they know exactly the code in the CSPRNG compartment and know nothing else in the OS can tamper with its internal state).
I think their website is missing a tl;dr description of the hardware, so let me try it: the main component is a modified PicoRV32-based SoC running on an iCE40 FPGA. That FPGA is interfaced to USB via a CH552 micro-controller (cheap 8051 that natively supports USB). They seem to have a custom hardware RNG.
The phrase “custom hardware rng” doesn’t fill me with joy
You can read more details about their TRNG design here: https://github.com/tillitis/tillitis-key1/tree/main/hw/application_fpga/core/trng. tl;dr: Many free-running ring oscillators being sampled in a smart way.
Betrusted (FPGA-based secure device) took a different path: they both have an avalanche noise generator and an in-FPGA TRNG similar to the one found in Tillitis (see https://www.bunniestudios.com/blog/?p=6097).
I’m not sure what should be considered acceptable in that domain.
I would regard that as a hardware entropy source, rather than a hardware random number generator. It looks great as an input into Fortuna (or Yarrow if you enjoy doing difficult maths), not as a replacement.
I confess I was not convinced by their exact technique: if there’s a bias, even very slight, in the RNG, it is liable to affect every single bit the same way. So instead of using it directly I would rather accumulate somewhere between 256 and 512 bits from that source, then hash it with BLAKE2s to obtain 256 bits I’ll be pretty sure will be close enough to uniformly random.
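A minimal sketch of that conditioning step (`read_raw_trng` is a hypothetical stand-in for however you pull raw bits from the oscillator source; here it is faked with `os.urandom`):

```python
import hashlib
import os

def read_raw_trng(n_bytes: int) -> bytes:
    # Stand-in for the real raw ring-oscillator sampler.
    return os.urandom(n_bytes)

def conditioned_random(n_raw_bytes: int = 64) -> bytes:
    # Accumulate 512 raw bits, then hash down to 256 with BLAKE2s:
    # a slight per-bit bias in the source gets spread across the
    # whole output instead of showing up in every emitted bit.
    assert n_raw_bytes >= 32  # at least 256 bits of raw input
    pool = read_raw_trng(n_raw_bytes)
    return hashlib.blake2s(pool).digest()
```

The key point is the ratio: feeding in more raw bits than you emit gives the hash slack to absorb bias, at the cost of output rate.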
That seems to be what they recommend, anyway.
I spent a little time trying to find a guaranteed good enough procedure for sampling the RP2040 randombit, to feed into Gimli, but I put it on the back burner a while back. I had really hoped that the RPi engineers would actually characterize it, but instead they just merged a really crappy way of using it for low quality random number into their SDK.
Oh, I didn’t know, my bad.
I’m only just now seeing Beeper on another site, so the first questions I have are: is Beeper based on this? Or, did Beeper independently RE iMessage and then buy this competitor project…?
It seems like Beeper is using an actual Mac in a datacenter for this: https://youtu.be/ji5HwS3bhlU?t=358 (which raises confidentiality concerns).
EDIT: It looks like the recently released Beeper Mini is actually based on this RE work: https://www.theverge.com/2023/12/5/23987817/beeper-mini-imessage-android-reverse-engineer
OCamlPro is currently working on COBOL-related projects: they help companies migrate their COBOL applications away from legacy mainframe environments. As part of this venture, they contribute to GnuCOBOL, but they have also released some modern tooling for working with COBOL codebases.
I’m curious if people are actually using GNU COBOL. Pretty much all of the COBOL I hear people talk about is on z (edit: and that implies things like Db2, CICS, etc. - it’s not just COBOL, but the ecosystem). At least in my world (IBM i), COBOL is a bit of a thing, but massively dwarfed by RPG.
This LinkedIn post from OCamlPro (in French: https://www.linkedin.com/posts/get-superbol-france_gnucobol-cobol-mainframe-activity-7122607267480260610-Lfa0) seems to imply that the French government would migrate from GCOS Cobol environment to Gnu COBOL.
I wonder how they’re handling the everything else around the machine. Business logic in COBOL is one thing, but the transaction engine (that your code usually runs within the context of), database (so DB migration plus probably rewriting vendor specific SQL), and frontends like 3270/web stuff is just as important. I’m not familiar enough with GCOS to compare with z on that front though.
There are some weird corners of the world outside mainframes where COBOL can be found: My wife used to be a software developer for an Oracle PeopleSoft installation, and she would moan whenever some bug led her to dig deep enough into the guts of PeopleSoft that she found herself reading COBOL.
I see folks are suggesting alternative open source solutions - have you looked at Sourcehut? Fully open source and features (IMO) quite a nice CI system.
The main selling point is being able to SSH into CI nodes so you can muck around until the command succeeds, which I think would solve most of this post’s complaints. I agree the iteration time of developing a CI by pushing commits then waiting for it to run is brutal and makes it all take 10x longer than it should.
Aye this is my favourite feature on CircleCI, that it’ll just drop me into a shell on a failed build step is gold, and the SSH auth is magic.
Combined with putting the “meat” of the build definitions in Make or similar, so you can do most work locally before pushing, and then any final bits of debugging in the CI shell, it’s not bad.
I’m very intrigued by Nix tho, all these people here are giving me FOMO
I’m flabbergasted that anyone would use a system that lacks this feature. It must make debugging so frustrating.
It is. And frankly it feels embarrassing. You sit there crafting commits to fix the issue and if anyone is getting notifications on the PR you are peppering them with your failures. Would not recommend.
I’m a customer, and it’s been on my list to figure it out for a while. The way it works feels just different enough from other stuff in the space that I haven’t gotten ‘round to it yet. Do you know if there’s a write-up of something like running a bunch of tests on a linux image, then pushing a container to a remote VPS after they pass?
The docs seem good, but more reference-style, and I’d really be curious to just see how people use it for something like that before I put in the labor to make my way through the reference.
There is no tutorial in the documentation indeed, but starting from their synapse-bt example and evolving it has been sufficient in my experience.
The cool things about SourceHut is that you don’t need a Git (or Mercurial) project to run a CI pipeline. You can directly feed a Yaml manifest to the web interface and have it executed. That plus the SSH access to a failed pipeline makes it quite easy to debug.
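For reference, a minimal build manifest looks something like this (image name and task contents are illustrative; check the builds.sr.ht manifest reference for current image names and fields):

```yaml
image: alpine/latest
packages:
  - curl
tasks:
  - hello: |
      echo "hello from builds.sr.ht"
```

You can paste a manifest like this straight into the web submit form to get a one-off build, no repository required.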
Why this fixation on imitating a messenger app? For me, that style of UI is an obvious mismatch for IRC. The traditional compact text with the nickname inline uses the limited screen real estate much more efficiently.
The chat-bubble look suits mobile messaging, where traffic is low and mostly one-to-one. But I fail to understand why anyone would think it a good idea for an IRC client.
There’s no fixation: you can enable the denser ‘compact mode’ which matches your description of a traditional compact text with inline nickname: https://i.imgur.com/VIQjXBt.png
Nice to see these new-ish developments in the IRC world (even though I’m still using good old Quassel for all my IRCing)! Bit surprised by the “bubble” styling of the chat views, for IRC that somehow feels really odd to me.
Bubble chat views feel more natural when you enable link previews (disabled by default for obvious privacy reasons). It also works better for users who aren't really geeky but still want to take part in conversations with geeky friends :)
This being said you can enable the compact mode if you prefer the “traditional” look of IRC clients: https://i.imgur.com/VIQjXBt.png
Did anybody use this on iOS? The description says:
I’m in charge of the publication of Goguma on iOS. We have an issue tracking all the features not available yet on iOS: https://todo.sr.ht/~emersion/goguma/138
We are still working on notification support, as iOS has strict requirements for background tasks. Besides that, it works decently well and has good accessibility.
Palaver has a protocol for registering how to send push messages on iOS. Maybe implementing something like that would be helpful.
https://github.com/cocodelabs/palaver-irc-capability
Had a look at it: their extension is unfortunately just sending requests to a closed source API that they host, which is supposedly doing the actual APNS requests :(
It seems there is no source for the API, but nothing prevents anyone from reimplementing that API independently.
How Marcan discovered this bug: https://social.treehouse.systems/@marcan/111160552044972689
That’s the same link, btw. I think you wanted this link for the original bug post.
Yep, a copy-paste error. Thanks for linking to the original post.
I owned one of these as my first work laptop and I cannot agree: it's a decent laptop, but far from the best one. What I disliked most was its abysmal display: dark, low resolution, bad color reproduction. As usual with Lenovo, the screen is a lottery; you cannot infer from the model number which manufacturer the panel comes from. The keyboard was pretty good, though, even if it had a lot of flex and feels pretty cheap compared to what you get nowadays. Also, I don't get the point of carrying another battery pack when you need to power down the machine to swap it. HP's EliteBook 8460[w/p] models could be configured with a 9-cell battery and an optional battery slice, which gave them almost a full day of battery life. Those EliteBooks were built like a tank, but at the same time very heavy. Compared to the X220 they're the better laptops, in my opinion. However, the best laptop is an Apple silicon MacBook Air. It's so much better than everything else available that it's almost unfair: no fan noise, all-day battery life, instant power-on, and very powerful. It would be great if it could run any Linux distribution, but macOS just works and is good enough for me.
I totally disagree, and I have both an X220 and an M1 MacBook Air.
I much prefer the X220. In fact, I have 2 of them, and I only have the MBA because work bought me one. I would not pay for it myself.
I do use the MBA for travel sometimes, because at a conference it’s more important to have something very portable, but it is a less useful tool in general.
I am a writer. The keyboard matters more than almost anything else. The X220 has a wonderful keyboard and the MBA has a terrible keyboard, one of the worst on any premium laptop.
Both my X220s have more RAM, 1 or 2 aftermarket SSDs, and so on. That is impossible with the MBA.
My X220s have multiple USB 2 ports, multiple USB 3 ports, plus DisplayPort plus VGA. I can have one plugged in and still run 3 screens, a keyboard, and a mouse, and still have a spare port. On the MBA this means carrying a hub, and thus its thinness and lightness go away.
I am 6’2”. I cannot work on a laptop in a normal plane seat. I do not want to have to carry my laptop on board. But you cannot check in a laptop battery. The X220 solves this: I can just unplug its battery in seconds, and take only the battery on board. I can also carry a charged spare, or several.
The X220 screen is fine. I am 55. I owned 1990s laptops. I remember 1980s laptops. I remember greyscale passive-matrix LCDs and I know why OSes have options to help you find the mouse cursor. The X220 screen is fine. A bit higher-res would be fine but I cannot see 200 or 300ppi at laptop screen range so I do not need a bulky GPU trying to render invisibly small pixels. It is a work tool; I do not want to watch movies on it.
I have recently reviewed the X13S Arm Thinkpad, and the Z13 AMD Thinkpad, and the X1 Carbon gen 12.
My X220 is better than all of them, and I prefer it to all of those and to the MacBook Air.
I say all this not to say YOU ARE WRONG because you are entitled to your own opinions and choices. I am merely trying to clearly explain why I do not agree with them.
… And why it really annoys me that you and your choices have so limited the market: I have to use a decade-old laptop to get what I want, because your choices apparently outweigh mine, and nobody makes a laptop that does what I want any more, including the makers of my X220.
That is not fair and that is not OK.
It’s perfectly fair to like the X220 and other older laptop models, that’s simply personal preference.
Probably because your requirements are very specific and “developer” laptops are a niche market.
Neither I, nor anyone else who bought an Apple product is responsible for your choice of a laptop.
The core of my disagreement is with this line:
Comparison: I want a phone with a removable battery, a headphone socket, physical buttons I can use with gloves on, and at least 2 SIM slots plus a card slot. These are all simple easy requirements which were ubiquitous a decade ago, but are gone now, because everyone copies the market leaders, without understanding what makes them the market leader.
If there was a significant market for a new laptop with the features similar to the X220, there would be such a laptop offered for sale.
There’s no conspiracy.
I didn’t claim there was any conspiracy.
Whereas ISTM that your argument amounts to “if people wanted that they’d buy it, so if they don’t, they mustn’t want it”. Which is trivially falsified: this does not work if there is no such product to buy.
But there used to be, same as I used to have a wide choice of phones with physical buttons, headphone sockets, easily augmented storage, etc.
In other markets, companies are thriving by supplying products that go counter to industry trends. For instance, the Royal Enfield company supplies inexpensive, low-powered motorcycles that are easily maintained by their owners, which goes directly counter to the trend among Japanese motorcycles of constantly increasing power, lowering weight, and removing customer-maintainability by making highly-integrated devices with sealed, proprietary electronics controlling them.
Framework laptops are demonstrating some of this for laptops.
When I say major brands lack innovation, are derivative, and copy one another, that is hardly even a controversial statement. Calling it a conspiracy theory is borderline offensive and I am not happy with that.
Margins in the laptop business are razor-thin. Laptops are seen as a commodity. The biggest buyers are businesses who simply want to provide their employees with a tool to do their jobs.
These economic facts do tend to converge available options towards a market-leader sameness, but that’s simply how the market works.
Motorcycles are different. They’re consumer/lifestyle products. You don’t ride a Royal Enfield because you need to, you do it because you want to, and you want to signal within the biker community what kind of person you are.
Still no.
This is the core point. For instance, my work machine, which I am not especially fond of, is described in reviews as being a standard corporate fleet box.
I checked the price when reviewing the newer Lenovos, and it was about £800 in bulk.
But I have reviewed the X1 Carbon as a Linux machine, the Z13 similarly, and the Arm-powered X13s both with Windows and with Linux.
These are, or were when new, all ~£2000 premium devices, some significantly more.
And yet, my budget-priced commodity fleet Dell has more ports than any of them, even the flagship X1C – that has 4 USB ports, but the Dell, at about a third of the price, has all those and HDMI and Ethernet.
This is not a cost-cutting thing at the budget end of the market. These are premium devices.
And FWIW I think you’re wrong about the Enfields, too. The company is Indian, and survived decades after the UK parent company died, outcompeted by cheaper, better-engineered Japanese machines.
Enfield faded from world view, making cheap robust low-spec bikes for a billion Indian people who couldn’t afford cars. Then some people in the UK noticed that they still existed, started importing them, and the company made it official, applied for and regained the “Royal” prefix and now exports its machines.
But the core point that I was making was that in both cases, it is the budget machines at the bottom of the market which preserve the ports. It is the expensive premium models which are the highly-integrated, locked-down sealed units.
This is not cost-cutting; it is fashion-led. Like women’s skirts and dresses without pockets, it is designed for looks, not practicality, and sold for premium prices.
Basically, what I am reading from your comments is that Royal Enfield motorcycles (I knew about the Indian connection, btw, but didn’t know they’d made a comeback in the UK) and chunky black laptops with a lot of ports are for people without a lot of money, or who prefer not to spend a lot of money on bikes or laptops.
Why there are not more products aimed at this segment of the market is left as an exercise to the reader.
ISTM that you are adamantly refusing to admit that there is a point here.
Point Number 1:
This is not some exotic new requirement. It is exactly how most products used to be, in laptops, in phones, in other sectors. Some manufacturers cut costs, sold it as part of a “fashionable” or “stylish” premium thing, everyone else followed along like sheep… And now it is ubiquitous, and some spectators, unable to follow the logic of cause and effect, say “ah well, it is like that because nobody wants those features any more.”
And no matter how many of us stand up and say “BUT WE WANT THEM!” apparently we do not count for some reason.
Point Number 2:
That’s the problem. Please, I beg you, give me links to any such device available in the laptop market today, please.
I don’t doubt there are people who want these features. They’re vocal enough.
But there are not enough of them (either self-declared, or found via market research) for a manufacturer to make the bet that they will make money making products for this market.
It’s quite possible that a new X220-like laptop would cost around $5,000. Would such a laptop sell enough to make money back for the manufacturer?
The brown manual wagon problem: everyone who says they want one will only buy one used, 7 years later.
“Probably because your requirements are very specific and ‘developer’ laptops are a niche market.”
I’d suggest an alternate reason. Yes, developer laptops are a niche market. But I’d propose that laptops moving away from the X220 is a result of chasing “thinner and lighter” above all else, plus lowering costs. And when the majority of manufacturers all chase the same targets, you get skewed results.
Plus: User choice only influences laptop sales so much. I’m not sure what the split is, but many laptops are purchased by corporations for their workforce. You get the option of a select few laptops that business services / IT has procured, approved, and will support. If they are a Lenovo shop or a Dell shop and the next generation or three suck, it has little impact on sales because it takes years before a business will offer an alternative. If they even listen to user complaints.
And if I buy my own laptop, new, all the options look alike - so there’s no meaningful way to buy my preference and have that influence product direction.
“Neither I, nor anyone else who bought an Apple product is responsible for your choice of a laptop.”
Mostly true. The popularity of Apple products has caused the effect I described above. When Apple started pulling ahead of the pack, instead of (say) Lenovo saying “we’ll go the opposite direction”, the manufacturers chased the Apple model, in part due to repeated feedback that users want laptops like the Air, so we get the X1 Carbons. And ultimately all the Lenovo models get crappy chiclet keyboards, many get soldered RAM, fewer ports, etc. (As well as Dell, etc.)
(Note I’m making some pretty sweeping generalizations here, but my main point is that the market is limited not so much because the OP’s choices are “niche” but because the market embraces trends way too eagerly and blindly.)
This reminds me a great deal of my recurring complaint that it’s hard to find a car with a manual transmission anymore. Even down to the point that, last time I was shopping, I looked at German-designed/manufactured vehicles, knowing that the prevailing sentiment last time I visited Germany was that automatic transmissions were for people who were elderly and/or disabled.
I think the reasons are very similar.
The move to hybrid and electric has also shrunk the market for manual transmissions.
I’ve done my time with manual. My dual-clutch automatic has at least as good fuel economy and takes a lot of the drudge out of driving.
All of this! Well said, Joe.
https://asahilinux.org ;)
This, but Asahi still has a long, long way to go before it can be considered stable enough to be a viable replacement for macOS.
For the time being, you’re pretty much limited to running macOS as a host OS and then virtualize Linux on top of it, which is good enough for 90% of use cases anyway. That’s what I do and it works just fine, most of the time.
Out of curiosity, what are you using for virtualization? The options for arm64 virtualization seemed slim last I checked (UTM “works” but was buggy. VMWare Fusion only has a tech preview, which I tried once and also ran into problems). Though this was a year or two ago, so maybe things have improved.
VMware and Parallels have full versions out supporting Arm now, and there are literally dozens of “light” VM runners out now, using Apple’s Virtualisation framework (not to be confused with the older, lower level Hypervisor.framework)
I’m using UTM to run FreeBSD and also have Podman set up to run FreeBSD containers (with a VM that it manages). Both Podman (open source) and Docker Desktop (free for orgs with fewer than, I think, 250 employees) can manage a Linux VM for running containers. Apple exposes a Linux binary for Rosetta 2 that Docker Desktop uses, so can run x86 Linux containers.
I’m not speaking for @petar, but I use UTM when I need full fat Linux. (For example, to mount an external LUKS-encrypted drive and copy files.) That said, I probably don’t push it hard enough to run into real bugs. But the happy path for doing something quick on a Ubuntu or Fedora VM has not caused me any real headaches.
It feels like most of the other things I used to use a Linux VM for work well in Docker desktop. I still have my ThinkPad around (with a bare metal install) in case I need it, but I haven’t reached for it very often in the past year.
Wait, is Jai available now?
It’s in a closed beta stage. If you’re friends with Jonathan Blow or somehow catch his attention, he might give you access, entirely at his discretion.
I get why he did this for the first couple years. But continuing with this development model for over 10 years made me realize Jai is and will likely always be closed source software.
Doesn’t seem like there’s a public release of it yet.
I’m having trouble understanding what the AMD SMU is. From the context I guess it is somewhat like the Intel ME? Though AMD PSP is the direct equivalent for that. I am confused.
Looks like the SMU is more of a power/thermal management controller, according to https://fuse.wikichip.org/news/1177/amds-zen-cpu-complex-cache-and-smu/2/
A cloud exit makes sense if you have an established long-term viable product with predictable & stable traffic patterns.
Cloud gives you the flexibility to establish those parameters for your product, without gambling on expensive one-off purchases for resources you may not need, as you experiment with which hardware resources are best suited for your load.
I’m starting to feel that “cloud allows you to scale up” has become a meme by now…
You can get 32 GB of RAM with an Intel Xeon E5-1620 on kimsufi dot com (I don’t want people to think that I’m advertising) for $40/m and $40 of installation fees. The equivalent on AWS is the `t3.2xlarge` at $240/m. Just in the first month, even with the installation fee, the dedicated server is cheaper than AWS. And AWS will kill you on bandwidth, which is 100 Mbps unmetered with kimsufi.

Also, with one of these, a well-written application and good caching, you can easily handle ~1k req/s. That’s a lot of users!
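To make the comparison concrete, here is a quick back-of-the-envelope using the prices quoted in this thread (not current list prices):

```python
# Prices as quoted above, in USD; not current quotes from either vendor.
dedicated_monthly = 40   # kimsufi-class dedicated box, per month
dedicated_setup = 40     # one-off installation fee
aws_monthly = 240        # t3.2xlarge-class instance, per month

# Even in the very first month, with the setup fee included,
# the dedicated box is cheaper.
first_month_dedicated = dedicated_monthly + dedicated_setup
print(first_month_dedicated)  # 80

# Bandwidth budget: 100 Mbps unmetered shared across ~1k req/s
# leaves an average of 12.5 KB per response.
per_response_bytes = (100e6 / 8) / 1000
print(per_response_bytes)  # 12500.0
```

The 12.5 KB/response figure is why the “good caching” caveat matters: it is plenty for HTML and API responses, but not for serving large assets at that rate.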
I sometimes think that cloud companies managed to sell outrageously over-priced products to gullible users who never needed it in the first place. :(
There’s a lot more to the word “flexibility” besides “scale up”. Clouds allow you to experiment with (managed) blob store, queues, DBs, caches, search indexes, CDNs, networking, etc; to figure out what best suits your product. And then easily move the product to another continent if it turns out it is more popular in EU than in the US—or wherever else your hardware is.
How often does this happen? And is it worth paying 10x the price per month, just to optimize for this use case?
Most of this can be experimented with very cheaply, one command away with `apt install varnish-cache glusterfs postgresql-server rabbitmq-server haproxy`. The default package configuration will be more than good enough to experiment.

Also, this will put you much more in control of things, instead of debugging C++ stack traces (I had a ton of these when trying to use AWS Redshift) or weird HTTP error messages returned by proprietary APIs. (Try to debug authorization errors in AWS; good luck with that.)
1K requests / second could be underselling it …
It’s hard to compare directly, but back in 1999 people talked about the C10K problem – 10,000 concurrent connections on a single machine.
https://en.wikipedia.org/wiki/C10k_problem
As the wikipedia page mentions, production services have achieved over 1 million concurrent connections on a single machine over 10 years ago – e.g. WhatsApp, using Erlang, not even a native language like C.
Granted those are probably tiny requests, and I guess keeping the connections open probably allows many more requests per second, since setting up the connections is expensive.
But it’s still orders of magnitude more than 1K / second (even though the units aren’t the same; I’d be interested in any pointers that elaborate on the relation)
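One way to relate the two units: steady-state request rate is roughly the number of concurrent connections divided by the think time between requests on each connection. A toy sketch; the traffic figures are illustrative, not measurements of any real service:

```python
def req_per_sec(connections, secs_between_requests):
    """Rough steady-state request rate for C open connections that
    each issue one request every T seconds."""
    return connections / secs_between_requests

# 1M mostly-idle chat connections, one message a minute each:
print(req_per_sec(1_000_000, 60))  # about 16,667 req/s
# 1k busy API clients, one request per second each:
print(req_per_sec(1_000, 1))       # 1000.0 req/s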
Maybe 1K page loads, since modern web apps seem to make like 500 requests per page load :-P Or they download 10-50 MB of Javascript routinely.
Maybe a more comparable site is StackOverflow; they are a self-hosted monolith and seem to have large amounts of optimization at the .NET layer.
Stack Overflow is a cacheless, 9-server on-prem monolith
The “cacheless” point is important. I’ve seen a lot of bad architectures papered over by caches, which make things better in some cases and worse in others. They also introduce a lot of operational expense.
https://twitter.com/sahnlam/status/1629713954225405952 – actually the Twitter thread says it’s 6000 requests/second per machine, consuming 5-10% capacity. Interesting
6000 requests/second per machine, across 9 machines, is ~140 B requests/month, which is at least in the ballpark of the “2B page views/month” claimed
So yeah StackOverflow was acquired for $1.8 billion in 2021, and you can run it on 9 machines, each doing ~6000 requests/second.
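The ~140 B figure checks out if you multiply the per-machine rate across all 9 machines:

```python
per_machine = 6_000            # req/s per machine, from the Twitter thread
machines = 9
seconds_per_month = 86_400 * 30  # ~30-day month

monthly = per_machine * machines * seconds_per_month
print(monthly)  # 139968000000, i.e. ~140 B requests/month
```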
If $40 vs $240 is the debate, you should definitely go with kimsufi. For a lot of other cases you should use the cloud. And then after a certain scale it might indeed pay off to be off the cloud. It really depends on the services you would use.
Seems to just resell OVH? As I understand it, the servers all have ECC RAM? Still, 100 Mbps is abysmal; I would much prefer a minimum of 1 Gbps. You get that unmetered from Hetzner (or 10 Gbps with 30 TB/month), but at closer to 100 USD/month if you disregard pre-owned boxes.
Still, try and price 30 (or just 2!) TB egress from AWS…
That aside, you would probably want at least two boxes and a floating ip to have your meaningful risks in a similar ballpark to AWS (worse risk, but possibly similar in actual business terms even if you trade 30 minutes downtime/year for a day of downtime).
Kimsufi is part of the OVHcloud group. They offer old servers (Intel Atom N2800 anyone?) through this brand, with simpler services.
Well, if we’re talking non-ecc ram, hetzner has a couple of cheap options with unmetered Gbps uplink.
Amazon overcharges for egress to keep you there as much as possible 🤣
The 100 Mbit isn’t much, that’s true. But on the other hand: if you actually have that many requests, you probably want something like a CDN or a different host in front for all the static assets. So for a typical CRUD app it is probably enough bandwidth. Then again, there are definitely other vendors that give you a fixed amount of compute, storage, memory and bandwidth with > 1 Gbit and do not charge you for the bandwidth. They are bookable monthly too.
Obviously you will need someone who has knowledge about Linux, to at least install the base system and, for example, Docker. But you will equally need someone for AWS (and then probably Linux on top).
I think this is again a part of “no one has ever been fired for buying IBM”, but now we’re doing the same with AWS. For most people it probably isn’t actually reasonable to use AWS, apart from using the trodden path and not having to search for different vendors. Which is totally fine if you want to spend that money.
I personally wouldn’t run anything business related off kimsufi, OVH doesn’t really care all that much about it and will leave you with dead hardware. It’s not that much more expensive (for a business at least) to go up to an actual OVH or use something like Hetzner if you’re only concerned about Europe. Even if you want to go for “cloud” machines then OVH or Hetzner might be a better bet anyway.
Right, and furthermore, with hardware being as powerful as it is, much more powerful than needed for most use-cases, you may not need more than a small fraction of that expensive resource for a very long time, or essentially forever.
Depending on the MCU model you can have very different pull-up strengths (impedance if you will). The author doesn’t specify which MCU they were using (maybe STSPIN32F0A from STEVAL-ESC002V1, but I’m not sure if it’s acting as an I2C slave or master?), so it’s kinda complicated to draw meaningful conclusions from this article.
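For a rough sense of why pull-up strength matters here: the pull-up resistor and the bus capacitance form an RC charging curve on every rising edge of an open-drain bus like I2C. A back-of-the-envelope sketch; the 4.7 kΩ / 100 pF values are illustrative assumptions, not figures from the article:

```python
import math

def rise_time_10_90(r_pullup_ohms, c_bus_farads):
    # 10%-to-90% rise time of an RC charging curve: t = ln(9) * R * C
    # (about 2.2 * R * C)
    return math.log(9) * r_pullup_ohms * c_bus_farads

t = rise_time_10_90(4_700, 100e-12)  # assumed 4.7 kOhm pull-up, 100 pF bus
print(f"{t * 1e9:.0f} ns")  # ~1033 ns
```

At those values the 10-90% rise time is about 1 µs, which would already violate the 300 ns maximum the I2C specification allows for 400 kHz fast mode; a stronger (lower-valued) pull-up sharpens the edge at the cost of more sink current, which is exactly the trade-off a weak internal MCU pull-up gets wrong.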
For more context: https://lobste.rs/s/kfpwxn/mold_1_7_0_author_seriously_considering
I’m not a big fan of SystemVerilog either (I use Verilog-2005 for design and C++ or Python for verification instead), but there is also a UVM version for SystemC (which is actually C++); see https://systemc.org/about/systemc-verification/overview/
Yes for verification; no for digital design, which is not the same as “programming”.
Agree
SystemC is, in my opinion, the one thing worse than SystemVerilog.
I’ve used cocotb. Evaluated it to see if maybe it was an option for a team of new verification engineers. I liked it quite a lot. But SV for verification is a lot more than classes.
When I checked it out, it was missing constraints (and the corresponding randomization), functional coverage (groups, points, properties), and SystemVerilog Assertions. Functional coverage is a must-have in my domain.
Well, it’s the C++ programming language plus a library; they didn’t change the language. In contrast, SystemVerilog is, from my point of view, a rather unfortunate amalgamation of at least four different languages, causing unnecessary complexity, which is an impediment both for people who have to learn it and for tool implementers who have to maintain compatibility. It would not have been so serious if IEEE had not subsequently withdrawn the Verilog standard; as it is, people were forced into SystemVerilog against their wishes, despite its continued relatively low acceptance. Most engineers I know, including me, still use Verilog for design and something else, like cocotb, for verification. SystemC still has a rather low adoption rate, but if it has a UVM implementation, then at least people are not forced into SV.
It seems like pyvsc could help with the randomization and the coverage. I haven’t tried it myself yet though.