Looks like spam. Another AI-generated article from AWS resellers.
Bunny is not an AWS reseller. It started as just a CDN and has gradually been expanding to other services. This is a marketing article though, probably doesn’t belong here.
On the Changelog podcast they presented some benchmarks of CDNs and Bunny beat the competition (Cloudflare) by a wide margin. So if they are reselling hardware, that would still be pretty impressive. I didn’t know them before, but their engineering seems sound. The name is a little odd though.
This should be merged with https://lobste.rs/s/d8ydvt/command_conquer_red_alert_source_code
Oh sorry, didn’t see that :/ You are right
I summon @calvin. I hope it doesn’t feel like cheating. Btw I can’t watch it today, so I’m counting on you.
What does this actually mean for projects like Wine / Proton? Will there be a native Linux port of DirectX in the future?
I’m not super deep into this, but from reading the article it seems to be about open-sourcing the DXIL shader compiler subsystem.
I’m guessing this is about verifying correctness of shader pre-compiles..?
Wine’s already better than Windows in many cases. No need for a port; just add parts to Wine where that’s better!
But that’s because of Windows, I believe; a native port would be even better than a translation layer (with both running on Linux).
This is a new job posting for the same job
What happened, regular turnover? The job looks very interesting.
I am not associated with CodeWeavers, but I assume so: Linux, and especially Linux gaming, is growing, so more developers are needed.
(For everyone wondering about the comment: The Lobsters submit page prompted me to write a comment, because this link was already posted a year ago. This seems to be a new change.)
Very interesting tool! I am a DVC user and even gave a talk about using it for simulation data management. Do you know DVC? There is also a project built on top of DVC called CalKit, which targets a very similar audience to your tool. I have never tried it myself, but I chatted with the creator and it sounds very promising. How does logis differ from DVC? DVC also offers metrics, experiments, and pipeline definition/execution, which you can query using the DVC CLI, or from custom scripts via GitPython and the DVC Python API.
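For example, a minimal sketch of the kind of custom querying I mean (illustrative only; the JSON layout emitted by `dvc exp show` varies between DVC versions, so treat the parsing as a starting point):

```python
# Read DVC-tracked params via the Python API, and dump the experiment
# table as JSON via the CLI, e.g. to feed a custom report.
import json
import subprocess

import dvc.api

# Parameters tracked in params.yaml / dvc.yaml for the current revision.
params = dvc.api.params_show()
print(params)

# Experiment table as JSON; handy for integrating with other tools.
out = subprocess.run(
    ["dvc", "exp", "show", "--json"],
    capture_output=True, text=True, check=True,
).stdout
experiments = json.loads(out)
```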
I remember DVC when it first came out for data management, I didn’t realise they added experiment tracking. This looks great. Honestly it’s very similar to logis and a smarter implementation with custom refs.
There might be other directions I can take logis (it’s still very new and lightweight), or I’ll start building on top of DVC.
Thanks for sharing!
Though now that I think about it, DVC experiment commits don’t get pushed to GitHub, which IMO limits the ability to integrate the metadata into other tools. I assume DVC offers an API, but the vendor lock-in is not ideal.
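To make that concrete, a rough sketch with GitPython (the refs/exps/ prefix is my assumption about DVC’s internals, not a documented API):

```python
# List DVC experiment refs in a local clone. They live outside
# refs/heads/, which is why GitHub's UI never shows them and a plain
# `git push` won't send them.
from git import Repo

repo = Repo(".")
print(repo.git.for_each_ref("refs/exps/"))

# Pushing them by hand would be something like:
#   git push origin "refs/exps/*:refs/exps/*"
```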
DVC is great, but there is still so much room to innovate and optimize (see the end of my talk for ideas)! Happy building :)
While I like competition, the JupyterLab UI never really struck me as slow. Also, what makes JupyterLab great is the ecosystem.
There is just a little bit of information about it under “FAQ” and “About”, but all of this seems more like a plan/announcement of the thing than the thing itself. Meanwhile sympy is pretty great, but heavily underfunded, judging by the number of open issues on GitHub.
A terminal editor like vim (which I use) but completely redesigned to be more 2025 aware, even more 1990s aware :D With text menus for certain things that are hard to remember, integration with LLMs, tabs and so forth. I don’t much like to go outside the terminal app, nor IDE-alike stuff.
You’re looking for helix.
Thanks, just downloaded, I’m trying it.
Came here to say this.
As soon as helix gets vim keybindings, I’ll use it.
I gave the helix/kakoune bindings a college try. Did not like them at all; the way it deals with trailing spaces after words messes up my workflow.
But the LSP integration and overall more modern interface are just so much better than neovim’s.
There is a fork called evil-helix, not sure how good it is
Helix is never getting vim keybindings, unfortunately. You can get Neovim to a comparable place, but it takes quite a bit of work. telescope.nvim for improved UI elements, which-key.nvim to display hints for key sequences, coc.nvim for code completion… and you may still need to mess around with LSP configs.
I’m a fairly happy neovim user; I have all the functionality I need and none of what I want, in 300 or so lines of lua and a few plugins.
But I have to admit that interacting with the helix “shell” (I don’t know if that’s the term; the colon command stuff) is much nicer than the vi legacy stuff. They’ve thought it through very nicely.
Why won’t helix get vim keybindings? Last I heard they were going to embed a Scheme in it; I thought they were going full bore on configurability.
The Helix project’s official stance is that they’re uninterested in enabling alternative editing paradigms. I would assume this philosophy will extend to the functionality exposed to the plugin system, although we won’t know for sure until said plugin system actually exists.
There’s a vim-keybinding plugin for VS code, ever try it out? I found it not perfect, but quite satisfying. Doesn’t necessarily integrate well with menus and stuff though.
what’s vscode got to do with terminal editors? :)
I missed that bit! Though it does better than most graphical editors, since you can tunnel it over SSH to work on a remote system. Not perfect, but works pretty well.
I feel like helix tends a bit toward the IDE-alike direction. But OP also asks for “integration with LLMs” which is another thing I’d say tends toward the IDE-alike direction, so I can’t say I’m sure what @antirez means by that criterion.
Or kak or zed or… what xi aspires to be, how’s it doing?
Until they finish the plugin system, there’s no LLM integration yet. I am aware of workarounds like helix-gpt, though.
Have you seen flow?
I’ve fully converted all my terminal editing over to flow, it just feels right
What about emacs in the terminal with evil mode? That would hit a lot of your points.
emacs definitely feels like an IDE to me when I’ve tried it. Even without a pile of plugins it still loads noticeably slower than the other terminal editors I’ve tried like vim, and then when I’ve asked about this, people tell me I need to learn “emacs server”, which seems even more IDE-tastic. Lisp is cool and undoubtedly there’s a lot of power & depth there, but like Nix I think I need to make it part of my personality to realize the benefits.
I don’t even think you need to run emacs as a server, just have a little patience. Even with a huge framework’s worth of plugins my Emacs is useful in a couple of seconds.
For my habits and workflow, 2 seconds to wait for a text file is completely unacceptable. What’s next, 2 seconds to open a new shell or change directories or wait for ls? Please do not advise me to “just” lower my standards and change my workflow to excuse poor software performance.

For me, Emacs is started in daemon-mode when I log in (with systemd, but could be done perfectly well with one line in .xinitrc or even .profile). Then if I’m invoking it from the shell, I have a script that pops up a terminal frame instantly, and is suitable for using as $VISUAL for mail clients and such. I’m absolutely not trying to say you should do this, just that it’s the standard way for using a terminal Emacs in the “run editor, edit, quit” workflow (as opposed to the “never leave Emacs” workflow).
It’s ridiculous to call loading an editing environment like Emacs “poor performance” given its features vs, say, Helix. Sometimes I use Helix for command line edits; usually I use Emacs for file editing. If you’re going to need all the fancy features of an editor you can wait <2 seconds like an adult, or use vim like an adult, but if you’re needing LLM integration and can’t wait shorter than a breath for your editor to load, that is a you problem, not a problem with your editor.
You really don’t need to make it part of your personality, but you do need to set it up so you’ve got a server, and make sure you’re running a version with AOT compilation.
In fairness, I don’t think many people use stock vim or vs code as their daily driver either, because there are rich returns to mildly customizing your editor and investing in knowing it well.
I’m actually pretty curious about this. Does anyone know if there’s data on this?
I think my only Vim config for a long time was “set nocompatible” and “syntax on” (both of which I don’t seem to need these days). Oh, and “set nobackup”. VS Code, I think I only disabled wordwrap. Emacs required a bit of customization (cua-mode, SLIME, selecting a non-bright-white theme), but I’ve used it a lot less than Vim and VS Code. I used to be big on customization, but I guess I never got into customizing editors–not sure why…
I’ve been plugging lazyvim hard of late, and I think it’d fit your needs without leaving vim or spending an eternity configuring stuff:
text menus for certain things that are hard to remember

which-key + most things being bound to leader (i.e. space) make it super easy to find how to do things, and autocomplete on the command line takes you the rest of the way
integration with LLMs

Comes with one-button setup for copilot, supermaven and a couple others. IME it’s not as good as VS Code but gets pretty close; I’ve definitely not felt IDE AI features give me enough value to switch.
Tabs by default and very sensible keybinds for switching between them! https://www.lazyvim.org/configuration/tips#navigating-around-multiple-buffers
try helix or kakoune
Vim has had built-in tabs for quite a few years now, by the way.
Moreover, you might want to read the Vim help on menu support; there is at least something for terminal mode.
Neovim seems to have built-in support for a terminal split, by the way.
Yes, since 2006 to be more precise, so vim added support for tabs 19 years ago.
I know about the terminal split, but I don’t believe that’s the way. It was done much better in 1990s MS-DOS editors… where they tried to bring you some decent terminal UI. Btw I see that a few projects were suggested for me to look at; I will! Thanks.
Of course — the terminal part was just «if I am listing features that can be used nowadays without relearning the editor, I can as well mention that». But overall the situation with new features in Vim or NeoVim is better than you described, so in case you find some dealbreakers with the mentioned editors — hopefully not, but who knows (I do find dealbreakers there) — you can get at least some of the way starting from Vim.
zed has a great vim mode (seriously)
the readme on this repository does not contain a list of differences, but there is one on the documentation mdbook: https://conduwuit.puppyirl.gay/differences.html
I eventually found this list as well, but it doesn’t answer the question of why. Why not upstream those changes and features?
If you look at the upstream history, the development pace is much slower:
https://gitlab.com/famedly/conduit/-/commits/next?ref_type=HEADS
I don’t have any inside knowledge, but that makes me think that the fork wants to develop faster than upstream does.
Not riding the AI (LLM) hypetrain, but LLMs (especially off-the-shelf ones) work much better with open source software, because all of the technical documentation, FAQs, issues, pull requests, etc. are openly available. It would be interesting to see whether an LLM fine-tuned on the Postgres data could have answered that question and given the source.
Why are 11 people flagging this as spam? Just because it’s about a commercial product? It’s difficult to produce hardware without charging money for it, since it’s, y’know, a physical object that someone has to build. At least the RPi is open-source hardware.
Neither the hardware nor the firmware is open source, and large chunks of it (principally the GPU) are proprietary secrets.
Specifically, the bootloader is proprietary/secret, as in, you cannot build a customized version yourself. I was actually bitten by this: I wanted to PXE-boot a largish initrd (entire root fs, I think it was 600M compressed) but the bootloader had a limit on how much it could download. I asked the Pi folks to increase the limit but they refused (probably they had their reasons). So it was a no-go. And a 16G Pi would be a natural fit for running an in-memory OS.
Try iPXE? There’s a build for the Pi here: https://github.com/ipxe/pipxe

Then you boot a small (~4MB or so when I’ve used it on x86_64) image via PXE and then use iPXE to pull the rest down via PXE or http(s) (or probably other methods, I’ve not looked beyond those).
Yes, I was thinking of chain booting but decided not to bother (8G was still too tight but might be worth the hassle for 16G). But I didn’t know about the iPXE build for Pi, thanks for the pointer!
It is, but the underlying OS is now FOSS. This does not mean that the Pi firmware is FOSS – it isn’t – but it does mean that the code is there to study.
I wrote about it:
https://www.theregister.com/2023/11/28/microsoft_opens_sources_threadx/
Guessing people parsed “for sale” in the title and flagged it on that basis alone. I wasn’t one of them, but I note that it’s consistent with how Lobsters treats most other product-release links.
This isn’t supposed to be a place to promote products. But we’re aggravatingly inconsistent about that policy; see https://lobste.rs/s/bed56d/mecha_comet_modular_linux_handheld for a recent example.
Caveat: I haven’t flagged this article.
I think spam (“for links that promote a commercial service”) is more applicable than off-topic (“for stories that are not about computing”), but I honestly can see both applying. The reason is this has very little technical detail. The article spends about 3/4 of the words talking about the company’s climate change policy and 1/4 talking about the newly released product. Basically only one (smaller) paragraph gets into any technical specifics about what makes this board different from others.
I’m happy to see a new Raspberry Pi model get released, I’m glad the company is considering how to take care of the Earth, but it certainly seems intended simply to sell the new Raspberry Pi, hence promoting a commercial service, and I don’t see how it fits these criteria:

“Will this improve the reader’s next program? Will it deepen their understanding of their last program? Will it be more interesting in five or ten years?”
So why haven’t I flagged? As of my writing this comment, my flag would tip the balance between positive and neutral votes on the article (+15, 14 flags). This article isn’t so egregious as to make me want to be That Guy and flag it down. Others that feel differently may vote their conscience.
Jeff Geerling did a very in-depth review and benchmark of this, which probably would have been the better link to post: https://www.jeffgeerling.com/blog/2025/who-would-buy-raspberry-pi-120
No it is not, and never was.
Indeed the selling point of several Pi rivals is that they are open hardware.
The Pi is entirely closed and not very standards-compliant. It uses closed-source firmware based on ThreadX, and has a unique design and startup sequence: the VideoCore GPU is the primary processor and it loads code into the Arm. Thus the Pis can’t even run a normal Arm bootloader without work, and there is nothing resembling UEFI on them.
Some distros, such as openSUSE on Pi, start by softloading Das U-Boot as a more standard bootloader, and then load the OS from this, just to get the hardware into something closer to a more standards-based Arm computer before loading Linux.
Is it still true for Pi >= 4? https://developer.arm.com/documentation/102677/0100/Set-up-the-Raspberry-Pi explains how to run UEFI on the Pi 4.
Yes.
AIUI the simplified boot process is this:
The GPU boots the computer, loads the Arm code image from SD or wherever, places it in RAM, initialises the Arm core(s) and sets them running what it put in RAM.
If you want, you are perfectly free to put a UEFI loader in that RAM – but don’t kid yourself: it’s no more in control of the computer than the passenger in the front seat of a self-driving train.
Apologies for that … must’ve been a false memory implanted by the aliens.
In fairness, it’s a widespread belief that I’ve seen a lot in other places.
The Pi Pico is FOSS, I believe, but it’s not a general-purpose computer.
The Picos and RP2040 and RP2350 are thoroughly documented but they don’t go as far as being open source. Last time I looked, the Pico board designs are not published, though the reference designs (for the PCB design guide documentation) are available. Some parts of the chip are published, such as the firmware source and the RISC-V cores, but other parts developed by Raspberry Pi Ltd such as the PIO cores are not published. (And of course most of the big IP blocks such as the ARM cores and USB controller are proprietary to other companies.)
Yeah I guess the original title (“16GB Raspberry Pi on sale now at $120”) could be interpreted as being a sale as in promotion, versus just the release announcement which happens to mention the price in the title. Didn’t even think about the confusing wording when I posted this, I just re-used the original title, but I can see how someone could think it’s off-topic.
IMHO more off-topic than spam, but either way it’s a product announcement for a minor variant of an existing product; it’s hard for something like that to be on-topic (vs new generations, where one can often argue that something fundamental enough has changed that it maybe deserves attention. But even then there are limits).
I haven’t tracked a lot of hardware in this class, but the overall price with the doubled RAM seems noteworthy?
I fondly remember rescuing one of the older models from e-waste and turning it into a more “persistent” alarm clock. I know a colleague who runs a whole rack of Pis for various hobbies. I do figure you’re more likely to get storage-I/O bound than RAM bound first, but they still fill an interesting niche for a cheap, “beginner” device.
I do figure you’re more likely to get storage-I/O bound than RAM bound first

That may still be true… but these model 5s are a big improvement in IO throughput compared to previous ones, including first-class support for an M.2 NVMe daughterboard. It’s a huge step up from the days of doing everything on a microSD card.
Because it’s an ad for a single board computer. In what way is it relevant to lobste.rs? I’d not like lobste.rs to become a feed on any kind of ordinary computer.
With “any kind of ordinary computer” I mean that it’s not some world first implementation that is also being discussed.
On top of that the article is not just regarding an ordinary computer it’s also just an ad. There’s nothing setting it apart in content from most other ads out there.
Mods have since improved the title, but two reasons. (1) “X on sale for $Y” is always functionally an ad, even though it’s a plain statement of fact, since its function is to encourage the reader to weigh up buying one - IMO Lobsters don’t need that in their feeds; (2) the article itself has scant technical content to discuss, leading me to the conclusion that the main point of this submission is “hey maybe you want to buy this thing”. To me that’s spam regardless of whether the submitter benefits or not.
Look, I’ve been around since the 1991 green-card-lottery spam on Usenet that coined the term, and what you’re describing is not what “spam” means. This was posted one time by an active member, for informational purposes. You can dislike this sort of post for whatever reason, but use more accurate wording.
The Lobste.rs about page notes how it is defined in this community, which is different from how it was originally used.

For stories, these are: “Off-topic” for stories that are not about computing; … “Spam” for links that promote a commercial service.
In this case, spam simply means this article is by a commercial entity attempting to sell one of its products, with very little technical benefit. Not that people believe the article is trying to sell us pills or a new stock market scheme.
For those of us unfamiliar with it, what is Chimera Linux?
I read their “about” pages and learned basically that it’s a Linux distribution, not based on other Linux distributions, that aims to re-implement things they think other distributions do poorly. But that doesn’t tell me anything about its actual strengths.
It’s the Linux kernel with BSD userland instead of GNU, musl instead of glibc, and LLVM instead of gcc. There’s apk for packages and systemd is supported.
It’s not correct that systemd is supported, chimera uses dinit for init and service management.
There are some individual systemd tools used, such as sysusers and tmpfiles, and systemd-udevd is used as well.
Currently logind is used, but the intent is to extend turnstile to fully replace it.
As well as mimalloc (a high-performance malloc) instead of musl’s built-in malloc (mallocng, which has somewhat lackluster performance).
Thanks for the explanation, I originally confused this with ChimeraOS, a gaming-focused Linux distribution.
I’m still trying to see big end-user changes here; most of the changes are rather “internal”. BSD user space, yet another systemd replacement. I mean, people have been complaining about “GNU bloat” since the 90s. I distinctly remember “Mastodon Linux”, where the author also didn’t quite like the change from a.out to ELF. There also seems to be some DNA from Void Linux here, one of the more prominent musl-based distributions.
While I appreciate lean projects, I’m not sure whether that really matters at all here. Given what you’ve got with the kernel and the desktop and browser stack, shaving off a few glibc/coreutils options barely matters. Reading the site, the authors seem quite pragmatic (as compared to e.g. the suckless people, the most “prominent” torchbearers of minimalism for minimalism’s sake).
But again, not quite sure how much of that will be visible in the end.
Chimera isn’t really in the “minimalist” camp, but could perhaps be in the “power user” camp. And, like Void, they do have a little bit of that BSD vibe where you’re installing a full system of components and there’s some opinions on how they fit together.
One cool thing is the cports system, which is also reminiscent of Void Linux. But it uses Python instead of shell, and is very readable. Look how nice this is: https://github.com/chimera-linux/cports/blob/master/user/incus/template.py
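For a rough idea of the shape, a heavily abridged sketch (field names from memory and approximate; see the linked template for the real thing):

```python
# Hypothetical cports-style template: package metadata as plain Python
# module-level variables, with hooks as ordinary functions.
pkgname = "example-pkg"
pkgver = "1.0.0"
pkgrel = 0
build_style = "makefile"
pkgdesc = "An example package"
maintainer = "Jane Doe <jane@example.org>"
license = "Apache-2.0"
url = "https://example.org"
source = f"https://example.org/releases/{pkgname}-{pkgver}.tar.gz"
sha256 = "0000000000000000000000000000000000000000000000000000000000000000"


def post_install(self):
    # Hooks receive the template object and can call its helpers.
    self.install_license("COPYING")
```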
Isn’t it the Linux kernel with a BSD userland?
I actually looked for the same thing, but trying to do new things can be a goal in and of itself. Maybe there are some unknown-unknown advantages of this tech stack.
Sadly, I will not be there this year 😕 Have fun!
Oh that’s so sad. There are no longer any actual phones in the main category :-(
Note that (at least to my understanding) this is not due to support getting worse, but the standards for main category getting higher (which I think is good)
The question is whether any actual devices will manage to get there. From what I see of pmOS, it’s mostly a lot of people getting devices to boot, but nothing daily-drivable. Which is totally fine, there’s a lot of fun in getting things to work, but the last 10% is a slog. It’s just a question of whether it’s relevant/interesting to people beyond SoC hackers.
The technical requirements for being in the “main” category didn’t change; we added some new requirements wrt having more active maintainers for a “main” device.
We want “main” devices to be well supported, and a key part of that is usually having more folks who want to do the dev/support/testing for the device.
I hope to see some new phones in “main” next year 🤞
This is obviously a joke, but more often than not, it feels like it
Moreover, we have been the single significant contributor of the source code. Our ecosystem tools have received a healthy amount of contributions, but not the core database. That makes sense. The ScyllaDB internal implementation is a C++, shard-per-core, future-promise code base that is extremely hard to understand and requires full-time devotion. Thus source-wise, in terms of the code, we operated as a full open-source-first project. However, in reality, we benefitted from this no more than as a source-available project.

I’m assuming since they’re making this move they had all contributors to their AGPL-licensed repo sign a CLA. Makes sense nobody would want to invest at least a year learning to contribute to a codebase where their work could then be yanked away, as it was here.
Anyway it’s fine, this is just businesses doing business things. Revenue in the FOSS world is a strange game. Maybe they’ll succeed enough to then eventually use a viral FOSS license again, although unless they then get rid of the CLA nobody will want to contribute to it.
I don’t know that CLAs are much of a deterrent to outside contributors. It feels like many FOSS projects have CLAs, even if they have no associated business. The stated justification is usually to avoid relicensing problems from one FOSS license to another, if the need arises, since you don’t have to track down dead/missing contributors.
Ultimately, I think the solution to free-riding at all levels is something like Attribution-Based Economics, where monetary payouts are distributed based on repo contributions, regardless of whether you’re an employee or not. But that’s a huge undertaking, and afaik, nobody outside of a few people in the Racket community has ever tried building something like it.
CLAs completely undermine the GPL. It’s like encryption with a backdoor. It’s only a matter of time until it’s exploited, every single time. The stated justifications don’t matter because they are completely detached from what actually happens. People who develop GPL code and license it under some entity’s CLA need to be aware that they perform unpaid labor for that entity. Their code can and likely will end up in unfree software. If you are okay with that, it’s fine. Just don’t be surprised.
This situation, repeated again and again, is why Pieter Hintjens put the following into his C4 (Collective Code Construction Contract) standard:

The project SHALL use a share-alike license such as the MPLv2, or a GPLv3 variant thereof (GPL, LGPL, AGPL).

All contributions to the project source code (“patches”) SHALL use the same license as the project.

All patches are owned by their authors. There SHALL NOT be any copyright assignment process.

Each Contributor SHALL be responsible for identifying themselves in the project Contributor list.
That 2.2.3 is critical — an RFC 2119 SHALL NOT, a hard requirement of C4. Pieter wrote, in the ZeroMQ guide:

Here we come to the key reason people trust their investments in ZeroMQ: it’s logistically impossible to buy the copyrights to create a closed source competitor to ZeroMQ. iMatix can’t do this either. And the more people that send patches, the harder it becomes. ZeroMQ isn’t just free and open today–this specific rule means it will remain so forever.

Note that this is not the case in all MPLv2/GPL projects, many of which still ask for copyright transfer back to the maintainers.
I have been turning a related problem over in my mind for some time — I have aspirations to write something that needs to remain Free Software so that it can participate in bootstrapping chains, but the experiments I’m looking at might also make it attractive in its own right. Such a project seems to have the greatest chance of success if the number of external contributors is maximised, and a strong licence like the AGPL may scare away too many.
I can’t think of a licensing scheme that gives the project a way to offer ad-hoc licenses for the good of the project (e.g., to fund further development) that does not also make the project weak to the relicensing rugpull. Maybe an actual lawyer could invent some clever scheme, where some separate legal entity holds the copyright (including copyright assigned by contributors) and certain individuals are allowed to direct that entity to create and re-licence snapshots of the source tree. But I don’t know enough to dare try.
I am currently thinking that GPLv3+ and “no copyright assignment” is the best balance of external understandability, likelihood of attracting contributors, and ease of compliance for a project like the one I alluded to. GPLv3 does not look to me like it goes far enough in its anti-anti-circumvention provisions but it’s something, and AGPLv3 seems to scare a lot of people away. Having capabilities that are only available to the Free Software ecosystem makes it attractive to participate in, and I don’t think a BSD/MIT/X11/ISC-style licence would achieve that.
I worry that in the future, there will be a hostile takeover of the FSF, which would open the possibility of a much weaker GPLv4 which all GPLv3-or-later licensed code could be “upgraded” to. Especially after all the Stallman-bullying that has been going on over the past couple of years, I have lost confidence that the organization and indeed the free software community itself is equipped to deal with the pressure in the long-term or can be trusted as a steward of this kind of power. I think this is a much bigger threat than being stuck on a particular GPL version. Hence, it might be wise to license GPLv3-only.
You can specify an alternate license steward when using or-later
You can always fork without a CLA. Open source is a do-ocracy. People who do the work have power. See what happened with the forks of Terraform (OpenTofu) or Gitea (Forgejo). People used their power there. Users don’t have power in open source unless they pay money.
Sure a fork is possible based off the final AGPL release, but that requires a knowledgeable development community to have already formed around the project or for the value proposition to be truly unique (there are a good number of FOSS databases already). And the formation of such a community will be hindered by the presence of a CLA, because the community cannot control when the rugpull happens. If it happens before the community reaches critical mass then it just gets killed.
People who develop GPL code and license it under some entity’s CLA need to be aware that they perform unpaid labor for that entity.

Ironically, this is the same rationale many (formerly) FOSS companies give when moving to source-available licenses. They paid to develop it, but AWS treated it as unpaid labor for their competing service. Just a hierarchy of exploitation.
Regardless, I suspect that if devs became allergic to CLAs, more companies would choose going proprietary over FOSS.
The stated justification is usually to avoid relicensing problems from one FOSS license to another, if the need arises, since you don’t have to track down dead/missing contributors.

That’s the point: the non-CLA-covered contributors form a durable distributed network preventing the rugpull.
If it’s just some kind of trivial project then a CLA doesn’t matter, I’ll contribute to it to get a feature I want for however long the project is open source, whatever. But some codebases are just very complicated and take a huge amount of time investment before even minor changes are possible - you can’t just do the thing you want to do, you usually have to do a bunch of smaller warmup tasks to learn how things work first. That’s where you start caring about a CLA and whether you’re just being taken advantage of.
Maybe I should have put “stated” in quotes to indicate that I don’t believe in that justification. Too late to edit.
Alas it did override the second part of your post! I had never heard of attribution-based economics.
Ah, well, chalk it up to a lesson in textual communication.
ABE is a really interesting idea! Here’s links:
I’m assuming since they’re making this move they had all contributors to their AGPL-licensed repo sign a CLA. Makes sense nobody would want to invest at least a year learning to contribute to a codebase where their work could then be yanked away, as it was here.

Do we think this is a significant cause of non-contribution? It seems to me that even if there weren’t a CLA, the number of people skilled enough and willing to invest that much time into contributing would be vanishingly small (barring some sort of pay through their employer).
Building a community of FOSS contributors is very difficult. It requires substantial investment; companies can do things like:

Do as much development in the open as possible, on the public mailing list or issue tracker

Hold monthly virtual community meetings/office hours with the core dev team

Respond in a timely way to questions and PRs from contributors

Seriously invest in documentation, conceptual tutorials, and other written onboarding materials

Create and broadcast mentorship programs

Cultivate a contracting ecosystem where contributors can make some money through occasional contracts with companies that use or want to use the project

Run actual cash development grant programs
Without these, for complicated projects like a database you’re basically just waiting and hoping a senior-level semi-retired software engineer wanders past all the other FOSS projects pining for their time and stops at yours then perseveres (for free!) through the development process through sheer force of will or some kind of passion for the project.
Here it is: https://www.scylladb.com/open-source-nosql-database/contributor-agreement/ (link found in https://github.com/scylladb/scylladb/blob/master/CONTRIBUTING.md)
I like Ubuntu, but I am always interested in people finding things they don’t like about it. It was my first distro and I just stuck with it. I have Arch on my Steam Deck and that’s also fine, but also not so different that it’s worth a switch. Also, what most people underestimate is that Ubuntu is among those distros which are officially supported and mentioned in almost any documentation.
That is absolutely my favorite thing about Ubuntu, and it is definitely the thing that keeps me using it (sometimes) even when I like using other things more.
I am sorry but I do not understand what you are linking to.
It’s another of their own posts, upthread, where they say:

All that’s to say I’m comfortable troubleshooting arcane things, and I don’t feel very constrained with regard to which distribution I use. And even with that, when I want to try out a new-to-me piece of software, I will always reach for the most recent Ubuntu LTS first. Because very nearly everyone tests with that. I think that’s Ubuntu’s main superpower.
Sorry. When I followed the link on my own machine, it highlighted the comment. @kivikakk quoted the main part I was adding here.
I built one of these about 4 years ago. Frankly, I just think a nice Intel NUC is a hell of a lot cheaper: with three or four of those you have about the same power draw at about 20 times the performance. Plus you don’t have to pay for all the power supplies and all the other things you need; by the time you’re done with it, a decent RPi 5 here is at about 100 bucks.
I guess the main reason to do something like this is not compute performance. With a single computer you also have a single point of failure. You could have these in multiple locations (in case of fire) or power one down for maintenance. Another big factor is probably how cool this is :D
Sadly can’t order it from Germany :/ Or am I doing something wrong?
AliExpress occasionally has issues shipping to Germany. I’m the main person at SFC working on the OpenWrt One, and I emailed our contact at the manufacturer yesterday - I expect they’ll get back to us in the next day or two on how people in Germany can order one. Note that most other EU countries are supported (I checked France, Italy, Spain, for example) so the issue does seem limited to Germany currently.
Thank you!