I find it amazing that so many people are discovering Niri right now (me too!). Last weekend I even took the time to write a short ArchWiki page about it. It’s not complete and I encourage people to add stuff to it.
My wife recently started a second round of studies at a local university in the US and was quite taken aback by how the courses were structured (she has an MBA from Vienna’s economics university, graduating in 2008). She describes it as a very strict regimen of classes, like the ones she took at her high-school equivalent: lecture courses with weekly quizzes and mandatory attendance, and many frequent exams. Contrast that with how you’d get credits for a lecture in Europe: you take the exam at the end of the semester, and it’s on you to keep up with the material (or cram it all just prior to the exam).
Undergrad students in the US aren’t given a whole lot of freedom; that they compete for these prizes at all points to near-heroic effort.
This is also happening in Europe, at least in some countries like Spain. It all started with the Bologna Process. Now, Bologna doesn’t really require universities to follow a more high-school-like approach, but most universities here interpreted it that way, and now it’s difficult to pass without daily attendance, the weekly quizzes, etc. You can still pass by just taking a final exam, but you will probably be the only one in the class taking that approach, and you lose the opportunity of the recovery exam (because that exam is the recovery exam; the normal exam is allowed to take daily work into account and most people do it).
OP wrote about students in 2020, so take my old stories with a grain of salt: I graduated in 2010, but I have one of these old-school German diplomas (generally 9 semesters, most needed more, vs BSc+MSc in 6+4?) from the other Munich university, and they were shifting towards the Bologna system while I was there (TUM had already moved over iirc). So unless they changed it meaningfully since then (and assuming most German universities have the same course load, which makes sense in a standardized system), that already seemed like a hard shift from “do whatever you want, you need to attend some stuff, then you have a test at the end and only some exercises to hand in in between” to “this is your schedule, welcome back to school”.
I wouldn’t say taking off a semester was common but it was also not completely unheard of.
Daily attendance in the courses you follow, and the choice of courses — these are two different dimensions, though. You can definitely have heavier continuous assessment, or midterm+final, or final-only across the courses in the same program, and then there is the question of which courses you take.
I think Germany has a lot of course choice (and also some courses with unpredictable availability), and also some universities where graduating on time is feasible only for very few (with an academic culture that adapts to this fact). French universities are likely to have fixed schedules, and failing students might have to retake a year rather than mix and match, although in a sense this sometimes makes taking a gap year easier; not sure about the amount of course choice in the fancier «engineering schools» and ENS (not well aligned with the Bologna process, more prestigious than universities). Also, in France, if you just barely fail one course and excel in the others, you probably still pass the year by compensation between courses.
Europe is not a single place, even if it has adopted a course load standard to simplify student exchanges…
To add a Canadian perspective: our universities seem to be a mix and it really depends on the department. I know people in Engineering programs often had mandatory attendance and even daily quizzes in their courses, whereas people in Math programs often just had monthly assignments and a final exam. As someone who did a Computer Science degree, I only ever had mandatory attendance in my elective courses that involved a lot more collaboration like a French course which needed students to be there to practice speaking with each other in the presence of the teacher. Everything else I took was similar to this monthly assignments + final exam model. I have no idea if this changed after COVID though.
I teach in a computer science program in the U.S. and I follow something like that. Making the entire grade be the final project or final exam would be too far from the norm to get away with without complaints, I think, but it is possible to do a project-based class that doesn’t grade attendance or have quizzes. A grading scheme along the lines of: 20% homeworks (mostly a completion grade), 20% project checkpoint #1, 20% project checkpoint #2, 40% final project (w/ deliverables and presentation). I have also taught a more exam-based class where the scheme is: 20% homeworks, 25% midterm exam 1, 25% midterm exam 2, 30% final exam. That one obviously requires coming to class on the exam days, but you could in theory come only those three days and still get an A.
This varies a lot by university though, both in terms of formal policies and student/faculty culture.
Adding the Italian perspective: when I graduated in physics, well after Bologna was already in place, some of the courses had 2 intermediate written exams. If you passed both of them you would not have to do the final written exam. However, everyone had to do the final oral exam, which covered the whole program of the course. In my experience, less than half of the course would pass both intermediates, so the “final” written exams were quite crowded. No one gave an absolute crap whether you were in class or not. You could literally show up just for the exam and you would get a fair examination. I know it firsthand because as a working student I did it fairly often. The only exception would have been lab courses, which is very reasonable. Also, in Italy tuition is quite affordable and if you want to skip a semester, or even a year, it’s just up to you. There are no repercussions, and nowadays you could get a discount on the tuition if you declare beforehand that this year you plan to take fewer credits. They won’t kick you out if you don’t complete courses, as long as you pay. You are considered an adult; it’s up to you what you want to do with your life. On the “not so bright side”, no one gives an absolute crap about exam failure rates and professors are completely unaccountable for that. Take it or leave it, I guess…
You say in the article that “There really tends to be only one at a time though”, but I think that’s not true; the industry is big enough to support multiple technologies at the same time. Which makes me question the scope of this “tech”: consumer technology, computery stuff, anything related to software? None of the definitions of tech satisfies all the examples you mentioned.
This is because I want to share with you a real hype cycle that ended very badly at the time, but which was mostly confined to specific companies: solar panels. In 2008 you could find many solar panel companies in Europe. It was seen as the industry of the future. In my city, solar panel companies sponsored the local sports clubs. However, it was unsustainable: there was not enough demand at the time, solar panels were not that efficient, and electricity was still kind of cheap. There were many layoffs when these companies closed. Later they were replaced by cheaper Chinese makers.
Now it’s a mature technology; each year we set a new record for solar panel production, but more than 80% of them are made in China, and China wants to regulate production (a kind of solar OPEC) to control the market and make it sustainable long term.
Very cogent points! These things happen all over and all the time; I could tell similar stories about the various swings in oil prices being connected to expectations vs reality of new technologies and prospects (the Marcellus Shale gas fields being the one I know most personally). This list in particular is mostly the things that reach me through the media/news I consume, so tech and programmer-y stuff. In particular I see it often driven by the Silicon Valley startup and funding culture, which has insane amounts of money and a propensity to spend it in flashy ways that create lots of… well, hype, even among people who aren’t practitioners in the field.
So I spend a year having to hear about bitcoin on the radio while driving to work, roll my eyes at the stupidity of it all and say “it will end mostly-badly because these people aren’t usually creating anything of new value”. Lo and behold, it ends mostly-badly and the successful bits that are genuinely good ideas fade into the background of existence, and for a year or two the tech industry is kinda fallow and nobody is really being all that creative… and suddenly I start hearing about the big AI boom and how it’s going to change the world! Just like blockchain, Big Data, IoT, etc. etc. all did. I.e., lots of money will be wasted, lots of money will be siphoned from the have-nots to the haves, using the internet will get slightly more resource-heavy and painful, and we’ll grow a new and exciting sub-field for security researchers.
I mean, things were overhyped and ridiculous, but can anyone say that the internet isn’t at the core of the economy?
1999-2006: Java
Still one of the most widely used languages, powering many multi-billion dollar companies.
2004-2007: Web 2.0
What we now just think of as “the web”.
2007-2010: The Cloud
Again, powering multi-billion dollar workloads, a major economic factor for top tech companies, massive innovation centers for new database technologies, etc.
2010-2015: Social media
Still massively important, much to everyone’s regret.
2012-2015: Internet of Things
This one is interesting. I don’t have anything myself but most people I know have a Smart TV (I don’t get it tbh, an HDMI cord and a laptop seems infinitely better)
2013-2015: Big Data
Still a thing, right? I mean, more than ever, probably.
2017-2021: Blockchain
Weirdly still a thing, but I see this as finally being relegated to what it was primarily good for (with regards to crypto) - crime.
2021-present: AI
Do we expect this “bubble” to “pop” like the others? If so, I expect AI to be a massive part of the industry in 20 years. No question, things ebb and flow, and some of that ebbing and flowing is extremely dramatic (dot com), but in all of these cases the technology has survived and in almost every case thrived.
All of these things produced real things that are still useful, more or less, but also were massively and absolutely overhyped. I’m looking at the level of the hype more than the level of the technology. Most of these things involved huge amounts of money being dumped into very dubious ventures, most of which has not been worth it,
and several of them absolutely involved a nearly-audible pop that destroyed companies.
Yeah, I was just reflecting on the terminology. I’d never really seen someone list out so many examples before and I was struck by how successful and pervasive these technologies are. It makes me think that bubble is not the right word other than perhaps in the case of dot com where there was a very dramatic, bursty implosion.
The typical S-shaped logistic curves of exponential processes seeking (and always eventually finding!) new limits. The hype is just the noise of accelerating money. If you were riding one of these up and then it sort of leveled off unexpectedly, you might experience that as a “pop”.
To me the distinguishing feature is the inflated expectations (such as Nvidia’s stock price tripling within a year, despite them not really changing much as a company), followed by backlash and disillusionment (often social/cultural, such as few people wanting to associate with cryptobros outside of their niche community). This is accompanied by vast amounts of investment money flooding into, and then out of, the portion of the industry in question, both of which self-reinforce the swing-y tendency.
Not for everyone and also not cheap, but many projectors come with something like Android on a compute stick that is just plugged into the HDMI port, so unplug it and it’s dumb.
Yeah, I’ve been eyeing a projector myself for a while now, but my wife is concerned about whether we’d be able to make the space dark enough for the image to be visible.
I use a monitor with my console for video games, same with watching TV with others. I think the only reason this wouldn’t work is if people just don’t use laptops or don’t like having to plug in? Or something idk
This one is interesting. I don’t have anything myself but most people I know have a Smart TV (I don’t get it tbh, an HDMI cord and a laptop seems infinitely better)
It’s the UX. Being able to watch a video with just your very same TV remote or a mobile phone is much, much better than plugging in your laptop with an HDMI cord. It’s the same reason dedicated video game consoles still exist even though there are devices like smartphones or computers that are technically just better. Now almost all TVs sold are Smart TVs, but even before, many people (like me) liked to buy TV boxes and TV dongles.
And that’s assuming the person owns a laptop at all, because the number of people who don’t use PCs outside of work is increasing.
I have a dumb TV with a smart dongle - a jailbroken FireStick running LineageOS TV. The UX is a lot better than I’d have connecting a laptop, generally. If I were sticking to my own media collection, the difference might not be that big, but e.g. Netflix limits the resolution available to browsers, especially on Linux, compared to the app.
massive innovation centers for new database technologies, etc.
Citation needed? So far I only know about them either “stealing” existing tech and offering it as a service, usually in a way inferior to self-hosting, usually lagging behind.
The other thing is taking whatever in-house DB they had and making it available. That was a one-time thing though, and since those largely predate the cloud I think it doesn’t make sense to call it innovation.
Yet another thing is a classic strategy of the big companies, which is taking small companies or university projects and turning them into products.
So I’d argue that innovations do end up in the cloud (duh!), but it’s rarely ever the cloud driving them.
Maybe the major thing would be around virtualization and related areas, but even here I can’t think of a particular one. All that stuff seems to stem largely from Xen, which again originated at a university.
As for bubbles: one could also argue that dotcom also still exists?
But I agree that “hype” is a better term.
I wonder how many of these are as large as they are because of the (unjustified part of the) hype they received. I mean promises that were never kept and expectations that were never met, but the investments (learning Java, writing Java, making things cloud-ready, making things depend on cloud tech, building blockchain know-how, investing in currencies, etc.) are the reason why they are still so big.
See Java. You learn that language at almost every university. All the companies learned you can get cheap labor straight from university. It’s not about the language but about the economy that was built around it. The same is true for many other hypes.
This is an okay stance to take & on the other hand I can agree that CSS isn’t hard & I don’t want to memorize a billion classname conventions, but what grinds my gears is when a tech lead or back-end team has mandated it on a front-end team that would prefer not to have that decision made for them—as can be said about most tools put on teams by folks not on those teams.
To me, if I want something that follows a system, the variables in Open Props cover what I need to have a consistent layer that a team can use—which is just CSS variables with a standardized naming scheme that other Open Props modules can use. It is lighter weight, doesn’t need a compile step to be lean, & lets you structure your CSS or your HTML as you want without classname soup.
It may not be hard to write, but it is certainly hard to maintain. CSS makes it SO EASY to make a mess. In no time you’ll be facing selector specificity hell. If you have a team with juniors, or just some backend folks trying to do some UI, that’s very common.
“But what about BEM?” I like BEM! But, again, it’s an extra step and another thing to learn (and you’re choosing not to deal with CSS specificity to avoid its pitfalls).
IME, the BEM components I wrote were more effective and portable the smaller they were. I ended up with things like text text--small text--italic, which were basically in-house versions of Tailwind (before I knew what it was).
You can use utility classes & Open Props names & still not use exclusively utility classes. No one has said Tailwind can’t be used for utilities when needed, but in practice I see almost all names go away. There is nothing to select on that works for testing, or scraping, or filter lists.
Having a system is difficult since you have to make it stringly-typed one way or another, but that doesn’t discount the semantics or considering the balance needed. Often the UI & its architecture are low-priority or an afterthought since matching the design as quickly as possible tends to trump anything resembling maintainability & the same can happen in any code base if no standards are put in place & spaghetti is allowed to pass review.
It really is just the same old tired arguments on both sides here tho. This isn’t the first time I have seen them & that post doesn’t really convince me given it has an agenda & some of the design choices seem intentionally obtuse without use of “modern” CSS from like the last 4–5 years.
but what grinds my gears is when a tech lead or back-end team has mandated it on a front-end team that would prefer to not have that decision made for them
Is this something that happened to you? Why would the back-end team decide on what technology the front-end team should use?
On multiple occasions I have seen a CTO, tech lead, or back-end team choose the stack for the front-end, either before it was even started or purely based on some proof-of-concept the back-end devs built, & they did not want the stack to change… just for it to be entirely rewritten/refactored.
It raises the question of whether the abstractions adopted by CSS are the right ones in the long term, as other solutions are more team-friendly and easier to reason about.
Separation of presentation and content makes a lot of sense for a document format, and has been widely successful in that segment (think LaTeX).
But for how the web is used today (think landing websites or webapps), the presentation is often part of the content. So the classic “Zen garden” model of CSS falls apart.
I’m in the same boat. I was appalled by it the first time I saw it.
But it fits my backend brain and makes building things so much easier. I like it in spite of myself. I’ve just never been able to bend my brain to automatically think in terms of the cascade, and reducing that to utilities makes it so much more workable for me, and lets me try things out faster. I’m excited about this release.
I am happy that you got a decent website up with Tailwind. I’m sad that you had a hard time with CSS and the conclusion you reached was that you were the one that sucked.
Honestly I sympathise a lot with hating the Debian packaging of your tool - this author is not the first and probably won’t be the last, and the way they package Rust is genuinely awful. Flaming is definitely counterproductive, but I wouldn’t want to support any of my code on Debian either, and would consider any bugs anyone runs into on Debian to be their own fault for doing that.
the way they [Debian] package Rust is genuinely awful
The Debian project’s goal is not to make Rust folks happy, rather it is to make Debian users happy. Perhaps the way they packaged Rust was the best that they could do under the constraints (which are enormous, given the existing infrastructure, user base, and cultural expectations)? I would personally show some humility when criticizing a project like Debian which stood the test of time like very few other open source projects.
On a more constructive note, can anyone summarize the difference between Debian and Fedora when it comes to packaging Rust? I don’t hear any complaints about Rust in Fedora, so they must have gotten it right?
Debian patches all Rust code to use common versions of libraries in order to keep the libraries in separate packages. The problem is that this usually means using versions of libraries that are older, buggier, and were not tested by the original developer (and the developers themselves are puzzled when they start receiving bug reports that are impossible to reproduce with their original code, where using such older libraries is impossible).
Yes, that was probably a mistake. They should have rejected any Rust application that has unstable (buggy, fast-churning, etc) dependencies as itself being too immature for inclusion into Debian. All of this is IMHO, I don’t have the complete picture.
Of course, if they did that, they would have received a lot of flack for not including the latest hot stuff, which is what bcachefs was until recently.
Yes, that was probably a mistake. They should have rejected any Rust application that has unstable
Just because libraries receive bug fixes doesn’t mean that they are “unstable”. Though some libraries are permanently unstable, like bindgen. If you’re only allowing one version of bindgen, you’re never, ever going to be able to ship Rust software that actually works correctly.
Like you say, the goal is to make Debian users happy, so I wonder, as more and more software gets written in Rust, will Debian users be happy to either have to live with Rust software that’s buggy and doesn’t work, or live without a larger and larger share of software?
Debian’s packaging policies are often annoying. I gave up helping Debian users who insisted on using the packaged versions of GNUstep. Debian required everything to be built with the system compiler. GCC supported Objective-C (modulo occasionally deciding that 100% broken Objective-C codegen was not a release blocker), but it was an ancient dialect of the language. Supporting some of the new features required ABI changes and so, if you compiled GNUstep (implementation of the Foundation and AppKit core standard libraries for Objective-C) with GCC, a load of stuff would not work well even if you compiled things that used them with clang. The way that they would fail was known and you could, if you were very careful, work around them. But Debian would not let the GNUstep package maintainers simply compile with clang and have things work as users expected because GNUstep could build with GCC (it just wasn’t a recommended configuration and came with a bunch of warnings).
Debian required everything to be built with the system compiler.
I think we both would agree this policy is there for a good reason. For example, if Debian allowed building with either GCC or Clang at maintainer’s choosing, sooner or later someone will want to build with libc++ instead of libstdc++. And now we have two sets of libraries that cannot be mixed in the same application. Funny enough, I am trying to figure out how to deal with the exact same problem but in Homebrew.
So the two plausible solutions to this problem seem to be either to stick to this policy or to start handing out exceptions on a case-by-case basis after carefully analyzing each case for potential fallout. I would venture a guess that the vast majority of Debian users don’t care about GNUstep. So Debian deciding to stick to this policy looks like a pretty sensible choice to me. It’s a tradeoff. As with all tradeoffs, someone will think the wrong one was made.
I think we both would agree this policy is there for a good reason. For example, if Debian allowed building with either GCC or Clang at maintainer’s choosing
I think it’s a fine policy for C and C++. It’s not a good policy for the other languages that GCC kind-of supports. There was no mechanism to define the default compiler for other languages.
It’s ok to struggle with solutions when you have constraints. That’s completely understandable. The issue with Debian is that those constraints are sometimes both self-inflicted and don’t actually make anyone’s life easier if taken to extreme. They really could have some exceptions.
It’s anecdotal, but this Debian user of more than two decades can tell you with certainty that “those constraints” do make his life easier.
I don’t want to have 50 versions of every Rust dependency installed on my machine nor do I want 50 copies statically linked into Rust applications that I may want to install. I am happy to leave 1G+ incremental updates to Windows and Mac OS users to enjoy.
The TL;DR is that you get a lot of that with C and C++ too… it’s just that, without access to proper dependency management, it comes in the form of meant-to-be-vendored header-only libraries that the Debian maintainers can’t split out, and bespoke re-implementations of things that can’t be deduplicated.
TL;DR: C++ libraries that use templates behave the same way Rust does… if you build something that’s all-templates as a dynamic library, you get an empty .so file.
Dynamic linking for generic/templated code without the kind of trade-offs that Swift incurs to achieve it is an unsolved problem.
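To make the analogy concrete, here is a minimal Rust sketch (the crate and function are invented for illustration): a generic function in a crate built as a dynamic library only gets machine code generated when a downstream crate instantiates it with a concrete type, and that instantiation lands in the caller’s binary, so the .so itself carries essentially nothing to link against, much like an all-templates C++ header library.

```rust
// lib.rs of a hypothetical crate built with crate-type = ["dylib"].
//
// Because `largest` is generic, no machine code for it is emitted into this
// library; each caller monomorphizes its own copy (e.g. largest::<i32>) into
// the caller's own binary. A library consisting only of items like this
// therefore ships essentially no linkable code in the .so itself.
pub fn largest<T: PartialOrd>(items: &[T]) -> Option<&T> {
    items.iter().fold(None, |best, x| match best {
        Some(b) if b >= x => Some(b),
        _ => Some(x),
    })
}
```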
I don’t want to have 50 versions of every Rust dependency installed on my machine nor do I want 50 copies statically linked into Rust applications that I may want to install. I am happy to leave 1G+ incremental updates to Windows and Mac OS users to enjoy.
In particular, either way, Rust dependencies are statically linked and Debian packaging does not support incremental updates on the same package, much less across packages. So you’re paying almost the exact same bandwidth and storage cost regardless of the number of Rust dependency versions in play.
That’s where I’m at with Debian as well. I’ve been using it for a long time but I’m just fed up with these stories where Debian wants to do something weird and completely contrary to the developer’s intention (the recent KeePassXC thing for example). The attitude just rubs me the wrong way and feels very old fashioned. All power to those who like to do things that way, but I’m looking elsewhere these days.
Agreed. The Debian approach was brilliant in the 90s, and for a long time after .. but the world has just changed too much. There’s just too much damned software that’s changing too fast for the traditional approach, I think. I’m slowly moving over to NixOS, which definitely has its own problems and can be very wasteful of disk space, but I’m already finding it less aggravating in a lot of ways.
Me too on all of that, but I imagine Debian is a lot better than NixOS for systems without lots of spare storage, which is still most computers if you count the ones in cars and whatever.
Having said that, I’m 100% sure that Nix or something like it is going to take over literally everything (modulo energy crisis etc.)
where Debian wants to do something weird and completely contrary to the developer’s intention
Keep in mind the scale of what Debian has to do… they have to manage the intentions of tens of thousands of diverse developers, and they do nearly all of it on volunteer time.
Although I have Things To Say about some of the tools they use, I am overall quite happy that Debian is deliberately conservative with their policies in order to deliver the most stable OS they can.
Debian, and similar traditional 90s-style distros, chose a model that doesn’t scale well and requires massive labor - O(all available software) - to produce the next version of the system. Their model is also hostile to backwards compatibility (using software built last year on this year’s system) or to using multiple versions of the same software. These systems fit together in such a brittle manner that changing a subsystem or a few decisions requires standing up a whole separate distribution, which led to the proliferation of slightly different but quite distinct Linux systems. The ecosystem evolved in a way that makes it so challenging to distribute software that the easiest thing to do is to package software with an entire distro and ship that around. It’s ironic that Debian’s fight against static linking and vendored dependencies means most developers targeting Linux prefer to statically link the entire operating system into their software.
To me the amount of volunteer hours spent on projects like Debian is like trying to feed thirsty people by having each volunteer walk to the reservoir, scoop up water in a cup, and then walk that water to the thirsty person somewhere in town. Commendable effort, but the world would be much better if we built an aqueduct and plumbing instead with that volunteer time, to completely eliminate the need for toil in perpetuity.
From your last paragraph, it sounds like you are concerned with merely shipping the OS in the most efficient way possible… Debian, etc are concerned with releasing a stable OS that works out of the box and doesn’t surprise their many millions of users. Yes, maintaining a versioned distro like Debian is a lot of work. But the results are worth it, otherwise thousands of people wouldn’t volunteer their time toward making it happen.
It’s not clear to me what you are proposing as an alternative.
If you are advocating for a rolling-release distro like Arch or Gentoo or NixOS, those have their issues too. Namely, the sheer amount of constant package churn and never being quite sure that the exact combination of package versions you have just installed is known to be compatible. On Debian, I only have to worry about my workflow or applications breaking every two years. On a rolling-release distro, I have to worry about it every time I run the update command. Often my worries turned out to be justified as I spent a few hours figuring out how to fix my system. I was a Gentoo/Arch user for a long time; I am VERY familiar with this. Maybe this isn’t everyone’s experience but it certainly was mine.
There are versioned distros that update at a higher cadence, like Alpine.
Honestly, if you ever have to worry about your workflow breaking, something is wrong, regardless of whether it’s every week or once every two years.
Namely the sheer amount of constant package churn and never being quite sure that the exact combination of package versions that you have just installed are known to be compatible with each other.
Maybe if we applied the Rust model to all software, that wouldn’t be something to worry about. (And I mean the actual Rust model, not the way Debian does Rust)
Library developers and distribution packagers have different perspectives on the user needs. When they disagree, developers are quick to blame the packagers (“my stuff works fine on all other distributions, don’t be annoying with your own rules!”) and packagers are quick to blame the developers (“all other packages work fine with these rules, don’t be annoying with your unruly development practices!”). Different users have different needs and they sometimes align more with one side or the other.
My personal rule of thumb:
1. For development environments (I want to hack on stuff), use the language’s package managers and forget about the distribution package manager and its rules.
2. When I want to install an application as an end-user, prefer the distribution package manager.
Sometimes it is useful to make exceptions to preference (2) (eg. maybe your web browser you want to manage yourself and let it auto-update, etc.; maybe jj is not packaged, etc.), but each exception comes with a convenience cost unless you are very disciplined.
This does not avoid all troubles because packaging an end-user application (2) still requires packaging its libraries, which are written by people who prefer perspective (1), and this generates the sort of complaints we can read around on the web. But mostly it’s fine, and getting my distribution to manage and update my applications and their dependencies brings a large convenience bonus.
Snap, Flatpak, etc. are trying to replace package managers in a way that is closer to how upstream developers test and release their software. This is clearly convenient for proprietary applications, but I believe the jury is still out for other applications.
Same, though I don’t hate Debian. It’s mostly the users of Debian Stable who report bugs to upstream that frustrate me. Like, no, your version is 1-2 years old, we do not support it anymore.
I wish Debian users reported the bugs to Debian, not upstream. Debian devs can make better calls whether to report bugs to upstream or patch them out themselves.
I wish Debian users reported the bugs to Debian, not upstream.
FWIW, the Debian project always instructs its users to report bugs in software that it packages to Debian, never to upstream. I, for example, always do so.
Maybe they need to communicate that better. (And they also need to communicate that if you like playing with experimental filesystems, Debian is probably not the right distro for you!)
I do appreciate that they’re trying to play nice with the rest of the ecosystem, though. Knowing that they’re willing to take on bug reports reframed things for me—Debian being weird and contrary luddites, versus Debian having a specific LTS goal that (quixotic or not) they’re making a good faith effort to pursue. It’s not my goal, personally, but I can root for them from the sidelines.
My plan is to put it in big bold letters in my bug-reporting template that you must reproduce bugs using the officially supported builds before reporting them, including a checkbox confirming that they have, and then to add some code to my --version display which appends something like -system to the end of the version string if installed under /bin, /usr/bin, /sbin, or /usr/sbin. (/usr/local/bin and /usr/local/sbin are fine.)
If a -system version turns up, then I’ll close the bug with a “Reopen with proof that it’s not a distro build” message and, if I catch a distro patching that out, then I’ll look into using either trademark law or a license change to force an Iceweasel situation.
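For what it’s worth, a minimal Rust sketch of that idea might look like this (the version constant and the exact directory list are placeholders for illustration, not the actual tool’s code):

```rust
use std::env;
use std::path::Path;

// Placeholder version string; a real tool would likely use env!("CARGO_PKG_VERSION").
const VERSION: &str = "1.2.3";

/// Append "-system" to the reported version when the running binary lives in
/// a distro-managed directory. Exact-directory comparison means that
/// /usr/local/bin and /usr/local/sbin do not match.
fn version_string() -> String {
    const SYSTEM_DIRS: &[&str] = &["/bin", "/usr/bin", "/sbin", "/usr/sbin"];

    let is_system = env::current_exe()
        .ok()
        .and_then(|exe| exe.parent().map(Path::to_path_buf))
        .map(|dir| SYSTEM_DIRS.iter().any(|d| dir.as_path() == Path::new(*d)))
        .unwrap_or(false);

    if is_system {
        format!("{VERSION}-system")
    } else {
        VERSION.to_string()
    }
}

fn main() {
    // e.g. prints "1.2.3-system" when run from /usr/bin, "1.2.3" otherwise.
    println!("{}", version_string());
}
```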
Can you share some links which provide context on how Debian packages Rust and what issues that causes specifically for Kent? I haven’t encountered this drama before
Oh wow, this is some wild editorialization from Phoronix:
It was simple at first when it was simple C code but since the Bcachefs tools transitioned to Rust, it’s become an unmaintainable mess for stable-minded distribution vendors
Anyway, thanks for the context.
It sounds weird to insist on having system packages for every dependency in a language where everything’s statically linked, doesn’t it? Does Debian hack Rust to do dynamic linking against system libraries?
It sounds weird to insist on having system packages for every dependency in a language where everything’s statically linked, doesn’t it?
It’s very weird and essentially puts a ton more work on package maintainers since they either need to ship multiple versions of 1 dependency in separate packages or update packages to use a newer dependency.
Does Debian hack Rust to do dynamic linking against system libraries?
Debian (and Fedora) package Rust libraries by moving the source code into a system folder, and most of the Debian Rust libs aren’t even marked as architecture-independent, so you get the same tarball duplicated for every supported architecture.
Does Debian hack Rust to do dynamic linking against system libraries?
I’m jaded on Debian so I’ll put my troll hat on and say — no, of course not. The dependencies situation is nothing but a power play / bullying on the part of the developer team. They can write C-with-classes and mailing-list messages, not Rust. See also the Objective-C situation in the neighbouring comments.
Yes, they probably wanted to use deciseconds as they give more flexibility and allow expressing quantities that would not be fine-grained enough if you were using whole seconds.
If using a “finer-grained than seconds” time unit, milliseconds or nanoseconds are the common choices. Note that DHH himself in the quoted tweet says “100ms”, not “1 decisecond”. If you noticed that a feature configured with a 1 was waiting for 1ms, it’d be easier to determine the causal link.
The original Raspberry Pi Model B with 256MiB of RAM cost $35 in 2012. Adjusted for inflation, that’s $48. Today, the Raspberry Pi 5 with 2GiB costs $50. You can also buy a Raspberry Pi 4 Model B with 1GiB of RAM for $35, which in 2012 dollars would’ve been $25.
So: the exact same cost, or cheaper, vastly more capable at things the original really couldn’t do at all (like be a desktop), while still being just as good at what it was originally designed for.
Do I think you should buy one? Honestly probably not. 98% of people don’t use the GPIO, and if all you need is a low power server, N100 based products have you covered. But really, RPis haven’t gotten super expensive or drifted from their original purpose much at all, other products have just gotten better and cheaper at the main stuff tech professionals and tech hobbyists have used RPis for.
Myself and basically everyone I know who has one has it because it’s a linux box with GPIO/I2C/SPI. That’s what it’s for. Maybe my social circle is abnormally hardware focused, but 98% is a pretty decisive figure. I wonder if there’s some way to actually through like raspbian popcon or something
Maybe my social circle is abnormally hardware focused,
I think it must be.
I have at least half a dozen Pis and have used them for 12-13 years. I have never used the GPIO for anything and would be happy with a model without one. Talking to others, most people I know don’t use them either.
98% is certainly a number I pulled from my ass, but I don’t know anyone who uses the GPIOs, they all use RPis as low power servers. When RPis got scarce from the pandemic supply shock, almost all discussion online that I saw about what to buy instead centered around MiniPCs that don’t have GPIO. Occasionally I’d see someone point out that they didn’t work as RPi replacement due to the lack of GPIO, but most people didn’t seem to care.
Oops, I accidentally a word, was meant to be “some way to actually check…”
popcon is the Debian project that measures package usage: https://popcon.debian.org/, raspbian is the (i think official?) raspi distro, based on Debian. If using GPIO required installing some package, and raspbian had some equivalent of popcon (idr if it does), it’d be possible to tell roughly how many people use it.
So I’m assuming the question is something like “Is there an equivalent for Raspbian that could deliver useful statistics about people using software that uses GPIOs”. My guess is that the answer is no, because people would more often download such software outside package manager, write their own using libraries obtained elsewhere, …
Yeah, I think this is the reason to get one though. So I have one of the old raspberry pi 2s that I did get for $35 many years ago, and it is in some ways disappointing - it is sitting in a junk box that I almost never use, since it is fairly disappointing as a general purpose computer, not even especially good as an X terminal! (which is why I’m both interested and disappointed in this thing - the new ones might be better computers, but once the price goes up, it has to compete with everything else for that space.)
But it has been useful to me for two things: one is just experimenting with the GPIO pins. I never made anything useful for myself with it… but knowing how they worked transferred well to work when a client said they had this other ARM device with GPIO pins and also had these lights and buzzers they wanted programmed on it for their business. Like, that’s not that hard and I could have probably figured it out on the job and stayed on schedule anyway, but having that preexisting experience made it a simple task to do what they wanted. So the education aspect worked for me; not lifechanging or anything, but some value.
The other thing I did with that was make a little noise maker and sound monitor for the baby’s room, a small program on a mini computer tucked away on a shelf. Of course, you can buy that kind of thing commercially for cheap anyway, and tons of things can do audio so that’s not unique like gpio (kinda) is, but still, recycling this thing I already had to do the job was kinda cool.
So I feel it was worth the $35, even if it is sitting in a box of random junk right now. Higher price point though feels different, though you do make a fair point they actually do provide a similar thing after inflation nowadays. Oh well, probably not worth getting another one anyway, even if a stronger machine, i don’t actually need it for anything.
EDIT: I am trying to clarify this. RiscOS is not a Linux, not a Unix, not even in the entire family tree of C-based OSes; it is considerably more different from any C-based OS than Windows is different from Unix.
But RO is a multitasking GUI OS that can handle IPv6, USB and so on, with a choice of languages, editors, and apps, and it runs well on a decade-old Pi with 256 or 512MB RAM. It was originally built for an 8MHz ARM2 with 512 kB of RAM, although this is clearly not that version.
With my journo hat on, talking to RasPi users, my impression – and I am still amazed by it – is that most people never replace the default OS. They should. There are smaller Linuxes for it than Raspbian (e.g. Alpine), and multiple non-Linux options. The Pi can run more different OSes than any other ARM SBC.
Very much so. It’s FOSS now, it runs on Pi models up to the Pi 4 and 400 (plus various other Arm SBCs), and in the latest release it gained support for the Pi’s built-in wifi chip.
That’s amazing. I thought that was all long dead. I have happy memories of the Archimedes (including many hours playing Lander but I’m pretty sure I did some more productive activities too…)
It’s strange to think that I was using a beautiful graphical desktop operating system in the ’90s, only to then spend most of my professional career in the 21st century using a VT100 emulator.
So tbh I didn’t know there still was a $35 model. Every ad I’ve seen for the Raspberry Pi since covid was like $80 or more, including this one here, hence my original comment. But I’m glad to be corrected that they still offer one!
I bought an $80 Pi + case + charger + microSD + weird HDMI cable, never used the GPIO, but I use it as a little home server for things like Pi-hole, and it’s so good for that. The problem is you basically jump up 3-5x the price once you start looking at the next class of hardware like the Intel NUCs and such. So even in this $100-150 range it’s still pretty amazing.
Edit: Looks like Intel’s N100 is probably a good middle ground between the two these days.
right, that’s the thing, it’s not 3x with those. a cheap N100 mini PC including an SSD can be <$200 too. (I run a Pi I had myself still for such things, but the more expensive RPis are not the “no-brainer, almost no alternative” choice anymore)
I fondly remember using the original B+ as a desktop in 2012. Actually laptop. It was fine for editing and programming, but scrolling in Firefox was unusably laggy. I had to use my phone to read documentation. You can blame people like me for wanting SBCs with more and more ram ever since.
What’s wrong with a considerably more expensive SBC from a competitor then? Simple: only the RPi has had graphics drivers, let alone HDMI compatibility, to speak of. I could give one honorable mention to the Nitrogen6x, which you could buy with a working portable screen. But here we are over a decade later, and the RPi has not really had any competition for all that it is.
There are probably others I haven’t tried, but my experience of the first decade after the RPi is that the competition is catching up very slowly.
The competition, as in other Arm SBCs, heavily agreed. They all seem to have a lot of drawbacks compared to Raspberry Pis. But the competition, as in x86 mini PCs? I think they’re basically on par or strictly better for many use cases. Faster GPUs, faster CPUs, more IO, not quite as low power but still pretty low power, and basically the same price as the RPi models with more RAM, if you need to buy a power supply and case for your RPi. And that price might include decent enough SSD storage too!
I’m looking into Intel N-series CPUs now, but it seems like the N100 and N200 only have default clock speeds at 100MHz vs RPI5’s 2.4GHz, and the Pi5 seems like it consumes about half the idle power. At a glance, the most compelling thing about the N-series chips is the hardware transcoding, assuming it Just Works. I mostly just use my RPIs for home server type workloads; I care about power consumption but not GPIO–should I be looking at N-series computers for these kinds of workloads? Why/not?
Also why is this post hidden/marked as spam? This seems eminently interesting and on topic for this forum?
The N100 does not have a “default clock” (whatever that exactly means) of 100MHz either. (And given that it’s an entirely different CPU, comparing clock speeds as raw numbers is pretty pointless anyway.)
See the other comments: it almost surely doesn’t, at least not in actual products, and either way focusing on idle clock speed tells you nothing about performance.
Everything I’ve found on the Internet suggests the N100 has a clock speed of 100MHz. For example. I may be ignorant, but I have a hard time imagining a processor operating at 100MHz (i.e., not TurboBoosting) is going to outperform another operating at 2.4GHz even if they are different chips.
You’re misunderstanding something. There is no such thing as a “default clock”; all processors, both the N100 and the RPi’s BCM2712 included, do not have a fixed clock. The clock speed of the CPU will vary with the workload it is subjected to, so both the N100 and the RPi will clock down as low as they can when idle; this is the 100MHz you’re seeing for the N100. Both CPUs will also clock up with workload; the N100 can clock all the way up to 3.4 GHz on one core while the BCM2712 can to 2.4 GHz. In realistic use cases the N100 isn’t sitting at 100MHz, ever. I can guarantee you my N100 mini PC would be good for nothing if that were the case.
Either way, as the GP says, clock speeds don’t matter for much. You don’t care about what the clock speed is, you care about how fast the CPU can get things done or for how much power. You can compare online benchmarks for that: here’s a comparison on Geekbench of an N100 PC and the Rpi5, the N100 is about 40% faster for a lower TDP. Better yet, get both and measure your workload. Personally, I’d say there’s no reason to get an RPi5 over an N100 unless you need the GPIO.
(I’m fairly sure the “100MHz” number is just people scraping Intel’s product database and using a stupid default because there is no number in there for whatever reason. Looking at a few more useful results shows 800MHz set on actual products using it, which makes more sense. Or maybe it can indeed be set to go down to 100MHz theoretically; it’s not entirely impossible.)
If it has work to do, it’s not running at base frequency. The same way the RPi5 will not be running at its base frequency of (if a quick search is to be trusted) 1.2/1.4 GHz when it has work to do, but rather speed up. In both cases, the actual frequency under long-term load will depend on the power envelope set (I’m assuming with the N100 that’ll almost always be 6W) and whether the cooling system can match that; for N100 mini PCs I’m seeing numbers of 2.9 GHz quoted for all cores being under load. With modern chips, the frequency they run at when they have nothing/almost nothing to do is almost meaningless. So even if we ignore that these are two entirely different architectures, you should be looking at measured performance instead the entire time.
All Raspberry Pi models (except the Raspberry Pi 2 B+) are still available without a deprecation notice. You don’t need a Pi 5 with 16GB, the most expensive one: you can choose whatever fits you best.
ngl I don’t really understand why the AUR takes such a hard-line stance against like WSL2-specific (especially since you can install plain Arch in WSL2 via the official bootstrap tar.zst releases) and now ARM-specific packages. it’s… entirely user maintained. it’s not like it’s extra workload on Arch maintainers, I think?
feel free to correct me or explain it, I’m just confused.
I guess they’re trying to play it extra safe until the new rules for new arches land, which, as discussed in an RFC, will change the Arch stance on architectures other than x86-64. In the end, even if it’s a very small cost (economic and time-wise), you’re using the resources of a distro for something that is not supported by them. Arch Linux ARM is a different project right now, just like Ubuntu and Debian are different projects, although one is based on the other.
Since the cited rejected examples are all software that doesn’t make sense to install on amd64, not really? I think your argument would more support a position of “if this software can be built for amd64, you need to support that”.
8th is a commercial Forth-like with native GUI support etc. built in, something few languages have, which seems extremely cool. I discovered it while reading about Gambas. Currently, I only know of 4 languages with built-in cross-platform GUI support: Racket, Free Pascal (Lazarus), Gambas, and 8th, so it seems worthy of sharing.
I’ll need to try this! Previously, my ISP didn’t offer IPv6, but I moved and the new ISP does support IPv6 (at least in theory, I haven’t experimented a lot with it yet!)
The slides do not seem to contain a performance evaluation of the work. How does it compare to the non-JIT interpreter and to other Prolog implementations?
If you want to tell other people about this work, I think it would be helpful to start with a summary of the work and its current state to manage expectations. Maybe have a website with a short summary of this, that also points at your slides for full details. I looked at your slides to figure out what you had done, but there is basically no context and no status information at the beginning or at the end, so I had no idea what the completion status was.
We plan to sunset the default REPL implementations in the Kotlin compiler […] We will continue to promote the Kotlin Notebook plugin and IDE Scratch files as solutions for interactive Kotlin development.
This is the kind of thing that makes me completely ignore Kotlin, even though it is a very nice language. It’s completely hamstrung by JetBrains’ incentive to sell an IDE for it.
I think there’s also a chicken-and-egg problem there. Sometimes they release stuff that doesn’t require the IDE, like this REPL was, and how many people used it? I do Kotlin backend development for a living and I think it is very underappreciated. But many people still think it is only useful for Android development. JetBrains libraries like Exposed or Ktor are very nice too, but even in backend we tend to prefer Java-based libs instead of the pure Kotlin ones, which is a shame.
Following that logic, Kotlin should sunset any use outside of Android then, since how many people are using that? I’m not saying people aren’t using the REPL (I sure as hell haven’t in the time I used Kotlin), but that there is a tendency in JetBrains, as stewards of the language, to prioritize features which sell IntelliJ licenses and sunset features which don’t. You cannot tell me with a straight face a notebook plugin and IDE scratch files are alternatives to a REPL; the fact that JetBrains does shows that they don’t care about you using Kotlin if it doesn’t mean they can sell you an IntelliJ license. That’s the biggest thing holding Kotlin back. Even Apple doesn’t do that with Swift.
No, what I’m saying is (and this is a personal feeling, very subjective) that JetBrains has been trying to create a good Kotlin ecosystem, independent of the Java one, but the company isn’t big enough to push all the projects they start and the community isn’t there to continue them. So they create nice stuff, but with very few users it doesn’t consolidate; in the end, some people don’t even bother to join the community because the ecosystem isn’t different enough from the Java one, and Java has improved too, so the ecosystem doesn’t get improved further by external developers. You say JetBrains makes stuff to sell IDEs. It might be true, even though both IntelliJ and Android Studio have free versions. But the REPL was a feature developed by JetBrains too. They also made the Ki Shell, which according to the announcement will also be deprecated.
BTW, Java didn’t have a REPL for decades and it wasn’t a problem for people
JetBrains has been trying to create a good Kotlin ecosystem, independent of the Java one, but the company isn’t big enough to push all the projects they start and the community isn’t there to continue them.
That’s what I mean by Kotlin sunsetting anything outside of Android. Nothing else has the community to continue it fully, except maybe Kotlin for backend with Spring, but even that is a tiny portion and I’m not even sure it would survive if JetBrains died off. If that’s what JetBrains is concerned about, they wouldn’t be plugging notebooks as an alternative; I doubt notebooks have more of a community supporting them.
The next thing is also fairly straightforward: we expect Kotlin to drive the sales of IntelliJ IDEA. […] And while the development tools for Kotlin itself are going to be free and open-source, the support for the enterprise development frameworks and tools will remain part of IntelliJ IDEA Ultimate, the commercial version of the IDE.
Emphasis mine. They openly resist making a language server for Kotlin, explicitly because it would eat into their bottom line. So yes, while they made a REPL to begin with, the fact that they’re sunsetting it in favour of plugins for the IDE they sell shows that they don’t want you writing Kotlin if it doesn’t happen in their environment. Even Microsoft - who, as @kameko pointed out in a sibling comment, try very hard to make you use Visual Studio for .NET - at least let you use a somewhat gimped LSP for it.
BTW, Java didn’t have a REPL for decades and it wasn’t a problem for people
Java was never billed as a scripting language, which JetBrains is clearly trying to make happen for Kotlin.
It’s also worth mentioning that in the past there was another language based on Codd’s ideas: QUEL, which in fact was the original query language of what is now known as PostgreSQL.
We’ve been using CosmosDB for a new feature; it’s a database that tries to be one-size-fits-all with its multiple APIs. So we tried to use it instead of MongoDB Atlas, and it worked with the same client libraries, but it’s not really MongoDB. The pricing side of CosmosDB is complex too: you have the vCore-based model (similar to Atlas) and the RU-based one, which may also have some provisioned throughput.
I think many people wouldn’t consider themselves European because it’s a very broad term. Many people still don’t know the difference between Europe, the European Union, the Eurozone, the Council of Europe and Eurovision! Regarding (Br|L|.*)exit, it’s true it’s not just conservatives that want it; they were just the main force behind Brexit. The EU is a trade bloc after all, one that evolved to harmonize more and more law between countries so trade could be made more efficient. But there’s no real shared culture behind it. We usually speak together in English, which is a language with very few native speakers in the EU nowadays. There are no newspapers, radio stations, or TV channels that are cross-border… there are no memes that only an EU person would understand (but within a country, there are). Most memes that are widely known in the EU are known because they’re also popular in the US. Even in European elections the political parties are the local political parties; they run under that name and then form alliances to create the groups in the parliament.
Quite honestly, to feel more European we need to move the population around. I would love to see a mandatory one-year Erasmus/social service for all citizens in a randomly selected location of the union.
There are no newspapers, radio stations, or TV channels that are cross-border…
Although I mostly agree with your point of view, this is actually not true. Look at Arte, a TV station conceived to create and broadcast mostly the same programming for German and French audiences.
surprising that the Yanks expect anyone to be or feel “European”
It may help explain their mistake to consider that the people of the USA “[are] or feel” “American” even though the USA is very close in size to Europe (and more than twice as large as the EU).
From reading history, I think the EU is akin to the United States before the Civil War (or even before WW2) where the primary allegiance was to the constituent state, not the federal government.
The first interesting bit mentions xkb options. I appreciate that a lot, as a user of a few extremely esoteric ones that luckily someone supports.
I teach in a computer science program in the U.S. and I follow something like that. Making the entire grade be the final project or final exam would be too far from the norm to get away with without complaints, I think, but it is possible to do a project-based class that doesn’t grade attendance or have quizzes. A grading scheme along the lines of: 20% homeworks (mostly a completion grade), 20% project checkpoint #1, 20% project checkpoint #2, 40% final project (w/ deliverables and presentation). I have also taught a more exam-based class where the scheme is: 20% homeworks, 25% midterm exam 1, 25% midterm exam 2, 30% final exam. That one obviously requires coming to class on the exam days, but you could in theory come only those three days and still get an A.
This varies a lot by university though, both in terms of formal policies and student/faculty culture.
Adding the Italian perspective: when I graduated in physics, well after Bologna was already in place, some of the courses had 2 intermediate written exams. If you passed both of them you would not have to do the final written exam. However, everyone had to do the final oral exam, which covered the whole program of the course. In my experience, less than half of the class would pass both intermediates, so the “final” written exams were quite crowded. No one gave an absolute crap if you were in class or not. You could literally show up just for the exam and you would get a fair examination. I know it firsthand because as a working student I did it fairly often. The only exception would have been lab courses, which is very reasonable.

Also, in Italy tuition is quite affordable and if you want to skip a semester, or even a year, it’s just up to you. There are no repercussions, and nowadays you can get a discount on the tuition if you declare beforehand that this year you plan to take fewer credits. They won’t kick you out if you don’t complete courses, as long as you pay. You are considered an adult; it’s up to you what you want to do with your life.

On the “not so bright side”, no one gives an absolute crap about exam failure rates and professors are completely unaccountable for that. Take it or leave it, I guess…
You say in the article that “There really tends to be only one at a time though”, but I think that’s not true; the industry is big enough to support multiple technologies at the same time. Which makes me question what the scope of this “tech” is: consumer technology, computery stuff, anything related to software? None of those definitions of tech satisfies all the examples you mentioned.
This is because I want to share with you a real hype cycle that ended very badly at the time, but was mostly confined to specific companies: solar panels. In 2008 you could find many solar panel companies in Europe. It was seen as the industry of the future. In my city, solar panel companies sponsored the local sports clubs. However, it was unsustainable: there was not enough demand at that time, solar panels were not that efficient, and electricity was still kind of cheap. There were many layoffs when these companies closed. Later they were replaced by Chinese makers, which were cheaper.
Now it’s a mature technology; each year we set a new record in solar panel production, but more than 80% of panels are made in China, and China wants to regulate production (a kind of solar OPEC) to control the market and make it sustainable long term.
Very cogent points! These things happen all over and all the time; I could tell similar stories about the various swings in oil prices being connected to expectations vs reality of new technologies and prospects (the Marcellus Shale gas fields being the one I know most personally). This list in particular is mostly the things that reach me through the media/news I consume, so, tech and programmer-y stuff. In particular I see it often driven by the Silicon Valley startup and funding culture, which has insane amounts of money and a propensity to spend it in flashy ways that create lots of… well, hype, even among people who aren’t practitioners in the field.
So I spend a year having to hear about bitcoin on the radio while driving to work, roll my eyes at the stupidity of it all and say “it will end mostly-badly because these people aren’t usually creating anything of new value”. Lo and behold, it ends mostly-badly and the successful bits that are genuinely good ideas fade into the background of existence, and for a year or two the tech industry is kinda fallow and nobody is really being all that creative… and suddenly I start hearing about the big AI boom and how it’s going to change the world! Just like blockchain, Big Data, IoT, etc etc all did. i.e., lots of money will be wasted, lots of money will be siphoned from the have-nots to the haves, using the internet will get slightly more resource-heavy and painful, and we’ll grow a new and exciting sub-field for security researchers.
This is interesting. Bubble implies a pop, yes?
I mean, things were overhyped and ridiculous, but can anyone say that the internet isn’t at the core of the economy?
Still one of the most widely used languages, powering many multi-billion dollar companies.
What we now just think of as “the web”.
Again, powering multi-billion dollar workloads, a major economic factor for top tech companies, massive innovation centers for new database technologies, etc.
Still massively important, much to everyone’s regret.
This one is interesting. I don’t have anything myself but most people I know have a Smart TV (I don’t get it tbh, an HDMI cord and a laptop seems infinitely better)
Still a thing, right? I mean, more than ever, probably.
Weirdly still a thing, but I see this as finally being relegated to what it was primarily good for (with regards to crypto) - crime.
Do we expect this “bubble” to “pop” like the others? If so, I expect AI to be a massive part of the industry in 20 years. No question, things ebb and flow, and some of that ebbing and flowing is extremely dramatic (dot com), but in all of these cases the technology has survived and in almost every case thrived.
All of these things produced real things that are still useful, more or less, but also were massively and absolutely overhyped. I’m looking at the level of the hype more than the level of the technology. Most of these things involved huge amounts of money being dumped into very dubious ventures, most of which has not been worth it, and several of them absolutely involved a nearly-audible pop that destroyed companies.
Yeah, I was just reflecting on the terminology. I’d never really seen someone list out so many examples before and I was struck by how successful and pervasive these technologies are. It makes me think that bubble is not the right word other than perhaps in the case of dot com where there was a very dramatic, bursty implosion.
The typical S-shaped logistic curves of exponential processes seeking (and always eventually finding!) new limits. The hype is just the noise of accelerating money. If you were riding one of these up and then it sort of leveled off unexpectedly, you might experience that as a “pop”.
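For reference, the standard logistic form behind that S-shape (generic parameters, not tied to any particular hype cycle):

f(t) = \frac{L}{1 + e^{-k(t - t_0)}}

where L is the limit eventually being found, k the growth rate, and t_0 the midpoint of the ramp; the hype tracks the steep middle of the curve, and the “pop” is what it feels like when growth flattens out toward L.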
See https://en.wikipedia.org/wiki/Gartner_hype_cycle
To me the distinguishing feature is the inflated expectations (such as Nvidia’s stock price tripling within a year, despite them not really changing much as a company), followed by backlash and disillusionment (often social/cultural, such as few people wanting to associate with cryptobros outside of their niche community). This is accompanied by vast amounts of investment money flooding into, and then out of, the portion of the industry in question, both of which self-reinforce the swing-y tendency.
“Dumb TVs” are nigh-impossible to find, and significantly more expensive. Even if you don’t use the “Smart” features, they’ll be present (and spying).
Not for everyone and also not cheap, but many projectors come with something like Android on a compute stick that is just plugged into the HDMI port, so unplug it and it’s dumb.
Yeah, I’ve been eyeing a projector myself for a while now, but my wife is concerned about whether we’d be able to make the space dark enough for the image to be visible.
That’s assuming you make the mistake of connecting it to your network.
At least for now… once mobile data becomes cheap enough to get paid for using stolen personal data we are SO fucked.
Why have a TV though? Sports, maybe?
Multiplayer video games, and watching TV (not necessarily sports) with other people.
I use a monitor with my console for video games, same with watching TV with others. I think the only reason this wouldn’t work is if people just don’t use laptops or don’t like having to plug in? Or something idk
It’s the UX. Being able to watch a video with just your very same TV remote or a mobile phone is much, much better than plugging in your laptop with an HDMI cord. It’s the same reason dedicated video game consoles still exist even though devices like smartphones or computers are technically just better. Now almost all TVs sold are Smart TVs, but even before, many people (like me) liked to buy TV boxes and TV dongles.
And that’s assuming the person owns a laptop, because the number of people who don’t use PCs outside of work is increasing.
I have a dumb TV with a smart dongle - a jailbroken FireStick running LineageOS TV. The UX is a lot better than I’d have connecting a laptop, generally. If I were sticking to my own media collection, the difference might not be that big, but e.g. Netflix limits the resolution available to browsers, especially on Linux, compared to the app.
Citation needed? So far I only know about them either “stealing” existing tech as a service, usually in a way inferior to self-hosting, usually lagging behind.
The other thing is taking whatever in-house DB they had and making it available. That was a one-time thing though, and since those largely predate the cloud I think it doesn’t make sense to call it innovation.
Yet another thing is a classic strategy of the big companies, which is taking small companies or university projects and turning them into products.
So I’d argue that innovations do end up in the cloud (duh!) but the cloud is rarely ever driving them.
Maybe the major thing is around virtualization and related areas, but even here I can’t think of a particular one. All that stuff seems to stem largely from Xen, which again originated at a university.
As for bubbles: one could also argue that dotcom also still exists?
But I agree that hype is a better term.
I wonder how many of these are as large as they are because of the (unjustified part of the) hype they received. I mean, the promises were never kept and the expectations were never met, but the investments (learning Java, writing Java, making things cloud ready, making things depend on cloud tech, building blockchain know-how, investing in currencies, etc) are the reason why they are still so big.
See Java. You learn that language at almost every university. All the companies learned you can get cheap labor right out of university. It’s not about the language but about the economy that was built around it. The same is true for many other hypes.
People are rightly hard on tailwind.
And yet… for someone who sucks at front end responsive design I can’t deny it didn’t take me long to get a decent website up and running.
IME, the critics of Tailwind CSS are often CSS experts. So to them, “CSS isn’t hard”. The Tailwind abstractions just seem an extra step to them.
To me, though, they’re very useful and portable.
This is an okay stance to take & on the other end I can agree that CSS isn’t hard & don’t want to memorize a billion classname conventions, but what grinds my gears is when a tech lead or back-end team has mandated it on a front-end team that would prefer to not have that decision made for them—as can be said about most tools put on teams by folks not on those teams.
To me, if I want something that follows a system, the variables in Open Props covers what I need to have a consistent layer that a team can use—which is just CSS variables with a standardlized naming scheme other Open Prop modules can use. It is lighter weight, doesn’t need a compile step to be lean, & lets you structure your CSS or your HTML as you want without classname soup.
It may not be hard to write, but it is certainly hard to maintain. CSS makes it SO EASY to make a mess. In no time you’ll be facing selector specificity hell. If you have a team with juniors or just some backend folks trying to do some UI, that’s very common.
“But what about BEM?”. I like BEM! But, again, it’s an extra step and another thing to learn (and you’re choosing not to deal with CSS specificity to avoid its pitfalls).
IME, the BEM components I wrote were more effective and portable the smaller they were. I ended up with things like text text--small text--italic, which were basically in-house versions of Tailwind (before I knew what it was). So, to paraphrase Adam, I’d rather have static CSS and change HTML than the reverse.
You can use utility classes & Open Props names & still not use exclusively utility classes. No one has said Tailwind can’t be used for utility when needed, but in practice I see almost all names go away. There is nothing to select on that works for testing, or scraping, or filter lists.
Having a system is difficult since you have to make it stringly-typed one way or another, but that doesn’t discount the semantics or considering the balance needed. Often the UI & its architecture are low-priority or an afterthought since matching the design as quickly as possible tends to trump anything resembling maintainability & the same can happen in any code base if no standards are put in place & spaghetti is allowed to pass review.
It really is just the same old tired arguments on both sides here, though. This isn’t the first time I have seen them & that post doesn’t really convince me given it has an agenda & some of the design choices seem intentionally obtuse without use of “modern” CSS from like the last 4–5 years.
Is this something that happened to you? Why would the back-end team decide on what technology the front-end team should use?
On multiple occasions I have seen a CTO, tech lead, or back-end team choose the stack for the front-end either before it was even started or purely based on some proof-of-concept the back-end devs built & did not want the stack to change… just that it was entirely rewritten/refactored.
It raises the question of whether the abstractions that were adopted by CSS are the right ones in the long term, as other solutions are more team-friendly and easier to reason about.
I find the original tailwind blog post really enlightening: https://adamwathan.me/css-utility-classes-and-separation-of-concerns/
Separation of presentation and content makes a lot of sense for a document format, and has been widely successful in that segment (think LaTeX). But for how the web is used today (think landing pages or webapps), the presentation is often part of the content. So the classic “Zen garden” model of CSS falls apart.
I’m in the same boat. I was appalled by it the first time I saw it.
But it fits my backend brain and makes building things so much easier. I like it in spite of myself. I’ve just never been able to bend my brain to automatically think in terms of the cascade, and reducing that to utilities makes it so much more workable for me, and lets me try things out faster. I’m excited about this release.
I am happy that you got a decent website up with Tailwind. I’m sad that you had a hard time with CSS and the conclusion you reached was that you were the one that sucked.
I really can’t be bothered to learn CSS or deal with “web standards”.
Honestly I sympathise a lot with hating the Debian packaging of your tool - this author is not the first and probably won’t be the last, and the way they package Rust is genuinely awful. Flaming is definitely counterproductive, but I wouldn’t want to support any of my code on Debian either, and would consider any bugs anyone runs into on Debian to be their own fault for doing that.
The Debian project’s goal is not to make Rust folks happy, rather it is to make Debian users happy. Perhaps the way they packaged Rust was the best that they could do under the constraints (which are enormous, given the existing infrastructure, user base, and cultural expectations)? I would personally show some humility when criticizing a project like Debian which stood the test of time like very few other open source projects.
On a more constructive note, can anyone summarize the difference between Debian and Fedora when it comes to packaging Rust? I don’t hear any complaints about Rust in Fedora, so they must have gotten it right?
Debian patches all Rust code to use common versions of libraries in order to keep the libraries in separate packages. The problem is that this usually means using versions of libraries that are older, buggier, and were not tested by the original developer (and the developers themselves are puzzled when they start to receive bug reports that are impossible to reproduce against their original code, where using such older libraries is impossible).
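Not Debian’s actual tooling, but a toy sketch of the effect being described: it is roughly as if every crate’s version requirements were rewritten to the single version the distro ships (crate names and versions below are invented).

```python
import re

# Toy illustration only: rewrite Cargo.toml-style version requirements to
# whatever single version the distro packages, which is roughly the effect
# of the patching described above. Crate names/versions are hypothetical.
DISTRO_VERSIONS = {
    "serde": "1.0.150",
    "clap": "4.0.0",
}

def repin(cargo_toml: str) -> str:
    out = []
    for line in cargo_toml.splitlines():
        m = re.match(r'^(\w[\w-]*)\s*=\s*"[^"]*"\s*$', line)
        if m and m.group(1) in DISTRO_VERSIONS:
            out.append('{} = "{}"'.format(m.group(1), DISTRO_VERSIONS[m.group(1)]))
        else:
            out.append(line)
    return "\n".join(out)

# The upstream crate was developed and tested against serde 1.0.193;
# the distro package now builds against 1.0.150 instead.
print(repin('[dependencies]\nserde = "1.0.193"\nclap = "4.4"'))
```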
previous discussion
Yes, that was probably a mistake. They should have rejected any Rust application that has unstable (buggy, fast-churning, etc) dependencies as itself being too immature for inclusion into Debian. All of this is IMHO, I don’t have the complete picture.
Of course, if they did that, they would have received a lot of flack for not including the latest hot stuff, which is what bcachefs was until recently.
Just because libraries receive bug fixes doesn’t mean that they are “unstable”. Though some libraries are permanently unstable, like bindgen. If you’re only allowing one version of bindgen, you’re never, ever going to be able to ship Rust software that actually works correctly.
Like you say, the goal is to make Debian users happy, so I wonder, as more and more software gets written in Rust, will Debian users be happy to either have to live with Rust software that’s buggy and doesn’t work, or live without a larger and larger share of software?
Debian’s packaging policies are often annoying. I gave up helping Debian users who insisted on using the packaged versions of GNUstep. Debian required everything to be built with the system compiler. GCC supported Objective-C (modulo occasionally deciding that 100% broken Objective-C codegen was not a release blocker), but it was an ancient dialect of the language. Supporting some of the new features required ABI changes and so, if you compiled GNUstep (implementation of the Foundation and AppKit core standard libraries for Objective-C) with GCC, a load of stuff would not work well even if you compiled things that used them with clang. The way that they would fail was known and you could, if you were very careful, work around them. But Debian would not let the GNUstep package maintainers simply compile with clang and have things work as users expected because GNUstep could build with GCC (it just wasn’t a recommended configuration and came with a bunch of warnings).
I think we both would agree this policy is there for a good reason. For example, if Debian allowed building with either GCC or Clang at the maintainer’s choosing, sooner or later someone will want to build with libc++ instead of libstdc++. And now we have two sets of libraries that cannot be mixed in the same application. Funny enough, I am trying to figure out how to deal with the exact same problem, but in Homebrew.

So the two plausible solutions to this problem seem to be either to stick to this policy or to start handing out exceptions on a case-by-case basis after carefully analyzing each case for potential fallout. I would venture a guess that the vast majority of Debian users don’t care about GNUstep. So Debian deciding to stick to this policy looks like a pretty sensible choice to me. It’s a tradeoff. As with all tradeoffs, someone will think the wrong one was made.
I think it’s a fine policy for C and C++. It’s not a good policy for the other languages that GCC kind-of supports. There was no mechanism to define the default compiler for other languages.
It’s ok to struggle with solutions when you have constraints. That’s completely understandable. The issue with Debian is that those constraints are sometimes both self-inflicted and don’t actually make anyone’s life easier if taken to extreme. They really could have some exceptions.
It’s anecdotal, but this Debian user of more than two decades can tell you with certainty that “those constraints” do make his life easier.
I don’t want to have 50 versions of every Rust dependency installed on my machine nor do I want 50 copies statically linked into Rust applications that I may want to install. I am happy to leave 1G+ incremental updates to Windows and Mac OS users to enjoy.
Give this a read:
https://wiki.alopex.li/LetsBeRealAboutDependencies#gotta-go-deeper
The TL;DR is that you get a lot of that with C and C++ too… it’s just that, without access to proper dependency management, it comes in the form of meant-to-be-vendored header-only libraries that the Debian maintainers can’t split out, and bespoke re-implementations of things that can’t be deduplicated.
See also https://blogs.gentoo.org/mgorny/2012/08/20/the-impact-of-cxx-templates-on-library-abi/
TL;DR: C++ libraries that use templates behave the same way Rust does… if you build something that’s all-templates as a dynamic library, you get an empty .so file.

Dynamic linking for generic/templated code without the kind of trade-offs that Swift incurs to achieve it is an unsolved problem.
Why?
In particular, either way, Rust dependencies are statically linked and Debian packaging does not support incremental updates on the same package, much less across packages. So you’re paying almost the exact same bandwidth and storage cost regardless of the number of Rust dependency versions in play.
That’s where I’m at with Debian as well. I’ve been using it for a long time but I’m just fed up with these stories where Debian wants to do something weird and completely contrary to the developer’s intention (the recent KeePassXC thing for example). The attitude just rubs me the wrong way and feels very old fashioned. All power to those who like to do things that way, but I’m looking elsewhere these days.
Agreed. The Debian approach was brilliant in the 90s, and for a long time after .. but the world has just changed too much. There’s just too much damned software that’s changing too fast for the traditional approach, I think. I’m slowly moving over to NixOS, which definitely has its own problems and can be very wasteful of disk space, but I’m already finding it less aggravating in a lot of ways.
Me too on all of that, but I imagine Debian is a lot better than NixOS for systems without lots of spare storage, which is still most computers if you count the ones in cars and whatever.
Having said that, I’m 100% sure that Nix or something like it is going to take over literally everything (modulo energy crisis etc.)
Keep in mind the scale of what Debian has to do… they have to manage the intentions of tens of thousands of diverse developers, and they do nearly all of it on volunteer time.
Although I have Things To Say about some of the tools they use, I am overall quite happy that Debian is deliberately conservative with their policies in order to deliver the most stable OS they can.
Debian, and similar traditional 90s-style distros, chose a model that doesn’t scale well and requires massive labor - O(all available software) - to produce the next version of the system. Their model is also hostile to backwards compatibility (using software built last year on this year’s system) or using multiple versions of the same software. These systems fit together in such a brittle manner that changing a subsystem or a few decisions requires standing up a whole separate distribution, which led to the proliferation of slightly different but quite distinct Linux systems. The ecosystem evolved in a way that makes it so challenging to distribute software that the easiest thing to do is to package software with an entire distro and ship that around. It’s ironic that Debian’s fight against static linking and vendored dependencies means most developers targeting Linux prefer to statically link the entire operating system into their software.
To me the amount of volunteer hours spent on projects like Debian is like trying to feed thirsty people by having each volunteer walk to the reservoir, scoop up water in a cup, and then walk that water to the thirsty person somewhere in town. Commendable effort, but the world would be much better if we built an aqueduct and plumbing with that volunteer time instead, to completely eliminate the need for toil in perpetuity.
From your last paragraph, it sounds like you are concerned with merely shipping the OS in the most efficient way possible… Debian, etc are concerned with releasing a stable OS that works out of the box and doesn’t surprise their many millions of users. Yes, maintaining a versioned distro like Debian is a lot of work. But the results are worth it, otherwise thousands of people wouldn’t volunteer their time toward making it happen.
It’s not clear to me what you are proposing as an alternative.
If you are advocating for a rolling-release distro like Arch or Gentoo or NixOS, those have their issues too. Namely the sheer amount of constant package churn and never being quite sure that the exact combination of package versions that you have just installed is known to be compatible. On Debian, I only have to worry about my workflow or applications breaking every two years. On a rolling-release distro, I have to worry about it every time I run the command to update. Often my worries turned out to be justified as I spent a few hours figuring out how to fix my system. I was a Gentoo/Arch user for a long time; I am VERY familiar with this. Maybe this isn’t everyone’s experience but it certainly was mine.
There are versioned distros that update at a higher cadence, like Alpine.
Honestly, if you ever have to worry about your workflow breaking, something is wrong, regardless of whether it’s every week or once every two years.
Maybe if we applied the Rust model to all software, that wouldn’t be something to worry about. (And I mean the actual Rust model, not the way Debian does Rust)
Library developers and distribution packagers have different perspectives on the user needs. When they disagree, developers are quick to blame the packagers (“my stuff works fine on all other distributions, don’t be annoying with your own rules!”) and packagers are quick to blame the developers (“all other packages work fine with these rules, don’t be annoying with your unruly development practices!”). Different users have different needs and they sometimes align more with one side or the other.
My personal rule of thumb:

1. For libraries and development dependencies, follow upstream and use the language’s own package manager.
2. For end-user applications, install the distribution’s package.
Sometimes it is useful to make exceptions to preference (2) (eg. maybe your web browser you want to manage yourself and let it auto-update, etc.; maybe jj is not packaged, etc.), but each exception comes with a convenience cost unless you are very disciplined.

This does not avoid all troubles, because packaging an end-user application (2) still requires packaging its libraries, which are written by people who prefer perspective (1), and this generates the sort of complaints we can read around on the web. But mostly it’s fine, and getting my distribution to manage and update my applications and their dependencies brings a large convenience bonus. Snap, Flatpak, etc. are trying to replace package managers in a way that is closer to how upstream developers test and release their software. This is clearly convenient for proprietary applications, but I believe the jury is still out for other applications.
Same, I don’t hate Debian though.. It’s mostly the users of Debian Stable that report bugs to upstream that frustrate me. Like no, your version is 1-2 years old, we do not support it anymore.
I wish Debian users reported the bugs to Debian, not upstream. Debian devs can make better calls whether to report bugs to upstream or patch them out themselves.
With LTS, the bugs are LTS too.
FWIW, the Debian project always instructs its users to report bugs in software that it packages to Debian, never to upstream. I, for example, always do so.
Maybe they need to communicate that better. (And they also need to communicate that if you like playing with experimental filesystems, Debian is probably not the right distro for you!)
I do appreciate that they’re trying to play nice with the rest of the ecosystem, though. Knowing that they’re willing to take on bug reports reframed things for me—Debian being weird and contrary luddites, versus Debian having a specific LTS goal that (quixotic or not) they’re making a good faith effort to pursue. It’s not my goal, personally, but I can root for them from the sidelines.
Yeah, I know, sadly a lot of the users don’t. :/
My plan is to put it in big bold letters in my bug-reporting template that you must reproduce bugs using the officially supported builds before reporting them, including a checkbox that they have, and then to add some code to my --version display which adds something like -system to the end of the version string if installed under /bin, /usr/bin, /sbin, or /usr/sbin. (/usr/local/bin and /usr/local/sbin are fine.) If a -system version turns up, then I’ll close the bug with a “Reopen with proof that it’s not a distro build” message and, if I catch a distro patching that out, then I’ll look into using either trademark law or a license change to force an Iceweasel situation.

Can you share some links which provide context on how Debian packages Rust and what issues that causes specifically for Kent? I haven’t encountered this drama before.
Here’s a critique on Debian’s approach from a long-time Debian developer: https://diziet.dreamwidth.org/10559.html
Debian Orphans Bcachefs-Tools
Oh wow, this is some wild editorialization from Phoronix:
Anyway, thanks for the context.
It sounds weird to insist on having system packages for every dependency in a language where everything’s statically linked, doesn’t it? Does Debian hack Rust to do dynamic linking against system libraries?
It’s very weird and essentially puts a ton more work on package maintainers since they either need to ship multiple versions of 1 dependency in separate packages or update packages to use a newer dependency.
debian (and fedora) package rust libraries by moving the code into a system folder, and most of the debian rust libs aren’t even marked for all architectures so now you get the same tarball for every architecture supported
I’m jaded on debian so I’ll put my troll hat on and say — no, of course not. The dependencies situation is nothing but a power play / bullying by part of the developer team. They can write C-with-classes and mailinglist messages, not rust. See also the Objective-C situation in neighbour comments.
I don’t know when it started but we’ve been using ndjson for many years at $dayjob as it’s compatible with both Azure IoT Hub and Apache Spark.
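For anyone who hasn’t run into it, ndjson is just one complete JSON document per line, which is what makes it easy for line-oriented consumers like Spark to split and stream. A minimal sketch in Python (the file name and field names are made up):

```python
import json

# ndjson: one JSON object per line, newline-delimited.
events = [
    {"device": "sensor-1", "temp": 21.5},
    {"device": "sensor-2", "temp": 19.0},
]

# Writing: serialize each record onto its own line.
with open("events.ndjson", "w") as f:
    for event in events:
        f.write(json.dumps(event) + "\n")

# Reading: every line parses independently, so consumers can split the
# file on any newline and process chunks in parallel.
with open("events.ndjson") as f:
    for line in f:
        record = json.loads(line)
        print(record["device"], record["temp"])
```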
Yes, probably they wanted to use deciseconds because they give more flexibility and allow expressing quantities that would not be fine-grained enough if you were using whole seconds.
If using a “finer-grained than seconds” time unit, milliseconds or nanoseconds are the common choices. Note that DHH himself in the quoted tweet says “100ms”, not “1 decisecond”. If you noticed that a feature that was configured with a 1 was waiting for 1ms, it’d be easier to determine the causal link.

I thought the whole point of the raspberry pi was that it was a $35 computer. I guess not anymore lol.
The original Raspberry Pi Model B with 256MiB of RAM cost $35 in 2012. Adjusted for inflation, that’s $48. Today, the Raspberry Pi 5 with 2GiB costs $50. You can also buy a Raspberry Pi 4 Model B with 1GiB of RAM for $35, which in 2012 dollars would’ve been $25.
So, exact same cost, or cheaper, vastly more capable at things the original really couldn’t do at all (like be a desktop), while still being just as good at what it was originally designed for.
Do I think you should buy one? Honestly probably not. 98% of people don’t use the GPIO, and if all you need is a low power server, N100 based products have you covered. But really, RPis haven’t gotten super expensive or drifted from their original purpose much at all, other products have just gotten better and cheaper at the main stuff tech professionals and tech hobbyists have used RPis for.
Myself and basically everyone I know who has one has it because it’s a linux box with GPIO/I2C/SPI. That’s what it’s for. Maybe my social circle is abnormally hardware focused, but 98% is a pretty decisive figure. I wonder if there’s some way to actually through like raspbian popcon or something
I think it must be.
I have at least half a dozen Pis and have used them for 12-13 years. I have never used the GPIO for anything and would be happy with a model without one. Talking to others, most people I know don’t use them either.
I’d agree with the “98% never use them” figure.
I have used the GPIO pins on several RPis but not the ones on the (single) unit I have at home - where does that put me? ;)
Standing very heroically indeed with a leg in both camps, perhaps?
A veritable Colossus of Boards.
98% is certainly a number I pulled from my ass, but I don’t know anyone who uses the GPIOs, they all use RPis as low power servers. When RPis got scarce from the pandemic supply shock, almost all discussion online that I saw about what to buy instead centered around MiniPCs that don’t have GPIO. Occasionally I’d see someone point out that they didn’t work as RPi replacement due to the lack of GPIO, but most people didn’t seem to care.
BTW – I forgot to ask. What does:
… mean? I can’t make head or tail of it.
Oops, I accidentally a word, was meant to be “some way to actually check…”
popcon is the Debian project that measures package usage: https://popcon.debian.org/, raspbian is the (i think official?) raspi distro, based on Debian. If using GPIO required installing some package, and raspbian had some equivalent of popcon (idr if it does), it’d be possible to tell roughly how many people use it.
Aha! Right. When I posted I had not only tripped over the missing word but misread “popcon” as “popcorn”, which left me very confused.
popcon is Debian’s package statistics system: https://popcon.debian.org/
So I’m assuming the question is something like “Is there an equivalent for Raspbian that could deliver useful statistics about people using software that uses GPIOs?”. My guess is that the answer is no, because people would more often download such software outside the package manager, write their own using libraries obtained elsewhere, …
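If Raspberry Pi OS did feed a popcon-style service, the check itself would be trivial: roughly something like the sketch below, written against Debian’s popcon. The by_inst listing, its format, and the package name are assumptions on my part, and the real column layout may differ.

```python
import urllib.request

# Hypothetical check: look up install counts for a GPIO-related package in
# popcon's published statistics. URL, file format and package name are
# assumptions, not verified details.
URL = "https://popcon.debian.org/by_inst"
PACKAGE = "python3-gpiozero"  # hypothetical choice of GPIO library package

with urllib.request.urlopen(URL) as resp:
    for raw in resp:
        line = raw.decode("utf-8", errors="replace")
        if PACKAGE in line:
            print(line.rstrip())
            break
```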
Yeah, I think this is the reason to get one though. So I have one of the old raspberry pi 2s that I did get for $35 many years ago, and it is in some ways disappointing - it is sitting in a junk box that I almost never use, since it is fairly disappointing as a general purpose computer, not even especially good as an X terminal! (which is why I’m both interested and disappointed in this thing - the new ones might be better computers, but once the price goes up, it has to compete with everything else for that space.)
But it has been useful to me for two things: one is just experimenting with the GPIO pins. Never made anything useful for myself with it…. but knowing how they worked transferred well to work when a client said they have this other ARM device with GPIO pins and also had these lights and buzzers they wanted programmed on it in their business… like that’s not that hard and i could have probably figured it out on the job and stayed on schedule anyway, but having that preexisting experience made it a simple task to do what they wanted to do. So the education aspect worked for me, not lifechanging or anything, but some value.
The other thing I did with that was make a little noise maker and sound monitor for the baby’s room, a small program on a mini computer tucked away on a shelf. Of course, you can buy that kind of thing commercially for cheap anyway, and tons of things can do audio so that’s not unique like gpio (kinda) is, but still, recycling this thing I already had to do the job was kinda cool.
So I feel it was worth the $35, even if it is sitting in a box of random junk right now. Higher price point though feels different, though you do make a fair point they actually do provide a similar thing after inflation nowadays. Oh well, probably not worth getting another one anyway, even if a stronger machine, i don’t actually need it for anything.
Try RISC OS on it. You will probably be amazed how well it runs.
https://www.riscosopen.org/content/
EDIT: I am trying to clarify this. RiscOS is not a Linux, not a Unix, not even in the entire family tree of C-based OSes; it is considerably more different from any C-based OS than Windows is different from Unix.
But RO is a multitasking GUI OS that can handle IPv6, USB and so on, with a choice of languages, editors, and apps, and it runs well on a decade-old Pi with 256 or 512MB RAM. It was originally built for an 8MHz ARM2 with 512 kB of RAM, although this is clearly not that version.
With my journo hat on, talking to RasPi users, my impression – and I am still amazed by it – is that most people never replace the default OS. They should. There are smaller Linuxes for it than Raspbian (e.g. Alpine), and multiple non-Linux options. The Pi can run more different OSes than any other ARM SBC.
Oh, wow! I had no idea that RISC OS was still around. I wrote my first C program in !Edit on RISC OS.
Very much so. It’s FOSS now, it runs on Pi models up to the Pi 4 and 400 (plus various other Arm SBCs), and in the latest release it gained support for the Pi’s built-in wifi chip.
I reviewed the latest FOSS release last year.
https://www.theregister.com/2024/05/02/rool_530_is_here/
Since then the Risc OS Direct distro has also been updated, which adds more apps. Not tried that yet.
https://www.riscosdev.com/direct/
A Webkit browser is in development – not sure if it’s released yet.
That’s amazing. I thought that was all long dead. I have happy memories of the Archimedes (including many hours playing Lander but I’m pretty sure I did some more productive activities too…)
It’s strange to think that I was using a beautiful graphical desktop operating system in the ’90s, only to then spend most of my professional career in the 21st century using a VT100 emulator.
For that you can get the $35 model (or $10 RPi Zero) though, the big expensive ones are a somewhat more niche mix.
so tbh I didn’t know there still was a $35 model. Every ad I’ve seen for the raspberry pi since covid was like $80 or more, including this one here, hence my original comment. But I’m glad to be corrected that they still offer one!
I bought an $80 Pi + case + charger + microSD + weird HDMI cable; I never used the GPIO, but I use it as a little home server for things like Pi-hole, and it’s so good for that. The problem is you basically jump up 3-5x the price once you start looking at the next class of hardware like the Intel NUCs and such. So even in this $100-150 range it’s still pretty amazing. Edit: Looks like Intel’s N100 is probably a good middle ground between the two these days.
right, that’s the thing, it’s not 3x with those. a cheap N100 mini PC including an SSD can be <$200 too. (I run a Pi I had myself still for such things, but the more expensive RPis are not the “no-brainer, almost no alternative” choice anymore)
I fondly remember using the original B+ as a desktop in 2012. Actually laptop. It was fine for editing and programming, but scrolling in Firefox was unusably laggy. I had to use my phone to read documentation. You can blame people like me for wanting SBCs with more and more ram ever since.
What’s wrong with a considerably more expensive SBC from a competitor then? Simple: only RPi has had graphics drivers, let alone HDMI compatibility to speak of. I could give an honorable mention to the Nitrogen6x, which you could buy with a working portable screen. But here we are over a decade later, and RPi has not really had any competition for all that it is.
The competition, as in other Arm SBCs, heavily agreed. They all seem to have a lot of drawbacks compared to Raspberry Pis. But the competition, as in x86 mini PCs? I think they’re basically on par or strictly better for many use cases. Faster GPUs, faster CPUs, more IO, not quite as low power but still pretty low power, and basically the same price as the RPi models with more RAM, if you need to buy a power supply and case for your RPi. And that price might include decent enough SSD storage too!
I’m looking into Intel N-series CPUs now, but it seems like the N100 and N200 only have default clock speeds at 100MHz vs RPI5’s 2.4GHz, and the Pi5 seems like it consumes about half the idle power. At a glance, the most compelling thing about the N-series chips is the hardware transcoding, assuming it Just Works. I mostly just use my RPIs for home server type workloads; I care about power consumption but not GPIO–should I be looking at N-series computers for these kinds of workloads? Why/not?
Also why is this post hidden/marked as spam? This seems eminently interesting and on topic for this forum?
You must be misreading something, because the N100 has a max clock of 3.4 GHz: https://www.intel.com/content/www/us/en/products/sku/231803/intel-processor-n100-6m-cache-up-to-3-40-ghz/specifications.html
We’re talking about different clock speed values. I specified “default clock” and you responded with “max clock”.
An N100 does not have a “default clock” (whatever that exactly means) of 100MHz either. (And given it’s an entirely different CPU, comparing clock speeds as raw numbers is pretty pointless anyways.)
If it’s completely idling I’m not sure if it would make any difference… but 100MHz just sounds awfully low to run a modern system.
see other comments: it fairly sure doesn’t, at least not in actual products, and either way focusing on idle clock speed is telling you nothing about performance.
Everything I’ve found on the Internet suggests the N100 has a clock speed of 100MHz. For example. I may be ignorant, but I have a hard time imagining a processor operating at 100MHz (i.e., not TurboBoosting) is going to outperform another operating at 2.4GHz even if they are different chips.
You’re misunderstanding something. There is no such thing as a “default clock”; all processors, both the N100 and the RPi’s BCM2712 included, do not have a fixed clock. The clock speed of the CPU will vary with the workload it is subjected to, so both the N100 and the RPi will clock down as low as they can when idle; this is the 100MHz you’re seeing for the N100. Both CPUs will also clock up with workload; the N100 can clock all the way up to 3.4 GHz on one core while the BCM2712 can to 2.4 GHz. In realistic use cases the N100 isn’t sitting at 100MHz, ever. I can guarantee you my N100 mini PC would be good for nothing if that were the case.
Either way, as the GP says, clock speeds don’t matter for much. You don’t care about what the clock speed is, you care about how fast the CPU can get things done or for how much power. You can compare online benchmarks for that: here’s a comparison on Geekbench of an N100 PC and the Rpi5, the N100 is about 40% faster for a lower TDP. Better yet, get both and measure your workload. Personally, I’d say there’s no reason to get an RPi5 over an N100 unless you need the GPIO.
Thanks for the clarification, the link, and the advice. That’s very helpful. 👍
(I’m fairly sure the “100MHz” number is just people scraping Intel’s product database and using a stupid default because there is no number in there for whatever reason. Looking at a few more useful results shows 800MHz set on actual products using it, which makes more sense. Or maybe it can indeed be set to go down to 100MHz theoretically; it’s not entirely impossible.)
If it has work to do, it’s not running at base frequency. The same way the RPi5 will not be running at its base frequency of (if a quick search is to be trusted) 1.2/1.4 GHz when it has work to do, but rather speed up. In both cases, the actual frequency under long-term load will depend on the power envelope set (I’m assuming with the N100 that’ll almost always be 6W) and whether the cooling system can match that; for N100 mini PCs I’m seeing numbers of 2.9 GHz quoted for all cores being under load. With modern chips, the frequency they run at when they have nothing/almost nothing to do is almost meaningless. So even if we ignore that these are two entirely different architectures, you should be looking at measured performance instead, the entire time.
2.4GHz is the max clock of the RPi5’s processor?
I don’t think the RPi5’s chip has anything like TurboBoost, so I think the max and default clock speeds are the same, overclocking notwithstanding.
Modern CPU clock speed management is both very complex and also very different from how you understand it.
All Raspberry Pi models (except Raspberry Pi 2 B+) are still available without deprecation notice. You don’t need a Pi 5 with 16GB, the most expensive one: you can choose whatever fits you best
ngl I don’t really understand why the AUR takes such a hard-line stance against like WSL2-specific (especially since you can install plain Arch in WSL2 via the official bootstrap tar.zst releases) and now ARM-specific packages. it’s… entirely user maintained. it’s not like it’s extra workload on Arch maintainers, I think?
feel free to correct me or explain it, I’m just confused.
I guess they’re trying to play extra safe until the new rules for new arches, which, as discussed in an RFC, will change the Arch stance on arches other than x86-64. In the end, even if it’s a very small cost (economic and time wise), you’re using the resources of a distro for something that is not supported by them. Arch Linux ARM is a different project right now. Just like Ubuntu and Debian are different projects, although one is based on the other.
at least as long as arch is an amd64-only distro, do you not think there is value in being able to install any package listed on the AUR?
Since the cited examples rejected are all of software that does not make sense to install on amd64, not really? I think your argument would more support a position of “if this software can be built for amd64, you need to support that”.
i’m saying as long as arch is amd64-only, it doesn’t make sense to list packages which don’t make sense to install on amd64
iow, the arch field should ideally be useless as of now, kept only for a hypothetical future - the only other “arch” is any, which is used for scripts

8th is a commercial Forth-like, with native GUI support etc. in many languages, which seems extremely cool. I discovered it while reading about Gambas. Currently, I only know of 4 languages with built-in cross-platform GUI support: Racket, Free Pascal (Lazarus) and 8th, so it seems worthy of sharing.
I’m curious if anyone has experience with it etc.
Tcl/Tk comes with a cross-platform GUI, and you can use it from Python with the standard library’s tkinter package.
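A minimal sketch of what that looks like, using only the standard library:

```python
import tkinter as tk

# A tiny cross-platform GUI: the same code runs on Linux, macOS and Windows.
root = tk.Tk()
root.title("Hello from Tk")

label = tk.Label(root, text="Tk from the Python standard library")
label.pack(padx=20, pady=10)

button = tk.Button(root, text="Quit", command=root.destroy)
button.pack(pady=(0, 10))

root.mainloop()
```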
There are Smalltalk derivatives with cross-platform GUIs such as Pharo and Squeak.
Perl can use Tk even without a Tcl interpreter. (Python uses the Tcl interpreter under the hood, AFAIK). I had good experience with both of them.
Java (and all JVM languages by extension) has GUI support too: AWT uses the native GUI widgets, while Swing draws its own widgets.
You forgot about REBOL (or its descendants like Red) :)
I think, after Gentoo, this is the second western distribution that supports loongarch64
I had to look up what that means, it’s a new RISC instruction set architecture being developed by a Chinese fabless chip company.
Can you even buy these chips in the west?
Never tried, but apparently there are some Loongson computers and SBCs for sale on AliExpress
I’ll need to try this! Previously, my ISP didn’t offer IPv6, but I moved and the new ISP does support IPv6 (at least in theory, I haven’t experimented a lot with it yet!)
Cheers mate, looks interesting!
Thank you!
The slides do not seem to contain a performance evaluation of the work. How does it compare to the non-JIT interpreter and to other Prolog implementations?
Sadly, I don’t have any meaningful benchmarks right now since I still don’t have enough instructions implemented
If you want to tell other people about this work, I think it would be helpful to start with a summary of the work and its current state to manage expectations. Maybe have a website with a short summary of this, that also points at your slides for full details. I looked at your slides to figure out what you had done, but there is basically no context and no status information at the beginning or at the end, so I had no idea what the completion status was.
Audio of the talk if you can understand my English: https://youtu.be/RFukO5Mi_NI
This is the kind of thing that makes me completely ignore Kotlin, even though it is a very nice language. It’s completely hamstrung by JetBrains’ incentive to sell an IDE for it.
I think there’s also a chicken-and-egg problem there. Sometimes they release stuff that doesn’t require the IDE, like this REPL was, and how many people used it? I do Kotlin backend development for a living and I think it is very underappreciated. But many people still think it is only useful for Android development. JetBrains libraries like Exposed or Ktor are very nice too, but even in backend we tend to prefer Java-based libs instead of the pure Kotlin ones, which is a shame.
Following that logic, Kotlin should sunset any use outside of Android then, since how many people are using that? I’m not saying people aren’t using the REPL (I sure as hell haven’t in the time I used Kotlin), but that there is a tendency in JetBrains, as stewards of the language, to prioritize features which sell IntelliJ licenses and sunset features which don’t. You cannot tell me with a straight face a notebook plugin and IDE scratch files are alternatives to a REPL; the fact that JetBrains does shows that they don’t care about you using Kotlin if it doesn’t mean they can sell you an IntelliJ license. That’s the biggest thing holding Kotlin back. Even Apple doesn’t do that with Swift.
That is concerning. A similar thing happened for C# and that was what made me quit using it, after 10 years of using it as my main language.
I think if I ever try Android development I’ll just stick with Clojure.
No, what I’m saying is (and this is a personal feeling, very subjective) that JetBrains has been trying to create a good Kotlin ecosystem, independent of the Java one, but the company isn’t big enough to push all the projects they start and the community isn’t there to continue them. So they create nice stuff, but with very few users it doesn’t consolidate; in the end, some people don’t even bother to enter the community because the ecosystem isn’t different enough from the Java one, and Java has improved too, so the ecosystem doesn’t get improved further by external developers. You say JetBrains makes stuff to sell IDEs. That might be true, even though both IntelliJ and Android Studio have free versions. But the REPL was a feature developed by JetBrains too. They also made the Ki Shell, which according to the announcement will also be deprecated.
BTW, Java didn’t have a REPL for decades and it wasn’t a problem for people
That’s what I mean by Kotlin sunsetting anything outside of Android. Nothing else has the community to continue it fully, except maybe kotlin for backend with spring, but even that is a tiny portion and I’m not even sure that would survive if JetBrains dies off. If that’s what JetBrains is concerned about, they wouldn’t be plugging notebooks as an alternative; I doubt notebooks have more of a community supporting them.
I’m not the only one saying it. JetBrains said it themselves: https://blog.jetbrains.com/kotlin/2011/08/why-jetbrains-needs-kotlin/

The next thing is also fairly straightforward: we expect Kotlin to drive the sales of IntelliJ IDEA. […] And while the development tools for Kotlin itself are going to be free and open-source, the support for the enterprise development frameworks and tools will remain part of IntelliJ IDEA Ultimate, the commercial version of the IDE.
Emphasis mine. They openly resist making a language server for Kotlin, explicitly because it would eat into their bottom line. So yes, while they made a REPL to begin with, the fact that they’re sunsetting it in favour of plugins for the IDE they sell shows that they don’t want you writing Kotlin if it doesn’t happen in their environment. Even Microsoft - who, as @kameko pointed out in a sibling comment, try very hard to make you use Visual Studio for .NET - at least let you use a somewhat gimped LSP for it.
Java was never billed as a scripting language, which JetBrains is clearly trying to make happen for Kotlin.
It’s also worth mentioning that in the past there was another language based on Codd’s ideas: QUEL, which was in fact the original query language of what is now known as PostgreSQL.
We’ve been using CosmosDB for a new feature, and it’s a database that tries to be one-size-fits-all with its multiple APIs. So we tried to use it instead of MongoDB Atlas, and it worked with the same client libraries, but it’s not really MongoDB. The pricing aspect of CosmosDB is complex too: you have the vCore-based model (similar to Atlas) and the RU-based one, which may also have some provisioned throughput.
Disclaimer: I’m from Spain.
I think many people wouldn’t consider themselves Europeans because it’s a very broad term. Many people still don’t know the difference between Europe, the European Union, the Eurozone, the Council of Europe and Eurovision! Regarding (Br|L|.*)exit, it’s true that it’s not just conservatives who want it; they were just the main force behind Brexit. The EU is a trade union after all, one that evolved to harmonize more and more law between countries so trade could be made more efficient. But there’s no real shared culture behind it. We usually speak together in English, which is a language with very few native speakers in the EU nowadays. There are no newspapers, radio stations, or TV channels that are cross-border… there are no memes that only an EU person would understand (but within a country, there are). Most memes that are widely known in the EU are known because they’re also popular in the US. Even in European elections the political parties are the local political parties; they run under that name and then form alliances to create the groups in the parliament.
Quite honestly, to feel more European we need to move the population around. I would love to see a mandatory one-year Erasmus/social service for all citizens, in a randomly selected location of the union.
There are no newspapers, radio stations, or TV channels that are cross-border…

Although I mostly agree with your point of view, this is actually not true. Look at Arte, a TV station conceived to create and broadcast mostly the same programming for German and French audiences.
There’s also https://www.euronews.com/
I’m aware of arte.tv, but it doesn’t broadcast in Spain, so for me, it doesn’t count as a TV station.
it is beamed to you from Hotbird 13 and ASTRA 1:
Coverage map if you go sailing: https://www.kvh.com/support/coverage-maps/tracvision-marine-satellite-coverage-maps/europe-astra-1kr/
It’s always markedly surprising that the Yanks expect anyone to be or feel “European”.
Do they not know that they’re European too?
Err…no? I’d say in my neighborhood probably half the population has no ties to Europe whatsoever.
surprising that the Yanks expect anyone to be or feel “European”

It may help explain their mistake, to consider that the people of the USA “[are] or feel” “American” even while the USA is very close in size to Europe (and more than twice as large as the EU).
From reading history, I think the EU is akin to the United States before the Civil War (or even before WW2) where the primary allegiance was to the constituent state, not the federal government.