“biz > user > ops > dev” is used for “late capitalism”, but perhaps “open source software” would be a better fit, where it can lead to software actually optimized for users.
maybe the two can join forces? vc-paid open source projects with no expectation of becoming a business, producing the best user-focused software.
I think you can and should put user ahead of biz because the user is screwed if the business closes due to lack of funding. The idea is to take enough money to keep the lights on in service to the user.
Reminds me of my friend who works as tech support in a hospital. From her I have gathered two main facts:
Medical professionals are usually idiots, and
It should be a capital offense to force quarterly password changes upon cleaning staff who have zero real need or desire to touch the medical IT system but still have to use it to log timesheets
In my experience, it is never productive (but possibly cathartic) to label people as idiots.
Computer people often seem to have a hard time understanding this, but just because someone isn’t a computer expert doesn’t make them an idiot. They have better things to do than trying to work a stupid piece of machinery, and as you can read elsewhere in these comments, they have to do so under a tremendous amount of stress and often while sleep deprived.
They might be a “computer idiot”, but by that standard I’m a total idiot in most non-computer fields. And I suspect a lot of us are.
One would think a system could be designed wherein someone whose role requires one simple set of permissions (view and modify one’s own timesheet) would be exempt from the normal security standards of rotating passwords because the permissions of the role simply can’t result in any kind of harmful system breach.
There is absolutely no reason to use passwords in the hospital setting at all. Everyone should just use a card that is unlocked with a PIN code for the next 12 hours or until the person clocks out. Then the person should be able to move freely between the terminals.
I’m smiling a little bit at the history you reminded me of. https://en.wikipedia.org/wiki/Sun_Ray These units had smart card-based authentication and transparently portable sessions. You walk away from a terminal and take your card with you, plug it into a new terminal, and carry on working on whatever you were doing. Virtually zero friction other than network latency sometimes making the UX a bit laggy. In practice it worked really nicely the vast majority of the time.
I used these briefly in Bristol in 2001 through a friend from the Bristol and Bath Linux Users Group; it was really impressive. Using tmux sometimes reminds me of playing with those Sun boxes.
Apologies that it took me a while to reply. How does web-based make sessions portable? It seems to me, without having put a ton of thought into it, that being browser-based makes portability significantly more challenging. You can’t use a shared client environment (i.e. you need a separate Windows/Linux/OSX/whatever account for each user) and you’ll still have to manage security within the confines of a browser environment. I agree that it makes it really easy to access content from everywhere, but you’re also dealing with multi-layer authentication (Windows login, browser login, cookie expiration times, etc).
With a SunRay you could pull your smartcard, walk to the other end of the building, put your smartcard in, and have everything magically pop back up exactly how you’d left it.
Ah, right. I was assuming that at this point the software stack runs wholly in the browser and the workstations would basically be kiosks.
Some equipment has a dedicated computer (radiology basically), but since there usually are only so many operators and the software might not even support multiple system users, shared OS session would probably work fine in practice.
But yeah, we might just as well launch a container on a shared server for every user and let them run a lightweight desktop over SPICE.
It looks like you can configure ChromeOS to use a smartcard for both login authentication and SSO inside the browser, as well as being able to use HTTPS Client Certificates from the card. It’ll also forward those credentials along to Citrix (gross, but common in healthcare) and VMware for remote desktop sessions. Very cool!
There are so many users and their work is so important to us that they should and mostly could afford a solution tailored to their needs even if it meant a custom operating system. Which it most probably doesn’t - as you’ve found out.
Instead we are trying to shoehorn them into environments designed for office clerks which just doesn’t cover all the use cases and will always cause friction.
And it’s not just the login method. A person dear to me, who works in the best hospital in my small country, caused trouble for the IT department a year or so back. Apparently there is a mandatory field in the patient record. Well, if you don’t know the value at that point in time, doctors figured, you just enter 555555/5555. So a doctor filed a new patient, my dear person figured it was a duplicate record and the patient wasn’t new after all, so they merged the records and whoops, a couple thousand patients with their 555555/5555 got merged together. Yay!
There is a concept from manufacturing described e.g. in 5S, but readily described by any traditionally brought up woman as well. You should declutter, and not just that: you should actively build decluttering infrastructure. That means e.g. making it possible to omit fields that are required from the process point of view, allowing the operator to flag records for review and describe the discrepancies, adding screens for dealing with such records, and so on.
Instead of trying to make it impossible to enter data on behalf of a doctor, which only leads to credential sharing and obscures who did what, make it possible and simplest under the name of the actual person, and hold them accountable by letting the doctor review such operations e.g. next morning, and so on.
Also, when there is a terminal to help e.g. carry out a surgery or a scan or enter data about patient inside the bed or in the room, you probably don’t want your desktop. You want to log into the context specific terminal and just access or enter data under your name. So remote desktops might be useful in some cases, but not all of them.
/me cries in custom SQL generator because Django ORM can’t JOIN and everyone keeps repeating “you must have a wrong problem because Django is the perfect solution”.
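(For anyone hitting the same wall: short of a full custom SQL generator, the usual escape hatch is raw(). A minimal sketch, with made-up app, model, and column names, of the kind of non-FK JOIN the ORM has no good spelling for:)

```python
from myapp.models import Employee  # hypothetical app and model

# A JOIN on a non-FK column; raw() runs the SQL but still maps rows back
# onto Employee instances, attaching the extra department_name column
# as an attribute. The query must include the primary key (e.id).
employees = Employee.objects.raw(
    """
    SELECT e.id, e.name, d.name AS department_name
    FROM myapp_employee AS e
    JOIN myapp_department AS d ON d.cost_center = e.cost_center
    """
)
for e in employees:
    print(e.name, e.department_name)
```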
It seems to me that Django has painted itself into a corner interface-wise and now any improvement will require major refactoring that will break a lot of production stuff for a majority of users.
I tend to agree. On the other hand, doing a small project in Flask that quickly gets out of control teaches you why you want to use the MVC patterns, etc. Once you have internalized the patterns, you don’t need Django anymore, but it is helpful to have a place to learn it from and trying to go it alone and failing also gives you the motivation to learn it.
One way to think about it is when to use which in the learning process. For a student doing independent study in university, Flask is probably fine for making a small mess. For a junior dev at a company, you want Django so they don’t screw up the company app and you can give them books to read to learn the basics.
I don’t believe that postponing learning SQL until after the N+1 problem bites you in production is the way to go, to be honest. Maybe use Flask (or FastAPI or something) and train the newcomer a bit more extensively? Maybe even walk them through the codebase. Or perhaps write a CONTRIBUTING.md that actually incorporates feedback from newcomers?
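For reference, since the junior won’t know what to look for: the N+1 in Django terms (Book/Author are made-up models), and the one-line fix:

```python
from myapp.models import Book  # hypothetical model with an author FK

# Each loop iteration fires a separate query for the author: the N+1.
for book in Book.objects.all():
    print(book.author.name)

# One JOIN up front instead; the loop then does no extra queries.
for book in Book.objects.select_related("author"):
    print(book.author.name)
```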
What’s the learning context we are imagining? I think junior on their own and junior in a firm with senior code review are pretty different. And student experiments are even more different.
Using at least something like staticjinja is not going to hurt you. You can keep using plain HTML and only make use of template inheritance and includes to at least move the unimportant parts of the pages away from the content in a way that makes it easy to come back to them and change things without going through multiple files and repeating the edits.
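The whole setup really is tiny; a sketch using staticjinja’s Site helper (the directory names are my choice):

```python
# build.py: render every template under templates/ into site/,
# so individual pages can extend a shared base.html and use includes.
from staticjinja import Site

if __name__ == "__main__":
    site = Site.make_site(searchpath="templates", outpath="site")
    site.render(use_reloader=True)  # rebuild on change while editing
```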
I would personally probably also add a way to use Markdown, because I find writing plain HTML tedious and distracting. Good when I need its power, bad when I need to type &amp;, &quot; and <strong> constantly. I suffer from ADHD, which means that any accidental complexity will make it easier to procrastinate. Also </strong>.
You can set up your CSS so <b> can be synonymous with <strong>.
Yea, typing &amp; is annoying; I suggest typing “and” instead and using regular quotes rather than &quot; - maybe there’s some other solution to this that isn’t immediately obvious to me :p
Looks like zero? Especially if you just used it from the command line.
Yea, typing &amp; is annoying; I suggest typing “and” instead and using regular quotes rather than &quot; - maybe there’s some other solution to this that isn’t immediately obvious to me :p
Quoting MDN:
If you have to display reserved characters such as <, >, &, and " within the <pre> tag, the characters must be escaped using their respective HTML entity.
The problem I have with using a form of text markup (like Markdown) is properly escaping asterisks when I really want an asterisk (which isn’t often, but does happen from time to time). It’s especially bad when it happens in code I’m posting. Then I have to go back and fix the post. Yes, this happened recently, and it affected the output of the code, rendering it useless.
nav, header, footer. cp template.html todays_post.html. If you need extremely basic templating, template.html.sh > todays_post.html.
Why do you need breadcrumbs? If we’re talking beyond blogs, you probably shouldn’t write any static site generator and use what exists, because others will need to come along to maintain it…
Writing raw HTML is also a matter of your editor. If your editor supports snippets and smart expansion, you can write just plain HTML pretty quick. There’s some great vim plugins like emmet.
I’ve wanted to go to an SSG for years but the simplicity of HTML for such a simple and rarely updated personal site is unmatched. I dealt with WordPress for a decade, more than a decade ago, and I’m glad I got away from it because I spent more time upgrading and hardening it than I did blogging on it.
I know it’s probably something wrong with me, but where can I find this proposed legislation (containing this sinister Article 45) and read it? Also, if it is really true, I don’t understand how any country in Europe would agree to this. Will e.g. Hungary be able to create a new certificate for riksdagen.se that browsers across the whole of Europe will just accept? Or is there any more detail to this story?
Also, if it is really true, I don’t understand how any country in Europe would agree to this. Will e.g. Hungary be able to create a new certificate for riksdagen.se that browsers across the whole of Europe will just accept? Or is there any more detail to this story?
It looks as if this is another case of well-intentioned legislation being written by people with no understanding of the subject at hand and without proper consultation. I believe (judging from the analysis in the letters) the intent was to require that EU countries are able to host CAs that are trusted in browsers, without the browser vendors (which are all based outside the EU) being able to say ‘no, sorry, we won’t include your certificate’. Ensuring that EU citizens can get certificates signed without having to trust a company that is not bound by the GDPR (for example) is a laudable goal.
Unfortunately, the way that it’s written looks as if it has been drafted by various intelligence agencies and introduces fundamental weaknesses into the entire web infrastructure for EU citizens. It’s addressing a hypothetical problem in a way that introduces a real problem. I’m aware that any sufficiently advanced incompetence is indistinguishable from malice, but this looks like plain incompetence. Most politicians are able to understand the danger if US companies can act as gatekeepers for pieces of critical infrastructure. They’re not qualified to understand the security holes that their ‘solution’ introduces and view it as technical mumbo-jumbo.
Ensuring that EU citizens can get certificates signed without having to trust a company that is not bound by the GDPR (for example) is a laudable goal.
As far as I am aware, the real reason for this is to facilitate citizen access to public services, using certificates that actually certify to citizens that they are in fact communicating with the supposed public organization.
EU would be served well by a PKI extension that would allow for CAs that can only vouch for a limited set of domains where the list of such domains is public and signed by a regular CA in a publicly auditable way.
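X.509 already has machinery close to the first half of that: name constraints on the CA certificate itself. A sketch with pyca/cryptography (the CA name and domain are invented) of a self-signed CA that can only vouch for one subtree; the publicly auditable signed domain list would still be the new part:

```python
from datetime import datetime, timedelta, timezone

from cryptography import x509
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.x509.oid import NameOID

key = ec.generate_private_key(ec.SECP256R1())
name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "Example Member-State CA")])

cert = (
    x509.CertificateBuilder()
    .subject_name(name)
    .issuer_name(name)  # self-signed for the sketch
    .public_key(key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(datetime.now(timezone.utc))
    .not_valid_after(datetime.now(timezone.utc) + timedelta(days=365))
    .add_extension(x509.BasicConstraints(ca=True, path_length=0), critical=True)
    # The constraint: this CA may only issue for gov.example and its subdomains.
    .add_extension(
        x509.NameConstraints(
            permitted_subtrees=[x509.DNSName("gov.example")],
            excluded_subtrees=None,
        ),
        critical=True,
    )
    .sign(key, hashes.SHA256())
)
```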
Or even simpler, they could just define a standard of their own with an extra certificate alongside the regular certificate. Then they could contribute to browsers some extra code that loads a secondary certificate and validates it using a different chain when a site sends an X-From-Government: /path/to/some.cert header, and displays some nice green padlock with the agency name or something.
Looking at how misguided this legislation is, I’m not sure these agencies are from any European country. This level of incompetence feels utterly disappointing. Politicians should be obliged to consult experts in the domain that a given piece of new legislation affects. I also don’t understand what’s so secret about it that justifies keeping it behind closed doors. It sounds like the antithesis of what the EU should stand for. Perfect fuel for the people who like to call the EU a second USSR and similar nonsense.
Looking at how misguided this legislation is, I’m not sure these agencies are from any European country
It depends. Several EU countries have agencies that clearly separate the offensive and defensive parts and this is the kind of thing that an offensive agency might think is a good idea: it gives them a tool to weaken everyone.
Politicians should be obliged to consult experts in the domain that a given piece of new legislation affects
This is tricky because it relies on politicians being able to identify domain experts and to differentiate between informed objective expert opinion and biases held by experts. A lot of lobbying evolved from this route. Once you have a mechanism by which politicians are encouraged to trust outside judgement, you have a mechanism that’s attractive for people trying to push an agenda. I think the only viable long-term option is electing more people who actually understand the issues that they’re legislating.
I also don’t understand what’s so secret about it that justifies keeping it behind closed doors. It sounds like the antithesis of what the EU should stand for
The EU has a weird relationship with scrutiny. They didn’t make MEPs voting records public until fairly recently, so there was no way of telling if your representative actually voted for or against your interests. I had MEPs refuse to tell me how they voted on issues I cared about (and if they’d lied, I wouldn’t have been able to tell) before they finally fixed this. I don’t know how anyone ever thought it was a good idea to have secret ballots in a parliamentary system.
I think the only viable long-term option is electing more people who actually understand the issues that they’re legislating.
This is the sort of thing that an unelected second chamber is better at handling. Here’s an excerpt from an interview with Julia King, who chairs the Select Committee on Science and Technology in the UK’s House of Lords:
You get a chance to comment on legislation because we are a revising chamber. We’re there to make legislation better, to ask the government to think again, not to disagree permanently with the government that the voters have voted in because we are an unelected House, but to try and make sure that legislation doesn’t have unintended consequences.
You look at the House of Commons and there’s probably a handful now of people with science or engineering backgrounds in there. I did a quick tot up - and so it won’t be the right number - of my colleagues just on the cross-benches in the House of Lords and I think there must be around 20 of us who are scientists, engineers, or medics. So there’s a real concentration of science and engineering in the House of Lords that you just don’t get in the elected House. And that’s why I think there is something important about the House of Lords. It does mean we have the chance to make sure that scientists and engineers have a real look at legislation and a real think about the implications of it. I think that’s really important.
I think the only viable long-term option is electing more people who actually understand the issues that they’re legislating.
I don’t see how this could ever be viable given existing political structures. The range of issues that politicians have to vote on is vast, and there just aren’t people who are simultaneously subject matter experts on all of them. If we voted in folks that had a deep understanding of technology, would they know how to vote on agriculture bills? Economics? Foreign policy?
You don’t need every representative to be an expert in all subjects, but you need the legislature to contain experts (or, at least, people that can recognise and properly interrogate experts) in all relevant fields.
I’m not sure if it’s still the case, but my previous MP, Julian Huppert, was the only MP in parliament with an advanced degree in a science subject and one of a very small number of MPs with even a bachelor’s degree in any STEM field. There were more people with Oxford PPE degrees than the total with STEM degrees. Of the ones with STEM degrees, the number that had used their degree in employment was lower still. Chi Onwurah is one of a very small number of exceptions (the people of Newcastle are lucky to have her).
We definitely need some economists in government (though I’ve yet to see any evidence that people coming out of an Oxford PPE actually learn any economics. Or philosophy, for that matter), but if we have no one with a computer science or engineering background, they don’t even have the common vocabulary to understand what experts say. This was painfully obvious during the pandemic, when the lack of any general scientific background, let alone one in medicine, caused huge problems in trying to convert scientific advice into policy decisions.
Because, as with all legislation here, none of the people involved understand the threat model. They believe that obviously this will only do good things, and don’t understand that different places have different ideas of what is “good”. They similarly don’t understand that the threat model includes people compromising the issuer, that given their power these government CAs will be extremely valuable targets, and that this corner of every government is generally underfunded.
Fundamentally they don’t understand how trust works, and why the CA/Browser Forum policies that exist, exist.
I don’t know what’s in the proposed legislation, but the version of eIDAS that was published in 2014 already contains an Article 45 about certificates (link via digital-strategy.ec.europa.eu):
Article 45 - Requirements for qualified certificates for website authentication
Qualified certificates for website authentication shall meet the requirements laid down in Annex IV [link].
The Commission may, by means of implementing acts, establish reference numbers of standards for qualified certificates for website authentication. Compliance with the requirements laid down in Annex IV shall be presumed where a qualified certificate for website authentication meets those standards. Those implementing acts shall be adopted in accordance with the examination procedure referred to in Article 48(2) [link].
I suppose the proposed legislation makes this worse.
Ah, I still remember my high school lecture on networks. “Here is the OSI model. We are interested in L2-L3, rest is meh so don’t worry about it. Here’s how ARP works and here’s IP, UDP and TCP. OK, now let’s move on, it’s time to write our first web application…”
Anyway, I find it more important to teach the underlying concepts beneath it all. Especially serialization and the differences between the following (a small sketch follows the list):
framing (using extra symbols that cannot appear in the payload),
tag-length-value (nesting serial transmissions by prefixing them with length and optionally also a type) and
escaping (nesting serial transmissions by introducing escaping grammar to avoid having to specify length beforehand).
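A minimal sketch of the latter two (the names and byte values are mine; the 0x7e/0x7d pair mirrors HDLC/SLIP-style byte stuffing):

```python
import struct

def tlv_encode(tag: int, payload: bytes) -> bytes:
    # tag-length-value: a 1-byte tag and 4-byte big-endian length up front,
    # so the receiver knows exactly how many bytes follow and the payload
    # may contain any byte values at all.
    return struct.pack(">BI", tag, len(payload)) + payload

FRAME_END = 0x7E  # delimiter that must not appear unescaped in the payload
ESCAPE = 0x7D

def escaped_encode(payload: bytes) -> bytes:
    # escaping: no length known up front; occurrences of the delimiter and
    # of the escape byte itself get prefixed with ESCAPE, and the frame
    # simply ends at the first unescaped FRAME_END.
    out = bytearray()
    for b in payload:
        if b in (FRAME_END, ESCAPE):
            out.append(ESCAPE)
        out.append(b)
    out.append(FRAME_END)
    return bytes(out)
```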
As for the “networks” aspect, I find that the routing theory (who do I forward the message to next) combined with discussion of globally unique, flat namespace addresses (MAC) and managed hierarchical namespace (IP) is mostly sufficient.
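And the punchline of the hierarchical namespace fits in a few lines: forwarding is just longest-prefix match (the table entries here are invented):

```python
import ipaddress

# (prefix, outgoing interface); the /0 entry is the default route.
table = [
    (ipaddress.ip_network("10.0.0.0/8"), "eth1"),
    (ipaddress.ip_network("10.1.0.0/16"), "eth2"),
    (ipaddress.ip_network("0.0.0.0/0"), "eth0"),
]

def next_hop(dst: str) -> str:
    addr = ipaddress.ip_address(dst)
    matches = [(net, port) for net, port in table if addr in net]
    return max(matches, key=lambda m: m[0].prefixlen)[1]  # most specific wins

print(next_hop("10.1.2.3"))  # eth2
print(next_hop("8.8.8.8"))   # eth0
```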
I feel like I live in a different universe than the author and some of the commenters here.
maybe you have low-latency stuff that has to run on a bespoke version of Ethernet designed by your in-house Turing Award winner
Or maybe you just don’t target the US market where your customers can trivially pay for cloud costs with their double to triple absolute purchasing power compared to the rest of the world.
Or maybe you need a lot of egress traffic.
Or maybe you are required to have an exit strategy and thus cannot utilize unique hosted services and instead rely just on IaaS.
Or maybe you don’t want the provider to just delete all your data when you miss a payment or two because a bank decided you should not be getting any money.
Or maybe you are morally opposed to relying on infrastructure provided by companies that unscrupulously exploit their employees/customers/partners only to enrich a handful of already rich people who live on a different continent than you.
Or maybe you don’t like the fact that some of your payments to cloud providers go towards lobbying and (most probably, no direct evidence) corruption in your local government that, among other things, influences what kind of IT is taught in schools.
Or maybe you believe that investing in somebody setting up a bunch of scripts, cluster RAID and Zabbix is an investment in future flexibility. Possibly even building skills that would pay off when you rent some of the cloud capacity. Unlike going in totally blind.
Also, competent sysadmins are not that expensive. And if you don’t feel like you need a whole one, just let them bill you by the hour.
I feel like I live in a different universe than the author and some of the commenters here.
I feel like the author wrote a pretty reasonable reply/counter-argument to arguments that were being made by DHH and folks who agreed with him, which primarily were about cost and staffing tradeoffs.
It seems like you want to have an entirely different sort of discussion. In which case perhaps you’d like to write your own post about it, focusing on the things you’d like to focus on?
I understand the post as concentrating on the technical/cost side. If you have specific business, ethical, or other requirements then of course they will trump other aspects, but that’s… a different scope of the discussion? The entry point for the post seems to be “we can use the cloud or on prem, or we’re considering switching” which most of your situations won’t even arrive at.
I honestly don’t know. Maybe I am just too old (in my thirties), but I feel that people shouldn’t concentrate on the short-term monetary aspects of things and should instead take a step back to consider the broader situation. It seems that e.g. programmers with negative LOC who mentor the young are celebrated… Maybe we should ask ourselves whether we can vote with company coffers to have better ecosystems? Since we are willing to spend on negative-LOC mentor people, it wouldn’t be such a reach to maybe hire one more member of the team and then use the extra time to get everyone just a bit more comfortable around RAIDs and NICs. It’s not exactly rocket science. And who knows, maybe next time you need to store a lot of data and then slowly chew on it, with users on a single continent only, you might be able to save enough to pay it off and then some. But it also means having some of those physical servers housed and managing them as a part of your routine.
Ignoring the ideological stuff, the problem I’ve encountered with sysadmin + hosted infrastructure is that the sysadmin team takes a long time to get what you need. It takes weeks or a month to get a VM provisioned. I’m guessing my company just hired the wrong sysadmins or something and the good ones would have essentially built out our own internal cloud (complete with self-service provisioning, IAM, billing, monitoring, logging, etc) for cheaper than what the cloud providers charge, but it’s nice to be able to go with Amazon or Google and know that what I’m buying will work (rather than rolling the dice on a sysadmin team) and moreover, those cloud environments are well documented and well-trodden–a company can hire developers with experience in those clouds rather than relying on internal training to get people up to speed (sort of the inverse of your last point). I also know that I can get started now rather than waiting for a team to finish the buildout of our datacenter.
sysadmin team takes a long time to get what you need
That’s what usually happens for sysadmin teams that take care of internal infrastructure of a non-tech organization. They tend to be poorly financed and poorly run. Thus they tend to be understaffed or staffed with people who don’t really care about the job nor expanding their own skills. Outside and internal pressure to procure specific systems prevents all real architecture and pressure to cut salaries prevents hiring actual developers and growing systems that match the needs of the organization. So everybody just gives up and becomes a middleman who dusts third party systems, removes spyware and gets a bonus for making sure that correct people get the most recent iPhone or something along those lines.
but it’s nice to be able to go with Amazon or Google and know that what I’m buying will work
If you worked in my (imaginary) company of 50+ people and you provisioned some stuff outside our perimeter, without taking care to integrate identities, document data recovery and exit strategy, I would order you to de-provision the service and try again. If you did it a second time, I would fire you for endangering the organization.
It’s actually hard work to make sure the organization stays resilient long-term, and it’s usually people too eager to “just get this thing out the door” who ignore minor stuff like “If a user requests we forget about them (with a bunch of additional expletives, on a support call), how does our new shiny cloud-based messaging service know not to contact them anymore so that they won’t report us to the authorities for ignoring GDPR?”
It takes weeks or a month to get a VM provisioned.
That’s just sad. I mean, you don’t even need an internal cloud; all relevant virtualization solutions have self-service capabilities. But it’s still seen pretty often, especially when there is some kind of internal billing going on and someone has to ACK the expense.
Anyway, I don’t think that your organization is relevant to the discussion. Or the discussion to your organization. At this point, it’s probably no longer even capable of making competent technical or financial decisions. It lives off inertia and, unless it is publicly funded or a monopoly, it will eventually dissolve. Otherwise it will just pass the costs on to the public and everyone involved (and not blind) will skim as much as possible, speeding its demise.
I believe that the discussion is about organizations that are able to define and meet their technical targets and are striving for continuous improvement.
Finally finishing the first part of my blog series about implementing a small bytecode interpreter for arithmetic expressions. Also fixing bugs in said interpreter’s parser, which I just finished today.
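(Not the actual code from the series, just the idea in miniature: such an interpreter usually boils down to a small stack machine.)

```python
# Three made-up opcodes are enough for arithmetic.
PUSH, ADD, MUL = range(3)

def run(code):
    stack = []
    for op, arg in code:
        if op == PUSH:
            stack.append(arg)
        elif op == ADD:
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == MUL:
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
    return stack.pop()

# (2 + 3) * 4, as the parser might compile it
print(run([(PUSH, 2), (PUSH, 3), (ADD, None), (PUSH, 4), (MUL, None)]))  # 20
```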
Also working on my university paper titled “Modern Algorithms for Garbage Collection - Outlining modern algorithms for garbage collection on the examples of Go and Java”. I just finished the overview of garbage collection and the problem garbage collection solves.
Holding a whole-day hobby group meeting on Saturday.
Teaching some kids to make weird noises by feeding aplay numerical sequences generated upon key presses. Sequences include “X samples 0, X samples 255” and “going from 0 to 255 in multiple steps”.
I am optimistic that we’ll even manage to “keep a bunch of samples around and loop through them”, maybe even with a little bit of “average every sample with the one before it”. That’s gonna throw them for a loop for sure.
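For the curious, the whole exercise fits in a few lines; a sketch of mine (assuming aplay’s defaults of unsigned 8-bit mono at 8 kHz):

```python
import sys

X = 10  # half-period in samples; at 8000 Hz this is a 400 Hz square wave
square = ([0] * X + [255] * X) * 400  # "X samples 0, X samples 255"

# "average every sample with the one before it": a crude low-pass filter
# that audibly rounds off the square wave's edges.
smoothed = [square[0]] + [(a + b) // 2 for a, b in zip(square, square[1:])]

sys.stdout.buffer.write(bytes(smoothed))  # python3 tone.py | aplay
```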
My sister-in-law asked me to fix this cheap baby keyboard for her daughter, so I made sure to replace the board completely and throw in a decent DAC, and apart from making it sound a little bit more like an actual piano (as opposed to a tortured cat), I have included some domestic sounds.
Babies should learn to recognize fun sounds, right? Spooked hen, rooster, frog, broken glass, barking dog and so on.
There is a hidden menu to adjust the volume and toggle the allowed sound fonts, though. I am not a monster.
I would put off the actual programming until they are at least 9-10 unless they are already invested. Or get them some kind of retro hardware with BASIC or something.
I would put off the actual programming until they are at least 9-10 unless they are already invested.
I didn’t really know what programming was and had very limited exposure to computers when I was 7, but my school did a few programming lessons in BASIC and Logo then and, well, the results are pretty obvious. 9-10 feels like leaving it a bit late, especially with the increased exposure to computers that children have now.
I really can’t say. I would probably just test the waters with SHENZHEN I/O, EXAPUNKS and Human Resource Machine and, if it works out, proceed with something more serious. Especially with today’s kids’ attention spans.
especially with the increased exposure to computers that children have now
I think the big difference between now and when I was a child is that children are a lot more aware of what computers can do. I taught programming to people a few years younger than me when I was a teenager and most of them had barely touched computers, and where they had, they were primarily just using them as glorified typewriters. A few had used game consoles but these were not really perceived as computers. Making the computer respond to inputs and do something with them, even writing very simple games, was outside their experience. Now, they’ll have seen this kind of thing on the web, and things like Scratch give you an easy way of starting to build your own.
This bit in the article is quite unfair though:
When it became apparent that computers were going to be important, the UK Government recognised that ICT should probably become part of the core curriculum in schools. Being a bunch of IT illiterates themselves, the politicians and advisers turned to industry to ask what should be included in the new curriculum.
I am not normally one to defend the Tories, but this is one of two things that Margaret Thatcher deserves credit (rather than blame) for. They defined a set of criteria in the ‘80s that allowed schools to get central-government funding towards a computer. Among them were things like support for programming environments with structured programming. As part of this initiative, the BBC created a load of teaching materials, including lessons that could be recorded and played for the class and source code that was broadcast over teletext. It wasn’t until the mid ’90s that all of this work was undone.
Schools naturally searched long and hard for appropriate office software to teach with, and after much care they chose Microsoft Office
Actually, most of them chose MS Works, because even the subsidised version of Office was too expensive.
So since 2000 schools have been teaching students Microsoft skills
Since this article was written, the Computer Science GCSEs and A-Levels were introduced. Unfortunately, this has been a step back for inclusion because very few schools have teachers that are able to teach them and so don’t offer them. Mostly, fee-paying independent schools are the only ones that can provide them and so people entering computer science degrees from such a school have an advantage over ones from state schools.
The computers access the internet through proxy servers that aggressively filter anything less bland than Wikipedia
I particularly remember the school where my mother taught in the late ’90s using the number of instances of the letter X on a page as a filtering rule. Anything related to UNIX, Linux, or X11 was blocked.
A hundred years ago, if you were lucky enough to own a car then you probably knew how to fix it.
I think this is a dangerous analogy because it understates the flexibility of the systems. Fixing a car is just that: returning it to its original state. If you modify a car, you might make it a bit faster or a bit more responsive, but you’re not fundamentally changing what it can do. This is not true of computing. A computer can do an unbounded number of things and if you’re using it only to do things that someone else has told it to do then you’re losing a lot of the power of the device. When I was 11, the head teacher at another school said something that stuck with me:
In your lifetime, society will be split into two tiers, the people that program computers and the people that use them.
In a reference to the Butlerian Jihad, Frank Herbert wrote (paraphrasing slightly from memory):
By giving power to machines, they gave power to those that programmed the machines.
I have tried very hard during my professional life to avoid building the kind of society that these people warned me about as a child.
Thanks for engaging so extensively. I also disagree with several points the author makes, but overall I also feel that there is not much going for the implication that young people nowadays are more exposed to computers and thus are either more competent with them or more interested in learning to program them.
Among the regular young folk (with non-intellectual parents), many are being actively prevented from spending “too much time in front of the damned computer” only to be allowed unlimited time with their phone, which is a practice I find absolutely, mind-bogglingly idiotic, to put it mildly. The “I am just a user” mentality the author also mentions (albeit with less flattering words) is also extremely widespread, even among otherwise intelligent and curious people of all ages.
There have been young people in my family and among my friends who literally said to me that they “don’t want to be eggheads like me” when I suggested that natural sciences are fun and we might perhaps do something a little bit more intellectual. Then they returned to their Instagram feeds on their iPhones. I have celebrated when we managed to get one of them interested enough to take a picture of Venus through a telescope at a nearby observatory. Anyway…
Here in Czechia, what happened when the government started getting interested in teaching IT was exactly what the author wrote. Microsoft literally wrote the curriculum, then schools got funds to buy computers to teach with, and they all installed a Microsoft AD domain, locked everything down (down to the right click on the desktop) and then taught Microsoft Office. The only companies able to help the schools get the funds were Microsoft Partners.
I have been scolded by my high school teacher for writing a LAN chat application using Visual Basic for Applications embedded in Excel. We were supposed to be learning to copy a form given on a paper handout using Excel table border formatting. I had originally wanted to show him a game I had written in Visual Basic 6.0 at home, but the system did not allow me to execute an arbitrary executable. The only schools that were not affected were those where parents had donated computers years before the government noticed and there were actually teachers interested in teaching IT.
So I guess it probably differs a lot from country to country and I am glad that you had it better. I was saved by the fact that my stepfather was a programmer and, when I switched schools due to us moving, I was able to attend elective classes taught by a person who ran a small web hosting / server hosting business and was thus able to teach us Linux, PHP, Apache, MySQL and BIND. Nowadays there are “gymnasiums” (general high schools) that actually teach Python, and materials to teach young kids with e.g. Scratch are available as well. More “practical” high schools are hopelessly vendor-locked, though: MSDN Academic Alliance, Visual Studio and so on. And colleges as well. For example, college IP networking is basically a Cisco course with a Cisco certification at the end.
That’s partly the reason why I run the hobby group: to make it possible for kids that are actually interested to do something real, using the same tools we use here. Like I was allowed to at my second high school. Outside of the Microsoft and AutoDesk customer funnels.
But I have zero hopes for kids in my extended family to get interested. Less disappointment that way.
Among the regular young folk (with non-intellectual parents), many are being actively prevented from spending “too much time in front of the damned computer” only to be allowed unlimited time with their phone, which is a practice I find absolutely, mind-bogglingly idiotic, to put it mildly.
That’s astonishing. I’ve mostly seen ‘screen time’ lumped together, which still misses a distinction between creative and consumptive uses of glowing machines but at least doesn’t favour the wrong one.
There have been young people in my family and among my friends who literally said to me that they “don’t want to be eggheads like me” when I suggested that natural sciences are fun and we might perhaps do something a little bit more intellectual.
That attitude was prevalent when I was growing up but I think it started to fade around the .com boom, when it started to sink into the collective awareness that there was a strong correlation between geeky and high-earning professions. The downside of that was a load of people learning to program to get rich, with no real interest in the activity itself. Some of them then developed the interest, and at least it let the people who were genuinely interested use it as cover.
I have been scolded by my high school teacher for writing a LAN chat application using Visual Basic for Applications embedded in Excel.
Our official course was much as you describe in the ‘90s. I was incredibly fortunate that my school had an IT technician who was an old-school greybeard (well, his beard wasn’t yet grey, but I’m sure dealing with us would have sent it that way soon). While we were supposed to be learning how to add a column of numbers in Works Spreadsheet, he was showing us how to recover a damaged FAT filesystem in a hex editor. He also encouraged us to try to bypass the restrictions that locked down the school network, and we never got into trouble if we didn’t do any damage.
I’d been programming for a few years by that point. My previous school’s headmaster taught one lesson a week called ‘general’ for each year group. For the 7-year-olds, he taught BASIC and Logo. I think one of the things that made this compelling was that the gap between what a professional game developer could make and what I could make was not that big: you hit the limits of the hardware long before you hit the limits of the programmer’s ability. There was maybe a one or two order of magnitude difference in quality, whereas for a modern AAA title it’s a lot more. I do wonder about starting with something like Godot now though: there’s a real programming language and making something like Candy Crush is well within the grasp of a single person.
Yeah I was thinking more along the lines of a simple setup he could type numbers or stuff into and get sounds out. Just start on the idea of “I can make the machine do what I tell it to”.
Ah, I see. Well learning to use keyboard and making the association “keyboard = making the computer do what I want” early would be definitely good. :-)
I am not sure if it’s a realistic goal, though. It usually ends up being “whoa, there is a whole different universe to explore”. Computers don’t really do “what we tell them to”, or do they for you? At best, after many years of struggling, you end up being able to make some fairly good compromises. :-)
I always promise myself to do just that and then go on to another project. Sorry for that. But thanks for the ask, it might just make me write it down sooner.
Great post! However, I am not sure how helpful it is for normal users to think about ECC DRAM.
Just curious: for me (a student and researcher), if I have some budget, adding more DRAM and a more powerful CPU will be useful, but is changing to ECC DRAM a better option? In other words, are memory errors today at the point where a normal user should worry about them?
Depending on who you ask (e.g. Torvalds; note that ECC won’t help you with modern rowhammer attacks), everyone should just have it, and it shouldn’t be something special. If you’re going for a home server and don’t want it to eat up your data by chance, then go for it. Stuff like checksumming + compressing filesystems (ZFS) can give a bad experience if your RAM does fault.
Other people say “never got a problem, don’t care”. So who knows, flip your coin ;)
Personally: Use it on stuff that behaves like servers (data security, uptime, weeks of no human intervention..).
If you notice GCC randomly crashing it might be useful to run a memtest (that is how I noticed many years ago).
But TBH ever since I started using DRAM with heatsinks I never had that problem, even without ECC.
DDR5 does actually have some ECC built in, but not true/full ECC.
As I recall, it has fewer bits, but the big difference is that (without ECC DIMM support) it doesn’t give you the error reporting. There are a few different kinds of failure:
Error is detected and corrected. Most things just expose this as a counter (on Linux, via the EDAC sysfs files sketched after this list) where you can look at it and say ‘ah, good, ECC is working’, but some systems will track pages that get these errors and will mark them as dodgy after a while and stop using them. This can still happen without ECC, you just don’t get the counter.
Error is detected but not corrected. This is where ECC starts to really matter: enough bits have been flipped that you just have to treat the line as nonsense. An OS might be able to recover from this (hopefully the error isn’t in kernel code!) but it’s typically hard. A result from memory is nonsense, what do you do? Maybe kill the process that owns the page? Without ECC, I don’t think DDR5 can report this and so it looks like the next condition:
Error is not detected. This happens with low probability with ECC, but now you have random corruption.
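To make the first case concrete: on Linux those counters surface under the EDAC sysfs tree; a small sketch (the paths only exist where ECC reporting is wired up):

```python
from pathlib import Path

# One directory per memory controller: mc0, mc1, ...
for mc in sorted(Path("/sys/devices/system/edac/mc").glob("mc*")):
    ce = (mc / "ce_count").read_text().strip()  # corrected errors
    ue = (mc / "ue_count").read_text().strip()  # uncorrected errors
    print(f"{mc.name}: corrected={ce} uncorrected={ue}")
```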
The surprising thing is how much everything works with memory errors. I saw a demo some years ago of inducing bit flips to try to get to a JVM escape by holding a hairdryer to the RAM chips. I think you needed something like a 1% error rate to get random crashes: a huge amount of memory is full of buffers of data, strings, and so on, where the program works fine(ish) if they are slightly corrupted. If you corrupt the low bits of a code pointer, you often just miss some stack spills and that might work for a while. If you corrupt the high bits, you probably crash. If you corrupt the low bits of a data pointer, you may find that it points into a string or array rather than to the start but, again, things are likely to keep working.
The danger here, of course, is that the things that are ‘working’ are giving subtly wrong answers and may keep doing so for a long time.
I think that DDR5’s ECC works on the module level. Namely it will fix errors before they are transmitted to the CPU.
The classic ECC takes care of errors generated on the signal between the RAM and the CPU, in addition to errors at the RAM module level.
DDR5’s on-die ECC, to my knowledge, is there so that manufacturers can cut costs with RAM that produces errors every so often.
Consumer-grade RAM has intense competition – people are just not willing to pay that much for it. But what if you do pay more for a higher-quality product, especially one your livelihood depends on?
Thank you! This is helpful and aligns with what I thought — long-running jobs will benefit from ECC DRAM and daily activity should be fine (I read somewhere that Windows OS crashes sometimes due to bit errors, but not sure how often) :)
For me, who falls into a similar category of use, I go with ECC RAM whenever possible. That means almost always, almost everywhere.
It’s not because of a paranoia over bit flip errors, though. Those do happen, and regularly enough (with big enough implications) that you really should care.
But the real reason I always go with ECC RAM is because if you stick to workstation or server platforms from a generation or two behind, you’ll actually end up paying less for larger quantities of ECC RAM than you would for even the simplest RAM from the current generation.
Using hardware 10+ years old is probably too far for most people, but I built a dual Xeon X5690 server with 128 GB of ECC RAM a year ago for less than $250 for everything…
Old hardware tends to have a worse computation-per-watt ratio, and it does not really pay off. I remember, a couple of years back, turning off some ancient servers running a test environment and replacing them with a couple of VMs, and then having to reconfigure the air conditioning because the server room got too cold for some of the equipment.
And with today’s energy prices (especially in Europe), I have probably paid more in electricity than the actual price of the Ryzen 2700 based server. Remember, in a data center you pay for the electricity twice: once to run the machines and once to cool them.
I was pretty comfortable paying $360 for ECC RAM that’s in the device I use for my livelihood. Probably wouldn’t pay that much for my gaming PC.
Though one advantage of ECC is that you can overclock it more easily, especially since the base spec of the RAM I bought is e.g. 5600MHz while the base spec of most consumer DDR5 RAM is 4800MHz and you have to pump it up with XMP/EXPO.
I wanted to try this when Zen 4 came out.
It just seemed like a big project.
You are way braver than me to try to get this working.
I don’t even know how that other person easily shorted one of the DRAM data lines. That technique is a quick way to burn out one of the drivers and brick a CPU.
I ended up just getting an older Dell Xeon Skylake server.
One other note. DDR5 does do some onboard ECC, but it is not exposed.
It would be great if AMD officially supported this stuff.
Then don’t short it. Add a stub made from twisted pair with one wire to GND and the other to the pin to disturb. The signal will bounce back from the end of the stub, creating a notch filter. You should be able to hit the notch sooner or later and corrupt some bits. Simulation.
As for the method, they probably squeezed the wires along the DRAM inside the socket.
… by being funded via a heavy store tax from an eternally buggy mess of a proprietary app store whose main value add is their network effect, set of sketchy engagement APIs, DRM (as in ‘Digital Rights Management’ or ‘corporate sanctioned malware’ depending on your optics) mainly selling proprietary software.
Its main reasons for ‘contributing’ being part of a risk management strategy for breaking away and more directly competing with Microsoft, empowering specifically those FOSS projects that fit their narrative and promoting an architecture that is close to a carbon copy of its eventual end-game competitor. This time with more anti-cheat making its way in, should there be sufficient traction.
It is the Android story again on a smaller scale. How did that turn out last time? How many of the ‘contributions’ failed to generalise? Or is it different this time because Valve is good because games? Colour me sceptical.
I think Valve as a company has a lot of problems (though the DRM is pretty mild and one of their lesser problems tbh) and the Steam Deck iffier of a product than people make it out to be, but they’re actually being a good citizen here. Yes, they’re funding the things relevant to them out of self-interest (i.e. case-insensitive FS, WaitForMultipleObjects API clone, HDR, Mesa work, etc.), but they’re working with upstreams like the kernel, Mesa, and freedesktop.org to get the work merged upstream, be properly reviewed by the people that work on it, and be usable for everyone else. Android never worked with upstreams until maintaining their own things in an opaquely developed fork became unsustainable.
(Sometimes I think they might be using a bit too much commodity - the Arch base of SteamOS 3 seems weird to me, especially since they’re throwing A/B root at it and using Flatpak for anything user visible…)
You only need mild DRM and the DMCA for the intended barrier to entry, suppressive effect and legal instruments; anything more exotic is just to keep the hired blackhats happy.
If I’m going to be a bit more cynical - they are not going the FreeDesktop/Linux instead of Android/Linux route out of the goodness of their hearts as much as they simply lack access to enough capable system engineers of their own and the numbers left after Google then ODMs then Facebook/Meta then all the other embedded shops have had their fill aren’t enough to cover even the configuration management needs.
Take their ‘contributions’ in VR. Did we get even specs for the positioning system? Activation / calibration for the HMD? Or was that a multi-year, expensive reversing effort, trying to catch up and never quite getting there, just to be able to freely tinker with hardware we paid for? That was for hardware they produced and sold at a time when open source wasn’t a hard sell by any means.
Did we get source code for the ‘Open’VR project that killed off other, actually open ones? Nope, binary blob .so:s and headers. OK, at least they followed the id Software beaten path of providing copyleft versions of iterations of the Source engine so people can play around with ports, explore rendering tech, etc.? Nope. If you have spotted the source on the ’hubs, it’s because it’s a version that was stolen/leaked.
Surely the lauded Steam Deck was sufficiently opened and upstreamed into the kernel? Well, not if you include the gamepad portions. It’s almost as if the contributions hit exactly that which fits their business case and happens to feed into the intended stack, and little to nothing else. Don’t anthropomorphise the lawnmower and all that.
To me, it looks like Valve wanted to make a gaming console, and used Linux as a way to pull that off. If you’d told me 25 years ago that you’d be able to play Windows games on a Linux machine that was handheld, I’d have been blown away. To me it still seems almost miraculous. And they’re doing this while (as far as I know) fulfilling their obligations to the various licenses used in the Linux ecosystem.
Does the fact they’re doing this for commercial gain invalidate that?
I don’t know what you expected. They’re almost certainly not going to give you the crown jewels used to make the headset, but all the infrastructure work is far more useful as it benefits everyone, not just the people with a niche headset.
They patented and own the hardware and the binary blobs, and sold the hardware devices at a hefty price tag. I’d expect an average FOSS participant to integrate with and reinforce existing infrastructure, not to vertically integrate a side band that locks you into their other products.
I’ve owned a Steam Deck for about a year now. No idea why people have a problem with the size. My kids play on it and don’t complain, and we do have a Nintendo Switch (smaller) to compare. Sure it’s bigger, but I don’t think it’s a big deal. On top of that, with a dock it works perfectly as a home console system, so the size matters even less.
I really enjoy it and recommend getting it if you’re thinking about it.
I don’t like the size and ergonomics. But I’m in the minority on that one; people with big hands especially seem to love it.
There are more abstract concerns regarding its place as a product (is it a PC or a console? whichever is more convenient to excuse a fault), but otherwise the device is a pretty good value. I just don’t game that much, and when I do, it’s a social thing.
It might have a problem depending on what input method you use. I have average sized hands and I like to use the trackpad for FPSes and strategy games. For the FPS case, reaching the trackpads and reaching the face buttons gets really annoying. You can map the grip buttons to the face buttons, but then you’re losing out there.
Even with big, piano-friendly hands the Steam Deck ergonomics are hard. If you don’t have a big toy budget, testing someone else’s is highly recommended. I mostly use my three Steam Decks for various debugging / UI / … experiments (nreal air glasses + dactyls and the deck tucked away somewhere). If Asus were able to not be Asus for 5 minutes, the ROG Ally would have been an easy winner for me.
I hacked mine to run Linux as I’ve done with all other devices I’ve used throughout the years; it didn’t boot Windows once. As for their ‘intentions’: whatever laptops I have scattered around all came bundled with Windows ‘intended’ to be the OS used.
DSDT fixes and kernel config to get the Ally working was less effort than I had to do to get actual access to the controllers and sensors on the Steam Deck.
Fair enough. I admin RHEL for my dayjob and use Arch on my laptop; when getting the SD I knew I’d keep it more appliance-y rather than getting into a fully custom OS. I just wanted something to play Persona 5 on in bed.
It’s heavy. Significantly heavy. It took me a while to figure out how to use it in a way that didn’t quickly give me wrist fatigue/pain, and even now it’s not perfect.
Also Valve’s Deck Verified program is very flawed. It’s quite a bit better than nothing, but it’s flawed. The biggest (but not only) problem IMO is that a game that has a control scheme not optimized for controllers - but still fully supports controllers - can be marked Verified. As an example, Civ V and Civ VI both basically just use the trackpad like a mouse, and the other buttons have some random keybinds that are helpful. Now, those are basically keyboard-and-mouse games… so to a certain extent I totally get it. But I should be able to click into a list of things and use the joysticks or D-pad to scroll down the list. I can’t. Instead, I have to use the trackpad to position the cursor over the scroll bar, then hold right trigger, then scroll with my thumb. This is extremely unergonomic.
It’s heavy. Significantly heavy. It took me a while to figure out how to use it in a way that didn’t quickly give me wrist fatigue/pain, and even now it’s not perfect.
Right, it’s really chunky - I might use it more if they had a mini version. The only use for the portability is at home (i.e. on the porch). It’s not small enough that I’d want to carry it in a bag on a commute, around town, or while waiting for someone, and if I’m on vacation, the last thing I want to do is play video games instead of touching grass or spending time with people. If I really want to play a game, I’ll probably use the laptop I have (even if that restricts the choice of game, because I have a Mac…). Again, I’m not that much of a gamer, so it’s different values I guess.
I have normal-sized male hands and my girlfriend has relatively small hands, and it works very well for both of us. She was actually surprised how ergonomic the Steam Deck is given the size. Other than that I have only gotten positive reactions to the ergonomics.
Right now you can install Steam on your regular desktop Linux system, throw in Lutris to get games from other stores and you are good to go. This has been so far the best year to turn your family into Linux users yet.
It is far from ideal, but still a great improvement. And if we manage to get up to – let’s say – 10% penetration in EU, this is going to help immensely to combat mandatory remote attestation and other totalitarian crap we are going to end up with if Microsoft, Apple and Google keep their almost absolute dominance.
I appreciate that both this comment and its parent make good points that are not necessarily in conflict. I would distill this as a call for “critical support” for Valve, to borrow a term from leftist politics.
I have to say that I have had far more luck managing non-Steam game installs within Steam (you can add an entry to it, with a path, and it will manage it as a Proton install if you’d like; you basically just use Steam as a launcher and Proton prefix manager) than via Lutris.
My opinion of Lutris is that it is a janky pile of hacked-together non-determinism which was developed over a long period of time over many, many versions of Wine, and over many, many GPU architectures and standards, and long before Proton existed… which miraculously may work for you, although often will require expert hand-holding. Avoid if you are new to Linux.
Their improvements to Proton/Wine have made it so I could go from loading Windows once a day to play games to loading it once a month to play specific games. Like all other for-profit companies their motives are profit-driven, but so far they are contributing in ways that are beneficial and compatible with the broader Linux ecosystem. Unlike Microsoft, Oracle, Google, and Amazon, they don’t have an incentive to take over a FOSS project; they just don’t want to rely on Windows. But we should always keep an eye out.
Getting games to work by default on Linux also makes it much easier for people interested in Linux to try it out and people not interested to use it when convenient, which is a win in my book.
Did you look at the slides or watch the video of the talk? All their contributions are upstreamed and applicable to more than just their use case. Everything is available on Arch as well (before SteamOS was released they actually recommended Manjaro, because they are so similar). You can use Proton for the Epic Games Store or other Windows apps. Of course they are doing this in self-interest, but according to Greg Kroah-Hartman and a lot of other kernel maintainers this isn’t a bad thing.
The Steam Deck is the first “real” consumer Linux computer that has sold over a million units. I hope more Linux handhelds are released in the coming years :)
The obvious irony here is that it is in Valve’s best interest for their stuff to be upstreamed. It’s not like they can fork KDE (for example, since it’s used by SteamOS) and maintain & support their own fork.
Oooooh, that’s very cool! I was not aware. Even better reason to give it a go. :-)
I distinctly remember the fun moment when I finally rebooted into a working system to realize that the only tools I have available to “get online” and build further are ping, telnet and ftp.
On the other hand, this might be a risky endeavor for you. Since you like building such alternatives, you might be tempted to cook yourself a custom package manager. And that’s a deep rabbit hole to fall into. I mean, trying to automatically detect shell script dependencies…
There’s a related issue: my favourite bit of productivity research shows that changing your process typically results in a measurable productivity improvement (usually 10-20%, depending on the study you read). This gradually fades but it’s one of the main reasons that it’s easy to sell management consultancy: you come in, you change something, you measure the output, and hurray, things are better. Then you come in a year later and change it back and demonstrate improvement again!
Also known as “if you want to lose weight, get on a diet”. Literally any diet will work. You just need to pay attention to what you eat for a while to see an improvement.
If you want to sustain the benefits of a good process, you just need to sustain the discussion about the process. Some organizations (in drug manufacturing for example) actually gather feedback on how things are done and how well it works to update the processes.
It’s pretty easy to do in most orgs, simply because you usually can’t directly measure actual productivity. Just pick an easily tweaked metric, bump it, everybody pats each other on the back, repeat in a year or two.
I also remember reading that, and like you it stuck in my mind. I haven’t been able to find the original paper since though - do you happen to have the reference?
I am self hosting Gitea + Woodpecker + Registry along with a webhook endpoint to redeploy my podman containers and it’s working fairly well. It’s nowhere near the GitLab level of sophistication, but it’s comparably lightweight and I feel more confident in my ability to troubleshoot it.
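The webhook part is genuinely small. A rough sketch of a setup like mine (the image name, port, unit name and secret are all made up; Gitea does send an X-Gitea-Signature HMAC-SHA256 header, but verify the details against your version’s docs):

```python
# a minimal redeploy hook: verify the HMAC, then pull + restart.
import hashlib
import hmac
import subprocess
from http.server import BaseHTTPRequestHandler, HTTPServer

SECRET = b"change-me"  # must match the secret configured on the Gitea webhook

class Hook(BaseHTTPRequestHandler):
    def do_POST(self):
        body = self.rfile.read(int(self.headers.get("Content-Length", 0)))
        want = hmac.new(SECRET, body, hashlib.sha256).hexdigest()
        got = self.headers.get("X-Gitea-Signature", "")
        if not hmac.compare_digest(want, got):
            self.send_response(403)
            self.end_headers()
            return
        # pull the freshly pushed image and bounce the container unit
        subprocess.run(["podman", "pull", "registry.example/myapp:latest"], check=True)
        subprocess.run(["systemctl", "--user", "restart", "myapp.service"], check=True)
        self.send_response(204)
        self.end_headers()

HTTPServer(("127.0.0.1", 8099), Hook).serve_forever()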
I’m not a fan of either format, but I strongly prefer working with XML. I’d even go so far as to say it doesn’t make sense to compare them, because YAML doesn’t solve the same problems as XML. Does YAML have schemas? Does it have XPath? Does it have XSLT? Does it have DTDs? Does it have namespaces?
And aside from tooling and programmatic access, the editor support for XML is a lot better than YAML’s, too.
YAML feels like an ambiguous version of JSON when I have to use it - anything goes as long as you remember all of the weird rules and corner cases.
Every time I have to work with XML I end up stripping away the namespaces as the first step. As for XSD, they frequently do not match the data. XSLT is horrible. Even imperative code is a more readable and flexible transformation tool than that. XPath is likely the only interesting piece of the stack.
If you don’t need the tools XML provides then it’s just added complexity and you should use something else - or pick and choose specific parts to use. But when you do need those features then it’s nice to have a standard way of doing it across programming languages and platforms.
YAML doesn’t have the tools - at least not built-in and standardized - so if you need them YAML isn’t an option.
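For a taste of the standardized-tooling argument: even Python’s stdlib speaks a useful XPath subset, portable across documents (the document contents here are invented for the demo):

```python
# querying any XML the same way, no schema or third-party library needed
import xml.etree.ElementTree as ET

doc = ET.fromstring("""
<catalog>
  <book lang="en"><title>Dune</title></book>
  <book lang="cs"><title>Valka s mloky</title></book>
</catalog>
""")

# ElementTree understands a subset of XPath expressions
for title in doc.findall("./book[@lang='en']/title"):
    print(title.text)  # -> Dune
```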
As for XSD, they frequently do not match the data.
It’s like saying “the compiler gives a bunch of errors about my code”…
All programming languages I remember would interpret decimal literal the very same way. JSON parser would too. What’s the problem?
You just have to always quote everything.
No, you have to always quote string literals (unless they are known not to be literals of different types, such as boolean Yes/No) and never quote numbers, booleans, nulls nor dates.
Also the YAML specification has all these features that nobody ever uses, because they’re really confusing, and hard, and you can include documents inside of other documents, with references and stuff…
References make life simpler when writing such documents by hand, though. I’ve written JSON Schema documents in YAML because it’s simpler that way.
So while being like 80% there and 20% kinda sucky for a lot of hand-kept data, sometimes I just don’t have the time nor willpower to repeat myself writing JSON nor roll a custom DSL. So YAML it is.
All programming languages I remember would interpret decimal literal the very same way. JSON parser would too. What’s the problem?
The problem is being loose with types. 1.20.1 is autocoerced to a string, 1.20 is a number. It makes it exceedingly easy to do the wrong thing, especially if you’re just editing a preexisting configuration. IMO, any time you’re placing additional burden on a human to understand and navigate unexpected behavior, that’s a problem.
JSON has its own problems, but at least in this limited case you aren’t going to make a two-character content change and have your datatype completely altered.
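A quick demonstration of that footgun, assuming a YAML 1.1 parser like PyYAML (this is the classic “Norway problem” family):

```python
import yaml  # PyYAML, assumed installed

print(type(yaml.safe_load("v: 1.20")["v"]))    # <class 'float'> (parsed as 1.2)
print(type(yaml.safe_load("v: 1.20.1")["v"]))  # <class 'str'>
print(yaml.safe_load("country: NO"))           # {'country': False} under YAML 1.1
```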
No, you have to always quote string literals (unless they are known not to be literals of different types, such as boolean Yes/No) and never quote numbers, booleans, nulls nor dates.
You should, but it’s incorrect to say you must.
`foo: bar` is entirely valid.
So while being like 80% there and 20% kinda sucky for a lot of hand-kept data, sometimes I just don’t have the time nor willpower to repeat myself writing JSON nor roll a custom DSL. So YAML it is.
Seems pragmatic. I certainly haven’t gone out of my way to migrate random YAML files in my older Rails projects.
FWIW, my opinion is that it’s objectively an awful config language, but it’s too late now to stuff that particular horror back into Pandora’s Box.
It sure is an awful configuration language, but for a hand-administered dataset it’s the path of least resistance when you don’t feel up to the task of making your own format.
On the other hand, there has been an enormous amount of developer time spent on format conversions and home grown formats tend to suck even more than YAML and XML.
I was into the right side for quite some time, but in the last couple of years I try really hard to delay splitting the function until there is another caller. Usually it provides better insights into what should be factored out and what should be kept.
That is, only split the different oven drivers when you get the second one.
The other extreme is writing a unified oven driver that decides on minor differences using a bunch of well-placed ifs (a toy sketch below). This ends up being a nightmare with no way to extend support for cool new features of the newer model without making a total mess of things.
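A toy sketch of the two extremes (the oven protocols are invented purely for illustration):

```python
# hypothetical oven protocols, just to show where the seam falls
def send(cmd: str) -> None:
    print(cmd)  # stand-in for the real transport

# split drivers: each model owns its quirks, split once model B actually exists
class OvenA:
    def preheat(self, temp_c: int) -> None:
        send(f"HEAT {temp_c}")

class OvenB:
    def preheat(self, temp_c: int) -> None:
        send(f"SET_TEMP {temp_c}")
        send("START")

# the other extreme: one "unified" driver deciding on minor differences,
# growing another branch with every new model
class UnifiedOven:
    def __init__(self, model: str) -> None:
        self.model = model

    def preheat(self, temp_c: int) -> None:
        if self.model == "A":
            send(f"HEAT {temp_c}")
        else:
            send(f"SET_TEMP {temp_c}")
            send("START")
```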
So in conclusion, it’s probably not the function count that hinders understanding. It’s the total distance jumped, where calls to familiar functions do not count.
“biz > user > ops > dev” is used for “late capitalism”, but perhaps “open source software” would be a better fit, where it can lead to software actually optimized for users.
maybe the two can join forces? vc-paid open source projects with no expectation of becoming a business, producing the best user-focused software.
Perhaps tax-paid would be a better fit.
I think you can and should put user ahead of biz because the user is screwed if the business closes due to lack of funding. The idea is to take enough money to keep the lights on in service to the user.
Reminds me of my friend who works as tech support in a hospital. From her I have gathered two main facts:
Ah, the dichotomy of human existence.
I think you’re being unreasonable toward medical professionals. I think the points could be more succinctly stated as:
People are usually idiots (including IT professionals like us commenting on this)
It should be a capital offense to force quarterly password changes
Yeah, I can get behind that.
In my experience, it is never productive (but possibly cathartic) to label people as idiots.
Computer people often seem to have a hard time understanding this, but just because someone isn’t a computer expert doesn’t make them an idiot. They have better things to do than trying to work a stupid piece of machinery, and as you can read elsewhere in these comments, they have to do so under a tremendous amount of stress and often while sleep deprived.
They might be a “computer idiot”, but by that standard I’m a total idiot in most non-computer fields. And I suspect a lot of us are.
One would think a system could be designed wherein someone whose role requires one simple set of permissions (view and modify one’s own timesheet) would be exempt from the normal security standards of rotating passwords because the permissions of the role simply can’t result in any kind of harmful system breach.
There is absolutely no reason to use passwords in the hospital setting at all. Everyone should just use a card that is unlocked with a PIN code for the next 12 hours or until the person clocks out. Then the person should be able to move freely between the terminals.
I’m smiling a little bit at the history you reminded me of. https://en.wikipedia.org/wiki/Sun_Ray These units had smart card-based authentication and transparently portable sessions. You walk away from a terminal and take your card with you, plug it into a new terminal, and carry on working on whatever you were doing. Virtually zero friction other than network latency sometimes making the UX a bit laggy. In practice it worked really nicely the vast majority of the time.
I used these briefly in Bristol in 2001 through a friend from the Bristol and Bath Linux Users Group, it was really impressive, using tmux sometimes reminds me of playing with those Sun boxes.
Most of the systems are web-based nowadays, it would be trivial to make the sessions portable or even multi-headed.
Apologies that it took me a while to reply. How does web-based make sessions portable? It seems to me, without having put a ton of though into it, that being browser-based makes portability significantly more challenging. You can’t use a shared client environment (i.e. you need a separate Windows/Linux/OSX/whatever account for each user) and you’ll still have to manage security within the confines of a browser environment. I agree that it makes it really easy to access content from everywhere, but you’re also dealing with multi-layer authentication (Windows login, browser login, cookie expiration times, etc).
With a SunRay you could pull your smartcard, walk to the other end of the building, put your smartcard in, and have everything magically pop back up exactly how you’d left it.
Ah, right. I was assuming that at this point the software stack runs whole in the browser and the workstations would be basically kiosks.
Some equipment has a dedicated computer (radiology, basically), but since there usually are only so many operators and the software might not even support multiple system users, a shared OS session would probably work fine in practice.
But yeah, we might just as well launch a container on a shared server for every user and let them run a lightweight desktop over SPICE.
At first blush I read your comment and was pretty concerned about the overall security model, but instead of brushing it off I did a bit of digging.
It looks like ChromeOS could potentially support both what I was proposing and what you’re proposing:
https://support.google.com/chrome/a/answer/7014520?hl=en&ref_topic=7015274&sjid=2940490462416568896-NC
https://support.google.com/chrome/a/answer/10038005
It looks like you can configure ChromeOS to use a smartcard for both login authentication and SSO inside the browser, as well as being able to use HTTPS Client Certificates from the card. It’ll also forward those credentials along to Citrix (gross, but common in healthcare) and VMware for remote desktop sessions. Very cool!
It doesn’t really matter.
There are so many users and their work is so important to us that they should and mostly could afford a solution tailored to their needs even if it meant a custom operating system. Which it most probably doesn’t - as you’ve found out.
Instead we are trying to shoehorn them into environments designed for office clerks which just doesn’t cover all the use cases and will always cause friction.
And it’s not just the login method. A person dear to me who works in the best hospital in my small country caused trouble for the IT department a year or so back. Apparently there is a mandatory field in the patient record. Well, if you don’t know the value at that point in time, you just enter 555555/5555, the doctors figured. So a doctor filed a new patient, my dear person figured there was a duplicate record and the patient wasn’t new after all, so they merged the records and whoops, a couple thousand patients with 555555/5555 got merged together. Yay!
There is a concept from manufacturing described e.g. in 5S, but readily described by any traditionally brought up woman as well. You should declutter, and not just that: you should actively build decluttering infrastructure. That means e.g. making it possible to omit fields that are required from the process point of view, allowing the operator to flag records for review and describe the discrepancies, adding screens for dealing with such records, and so on.
Instead of trying to make it impossible to enter data on behalf of a doctor, which only leads to credential sharing and obscuring of who did what, make it possible and simplest to do under the name of the actual person, and hold them accountable by letting the doctor review such operations, e.g., the next morning.
Also, when there is a terminal to help e.g. carry out a surgery or a scan, or to enter data about the patient at the bed or in the room, you probably don’t want your desktop. You want to log into the context-specific terminal and just access or enter data under your name. So remote desktops might be useful in some cases, but not all of them.
You’d think so, yes.
Basically all have been named already.
In addition, I really appreciate Fedora. It has been my main driver since F13, I believe. 13 years. Wow.
What comes to mind today are neovim and kitty. I cannot imagine going back from either of them.
/me cries in custom SQL generator because Django ORM can’t JOIN and everyone keeps repeating “you must have a wrong problem because Django is the perfect solution”.
It seems to me that Django has painted itself into a corner interface-wise and now any improvement will require major refactoring that will break a lot of production stuff for a majority of users.
Joins are one of my favourite features of the Django ORM. What hasn’t worked for you?
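For reference, the bread-and-butter way joins surface in the ORM - `select_related` plus double-underscore lookups (Book and Author are hypothetical models where Book has a ForeignKey to Author):

```python
# a sketch of everyday joins in the Django ORM
from django.db.models import F
from myapp.models import Book  # hypothetical app and model

books = (
    Book.objects
    .select_related("author")             # emits a SQL JOIN, one query total
    .filter(author__country="NL")         # WHERE clause across the join
    .annotate(author_name=F("author__name"))
)
print(books.query)  # prints the generated SQL, JOIN included
```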
I tend to agree. On the other hand, doing a small project in Flask that quickly gets out of control teaches you why you want to use the MVC patterns, etc. Once you have internalized the patterns, you don’t need Django anymore, but it is helpful to have a place to learn it from and trying to go it alone and failing also gives you the motivation to learn it.
One way to think about it is when to use which in the learning process. For a student doing independent study in university, Flask is probably fine for making a small mess. For a junior dev at a company, you want Django so they don’t screw up the company app and you can give them books to read to learn the basics.
Flask is MVC:
I don’t believe that postponing learning SQL until after the N+1 problem bites you in production is the way to go, to be honest. Maybe use Flask (or FastAPI or something) and train the newcomer a bit more extensively? Maybe even walk them through the codebase. Or perhaps write a `CONTRIBUTING.md` that actually incorporates feedback from newcomers?
What’s the learning context we are imagining? I think junior on their own and junior in a firm with senior code review are pretty different. And student experiments are even more different.
Why You Should Just Write HTML
OK, I’ll expand.
Using at least something like staticjinja is not going to hurt you. You can keep using plain HTML and only make use of template inheritance and includes to at least move the unimportant parts of the pages away from the content in a way that makes it easy to come back to them and change things without going through multiple files and repeating the edits.
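Concretely, the inheritance/includes bit is tiny. A sketch using jinja2 directly (which staticjinja wraps; the template names and contents here are made up):

```python
# template inheritance + includes in a nutshell
from jinja2 import DictLoader, Environment

templates = {
    "base.html": (
        "<html><head><title>{% block title %}{% endblock %}</title></head>"
        "<body>{% include 'nav.html' %}{% block content %}{% endblock %}</body></html>"
    ),
    "nav.html": "<nav><a href='/'>home</a></nav>",
    "post.html": (
        "{% extends 'base.html' %}"
        "{% block title %}Today's post{% endblock %}"
        "{% block content %}<p>Plain HTML body lives here.</p>{% endblock %}"
    ),
}

env = Environment(loader=DictLoader(templates))
print(env.get_template("post.html").render())  # full page, nav and all
```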
I would personally probably also add a way to use Markdown, because I find writing plain HTML tedious and distracting. Good when I need its power, bad when I need `&`, `"` and `<strong>` constantly. I suffer from ADHD, which means that any accidental complexity will make it easier to procrastinate. Also `</strong>`.
Using something like staticjinja looks like it’d be exactly that kind of pain. Look at how many fricking breaking changes there have been in 2 years: https://staticjinja.readthedocs.io/en/stable/dev/changelog.html
You can set up your CSS so `<b>` can be synonymous with `<strong>`.
Yea, typing `&` is annoying - I suggest typing `and` instead, and using regular quotes rather than `"` - maybe there’s some other solution to this that isn’t immediately obvious to me :p
Looks like zero? Especially if you just used it from the command line.
Quoting MDN:
Quotation marks as `“…”` typographically, as Godzilla intended.
The problem I have with using a form of text markup (like Markdown) is properly escaping asterisks when I really want an asterisk (which isn’t often, but does happen from time to time). It’s especially bad when it happens in code I’m posting. Then I have to go back and fix the post. Yes, this happened recently, and it affected the output of the code enough to render it useless.
With some way to include navigation and common header, footer, breadcrumbs…
nav, header, footer: `cp template.html todays_post.html`. If you need extremely basic templating, `template.html.sh > todays_post.html`.
Why do you need breadcrumbs? If we’re talking beyond blogs, you probably shouldn’t write any static site generator and should use what exists, because others will need to come along to maintain it…
Writing raw HTML is also a matter of your editor. If your editor supports snippets and smart expansion, you can write plain HTML pretty quickly. There are some great vim plugins, like emmet.
I just can’t quit HTML
I’ve wanted to go to an SSG for years, but the simplicity of HTML for such a simple and rarely updated personal site is unmatched. I dealt with WordPress for a decade, more than a decade ago, and I’m glad I got away from it, because I spent more time upgrading and hardening it than I did blogging on it.
I know there’s probably something wrong with me, but where can I find this proposed legislation (containing this sinister Article 45) and read it? Also, if it is really true, I don’t understand how any country in Europe would agree to this. Will e.g. Hungary be able to create a new certificate for riksdagen.se, and browsers in the whole of Europe will just accept it? Or is there any more detail to this story?
It looks as if this is another case of well-intentioned legislation being written by people with no understanding of the subject at hand and without proper consultation. I believe (judging from the analysis in the letters) the intent was to require that EU countries are able to host CAs that are trusted in browsers, without the browser vendors (which are all based outside the EU) being able to say ‘no, sorry, we won’t include your certificate’. Ensuring that EU citizens can get certificates signed without having to trust a company that is not bound by the GDPR (for example) is a laudable goal.
Unfortunately, the way that it’s written looks as if it has been drafted by various intelligence agencies and introduces fundamental weaknesses into the entire web infrastructure for EU citizens. It’s addressing a hypothetical problem in a way that introduces a real problem. I’m aware that any sufficiently advanced incompetence is indistinguishable from malice, but this looks like plain incompetence. Most politicians are able to understand the danger if US companies can act as gatekeepers for pieces of critical infrastructure. They’re not qualified to understand the security holes that their ‘solution’ introduces and view it as technical mumbo-jumbo.
As far as I am aware, the real reason for this is to facilitate citizen access to public services using certificates that actually certify to citizens that they are in fact communicating with the supposed public organization.
The EU would be well served by a PKI extension that would allow for CAs that can only vouch for a limited set of domains, where the list of such domains is public and signed by a regular CA in a publicly auditable way.
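That extension already half-exists in X.509 as Name Constraints; the missing part is deployment. A sketch of a constrained CA certificate using the Python `cryptography` package (all names and the self-signing are illustrative):

```python
import datetime
from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

key = ec.generate_private_key(ec.SECP256R1())
name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "Example Gov CA")])
now = datetime.datetime.utcnow()

ca_cert = (
    x509.CertificateBuilder()
    .subject_name(name)
    .issuer_name(name)  # self-signed, just for the sketch
    .public_key(key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(now)
    .not_valid_after(now + datetime.timedelta(days=365))
    .add_extension(x509.BasicConstraints(ca=True, path_length=0), critical=True)
    # the interesting bit: this CA may only vouch for names under gov.example
    .add_extension(
        x509.NameConstraints(
            permitted_subtrees=[x509.DNSName("gov.example")],
            excluded_subtrees=None,
        ),
        critical=True,
    )
    .sign(key, hashes.SHA256())
)
```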
Or even simpler, they could just define a standard of their own with an extra certificate alongside the regular certificate. Then they could contribute to browsers some extra code that loads a secondary certificate and validates it using a different chain when a site sends an `X-From-Government: /path/to/some.cert` header, and displays some nice green padlock with the agency name or something.
Looking at how misguided this legislation is, I’m not sure these agencies are from any European country. This level of incompetence feels utterly disappointing. Politicians should be obliged to consult experts in the domains that new legislation affects. I also don’t understand what’s so secret about it that justifies keeping it behind closed doors. It sounds like the antithesis of what the EU should stand for. Perfect fuel for the people who tend to call the EU a second USSR and similar nonsense.
It depends. Several EU countries have agencies that clearly separate the offensive and defensive parts and this is the kind of thing that an offensive agency might think is a good idea: it gives them a tool to weaken everyone.
This is tricky because it relies on politicians being able to identify domain experts and to differentiate between informed objective expert opinion and biases held by experts. A lot of lobbying evolved from this route. Once you have a mechanism by which politicians are encouraged to trust outside judgement, you have a mechanism that’s attractive for people trying to push an agenda. I think the only viable long-term option is electing more people who actually understand the issues that they’re legislating.
The EU has a weird relationship with scrutiny. They didn’t make MEPs’ voting records public until fairly recently, so there was no way of telling whether your representative actually voted for or against your interests. I had MEPs refuse to tell me how they voted on issues I cared about (and if they’d lied, I wouldn’t have been able to tell) before they finally fixed this. I don’t know how anyone ever thought it was a good idea to have secret ballots in a parliamentary system.
This is the sort of thing that an unelected second chamber is better at handling. Here’s an excerpt from an interview with Julia King, who chairs the Select Committee on Science and Technology in the UK’s House of Lords:
That’s from an episode of The Life Scientific.
I don’t see how this could ever be viable given existing political structures. The range of issues that politicians have to vote on is vast, and there just aren’t people who are simultaneously subject-matter experts on all of them. If we voted in folks who had a deep understanding of technology, would they know how to vote on agriculture bills? Economics? Foreign policy?
You don’t need every representative to be an expert in all subjects, but you need the legislature to contain experts (or, at least, people that can recognise and properly interrogate experts) in all relevant fields.
I’m not sure if it’s still the case, but my previous MP, Julian Huppert, was the only MP in parliament with an advanced degree in a science subject, and one of a very small number of MPs with even a bachelor’s degree in any STEM field. There were more people with Oxford PPE degrees than the total with STEM degrees. Of the ones with STEM degrees, the number who had used their degree in employment was lower still. Chi Onwurah is one of a very small number of exceptions (the people of Newcastle are lucky to have her).
We definitely need some economists in government (though I’ve yet to see any evidence that people coming out of an Oxford PPE actually learn any economics. Or philosophy, for that matter), but if we have no one with a computer science or engineering background, they don’t even have the common vocabulary to understand what experts say. This was painfully obvious during the pandemic, when the lack of any general scientific background, let alone one in medicine, caused huge problems in trying to convert scientific advice into policy decisions.
You currently cannot, as per the first paragraph. The working documents are not public.
My reading comprehension clearly leaves a lot to be desired, my bad.
Because, as with all legislation here, every person involved cannot understand the threat model. They believe that obviously this will only do good things, and don’t understand that different places have different ideas of what is “good”. They similarly don’t understand that the threat model includes people compromising the issuer, that given their power these CAs will be extremely valuable targets, and that this part of any government is generally underfunded.
Fundamentally they don’t understand how trust works, and why the CA/Browser Forum policies that exist, exist.
Considering how the EU works, this was probably proposed by a member government, and with how it’s going, many member governments probably support it.
I don’t know what’s in the proposed legislation, but the version of eIDAS that was published in 2014 already contains an Article 45 about certificates (link via digital-strategy.ec.europa.eu):
I suppose the proposed legislation makes this worse.
Ah, I still remember my high school lecture on networks. “Here is the OSI model. We are interested in L2-L3, rest is meh so don’t worry about it. Here’s how ARP works and here’s IP, UDP and TCP. OK, now let’s move on, it’s time to write our first web application…”
Anyway, I find it more important to teach the underlying concepts beneath it all. Especially serialization and the differences between:
As for the “networks” aspect, I find that routing theory (who do I forward the message to next) combined with a discussion of globally unique, flat-namespace addresses (MAC) and a managed hierarchical namespace (IP) is mostly sufficient.
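To make the routing half concrete, the whole “who do I forward to next” decision is a longest-prefix match. A toy table (interface names invented):

```python
# toy longest-prefix-match routing lookup
import ipaddress

routes = {
    ipaddress.ip_network("10.0.0.0/8"): "eth1",
    ipaddress.ip_network("10.1.0.0/16"): "eth2",
    ipaddress.ip_network("0.0.0.0/0"): "uplink",  # default route
}

def next_hop(dst: str) -> str:
    addr = ipaddress.ip_address(dst)
    # of all matching prefixes, the most specific (longest) one wins
    best = max((net for net in routes if addr in net), key=lambda n: n.prefixlen)
    return routes[best]

print(next_hop("10.1.2.3"))  # eth2 (the /16 beats the /8)
print(next_hop("8.8.8.8"))   # uplink (only the default matches)
```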
I feel like I live in a different universe than the author and some of the commenters here.
Or maybe you just don’t target the US market where your customers can trivially pay for cloud costs with their double to triple absolute purchasing power compared to the rest of the world.
Or maybe you need a lot of egress traffic.
Or maybe you are required to have an exit strategy and thus cannot utilize unique hosted services and instead rely just on IaaS.
Or maybe you don’t want the provider to just delete all your data when you miss a payment or two because a bank decided you should not be getting any money.
Or maybe you are morally opposed to relying on infrastructure provided by companies that unscrupulously exploit their employees/customers/partners only to enrich a handful of already rich people who live on a different continent than you.
Or maybe you don’t like the fact that some of your payments to cloud providers go towards lobbying and (most probably, no direct evidence) corruption in your local government that, among other things, influences what kind of IT is taught in schools.
Or maybe you believe that investing in somebody setting up a bunch of scripts, cluster RAID and Zabbix is an investment in future flexibility. Possibly even building skills that would pay off when you rent some of the cloud capacity. Unlike going in totally blind.
Also, competent sysadmins are not that expensive. And if you don’t feel like you need a whole one, just let them bill you by the hour.
I feel like the author wrote a pretty reasonable reply/counter-argument to arguments that were being made by DHH and folks who agreed with him, which primarily were about cost and staffing tradeoffs.
It seems like you want to have an entirely different sort of discussion. In which case perhaps you’d like to write your own post about it, focusing on the things you’d like to focus on?
It seems like most people don’t even think about most of the things you mention. Which is very sad indeed.
I understand the post as concentrating on the technical/cost side. If you have specific business, ethical, or other requirements then of course they will trump other aspects, but that’s… a different scope of the discussion? The entry point for the post seems to be “we can use the cloud or on prem, or we’re considering switching” which most of your situations won’t even arrive at.
I honestly don’t know. Maybe I am just too old (in my thirties), but I feel that people shouldn’t concentrate on the short-term monetary aspects of things and should take a step back to consider the broader situation. It seems that e.g. programmers with negative LOC who mentor the young are celebrated… Maybe we should ask ourselves whether we can vote with company coffers to have better ecosystems? Since we are willing to spend on negative-LOC mentor people, it wouldn’t be such a reach to maybe hire one more member of the team and then use the extra time to get everyone just a bit more comfortable around RAIDs and NICs. It’s not exactly rocket science. And who knows, maybe next time you need to store a lot of data and then slowly chew on it, with users on a single continent only, you might be able to save enough to pay it off and then some. But it also means having some of those physical servers housed and managing them as part of your routine.
Sorry for writing this, it was uncalled for.
Ignoring the ideological stuff, the problem I’ve encountered with sysadmin + hosted infrastructure is that the sysadmin team takes a long time to get what you need. It takes weeks or a month to get a VM provisioned. I’m guessing my company just hired the wrong sysadmins or something and the good ones would have essentially built out our own internal cloud (complete with self-service provisioning, IAM, billing, monitoring, logging, etc) for cheaper than what the cloud providers charge, but it’s nice to be able to go with Amazon or Google and know that what I’m buying will work (rather than rolling the dice on a sysadmin team) and moreover, those cloud environments are well documented and well-trodden–a company can hire developers with experience in those clouds rather than relying on internal training to get people up to speed (sort of the inverse of your last point). I also know that I can get started now rather than waiting for a team to finish the buildout of our datacenter.
That’s what usually happens to sysadmin teams that take care of the internal infrastructure of a non-tech organization. They tend to be poorly financed and poorly run. Thus they tend to be understaffed, or staffed with people who don’t really care about the job nor about expanding their own skills. External and internal pressure to procure specific systems prevents all real architecture, and pressure to cut salaries prevents hiring actual developers and growing systems that match the needs of the organization. So everybody just gives up and becomes a middleman who dusts third-party systems, removes spyware and gets a bonus for making sure that the correct people get the most recent iPhone, or something along those lines.
If you worked in my (imaginary) company of 50+ people and you provisioned some stuff outside our perimeter, without taking care to integrate identities, document data recovery and exit strategy, I would order you to de-provision the service and try again. If you did it a second time, I would fire you for endangering the organization.
It’s actually hard work to make sure the organization stays resilient long-term and it’s usually people too eager to “just get this thing out the door” who ignore minor stuff like “If user requests we forget about them (with a bunch of additional expletives, on support call), how does our new shiny cloud-based messaging service know not to contact them anymore so that they won’t report us to authorities for ignoring GDPR?”
That’s just sad. I mean, you don’t even need an internal cloud. All relevant virtualization solutions have self-service capabilities. But still seen pretty often. Especially when there is some kind of internal billing going on and someone has to ACK the expense.
Anyway, I don’t think that your organization is relevant to the discussion. Or the discussion to your organization. At this point, it’s probably no longer even capable of making competent technical or financial decisions. It lives off inertia, and unless it is publicly funded or a monopoly, it will eventually dissolve. Otherwise it will just pass the costs on to the public, and everyone involved (and not blind) will skim as much as possible, speeding its demise.
I believe that the discussion is about organizations that are able to define and meet their technical targets and are striving for continuous improvement.
Finally finishing the first part of my blog series on implementing a small bytecode interpreter for arithmetic expressions. Also fixing bugs in said interpreter’s parser, which I just finished today.
The github project is located here: https://github.com/xNaCly/calculator
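For readers who haven’t seen one before, the core of such an interpreter fits in a screenful. A toy flavour of the idea (not the actual code from the repo):

```python
# a tiny stack-based bytecode interpreter for arithmetic
PUSH, ADD, MUL = range(3)

# bytecode for (2 + 3) * 4
program = [(PUSH, 2), (PUSH, 3), (ADD, None), (PUSH, 4), (MUL, None)]

def run(code):
    stack = []
    for op, arg in code:
        if op == PUSH:
            stack.append(arg)
        elif op == ADD:
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == MUL:
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
    return stack.pop()

print(run(program))  # 20
```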
Also working on my university paper titled “Modern Algorithms for Garbage Collection - Outlining modern algorithms for garbage collection on the examples of Go and Java”. I just finished the overview of garbage collection and the problem garbage collection solves.
Looking forward to another lisp. :-)
Not this time, not this time :^)
Holding a whole day hobby group meeting on Saturday.
Teaching some kids to make weird noises by feeding `aplay` numerical sequences generated upon key presses. Sequences include “X samples 0, X samples 255” and “going from 0 to 255 in multiple steps”.
I am optimistic that we’ll even manage to “keep a bunch of samples around and loop through them”, maybe even with a little bit of “average every sample with the one before it”. That’s gonna throw them for a loop for sure.
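For anyone who wants to try this at home, a minimal sketch of the plumbing (format flags per aplay’s manual; the key-press handling is left out):

```python
# unsigned 8-bit mono samples piped straight into aplay;
# "X samples 0, X samples 255" is a square wave at rate / (2 * X) Hz
import subprocess

X = 10
RATE = 8000
period = bytes([0] * X + [255] * X)  # 400 Hz with these numbers

aplay = subprocess.Popen(
    ["aplay", "-f", "U8", "-r", str(RATE)],  # U8 = unsigned 8-bit samples
    stdin=subprocess.PIPE,
)
aplay.stdin.write(period * (RATE // len(period)))  # about one second of buzz
aplay.stdin.close()
aplay.wait()
```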
Oh this sounds like a great skill for me to start teaching my 5-year-old nephew. I’m sure his parents will love it.
My sister-in-law asked me to fix this cheap baby keyboard for her daughter, so I made sure to replace the board completely and throw in a decent DAC, and apart from making it sound a little bit more similar to an actual piano (as opposed to a tortured cat), I included some domestic sounds.
Babies should learn to recognize fun sounds, right? Spooked hen, rooster, frog, broken glass, barking dog and so on.
There is a hidden menu to adjust the volume and toggle the allowed sound fonts, though. I am not a monster.
I would put off the actual programming until they are at least 9-10 unless they are already invested. Or get them some kind of retro hardware with BASIC or something.
I didn’t really know what programming was and had very limited exposure to computers when I was 7, but my school did a few programming lessons in BASIC and Logo then and, well, the results are pretty obvious. 9-10 feels like leaving it a bit late, especially with the increased exposure to computers that children have now.
I really can’t say. I would probably just test the waters with SHENZHEN I/O, EXAPUNKS and Human Resource Machine, and if it works out, proceed with something more serious. Especially with today’s kids’ attention spans.
http://www.coding2learn.org/blog/2013/07/29/kids-cant-use-computers/ though.
I think the big difference between now and when I was a child is that children are a lot more aware of what computers can do. I taught programming to people a few years younger than me when I was a teenager, and most of them had barely touched computers; where they had, they were primarily just using them as glorified typewriters. A few had used game consoles, but these were not really perceived as computers. Making the computer respond to inputs and do something with them, even writing very simple games, was outside their experience. Now they’ll have seen this kind of thing on the web, and things like Scratch give you an easy way of starting to build your own.
This bit in the article is quite unfair though:
I am not normally one to defend the Tories, but this is one of two things that Margaret Thatcher deserves credit (rather than blame) for. They defined a set of criteria in the ’80s that allowed schools to get central-government funding towards a computer. Among them were things like support for programming environments with structured programming. As part of this initiative, the BBC created a load of teaching materials, including lessons that could be recorded and played for the class and source code that was broadcast over teletext. It wasn’t until the mid ’90s that all of this work was undone.
Actually, most of them chose MS Works, because even the subsidised version of Office was too expensive.
Since this article was written, the Computer Science GCSEs and A-Levels were introduced. Unfortunately, this has been a step back for inclusion because very few schools have teachers that are able to teach them and so don’t offer them. Mostly, fee-paying independent schools are the only ones that can provide them and so people entering computer science degrees from such a school have an advantage over ones from state schools.
I particularly remember the school where my mother taught in the late ’90s using the number of instances of the letter X on a page as a filtering rule. Anything related to UNIX, Linux, or X11 was blocked.
I think this is a dangerous analogy because it understates the flexibility of the systems. Fixing a car is just that: returning it to its original state. If you modify a car, you might make it a bit faster or a bit more responsive, but you’re not fundamentally changing what it can do. This is not true of computing. A computer can do an unbounded number of things and if you’re using it only to do things that someone else has told it to do then you’re losing a lot of the power of the device. When I was 11, the head teacher at another school said something that stuck with me:
In a reference to the Butlerian Jihad, Frank Herbert wrote (paraphrasing slightly from memory):
I have tried very hard during my professional life to avoid building the kind of society that these people warned me about as a child.
Thanks for engaging so extensively. I also disagree with several points the author makes, but overall I also feel that there is not much going for the implication that young people nowadays are more exposed to computers and thus are either more competent with them or more interested in learning to program them.
Among the regular young folk (with non-intellectual parents), many are being actively prevented from spending “too much time in front of the damned computer”, only to be allowed unlimited time with their phone, a practice I find absolutely, mind-bogglingly idiotic, to put it mildly. The “I am just a user” mentality the author also mentions (albeit with less flattering words) is also extremely widespread, even among otherwise intelligent and curious people of all ages.
There have been young people in my family and among my friends who literally said to me that they “don’t want to be eggheads like me” when I suggested that natural sciences are fun and we might perhaps do something a little bit more intellectual. Then they returned to their Instagram feed on an iPhone. I have celebrated when we managed to get one of them interested enough to take a picture of Venus through a telescope at a nearby observatory. Anyway…
Here in Czechia, what happened when the government started getting interested in teaching IT was exactly what the author wrote. Microsoft literally wrote the curriculum, and then schools got funds to get some computers to teach with, and they all installed a Microsoft AD domain, locked everything down (down to right-click on the desktop) and then taught Microsoft Office. The only companies able to help the schools get the funds were Microsoft Partners.
I was scolded by my high school teacher for writing a LAN chat application using Visual Basic for Applications embedded in Excel. We were supposed to be learning to copy a form given on a paper handout using Excel table border formatting. I had originally wanted to show him a game I had written in Visual Basic 6.0 at home, but the system did not allow me to execute an arbitrary executable. The only schools that were not affected were those where parents had donated computers years before the government noticed, and where there were actually teachers interested in teaching IT.
So I guess it probably differs a lot from country to country, and I am glad that you had it better. I was saved by the fact that my stepfather was a programmer, and when I switched schools due to us moving, I was able to attend elective classes run by a person who ran a small web hosting / server hosting business and was thus able to teach us Linux, PHP, Apache, MySQL and BIND. Nowadays there are “gymnasiums” (general high schools) that actually teach Python, and materials to teach young kids with e.g. Scratch are available as well. More “practical” high schools are hopelessly vendor-locked, though. MSDN Academic Alliance, Visual Studio and so on. And colleges as well. For example, college IP networking is basically a Cisco course with a Cisco certification at the end.
That’s partly the reason why I run the hobby group: to make it possible for kids who are actually interested to do something real, using the same tools we use here. Like I was allowed to at my second high school. Outside of the Microsoft and AutoDesk customer funnels.
But I have zero hopes for kids in my extended family to get interested. Less disappointment that way.
(We sure did derail the thread. Ah-ah-ah.)
That’s astonishing. I’ve mostly seen ‘screen time’ lumped together, which still misses a distinction between creative and consumptive uses of glowing machines but at least doesn’t favour the wrong one.
That attitude was prevalent when I was growing up but I think it started to fade around the .com boom, when it started to sink into the collective awareness that there was a strong correlation between geeky and high-earning professions. The down side of that was a load of people learning to program to get rich, with no real interest in the activity itself. At least some of them then developed the interest and at least it let the people who were genuinely interested use it as cover.
Our official course was much as you describe in the ’90s. I was incredibly fortunate that my school had an IT technician who was an old-school greybeard (well, his beard wasn’t yet grey, but I’m sure dealing with us would have sent it that way soon). While we were supposed to be learning how to add a column of numbers in Works Spreadsheet, he was showing us how to recover a damaged FAT filesystem in a hex editor. He also encouraged us to try to bypass the restrictions that locked down the school network, and we never got into trouble if we didn’t do any damage.
I’d been programming for a few years by that point. My previous school’s headmaster taught one lesson a week called ‘general’ for each year group. For the 7-year-olds, he taught BASIC and Logo. I think one of the things that made this compelling was that the gap between what a professional game developer could make and what I could make was not that big: you hit the limits of the hardware long before you hit the limits of the programmer’s ability. There was maybe a one or two order of magnitude difference in quality, whereas for a modern AAA title it’s a lot more. I do wonder about starting with something like Godot now, though: there’s a real programming language, and making something like Candy Crush is well within the grasp of a single person.
Yeah I was thinking more along the lines of a simple setup he could type numbers or stuff into and get sounds out. Just start on the idea of “I can make the machine do what I tell it to”.
Ah, I see. Well learning to use keyboard and making the association “keyboard = making the computer do what I want” early would be definitely good. :-)
I am not sure if it’s a realistic goal, though. It usually ends up being “whoa, there is a whole different universe to explore”. Computers don’t really do “what we tell them to”, or do they for you? At best, after many years of struggling, you end up being able to make some fairly good compromises. :-)
I certainly like to think that they do. Sometimes that might even be approaching truth.
They always do what I tell them. Occasionally they do what I want.
Do you count your phone?
Have you done any writeups about this, so others with similar toys might do similarly?
I always promise myself to do just that and then go on to another project. Sorry for that. But thanks for the ask, it might just make me write it down sooner.
Great post! However, I am not sure how helpful it is for normal users to think about ECC DRAM.
Just curious: for me (a student and researcher), if I have some budget, adding more DRAM and a more powerful CPU would be useful, but is changing to ECC DRAM a better option? In other words, are memory errors today at the point where a normal user should worry about them?
Depending on who you ask (Torvalds, say - though note that ECC won’t help you with modern rowhammer attacks), everyone should just have it, and it shouldn’t be something special. If you’re going for a home server and don’t want it to eat up your data by chance, then go for it. Stuff like checksumming + compression filesystems (ZFS) can give a bad experience if your RAM does fault.
Other people say “never got a problem, don’t care”. So who knows, flip your coin ;)
Personally: Use it on stuff that behaves like servers (data security, uptime, weeks of no human intervention..).
If you notice GCC randomly crashing it might be useful to run a memtest (that is how I noticed many years ago). But TBH ever since I started using DRAM with heatsinks I never had that problem, even without ECC.
If you’re worried just run https://www.memtest.org/ regularly.
ECC is useful in situations where you can’t afford the possibility of bit errors, like compiling packages for your Linux distro.
It’d of course be great if there were easily affordable ECC available for desktop use. DDR5 does actually have some ECC built in, but not true/full ECC.
As I recall, it has fewer bits, but the big difference is that (without ECC DIMM support) it doesn’t give you the error reporting. There are a few different kinds of failure:
The surprising thing is how much everything works with memory errors. I saw a demo some years ago of inducing bit flips to try to get to a JVM escape by holding a hairdryer to the RAM chips. I think you needed something like a 1% error rate to get random crashes: a huge amount of memory is full of buffers of data, strings, and so on, where the program works fine(ish) if they are slightly corrupted. If you corrupt the low bits of a code pointer, you often just miss some stack spills and that might work for a while. If you corrupt the high bits, you probably crash. If you corrupt the low bits of a data pointer, you may find that it points into a string or array rather than to the start but, again, things are likely to keep working.
The danger here, of course, is that the things that are ‘working’ are giving subtly wrong answers and may keep doing so for a long time.
I think that DDR5’s ECC works at the module level. Namely, it will fix errors before they are transmitted to the CPU. Classic ECC takes care of errors generated on the signal path between the RAM and the CPU, in addition to errors at the RAM module level.
DDR5’s on-die ECC, to my knowledge, is there so that manufacturers can cut costs with RAM that produces errors every so often.
Consumer-grade RAM has intense competition – people are just not willing to pay that much for it. But what if you do pay more for a higher-quality product, especially one your livelihood depends on?
Thank you! This is helpful and aligns with what I thought - long-running jobs will benefit from ECC DRAM, and daily activity should be fine (I read somewhere that Windows sometimes crashes due to bit errors, but I’m not sure how often) :)
For me, who falls into a similar category of use, I go with ECC RAM whenever possible. That means almost always, almost everywhere.
It’s not because of a paranoia over bit flip errors, though. Those do happen, and regularly enough (with big enough implications) that you really should care.
But the real reason I always go with ECC RAM is because if you stick to workstation or server platforms from a generation or two behind, you’ll actually end up paying less for larger quantities of ECC RAM than you would for even the simplest RAM from the current generation.
Using hardware 10+ years old is probably too far for most people, but I built a dual Xeon X5690 server with 128 GB of ECC RAM a year ago for less than $250 for everything…
Old hardware tends to have worse computations/watt ratio and it does not really pay off. I remember couple years back turning off some ancient servers running test environment, replacing them with a couple of VMs and then having to reconfigure the air conditioning because the server room got too cold for some of the equipment.
And with today’s energy prices (especially in Europe), I have probably paid more for electricity than the actual price of the Ryzen 2700 based server. Remember, in a data center you pay for the electricity twice.
I was pretty comfortable paying $360 for ECC RAM that’s in the device I use for my livelihood. Probably wouldn’t pay that much for my gaming PC.
Though one advantage of ECC is that you can overclock it easier, especially since the base spec of the RAM I bought is e.g. 5600MHz while the base spec of most consumer DDR5 RAM is 4800MHz and you have to pump it up with XMP/EXPO.
I wanted to try this when Zen 4 came out. It just seemed like a big project. You are way braver than me to try to get this working. I don’t even know how that other person easily shorted one of the DRAM data lines. That technique is a quick way to burn out one of the drivers and brick a CPU.
I ended up just getting an older Dell Xeon Skylake server.
One other note: DDR5 does do some onboard ECC, but it is not exposed.
It would be great if AMD officially supported this stuff.
Then don’t short it. Add a stub made from twisted pair with one wire to GND and the other to the pin to disturb. The signal will bounce back from the end of the stub, creating a notch filter. You should be able to hit the notch sooner or later and corrupt some bits. Simulation.
As for the method, they probably squeezed the wires along the DRAM inside the socket.
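Back-of-the-envelope for where that notch lands, assuming an open-ended stub and a typical twisted-pair velocity factor (both assumptions mine, not from the linked simulation):

```python
# first notch of an open stub sits where the stub is a quarter wavelength:
# f = v / (4 * L), with v = c * VF
C = 299_792_458  # speed of light, m/s
VF = 0.66        # assumed velocity factor for twisted pair

for L in (0.05, 0.10, 0.20):  # stub lengths in metres
    f = C * VF / (4 * L)
    print(f"{L * 100:.0f} cm stub -> first notch around {f / 1e9:.2f} GHz")
```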
… by being funded via a heavy store tax from an eternally buggy mess of a proprietary app store whose main value add is their network effect, set of sketchy engagement APIs, DRM (as in ‘Digital Rights Management’ or ‘corporate sanctioned malware’ depending on your optics) mainly selling proprietary software.
Its main reasons for ‘contributing’ are being part of a risk-management strategy for breaking away and competing more directly with Microsoft, empowering specifically those FOSS projects that fit their narrative, and promoting an architecture that is close to a carbon copy of its eventual end-game competitor. This time with more anti-cheat making its way in, should there be sufficient traction.
It is the Android story again on a smaller scale. How did that turn out last time? How many of the ‘contributions’ failed to generalise? Or is it different this time because Valve is good because games? Colour me sceptical.
I think Valve as a company has a lot of problems (though the DRM is pretty mild and one of their lesser problems, tbh) and the Steam Deck is an iffier product than people make it out to be, but they’re actually being a good citizen here. Yes, they’re funding the things relevant to them out of self-interest (i.e. case-insensitive FS, a WaitForMultipleObjects API clone, HDR, Mesa work, etc.), but they’re working with upstreams like the kernel, Mesa, and freedesktop.org to get the work merged upstream, properly reviewed by the people that work on it, and usable for everyone else. Android never worked with upstreams until maintaining their own things in an opaquely developed fork became unsustainable.
(Sometimes I think they might be using a bit too much commodity - the Arch base of SteamOS 3 seems weird to me, especially since they’re throwing A/B root at it and using Flatpak for anything user visible…)
You only need mild DRM and the DMCA for the intended barrier to entry, suppressive effect and legal instruments; anything more exotic is just to keep the hired blackhats happy.
If I’m going to be a bit more cynical - they are not going the FreeDesktop/Linux instead of Android/Linux route out of the goodness of their hearts as much as they simply lack access to enough capable system engineers of their own and the numbers left after Google then ODMs then Facebook/Meta then all the other embedded shops have had their fill aren’t enough to cover even the configuration management needs.
Take their ‘contributions’ in VR. Did we even get specs for the positioning system? Activation / calibration for the HMD? Or was it a multi-year, expensive reversing effort trying to catch up and never quite getting there, just to be able to freely tinker with hardware we paid for? That was for hardware they produced and sold at a time when open source wasn’t a hard sell by any means.
Did we get source code for the ‘Open’VR project that killed off other, actually open ones? Nope, binary blob .so:s and headers. OK, so at least they followed the id Software beaten path of providing copyleft versions of iterations of the Source engine so people can play around with ports, explore rendering tech, etc.? Nope. If you have spotted the source on the ’hubs, it’s because it’s a version that was stolen/leaked.
Surely the lauded Steam Deck was sufficiently opened and upstreamed into the kernel? Well, not if you include the gamepad portions. It’s almost as if the contributions hit exactly that which fits their business case and happens to feed into the intended stack, and little to nothing else. Don’t anthropomorphise the lawnmower and all that.
To me, it looks like Valve wanted to make a gaming console, and used Linux as a way to pull that off. If you’d told me 25 years ago that you’d be able to play Windows games on a Linux machine that was handheld, I’d have been blown away. To me it still seems almost miraculous. And they’re doing this while (as far as I know) fulfilling their obligations under the various licenses used in the Linux ecosystem.
Does the fact they’re doing this for commercial gain invalidate that?
I don’t know what you expected. They’re almost certainly not going to give you the crown jewels used to make the headset, but all the infrastructure work is far more useful as it benefits everyone, not just the people with a niche headset.
They patented and own the hardware and the binary blobs, and sold the hardware at a hefty price tag. I’d expect an average FOSS participant to integrate with and reinforce existing infrastructure, not to vertically integrate a side band that locks you into their other products.
What’s iffy about the Steamdeck? I’ve considered buying it but haven’t pulled the trigger yet.
I’ve owned a Steamdeck for about a year now. No idea why people have a problem with the size. My kids play on it and don’t complain, and we have a Nintendo Switch (smaller) to compare it with. Sure it’s bigger, but I don’t think it’s a big deal. On top of that, with a dock it works perfectly as a home console, so the size matters even less.
I really enjoy it and recommend getting it if you’re thinking about it.
As an owner of both: the Steam Deck is quite a bit heavier than the Switch, and could have used an OLED screen.
I don’t like the size and ergonomics. But I’m in the minority on that one; people with big hands especially seem to love it.
There are more abstract concerns regarding its place as a product (is it a PC or a console? whichever is more convenient to excuse a fault), but otherwise the device is a pretty good value. I just don’t game that much, and when I do, it’s a social thing.
Thanks. Now I have to consider the size of my hands in relation to other people’s, something I understand can be fraught.
FWIW, I have small hands (I joke they are “surgeon’s hands”) and I don’t have a problem with the SD.
It might be a problem depending on what input method you use. I have average-sized hands and I like to use the trackpad for FPSes and strategy games. For the FPS case, reaching both the trackpads and the face buttons gets really annoying. You can map the grip buttons to the face buttons, but then you’re losing out there.
Even with big, piano-friendly hands, the Steam Deck’s ergonomics are hard. If you don’t have a big toy budget, testing someone else’s is highly recommended. I mostly use my three Steam Decks for various debugging / UI / … experiments (Nreal Air glasses + dactyls, with the Deck tucked away somewhere). If Asus could manage to not be Asus for five minutes, the ROG Ally would have been an easy winner for me.
And you wouldn’t consider Ally’s intended OS a problem?
I hacked mine to run Linux, as I’ve done with all the other devices I’ve used throughout the years; it didn’t boot Windows once. As for their ‘intentions’: whatever laptops I have scattered around all came bundled with Windows ‘intended’ to be the OS in use.
The DSDT fixes and kernel config needed to get the Ally working took less effort than getting actual access to the controllers and sensors on the Steam Deck.
Fair enough. I admin RHEL for my day job and use Arch on my laptop; when getting the SD I knew I’d keep it more appliance-y rather than getting into a fully custom OS. I just wanted something to play Persona 5 in bed.
It’s heavy. Significantly heavy. It took me a while to figure out how to use it in a way that didn’t quickly give me wrist fatigue/pain, and even now it’s not perfect.
Also Valve’s Deck Verified program is very flawed. It’s quite a bit better than nothing, but it’s flawed. The biggest (but not only) problem IMO is that a game that has a control scheme not optimized for controllers - but still fully supports controllers - can be marked Verified. As an example, Civ V and Civ VI both basically just use the trackpad like a mouse, and the other buttons have some random keybinds that are helpful. Now, those are basically keyboard-and-mouse games… so to a certain extent I totally get it. But I should be able to click into a list of things and use the joysticks or D-pad to scroll down the list. I can’t. Instead, I have to use the trackpad to position the cursor over the scroll bar, then hold right trigger, then scroll with my thumb. This is extremely unergonomic.
Right, it’s really chunky; I might use it more if they made a mini version. The only use I get out of the portability is at home (i.e. on the porch). It’s not small enough that I’d want to carry it in a bag when commuting, around town, or while waiting for someone, and if I’m on vacation, the last thing I want to do is play video games instead of touching grass or spending time with people. If I really want to play a game, I’ll probably use the laptop I have (even if that restricts the choice of game, because I have a Mac…). Again, I’m not much of a gamer, so it’s different values I guess.
I have normal-sized male hands and my girlfriend has relatively small hands, and it works very well for both of us. She was actually surprised how ergonomic the Steam Deck is given the size. Other than that, I’ve only had positive reactions to the ergonomics.
Right now you can install Steam on your regular desktop Linux system, throw in Lutris to get games from other stores, and you are good to go. This has been the best year yet to turn your family into Linux users.
It is far from ideal, but still a great improvement. And if we manage to get up to – let’s say – 10% penetration in the EU, this is going to help immensely to combat mandatory remote attestation and other totalitarian crap we are going to end up with if Microsoft, Apple and Google keep their almost absolute dominance.
I appreciate that both this comment and its parent make good points that are not necessarily in conflict. I would distill this as a call for “critical support” for Valve, to borrow a term from leftist politics.
I have to say that I have had far more luck managing non-Steam game installs within Steam (you can add an entry with a path, and Steam will manage it as a Proton install if you’d like; you basically use Steam as a launcher and Proton prefix manager) than via Lutris.
My opinion of Lutris is that it is a janky pile of hacked-together non-determinism which was developed over a long period of time over many, many versions of Wine, and over many, many GPU architectures and standards, and long before Proton existed… which miraculously may work for you, although often will require expert hand-holding. Avoid if you are new to Linux.
There is also the Heroic Games Launcher, which I would also recommend over Lutris: https://heroicgameslauncher.com/
Noted. But I have personally had zero issues and the ability to download and install GOG games is hard to replicate in Steam.
Their improvements to Proton/Wine have taken me from booting Windows once a day to play games to booting it once a month to play specific games. Like all other for-profit companies their motives are profit-driven, but so far they are contributing in ways that are beneficial and compatible with the broader Linux ecosystem. Unlike Microsoft, Oracle, Google, and Amazon, they don’t have an incentive to take over a FOSS project; they just don’t want to rely on Windows. But we should always keep an eye out.
Getting games to work by default on Linux also makes it much easier for people interested in Linux to try it out and people not interested to use it when convenient, which is a win in my book.
Did you look at the slides or watch the video of the talk? All their contributions are upstreamed and applicable to more than just their use case. Everything is available on Arch as well (before SteamOS was released they actually recommended Manjaro, because the two are so similar). You can use Proton for the Epic Games Store or other Windows apps. Of course they are doing this in self-interest, but according to Greg Kroah-Hartman and a lot of other kernel maintainers that isn’t a bad thing. The Steam Deck is the first “real” consumer Linux computer to have sold over a million units. I hope more Linux handhelds are released in the coming years :)
How dare they upstream improvements because of their malicious profit driven motivations!
The obvious irony here is that it is in Valve’s best interest for their stuff to be upstreamed. It’s not like they can fork KDE (for example, since it’s used by SteamOS) and maintain and support their own fork.
I don’t see why they couldn’t fork KDE? I see many reasons why they’d prefer not to.
For the last 20 years, I’ve whispered: “One day, I will read you. One day, I will install you on a computer. One day…”
Me too… I have also thought the same about installing Gentoo from stage 1.
I have to admit, that matches my own experience with it. It feels good to say so out loud.
You should absolutely do that! I have very fond memories of it. Also, w3m + gpm for the win!
Already coded my own alternative to w3m ;-)
Oooooh, that’s very cool! I was not aware. Even better reason to give it a go. :-)
I distinctly remember the fun moment when I finally rebooted into a working system, only to realize that the only tools I had available to “get online” and build further were ping, telnet and ftp.
On the other hand, this might be a risky endeavor for you. Since you like building such alternatives, you might be tempted to cook yourself a custom package manager. And that’s a deep rabbit hole to fall into. I mean, trying to automatically detect shell script dependencies…
That’s exactly why I don’t want to enter that rabbit hole. I’ve only one life and I sadly need to make hard choices.
I think that LFS will be for my next reincarnation.
Is it public? 👀
yep: https://sr.ht/~lioploum/offpunk/
V2.0 is finished and “only” needs to be packaged for release.
There’s a related issue: my favourite bit of productivity research shows that changing your process typically results in a measurable productivity improvement (usually 10-20%, depending on the study you read). This gradually fades but it’s one of the main reasons that it’s easy to sell management consultancy: you come in, you change something, you measure the output, and hurray, things are better. Then you come in a year later and change it back and demonstrate improvement again!
that sure sounds like the phenomenon boils down to “pay an iota of attention, and you’re sure to fix something”?
Also known as “if you want to lose weight, get on a diet”. Literally any diet will work. You just need to pay attention to what you eat for a while to see an improvement.
If you want to sustain the benefits of a good process, you just need to sustain the discussion about the process. Some organizations (in drug manufacturing for example) actually gather feedback on how things are done and how well it works to update the processes.
It’s pretty easy to do in most orgs, simply because you usually can’t directly measure actual productivity. Just pick an easily tweaked metric, bump it, everybody pats each other on the back, repeat in a year or two.
Ha! I don’t think I’ve ever heard of that research. So just “changing” something tends to improve things? Were there any determinations as to why?
I suspect it’s this, which is a bit more complex and controversial than what most people recall from popular summaries:
https://en.wikipedia.org/wiki/Hawthorne_effect
I also remember reading that, and like you it stuck in my mind. I haven’t been able to find the original paper since though - do you happen to have the reference?
It’s not really a paper so much as a control you have to have in social science research.
I am self hosting Gitea + Woodpecker + Registry along with a webhook endpoint to redeploy my podman containers and it’s working fairly well. It’s nowhere near the GitLab level of sophistication, but it’s comparably lightweight and I feel more confident in my ability to troubleshoot it.
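For anyone curious what the webhook half of such a setup can look like, here is a minimal sketch; every concrete name in it (port, token, image, systemd unit) is made up for illustration rather than taken from the setup above, and it assumes the container is run by a systemd user unit following the “container-&lt;name&gt;” convention that `podman generate systemd` produces.

```python
# Minimal redeploy webhook sketch: pull the freshly pushed image, restart the
# unit that runs it. All concrete names here are hypothetical.
import subprocess
from http.server import BaseHTTPRequestHandler, HTTPServer

SECRET = "change-me"                        # shared with the CI job calling the hook
IMAGE = "registry.example.com/myapp:latest" # image the CI job just pushed
UNIT = "container-myapp"                    # systemd user unit running the container

class RedeployHook(BaseHTTPRequestHandler):
    def do_POST(self):
        # Drain the payload; we only act on the event itself.
        self.rfile.read(int(self.headers.get("Content-Length", 0)))
        if self.headers.get("X-Webhook-Token") != SECRET:
            self.send_response(403)
            self.end_headers()
            return
        subprocess.run(["podman", "pull", IMAGE], check=False)
        subprocess.run(["systemctl", "--user", "restart", UNIT], check=False)
        self.send_response(204)
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8000), RedeployHook).serve_forever()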
I assume Registry is a container registry? Care to elaborate as to why you don’t run with the Gitea built-in?
I did not notice it had one. Thanks for the tip! :-)
Also did you try gitea’s actions for CI?
Not yet. It seems that I’ve first deployed it before they were available.
I’m not a fan of either format, but I strongly prefer working with XML. I’d even go so far as to say it doesn’t make sense to compare them, because YAML doesn’t solve the same problems as XML. Does YAML have schemas? Does it have XPath? Does it have XSLT? Does it have DTDs? Does it have namespaces?
And aside from tooling and programmatic access, the editor support for XML is a lot better, too.
YAML feels like an ambiguous version of JSON when I have to use it - anything goes as long as you remember all of the weird rules and corner cases.
Every time I have to work with XML I end up stripping away the namespaces as the first step. As for XSD, the schemas frequently do not match the data. XSLT is horrible; even imperative code is a more readable and flexible transformation tool. XPath is likely the only interesting piece of the stack.
That’s what I mean, really.
If you don’t need the tools XML provides then it’s just added complexity and you should use something else - or pick and choose specific parts to use. But when you do need those features then it’s nice to have a standard way of doing it across programming languages and platforms.
YAML doesn’t have the tools - at least not built-in and standardized - so if you need them YAML isn’t an option.
It’s like saying “the compiler gives a bunch of errors about my code”…
All the programming languages I can remember would interpret a decimal literal the very same way. A JSON parser would too. What’s the problem?
No, you have to always quote string literals (unless they are known not to be literals of other types, such as the booleans Yes/No) and never quote numbers, booleans, nulls, or dates.
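Those rules are easy to demonstrate; a quick sketch using PyYAML (`pip install pyyaml`), which implements YAML 1.1, where bare no/yes/on/off resolve to booleans. The keys here are made up:

```python
import yaml

print(yaml.safe_load("country: no"))    # {'country': False}  <- surprise
print(yaml.safe_load("country: 'no'"))  # {'country': 'no'}   <- quoted string is safe
print(yaml.safe_load("port: 8080"))     # {'port': 8080}      <- unquoted number stays an int
```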
References make life simpler when writing such documents by hand, though. I’ve written JSON Schema documents in YAML because it’s simpler that way.
So while it’s like 80% there and 20% kinda sucky for a lot of hand-kept data, sometimes I just don’t have the time or willpower to repeat myself writing JSON, or to roll a custom DSL. So YAML it is.
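The references in question are YAML anchors (&amp;), aliases (*) and merge keys (&lt;&lt;), all supported by PyYAML’s safe_load; a small sketch with made-up keys showing how they cut down repetition:

```python
import yaml

doc = """
defaults: &defaults
  retries: 3
  timeout: 30
service_a:
  <<: *defaults
  host: a.example.com
service_b:
  <<: *defaults
  host: b.example.com
"""
print(yaml.safe_load(doc)["service_b"])
# {'retries': 3, 'timeout': 30, 'host': 'b.example.com'}
```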
The problem is being loose with types. 1.20.1 is autocoerced to a string, 1.20 is a number. It makes it exceedingly easy to do the wrong thing, especially if you’re just editing a preexisting configuration. IMO, any time you’re placing additional burden on a human to understand and navigate unexpected behavior, that’s a problem.
JSON has its own problems, but at least in this limited case you aren’t going to make a two-character content change and have your datatype completely altered.
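That two-character change is easy to reproduce; a quick check with PyYAML (the key name is made up). Note that the float case even drops the trailing zero:

```python
import yaml

print(yaml.safe_load("version: 1.20"))    # {'version': 1.2}      <- a float, and now 1.2
print(yaml.safe_load("version: 1.20.1"))  # {'version': '1.20.1'} <- a string
```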
You should, but it’s incorrect to say you must. foo: bar is entirely valid.
Seems pragmatic. I certainly haven’t gone out of my way to migrate random YAML files in my older Rails projects.
FWIW, my opinion is that it’s objectively an awful config language, but it’s too late now to stuff that particular horror back into Pandora’s Box.
It sure is an awful configuration language, but for hand-administered datasets it’s the path of least resistance when you don’t feel up to the task of making your own format.
On the other hand, there has been an enormous amount of developer time spent on format conversions and home grown formats tend to suck even more than YAML and XML.
I was into the right side for quite some time, but in the last couple of years I try really hard to delay splitting the function until there is another caller. Usually it provides better insights into what should be factored out and what should be kept.
That is, only split the different oven drivers when you get the second one.
The other extreme is writing a unified oven driver that decides on minor differences using a bunch of well-placed ifs (see the sketch below). This ends up being a nightmare, with no way to extend support for cool new features of the newer model without making a total mess of things.
So in conclusion, it’s probably not the function count that hinders understanding. It’s the total distance jumped where calls to familiar functions do not count.
At least for me.
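To make the two oven-driver extremes concrete, here is a deliberately toy sketch; every name, model and register number in it is invented:

```python
class UnifiedOvenDriver:
    """The well-placed-ifs extreme: every new model touches this class."""

    def __init__(self, model: str):
        self.model = model

    def set_temp(self, celsius: int) -> None:
        if self.model == "oven2000":
            self._write_reg(0x10, celsius)
        elif self.model == "oven3000":  # newer model, different register map
            self._write_reg(0x20, celsius * 2)

    def _write_reg(self, reg: int, value: int) -> None:
        print(f"reg {reg:#04x} <- {value}")


class Oven2000Driver:
    """The split extreme: each model owns its quirks."""

    def set_temp(self, celsius: int) -> None:
        print(f"reg 0x10 <- {celsius}")


class Oven3000Driver:
    def set_temp(self, celsius: int) -> None:
        # New-model features can grow here without risking the old driver.
        print(f"reg 0x20 <- {celsius * 2}")
```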