From running, enduring, and observing several rounds of hiring:
Keep it short, ideally a page front and back. Resume, not CV.
I will check that you have public code. No public code is–usually–negative signal.
I will check that your Github stuff (for example) is not just forks of other work.
Be specific in what you did in a role–I know how people write these things, and it’s a red flag to say “helped ship a project”. You could’ve been the coffee gopher and that statement would still be true.
Cover letter, especially if asked. Easy “can this person read and follow simple directions?” test.
For higher-end and executive positions, write thank-you notes. This one surprised me, but I’ve seen it cost some veep candidates their shot.
Paragraphs are not as helpful as concise bullets.
Dates and titles are helpful.
If you have one, mention your clearance.
Take a second to trim your mentioned experience to the job–if I’m hiring an EM, code experience is not quite as interesting to me. If I’m hiring an IC for a web thing, your school raytracing project is off-topic.
Don’t add any social media you wouldn’t want to be considered from a culture-fit standpoint. Your X account owning the libs or your Mastodon making fun of boomers may not have the effect you expect.
Spelling and grammar mistakes are extra bad. Easy problem to solve, and it makes you look sloppy and inattentive to detail…typically bad qualities in a candidate.
If you are applying for a job outside your skillset (say, MUMPS programming), including experience that emphasizes your adaptability.
All of these of course have exceptions, of course–if you spent a few years at a defense contractor I’m not going to be too surprised if you don’t have a lot of public source code.
…don’t want to jump through the (perceived or real) legal hoops of publishing code under a FOSS license
…do allow their employees to publish code, but put a bunch of red tape in the way, so that it would be self-defeating for any employee to actually try to do it
It’s not at all limited to defense contractors.
The larger problem with using public code as a signal is that it puts people at a disadvantage if they don’t have the time or energy to publish projects outside of work. Lots of people have caregiving responsibilities that don’t leave them time for outside-of-work work, and a hiring process that values a well-stocked GitHub profile implicitly devalues parents and other groups.
I read it charitably as “usually” and “signal” doing a lot of heavy lifting. I.e., no public code won’t instantly disqualify a candidate, but will be a nail in the coffin if there are other negative signals. Which I think is valid.
Right, so in a heads to heads comparison between two candidates you’ll choose the one without kids? Or you’ll favor the young one over the older, because the older one “can’t show what code they’ve been writing because of having an actual job” whereas the young one can more easily point to work done in public recently?
Like you understand “can have publicly listed code” is going to be significantly biased by age, right?
Similarly the way a lot of women are treated on line means many intentionally limit their public presence, so I suspect you’ll get gender bias there as well.
The problem with @friendlysock’s approach with regards to the public code is that a lack of a positive signal is not the same as a negative signal.
Lacking a positive signal means that the things you could have learned (in this case: code quality, motivation to code off of work hours, etc) you have to learn from another way.
A negative signal is something that is either an instant disqualification (a belligerent public online persona) or something that needs to be combatted by more positive signals (a spelling error on the resume might be mitigated by a long-standing blog that communicates clearly).
For most companies/positions, using lack of a Github profile shouldn’t be considered a negative signal unless the position is something like “Open Source Developer Evangelist”.
And I agree with @olliej’s reply below that a lack of a Github profile isn’t a great filtering measure, even if you are so flooded by resumes that you need some kind of mass filtering measure. Here are some reasons I wouldn’t use it as a first filtering mechanism:
It’s not a simple pass (everyone with a Github passes the screen)
It’s not a simple reject (“usually negative” means you need to weigh against something else anyway)
It’s subjective
It takes a significant amount of an engineer’s time to do it
You are trying to quickly evaluate code in an unfamiliar project or projects, and perhaps in an unfamiliar language, which will have big room for error
Bingo. In practice, I almost always ask about it–some people just have private hosting or whatever, or have some other reason.
The thing I also think a lot of people miss is: I had over a thousand (no joke, 1e3) applicants for a junior position I opened up. When you are trying to plow through that many applicants, applicants without easy code to show their talent are automatically lower priority in the heap than those with.
… so you looked all the code from those folk, or you just went “does/does not have a GitHub profile” as a filter?
Again, this seems like a really good way to discriminate against people who are lower income, have families, etc. Not intentionally, just that that is the result of such filtering.
For example, when I was at uni there was a real sharp divide between people who did open source work and those who did not, and it was super strongly correlated with wealth, and not “competence”. It’s far easier to do code beyond your assignments and whatnot if you don’t also have essentially a full-time job, or you don’t have children to care for, etc. The person that I would say was the single best developer in my uni’s CS department was also working pretty much every hour outside of uni for his entire time there. By your metric they would be worse than one of the people in my year, who I would argue was far below in competence but did have a lot of open source code and “community involvement” because his family was loaded.
This reminds me of the discussions about how screening résumés with names removed to prevent bias still end up failing because you can tell so much from other clues like someone playing lacrosse in college, or that they went to a HBCU or an all-women’s college, etc.
Software development is a qualified job – you have to invest something (your time at first) before you can earn money. You read books, follow tutorials, discuss things with more experienced folks, study at university, do your own projects, study existing free software and contribute to it, get some junior job or internship etc. This is all part of preparing for a more qualified job.
How does a university degree requirement differ from taking your own public projects into consideration? Both cost you your time. (not mentioning that diploma is often a mandatory requirement while own projects are just softly appreciated + getting a diploma is a much larger investment than writing and publishing some code, the entry-barrier in IT is very low, compare it also to other fields).
If I ask a candidate: show me a photo of your bookshelf (or list of eBooks), tell me something about your favorite books that helped you grow professionally or tell something about an article you read and that opened your eyes… do you think that it is also bad and discriminatory? Because not everyone has time to study books and read articles…
Another aspect is enthusiasm. The abovementioned activities are not done intentionally to look good for future employer, but because you like them, find them entertaining or enriching.
I will check that you have public code. No public code is–usually–negative signal
Then you’re rejecting a lot of excellent people for no good reason. Many (most?) jobs don’t let you publish your work code, put restrictions your ability to contribute to OSS projects, and consider code developed by employees to be there’s (e.g. you need special permission to publish anything). This is in no way restricted to defense contractors, in my experience this is the norm for any case where your job is not explicitly working on OSS software. You may philosophically disagree with these employer’s policies but that still the reality for most developers.
I will check that you have public code. No public code is–usually–negative signal.
I will check that your Github stuff (for example) is not just forks of other work.
The older I get the weirder this idea seems: evaluating someone for a paid position based on the quality and quantity of work they do outside of the time that they’re paid to do a job as a professional. Does any other profession work this way?
But if your hiring an accountant and there’s one who runs audits for fun and has a blog with the places where they caught major errors in the audits that they did for fun, you can bet they’d be near the top of the hiring pile.
For a lot of other professions, (especially arts and engineering) there’s a concept of a portfolio: a curated set of work that you bring to the interview to talk through, and which you may be asked to provide up front. With software engineering, it!s easy to make your portfolio public so it can be used earlier in the hiring process.
Nobody has an expectation that accountants or many other professions will have professional-quality work done, for free, on one’s spare time, or suggests that the presence/absence of such should be a significant factor in hiring decisions.
Also, it’s not “easy to make your portfolio public” in software. Out of all the companies I’ve worked for across my entire career, do you know how many of them even have a listing of their main repositories public on a site like GitHub? One, and that was Mozilla. Every other company has been private locked-down repos that nobody else can see. I can’t even see former employers’ repos.
The only way to have a “portfolio” like you’re suggesting is thus to do unpaid work in one’s own free time. Which is not something we should expect of candidates and not something we should use as a way to compare them or decide between them.
Also, it’s not “easy to make your portfolio public” in software.
In the time it took me to write my comments in this thread, I could’ve signed up for Github (or Gitlab or Bitbucket or whatever) and opened a new repository with a basic Sinatra, Express, or even Bash script demonstrating some basic skill. Hundreds of thousands of developers, millions probably, have done this–and it’s near standard practice for any bootcamp graduate of the last decade.
The only way to have a “portfolio” like you’re suggesting is thus to do unpaid work in one’s own free time. Which is not something we should expect of candidates and not something we should use as a way to compare them or decide between them.
You don’t have to have a portfolio online. You don’t have to ever do any work that isn’t attached to a billable hour. Similarly, I also don’t have to take a risk on interviewing or hiring you when other people show more information.
Similarly, I also don’t have to take a risk on interviewing or hiring you when other people show more information.
This sounds more like a failure in your interviewing process than anything else.
So, look. I’ve run more interviews than I could count or care to remember. I’ve helped design interview processes at multiple companies. I’ve written about interviewing processes and given conference talks about interviewing processes. I am not lacking in experience with interviewing.
And this is just a gigantic red flag. As others keep telling you, what you’re doing is not hiring the best candidates. What you’re doing is artificially restricting your candidate pool in a way that excludes lots of perfectly qualified people who, for whatever reason – and the reason is none of your business and in many cases is something that, at least in civilized countries, you wouldn’t even legally be allowed to ask about in the interview – don’t have a bunch of hobby/open-source projects on GitHub.
I feel I’ve explained my process (including many “this is not a hard-and-fast rule” qualifications) sufficiently well and accurately, and have given honest and conservative advice for people that I sincerely believe will help them get a job or at least improve their odds. If this is unsatisfactory to you, so be it.
I’m not interested in discussing this further with you, good day.
More than that: introspective professionals are valuable. All paid coders should be able to write up some fun algorithms and discover them for a given need, but not all will go above and beyond in their understanding and mentorship.
It’s a useful signal when present. It’s not a useful signal if absent. It’s a very negatively useful signal if all you have on your public commits is messages like “blah” and zero sanity in your repository layout.
I tell people who are learning to code to get blame in other people’s projects, to learn good style and show some useful activity beyond forking a project and uploading commits of questionable value to the internet.
Supposedly all these hoops we make people jump through in programming interviews are because the interviewers say they see too many people with degrees and impressive credentials who can’t write a for loop.
If the software certification exams were anything like the CPA certification exams, we wouldn’t need to do nearly as many technical interviews. In other fields getting certified is an ordeal.
Other fields managed it: the CPA standardized exam takes 16 hours (not to study, to actually take) and the architecture ARE takes 22 hours.
Or we could not throw software engineers through that kind of meat grinder and stick with using other signals, like portfolios and technical interviews.
If it were possible to build a single exam that actually did it, I don’t know if I’d mind just because it would end a lot of pointless discussions and avert a lot of horrible processes.
Meanwhile, asking for a “portfolio” or using it to decide between candidates has problems that are well-documented, including in this thread, and I don’t really think we should be perpetuating it. It’s one of those interview practices that just needs to go away.
Nobody asks artists for a portfolio? Nobody asks engineers for previous specific projects, even if the details are obscured?
The projects one is way more than a job role. The portfolio is often of paid work where there has been a release for a portfolio, or work done outside the office.
I’ve worked for multiple companies that used GitHub for their repositories. If I were applying for a job with you today, and you browsed my GitHub profile, you would not see any of the code I wrote at those companies, or even the names of the repositories.
When people talk about a “portfolio” they always mean code written, unpaid, in one’s own spare time, unrelated to one’s current job, and many perfectly well-qualified programmers either do not do that or cannot do that due to not having the luxury of enough time to do so and make it look good.
Nobody asks civil engineers to have a portfolio of bridges they built as hobby projects.
Not true. Architects, for example, design many buildings during studies or send proposals for architectural design competitions. Most of that buildings are never build and remained only on paper. And were created in spare time. Guess what such architect would discuss on the job interview… Portfolio of the proposals and unrealized designs is very important.
Doctors spend long time in poorly paid or unpaid work before they gain enough experience. Journalists or even writers have to write pages and pages for nothing, before they earn some money. Music bands, actors, painters, carpenters, joiners, blacksmiths, etc. etc. Actually it is quite common pattern across the society that you have to prove your skills before getting a good job.
Maybe the world is „unfair“ and „cruel“, but if I compare IT with other fields… we have not much to complain about.
Again, nobody expects a civil engineer to have a portfolio of actually completely-constructed full-scale physical real-world bridges built in their spare time for free as hobby projects.
If you want to argue for apprenticeship as a form of education, feel free to, but apprenticeship is different from “do unpaid work on your own time”.
Most open source code exists to ‘scratch an itch’. It’s written because the author had a problem that wasn’t solved by anything that existed on the market today. If you have never encountered a problem that can be solved by writing software in your life then you’re almost certainly in a tiny minority. If you’ve encountered such problems but not tried to solve them, that tells me something about you. If you’ve encountered them and not been able to solve them, that also tells me something.
If you’ve encountered such problems but not tried to solve them, that tells me something about you.
Yes, it tells you that they’ve encountered such problems but not tried to solve them. Nothing more. You can’t know why someone doesn’t spend their free time doing their day job again for fun. Maybe they just don’t enjoy doing their day job again, which would be terrible, somehow, according to this thread. But maybe they just have even more important things to do than that?
Why guess? What do you think you’re indirectly detecting and why can’t you just ask about it?
As others have pointed out to you repeatedly in this thread, no one is saying don’t ask. But if people encounter problems that are within their power to fix, yet don’t fix them unless they consider it part of their job, then that’s definitely an attitude I’d like to discuss in some detail before I considered making a job offer,
Nobody has pointed out anything to me on this thread before, repeatedly or otherwise.
Everyone encounters problems that are “within their power to fix” and doesn’t fix them all the time. I don’t think that’s hyperbole. We could fix any of them, but we can’t fix all of them because our problem-fixing resources are finite. I take your position to be that if they happen to prioritise the software in their life over any other kind of problem they might encounter that that means they are going to be better at their job. I think this is a bit silly.
For what it’s worth, I get home from my computer job most days somewhere on the mood spectrum between wanting to set fire to all computers and wanting to set fire to anyone who’s ever touched one. I’d love to get a job that doesn’t make me feel like that, and it’s rather frustrating to know that my job sucking all the joy out of computing for me also makes me unqualified to get a better one, at least in the eyes of quite a lot of people here.
What does this mean to you? They’re synonyms to me, so I’ve never really tried to define how they might differ.
I will check that your Github stuff (for example) is not just forks of other work
This seems a bit of a red-herring to me. I include my GH to show that yes I really know how to program so we can skip the mutually-embarrassing “are you a complete fraud using somebody else’s CV” stage, not to show that I own several interesting repos. I mean, there’s a few in there that I actually started and they used to be things people used. But 90+ percent of “my” repos are forks because that’s how you contribute to many existing projects.
But 90+ percent of “my” repos are forks because that’s how you contribute to many existing projects.
Two things you can do here that are useful:
Make a branch that contains code that you wrote the default. I will probably click on them. If I see branches that have raised PRs and good interactions between you and the upstream that’s a very positive thing. Especially if the PRs are merged.
Pin repos that you want me to look at. GitHub gives you (I think) six repos to show in the profile screen. These should be the ones that you think best showcase your work.
I’m used (rightly or wrongly) to resumes being shorter documents that are typically more focused for a particular job, especially in the US. CVs are typically longer, have a lot more detail including coursework, talks, presentations, publications, and other stuff. My understanding is that CVs are also more common in academia, which I’ve never hired for.
But 90+ percent of “my” repos are forks because that’s how you contribute to many existing projects.
Indeed, which is why I also tend to click-through to a few of the repos to see if people have commits or attempted commits in those projects.
There are folks that, if you exclude forks, suddenly go from scores of repos to perhaps less than 10. There are folks I’ve seen who only have a few forks and no source repos of their own, but who have made significant contributions to those forks. My experience is that there are far more of the former than the latter, because the first order signalling is “how many repos do you have on Github” for people that care about such things and that’s how you spoof.
It’s pretty common to use “CV” to mean a complete list of all prior work, education, and awards, and “resume” to mean a one page summary of relevant experience.
I will check that your Github stuff (for example) is not just forks of other work.
If those forked repos are there because the person is contributing to others’ open-source projects, I would argue that kind of work is probably more reflective of the skills that are useful in most professional programming jobs in industry than a bunch of solo projects, however impressive.
You’re guaranteed to have more information in the future.
That is true, but this post is framed as though that is the only relevant thing that is going to change if you wait.
If you jump out of an airplane, you will have more information at 10 feet above the ground than you did at 10,000 feet, but that doesn’t mean it’s a good idea to wait until then to open your parachute.
This post hits a nerve for me because I suspect it represents the way many of my bosses have thought. It’s intensely frustrating to be on the receiving end of this endless indecision. As you wait for more information, costs accrue, customers get fed up and churn, competitors ship first, and employees burn out.
Google gets its way with anything related to the Web as a platform because it has 66% marketshare. Stop using Chrome if you don’t like its unilateral decision making.
I think regulation is the necessary step here. The GDPR has had real effect, this new surveillance method is in part a way for them to try to work around GDPR. Time for updated regulation.
I’m in agreement. “Just switch” is not particularly reasonable. At best, in many years, that approach could start to reduce Google’s power. But it’s unlikely.
If we want change we have to force change through regulation.
I mean, I think there should be both, plus also speaking as an ex-Google advertising privacy person, Google’s advertising businesses are, like, at LEAST four or five distinct business models which should each be separate companies. more realistically, at least a dozen.
the current situation with Google in adtech is as if a stock exchange could also be a broker, and a company listed on the exchange, and a high-frequency trading firm, and a hedge fund, and a bank, and … well, you get the idea
I often find it cathartic to read through legal proceedings involving my former employer. the currently-ongoing one in NY state has filings which go into some detail on legal theories that broadly agree with me about this (there’s a definition in the law of what constitutes a distinct market), so that’s nice to see. maybe someday there’ll be some real action on it.
I’d also love to see an antitrust regulator look at the secondary effects from Chrome’s dominance. Google supports Chrome on a small handful of platforms and refuses to accept patches upstream that support others (operating systems and architectures). This is bad enough in making other operating systems have less browser choice (getting timely security updates is hard when you have to maintain a large downstream patch set, for example) but has serious knock-on effects because Chrome is the basis for Electron. The Electron team supports all platforms that Chrome support upstream, but they don’t support anything else (they are happy to take patches but they can’t make guarantees if their upstream won’t). This means that projects that choose Electron for portable desktop apps are locked out.
Google did take the Fuchsia patches into upstream Chromium. A new client OS from Google doesn’t have these problems but a new client OS from anyone else will. That seems like a pretty clear example of using dominance in one market to prevent new players in another (where, via Android, they are also one of the largest players) from emerging. But I am not a lawyer.
Splitting up Google is definitely a form of regulation. My feeling is that splitting it up is one of the forms of regulation least likely to have accidental negative consequences.
We see the negative effects of Google being together all the time: AMP was a very ugly attempt to use the search monopoly to force a change to preserve their ad monopoly on mobile where it was being eaten away by Facebook at the price of breaking the web. More recently, the forced transition from Google Analytics Universal Analytics to Google Analytics 4 was something only a monopoly would do. No company that actually expected its analytics to actually make money directly would just break every API so gratuitously.
That said, even break ups can have unexpected consequences. The AT&T break up of the 80s did lead to a telecom renaissance in the 90s, but it also fatally crippled the Unix team and led to the end of Bell Labs as a research powerhouse.
The AT&T break up of the 80s did lead to a telecom renaissance in the 90s, but it also fatally crippled the Unix team and led to the end of Bell Labs as a research powerhouse.
Did it? The division into RBOCs had dubious benefits for consumers, because it replaced a well-regulated national monopoly with several less regulated local monopolies. The original plan of splitting out Western Electric would have made a lot more sense (WE was getting creamed by Nortel in the switching market, breaking up the phone system messes up the balance sheet elsewhere), but AT&T execs thought computer revenue from commercializing Unix was too good.
I am not sure if breaking up ATT did any good to me as a consumer, since the only internet choices I have is ATT and Comcast! US feels like an undeveloped country with the crawling internet speeds here in the San Francisco Bay Area.
The current “AT&T” is really Southwestern Bell, which somehow was allowed to eat all its neighbors. It is silly to let the telcos merge into a megablob a short decade after breaking them up in the first place.
In the broadest sense yes, but I feel that the term has come to mean setting up rules of conduct for the regulated businesses and possibly some form of oversight. Somehow it doesn’t pop into my mind that when the large companies call for regulation, they might be actually asking to be split up. I hope I make sense.
It’s not as if the choices are mutually exclusive.
I abandoned Chrome as a daily driver a few years back, but I’d do it today in a heartbeat based on this news. I rather enjoy the Firefox user experience, and switching was not a huge cost. I suppose YMMV and if switching does pose a large cost for someone, that’s their calculus, it’s just hard for me to imagine.
I’m also pushing for regulation how I can (leaving messages for my congresscritters, for what that’s worth). For me, I can’t imagine doing that but continuing to use Chrome.
Advertisement would help too. This announcement is buried for a reason. Google may have just handed Mozilla a huge cannon to use to get people off Chrome and onto Firefox, but Mozilla has to actually take advantage of it.
It’s not clear to me that this does bypass the GDPR. The GDPR requires informed consent to tracking. It sounds like this uses intentionally misleading text so will not constitute informed consent. It’s then a question of whether it counts as tracking. Google is potentially opening up some huge liability here because the GDPR makes you liable for collecting PII in anonymised form if it is later combined with another data set to deanonymise it.
I’d agree with that if it worked for Microsoft, Apple, Samsung, Sony (and Google). We need more than regulation; we need a cultural shift away from things like Google Chrome being the “defacto” for the Web. We have to get people understand that they have choice.
I would say regulation absolutely worked on Microsoft. A key part of why Google was able to succeed in the early 2000s was Microsoft was being very careful after losing a major anti-trust action. I was at Google at the time and I was definitely worried that Microsoft would use its browser or desktop dominance to crush the company. It never did but I’m confident it would have without the anti-trust concern.
All regulations end up the same way. Simply walking around it. Paying consultants to figure out the legal way. The biggest players will find the way, and the poorest and smallers players will die out. And that’s one of the ways how you can create a monopoly.
I just arrived in another EU country, and thanks to the derided regulation I can call and use mobile internet at the same pricing as at home. This means it’s easier for me to search for transport, lodging etc. to the benefit of both me and the providers of these services. The ones losing out are the telecom operators, who have to try to compete on services instead of inflated fees for roaming.
I’m not “deriding” regulations. I simply question the motives that are used when creating them. Maybe it’s because of the legacy of “central-planned economy” I was subjected to.
Also I think you’ve just given an example of a company in a sector that requires an explicit permission from the government to be able to even start the business.
It’s not true that large companies always find a way to bypass legislation or that regulation is always anti-competitive in any interesting sense.
Large companies can often work around regulations, but sometimes they clearly lose and regulation is passed and enforced that hurts their interests. E.g. GDPR, pro-union laws, minimum wages, etc.
Yes, richer and more powerful players are usually more likely to survive a financial hit. That’s not a feature of regulation. That’s a feature of capitalism: power and money have exponential returns (up to a point).
It has to be fixed with redistributive policies, not regulation.
Also, mobile telecoms consume a finite public good (EM spectrum noisiness in an area). They’re a natural target for public regulation. I don’t think that’s really a problem, tho I would prefer if public control was not state control.
It’s not true that large companies always find a way to bypass legislation or that regulation is always anti-competitive in any interesting sense.
I disagree. Companies will always try, they may not succeed. In particular, if the cost of complying with regulations is lower than the cost of finding work arounds, then they will comply. This is part of the reason that the GDPR sets a limit on fines that is astronomical: the cost of trying and failing to work around the GDPR is far lower than the cost of complying.
I’m a bit confused. I didn’t say anything about companies trying or not. I agree with all of your post except the bit about the GDPR fine limit, which I think is probably high enough (4% of global turnover) to exceed the benefits of non-compliance in most cases.
I don’t want to get into the Keynes vs. von Hayek (although if redistribution is involved then maybe we should include Marx) dispute regarding whether regulations are good or bad, because the moderator removes threads related to politics, and I don’t want him to remove this one.
(also I’m not sure we can convince each other to our point of view)
I did stop using Chrome, a long time ago. But, if my frontend colleagues are any indication, a deep hostility toward non-Chrome browsers is rampant among the people who are responsible for supporting them. And more and more teams just don’t bother. I would prefer not to have Chrome installed at all, but I have to because many websites that I don’t have a choice about using (e.g., to administer my 401(k), to access government services, to dispute a charge on my credit card) just flat-out don’t work in anything else.
You might have some luck reporting such issues to the responsible government agencies. They don’t usually write the sites themselves but contract the work out. The clerk will usually just forward your complaint to the supplier who will gladly bill the additional work.
The problem is systemic - if they don’t test it except with Chrome, they might fix the “one time issue” only for it to break the next time around they make some larger change.
Depending on the jurisdiction, supporting a single vendor’s product with public money may be illegal. It’s a direct subsidy on Google. Whether a particular state / national government can subsidise Google without violating laws / treaties varies, but even in places where they can they typically have to do some extra process. If you raise the issue as a query about state subsidy of a corporation then you may find it gets escalated very quickly. If it was approved by an elected person then they may be very keen to avoid ‘candidate X approved using taxpayer money to subsidise Google, a corporation that pays no tax in this state’ on PSAs in the next election.
I doubt any regulator would perceive “failed to test a web application in minority browsers” as a subsidy. Maybe if they specifically developed an application that targeted that specific proprietary vendor’s stack.
But I imagine a public organization such as a library building a virtual environment to be used specifically in VR Chat to target young audiences as a part of promotional strategy would be perceived as completely mostly fine.
In Czechia, government purchased several (pretty important, duty declarations for example) information systems that were only usable with Microsoft Silverlight. They are still running, by the way. As far as I know, the agencies were not even fined for negligence.
Most people out of IT treat large tech companies like a force of nature, not vendors.
I read a very apt quote[1] on HN a month ago, about how much Google values Chrome users thoughts, which directly relates to people complaining, but then continuing to use it:
Chrome user opinion to them is important to their business in about the same way meatpackers care about what cattle think of the design of the feeding stations. As long as they keep coming to eat, it’s just mooing.
Stop using Chrome if you don’t like its unilateral decision making.
More like make sure you convert everyone around you as well. If you have any say in your company policy, just migrate your office staff to Firefox. Make sure to explain to your family and convert them as well. uBlock on mobile Firefox should help to ease some conversion there as well.
Recently I have been travelling quite a bit and I could appreciate the fact to pay for bus/metro rides or coffee/beers around just with contactless technology. Apple/Google/Samsung-Pay based systems require actively unlocking your tech device and this generates some slow-down in the payment process.
Coffee and beers, yes, but for transit no unlocking is required, at least with Apple Pay and the NYC subway and bus systems. You just hold it next to the reader and it beeps, no other interaction required.
If you follow this reasoning to its logical conclusion, E2E encryption is impossible since there will always be some software doing the encryption for you, and said software is part of the threat model so the distributor of said software is part of what the encryption is supposed to protect against.
For example, PGP is incoherent because the PGP program is performing the encryption, thus you have to trust PGP’s developers, the distro or website you downloaded it from, your toolchain if you built it yourself … all of whom are part of the threat model.
It’s kind of a reducto ad absurdam. Perfect security is impossible. E2E is more secure because it reduces the number of points of compromise. Yes, the JS code you downloaded from the website could be secretly sending cleartext or using a backdoored algorithm or whatever; but assuming that code isn’t malicious, you do eliminate the much larger security problem of people with access to the server being able to see the cleartext, a gaping hole that gets exploited pretty often in real life.
Not quite, the article is arguing that E2E is incoherent when it’s protecting from the distributor itself. PGP is protecting you from someone else that does not distribute PGP.
But there isn’t one distributor, there are hundreds or thousands of distributors involved in any meaningful software execution today, many of whom you cannot even be aware of (for example, the person who distributed the compiler used by the packager of the PGP binary you are running). You don’t get to pick your adversary. PGP could be compromised in its source, during its compilation, during its physical distribution over the network, by a hostile OS or runtime environment, etc. etc.
The author of this piece should read “Reflections on Trusting Trust.” All of this stuff is a matter of degree; web cryptography is not unique in that regard, nor does that mean that it’s “snake oil.”
Perfect home security is impossible because even if you lock your doors, Yevgeny Prigozhin can lead a private army to your house and knock down the wall with a tank. Security is always relative.
As a long time SPA apologist and licensed reverend of the church of tiny backends, I find this genuinely difficult to follow. What is “hypermedia” even? A tree with some implied semantics? How is that different than any other data? Why should I be constructing it on the backend (that place that knows comparatively nothing about the client)?
The back button has been solved for over a decade.
The complexity of “the backend has to care what things look like” is also enormous.
Theres talk of longevity and churn, but I’m pretty sure if I wrote hx-target=... in 2012, I would not get the desired effect.
I haven’t managed state on a server beyond session cookies and auth in ages.
I saw a computer from 20 years ago use the internet just fine last weekend, and it needed some horrifying reverse proxy magic to make a secure connection, so “I’m using HTTPS” and “I’m supporting old hardware/OSs” is a contradiction anyway because decrypting HTTPS is more computationally intense than doom, and it’s also a moving target that we don’t get to pin. The end result is that if you can securely exchange information with a browser, it’s not ancient enough to need more than a few servings of polyfills to run a reasonably modern app.
React is the currently popular thing that makes stuff go vroom on the screen, so of course a lot of people make it more complicated than it needs to be, but like… remember early 2000s PHP CMSs? Those weren’t better, and if you did those wrong it was a security issue. At least a poorly written react UI can’t introduce a SQL injection.
To each their own, but I don’t get it 🤷♀️. I also don’t get how people end up with JS blobs bigger than a geocities rainbow divider gif, so maybe I’m just a loony.
Anything can be done wrong, and the fact that popular tools are used wrong often and obviously seems like a statistical inevitability, not a reason to try to popularize something different.
not a reason to try to popularize something different.
Why would you prevent people to popularize anything that actually solves some problems? Isn’t having choice a good thing? I’m this author of this talk about a React->htmx move, and I’m completely freaked out by how many people have seen my talk, as if it was a major relief for the industry. I am also amazed, when hiring young developers, by how most of them don’t even know that sending HTML from the server is possible. Javascript-first web UI tools have become so hegemonic that we need to remind people that they have been invented to tackle certain kind of issues, and come with costs and trade-offs that some (many? most?) projects don’t have to bear. And that another way is possible.
Anything can be done wrong, and the fact that popular tools are used wrong often and obviously seems like a statistical inevitability,
Probably the statistics are way higher for technologies that carry a lot of complexity. Like I said in my talk, it’s very easy for JS programmers to feel overwhelmed by the complexity of their stack. Many companies have to pay for a very experienced developer, or several of them. And it’s becoming an impossible economical equation.
The complexity of “the backend has to care what things look like” is also enormous.
With htmx or other similar technologies, “what things look like” is obviously managed in the browser: that’s where CSS and JS run. Server-side web frameworks are amazingly equipped for more than a decade now to generate HTML pages and fragments very easily and serve them at high speed to the browser without the need of a JS intermediary.
young developers … most of them don’t even know that sending HTML from the server is possible
I am shocked and stunned every single time I talk to someone who doesn’t know this. And if they are interested, I explain a little bit about how the web server can return any data, not just json.
Hypermedia encapsulates both current object state and valid operations on it in one partially machine-readable and partially user-readable structure.
A lobsters page, for example, lists the link and comments (the current state) and has a definition of how to comment: you can type in text and post it to the server. After you do that, the system replies with the updated state and possibly changed new valid operations. These are partially machine-readable - a generic program that understands HTML* can see it wants text to post to a particular server point - and partially user-readable, with layout and English text describing what it means and what it does.
Notice that this is all about information the backend applications knows: current data state and possible operations on it. It really has nothing to do with the client… which is part of why, when done well, it works on such a wide variety of clients.
hypermedia doesn’t have to be html either, but that’s the most common standard
Exposing directly an internal database schema directly through your API is a bit dangerous as it creates a strong coupling with the storage. ORM, like Prisma or Graphile, has native integration with GraphQL pushing this idea further.
We went with GraphQL via Hasura, and while Hasura isn’t representative of GraphQL as a whole, Hasura’s base offering took about a year to turn from best practice to deprecated. In that time, we grew from 10 engineers to 20.
A “benefit” of GraphQL is that your frontend and backend engineers are more decoupled and can communicate less. However, this also means that backend engineers are not naturally motivated to understand frontend needs.
Our DB schema quickly became unergonomic for frontend consumers, yet because the DB schema was directly coupled with the frontend, we wrote repetitive ad-hoc data transformations all over the frontend to massage the GraphQL schema to a higher level data model.
So…don’t do that. The downside of any solution that turns your database into an api is that your database needs to be designed to present a good api. This is true whether or not you’re exposing a graphql or rest api.
It’s somewhat less painful if you’re doing real rest (as opposed to slashes-are-what-makes-rest), because the tables or views can be the resources and the mapping may be fairly natural.
However, this also means that backend engineers are not naturally motivated to understand frontend needs.
This seems like a problem in the organization not the technology. Are you all trying to deliver the same product? If yes, why aren’t you measuring the impact of the backend on the the performance of the frontend?
In my experience, wish-it-was-rest apis either force the frontend to make many calls or result in joining lots of redundant data onto endpoints, impacting speed.
For me, this wasn’t about whether a better result was possible; of course one is possible. But the happy path of Hasura led to the results that we got, and in deciding a path forwards, one of our evaluations was to do GraphQL better, without DB->API and with intentional modeling of resources and mutations. We decided to do something else instead.
This seems like a problem in the organization not the technology.
It is both a technological and organization problem; the challenges that came along with Hasura specifically weren’t a good match with our organization. We preferred a technological solution that intrinsically motivated better backend<>frontend working patterns instead of a solution that required additional work to motivate the desired outcomes.
We preferred a technological solution that intrinsically motivated better backend<>frontend working patterns
Partly, I feel that this does seem to reflect an organizational failure, and that introducing friction just to make developers do their damn job is wasteful and defeatist. On the other hand, there is something interesting about this scenario. I wonder if there is a name for it? I am often a proponent for introducing constraints that some might experience as friction in their day to day work, because I think it leads to better outcomes and less friction over time.
The first thing to do when exposing an API that is automatically mapped to a database (which I am a big believer in) is to set up a separate view-only schema (ie a schema containing only database views), and expose only the data that is relevant for the client, possibly with some denormalization and aggregation as needs arise.
we wrote repetitive ad-hoc data transformations all over the frontend
The place to do that is in the exposed specialized schema. The underlying schema with the actual tables remains decoupled from the client.
A “benefit” of GraphQL is that your frontend and backend engineers are more decoupled and can communicate less.
Only if you treat GraphQL as a way to mechanically expose a low-level interface (like your DB). So don’t do that!
We switched to GraphQL as part of a push to get frontend and backend working more closely. We’ve found the schema to be a very helpful locus of collaboration; when engineers from either side can propose interfaces using the shared language of the schema (which requires concreteness specificity), it’s much less common to end up in situations where the two teams are talking past each other.
we wrote repetitive ad-hoc data transformations all over the frontend to massage the GraphQL schema to a higher level data model.
If a schema requires its consumers to do this kind of transformation, I would argue that it’s not well-designed (for those consumers). Sounds like your GraphQL schema should have exposed a higher-level API to begin with. (A design goal for our GraphQL API is that the frontend should not need to do any significant reshaping of data it receives from the backend, and in particularly it should never need to do its own joining, filtering, or sorting.)
There are plenty of issues with GraphQL (weird and oddly-specified serialization, clunky and limiting type system) but so much of the criticism I see boils down to “sometimes people make bad APIs using GraphQL,” which, sure. Designing a good API is still a problem that requires human thought and intention; no technology is going to do that for you.
I believe a lot of the problems originated with our out-of-the-box use of Hasura, directly exposing the underlying schema. It was definitely not well designed. In our case the problem was that the bad API was the happy path, and that I believe is Hasura-specific, not GraphQL-specific.
The Apple ARM-chips are really great in terms of performance/Watt, but Apple, in my opinion, really dropped the ball in terms of software quality. I had been in the Apple ecosystem for years until I dropped it in 2012 when it became apparent that macOS was on a downward spiral from the excellence I had become used to.
The other operating systems/desktop environments in Windows and Linux can still learn quite a bit from macOS, but the latter is suffering from UI/UX inconsistencies and is unnecessarily locked down. While you could be relatively free 10 years ago with any software of your choice (especially OSS) and have rare breakage between system upgrades, you now have to fight with all kinds of gatekeepers and the system usually wrecks your whole setup with each upgrade.
This might be the main reason why fewer and fewer professionals choose Apple: It becomes less and less justified to pay the Apple tax the more you use your system for actual work.
I dropped it in 2012 when it became apparent that macOS was on a downward spiral from the excellence I had become used to
2012 was eleven years ago, and eight years prior to the introduction of the first MacOS devices running ARM. MacOS software quality has gone up and down over the years, but I don’t think “it sucked over a decade ago on a completely different architecture” is a very useful data point for assessing the quality of MacOS on an M2 machine today.
I have been using macOS as my primary desktop since 2007 (before that Linux and I had a 2 year part-time Linux excursion around 2018 or so). I would agree with the quality suffering after the terrible 2016 MacBooks until about 2019/2020 or so, but the last few releases have been great for me. (And it’s not like early macOS 10.5 or 10.6 releases didn’t have horrible bugs.)
Apple Silicon has been a huge step forward, my machines are lightning fast and last on battery for a long time. I also love the work that they are doing on system security, like sealed volumes, memory protection through the the secure enclave, etc.
With regards to the article, Apple Silicon provides great performance per watt compared to most GPUs. But for some reason people overhyped Apple Silicon GPUs and believe that Apple waved a magical wand and is suddenly competitive with NVIDIA performance-wise. The compute power of the M1 Ultra GPU is 21 TFLOPS, the tensor cores on an RTX 2060 Super are 57 TFLOPS and that’s a budget card from years ago. If you want to do machine learning, get a Linux machine and put an NVIDIA card in it. GPU training on Apple Silicon is currently only useful for small test runs (if the training process doesn’t burn in a fire due to the bugs that are still in the PyTorch/Tensorflow backends).
I use a MacBook desktop, because I get all the nice apps and an extremely predictable environment and use a headless Linux machine with a beefy NVIDIA GPU for training models.
This might be the main reason why fewer and fewer professionals choose Apple: It becomes less and less justified to pay the Apple tax the more you use your system for actual work.
I think he is kind of missing the point. If TikTok is banned from Google and Apple app stores, it will become less popular and won’t melt brains at the same rate. Sure it can be circumvented, but it is not a “terrible idea” with “intolerable” side effects. And sure, there will be other apps that replace TikTok; when that happens maybe it will be easier to argue for comprehensive rather than ad hoc regulation.
Also disappointed to see him arguing for “commerce” as an important bedrock value, and leaning on State Department talking points like Cuba being a “censorship-loving autocracy.” I suppose Schneier is still a good source on the technical side of things.
From here it looks like you missed the point. He says the effective bans would be terrible/intolerable. Then he points out that merely banning the apps would not be effective.
If you’re disappointed to learn that Schneier isn’t a hardline Leftist, you may have been mistaking him for someone else, maybe Noam Chomsky?
It’s not about being a “hardline Leftist.” It’s about parroting false propaganda. Cuba has public wifi hotspots that provide access to the open web and are not meaningfully firewalled. Its internet practices are nothing like China’s and Iran’s and it is an error of fact to claim that they are.
In strictly technical terms, that’s true, but… uh, how do I put it so that I don’t start a political flamewar again.
It’s very easy to underestimate how governments like the Cuban government can enforce these things if you haven’t lived under one. The Cuban government doesn’t use the exact same technical means that China uses partly because it has better, more easily-enforceable non-technical means to achieve its goals, and partly because it just doesn’t have the tremendous resources that the Chinese government has.
The two don’t belong together in terms of specific technical means (deep packet inspection firewalls) but that’s quite literally a technicality. I understand why it doesn’t look the same from a technical perspective, but take it from someone who’s familiar with that kind of legal climate – it’s pretty much the same.
I don’t really understand what you are alluding to. Cubans can and do routinely use mainstays of the open internet like Google, Wikipedia, Facebook, Reddit, and Youtube, all of which are blocked in China. Cuba does not employ any means–whether deep packet inspection, social pressure, mind control rays, or anything else–to prevent this.
I’m sorry, I’m not trying to be mysterious here :-(. I just don’t want to go there because the last time I did, I started a big flamewar and I really regret it. I know it comes off as pretentious. I’m just trying to stay away from the politics underneath it.
Let me try to state it in as non-political terms as I can, because I really think this is technically relevant, the way social engineering attacks are technically relevant for network security, even though they are a non-technical matter. Please don’t take any of this as a political statement. This is really not my intention.
If one’s goal is to ensure that some information doesn’t go through a censorship-resistant network (like the Internet), or that if it does, it at least doesn’t spread, there are more ways to do it than one. One is through tight content access control at the network layer – firewalling, strict control of telecom equipment etc.. Another is through tight information access and dissemination control, where one openly allows access at the network layer but ensures everyone stays away from information they want restricted, and that anyone who does not is at least unable to disseminate it easily. Both can be equally effective.
I don’t want to get into the “how” of it because I don’t think I can do that in a way that’s not open to political interpretation and this is not the place. All I want is to caution, based not just on specific technical and legal understanding of this particular matter, but also on my own experience, against a line of thought like “Internet access is effectively open, as it is not subject to firewall restrictions”. “Not subject to firewall restrictions” is one conotation of open, and it’s correct in this case. But many others are not, and “not subject to firewall restrictions” doesn’t automatically imply all the other ones.
Another is through tight information access and dissemination control, where one openly allows access at the network layer but ensures everyone stays away from information they want restricted, and that anyone who does not is at least unable to disseminate it easily. Both can be equally effective.
I don’t want to get into the “how” of it because I don’t think I can do that in a way that’s not open to political interpretation and this is not the place.
If this is not the place to explain your very political claim, maybe it’s also not the place to state it?
I don’t think what I stated is a political claim, otherwise I wouldn’t have stated it. I’ve strived to make sure that:
It’s not about a political current or doctrine.
It’s stated in generic terms, rather than political notions – i.e. in terms of how the flow of information can be restricted, not in terms of what information ought to be restricted or not, or if it should be restricted in the first place.
It doesn’t include my position on whether that is good or not.
I’m sorry if it made anyone uncomfortable, or if I didn’t keep my own views out of it as well as I should have. It wasn’t my intention.
Edit: just to clarify, I’m obviously not insensitive to the fact that this is all being said in a thread regarding a government’s policies. My remarks apply equally well to information access in any network environment, from schools to corporate networks. They are about the specific case being dicussed here only insofar as… this is literally what the topic is about. They aren’t – or at least I have no intention of them being – any more political than your own root post in this thread about Schneier “leaning on State Department talking points”.
I’m not aware of a taboo on political discussion, and the article is about government policy, so I didn’t see a problem with pointing out State Department talking points.
My issue with your statements is that they require more detail to evaluate – Is the Cuban government restricting the flow of information in a way that is comparable to network layer consorship, or in a way that exceeds what Western governments do? That would require going beyond generic statements that apply to literally every government, and explaining the non-technical means that you think are employed by the Cuban government. But you have refused to do saying it would cross a line into being too political.
I’m not aware of a taboo on political discussion, and the article is about government policy, so I didn’t see a problem with pointing out State Department talking points.
There is one. Just look at how many people have flagged this as off-topic.
I don’t know if you familiar with American-mass media or social networking, but there is a lot of easily-enforceable non-technical censorship at play. Its easy to handwave about some technical or non-techinal cencorship in Cuba but ff Iran or Cuba had the same ability to project propaganda as the US there would certainly be a great American firewall.
Apps like TikTok (or FB, Youtube, Twitter … ) rely on network effects to get their popularity. People use TikTok because their peers are on TikTok. Make it sufficiently hard to install (and yes, sideloading apks on a device is suffiently hard that most people won’t bother), and people will flock to the next ephemeral video platform.
Sure, it won’t prevent a dedicated person from installing TikTok on their phone - but most people won’t even want to.
Then the question becomes “should companies like Apple and Google be required to facilitate the installation of TikTok, and, if so, can the US govt require them not to?”. That question seems to revolve more about free trade/commerce than about free speech.
And I would wager that there are several clones to TikTok spinning up as we speak. They’ll use the same dark patterns to increase engagement that TikTok does, but at least one of them will be owned / controlled by a Western company, and thus be “acceptable” to the State Department.
All that’s missing is sourcing some content to start things off, and spending some millions on advertising to start to attract users.
The end result will be nearly the same amount of harm to the users, but with less spying by the CCP, and more spying by some Western companies.
These seem like two separate concerns to me. Unfortunately, we live in a time when companies can iterate quickly to make their products as addictive as possible.
Nearly the same amount of harm, but still less. There would still be a drop in addictive usage patterns before the new western TikTok becomes socially compulsory for teens. Could make a difference in the development of children who otherwise wouldn’t have a gap in that mode of interaction during their school years.
From here it looks like you missed the point. He says the effective bans would be terrible/intolerable. Then he points out that merely banning the apps would not be effective.
And sure, there will be other apps that replace TikTok; when that happens maybe it will be easier to argue for comprehensive rather than ad hoc regulation.
YouTube Shorts is already eating TikTok’s lunch in a lot of ways. The addiction-optimized-queue-of-clips format is almost certainly here to stay.
Am I… not the right audience for YouTube Shorts? I do watch a fair amount of YouTube, but these clips are mostly uninteresting to me. The best of them are just clips from channels I already subscribe to.
The one thing I want but don’t get with the Shorts is how old the video is. If I’m seeking news on The War, space and astronomy news, etc, I don’t want to look at something from last year or even six months ago. But since the Shorts don’t show the date, I’m mostly unlikely to click on them, and am usually unsatisfied when I do. I just looked in the Settings again, and don’t see a way to just hide those on the home screen.
You and me both! I’m basing my anecdote on what I’ve observed among friends and family, particularly those who are banned from using TikTok by their government and government-adjacent employers. I think it’s just very hard to fit genuinely interesting content into such a short clip, but presenting many such clips in rapid sequence is great for engaging that slot-machine-seeking hunger some people seem to have.
(Unless I’m misunderstanding your comment. If you’re implying that you were able to get what you wanted out of TikTok, teach me your ways! I’ve been trying and failing to get into it.)
I think as the intro implies this can be extended to machines and tools and maybe even further
I think in the context of computers in particular there’s a bit of a political problem where we force people to use them, sometimes by lawn, sometimes through society. They have to use computers, Smartphones and even certain apps.
At the same time we see a rise in scams and are surprised how people who might not even need or want this devices and only have them because they are forced to fill out some form online.
Some decades ago it was relatively easy to come by without almost any particular tool one can think of. You might be odd for it, but it allowed you to stop make use of your rights, etc.
Today you need apps to log in to your bank, websites to do your taxes, sometimes even the web to apply for elderly homes. And smartphones are pretty complex, and force you to fit example have or create an email address, require passwords, etc. You need to know how to use software, understand what the internet is, should have done concept of pop-ups, online ads, spam, updates, understand that there is no other person sitting on the other end right now and so on .
I think a lot of ruthlessness comes from this. Then even if you know about all of the above you end up like in Kafka’s The Trial and even if you know what things mean the processes behind the scenes for the vast majority of use cases will remain completely intransparent to you.
In a non automated/digitalized world is easy to ask quick questions and people who can ask other people handle exceptions. In the digital world one has to hope the developer has to have thought of it and handle it accordingly. If you are lucky there’s a support hotline but these seem to go away, especially for bigger so often more important companies
I see tools more on the morally neutral side, but I don’t think that’s the issue really. I don’t think computers are impressive but there’s an unintentional direction we move towards to whete things are forced upon people often thinking it’s a good thing when it’s at least debatable.
As a side note there’s certainly cases where things were done in the name of digitalization, progress, efficiency and things were just harder, slower, turned out to be less cost effective, less secure and required more real people to be involved
Of course these are the bad example, but given the adjective here is oppressive. Usually even in (working/stable) oppressive societies it works for most people most of the time. Things start to shift when it doesn’t for you many or there’s war. Only the ones not fitting in tend to have problems and while I would have titled it differently I think that is true for how computers are used that’s true today for all sorts of computers.
In a non automated/digitalized world is easy to ask quick questions and people who can ask other people handle exceptions.
In the land of unicorn and rainbows? ;)
From my experience, people in positions of “HTML form actions” absolutely aren’t inclined to answer any questions and handle exceptions, unless they have any real retribution to fear. Worse yet, it’s a rational behavior for them: they almost certainly will be reprimanded if they break the intended logic, so it’s much safer for them to follow its letter.
Just past month I had to file a certain application for a somewhat uncommon case. The humans responsible for handling them rejected it as invalid because my scenario wasn’t in their “cache” of common cases and they used the default “contact our parent organization” response instead of trying to handle it, and not even in a polite manner. I contacted the parent organization and, luckily, people there were willing to handle it and told me that my application was valid all along and should have been accepted, and that I should file it again.
I suppose the application form handlers received quite a “motivational speech” from the higher-ups because they were much more polite and accepted it without questions, but it’s still wasted me a lot of time traveling to a different city to file it and standing in lines.
It may be one of the more egregious example in my practice, but it’s far from unique. I very much prefer interacting with machines because at least I can communicate with them remotely. ;)
Your anecdote just demonstrates the author’s point; you had to escalate to a more-responsible human, but you successfully did so and they were able to accommodate the uncommon circumstances, even though those cirumstances were not anticipated by the people who designed the process. When was the last time you pulled that off with an HTML form?
They were anticipated by the people who designed the process. It’s just that their subordinates did a sloppy job executing the logic written for them by the higher-ups. If the higher-ups programmed a machine to do that, it wouldn’t fail.
And I got very lucky with the sensible higher-ups. It could have been much worse: in that particular case it was obvious who the higher-ups were and they had publicly-accessible contact information. In many other cases you may never even find out who they are and how to reach them.
I love that, and I wish more of the web worked that way, but it’s worth pointing out that the only reason it can work is because ultimately the input I put into that form gets interpreted by a human at the post office. It would not be possible to create a form for inputting an email address which would be as resilient to errors or omissions.
yes, and a lot of the information filled into the form doesn’t make sense to me – I just copy it on the envelope. It makes sense in peels as it is routed along: first country, then ZIP, then street, then name. That’s flexibility! Subsidiarity at work.
Some decades ago, here in the US, we were deep in the midst of making a large proportion of physical social institutions at best undignified and at worst somewhere between unsafe and impossible to independently access without ownership and operation of a dangerous, expensive motor vehicle, something unavailable to a significant proportion of the population that ruthlessly grinds tens of thousands of people a year into meat just here into the US.
I think this article is technically correct but in this particular case it might just not be quite the best kind of correct :-).
There are always going to be people who romanticize “the old way” but painting all criticism of Flatpak & friends as rose-tinted glasses is one of the reasons why Flatpak is six years old and still weird – this story is, ironically enough, on the frontpage along with this article.
(Disclaimer: I like Flathub, I think it’s a good idea, and I use it). But a half-realized idea of a better system is usually worse than a fully-realized idea of a worse system. Plenty of things break when installing stuff from Flathub and many applications there are minimally sandboxed, to the point where you might as well just install the bloody .deb if it exists. Filing all the breakage under “yeah users don’t need that” (font rendering bugs, themes etc.) or “well the next release of this particular Wayland compositor is going to support that” is the same kind of obliviousness to reality as “but Flatpak breaks the Unix philosophy”, just of a more optimistic nature.
This leads to a curious state of affairs that’s unsatisfying for everyone.
It’s certainly in the nature of FOSS software that things don’t happen overnight and software evolves in the open. But if you want to appeal to an audience (technical and non-technical) that’s wider than “people who contribute to FOSS desktop projects”, shipping half-finished implementations is not the way, whether it’s in the nature of FOSS or not. You can say that Linux is not a product but that won’t change the (entirely reasonable) expectation of this wider audience that it should at least work.
Meanwhile, elitism and gatekeeping are one unpleasant aspect of romanticizing the old ways but, elitism and gatekeeping aside, I think it’s important to be realistic and acknowledge that the old way works – as in, it allows you to install applications, manage, update and uninstall applications which work as intended, to a degree that Flatpak is still aspiring to. While some people may be yearning for the days when being a package maintainer granted you demigod status in cyberspace, I think it’s more realistic to assume that most people just aren’t willing to spend the extra troubleshooting hours on a system that doesn’t always deliver even the security guarantees it’s meant to deliver, and sometimes results in a functional downgrade, too.
Edit: oh, while we’re on the topic of rose-tinted glasses, it’s also worth keeping in mind that the goalposts have changed quite significantly since then, too. Lots of people today point out that hey, back in 2000 you’d have had to fiddle with XF86Config and maybe fry your monitor, why are you complaining about far softer breakage today? Well, sure, but the alternative back in 2000 – especially if you were on a CS student’s budget – was Windows Me (I’m being as charitable as “maybe fry your monitor” here, realistically it was probably Windows 98). You maybe fried your monitor but got things many computer users couldn’t even dream of in return, unless they were swimming in money to shed out on Windows 2000, Visual Studio and so on. The standard to beat is no longer Windows Me.
Especially true when you’re not interested in desktop but servers. I’m very happy that I know I can just apt install php apache and it’ll give me a working bundle. The same for everything built on top of this. Also debian does specify a release cycle by this. I won’t have to worry that my php 7.4 is completely outdated in the next month just because someone thought moving ahead to php 8 is the new flashy thing. No, it’ll certainly work for a long time on php 7.4 as that’s the current debian stable release. And that’s perfectly fine, I don’t have the time to upgrade all the time just because someone though it would be neat to use one feature of php8. Those “gatekeeper” also ship most of these services with very sane defaults (config, location of configs, systemd units,…).
Yeah that probably won’t work for the new release of $desktopapp, but it works flawless for the server environment.
No docker is not an answer. It’s a completely different way of operating stuff.
But a half-realized idea of a better system is usually worse than a fully-realized idea of a worse system.
Oh, wow, I could not disagree more strongly with this. Give me something that is functionally complete over something that is broken and half-baked but has some kind of vague conceptual superiority any day.
I remember trying Clojure a bit, and being super interested in a lot of the ideas of the language.
There is the universal quibbles about syntax (and honestly I do kinda agree that f(x, y) and (f x y) are not really much different, and I like the removal of commas). But trying to write some non-trivial programs in Clojure/script made me realize that my quibble with lisps and some functional languages is name bindings.
The fact that name bindings require indentation really messes with readability. I understand the sort of… theoretical underpinning of this, and some people will argue that it’s better, but when you’re working with a relatively iterative process, being able to reserve indentation for loops and other blocks (instead of “OK from this point forward this value is named foo”) is nice!
It feels silly but I think it’s important, because people already are pretty lazy about giving things good names, so any added friction is going to make written code harder to read.
(Clojure-specific whine: something about all the clojure tooling feels super brittle. Lots of inscrutable errors for beginners that could probably be mangled into something nicer. I of course hit these and also didn’t fix them, though…)
EDIT: OTOH Clojure-specific stuff for data types is very very nice. Really love the readability improvements from there
Interesting to hear this–indentation to indicate binding scope is one of the things I really miss when I’m using a non-Lisp. I feel like the mental overhead of trying to figure out where something is bound and where it’s not is much higher.
(I strongly agree on the state of Clojure tooling.)
Static or dynamic refers to whether the webserver serves requests by reading a static file off disk or running some dynamic code (whether in process or not). While the word “dynamic” can apply broadly to any change, reusing a term with a well-understood definition in this context to refer to unrelated changes like SSL cert renewal and HTTP headers is really confusing. Late in the article it refers to “the filesystem API used to host static files” so it’s clear the author knows the definition. It’s unfortunate that the article is written in this way; it’s self-fulfilling that misusing a clear and well-established term just results in confusion. Maybe a better metaphor for the points it’s trying to make would be Stewart Brand’s concept of pace layering.
Yeah I agree, I think the article is generally good, but the title is misleading.
My summary is “We should try to make dynamic sites as easy to maintain as static sites”, using sqlite, nginx, whatever.
The distinction obviously exists – in fact the article heavily relies on the distinction to make its point.
I agree with the idea of moving them closer together (who wouldn’t want to make dynamic sites easier to maintain?) But I think there will be a difference no matter what.
Mainly that’s because the sandboxing problem (which consists of namespace isolation and resource isolation) is hard on any kernel and on any hardware. When you have a static site, you don’t need to solve that problem at all.
We will get better at solving that problem, but it will always be there. There are hardware issues like Spectre and Meltdown (which required patches to kernels and compilers!), but that’s arguably not even the hardest problem.
I also think recognizing this distinction will lead to more robust architectures. Similar to how progressive enhancement says that your website should still work without JS, your website’s static part should still work if the dynamic parts are broken (the app servers are down). That’s just good engineering.
Funnily enough, sqlite + nginx is what I use for most of my smaller dynamic websites, usually wish a server process as well.
EDIT: Reading further, yeah, almost all of my side projects use that setup, outside of some Phoenix stuff, and I’ve definitely noticed those projects requiring not very much maintenance at all.
SQLite, at least, partially compensates via extensive testing, and a slow/considered pace of work (or so I understand). It’s the antithesis of many web-apps in that regard. And the authors come from a tradition that allows them to think outside the box much more than many devs, and do things like auto-generate the SQLite C header, rather than trying to maintain it by hand.
C and C++ can be used effectively, as demonstrated by nginx, sqlite, curl, ruby, python, tcl, lua and others, but it’s definitely a different headspace, as I understand it from dipping into such things just a bit.
For me, I don’t use nginx talking directly to SQLite, I just use it as a reverse proxy. It’s just that it makes it easy to set up a lot of websites behind one server, and using SQLite makes it easy to manage those from a data storage standpoint.
You articulated that without using expressions that would be inappropriate in the average office setting. I admire you for that.
The whole act of reusing a common, well-understood content-related term to instead refer to TLS certs and HTTP headers left me ready to respond with coarse language and possibly question whether OP was trolling.
The idea that maybe we’re comparing a fast layer to a slow layer is somewhat appealing, but I don’t think it quite fits either. I think OP is muddling content and presentation. Different presentations require differing levels of maintenance even for the same content. So if I publish a book, I might need to reprint it every few hundred years as natural conditions cause paper to degrade, etc. Whereas if I publish the same content on a website, I might need to alter the computer that hosts that content every X days as browsers’ expectations change.
That content doesn’t change. And that’s what we commonly mean when we say “a static website.” The fact that the thing presenting the content needs to change in order to adequately serve the readers doesn’t, in my view, make the content dynamic. And I don’t think it moves it from a slow layer to a faster one either.
This is a reasonable criticism, but I think it’s slightly more complicated than that — a collection of files in a directory isn’t enough to unambiguously know how to correctly serve a static site. For instance, different servers disagree on the file extension → mimetype mapping. So I think you need to accept that you can’t just “read a static file off disk”, in order to serve it, you also need other information, which is encoded in the webserver configuration. But nginx/apache/etc let you do surprisingly dynamic things (changing routing depending on cookies/auth status/etc, for instance). So what parts of the webserver configuration are you allowed to use while still classifying something as “static”?
That’s what I’m trying to get at — a directory of files can’t be served as a static site without a configuration system of some sort, and actual http server software in order to serve a static site. But once you’re doing that sort of thing, how do you draw a principled line about what’s “static” and what isn’t?
Putting a finer point on the mimetype thing, since I understand that it could be seen as a purely academic issue: python2 -m SimpleHTTPServer and python3 -m http.server will server foo.wasm with different mimetypes (application/wasm and application/octet-stream) Only the wasm bundle served by the python3 version will be executed by browsers, due to security constraints. Thus, what the website does, in a very concrete way, will be dependent not on the files, but on the server software. That sounds like a property of a “dynamic” system to me — why isn’t it?
You could say, ok, so a static website needs a filesystem to serve from and a mapping of extensions to content types. But there are also other things you need — information about routing, for instance. What domain and port is the content supposed to be served on, and at what path? If you don’t get that correct, links on the site likely won’t work. This is typically configured out of band — on GitHub pages, for instance, this is configured with the name of the repo.
So you need a extension to mimetype mapping, and routing information, and a filesystem. But you can have a static javacsript file that then goes and talks to the sever it was served from, and arbitrarily changes its behavior based on the HTTP headers that were returned. So really, if you want a robust definition of what a “static” website is, you need to pretty completely describe the mapping between HTTP requests and HTTP responses. But isn’t “a mapping between HTTP requests and HTTP responses” just a FP sort of way of describing a dynamic webserver?
If you disagree with some part of this chain of logic, I’m curious which part.
All the configuration parts and “dynamic” nature of serving files in a static site are about that: serving them, how the file gets on my computer. But at the end of the day, with a static site the content of the document I get is the same as the content on the filesystem on the server. And with a dynamic site it is not. That is the difference. It’s about what is served.
All this talk about mime types and routing just confuses things. One can do the same kinds of tricks with a file system and local applications. For instance: changing the extension, setting default applications, etc. can all change the behavior you observe by opening a file. Does that mean my file system is dynamic too? Isn’t everything dynamic if you look at it that way?
It seems very odd to be talking about whether or not WASM gets executed to make a point about static websites.
When the average person talks about a static site, they are talking about a document-like site with some HTML, some CSS, maybe some images. Yes, there may be some scripting, but it’s likely to be non-essential to the functionality of the site. For these kinds of sites, in practice MIME types are basically never something you as the site author will have to worry about. Every reasonable server will serve HTML, CSS, etc. with reasonable MIME types.
Sure, you can come up with some contrived example of an application-like site that reliant on WASM to function and call it a static site. But that is not what the phrase means in common usage, so what point do you think you are proving by doing so?
What about that is “misconfigured”? It’s just configuration, in some cases you might want all files to be served with a particular content type, regardless of path.
My point is that just having a set of files doesn’t properly encode the information you need to serve that website. That, to me, seems to indicate that defining a static site as one that responds to requests by “reading static files off disk” is at the very least, incomplete.
I think this discussion is kind of pointless then.
Ask 10 web developers and I bet 9 would tell you that they will assume a “normal” or “randomly picked” not shit webserver will serve html/png/jpeg/css files with the correct header so that clients can meaningfully interpret them. It’s not really a web standard but it’s common knowledge/best practice/whatever you wanna call it. I simply think it’s disingenious to call this proper configuration then and not “just assuming any webserver that works”.
I found your point (about the false division of static and dynamic websites) intuitive, from when you talked about isolation primitives in your post. (Is a webserver which serves a FUSE filesystem static or dynamic, for example? What if that filesystem is archivemount?)
But this point about MIME headers is also quite persuasive and interesting, perhaps more so than the isolation point, you should include it in your post.
Given this WASM mimetype requirement, what happens when you distribute WASM as part of a filesystem trees of HTML files and open it with file://? Is there an exception, or… Is this just relying on the browser’s internal mimetype detection heuristics to be correct?
Yeah, I probably should have included it in the post — I might write a follow up post, or add a postscript.
Loading WASM actually doesn’t work from file:// URLs at all! In general, file:// URLs are pretty special and there’s a bunch of stuff that doesn’t work with them. (Similarly, there are a handful of browser features that don’t work on non-https origins). If you’re doing local development with wasm files, you have to use a HTTP server of some sort.
Loading WASM actually doesn’t work from file:// URLs at all!
Fascinating! That’s also good support for your post! It disproves the “static means you can distribute it as a tarball and open it and all the content is there” counter-argument.
This is for a good reason. Originally HTML pages were self-contained. Images were added, then styles and scripts. Systems were made that assumed pages wouldn’t be able to just request any old file, so when Javascript gained the ability to load any file it was limited to only be able to load files from the same Origin (protocol + hostname + port group) to not break the assumptions of existing services. But file:// URLs are special, they’re treated as a unique origins so random HTML pages on disk can’t exfiltrate all the data on your drive. People still wanted to load data from other origins, so they figured out JSONP (basically letting 3rd-party servers run arbitrary JS on your site to tell you things because JS files are special) and then browsers added CORS. CORS allowed servers to send headers to opt in to access from other origins.
WebAssembly isn’t special like scripts are, you have to fetch it yourself and it’s subject to CORS and the same origin policy so loading it from a file:// URL isn’t possible without disabling security restrictions (there are flags for this, using them is a bad idea) but you could inline the WebAssembly file as a data: URL. (You can basically always fetch those.)
What domain and port is the content supposed to be served on, and at what path? If you don’t get that correct, links on the site likely won’t work.
These days when getting a subdomain is a non-issue, I can’t see why anyone would want to use absolute URLs inside pages, other than in a few very special cases like sub-sites generated by different tools (e.g. example.com/docs produced by a documentation generator).
I also haven’t seen MIME type mapping become a serious problem in practice. If a client expects JS or WASM, it doesn’t look at the MIME type at all normally because the behavior for it is hardcoded and doesn’t depend on the MIME type reported by the server. Otherwise, for loading non-HTML files, whether the user agent displays it or offers to open it with an external program by default isn’t a big issue.
MIME bites you where you least expect it. Especially when serving files to external apps or stuff that understands both xml and json and wants to know which one it got. My last surprise was app manifests for windows click-once updates which have to have their weird content-type which the app expects.
If a client expects JS or WASM, it doesn’t look at the MIME type at all normally because the behavior for it is hardcoded and doesn’t depend on the MIME type reported by the server.
This is incorrect. Browsers will not execute WASM from locations that do not have a correct mimetype. This is mandated by the spec: https://www.w3.org/TR/wasm-web-api-1/
You might not have seen this be a problem in practice, but it does exist, and I and many other people have ran into it.
Thanks for the pointer, I didn’t know that the standard requires clients to reject WASM if the MIME type is not correct.
However, I think the original point still stands. If the standard didn’t require rejecting WASM with different MIME types but some clients did it on their own initiative, then I’d agree that web servers with _different but equally acceptable behavior could make or break the website. But if it’s mandated, then a web server that doesn’t have a correct mapping is incorrectly implemented or misconfigured.
Since WASM is relatively new, it’s a more subtle issue of course—some servers/config used to be valid, but no longer are. But they are still expected to conform with the standard now.
The other replies have explained this particular case in detail, but I think it’s worth isolating the logical fallacy you’re espousing. Suppose we believe that there are two distinct types of X, say PX and QX. But there exist X that are simultaneously PX and QX. Then those existing X are counterexamples, and we should question our assumption that PX and QX were distinct. If PX and QX are only defined in opposition to each other, then we should also question whether P and Q are meaningful.
The abilities to dump and restore a running image, and to easily change everything at runtime, are the two biggest things I miss about Common Lisp. It feels barbaric now when I have to restart a JVM to pick up classpath changes, or when I have to wait a minute on startup for everything to get evaluated instead of just resuming a saved image instantly.
I finally got a long-sought promotion, but it was a bit of Pyrrhic victory as it came with only a 3% raise–much less than I’d been led to expect.
Had to move unexpectedly after our lease wasn’t renewed, and rent went up
my wife had a really rough year of job hunting
got the freedom to run with a crazy idea at work for a while. I don’t know if we’ll end up shipping it or not, but it’s been a great experience that has developed our team’s capacity and if nothing else it will provide a point of comparison to other potential solutions. This work was a refreshing change from the feature-factory treadmill I’d gotten stuck on for the past year or so.
Rode my bike a lot, camped a lot.
2022:
I need to look for a new job. I’ve been procrastinating here because I enjoy my team and my work, but I’ve been getting jerked around on comp for two years now and it’s clear that this place is never going to value me.
Some exposure to the “modern” front-end development world this year at work (I am primarily a backend programmer) has made me want play with alternatives like HTMX, Hyperscript, etc. React/Apollo/GraphQL can’t be the best we can do.
SQLite is my go-to for small to medium size webapps that could reasonably run on a single server. It is zero effort to set up. If you need a higher performance DB, you probably need to scale past a single server anyway, and then you have a whole bunch of other scaling issues, where you need a web cache and other stuff anyway.
Reasons not to do that are handling backup at a different place than the application, good inspection tools while your app runs, perf optimization things (also “shared” memory usage with one big dbms instance) you can’t do in sqlite and the easier path for migrating to a multi-machine setup. Lastly you’ll also get separation of concerns, allowing you to split up some parts of your up into different permission levels.
If I’m reading that right you’ll have to implement that into your application. postgres/mariadb can be backed up (and restored) without any application interaction. Thus it can also be performed by a specialized backup user (making it also a little bit more secure).
As far as I know, you can use the sqlite3 CLI tool to run .backup while your application is still running. I think it’s fine if you have multiple readers while one process is writing to the DB.
Ok but instead of adding another dependency that solves the shortcomings of not using a DBMS (and I’ll also have to care about) I could instead use a DBMS.
OK, but then you need to administer a DBMS server, with security, performance, testing, and other implications. The point is that there are tradeoffs and that SQLite offers a simple one for many applications.
Not just that, but what exactly are the problems that make someone need a DBMS server? Sqlite3 is thread safe and for remote replication you can just use something like https://www.symmetricds.org/, right? Even then, you can safely store data up to a couple of terabytes in a single Sqlite3 server, too, and it’s pretty fault tolerant by itself. Am I missing something here?
My experience has been that managing Postgres replication is also far from easy (though to be fair, Amazon will now do this for you if you’re willing to pay for it).
SymmetricDS supports many databases and can replicate across different databases, including Oracle, MySQL, MariaDB, PostgreSQL, MS SQL Server (including Azure), IBM DB2 (UDB, iSeries, and zSeries), H2, HSQLDB, Derby, Firebird, Interbase, Informix, Greenplum, SQLite, Sybase ASE, Sybase ASA (SQL Anywhere), Amazon Redshift, MongoDB, and VoltDB databases.
This seems quite remarkable - any experience with it?
Where do you see the difference between litestream and a tool to backup Postgres/MariaDB? Last time I checked my self-hosted Postgres instance didn’t backup itself.
You have a point but nearly every dbms hoster has automatic backups and I know many backup solutions that automate this. I am running stuff only by myself though (no SaaS)
No, it’s fine to open a SQLite database in another process, such as the CLI. And as long as you use WAL mode, a writer doesn’t interrupt a reader, and a reader can use a RO transaction to operate on a consistent snapshot of the database.
(I wonder how many good companies that Accelerate book is going to kill before engineering managers move on to the next shiny object.)
More on-topic … this article seems to set up a false dichotomy between E2E tests and unit (or component) tests. Integration tests which exercise the whole system can be fast and not flaky if you replace external service dependencies like queues, HTTP transport, etc. with synchronous, in-process components.
Did you know that it’s possible today to create something for your browser that works like a native app on your device?
This is categorically false. It is a marketing fiction spread by those who want to develop their apps on the cheap, and, ok, fine. But PWAs do not work anything like native apps from the perspective of the end user, and acting otherwise is just gaslighting those users.
I assume most people here that use Ubiquiti have disabled remote access to devices if they haven’t already.
Legal overrode the repeated requests to force rotation of all customer credentials, and to revert any device access permission changes within the relevant period
I’m struggling to see how this is good advice. Was it really to protect the stock value (rotating would reveal something bad happened and open it up to questions)? Even that is short sighted.
A comment from a former employee lifted from the HN thread:
While I was there, the CEO loved to just fly between offices (randomly) on his private jet. You never knew where he’d pop up, and that put everybody on edge, because when he was unhappy he tended to fire people in large chunks (and shut down entire offices).
This seems consistent with some Glassdoor reviews; for example:
No one is safe here. you expendable just like the trashbag in your garbage can. owner gives unreasonable goals and when not met, he fires. upper management/cfo like money and rjp [Robert J. Pera, the CEO] clout over the product. over the consumer experience. the company morale is everyone tries to fly under RJP’s radar due to random firings. Upper Management is number people, worried about the stock more than employees and the product. Very muddy project mangement and very foggy leadership. No one really knows where the ship is sailing. Everyone is on the same ride trying to avoid a wreck at the same time avoiding RJP.
The company is a one-man show who completely ignores people value.
You are being questioned, demoralized and you even don’t believe your skills in the end.
No feedback, no HR, no planning.
Incredibly toxic culture where most people would rather not have to deal with the CEO at all (“be invisible”) due to his behaviour and complete lack of respect towards his employees. I have witnessed or experimented a lot of what you can see in the other negative reviews on this site.
This may vary from office to office, but there doesn’t seem to be a general HR department. If the CEO is being disrespectful or abusive, who can you complain to, really?
And a bunch more.
Seems like the owner/CEO is just a twat that everyone is afraid of, and for good reasons too. This kind of company culture incentives the wrong kind of decision-making; from a business, ethical, and legal perspective. It’s no surprise that whistleblower “Adam” wants to remain anonymous.
It’s all a classic story repeated untold times over history innit? People will go to great lengths to avoid strong negative consequences to themselves, whether that’s a child lying about things to avoid a spanking, a prisoner giving a false confession under torture, or an employee making bad decisions to avoid being fired. We only have several thousand years of experience with this so it’s all very new… Some people never learn.
This kind of company culture incentives the wrong kind of decision-making; from a business, ethical, and legal perspective.
Indeed, and it makes its way right into the product too; you can tell when release feature quantity is prized over quality. This honestly explains more than I thought it could about my experience with their products so far — they feel so clearly half-baked, in a persistent, ongoing sense.
I never even heard of Ubiquiti until a few days ago when there was a story on HN that their management interface started displaying huge banner ads for their products – I just use standard/cheap/whatever’s available kind of hardware most of the time so I’m not really up to speed with these kind of things. Anyway, the response from that customer support agent is something else. The best possible interpretation is that it’s a non-native speaker on a particularly bad day: the wife left him yesterday, the dog died this morning, and this afternoon he stepped on a Lego brick. But much more likely is that it’s just another symptom of the horrible work environment and/or bad decision making, just like your meh experience with their products.
Yeah, I had similar experiences with Ubiquiti stuff–I bought it because I liked the idea of separating routing and access point functionality, but it never stopped being flaky. After the last time throughput slowed to a crawl for no reason I got a cheap TP-Link consumer router instead and I haven’t had to think about it once.
Great write-up, I had no idea the REPL of lisp/smalltalk was so powerful. I need to get around to learning clojure.
I think the elixir* REPL fits the bill for the most part - if I start up one iex instance and connect to it from another node I can define modules/functions and they show up everywhere. And for hot-fixing in production one can connect to a running erlang/elixir node and fix modules/functions on the REPL live, and as long as the node doesn’t get restarted the fix will be there.
* erlang doesn’t quite fit the bill since one can’t define modules/functions on the REPL, you have to compile them from the REPL.
Does Clojure actually have these breakloops though? I think I’ve seen some libraries that allow doing parts of it (restarts), but isn’t the default a stacktrace and “back to the prompt”?
Well, prompt being the Clojure repl, but you’re correct that the breakloop isn’t implemented, as far as I got in the language. You can must implement the new function and re-execute, so you lose all of the context previous to the break. I think with all of the customizability of what happens when a stack trace happens, it’s possibly possible.
I THINK the expected use with Clojure is to try to keep functions so small and side effect free that they are easy to iterate on in a vacuum. Smalltalk and CL have not doubled down on functional and software transactional memory like Clojure has. That makes this a little more nuanced than “has/doesn’t have a feature”.
You’re correct. Interactivity and REPL affordances are areas where Clojure–otherwise an advancement over earlier Lisps–really suffers compared to, for instance, Common Lisp. You don’t have restarts, there is a lot you can’t do from the REPL, and it’s easy to get a REPL into a broken state that can’t be fixed without either a full process restart or using something like Stuart Sierra’s Component to force a full reload of your project (unless you know a ton about both the JVM and the internals of the Clojure compiler). You also can’t take a snapshot of a running image and start it back up later, as you can with other Lisps (and I believe Smalltalk). (This can be useful for creating significant applications that start up very quickly; not coincidentally, Clojure apps start up notoriously slowly.)
From running, enduring, and observing several rounds of hiring:
All of these of course have exceptions, of course–if you spent a few years at a defense contractor I’m not going to be too surprised if you don’t have a lot of public source code.
As you point out, there are many employers who
It’s not at all limited to defense contractors.
The larger problem with using public code as a signal is that it puts people at a disadvantage if they don’t have the time or energy to publish projects outside of work. Lots of people have caregiving responsibilities that don’t leave them time for outside-of-work work, and a hiring process that values a well-stocked GitHub profile implicitly devalues parents and other groups.
I read it charitably as “usually” and “signal” doing a lot of heavy lifting. I.e., no public code won’t instantly disqualify a candidate, but will be a nail in the coffin if there are other negative signals. Which I think is valid.
Right, so in a heads to heads comparison between two candidates you’ll choose the one without kids? Or you’ll favor the young one over the older, because the older one “can’t show what code they’ve been writing because of having an actual job” whereas the young one can more easily point to work done in public recently?
Like you understand “can have publicly listed code” is going to be significantly biased by age, right?
Similarly the way a lot of women are treated on line means many intentionally limit their public presence, so I suspect you’ll get gender bias there as well.
Sounds convenient!
The problem with @friendlysock’s approach with regards to the public code is that a lack of a positive signal is not the same as a negative signal.
Lacking a positive signal means that the things you could have learned (in this case: code quality, motivation to code off of work hours, etc) you have to learn from another way.
A negative signal is something that is either an instant disqualification (a belligerent public online persona) or something that needs to be combatted by more positive signals (a spelling error on the resume might be mitigated by a long-standing blog that communicates clearly).
For most companies/positions, using lack of a Github profile shouldn’t be considered a negative signal unless the position is something like “Open Source Developer Evangelist”.
And I agree with @olliej’s reply below that a lack of a Github profile isn’t a great filtering measure, even if you are so flooded by resumes that you need some kind of mass filtering measure. Here are some reasons I wouldn’t use it as a first filtering mechanism:
Bingo. In practice, I almost always ask about it–some people just have private hosting or whatever, or have some other reason.
The thing I also think a lot of people miss is: I had over a thousand (no joke, 1e3) applicants for a junior position I opened up. When you are trying to plow through that many applicants, applicants without easy code to show their talent are automatically lower priority in the heap than those with.
… so you looked all the code from those folk, or you just went “does/does not have a GitHub profile” as a filter?
Again, this seems like a really good way to discriminate against people who are lower income, have families, etc. Not intentionally, just that that is the result of such filtering.
For example, when I was at uni there was a real sharp divide between people who did open source work and those who did not, and it was super strongly correlated with wealth, and not “competence”. It’s far easier to do code beyond your assignments and whatnot if you don’t also have essentially a full-time job, or you don’t have children to care for, etc. The person that I would say was the single best developer in my uni’s CS department was also working pretty much every hour outside of uni for his entire time there. By your metric they would be worse than one of the people in my year, who I would argue was far below in competence but did have a lot of open source code and “community involvement” because his family was loaded.
This reminds me of the discussions about how screening résumés with names removed to prevent bias still end up failing because you can tell so much from other clues like someone playing lacrosse in college, or that they went to a HBCU or an all-women’s college, etc.
Notes for people outside the USA, a HBCU is a “historically Black college or university”.
Software development is a qualified job – you have to invest something (your time at first) before you can earn money. You read books, follow tutorials, discuss things with more experienced folks, study at university, do your own projects, study existing free software and contribute to it, get some junior job or internship etc. This is all part of preparing for a more qualified job.
How does a university degree requirement differ from taking your own public projects into consideration? Both cost you your time. (not mentioning that diploma is often a mandatory requirement while own projects are just softly appreciated + getting a diploma is a much larger investment than writing and publishing some code, the entry-barrier in IT is very low, compare it also to other fields).
If I ask a candidate: show me a photo of your bookshelf (or list of eBooks), tell me something about your favorite books that helped you grow professionally or tell something about an article you read and that opened your eyes… do you think that it is also bad and discriminatory? Because not everyone has time to study books and read articles…
Another aspect is enthusiasm. The abovementioned activities are not done intentionally to look good for future employer, but because you like them, find them entertaining or enriching.
Then you’re rejecting a lot of excellent people for no good reason. Many (most?) jobs don’t let you publish your work code, put restrictions your ability to contribute to OSS projects, and consider code developed by employees to be there’s (e.g. you need special permission to publish anything). This is in no way restricted to defense contractors, in my experience this is the norm for any case where your job is not explicitly working on OSS software. You may philosophically disagree with these employer’s policies but that still the reality for most developers.
I agree with this. Like programing in MUMPS its not usual to public the code because the type of bussiness
The older I get the weirder this idea seems: evaluating someone for a paid position based on the quality and quantity of work they do outside of the time that they’re paid to do a job as a professional. Does any other profession work this way?
Nobody asks accountants to show audits they’ve run or tax forms they’ve filed in their spare time for fun.
Nobody asks civil engineers to have a portfolio of bridges they built as hobby projects.
Nobody should ask developers to have a “GitHub résumé”.
But if your hiring an accountant and there’s one who runs audits for fun and has a blog with the places where they caught major errors in the audits that they did for fun, you can bet they’d be near the top of the hiring pile.
For a lot of other professions, (especially arts and engineering) there’s a concept of a portfolio: a curated set of work that you bring to the interview to talk through, and which you may be asked to provide up front. With software engineering, it!s easy to make your portfolio public so it can be used earlier in the hiring process.
Nobody has an expectation that accountants or many other professions will have professional-quality work done, for free, on one’s spare time, or suggests that the presence/absence of such should be a significant factor in hiring decisions.
Also, it’s not “easy to make your portfolio public” in software. Out of all the companies I’ve worked for across my entire career, do you know how many of them even have a listing of their main repositories public on a site like GitHub? One, and that was Mozilla. Every other company has been private locked-down repos that nobody else can see. I can’t even see former employers’ repos.
The only way to have a “portfolio” like you’re suggesting is thus to do unpaid work in one’s own free time. Which is not something we should expect of candidates and not something we should use as a way to compare them or decide between them.
In the time it took me to write my comments in this thread, I could’ve signed up for Github (or Gitlab or Bitbucket or whatever) and opened a new repository with a basic Sinatra, Express, or even Bash script demonstrating some basic skill. Hundreds of thousands of developers, millions probably, have done this–and it’s near standard practice for any bootcamp graduate of the last decade.
You don’t have to have a portfolio online. You don’t have to ever do any work that isn’t attached to a billable hour. Similarly, I also don’t have to take a risk on interviewing or hiring you when other people show more information.
This sounds more like a failure in your interviewing process than anything else.
So, look. I’ve run more interviews than I could count or care to remember. I’ve helped design interview processes at multiple companies. I’ve written about interviewing processes and given conference talks about interviewing processes. I am not lacking in experience with interviewing.
And this is just a gigantic red flag. As others keep telling you, what you’re doing is not hiring the best candidates. What you’re doing is artificially restricting your candidate pool in a way that excludes lots of perfectly qualified people who, for whatever reason – and the reason is none of your business and in many cases is something that, at least in civilized countries, you wouldn’t even legally be allowed to ask about in the interview – don’t have a bunch of hobby/open-source projects on GitHub.
I feel I’ve explained my process (including many “this is not a hard-and-fast rule” qualifications) sufficiently well and accurately, and have given honest and conservative advice for people that I sincerely believe will help them get a job or at least improve their odds. If this is unsatisfactory to you, so be it.
I’m not interested in discussing this further with you, good day.
More than that: introspective professionals are valuable. All paid coders should be able to write up some fun algorithms and discover them for a given need, but not all will go above and beyond in their understanding and mentorship.
It’s a useful signal when present. It’s not a useful signal if absent. It’s a very negatively useful signal if all you have on your public commits is messages like “blah” and zero sanity in your repository layout.
I tell people who are learning to code to get blame in other people’s projects, to learn good style and show some useful activity beyond forking a project and uploading commits of questionable value to the internet.
I thought writing software was an art or a craft…
This sounds a lot like people wanting to have it every which way whatever’s convenient.
No, but they do require education and formal credentialing.
Supposedly all these hoops we make people jump through in programming interviews are because the interviewers say they see too many people with degrees and impressive credentials who can’t write a
for
loop.If the software certification exams were anything like the CPA certification exams, we wouldn’t need to do nearly as many technical interviews. In other fields getting certified is an ordeal.
Sure. Now, come up with a standardized exam that everyone will agree covers what you need to be hireable as a programmer :)
Other fields managed it: the CPA standardized exam takes 16 hours (not to study, to actually take) and the architecture ARE takes 22 hours.
Or we could not throw software engineers through that kind of meat grinder and stick with using other signals, like portfolios and technical interviews.
If it were possible to build a single exam that actually did it, I don’t know if I’d mind just because it would end a lot of pointless discussions and avert a lot of horrible processes.
Meanwhile, asking for a “portfolio” or using it to decide between candidates has problems that are well-documented, including in this thread, and I don’t really think we should be perpetuating it. It’s one of those interview practices that just needs to go away.
I’d say that’s not true for any graduate from my university. Many people from the non-CS faculties are even forced through a basic programming course.
Nobody asks artists for a portfolio? Nobody asks engineers for previous specific projects, even if the details are obscured?
The projects one is way more than a job role. The portfolio is often of paid work where there has been a release for a portfolio, or work done outside the office.
I’ve worked for multiple companies that used GitHub for their repositories. If I were applying for a job with you today, and you browsed my GitHub profile, you would not see any of the code I wrote at those companies, or even the names of the repositories.
When people talk about a “portfolio” they always mean code written, unpaid, in one’s own spare time, unrelated to one’s current job, and many perfectly well-qualified programmers either do not do that or cannot do that due to not having the luxury of enough time to do so and make it look good.
Not true. Architects, for example, design many buildings during studies or send proposals for architectural design competitions. Most of that buildings are never build and remained only on paper. And were created in spare time. Guess what such architect would discuss on the job interview… Portfolio of the proposals and unrealized designs is very important.
Doctors spend long time in poorly paid or unpaid work before they gain enough experience. Journalists or even writers have to write pages and pages for nothing, before they earn some money. Music bands, actors, painters, carpenters, joiners, blacksmiths, etc. etc. Actually it is quite common pattern across the society that you have to prove your skills before getting a good job.
Maybe the world is „unfair“ and „cruel“, but if I compare IT with other fields… we have not much to complain about.
Again, nobody expects a civil engineer to have a portfolio of actually completely-constructed full-scale physical real-world bridges built in their spare time for free as hobby projects.
If you want to argue for apprenticeship as a form of education, feel free to, but apprenticeship is different from “do unpaid work on your own time”.
Most open source code exists to ‘scratch an itch’. It’s written because the author had a problem that wasn’t solved by anything that existed on the market today. If you have never encountered a problem that can be solved by writing software in your life then you’re almost certainly in a tiny minority. If you’ve encountered such problems but not tried to solve them, that tells me something about you. If you’ve encountered them and not been able to solve them, that also tells me something.
Yes, it tells you that they’ve encountered such problems but not tried to solve them. Nothing more. You can’t know why someone doesn’t spend their free time doing their day job again for fun. Maybe they just don’t enjoy doing their day job again, which would be terrible, somehow, according to this thread. But maybe they just have even more important things to do than that?
Why guess? What do you think you’re indirectly detecting and why can’t you just ask about it?
As others have pointed out to you repeatedly in this thread, no one is saying don’t ask. But if people encounter problems that are within their power to fix, yet don’t fix them unless they consider it part of their job, then that’s definitely an attitude I’d like to discuss in some detail before I considered making a job offer,
Nobody has pointed out anything to me on this thread before, repeatedly or otherwise.
Everyone encounters problems that are “within their power to fix” and doesn’t fix them all the time. I don’t think that’s hyperbole. We could fix any of them, but we can’t fix all of them because our problem-fixing resources are finite. I take your position to be that if they happen to prioritise the software in their life over any other kind of problem they might encounter that that means they are going to be better at their job. I think this is a bit silly.
For what it’s worth, I get home from my computer job most days somewhere on the mood spectrum between wanting to set fire to all computers and wanting to set fire to anyone who’s ever touched one. I’d love to get a job that doesn’t make me feel like that, and it’s rather frustrating to know that my job sucking all the joy out of computing for me also makes me unqualified to get a better one, at least in the eyes of quite a lot of people here.
Your open source app doesn’t have to look good. It just kinda has to exist, and maybe have a readme. If it works, that’s even nicer.
Accountants are certified. I have a CS degree from a brick and mortar university.
Do you think we shouldn’t hire people without credentials?
Which credentials do you intend to require?
The signal exists. Should it be ignored because other industries don’t have an analogous signal?
What does this mean to you? They’re synonyms to me, so I’ve never really tried to define how they might differ.
This seems a bit of a red-herring to me. I include my GH to show that yes I really know how to program so we can skip the mutually-embarrassing “are you a complete fraud using somebody else’s CV” stage, not to show that I own several interesting repos. I mean, there’s a few in there that I actually started and they used to be things people used. But 90+ percent of “my” repos are forks because that’s how you contribute to many existing projects.
Two things you can do here that are useful:
(answering you and @enn in same place)
I’m used (rightly or wrongly) to resumes being shorter documents that are typically more focused for a particular job, especially in the US. CVs are typically longer, have a lot more detail including coursework, talks, presentations, publications, and other stuff. My understanding is that CVs are also more common in academia, which I’ve never hired for.
Indeed, which is why I also tend to click-through to a few of the repos to see if people have commits or attempted commits in those projects.
There are folks that, if you exclude forks, suddenly go from scores of repos to perhaps less than 10. There are folks I’ve seen who only have a few forks and no source repos of their own, but who have made significant contributions to those forks. My experience is that there are far more of the former than the latter, because the first order signalling is “how many repos do you have on Github” for people that care about such things and that’s how you spoof.
It’s pretty common to use “CV” to mean a complete list of all prior work, education, and awards, and “resume” to mean a one page summary of relevant experience.
If those forked repos are there because the person is contributing to others’ open-source projects, I would argue that kind of work is probably more reflective of the skills that are useful in most professional programming jobs in industry than a bunch of solo projects, however impressive.
That is true, but this post is framed as though that is the only relevant thing that is going to change if you wait.
If you jump out of an airplane, you will have more information at 10 feet above the ground than you did at 10,000 feet, but that doesn’t mean it’s a good idea to wait until then to open your parachute.
This post hits a nerve for me because I suspect it represents the way many of my bosses have thought. It’s intensely frustrating to be on the receiving end of this endless indecision. As you wait for more information, costs accrue, customers get fed up and churn, competitors ship first, and employees burn out.
Is it though? Seems clear that it’s not framed that way
Should I make that bold or something?
Google gets its way with anything related to the Web as a platform because it has 66% marketshare. Stop using Chrome if you don’t like its unilateral decision making.
I think regulation is the necessary step here. The GDPR has had real effect, this new surveillance method is in part a way for them to try to work around GDPR. Time for updated regulation.
I’m in agreement. “Just switch” is not particularly reasonable. At best, in many years, that approach could start to reduce Google’s power. But it’s unlikely.
If we want change we have to force change through regulation.
Not regulation. Google must be split up.
It’s advertising business must be kept separate from the browser AND the search.
I mean, I think there should be both, plus also speaking as an ex-Google advertising privacy person, Google’s advertising businesses are, like, at LEAST four or five distinct business models which should each be separate companies. more realistically, at least a dozen.
the current situation with Google in adtech is as if a stock exchange could also be a broker, and a company listed on the exchange, and a high-frequency trading firm, and a hedge fund, and a bank, and … well, you get the idea
I often find it cathartic to read through legal proceedings involving my former employer. the currently-ongoing one in NY state has filings which go into some detail on legal theories that broadly agree with me about this (there’s a definition in the law of what constitutes a distinct market), so that’s nice to see. maybe someday there’ll be some real action on it.
I’d also love to see an antitrust regulator look at the secondary effects from Chrome’s dominance. Google supports Chrome on a small handful of platforms and refuses to accept patches upstream that support others (operating systems and architectures). This is bad enough in making other operating systems have less browser choice (getting timely security updates is hard when you have to maintain a large downstream patch set, for example) but has serious knock-on effects because Chrome is the basis for Electron. The Electron team supports all platforms that Chrome support upstream, but they don’t support anything else (they are happy to take patches but they can’t make guarantees if their upstream won’t). This means that projects that choose Electron for portable desktop apps are locked out.
Google did take the Fuchsia patches into upstream Chromium. A new client OS from Google doesn’t have these problems but a new client OS from anyone else will. That seems like a pretty clear example of using dominance in one market to prevent new players in another (where, via Android, they are also one of the largest players) from emerging. But I am not a lawyer.
Yes. Also, don’t forget web attestation, which may very well lock out anyone running their own OS on their own hardware.
I was trying really hard to!
very much agreed.
Wouldn’t that imply regulation, as in defining criteria about when and how to split it up (assuming that would not be a voluntary step by Google)?
Splitting up Google is definitely a form of regulation. My feeling is that splitting it up is one of the forms of regulation least likely to have accidental negative consequences.
We see the negative effects of Google being together all the time: AMP was a very ugly attempt to use the search monopoly to force a change to preserve their ad monopoly on mobile where it was being eaten away by Facebook at the price of breaking the web. More recently, the forced transition from Google Analytics Universal Analytics to Google Analytics 4 was something only a monopoly would do. No company that actually expected its analytics to actually make money directly would just break every API so gratuitously.
That said, even break ups can have unexpected consequences. The AT&T break up of the 80s did lead to a telecom renaissance in the 90s, but it also fatally crippled the Unix team and led to the end of Bell Labs as a research powerhouse.
Did it? The division into RBOCs had dubious benefits for consumers, because it replaced a well-regulated national monopoly with several less regulated local monopolies. The original plan of splitting out Western Electric would have made a lot more sense (WE was getting creamed by Nortel in the switching market, breaking up the phone system messes up the balance sheet elsewhere), but AT&T execs thought computer revenue from commercializing Unix was too good.
I am not sure if breaking up ATT did any good to me as a consumer, since the only internet choices I have is ATT and Comcast! US feels like an undeveloped country with the crawling internet speeds here in the San Francisco Bay Area.
The current “AT&T” is really Southwestern Bell, which somehow was allowed to eat all its neighbors. It is silly to let the telcos merge into a megablob a short decade after breaking them up in the first place.
In the broadest sense yes, but I feel that the term has come to mean setting up rules of conduct for the regulated businesses and possibly some form of oversight. Somehow it doesn’t pop into my mind that when the large companies call for regulation, they might be actually asking to be split up. I hope I make sense.
It’s not as if the choices are mutually exclusive.
I abandoned Chrome as a daily driver a few years back, but I’d do it today in a heartbeat based on this news. I rather enjoy the Firefox user experience, and switching was not a huge cost. I suppose YMMV and if switching does pose a large cost for someone, that’s their calculus, it’s just hard for me to imagine.
I’m also pushing for regulation how I can (leaving messages for my congresscritters, for what that’s worth). For me, I can’t imagine doing that but continuing to use Chrome.
Advertisement would help too. This announcement is buried for a reason. Google may have just handed Mozilla a huge cannon to use to get people off Chrome and onto Firefox, but Mozilla has to actually take advantage of it.
It’s not clear to me that this does bypass the GDPR. The GDPR requires informed consent to tracking. It sounds like this uses intentionally misleading text so will not constitute informed consent. It’s then a question of whether it counts as tracking. Google is potentially opening up some huge liability here because the GDPR makes you liable for collecting PII in anonymised form if it is later combined with another data set to deanonymise it.
I’d agree with that if it worked for Microsoft, Apple, Samsung, Sony (and Google). We need more than regulation; we need a cultural shift away from things like Google Chrome being the “defacto” for the Web. We have to get people understand that they have choice.
I would say regulation absolutely worked on Microsoft. A key part of why Google was able to succeed in the early 2000s was Microsoft was being very careful after losing a major anti-trust action. I was at Google at the time and I was definitely worried that Microsoft would use its browser or desktop dominance to crush the company. It never did but I’m confident it would have without the anti-trust concern.
All regulations end up the same way. Simply walking around it. Paying consultants to figure out the legal way. The biggest players will find the way, and the poorest and smallers players will die out. And that’s one of the ways how you can create a monopoly.
I just arrived in another EU country, and thanks to the derided regulation I can call and use mobile internet at the same pricing as at home. This means it’s easier for me to search for transport, lodging etc. to the benefit of both me and the providers of these services. The ones losing out are the telecom operators, who have to try to compete on services instead of inflated fees for roaming.
I can’t see any monopolies forming. Do you?
I’m not “deriding” regulations. I simply question the motives that are used when creating them. Maybe it’s because of the legacy of “central-planned economy” I was subjected to.
Also I think you’ve just given an example of a company in a sector that requires an explicit permission from the government to be able to even start the business.
It’s not true that large companies always find a way to bypass legislation or that regulation is always anti-competitive in any interesting sense.
Large companies can often work around regulations, but sometimes they clearly lose and regulation is passed and enforced that hurts their interests. E.g. GDPR, pro-union laws, minimum wages, etc.
Yes, richer and more powerful players are usually more likely to survive a financial hit. That’s not a feature of regulation. That’s a feature of capitalism: power and money have exponential returns (up to a point).
It has to be fixed with redistributive policies, not regulation.
Also, mobile telecoms consume a finite public good (EM spectrum noisiness in an area). They’re a natural target for public regulation. I don’t think that’s really a problem, tho I would prefer if public control was not state control.
I disagree. Companies will always try, they may not succeed. In particular, if the cost of complying with regulations is lower than the cost of finding work arounds, then they will comply. This is part of the reason that the GDPR sets a limit on fines that is astronomical: the cost of trying and failing to work around the GDPR is far lower than the cost of complying.
I’m a bit confused. I didn’t say anything about companies trying or not. I agree with all of your post except the bit about the GDPR fine limit, which I think is probably high enough (4% of global turnover) to exceed the benefits of non-compliance in most cases.
Sorry, I misread your post. And then I wrote ‘lower’ when I meant ‘higher’, so I clearly can’t be trusted with thinking today.
No worries!
I don’t want to get into the Keynes vs. von Hayek (although if redistribution is involved then maybe we should include Marx) dispute regarding whether regulations are good or bad, because the moderator removes threads related to politics, and I don’t want him to remove this one.
(also I’m not sure we can convince each other to our point of view)
I don’t really know what your position is or what you might have disagreed with me about, but I am totally fine leaving this convo here.
I did stop using Chrome, a long time ago. But, if my frontend colleagues are any indication, a deep hostility toward non-Chrome browsers is rampant among the people who are responsible for supporting them. And more and more teams just don’t bother. I would prefer not to have Chrome installed at all, but I have to because many websites that I don’t have a choice about using (e.g., to administer my 401(k), to access government services, to dispute a charge on my credit card) just flat-out don’t work in anything else.
You might have some luck reporting such issues to the responsible government agencies. They don’t usually write the sites themselves but contract the work out. The clerk will usually just forward your complaint to the supplier who will gladly bill the additional work.
The problem is systemic - if they don’t test it except with Chrome, they might fix the “one time issue” only for it to break the next time around they make some larger change.
Oh sure it is. But if you pester the clerk a couple of times, the Firefox support requirement might just make it to the next tender spec.
Depending on the jurisdiction, supporting a single vendor’s product with public money may be illegal. It’s a direct subsidy on Google. Whether a particular state / national government can subsidise Google without violating laws / treaties varies, but even in places where they can they typically have to do some extra process. If you raise the issue as a query about state subsidy of a corporation then you may find it gets escalated very quickly. If it was approved by an elected person then they may be very keen to avoid ‘candidate X approved using taxpayer money to subsidise Google, a corporation that pays no tax in this state’ on PSAs in the next election.
I doubt any regulator would perceive “failed to test a web application in minority browsers” as a subsidy. Maybe if they specifically developed an application that targeted that specific proprietary vendor’s stack.
But I imagine a public organization such as a library building a virtual environment to be used specifically in VR Chat to target young audiences as a part of promotional strategy would be perceived as
completelymostly fine.In Czechia, government purchased several (pretty important, duty declarations for example) information systems that were only usable with Microsoft Silverlight. They are still running, by the way. As far as I know, the agencies were not even fined for negligence.
Most people out of IT treat large tech companies like a force of nature, not vendors.
I read a very apt quote[1] on HN a month ago, about how much Google values Chrome users thoughts, which directly relates to people complaining, but then continuing to use it:
1: https://news.ycombinator.com/item?id=37035733
More like make sure you convert everyone around you as well. If you have any say in your company policy, just migrate your office staff to Firefox. Make sure to explain to your family and convert them as well. uBlock on mobile Firefox should help to ease some conversion there as well.
Coffee and beers, yes, but for transit no unlocking is required, at least with Apple Pay and the NYC subway and bus systems. You just hold it next to the reader and it beeps, no other interaction required.
If you follow this reasoning to its logical conclusion, E2E encryption is impossible since there will always be some software doing the encryption for you, and said software is part of the threat model so the distributor of said software is part of what the encryption is supposed to protect against.
For example, PGP is incoherent because the PGP program is performing the encryption, thus you have to trust PGP’s developers, the distro or website you downloaded it from, your toolchain if you built it yourself … all of whom are part of the threat model.
It’s kind of a reducto ad absurdam. Perfect security is impossible. E2E is more secure because it reduces the number of points of compromise. Yes, the JS code you downloaded from the website could be secretly sending cleartext or using a backdoored algorithm or whatever; but assuming that code isn’t malicious, you do eliminate the much larger security problem of people with access to the server being able to see the cleartext, a gaping hole that gets exploited pretty often in real life.
Not quite, the article is arguing that E2E is incoherent when it’s protecting from the distributor itself. PGP is protecting you from someone else that does not distribute PGP.
But there isn’t one distributor, there are hundreds or thousands of distributors involved in any meaningful software execution today, many of whom you cannot even be aware of (for example, the person who distributed the compiler used by the packager of the PGP binary you are running). You don’t get to pick your adversary. PGP could be compromised in its source, during its compilation, during its physical distribution over the network, by a hostile OS or runtime environment, etc. etc.
The author of this piece should read “Reflections on Trusting Trust.” All of this stuff is a matter of degree; web cryptography is not unique in that regard, nor does that mean that it’s “snake oil.”
Perfect home security is impossible because even if you lock your doors, Yevgeny Prigozhin can lead a private army to your house and knock down the wall with a tank. Security is always relative.
As a long time SPA apologist and licensed reverend of the church of tiny backends, I find this genuinely difficult to follow. What is “hypermedia” even? A tree with some implied semantics? How is that different than any other data? Why should I be constructing it on the backend (that place that knows comparatively nothing about the client)?
The back button has been solved for over a decade.
The complexity of “the backend has to care what things look like” is also enormous.
Theres talk of longevity and churn, but I’m pretty sure if I wrote
hx-target=...
in 2012, I would not get the desired effect.I haven’t managed state on a server beyond session cookies and auth in ages.
I saw a computer from 20 years ago use the internet just fine last weekend, and it needed some horrifying reverse proxy magic to make a secure connection, so “I’m using HTTPS” and “I’m supporting old hardware/OSs” is a contradiction anyway because decrypting HTTPS is more computationally intense than doom, and it’s also a moving target that we don’t get to pin. The end result is that if you can securely exchange information with a browser, it’s not ancient enough to need more than a few servings of polyfills to run a reasonably modern app.
React is the currently popular thing that makes stuff go vroom on the screen, so of course a lot of people make it more complicated than it needs to be, but like… remember early 2000s PHP CMSs? Those weren’t better, and if you did those wrong it was a security issue. At least a poorly written react UI can’t introduce a SQL injection.
To each their own, but I don’t get it 🤷♀️. I also don’t get how people end up with JS blobs bigger than a geocities rainbow divider gif, so maybe I’m just a loony.
Anything can be done wrong, and the fact that popular tools are used wrong often and obviously seems like a statistical inevitability, not a reason to try to popularize something different.
You must be using a different web than me.
Why would you prevent people to popularize anything that actually solves some problems? Isn’t having choice a good thing? I’m this author of this talk about a React->htmx move, and I’m completely freaked out by how many people have seen my talk, as if it was a major relief for the industry. I am also amazed, when hiring young developers, by how most of them don’t even know that sending HTML from the server is possible. Javascript-first web UI tools have become so hegemonic that we need to remind people that they have been invented to tackle certain kind of issues, and come with costs and trade-offs that some (many? most?) projects don’t have to bear. And that another way is possible.
Probably the statistics are way higher for technologies that carry a lot of complexity. Like I said in my talk, it’s very easy for JS programmers to feel overwhelmed by the complexity of their stack. Many companies have to pay for a very experienced developer, or several of them. And it’s becoming an impossible economical equation.
With htmx or other similar technologies, “what things look like” is obviously managed in the browser: that’s where CSS and JS run. Server-side web frameworks are amazingly equipped for more than a decade now to generate HTML pages and fragments very easily and serve them at high speed to the browser without the need of a JS intermediary.
I am shocked and stunned every single time I talk to someone who doesn’t know this. And if they are interested, I explain a little bit about how the web server can return any data, not just json.
Hypermedia encapsulates both current object state and valid operations on it in one partially machine-readable and partially user-readable structure.
A lobsters page, for example, lists the link and comments (the current state) and has a definition of how to comment: you can type in text and post it to the server. After you do that, the system replies with the updated state and possibly changed new valid operations. These are partially machine-readable - a generic program that understands HTML* can see it wants text to post to a particular server point - and partially user-readable, with layout and English text describing what it means and what it does.
Notice that this is all about information the backend applications knows: current data state and possible operations on it. It really has nothing to do with the client… which is part of why, when done well, it works on such a wide variety of clients.
To be fair, “the client” is a web page 9 out of 10 times so why abstract it away.
here you go:
https://hypermedia.systems
TLDR: hypermedia is a media, say, a text, with hypermedia controls in it. A lot more detail to be found in the book, or on the essays page:
https://htmx.org/essays
We went with GraphQL via Hasura, and while Hasura isn’t representative of GraphQL as a whole, Hasura’s base offering took about a year to turn from best practice to deprecated. In that time, we grew from 10 engineers to 20.
A “benefit” of GraphQL is that your frontend and backend engineers are more decoupled and can communicate less. However, this also means that backend engineers are not naturally motivated to understand frontend needs.
Our DB schema quickly became unergonomic for frontend consumers, yet because the DB schema was directly coupled with the frontend, we wrote repetitive ad-hoc data transformations all over the frontend to massage the GraphQL schema to a higher level data model.
So…don’t do that. The downside of any solution that turns your database into an api is that your database needs to be designed to present a good api. This is true whether or not you’re exposing a graphql or rest api.
It’s somewhat less painful if you’re doing real rest (as opposed to slashes-are-what-makes-rest), because the tables or views can be the resources and the mapping may be fairly natural.
This seems like a problem in the organization not the technology. Are you all trying to deliver the same product? If yes, why aren’t you measuring the impact of the backend on the the performance of the frontend?
In my experience, wish-it-was-rest apis either force the frontend to make many calls or result in joining lots of redundant data onto endpoints, impacting speed.
For me, this wasn’t about whether a better result was possible; of course one is possible. But the happy path of Hasura led to the results that we got, and in deciding a path forwards, one of our evaluations was to do GraphQL better, without DB->API and with intentional modeling of resources and mutations. We decided to do something else instead.
It is both a technological and organization problem; the challenges that came along with Hasura specifically weren’t a good match with our organization. We preferred a technological solution that intrinsically motivated better backend<>frontend working patterns instead of a solution that required additional work to motivate the desired outcomes.
Partly, I feel that this does seem to reflect an organizational failure, and that introducing friction just to make developers do their damn job is wasteful and defeatist. On the other hand, there is something interesting about this scenario. I wonder if there is a name for it? I am often a proponent for introducing constraints that some might experience as friction in their day to day work, because I think it leads to better outcomes and less friction over time.
The first thing to do when exposing an API that is automatically mapped to a database (which I am a big believer in) is to set up a separate view-only schema (ie a schema containing only database views), and expose only the data that is relevant for the client, possibly with some denormalization and aggregation as needs arise.
The place to do that is in the exposed specialized schema. The underlying schema with the actual tables remains decoupled from the client.
Only if you treat GraphQL as a way to mechanically expose a low-level interface (like your DB). So don’t do that!
We switched to GraphQL as part of a push to get frontend and backend working more closely. We’ve found the schema to be a very helpful locus of collaboration; when engineers from either side can propose interfaces using the shared language of the schema (which requires concreteness specificity), it’s much less common to end up in situations where the two teams are talking past each other.
If a schema requires its consumers to do this kind of transformation, I would argue that it’s not well-designed (for those consumers). Sounds like your GraphQL schema should have exposed a higher-level API to begin with. (A design goal for our GraphQL API is that the frontend should not need to do any significant reshaping of data it receives from the backend, and in particularly it should never need to do its own joining, filtering, or sorting.)
There are plenty of issues with GraphQL (weird and oddly-specified serialization, clunky and limiting type system) but so much of the criticism I see boils down to “sometimes people make bad APIs using GraphQL,” which, sure. Designing a good API is still a problem that requires human thought and intention; no technology is going to do that for you.
I believe a lot of the problems originated with our out-of-the-box use of Hasura, directly exposing the underlying schema. It was definitely not well designed. In our case the problem was that the bad API was the happy path, and that I believe is Hasura-specific, not GraphQL-specific.
The Apple ARM-chips are really great in terms of performance/Watt, but Apple, in my opinion, really dropped the ball in terms of software quality. I had been in the Apple ecosystem for years until I dropped it in 2012 when it became apparent that macOS was on a downward spiral from the excellence I had become used to.
The other operating systems/desktop environments in Windows and Linux can still learn quite a bit from macOS, but the latter is suffering from UI/UX inconsistencies and is unnecessarily locked down. While you could be relatively free 10 years ago with any software of your choice (especially OSS) and have rare breakage between system upgrades, you now have to fight with all kinds of gatekeepers and the system usually wrecks your whole setup with each upgrade.
This might be the main reason why fewer and fewer professionals choose Apple: It becomes less and less justified to pay the Apple tax the more you use your system for actual work.
2012 was eleven years ago, and eight years prior to the introduction of the first MacOS devices running ARM. MacOS software quality has gone up and down over the years, but I don’t think “it sucked over a decade ago on a completely different architecture” is a very useful data point for assessing the quality of MacOS on an M2 machine today.
I have been using macOS as my primary desktop since 2007 (before that Linux and I had a 2 year part-time Linux excursion around 2018 or so). I would agree with the quality suffering after the terrible 2016 MacBooks until about 2019/2020 or so, but the last few releases have been great for me. (And it’s not like early macOS 10.5 or 10.6 releases didn’t have horrible bugs.)
Apple Silicon has been a huge step forward, my machines are lightning fast and last on battery for a long time. I also love the work that they are doing on system security, like sealed volumes, memory protection through the the secure enclave, etc.
With regards to the article, Apple Silicon provides great performance per watt compared to most GPUs. But for some reason people overhyped Apple Silicon GPUs and believe that Apple waved a magical wand and is suddenly competitive with NVIDIA performance-wise. The compute power of the M1 Ultra GPU is 21 TFLOPS, the tensor cores on an RTX 2060 Super are 57 TFLOPS and that’s a budget card from years ago. If you want to do machine learning, get a Linux machine and put an NVIDIA card in it. GPU training on Apple Silicon is currently only useful for small test runs (if the training process doesn’t burn in a fire due to the bugs that are still in the PyTorch/Tensorflow backends).
I use a MacBook desktop, because I get all the nice apps and an extremely predictable environment and use a headless Linux machine with a beefy NVIDIA GPU for training models.
Data?
I think he is kind of missing the point. If TikTok is banned from Google and Apple app stores, it will become less popular and won’t melt brains at the same rate. Sure it can be circumvented, but it is not a “terrible idea” with “intolerable” side effects. And sure, there will be other apps that replace TikTok; when that happens maybe it will be easier to argue for comprehensive rather than ad hoc regulation.
Also disappointed to see him arguing for “commerce” as an important bedrock value, and leaning on State Department talking points like Cuba being a “censorship-loving autocracy.” I suppose Schneier is still a good source on the technical side of things.
From here it looks like you missed the point. He says the effective bans would be terrible/intolerable. Then he points out that merely banning the apps would not be effective.
If you’re disappointed to learn that Schneier isn’t a hardline Leftist, you may have been mistaking him for someone else, maybe Noam Chomsky?
Maybe keep the over-the-top snark to Hacker News or somewhere else?
It’s not about being a “hardline Leftist.” It’s about parroting false propaganda. Cuba has public wifi hotspots that provide access to the open web and are not meaningfully firewalled. Its internet practices are nothing like China’s and Iran’s and it is an error of fact to claim that they are.
In strictly technical terms, that’s true, but… uh, how do I put it so that I don’t start a political flamewar again.
It’s very easy to underestimate how governments like the Cuban government can enforce these things if you haven’t lived under one. The Cuban government doesn’t use the exact same technical means that China uses partly because it has better, more easily-enforceable non-technical means to achieve its goals, and partly because it just doesn’t have the tremendous resources that the Chinese government has.
The two don’t belong together in terms of specific technical means (deep packet inspection firewalls) but that’s quite literally a technicality. I understand why it doesn’t look the same from a technical perspective, but take it from someone who’s familiar with that kind of legal climate – it’s pretty much the same.
I don’t really understand what you are alluding to. Cubans can and do routinely use mainstays of the open internet like Google, Wikipedia, Facebook, Reddit, and Youtube, all of which are blocked in China. Cuba does not employ any means–whether deep packet inspection, social pressure, mind control rays, or anything else–to prevent this.
I’m sorry, I’m not trying to be mysterious here :-(. I just don’t want to go there because the last time I did, I started a big flamewar and I really regret it. I know it comes off as pretentious. I’m just trying to stay away from the politics underneath it.
Let me try to state it in as non-political terms as I can, because I really think this is technically relevant, the way social engineering attacks are technically relevant for network security, even though they are a non-technical matter. Please don’t take any of this as a political statement. This is really not my intention.
If one’s goal is to ensure that some information doesn’t go through a censorship-resistant network (like the Internet), or that if it does, it at least doesn’t spread, there are more ways to do it than one. One is through tight content access control at the network layer – firewalling, strict control of telecom equipment etc.. Another is through tight information access and dissemination control, where one openly allows access at the network layer but ensures everyone stays away from information they want restricted, and that anyone who does not is at least unable to disseminate it easily. Both can be equally effective.
I don’t want to get into the “how” of it because I don’t think I can do that in a way that’s not open to political interpretation and this is not the place. All I want is to caution, based not just on specific technical and legal understanding of this particular matter, but also on my own experience, against a line of thought like “Internet access is effectively open, as it is not subject to firewall restrictions”. “Not subject to firewall restrictions” is one conotation of open, and it’s correct in this case. But many others are not, and “not subject to firewall restrictions” doesn’t automatically imply all the other ones.
If this is not the place to explain your very political claim, maybe it’s also not the place to state it?
I don’t think what I stated is a political claim, otherwise I wouldn’t have stated it. I’ve strived to make sure that:
I’m sorry if it made anyone uncomfortable, or if I didn’t keep my own views out of it as well as I should have. It wasn’t my intention.
Edit: just to clarify, I’m obviously not insensitive to the fact that this is all being said in a thread regarding a government’s policies. My remarks apply equally well to information access in any network environment, from schools to corporate networks. They are about the specific case being dicussed here only insofar as… this is literally what the topic is about. They aren’t – or at least I have no intention of them being – any more political than your own root post in this thread about Schneier “leaning on State Department talking points”.
I’m not aware of a taboo on political discussion, and the article is about government policy, so I didn’t see a problem with pointing out State Department talking points.
My issue with your statements is that they require more detail to evaluate – Is the Cuban government restricting the flow of information in a way that is comparable to network layer consorship, or in a way that exceeds what Western governments do? That would require going beyond generic statements that apply to literally every government, and explaining the non-technical means that you think are employed by the Cuban government. But you have refused to do saying it would cross a line into being too political.
There is one. Just look at how many people have flagged this as off-topic.
Plus they have El Paquete, which I’m sure a lot of Americans would envy if they knew about it.
(Yes, admittedly, El Paquete is illegal, there as here.)
I don’t know if you familiar with American-mass media or social networking, but there is a lot of easily-enforceable non-technical censorship at play. Its easy to handwave about some technical or non-techinal cencorship in Cuba but ff Iran or Cuba had the same ability to project propaganda as the US there would certainly be a great American firewall.
Apps like TikTok (or FB, Youtube, Twitter … ) rely on network effects to get their popularity. People use TikTok because their peers are on TikTok. Make it sufficiently hard to install (and yes, sideloading apks on a device is suffiently hard that most people won’t bother), and people will flock to the next ephemeral video platform.
Sure, it won’t prevent a dedicated person from installing TikTok on their phone - but most people won’t even want to.
Then the question becomes “should companies like Apple and Google be required to facilitate the installation of TikTok, and, if so, can the US govt require them not to?”. That question seems to revolve more about free trade/commerce than about free speech.
And I would wager that there are several clones to TikTok spinning up as we speak. They’ll use the same dark patterns to increase engagement that TikTok does, but at least one of them will be owned / controlled by a Western company, and thus be “acceptable” to the State Department.
All that’s missing is sourcing some content to start things off, and spending some millions on advertising to start to attract users.
The end result will be nearly the same amount of harm to the users, but with less spying by the CCP, and more spying by some Western companies.
These seem like two separate concerns to me. Unfortunately, we live in a time when companies can iterate quickly to make their products as addictive as possible.
Nearly the same amount of harm, but still less. There would still be a drop in addictive usage patterns before the new western TikTok becomes socially compulsory for teens. Could make a difference in the development of children who otherwise wouldn’t have a gap in that mode of interaction during their school years.
And do you see what’s missing from that?
YouTube Shorts is already eating TikTok’s lunch in a lot of ways. The addiction-optimized-queue-of-clips format is almost certainly here to stay.
Am I… not the right audience for YouTube Shorts? I do watch a fair amount of YouTube, but these clips are mostly uninteresting to me. The best of them are just clips from channels I already subscribe to.
The one thing I want but don’t get with the Shorts is how old the video is. If I’m seeking news on The War, space and astronomy news, etc, I don’t want to look at something from last year or even six months ago. But since the Shorts don’t show the date, I’m mostly unlikely to click on them, and am usually unsatisfied when I do. I just looked in the Settings again, and don’t see a way to just hide those on the home screen.
You and me both! I’m basing my anecdote on what I’ve observed among friends and family, particularly those who are banned from using TikTok by their government and government-adjacent employers. I think it’s just very hard to fit genuinely interesting content into such a short clip, but presenting many such clips in rapid sequence is great for engaging that slot-machine-seeking hunger some people seem to have.
(Unless I’m misunderstanding your comment. If you’re implying that you were able to get what you wanted out of TikTok, teach me your ways! I’ve been trying and failing to get into it.)
I think as the intro implies this can be extended to machines and tools and maybe even further
I think in the context of computers in particular there’s a bit of a political problem where we force people to use them, sometimes by lawn, sometimes through society. They have to use computers, Smartphones and even certain apps.
At the same time we see a rise in scams and are surprised how people who might not even need or want this devices and only have them because they are forced to fill out some form online.
Some decades ago it was relatively easy to come by without almost any particular tool one can think of. You might be odd for it, but it allowed you to stop make use of your rights, etc.
Today you need apps to log in to your bank, websites to do your taxes, sometimes even the web to apply for elderly homes. And smartphones are pretty complex, and force you to fit example have or create an email address, require passwords, etc. You need to know how to use software, understand what the internet is, should have done concept of pop-ups, online ads, spam, updates, understand that there is no other person sitting on the other end right now and so on .
I think a lot of ruthlessness comes from this. Then even if you know about all of the above you end up like in Kafka’s The Trial and even if you know what things mean the processes behind the scenes for the vast majority of use cases will remain completely intransparent to you.
In a non automated/digitalized world is easy to ask quick questions and people who can ask other people handle exceptions. In the digital world one has to hope the developer has to have thought of it and handle it accordingly. If you are lucky there’s a support hotline but these seem to go away, especially for bigger so often more important companies
I see tools more on the morally neutral side, but I don’t think that’s the issue really. I don’t think computers are impressive but there’s an unintentional direction we move towards to whete things are forced upon people often thinking it’s a good thing when it’s at least debatable.
As a side note there’s certainly cases where things were done in the name of digitalization, progress, efficiency and things were just harder, slower, turned out to be less cost effective, less secure and required more real people to be involved
Of course these are the bad example, but given the adjective here is oppressive. Usually even in (working/stable) oppressive societies it works for most people most of the time. Things start to shift when it doesn’t for you many or there’s war. Only the ones not fitting in tend to have problems and while I would have titled it differently I think that is true for how computers are used that’s true today for all sorts of computers.
In the land of unicorn and rainbows? ;)
From my experience, people in positions of “HTML form actions” absolutely aren’t inclined to answer any questions and handle exceptions, unless they have any real retribution to fear. Worse yet, it’s a rational behavior for them: they almost certainly will be reprimanded if they break the intended logic, so it’s much safer for them to follow its letter.
Just past month I had to file a certain application for a somewhat uncommon case. The humans responsible for handling them rejected it as invalid because my scenario wasn’t in their “cache” of common cases and they used the default “contact our parent organization” response instead of trying to handle it, and not even in a polite manner. I contacted the parent organization and, luckily, people there were willing to handle it and told me that my application was valid all along and should have been accepted, and that I should file it again.
I suppose the application form handlers received quite a “motivational speech” from the higher-ups because they were much more polite and accepted it without questions, but it’s still wasted me a lot of time traveling to a different city to file it and standing in lines.
It may be one of the more egregious example in my practice, but it’s far from unique. I very much prefer interacting with machines because at least I can communicate with them remotely. ;)
Your anecdote just demonstrates the author’s point; you had to escalate to a more-responsible human, but you successfully did so and they were able to accommodate the uncommon circumstances, even though those cirumstances were not anticipated by the people who designed the process. When was the last time you pulled that off with an HTML form?
They were anticipated by the people who designed the process. It’s just that their subordinates did a sloppy job executing the logic written for them by the higher-ups. If the higher-ups programmed a machine to do that, it wouldn’t fail.
And I got very lucky with the sensible higher-ups. It could have been much worse: in that particular case it was obvious who the higher-ups were and they had publicly-accessible contact information. In many other cases you may never even find out who they are and how to reach them.
everytime the form allows freedom (which they are admittedly rarely used for, but could be), e.g. https://mro.name/2021/ocaml-stickers
I love that, and I wish more of the web worked that way, but it’s worth pointing out that the only reason it can work is because ultimately the input I put into that form gets interpreted by a human at the post office. It would not be possible to create a form for inputting an email address which would be as resilient to errors or omissions.
yes, and a lot of the information filled into the form doesn’t make sense to me – I just copy it on the envelope. It makes sense in peels as it is routed along: first country, then ZIP, then street, then name. That’s flexibility! Subsidiarity at work.
Some decades ago, here in the US, we were deep in the midst of making a large proportion of physical social institutions at best undignified and at worst somewhere between unsafe and impossible to independently access without ownership and operation of a dangerous, expensive motor vehicle, something unavailable to a significant proportion of the population that ruthlessly grinds tens of thousands of people a year into meat just here into the US.
I think this article is technically correct but in this particular case it might just not be quite the best kind of correct :-).
There are always going to be people who romanticize “the old way” but painting all criticism of Flatpak & friends as rose-tinted glasses is one of the reasons why Flatpak is six years old and still weird – this story is, ironically enough, on the frontpage along with this article.
(Disclaimer: I like Flathub, I think it’s a good idea, and I use it). But a half-realized idea of a better system is usually worse than a fully-realized idea of a worse system. Plenty of things break when installing stuff from Flathub and many applications there are minimally sandboxed, to the point where you might as well just install the bloody
.deb
if it exists. Filing all the breakage under “yeah users don’t need that” (font rendering bugs, themes etc.) or “well the next release of this particular Wayland compositor is going to support that” is the same kind of obliviousness to reality as “but Flatpak breaks the Unix philosophy”, just of a more optimistic nature.This leads to a curious state of affairs that’s unsatisfying for everyone.
It’s certainly in the nature of FOSS software that things don’t happen overnight and software evolves in the open. But if you want to appeal to an audience (technical and non-technical) that’s wider than “people who contribute to FOSS desktop projects”, shipping half-finished implementations is not the way, whether it’s in the nature of FOSS or not. You can say that Linux is not a product but that won’t change the (entirely reasonable) expectation of this wider audience that it should at least work.
Meanwhile, elitism and gatekeeping are one unpleasant aspect of romanticizing the old ways but, elitism and gatekeeping aside, I think it’s important to be realistic and acknowledge that the old way works – as in, it allows you to install applications, manage, update and uninstall applications which work as intended, to a degree that Flatpak is still aspiring to. While some people may be yearning for the days when being a package maintainer granted you demigod status in cyberspace, I think it’s more realistic to assume that most people just aren’t willing to spend the extra troubleshooting hours on a system that doesn’t always deliver even the security guarantees it’s meant to deliver, and sometimes results in a functional downgrade, too.
Edit: oh, while we’re on the topic of rose-tinted glasses, it’s also worth keeping in mind that the goalposts have changed quite significantly since then, too. Lots of people today point out that hey, back in 2000 you’d have had to fiddle with XF86Config and maybe fry your monitor, why are you complaining about far softer breakage today? Well, sure, but the alternative back in 2000 – especially if you were on a CS student’s budget – was Windows Me (I’m being as charitable as “maybe fry your monitor” here, realistically it was probably Windows 98). You maybe fried your monitor but got things many computer users couldn’t even dream of in return, unless they were swimming in money to shed out on Windows 2000, Visual Studio and so on. The standard to beat is no longer Windows Me.
Especially true when you’re not interested in desktop but servers. I’m very happy that I know I can just
apt install php apache
and it’ll give me a working bundle. The same for everything built on top of this. Also debian does specify a release cycle by this. I won’t have to worry that my php 7.4 is completely outdated in the next month just because someone thought moving ahead to php 8 is the new flashy thing. No, it’ll certainly work for a long time on php 7.4 as that’s the current debian stable release. And that’s perfectly fine, I don’t have the time to upgrade all the time just because someone though it would be neat to use one feature of php8. Those “gatekeeper” also ship most of these services with very sane defaults (config, location of configs, systemd units,…).Yeah that probably won’t work for the new release of $desktopapp, but it works flawless for the server environment.
No docker is not an answer. It’s a completely different way of operating stuff.
Oh, wow, I could not disagree more strongly with this. Give me something that is functionally complete over something that is broken and half-baked but has some kind of vague conceptual superiority any day.
I’ve read your comment 3 times now and I’m pretty sure you actually strongly agree with the comment you’re replying to.
Damn it, you’re right.
I remember trying Clojure a bit, and being super interested in a lot of the ideas of the language.
There is the universal quibbles about syntax (and honestly I do kinda agree that
f(x, y)
and(f x y)
are not really much different, and I like the removal of commas). But trying to write some non-trivial programs in Clojure/script made me realize that my quibble with lisps and some functional languages is name bindings.The fact that name bindings require indentation really messes with readability. I understand the sort of… theoretical underpinning of this, and some people will argue that it’s better, but when you’re working with a relatively iterative process, being able to reserve indentation for loops and other blocks (instead of “OK from this point forward this value is named
foo
”) is nice!It feels silly but I think it’s important, because people already are pretty lazy about giving things good names, so any added friction is going to make written code harder to read.
(Clojure-specific whine: something about all the clojure tooling feels super brittle. Lots of inscrutable errors for beginners that could probably be mangled into something nicer. I of course hit these and also didn’t fix them, though…)
EDIT: OTOH Clojure-specific stuff for data types is very very nice. Really love the readability improvements from there
Interesting to hear this–indentation to indicate binding scope is one of the things I really miss when I’m using a non-Lisp. I feel like the mental overhead of trying to figure out where something is bound and where it’s not is much higher.
(I strongly agree on the state of Clojure tooling.)
I think that racket solves this:
Static or dynamic refers to whether the webserver serves requests by reading a static file off disk or running some dynamic code (whether in process or not). While the word “dynamic” can apply broadly to any change, reusing a term with a well-understood definition in this context to refer to unrelated changes like SSL cert renewal and HTTP headers is really confusing. Late in the article it refers to “the filesystem API used to host static files” so it’s clear the author knows the definition. It’s unfortunate that the article is written in this way; it’s self-fulfilling that misusing a clear and well-established term just results in confusion. Maybe a better metaphor for the points it’s trying to make would be Stewart Brand’s concept of pace layering.
Yeah I agree, I think the article is generally good, but the title is misleading.
My summary is “We should try to make dynamic sites as easy to maintain as static sites”, using sqlite, nginx, whatever.
The distinction obviously exists – in fact the article heavily relies on the distinction to make its point.
I agree with the idea of moving them closer together (who wouldn’t want to make dynamic sites easier to maintain?) But I think there will be a difference no matter what.
Mainly that’s because the sandboxing problem (which consists of namespace isolation and resource isolation) is hard on any kernel and on any hardware. When you have a static site, you don’t need to solve that problem at all.
We will get better at solving that problem, but it will always be there. There are hardware issues like Spectre and Meltdown (which required patches to kernels and compilers!), but that’s arguably not even the hardest problem.
I also think recognizing this distinction will lead to more robust architectures. Similar to how progressive enhancement says that your website should still work without JS, your website’s static part should still work if the dynamic parts are broken (the app servers are down). That’s just good engineering.
Funnily enough, sqlite + nginx is what I use for most of my smaller dynamic websites, usually wish a server process as well.
EDIT: Reading further, yeah, almost all of my side projects use that setup, outside of some Phoenix stuff, and I’ve definitely noticed those projects requiring not very much maintenance at all.
What’s also a bit funny is that sqlite and nginx are both extremely old school, state machine-heavy, plain C code.
Yet we reach for them when we want something reliable. I recommend everyone look at the source code for both projects.
This reminds me of these 2 old articles:
https://tratt.net/laurie/blog/entries/how_can_c_programs_be_so_reliable.html
http://damienkatz.net/2013/01/the_unreasonable_effectiveness_of_c.html
(And I am not saying this is good; I certainly wouldn’t and can’t write such C code. It’s just funny)
SQLite, at least, partially compensates via extensive testing, and a slow/considered pace of work (or so I understand). It’s the antithesis of many web-apps in that regard. And the authors come from a tradition that allows them to think outside the box much more than many devs, and do things like auto-generate the SQLite C header, rather than trying to maintain it by hand.
C and C++ can be used effectively, as demonstrated by nginx, sqlite, curl, ruby, python, tcl, lua and others, but it’s definitely a different headspace, as I understand it from dipping into such things just a bit.
I did not know that nginx can talk to sqlite by itself. Can you share your setup?
For me, I don’t use nginx talking directly to SQLite, I just use it as a reverse proxy. It’s just that it makes it easy to set up a lot of websites behind one server, and using SQLite makes it easy to manage those from a data storage standpoint.
I see, yes that makes sense. I use it that way too.
You articulated that without using expressions that would be inappropriate in the average office setting. I admire you for that.
The whole act of reusing a common, well-understood content-related term to instead refer to TLS certs and HTTP headers left me ready to respond with coarse language and possibly question whether OP was trolling.
The idea that maybe we’re comparing a fast layer to a slow layer is somewhat appealing, but I don’t think it quite fits either. I think OP is muddling content and presentation. Different presentations require differing levels of maintenance even for the same content. So if I publish a book, I might need to reprint it every few hundred years as natural conditions cause paper to degrade, etc. Whereas if I publish the same content on a website, I might need to alter the computer that hosts that content every X days as browsers’ expectations change.
That content doesn’t change. And that’s what we commonly mean when we say “a static website.” The fact that the thing presenting the content needs to change in order to adequately serve the readers doesn’t, in my view, make the content dynamic. And I don’t think it moves it from a slow layer to a faster one either.
This is a reasonable criticism, but I think it’s slightly more complicated than that — a collection of files in a directory isn’t enough to unambiguously know how to correctly serve a static site. For instance, different servers disagree on the file extension → mimetype mapping. So I think you need to accept that you can’t just “read a static file off disk”, in order to serve it, you also need other information, which is encoded in the webserver configuration. But nginx/apache/etc let you do surprisingly dynamic things (changing routing depending on cookies/auth status/etc, for instance). So what parts of the webserver configuration are you allowed to use while still classifying something as “static”?
That’s what I’m trying to get at — a directory of files can’t be served as a static site without a configuration system of some sort, and actual http server software in order to serve a static site. But once you’re doing that sort of thing, how do you draw a principled line about what’s “static” and what isn’t?
Putting a finer point on the mimetype thing, since I understand that it could be seen as a purely academic issue:
python2 -m SimpleHTTPServer
andpython3 -m http.server
will serverfoo.wasm
with different mimetypes (application/wasm
andapplication/octet-stream
) Only the wasm bundle served by the python3 version will be executed by browsers, due to security constraints. Thus, what the website does, in a very concrete way, will be dependent not on the files, but on the server software. That sounds like a property of a “dynamic” system to me — why isn’t it?You could say, ok, so a static website needs a filesystem to serve from and a mapping of extensions to content types. But there are also other things you need — information about routing, for instance. What domain and port is the content supposed to be served on, and at what path? If you don’t get that correct, links on the site likely won’t work. This is typically configured out of band — on GitHub pages, for instance, this is configured with the name of the repo.
So you need a extension to mimetype mapping, and routing information, and a filesystem. But you can have a static javacsript file that then goes and talks to the sever it was served from, and arbitrarily changes its behavior based on the HTTP headers that were returned. So really, if you want a robust definition of what a “static” website is, you need to pretty completely describe the mapping between HTTP requests and HTTP responses. But isn’t “a mapping between HTTP requests and HTTP responses” just a FP sort of way of describing a dynamic webserver?
If you disagree with some part of this chain of logic, I’m curious which part.
All the configuration parts and “dynamic” nature of serving files in a static site are about that: serving them, how the file gets on my computer. But at the end of the day, with a static site the content of the document I get is the same as the content on the filesystem on the server. And with a dynamic site it is not. That is the difference. It’s about what is served.
All this talk about mime types and routing just confuses things. One can do the same kinds of tricks with a file system and local applications. For instance: changing the extension, setting default applications, etc. can all change the behavior you observe by opening a file. Does that mean my file system is dynamic too? Isn’t everything dynamic if you look at it that way?
It seems very odd to be talking about whether or not WASM gets executed to make a point about static websites.
When the average person talks about a static site, they are talking about a document-like site with some HTML, some CSS, maybe some images. Yes, there may be some scripting, but it’s likely to be non-essential to the functionality of the site. For these kinds of sites, in practice MIME types are basically never something you as the site author will have to worry about. Every reasonable server will serve HTML, CSS, etc. with reasonable MIME types.
Sure, you can come up with some contrived example of an application-like site that reliant on WASM to function and call it a static site. But that is not what the phrase means in common usage, so what point do you think you are proving by doing so?
You can also misconfigure nginx to send html files as text/plain, if that is your point. python2 predates wasm, it’s simply a wrong default -today-.
What about that is “misconfigured”? It’s just configuration, in some cases you might want all files to be served with a particular content type, regardless of path.
My point is that just having a set of files doesn’t properly encode the information you need to serve that website. That, to me, seems to indicate that defining a static site as one that responds to requests by “reading static files off disk” is at the very least, incomplete.
I think this discussion is kind of pointless then.
Ask 10 web developers and I bet 9 would tell you that they will assume a “normal” or “randomly picked” not shit webserver will serve html/png/jpeg/css files with the correct header so that clients can meaningfully interpret them. It’s not really a web standard but it’s common knowledge/best practice/whatever you wanna call it. I simply think it’s disingenious to call this proper configuration then and not “just assuming any webserver that works”.
I found your point (about the false division of static and dynamic websites) intuitive, from when you talked about isolation primitives in your post. (Is a webserver which serves a FUSE filesystem static or dynamic, for example? What if that filesystem is archivemount?)
But this point about MIME headers is also quite persuasive and interesting, perhaps more so than the isolation point, you should include it in your post.
Given this WASM mimetype requirement, what happens when you distribute WASM as part of a filesystem trees of HTML files and open it with file://? Is there an exception, or… Is this just relying on the browser’s internal mimetype detection heuristics to be correct?
Yeah, I probably should have included it in the post — I might write a follow up post, or add a postscript.
Loading WASM actually doesn’t work from
file://
URLs at all! In general,file://
URLs are pretty special and there’s a bunch of stuff that doesn’t work with them. (Similarly, there are a handful of browser features that don’t work on non-https origins). If you’re doing local development with wasm files, you have to use a HTTP server of some sort.Fascinating! That’s also good support for your post! It disproves the “static means you can distribute it as a tarball and open it and all the content is there” counter-argument.
This is for a good reason. Originally HTML pages were self-contained. Images were added, then styles and scripts. Systems were made that assumed pages wouldn’t be able to just request any old file, so when Javascript gained the ability to load any file it was limited to only be able to load files from the same Origin (protocol + hostname + port group) to not break the assumptions of existing services. But
file://
URLs are special, they’re treated as a unique origins so random HTML pages on disk can’t exfiltrate all the data on your drive. People still wanted to load data from other origins, so they figured out JSONP (basically letting 3rd-party servers run arbitrary JS on your site to tell you things because JS files are special) and then browsers added CORS. CORS allowed servers to send headers to opt in to access from other origins.WebAssembly isn’t special like scripts are, you have to fetch it yourself and it’s subject to CORS and the same origin policy so loading it from a
file://
URL isn’t possible without disabling security restrictions (there are flags for this, using them is a bad idea) but you could inline the WebAssembly file as adata:
URL. (You can basically always fetch those.)These days when getting a subdomain is a non-issue, I can’t see why anyone would want to use absolute URLs inside pages, other than in a few very special cases like sub-sites generated by different tools (e.g.
example.com/docs
produced by a documentation generator).I also haven’t seen MIME type mapping become a serious problem in practice. If a client expects JS or WASM, it doesn’t look at the MIME type at all normally because the behavior for it is hardcoded and doesn’t depend on the MIME type reported by the server. Otherwise, for loading non-HTML files, whether the user agent displays it or offers to open it with an external program by default isn’t a big issue.
MIME bites you where you least expect it. Especially when serving files to external apps or stuff that understands both xml and json and wants to know which one it got. My last surprise was app manifests for windows click-once updates which have to have their weird content-type which the app expects.
This is incorrect. Browsers will not execute WASM from locations that do not have a correct mimetype. This is mandated by the spec: https://www.w3.org/TR/wasm-web-api-1/
You might not have seen this be a problem in practice, but it does exist, and I and many other people have ran into it.
Thanks for the pointer, I didn’t know that the standard requires clients to reject WASM if the MIME type is not correct.
However, I think the original point still stands. If the standard didn’t require rejecting WASM with different MIME types but some clients did it on their own initiative, then I’d agree that web servers with _different but equally acceptable behavior could make or break the website. But if it’s mandated, then a web server that doesn’t have a correct mapping is incorrectly implemented or misconfigured.
Since WASM is relatively new, it’s a more subtle issue of course—some servers/config used to be valid, but no longer are. But they are still expected to conform with the standard now.
You don’t need any specific mimetype for WASM, you can load bytes however you want and pass them to
WebAssembly.instantiate
as an ArrayBuffer.The other replies have explained this particular case in detail, but I think it’s worth isolating the logical fallacy you’re espousing. Suppose we believe that there are two distinct types of X, say PX and QX. But there exist X that are simultaneously PX and QX. Then those existing X are counterexamples, and we should question our assumption that PX and QX were distinct. If PX and QX are only defined in opposition to each other, then we should also question whether P and Q are meaningful.
The abilities to dump and restore a running image, and to easily change everything at runtime, are the two biggest things I miss about Common Lisp. It feels barbaric now when I have to restart a JVM to pick up classpath changes, or when I have to wait a minute on startup for everything to get evaluated instead of just resuming a saved image instantly.
2021:
2022:
SQLite is my go-to for small to medium size webapps that could reasonably run on a single server. It is zero effort to set up. If you need a higher performance DB, you probably need to scale past a single server anyway, and then you have a whole bunch of other scaling issues, where you need a web cache and other stuff anyway.
Reasons not to do that are handling backup at a different place than the application, good inspection tools while your app runs, perf optimization things (also “shared” memory usage with one big dbms instance) you can’t do in sqlite and the easier path for migrating to a multi-machine setup. Lastly you’ll also get separation of concerns, allowing you to split up some parts of your up into different permission levels.
Regarding backups: what’s wrong with the .backup command
If I’m reading that right you’ll have to implement that into your application. postgres/mariadb can be backed up (and restored) without any application interaction. Thus it can also be performed by a specialized backup user (making it also a little bit more secure).
As far as I know, you can use the sqlite3 CLI tool to run .backup while your application is still running. I think it’s fine if you have multiple readers while one process is writing to the DB.
Yes, provided you use WAL mode, which you should probably do anyway.
You could use litestream to stream your SQLite changes to local and offsite backups. Works pretty well.
Ok but instead of adding another dependency that solves the shortcomings of not using a DBMS (and I’ll also have to care about) I could instead use a DBMS.
OK, but then you need to administer a DBMS server, with security, performance, testing, and other implications. The point is that there are tradeoffs and that SQLite offers a simple one for many applications.
Not just that, but what exactly are the problems that make someone need a DBMS server? Sqlite3 is thread safe and for remote replication you can just use something like https://www.symmetricds.org/, right? Even then, you can safely store data up to a couple of terabytes in a single Sqlite3 server, too, and it’s pretty fault tolerant by itself. Am I missing something here?
What does a “single sqlite3 server” mean in the context of an embedded database?
How do you run N copies of your application for HA/operational purposes when the database is “glued with only one instance of the application”?
It’s far from easy in my experience.
My experience has been that managing Postgres replication is also far from easy (though to be fair, Amazon will now do this for you if you’re willing to pay for it).
This seems quite remarkable - any experience with it?
Where do you see the difference between litestream and a tool to backup Postgres/MariaDB? Last time I checked my self-hosted Postgres instance didn’t backup itself.
You have a point but nearly every dbms hoster has automatic backups and I know many backup solutions that automate this. I am running stuff only by myself though (no SaaS)
No, it’s fine to open a SQLite database in another process, such as the CLI. And as long as you use WAL mode, a writer doesn’t interrupt a reader, and a reader can use a RO transaction to operate on a consistent snapshot of the database.
(I wonder how many good companies that Accelerate book is going to kill before engineering managers move on to the next shiny object.)
More on-topic … this article seems to set up a false dichotomy between E2E tests and unit (or component) tests. Integration tests which exercise the whole system can be fast and not flaky if you replace external service dependencies like queues, HTTP transport, etc. with synchronous, in-process components.
This is categorically false. It is a marketing fiction spread by those who want to develop their apps on the cheap, and, ok, fine. But PWAs do not work anything like native apps from the perspective of the end user, and acting otherwise is just gaslighting those users.
This is insane…
I assume most people here that use Ubiquiti have disabled remote access to devices if they haven’t already.
I’m struggling to see how this is good advice. Was it really to protect the stock value (rotating would reveal something bad happened and open it up to questions)? Even that is short sighted.
A comment from a former employee lifted from the HN thread:
This seems consistent with some Glassdoor reviews; for example:
And a bunch more.
Seems like the owner/CEO is just a twat that everyone is afraid of, and for good reasons too. This kind of company culture incentives the wrong kind of decision-making; from a business, ethical, and legal perspective. It’s no surprise that whistleblower “Adam” wants to remain anonymous.
It’s all a classic story repeated untold times over history innit? People will go to great lengths to avoid strong negative consequences to themselves, whether that’s a child lying about things to avoid a spanking, a prisoner giving a false confession under torture, or an employee making bad decisions to avoid being fired. We only have several thousand years of experience with this so it’s all very new… Some people never learn.
holy shit.
Indeed, and it makes its way right into the product too; you can tell when release feature quantity is prized over quality. This honestly explains more than I thought it could about my experience with their products so far — they feel so clearly half-baked, in a persistent, ongoing sense.
I never even heard of Ubiquiti until a few days ago when there was a story on HN that their management interface started displaying huge banner ads for their products – I just use standard/cheap/whatever’s available kind of hardware most of the time so I’m not really up to speed with these kind of things. Anyway, the response from that customer support agent is something else. The best possible interpretation is that it’s a non-native speaker on a particularly bad day: the wife left him yesterday, the dog died this morning, and this afternoon he stepped on a Lego brick. But much more likely is that it’s just another symptom of the horrible work environment and/or bad decision making, just like your meh experience with their products.
Yeah, I had similar experiences with Ubiquiti stuff–I bought it because I liked the idea of separating routing and access point functionality, but it never stopped being flaky. After the last time throughput slowed to a crawl for no reason I got a cheap TP-Link consumer router instead and I haven’t had to think about it once.
Ironically, I can’t. The UniFi Protect phone apps require it, so I have to choose between security of my network and physical security of my house.
Great write-up, I had no idea the REPL of lisp/smalltalk was so powerful. I need to get around to learning clojure.
I think the elixir* REPL fits the bill for the most part - if I start up one
iex
instance and connect to it from another node I can define modules/functions and they show up everywhere. And for hot-fixing in production one can connect to a running erlang/elixir node and fix modules/functions on the REPL live, and as long as the node doesn’t get restarted the fix will be there.* erlang doesn’t quite fit the bill since one can’t define modules/functions on the REPL, you have to compile them from the REPL.
Does Clojure actually have these breakloops though? I think I’ve seen some libraries that allow doing parts of it (restarts), but isn’t the default a stacktrace and “back to the prompt”?
Well, prompt being the Clojure repl, but you’re correct that the breakloop isn’t implemented, as far as I got in the language. You can must implement the new function and re-execute, so you lose all of the context previous to the break. I think with all of the customizability of what happens when a stack trace happens, it’s possibly possible.
I THINK the expected use with Clojure is to try to keep functions so small and side effect free that they are easy to iterate on in a vacuum. Smalltalk and CL have not doubled down on functional and software transactional memory like Clojure has. That makes this a little more nuanced than “has/doesn’t have a feature”.
You’re correct. Interactivity and REPL affordances are areas where Clojure–otherwise an advancement over earlier Lisps–really suffers compared to, for instance, Common Lisp. You don’t have restarts, there is a lot you can’t do from the REPL, and it’s easy to get a REPL into a broken state that can’t be fixed without either a full process restart or using something like Stuart Sierra’s Component to force a full reload of your project (unless you know a ton about both the JVM and the internals of the Clojure compiler). You also can’t take a snapshot of a running image and start it back up later, as you can with other Lisps (and I believe Smalltalk). (This can be useful for creating significant applications that start up very quickly; not coincidentally, Clojure apps start up notoriously slowly.)