Nice! I don’t know what it is, but there is something really satisfying about hosting your website at home. You can have some fun as well, like getting an LED to blink on every hit to the site.
I do want to do something hardware-related because right now I’m under-utilising the Pi’s hardware capabilities, but I feel like I’d have trouble distinguishing real traffic from bot traffic.
I have an interactive pixel grid on my website that syncs to an ePaper display in my home: https://www.svenknebel.de/posts/2023/12/2/ (picture; the grid itself is at the top of the homepage feed)
Very intentionally very low-res so I don’t have to worry about people writing/drawing bad stuff, and it’s an entirely separate small program, so if someone ever manages to crash it, only that part is gone.
there is a neat little project at https://lights.climagic.com/ where you can switch the lights on and off remotely…
I moved my blog off of EC2 to my Raspberry Pi Kubernetes cluster at home just today. The whole idea behind running it on EC2 was that I figured I would have fewer reliability issues than on my homelab Kubernetes cluster, but the Kubernetes cluster has been remarkably stable (especially for stateless apps) and my EC2 setup was remarkably flaky[^1]. It’s definitely rewarding to run my own services, and it saves me a bunch of time/money to boot.
[^1]: not because of EC2, but because I would misconfigure Linux things, or not properly put my certificates in an EBS volume, or not set the spot instance termination policy properly, or any of a dozen other things–my k8s cluster runs behind cloudflare which takes care of the https stuff for me
Espressif has put out a formal statement and accompanying blogpost
I disagree with this, only because it’s imperialism. I’m British; in British English I write marshalling (with two of the letter l), sanitising (-sing instead of -zing, except for words ending in a z), and -ise instead of -ize, among other things. You wouldn’t demand that an Arabic-speaking developer write all his comments in English just for the sake of being idiomatic, would you?
I’ve worked for a few companies in Germany now, about half of them with German as the operating language. All of them had comments written exclusively in English. I don’t know how that is in other countries, but I get the impression from Europeans that this is pretty standard.
That said, my own preference is for American English for code (i.e. variable names, class names, etc), but British English for comments, commit messages, pull requests, etc. That’s because the names are part of the shared codebase and therefore standardised, but the comments and commit messages are specifically from me. As long as everyone can understand my British English, then I don’t think there’s much of a problem.
EDIT: That said, most of these suggestions feel more on the pedantic end of the spectrum as far as advice goes, and I would take some of this with a pinch of salt. In particular, when style suggestions like “I tend to write xyz” become “do this”, then I start to raise eyebrows at the usefulness of a particular style guide.
Developers in China seem to prefer Chinese to English. When ECharts was first open-sourced by Baidu most of the inline comments (and the entire README) were in Chinese:
In Japan I feel like the tech industry is associated with English, and corporate codebases seem to use mostly English in documentation. However, many people’s personal projects have all the comments/docs in Japanese.
If someone wants to force everyone to spell something the same within a language, they should make sure it’s spelled wrong in all varieties, like with HTTP’s ‘referer’.
The Go core developers feel so strongly about their speling that they’re wiling to change the names of constants from other APIs.
The gRPC protocol contains a status code enum (https://grpc.io/docs/guides/status-codes/), one of which is CANCELLED. Every gRPC library uses that spelling except for go-grpc, which spells it Canceled.
Idiosyncratic positions and an absolute refusal to concede to common practice are part and parcel of working with certain kinds of people.
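For illustration, here is how the discrepancy shows up side by side; this sketch assumes the Python grpcio package, which follows the proto spelling (the Go name is only noted in a comment):

```python
import grpc

# grpcio (like most implementations) follows the proto definition's spelling:
print(grpc.StatusCode.CANCELLED.name)   # prints "CANCELLED"

# go-grpc instead exports the same status code as codes.Canceled,
# following the Go project's preference for the single-L spelling.
```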
We’re drifting off-topic, but I have to ask: gRPC is a Google product; Go is a Google product; and Google is a US company. How did gRPC end up with CANCELLED in the first place?!
When you use a lot of staff on H-1B and E-3 visas, you get a lot of people who write in English rather than American!
Wait until you hear about the HTTP ‘Referer’ header. The HTTP folks have been refusing to conform to common practice for more than 30 years!
If this is something other than a private pet project of a person who has no ambition of ever working with people outside of his country? Yes, yes I would.
I believe the advice is still applicable to non-native speakers. In all companies I worked for in France, developers write code in English, including comments, sometimes even internal docs. There are a lot of inconsistencies (typically mixing US English and GB English, sometimes in the same sentence.)
In my experience (LatAm) the problem with that is people tend to have pretty poor English writing skills. You end up with badly written comments and commit messages, full of grammatical errors. People were aware of this so they avoided writing long texts in order to limit their mistakes, so we had one-line PR descriptions, very sparse commenting, no docs to speak of, etc.
Once I had the policy changed to the native language (Portuguese) in PRs and docs, they were more comfortable with it and documentation quality improved.
In Europe people are much more likely to have a strong English proficiency even as a second or third language. You have to know your audience, basically.
While I like to write paragraphs of explanation in between code, my actual comments are rather ungrammatical: a bit of Git-style verb-first phrasing, with articles and other small words dropped. Proper English feels wrong in these contexts. Some examples from my currently opened file:
; Hide map’s slider when page opens first time
;; Giv textbox data now
;;Norm longitude within -180-180
; No add marker when click controls
;; Try redundant desperate ideas to not bleed new markers through button
;; Scroll across date line #ToDo Make no tear in marker view (scroll West from Hawaii)
Those comments would most likely look weird to a person unfamiliar with your particular dialect.
In a small comment it’s fine to cut some corners, similar to titles in newspapers, but we can’t go overboard: the point of these things is to communicate, we don’t want to make it even more difficult for whoever is reading them. Proper grammar helps.
For clarification, this is not my dialect/way of speaking. But I see so many short interline comments like this that I’ve started to feel they’re more appropriate, and now I write them too. Strange!
“If you use standard terms, spell them in a standard way” is not the same as “use only one language ever”.
Is “chapéu” or “hat” the standard way of spelling hat in Golang? If it’s “hat”, your standard is “only use American English ever”.
Is “hat” a standard term regularly used in the golang ecosystem for a specific thing and on the list given in the article? If not, it is not relevant to the point in the article.
(And even generalized: if it happens to be an important term for your codebase or ecosystem, it probably makes sense to standardize on how to spell it, in whatever language and spelling you prefer. I’ve worked on mixed-language codebases, and it would have been helpful if people consistently used the German domain-specific terms instead of mixing them with various translation attempts, especially if some participants don’t speak the language (well) and have to treat terms as partially opaque.)
What? England had the word “hat” long before the USA existed.
I had to solve this once. I maintain a library that converts between HTML/CSS color formats, and one of the formats is a name (and optional spec to say which set of names to draw from). HTML4, CSS2, and CSS2.1 only had “gray”, but CSS3 added “grey” as another spelling for the same color value, and also added a bunch of other new color names which each have a “gray” and a “grey” variant.
Which raises the question: if I give the library a hex code for one of these and ask it to convert to name, which name should it convert to?
The solution I went with was to always return the “gray” variant since that was the “original” spelling in earlier HTML and CSS specs:
https://webcolors.readthedocs.io/en/latest/faq.html#why-does-webcolors-prefer-american-spellings
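A minimal sketch of that “prefer gray” policy (not webcolors’ actual internals; the tiny name table here is just for illustration):

```python
# Hypothetical illustration: reverse lookup that prefers the "gray" spellings.
# The real webcolors tables are much larger; only the policy matters here.
CSS3_NAMES_TO_HEX = {
    "gray": "#808080",
    "grey": "#808080",
    "darkgray": "#a9a9a9",
    "darkgrey": "#a9a9a9",
    "lightslategray": "#778899",
    "lightslategrey": "#778899",
}

def hex_to_name(hex_value: str) -> str | None:
    """Return a CSS3 name for a hex value, preferring the 'gray' variants."""
    candidates = [name for name, hx in CSS3_NAMES_TO_HEX.items()
                  if hx == hex_value.lower()]
    if not candidates:
        return None
    # Prefer the spelling used by the earlier HTML/CSS specs.
    grays = [n for n in candidates if "grey" not in n]
    return (grays or candidates)[0]

print(hex_to_name("#808080"))  # "gray", not "grey"
```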
I thought you guys loved imperialism?
Imperialism is like kids, you like your own brand.
I don’t think it’s really “imperialism”—firstly, “marshaling” isn’t even the preferred spelling in the US. Secondly in countries all over the world job listings stipulate English language skills all the time (even Arabic candidates) and the practice is widely accepted because facilitating communication is generally considered to be important. Lastly, while empires certainly have pushed language standardization as a means to stamp out identities, I don’t think it follows that all language standards exist to stamp out identities (particularly when they are optional, as in the case of this post).
What makes you say that? (Cards on the table, my immediate thought was “Yes, it is.” I had no data for that, but the ngram below suggests that the single l spelling is the (currently) preferred US spelling.)
https://books.google.com/ngrams/graph?content=marshaling%2Cmarshalling&year_start=1800&year_end=2022&corpus=en-US&smoothing=3&case_insensitive=true
It’s imperialist to use social and technical pressure to “encourage” people to use American English so their own codebases are “idiomatic”.
I disagree. I don’t see how it is imperialism in any meaningful sense. Also “pressure” is fairly absurd here.
The article feels more like marketing desperate to attach itself to current (very misleading) headlines than a useful and thoughtful contribution.
So all the reporting on this seems to muddle various aspects a bit… If someone here reads Spanish well I’d love for them to look at the linked presentation and verify some things. My impression from it is that:
a) these are undocumented commands in the HCI protocol, i.e. the interface between a host system and its own Bluetooth chip (i.e. not something accessible remotely)
b) it does give quite low-level control over the ESP, potentially circumventing firmware integrity mechanisms there
c) it gives lower-level access to Bluetooth protocol handling than most Bluetooth peripherals do
and the “scary” suggestions are around b) and c) – i.e. if you manage to compromise a device that uses an ESP for Bluetooth, then you could use this to potentially persist a backdoor on the ESP (which then could allow remote control) or to use the ESP for fairly advanced Bluetooth attacks. But it is not a remotely accessible general exploit in ESP-based devices, as some comments seem to take it.
If anyone has clarifications on this I want to hear it.
This is my takeaway as well. This is not a remote attack, and it doesn’t let you compromise any devices that you don’t already control. It’s just undocumented functionality in the chip firmware, but not a “backdoor” that is usable for anything evil.
ESP32 chips are like taking an Arduino and connecting it to a Bluetooth dongle, all in one chip. The researchers found commands in the dongle part’s firmware that let you do out-of-spec things, if you already control the Arduino part.
Those out-of-spec things also aren’t anything new. Things like sniffing and changing the Bluetooth address have been possible for many years with hacked Bluetooth dongles. So this just means the ESP32 is now one more platform you can do fun Bluetooth hacks with, but many others exist!
Maaaybe you could use this to escalate to a persistent attack under some contrived compromise scenario? I’m not sure if that makes sense, you’d need a much subtler analysis of the ESP32 security model to figure out if this gives you any access you wouldn’t already have under any realistic scenario, at least as long as the main firmware isn’t doing something completely silly like exposing the raw HCI interface over WiFi or something (which would already be a major security problem either way).
this was my takeaway from this whole thing. Incredibly poorly communicated, but basically esp32s got a fun new trick they can do. I look forward to a new wave of esp32 firmwares for conducting bluetooth attacks.
What is Ibuprofen in this context? It’s completely unsearchable.
the obvious one: painkillers, against the headache from playing
There are several models for decentralized code collaboration platforms, ranging from ActivityPub’s (Forgejo) federated model, to Radicle’s entirely P2P model.
I didn’t understand this. As far as I’m aware, Forgejo has no relationship with ActivityPub.
I thought maybe they meant that ActivityPub’s development happens on a Forgejo server, but the ActivityPub spec points to Github as the official repo.
I think they are referencing ForgeFed (https://forgefed.org/), but the last time I looked Forgejo was still working on their implementation of it.
That was my bad! I will amend that to say Forgefed.
https://forgefed.org/
Side thread: personally I don’t feel like Github stars are a good metric for the popularity of a project. What do you think?
I don’t have a better way to estimate project popularity; I’m just saying that Github stars seem not useful to me. In about 16 years of using Github I have starred fewer than 30 projects, but I’ve probably used ten times as many Github projects (probably many more). Looks like I just don’t star projects :-).
And there might actually be a bias in the star counts, in that some projects attract users that are more likely to hand out stars.
What makes you give a star to a Github project? Do you give stars for any project that sounds interesting, or any project that you use, or any project that you feel exceptionally thankful for?
agreed, they are pretty useless as a metric for anything. I think they mostly measure “how much reach on social media/reddit/HN/… has this ever gotten” in many cases, and that’s not informative of anything. (I personally star a lot, but really treat it as a bookmark along the lines of “looks vaguely interesting from a brief skim”; it’s not an endorsement in any way)
I’m pretty sure I’ve never starred a project on GitHub, or at least I haven’t in the past decade, and I don’t know why anyone would! It’s an odd survival of GitHub’s launch era, when “everything has to be social” was the VC meme of the moment and “social open source” was GitHub’s play.
I find it useful as a bookmark, a way to search a curated portion of GitHub later on.
I use it as a “read later” flag when I see a link to a project there but don’t have time to fully consider it in the moment.
Note that I also tried to use Google Trends, but both keywords fell under the threshold for tracking over time!
You can also compare download count from package managers like NPM, but I didn’t have an easy way to do that for so many libraries
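For what it’s worth, both signals are scriptable against public endpoints; a rough sketch (the repo and package names below are just placeholders, and unauthenticated GitHub API calls are rate-limited):

```python
# Rough sketch: compare GitHub stars and npm monthly downloads for a few libraries.
import json
import urllib.request

def github_stars(repo: str) -> int:
    # GitHub REST API; "stargazers_count" is the star total.
    with urllib.request.urlopen(f"https://api.github.com/repos/{repo}") as resp:
        return json.load(resp)["stargazers_count"]

def npm_monthly_downloads(package: str) -> int:
    # npm's public downloads API, last-month point query.
    url = f"https://api.npmjs.org/downloads/point/last-month/{package}"
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)["downloads"]

# Placeholder repo/package pairs, not a recommendation of either project.
for repo, pkg in [("protobufjs/protobuf.js", "protobufjs"),
                  ("msgpack/msgpack-javascript", "@msgpack/msgpack")]:
    print(repo, github_stars(repo), npm_monthly_downloads(pkg))
```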
I don’t get why popularity is so important here. Isn’t it effectively an implementation detail of your language? Even if I’m misunderstanding that and it is not, isn’t the more important question “are there good implementations”, not “are the implementations more popular than the ones for the other thing”?
One huge use-case for my language is sending and receiving and storing programs, so yes, it’s an implementation detail, but it’s also a very important one that will be impossible to change later.
But you’re totally right – that is the main question. I’m still exploring the space of serialization candidates, and these two particularly stood out to me.
I mostly care about popularity because convincing people to trust/adopt technology is way harder than actually implementing it. Extending a trusted format seems less risky than other options
Putting the politics aside, isn’t it a bit much to equate counting visitors (Google Analytics) with “transferring information about someone applying for a job with the security services”?
Sure someone applying there will end up being counted in visitor statistics, but the https://www.werkenvoornederland.nl/ website is for all government jobs. The fact that you visit it doesn’t really mean anything, especially considering that anyone from any country can do so.
That argument only makes sense if the analytics only count front-page visits and do not log any information from other sub-pages visited. That does seem unlikely (to the contrary, it happens way too often that such systems get set up to log even more precise data, e.g. from forms, and not just page visits).
Yeah I just checked and Google indeed gets information about “conversions” and “site exit”. The main page initially has no GA, but when you actually enter your data and submit an application it at least downloads the gtm.js script (albeit without cookies).
They also use Google Fonts, with an X-Client-Data header that can likely be used to uniquely identify you(r machine) in combination with your IP.
Edit: obviously I disabled my ad blocker while doing this. Other than the Google Fonts thing none of this data will be transmitted if you use an ad blocker.
Going clay pigeon shooting for a friend’s stag do, then presumably spending the afternoon in the pub. Think the last time I went clay shooting was a different friend’s stag do in the before times. If I hit something, I’ll be happy.
I had to Google to understand what clay pigeon shooting is. I still didn’t get what “stag do” means. It seems to be a place where you do such activities, especially in the UK. Is that correct?
Was so proud of myself for putting this together recently on a trip to Scotland! Saw phrases like “having a do” around town on posters, and eventually had an aha moment like “oooooh as in doing, like an event.” Already knew the phrase “going stag” which is when a man goes to a primarily couples event solo. When the phrase “stag do” came up, I immediately got it and was so proud of myself :) For context, am from the US
https://en.wikipedia.org/wiki/Bachelor_party
Ah! Learned something new today. Thank you!
This repository and its contents are licensed under the GPL v3 license, with additional terms applied.
As far as I know, additional terms and the GPL are quite incompatible. You can try, but the GPL says something along the lines of: any additional requirements are void.
Then maybe you could read the terms and the GPL section they reference and learn more… The GPL does not say “all extra clauses are void”, but is more nuanced about that, and EA’s terms here comply. (And even for those where it says that, it’s far from obvious what legal consequence that has if the extra clauses are applied by the copyright holder - sure, the license isn’t GPL anymore at that point, but that doesn’t necessarily make it void.)
The additional terms read to me as “don’t say you are us” and “you don’t hold us liable when you distribute”.
Huh! Were those not in the gpl already? :o
If they were (sorry don’t have time to read the tome), my guess would be that the lawyers wanted specific wording.
Oh thanks. Yeah that would make sense.
Although in the README it says
Which is probably not valid under the GPL
That phrasing is probably more along the lines of “assets are not included” than license terms. It’s also, notably, not in the license file.
They’ve also open sourced Tiberian Dawn, Renegade and Generals with Zero Hour!
Given the pile of dependencies I suspect the main value will be for modders, but some of these certainly have the fanbase for a full replacement project to happen.
Really excited about Zero Hour, hopefully this will be helpful to Thyme.
oh, I didn’t know about that project, very interesting! License mismatch, but the contributor count is small enough that they can hopefully fix that if they want to.
Zero Hour is the one with the most memories for me too. Never was good at it, especially not multiplayer, but lots of fun was had.
I suspect the main benefit is for the OpenRA project. The license is compatible…
Zero Hour! My childhood :D I thought maybe I could deep dive on this source code, but it’s 1.3M lines of code; I thought projects like that would be around 200k :D
If you want to use Git as a backend for non-(software-)technical users, you really need a different UI on top of it. It sounds like fairly simple “upload new file version with drag-and-drop” options would already help a lot here.
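As a rough sketch of the plumbing such a UI could sit on (a hypothetical helper, shelling out to git rather than using a library, and assuming user.name/user.email are already configured for the repo):

```python
# Hypothetical back-end helper for a "drag-and-drop a new file version" UI:
# save the upload into a working copy, then record it as a commit.
import subprocess
from pathlib import Path

def commit_uploaded_file(repo_dir: str, relative_path: str,
                         data: bytes, author: str) -> None:
    repo = Path(repo_dir)
    target = repo / relative_path
    target.parent.mkdir(parents=True, exist_ok=True)
    target.write_bytes(data)                      # overwrite with the new version
    subprocess.run(["git", "add", relative_path], cwd=repo, check=True)
    subprocess.run(["git", "commit",
                    "-m", f"Update {relative_path} (uploaded by {author})"],
                   cwd=repo, check=True)

# e.g. commit_uploaded_file("/srv/docs", "lesson-plans/week3.docx", blob, "Ms. K")
```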
My personal suspicion is that, in twenty-five years, using version control will be considered a basic life-skill for all employed people. Just like we’re now expected to know how to operate our own elevators, pump our own gas, scan our own groceries, and type our own e-mails, a kindergarten teacher in 2050 will be expected to write their own commits of updates to grades.
I expect someone will comment on how dystopian that idea is and I’m not necessarily disagreeing. It just feels like it is in that sweet spot of being useful enough for businesses to demand it and there are enough people who can muddle through for the mandate to come down. Suddenly, knowing git (or hopefully something better that comes along) will be your problem.
I doubt it, at least in a general sense. Many more systems will gain change tracking and history features, sure, but not in the sense of a single widespread VCS.
I’m not a Gopher myself, but this article seems interesting from an algorithms and data structures point of view. Would it be worthwhile to add the compsci tag to this link as well?
Use the “suggest” button instead of posting a meta-comment.
TIL: thanks.
To clarify for anyone who is on this comments page and does not see a “suggest” button, you can only see that button from the homepage where posts are listed. (That is, I read fanf’s comment from this comments page and thought, “What ‘suggest’ button?”)
Edit: actually, this is weird. I only see the “suggest” button for some of the posts in the homepage’s listing. Does anyone know whether that is deliberate? (Image here: https://imgur.com/LQHuJJM. Some posts have the “suggest” link and some don’t. I am confused.)
Second edit: see sknebel’s comment below. I misunderstood. You can see the “suggest” button on the comments page if there is a “suggest” button to be seen.
it disappears once an edit through suggestions has been applied, to prevent “edit wars” I guess.
Thanks for clarifying. I see what you mean about edit wars, but isn’t it possible that a post deserves multiple (non-competing) suggestions?
If encoding typical URLs doesn’t work really well, shouldn’t they make a new encoding mode that is specialized for alphanumerics plus / : ? & etc.?
chicken-egg problem now. Who would want to use a QR code encoding that won’t work with the majority of QR code readers for only a very small gain, and how many reader implementations are actively maintained and will add support for something nobody uses yet?
The encoding could be invented for internal use by a very large company, kinda like UPS’s MaxiCode: https://en.m.wikipedia.org/wiki/MaxiCode
What you’re describing is the problem for all new standards. How do they ever work? ;-)
Better in environments that are not as fragmented and/or can provide backwards compatibility? ;)
The “byte” encoding works well for that. Don’t forget, URLs can contain the full range of Unicode, so a restricted set is never going to please everyone. Given that QR codes can contain ~4k symbols, I’m not sure there’s much need for an entirely new encoding.
Yes, although… there’s some benefit to making QR codes smaller even when nowhere near the limits. Smaller ones scan faster and more reliably. I find it difficult to get a 1kB QR code to scan at all with a phone.
QR codes also let you configure the amount of error correction. For small amounts of data, you often turn up the error correction which makes them possible to scan with a very poor image, so they can often scan while the camera is still trying to focus.
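As a quick illustration with the third-party `qrcode` package (assuming its standard API; the URL is made up): the same short string needs a bigger symbol as you raise the error-correction level.

```python
import qrcode
from qrcode.constants import ERROR_CORRECT_L, ERROR_CORRECT_H

data = "HTTPS://EXAMPLE.COM/ABC123"   # made-up short URL

for level, name in [(ERROR_CORRECT_L, "L (~7% recovery)"),
                    (ERROR_CORRECT_H, "H (~30% recovery)")]:
    qr = qrcode.QRCode(error_correction=level)
    qr.add_data(data)
    qr.make(fit=True)                  # pick the smallest version that fits
    print(name, "-> version", qr.version)
```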
IME small QR codes scan very fast even with the FEC at minimum.
URIs can contain the full Unicode range, but the average URL does not. Given that’s become a major use case for QR codes, it’s definitely a shame they don’t have a better mode for them: binary mode needs a full byte per character while alphanumeric mode only needs 5.5 bits.
All non-ASCII characters in URLs can be %-encoded, so Unicode isn’t a problem. A 6-bit code has room for lower case, digits, and all the URL punctuation, plus room to spare for space, an upper-case shift, and a few more.
So ideally a QR encoder should ask the user whether the text is a URL, and should check which encoding is the smallest (alnum with %-encoding, or binary mode). Of course this assumes that any paths in the URL are also case-insensitive (which depends on the server).
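A back-of-envelope version of that comparison, using the per-mode bit costs from the QR spec (8 bits per character in byte mode, 11 bits per pair in alphanumeric mode) and ignoring the mode/length-field overhead; uppercasing the whole URL assumes the server really is case-insensitive:

```python
# Compare QR data bits: byte mode vs alphanumeric mode after uppercasing
# and %-encoding anything outside the 45-character alphanumeric set.
from urllib.parse import quote

ALNUM = set("0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ $%*+-./:")

def to_alnum(url: str) -> str:
    up = url.upper()                       # assumes a case-insensitive server!
    return "".join(c if c in ALNUM else quote(c).upper() for c in up)

def byte_bits(s: str) -> int:
    return 8 * len(s.encode("utf-8"))      # byte mode: 8 bits per byte

def alnum_bits(s: str) -> int:
    pairs, rest = divmod(len(s), 2)        # 11 bits per pair, 6 for a leftover char
    return 11 * pairs + 6 * rest

url = "https://example.com/tickets?id=1234&lang=en"   # made-up example URL
converted = to_alnum(url)
print(converted)
print("byte mode: ", byte_bits(url), "bits")
print("alnum mode:", alnum_bits(converted), "bits")
```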
Btw. can all HTTP servers correctly handle requests with uppercase domain names? I’m thinking about SNI, maybe certificate entries…? Or do browsers already “normalize” the domain name part of a URL into lowercase?
The spec says host names are case-insensitive. In practice I believe all (?) browsers normalize to lowercase so I’m not sure if all servers would handle it correctly but a lot certainly do. I just checked and curl does not normalize, so it would be easy to test a particular server that way.
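For what it’s worth, Python’s standard URL parser does lowercase the host but not the path (a quick local check, not evidence about what every browser or server does):

```python
from urllib.parse import urlsplit

parts = urlsplit("HTTPS://EXAMPLE.COM/Some/Path")
print(parts.hostname)   # "example.com" -- hostname is lowercased
print(parts.path)       # "/Some/Path"  -- path case is preserved
```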
Host name yes, but not path. So if you are making a URL that includes a path the path should be upper case (or case insensitive but encoded as upper case for the QR code).
No, a URI consists of ASCII characters only. A particular URI scheme may define how non-ASCII characters are encoded as ASCII, e.g. via percent-encoding their UTF-8 bytes.
Ok? You see how that makes the binary/byte encoding even worse right?
Furthermore, the thing that’s like a URI but not limited to ASCII is an IRI (Internationalized Resource Identifier).
You’re right (oh and I should know that URLs can contain anything…. Let’s blame it on Sunday :-))
Byte encoding is fun in practice, as I recently discovered, because Android’s default QR code API returns the result as a Java String. But aren’t Java Strings UTF-16? Why yes, and so the byte data is interpreted as UTF-8, then converted to UTF-16, and then provided as a string.
The workaround, apparently, if you want raw byte data, is to use an undocumented setting to tell the API that the data is in an 8-bit code page that can be safely round-tripped through Unicode, and then extract the data from the string by exporting it in that encoding.
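That trick works because an 8-bit code page like ISO-8859-1 maps every byte value to exactly one Unicode code point, so the round trip is lossless. A small Python illustration of the principle (not the Android API itself):

```python
# Byte values 0-255 map 1:1 to code points U+0000..U+00FF in Latin-1,
# so bytes -> str -> bytes is lossless, unlike a UTF-8 interpretation.
raw = bytes(range(256))

as_text = raw.decode("iso-8859-1")       # pretend the payload is Latin-1 text
recovered = as_text.encode("iso-8859-1")
assert recovered == raw                  # round trip is exact

# The same bytes are NOT valid UTF-8, which is why decoding byte-mode QR
# payloads as UTF-8 can mangle them:
try:
    raw.decode("utf-8")
except UnicodeDecodeError as e:
    print("UTF-8 decode fails:", e)
```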
I read somewhere that the EU’s COVID passes used text mode with base45 encoding because of this tendency for byte mode QR codes to be interpreted as UTF-8.
Do you really mean base45 or was that a typo for base64? I’m confused because base64 fits fine into utf8’s one-byte-per-character subset. :)
There’s an RFC: https://datatracker.ietf.org/doc/rfc9285/ and yes, base45, which is numbers and uppercase and symbols that matches the QR code “alphanumeric” set.
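The scheme is simple enough to sketch straight from RFC 9285: each byte pair becomes three characters from the 45-character QR alphanumeric alphabet, and a trailing single byte becomes two. A minimal encoder, assuming I’m reading the RFC correctly:

```python
# Minimal base45 encoder per RFC 9285 (least-significant digit first: n = c + 45*d + 45*45*e).
ALPHABET = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ $%*+-./:"

def b45encode(data: bytes) -> str:
    out = []
    for i in range(0, len(data) - 1, 2):
        n = data[i] * 256 + data[i + 1]    # two bytes -> one base-256 number
        c, n = n % 45, n // 45
        d, e = n % 45, n // 45
        out += [ALPHABET[c], ALPHABET[d], ALPHABET[e]]
    if len(data) % 2:                      # one leftover byte -> two characters
        n = data[-1]
        out += [ALPHABET[n % 45], ALPHABET[n // 45]]
    return "".join(out)

print(b45encode(b"AB"))        # "BB8"          (the RFC's own example)
print(b45encode(b"Hello!!"))   # "%69 VD92EX0"  (also from the RFC)
```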
ahhh ty
Yeah, kind of a big problem for something so huge and important.
It does appear that a few volunteers are stepping forward to handle patching.
For something important the README is quite lacking.
It seems to be a dependency for rustls.
I think at this point rustls can use multiple cryptographic backends but I could be wrong. Last time I was doing crypto stuff with it I had to explicitly choose to use ring for some stuff iirc
In what way?
It doesn’t say what it is. It says it has code from BoringSSL but not an outright fork?
fair. The one-liner is at the top of the docs though at least: “Safe, fast, small crypto using Rust with BoringSSL’s cryptography primitives.”
One thing I’ve always wondered is why Rust doesn’t use major version increments for major editions like this? The only thing I can think of is the residual fear of the Python 2 -> 3 jump. Or do they just not use semver?
Because Editions are not breaking changes in that sense: Rust 1.85 will compile your project from Rust 1.80 just fine.
Editions introduce breaking changes, but they are opt-in on a crate level and can be mixed: you can upgrade without your dependencies having moved to the new Editions, or choose to not upgrade even if your dependencies do (although you then need to use a new-enough compiler of course, but your code doesn’t need to change).
If they ever decide that’s not enough, something really needs to be broken properly for everybody in a way Editions can’t express, then it’ll be a major version increase.
previous discussions of the plans to offer this: https://lobste.rs/s/nnuufh/let_s_encrypt_will_begin_offering_6_day and https://lobste.rs/s/7sjhpm/announcing_six_day_ip_address
This is very bad for web openness and long term accessibility, much like the Rails browser version guard.
Why? Shorter expiry times don’t require any new browser support, 90-day certificates will continue to be available, shorter certs are opt-in, and other TLS certificate providers are available (even if your parameters are “free” and “supports ACME”).
It puts a lot more centralized dependency on LetsEncrypt. If your site has to get a new cert every 6 days and something happens to LE, your site is now unusable without intervention.
It’s not out of the realm of possibility that an attacker could force LE’s issuing/validating servers offline for 6 days (which is also the longest possible expiry in this scenario, there could be sites that have to renew the same day the outage starts).
That explains why it introduces potential fragility but not why 6 day certs are bad for the open web and accessibility.
The ACME client can implement multiple issuers and do some kind of load balancing or fallback between them, should one of them be inaccessible. Like Caddy does.
I get why for the browser guard, but why for this? If regular 90 day certificates are already working, then there is absolutely no reason that a 6 day one wouldn’t. Sure you might need to do some work on the backend to sort out the automation (though that is hopefully already being done with 90 day certs), but for the client side this should not matter whatsoever.
Let’s Encrypt is great. HTTPS should not be reserved for companies which can afford to pay for certificates, which is what happened before, and it should not be difficult to set up, either. I don’t care what content you’re serving; plain HTTP (and others) should just not be used, it’s a big tracking and attack vector.
The article explained why they want to start offering 6-day certificates. It is because if your private key leaks then anyone can impersonate your site until the certificate expires, unless you revoke the certificate with the leaked key. And certificate revocation is not reliable.
I accept that certificate revocation is somewhat unreliable, but I will admit I am puzzled about just who it is that loses their private keys so frequently that they need a maximum of a 6-day period in which the leaked key could be used.
I don’t get how “so frequently” comes into it. If you lose your key very, very rarely, don’t you still care about how long it could be misused?
Any individual doesn’t, but the whole web does. And if Let’s Encrypt loses trust, then the whole web suffers.
One key is one key/site, which adds up to hundreds of millions of keys. Those hundreds of millions of keys do pose a risk to trusting Let’s Encrypt on the whole.
You only need to lose your private keys once for the validity duration to matter.
Unless you consider less than a year (the longest expiration in typical use, AFAIK) to be “long term”, I don’t get your point.