As a European, I see little use for iMessage because everybody is using Signal or WhatsApp already. In the US, the blue bubble has been culturally important.
There’s that social “blue bubble” thing that @pimeys mentioned.
But the real upside is that if your iMessage-using contacts’ devices see that you’re an iMessage user, their messages to you will be encrypted end-to-end. They will also have an easier time creating group messages that include you, because iMessage has some interface deficiencies creating SMS group threads.
This glosses over the biggest advantage of iMessage vs. SMS: iMessage removes the carrier from the equation entirely, and doesn't require the user to have a phone. iMessage users can communicate with other iMessage users over the IP network, rather than over the carrier network, which means I can text my friends from my MacBook, my iPad, or my iPhone, without needing to have a SIM for any of them. SMS messages (green bubbles) break this feature, requiring me to have my SIM-enabled phone within some small distance of my MacBook for message-forwarding to work. This, in a word, sucks. It means I can't text on a plane, for instance.
iMessage exclusivity is anti-competitive and bad, but as a technology, iMessage is far, far, far superior to SMS and RCS. The blue/green bubble stigma isn't because people like the color blue; it's because texting with a green bubble is a significantly worse user experience than texting with blue bubbles.
The blue/green bubble stigma isn't because people like the color blue; it's because texting with a green bubble is a significantly worse user experience than texting with blue bubbles.
This is true, but not exclusively so. At least here in the US, green bubbles also carry social stigma around (perceived) wealth. Google claims that iMessage dominates because of bullying. I don’t think it’s fair to claim that the blue/green bubble dynamic is solely due to iMessage being a superior experience (which to be clear, it is).
iMessage removes the carrier from the equation entirely, and doesn’t require the user to have a phone.
That first part is certainly one reason that I like end-to-end encryption. I didn't mean to omit the second part. It just wasn't front of mind for me because it still requires an Apple device right now. And while I do use a MacBook much of the time, I still spend enough time on a Linux laptop and desktop (which don't get to participate in this device- and carrier-independence) that I wasn't thinking about that side of it.
For people who do use devices that can participate in that, I’m sure that makes texting with blue bubbles better than texting with green bubbles.
For others, it is more related to the group chat UX, the tapback annoyances, and a few other papercuts. And I’d like my chats to be encrypted because carriers are happy to mine them for creepy purposes, given the opportunity.
Note: Adding the lang attribute is particularly important for assistive technology users. For instance, screen readers alter their voice and pronunciation based on the language attribute.
I’m sorry, it is the year 2023. It is trivial to identify the language of a paragraph of text, and, if you fail and just use the default voice, any screen reader user will be either a) as confused as I would be, reading a language I clearly don’t understand, or b) able to determine that they are getting German with a bad Spanish accent, assuming they speak both languages. Please, please, please, accessibility “experts”, stop asking literally millions of people to do work on every one of their pieces of content, when the work can be done trivially, automatically.
These are heuristics, and not always correct. Especially for shorter phrases, it is very possible that the text is valid in multiple languages. It is of course good that these heuristics exist, but it seems best to also provide more concrete info.
The ideal situation is probably both. Treat the HTML tags as a strong signal, but if there is lots of text and your heuristics are fairly certain that it is wrong consider overriding it, but if it is short text or you aren’t sure go with what it says.
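A minimal sketch of that "declared attribute first, confident heuristic as tie-breaker" policy, assuming the third-party langdetect package; the length and probability thresholds here are arbitrary:

```python
from typing import Optional
from langdetect import DetectorFactory, detect_langs  # pip install langdetect

DetectorFactory.seed = 0  # make the heuristic's guesses deterministic across runs

def effective_lang(declared: Optional[str], text: str) -> str:
    """Prefer the declared lang attribute; override only on long text with a very confident guess."""
    best = detect_langs(text)[0]  # guesses sorted by probability, best first
    if declared is None:
        return best.lang  # no markup at all: the heuristic is all we have
    if len(text) > 200 and best.prob > 0.99 and best.lang != declared.split("-")[0]:
        return best.lang  # lots of text and the heuristic is very sure the markup is wrong
    return declared       # short or ambiguous text: trust the author
```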
Makes me wonder if there is a way to indicate “I don’t know” for part of the text. For example if I am embedding a user-submitted movie title that may be another language. I could say that most of this site is in English, but I don’t know what language that title is, take your best guess.
How does one indicate undetermined languages using the ISO 639 language codes?
In some situations, it may be necessary to indicate that the identity of the language used in an information object has not been determined. If the situation is that it is undetermined because there is no language content, the following identifier is provided by ISO 639-2:
zxx (No linguistic content; Not applicable)
If there is language content, but the specific language cannot be determined a special identifier is provided by ISO 639-2:
und (Undetermined)
Also in fun ISO language codes: you can add -fonipa to a language code to indicate IPA transcription; for example, en-fonipa is English transcribed in the International Phonetic Alphabet.
It is trivial to identify the language of a paragraph of text
It’s an AGI-hard problem…
Consider my cousin Ada. The only way a screen reader (or person) can read that sentence correctly without a <span lang=tr> is by knowing who she is.
What is possible, though far from trivial, is to apply a massive list of heuristics, which is sometimes the best option available, e.g. for user-generated content. However, when people who do have the technical knowledge to take care of these things don't, responsible authors who mark their languages will then have to work around them.
But never, in all of human history, has a letter, or book, or magazine article ever noted your cousin’s name language in obscure markup. That’s not how humans communicate, and we shouldn’t start now.
I write lang="xy" attributes. I, for one, certainly would prefer that the relatively small number of HTML authors take the small amount of care to write lang="xy" attributes, so that user agents can simply read those nine bytes, than that the much larger number of users spend the processing power to run the heuristics to identify the language (and maybe fail to guess correctly). Consider users over authors. Maybe, if one considers only screen readers, the effect shrinks away, but there are other user agents that care what language text on the Web is in, some as common as Google Chrome, which identifies the language so that it can offer to Google-Translate it.
I, for one, certainly would prefer that the relatively small number of HTML authors take the small amount of care to write lang=“xy” attributes, so that user agents can simply read those nine bytes, than that the much larger number of users spend the processing power to run the heuristics to identify the language (and maybe fail to guess correctly).
This is the fundamental disconnect. You are not making this ask of the "relatively small number of HTML authors". You are making this ask of literally every single person who tweets, posts to Facebook or Reddit, or sends an email. This is an ask of, essentially, every person who has ever used a computer. The content creator is the only person who knows the language they are using.
Yesterday you found this fly website about amateur radio, and you want to explore more—but how can you find related websites?
“Fly” as slang was dated even in the 90s - I think Offspring’s Pretty Fly (for a White Guy) was the last gasp of that word in pop culture, and in that context deliberately using a piece of dated and black-coded slang was part of the message of the song.
I have nothing against webrings, personally, but I don’t particularly have nostalgia for them either. I remember them from some of the websites I visited as a kid in the late 90s/early 2000s, but I don’t remember taking them very seriously as a way to find new interesting websites. Hand-curated pages of links to other websites by contrast were a good way to find content another human thinks is good, and of course those never went away.
Also, remembering the terrible internet speeds of the 1990s, traversing a webring meant you had to click and wait (paying by the minute) for another site to load, which might be random, or only vaguely related to the one you were on now. It was really a worthless navigation interface back then. With faster and unmetered connections, they might be slightly less awful. But still not very respectful of the readers’ time and attention.
I like the idea of webrings (I need to add one to my website), but it seems painful to me that it requires so many manual steps, often involving a GitHub repository or the like. I believe that most of these could be automated to some degree.
The whole point is to remove the automation, though. Webrings should be manually vetted collections of content that one or more humans have decided is cool.
I love these little tools. They present a new way of creating something, and make it easy to try things out, even if you aren’t “trained” as a visual artist or musician.
The real killer feature for these tiny apps is being able to share creations via URLs. You can encode the state as a base64 value in the URL itself.
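A minimal sketch of that trick; the URL and the state fields are made up:

```python
import base64
import json

def encode_state(state: dict) -> str:
    """Pack app state into a URL-safe token that can live in the fragment or query string."""
    raw = json.dumps(state, separators=(",", ":")).encode("utf-8")
    return base64.urlsafe_b64encode(raw).decode("ascii").rstrip("=")

def decode_state(token: str) -> dict:
    padded = token + "=" * (-len(token) % 4)  # restore the stripped padding
    return json.loads(base64.urlsafe_b64decode(padded))

share_url = "https://example.org/toy#" + encode_state({"bpm": 120, "notes": [0, 4, 7]})
```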
I can't actually find the really tiny ones, but these ones I like nonetheless:
Bitsy - a 1-bit pixel art game maker for little adventure games
PuzzleScript is a domain-specific language for making sliding-block puzzles
I’ve also made a few things like this myself; they don’t encode programs in URLs, but either provide the ability to download a self-contained single-file html export, include a built-in pastebin service, or both:
These are fantastic - especially Octo - I just went down a Chip-8 rabbit hole. It seems feasible to make a Chip-8 emulator for the Arduboy I bought recently.
EDIT: ah, the Chip-8 requires more buttons (basically a num-pad) than the Arduboy has (more like a Gameboy).
Email should be a human-to-human discussion stream only. We have better technologies for notifications, file exchanges, collaboration, etc. Humans should write human-readable text, and then read those messages when other people send them.
The point of mailing the patch, though, is that a computer will process the text in the patch, from the email, and apply it. It’s layering a protocol on top of a protocol.
And when email was the only available tool, that made sense. But things like ReviewBoard were available 20 years ago; Phabricator, GitHub, and so on have been around for a big chunk of that time, and have big advantages:
Different comments on the same line are easy to find; they're not separate threads.
Arguably, layering protocols on top of each other is what got us to where we are in the first place; I don't see why it would be a problem to layer them some more. Especially since, IMO, email is way more suited for discussions over the internet than whatever half of a mailing archive and email agent "modern" forges cook up.
You wouldn’t normally apply a patch without reading it first and you might reject it or ask for changes instead of applying it. Patch emails are just as much for humans as they are for computers.
I would certainly apply a patch without reading it first. I need to apply it anyway as part of review, e.g. to make sure that it compiles and passes tests. The email rendering of the patch may not include all the desired context, syntax highlighting, whitespace ignoring, IDE functionality, etc., and overall it's generally painful.
Since the review functionality that I can do by reading a patch is a subset of that which I can do after applying it, applying the patch is my default mode of review. The workflow is similar to checking out a remote branch: the transport protocol is ultimately irrelevant as long as I can get the commit into my repo for review somehow.
Plus, you can’t represent certain changes in Git patches anyways (merge commits), so you have to use a different protocol if you want to transmit them.
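For what it's worth, the "apply first, then review with local tooling" default can be scripted in a few lines; this is only a sketch, and the branch name and test command are placeholders:

```python
import subprocess

def start_review(patch_mbox: str) -> None:
    # Apply the emailed series onto a scratch branch, then review with normal local tooling
    # (IDE, syntax highlighting, running the tests) instead of the email rendering.
    subprocess.run(["git", "checkout", "-b", "review/incoming"], check=True)
    subprocess.run(["git", "am", patch_mbox], check=True)
    subprocess.run(["pytest"], check=True)  # stand-in for whatever the project's test command is
```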
It's funny that you mention sourcehut, since git-send-email.io was created by Drew DeVault to evangelize for an email-based workflow over using a web-based UI to prepare a pull request.
Yes, I’m aware. Drew is a very opinionated person, and I tend to agree with many of his opinions. I like sr.ht. I’m a paying member.
But his opinion on this is simply wrong, as evidenced by "You will probably also want to link to the archives so that potential contributors can read other people's work to get a feel for your submission process." in the sr.ht email list documentation. Moving 'research how people did things in the past' out of an email client, but 'contribute in the future' INTO an email client means users must have two separate interfaces for handling patches depending on when they began participating in a project, and That's Bad.
That’s a fair point. I doubt anyone would actually use it, but it’d be interesting from a theoretical standpoint if there was a gateway that let you receive historical emails from a list in your mail client.
Do any of the other services offer a way for you to contribute to a repo that's not on your platform? The only one I'm aware of is GitLab's in-progress work on using ActivityPub for notifying remote repos about pull requests. I think some of the value mismatch is that you seem to prefer an integrated UI over having a workflow that favors decentralization like Drew's email setup.
Why not just upload a branch to the remote with your changes? There’s no reason you can’t configure a repo with public branch creation, and a separate repo that is the actual project, and pull from one repo to another after the branch/patch has been approved.
I'm not sure about "over", since sourcehut implements a web-based UI to send patches. This how-to site is useful for people who prefer the command line, but it is not required, or even the most obvious way to do things with sourcehut.
For the Fennel project we accept patches on the mailing list (or PRs on the read-only github mirror for people who prefer that) but most review for small changes happens on IRC/Matrix. Someone will drop a link to either a commit on a git hosting site or a patch in a pastebin and we’ll talk it thru there. Obviously that’s bad for cases where the discussion is important to refer back to later, but for most minor changes it’s the nicest flow I’ve ever used.
Yeah. Honestly I think email is handy for discussing patches, and it’s convenient if the email contains the changes so you can talk about them, but I think the actual changes being propagated over git+ssh or git+https is fine. You can paste the patch into your message using your MUA for purposes of discussion but also include a remote+branch as well for purposes of actually applying the changes.
I’ve been using sourcehut “the right way” for a few years now and while I like it better than github for the discussions, the process of sending the patch sucks! Trying to get git send-email to make your v2 patch show up as a reply in an existing thread (which is trivial to do in your MUA) is a profoundly stupid experience.
Unfortunately sourcehut also doesn't detect the new patch when it's done as a reply this way, so I've resorted to having v2s etc. be new threads. This seems to be how the tool prefers to be used.
I just let pinboard suggest tags for all of my links. It makes them pretty searchable. It’s definitely not perfect, but it’s a lot less work than doing the work on my own.
$0.5 for 10GB of blob storage? $30 for a single vCPU?
Norvig’s numbers were at least somewhat grounded in reality; these prices and the hand-wavey calculations driving them are incredibly goofy. Questionable if anybody should bother “knowing” them.
Also the idea that “every programmer” should know about this kind of stuff is very silly. Apparently it’s some kind of forgotten knowledge that other kinds of programming exist beyond “send your program to a big company’s computers to run them for you”.
The important thing to know is probably the ratio between cloud costs and developer salary. If using some cloud thing costs $10, how much of your time does it need to save to be worthwhile? If every developer is using $100/month of cloud things, it probably won’t even show up in accounting noise. If they’re using $10,000/month, that’s a different question. If they need to spend a day justifying spending $100, your process is costing more than it’s saving.
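A back-of-the-envelope version of that trade-off; the salary and hourly figures are hypothetical:

```python
def break_even_hours_per_month(monthly_cloud_cost: float, loaded_hourly_rate: float) -> float:
    """Engineer-hours the service must save each month to pay for itself."""
    return monthly_cloud_cost / loaded_hourly_rate

# A $150k salary is roughly $75/hour, call it ~$100/hour fully loaded (hypothetical numbers).
print(break_even_hours_per_month(100, 100))      # $100/mo of cloud -> ~1 hour/month
print(break_even_hours_per_month(10_000, 100))   # $10k/mo -> ~100 hours/month
```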
I pulled up the ec2 pricing and filtered to 1 vCPU. The monthly cost ranges from $4.23 to $60.96.
The cheapest 32-vCPU machine is $26.38/mo/vCPU.
10 GB of S3 is $0.23/mo, which is much cheaper than $0.50, but Azure has a premium blob storage tier at $1.50/mo. There are also cheaper tiers at both providers, so $0.50 is not wildly off, but probably not a good average either.
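The per-vCPU monthly figures above are just hourly rates times hours in a month; the rates below are the ones implied by those numbers, not quoted list prices:

```python
HOURS_PER_MONTH = 730  # the usual convention for monthly cloud pricing math

def monthly_per_vcpu(hourly_rate: float, vcpus: int = 1) -> float:
    return hourly_rate * HOURS_PER_MONTH / vcpus

print(round(monthly_per_vcpu(0.0058), 2))       # ~4.23, the low end of the 1-vCPU range
print(round(monthly_per_vcpu(1.1564, 32), 2))   # ~26.38 per vCPU for a 32-vCPU machine
```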
AWS pricing is what isn't grounded in the reality of its market!
I'm a happy Hetzner customer and these prices don't sound very relatable. To this day I still don't understand why people pay well over 2x the price for a worse service, especially in terms of its awful UX.
I was considering them but their reviews are awful. 39% one-star is one of the worst scores I’ve seen of any company, though not as bad as OVH’s 67%. A lot of the complaints are about accounts being suspended with no information about why and no recourse, which makes me incredibly nervous about using them.
Their dedicated offerings look very interesting, but those reviews scare me. You can get a moderately decent quad-core machine with some big disks and a decent amount of RAM for the same price as an AWS or Azure two-vCPU instance.
One benefit of the “smart” features that this article doesn’t mention is accessibility for blind people. I have a Cosori convection oven, and by connecting it, I can adjust its controls through the Vesync app on my phone, which is accessible through a screen reader. It’s unfortunate that the oven and my phone have to go all the way out to some remote server to talk to each other, but I’ll take the benefit. At least the oven requires that I physically press the “Start” button, so being connected isn’t dangerous.
I recently wanted a shopping cart that would take itself back to the rack after you’d unloaded it. It struck me as the most fun-to-build and least cost-effective robotics project I’d ever imagined.
No really, you could make your cart just follow you around, you could have swarm behaviors to make piles of them organize themselves nicely, they'd need to be able to find their chargers sensibly, they'd have to deal with inclement weather and temperatures and traffic… and when inevitably one gets lost or stolen you would end up with the very cyberpunk scene of a lost shopping cart rolling forlornly down a sidewalk, looking for its home until it's eventually adopted by a stray car or UAV and taught to survive in the harsh world of the semi-feral city-bot.
I think these are produced by the same people as the Mayday aircraft crash series, so ty for the time link halfway through the episode. The first half of the episode is always padding, the explanation only occurs in the second.
For a while, years ago, some stores tried out a shopping cart system that had some kind of geofencing running on it, probably just measuring the signal strength of a transmitter in the store. If the cart got too far away from the store, it was supposed to flip up the wheels, making it hard to push and mostly useless to likely shopping cart takers.
That system seemed short-lived, and stores went back to regular (dumb) shopping carts. There are some obvious failure modes for the system as you might imagine. Sometimes it is better to just deal with a bit of shrinkage than trying to fight the problem directly.
I want an app that detects people who don’t return their shopping carts to the corral. Then we can post their pictures to a website naming and shaming lazy jerks.
I’ve seen grocery stores in my area that use a version of that. I believe they use an “invisible fence” like dog collars, a wire buried around the area that the system detects when it crosses it. Then (I assume based on signs, I haven’t tried) little mechanisms in the wheels lock them so they can’t turn.
Conversely, there will be things you miss that will surprise you. We found the largest one of these was the entire AAA suite: LOAS/Ganpati/LsAclRep/RpcSecurityPolicy et cetera. It is unsurprising they are missing in the outside world, since the combination of homogeneity of environment and NIH-spirit doesn’t really apply anywhere else. But we strongly miss the ability to look at what access the team-mate beside you has by looking at a small set of tools, duplicating that, and getting on with your day. Or even providing patches to a tool to compare what you have versus what you should have. There’s just no equivalent that we’re aware of, and the cloud provider IAM systems are all gigantic tire fires no-one is coming to put out. Prepare for a future where finding out what you have access to, why, and why not, is an exercise in effort and determination.
I found this super interesting on a number of counts.
I’m a recently ex-Amazonian, and my perspective on NIH culture is just the opposite of what they’re citing here. To my mind, many of the internal only tools WERE best of breed ~10 years ago when they were built but the unending emphasis on MVP means that very often they’re built and then left to rot, leading to a really unpleasant experience for those of us who are still manacled to them, largely unchanged, 10 years later.
From what I know of GOOG culture things are very different over there, and there’s much more emphasis on taking the time to do it right and pay down technical debt so it’s interesting to see that made manifest this way.
Also? They’re right. I know they’re trying not to say the quiet bit out loud but I’ll say it: IAM is a raging tire fire. It’s incredibly powerful, but human brains just aren’t designed to handle that many layers of overlapping complexity. It’s FAR too difficult to reason about.
Sure, there’s a TON of tooling out there to combat this, but it’s a problem IMO that we need it.
I’ve been at both, and in my experience, AMZN has far better internal tools than GOOG. One thing about Google is that at every technical decision point, I think Google has made the wrong technical decision. But then they have executed on that flawed decision flawlessly. There’s so much stuff that is invented at Google specifically because their previous decisions forced them to invent something new. And that’s seen as great, internally, but it’s really, really not.
My impression at Google was that there definitely was a tendency to produce highly complex, vertically integrated systems:
a good, thoughtful, polished design doc is produced for a complex, all-encompassing solution (the perf process incentivizes design docs to become a kind of performative art).
then the outlined solution gets executed, solidly and professionally, with due repayment of tech debt and iteration.
it becomes both too good and too complex to replace, so it ossifies, until a Great Deprecation comes.
With perf happening every half a year and some distance to actual customer needs, this gets waterfall-ish, producing solutions that in many cases could be better with more ad-hoc and more chaotic process. Actual engineering feedback and true iteration might get lost in the process.
A few years ago, I realized you can easily solve NYT puzzles just by taking a photo of the blank grid, identifying the grid shape, and looking it up in a database.
For whatever reason, my employer (the world’s largest Brazilian themed bookstore) didn’t think that was a fun party trick to include in their garbage telephone. Oh well.
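For the curious, the lookup half of that trick is tiny; the photo-to-grid step and the archive behind KNOWN_GRIDS are omitted and entirely hypothetical here:

```python
# The black-square pattern of a published grid is distinctive enough to serve as a key.
KNOWN_GRIDS = {}  # pattern tuple -> puzzle/answer reference, built from some archive (hypothetical)

def grid_key(cells):
    """cells[r][c] is True where the square is black."""
    return tuple(tuple(row) for row in cells)

def identify_puzzle(cells):
    return KNOWN_GRIDS.get(grid_key(cells), "unknown grid")
```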
heh, I thought the weird turn was from the present tense (how things work now) to the past (how they came to be) but it seems to have been subtle enough :-)
This seems kind of dumb. $88,000 is the annual salary of a junior software engineer in the US. If it will take more than 1/4 of the time of a senior engineer to make monitoring work as well as it does now without datadog, that’s probably a net loss. Certainly you’ll pay the opportunity cost of spending engineering resources on redoing all your monitoring.
I’m surprised by your stats of $88k for a junior developer. Do you have a source for that? I can believe that might be the case in California or New York, but it feels off for a national average. Our junior devs make less than half that. Heck, I’m only making a little over half that. My boss’s boss’s boss isn’t making $88k and I’m not entirely sure her boss is making that much.
Don't get me wrong, I know we're underpaid, but, out of the last three places I interviewed at, no one was offering more than $60k for senior devs. And one of those was in New York.
I made $65k my first year out of school working a government-ish job at a university research center 20 years ago. $45k for a dev in the US in 2023 is WILDLY underpaid.
Even if you do that, can’t any middle-box or intermediating agent see mytopic as part of a request URL?
Ultimately I think this is fine! It just means that ntfy.sh is basically a demo server, that topic names underneath that domain provide no form of access control or delivery guarantees, and that any actual user will need to self-host with appropriate security measures. Which is, I guess, important to make clear in the docs. Specifically, there is no way for users to “choose a unique topic name” in a way that “keeps messages private”.
I think any webhook works the same way 😅, as do many cloud file providers that have things like "Anyone with this link" on things like Google Docs or Dropbox… invitations to chats on systems like WhatsApp for anyone with the link (or QR code)…
It really all depends on what you do with the URL, and the administrative practices of the people running the site that utilizes this method of security.
As long as you don't misuse it, and it's using HTTPS, and the people running the site do it knowing this is how the security works, it is absolutely secure… and, as long as everyone is aware, as secure as using an unencrypted API key…
The consequence of an unauthorized POST to a Discord webhook is an unauthorized message in an e.g. Discord channel. No downstream consumer would assume that an arbitrary message, whether submitted via webhook or otherwise, is actionable without additional authn/authz. So I don’t think this, or any other kind of webhook, is directly comparable. I could be wrong! If ntfy.sh topic notifications are understood by consumers to be un-trusted, then no problem, and mea culpa! But that’s not what I took away from the docs.
You seem to be dead set to find a fatal flaw in ntfy, with quite the dedication. :-) I disagree with your assessment that the security of an API key and a secret URL are fundamentally different. And with that fundamental disagreement, our argument comes to an end.
On the wire, a HTTP request with an API key looks like this:
POST /api HTTP/1.1
Authorization: Bearer this-is-a-secret
some message
A request against the ntfy API looks like this (excluding the JSON endpoint, which is more like the above):
POST /this-is-a-secret HTTP/1.1
some message
The only difference is that the secret is in a different spot in the HTTP request.
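Spelled out with the requests library (api.example.com is a stand-in; the ntfy call follows its documented publish-by-POST form), the two requests look like this:

```python
import requests

SECRET = "this-is-a-secret"

# Conventional bearer-token auth: the secret rides in a header.
requests.post("https://api.example.com/api",
              headers={"Authorization": f"Bearer {SECRET}"},
              data="some message")

# ntfy-style publish: the secret is the path component instead.
requests.post(f"https://ntfy.sh/{SECRET}", data="some message")
```

Both requests travel inside the same TLS session, so on the wire the only difference is where the secret sits.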
You made an argument that you cannot rely on TLS: That is completely flawed, because if you cannot trust TLS, then your header-based auth also falls apart.
You also made an argument saying that you cannot rely on people making HTTPS requests. That also applies to the traditional Bearer/Basic/whatever auth.
IMHO, the only valid argument to be made is the one that the HTTP path is cached and prominently displayed by browsers. That’s correct. That makes it less secure.
ntfy is a tool usually used for casual notifications such as “backup done” or “user xyz logged in”. It is geared towards simplicity: simple simple simple. It doesn’t do end-to-end encryption, and the examples are (partially at least) suggesting the use of HTTP over HTTPS (for curl). So yes, it’s not a fort-knox type tool. It’s a handy tool that makes notifying super simple, and if used right, is just as secure as you’d like. But yes, it can also be used in a way that is less secure and that’s okay (for me, and for many users).
I really didn’t want to get into such a (what it feels like) heated discussion. I just wanted to show off a cool thing I did …
Technically, I agree with you that secret links and API keys are the same. I also agree that secret links are a simple, adequate solution for a simple service like ntfy.
When reasoning about the security of secret links, I’d encourage you to also think about the practicalities of how people tend to use links: It’s extremely easy to share them and people see them more as public information.
This can be seen in the behavior of some tools that automatically upload and store them elsewhere without encryption, e.g. browser history sync.
IIRC this also led to leaked password reset links when Outlook automatically scanned users' emails for links and added them to the Bing index.
The consequence of an unauthorized POST to a Discord webhook is an unauthorized message in an e.g. Discord channel.
Which can be catastrophic. I’ve heard many stories of crypto scams that were fueled by a hacked “official” project Discord account sending out a scam phishing link or promoting a pump-and-dump scheme.
Sure. But who cares? Then I abandon my channel and switch to a new one. They can’t find it, because I’m using https and they can’t MITM anything useful.
If your use case allows you to abandon one topic and switch to a new topic on an ad-hoc basis, that’s great, but it’s not something that most applications are able to do, or really even reliably detect. This is all fine! It just means that any domain that provides unauthenticated write access to topics is necessarily offering a relatively weak form of access control, and can’t be assumed to be trusted by consumer applications. No problem, as long as it’s documented.
I should really write this up as a FAQ, because it comes up so much. :-) First off, thanks for the vivid discussion on ntfy. I love reading feedback on it. Much appreciated.
The original premise of ntfy is that topics are your secret, so if you pick a dumb secret, you cannot expect it to be or remain private. So ntfy.sh/mytopic is obviously just a demo. Most people in real life pick a unique-ish ID, something like a UUID or another password-like string. Assuming your transport is encrypted, this is no more or less secure than using an Authorization header with a bearer token (other than the notable difference that it's in the server logs and such).
If you want your topics to be password-protected in the traditional sense (username/password or a token), you can do so by using the fine grained access control features (assuming a selfhosted instance), or by reserving a topic (screenshots). This can also be on the official instance, assuming you pay for a plan.
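If you stay on the "topic name is the credential" model, the practical upshot is to generate the topic rather than choose it; a small sketch using ntfy's documented publish endpoint:

```python
import secrets
import requests

# Generate a high-entropy topic once and store it with your other secrets;
# anyone who learns the name can publish to and subscribe from it.
topic = secrets.token_urlsafe(32)

requests.post(f"https://ntfy.sh/{topic}", data="backup done")
# Subscribers poll or stream the same topic, e.g. GET https://ntfy.sh/<topic>/json
```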
Most people in real life pick a unique-ish ID, something like a UUID or another password-like string. Assuming your transport is encrypted, this is no more or less secure than using an Authorization header with a bearer token (other than the notable difference that it's in the server logs and such).
It simply is not true that a URL containing a “unique-ish ID” like a UUID is “no more or less secure” than using an authorization header, or any other form of client auth. URLs are not secrets! Even if you ensure they’re only requested over HTTPS – which you can’t actually do, as you can’t prevent clients from making plain HTTP requests – it’s practically impossible to ensure that HTTPS termination occurs within your domain of control – see e.g. Cloudflare – and in any case absolutely impossible to ensure that middleboxes won’t transform those requests – see e.g. Iran. There are use cases that leverage unique URLs, sure, like, login resets or whatever, but they’re always time-bounded.
If you want your topics to be password-protected in the traditional sense (username/password or a token), you can do so by using the fine grained access control features (assuming a selfhosted instance), or by reserving a topic (screenshots). This can also be on the official instance, assuming you pay for a plan.
If you pay to “reserve” a topic foo, does that mean that clients can only send notifications to ntfy.sh/foo with specific auth credentials? If so, all good! 👍
, as you can’t prevent clients from making plain HTTP requests
Well, that's the client's fault? The client leaking their secrets is just as possible with an authorization header.
it’s practically impossible to ensure that HTTPS termination occurs within your domain of control
It’s trivial to do this. I don’t understand and I don’t see how an authorization header is different.
but they’re always time-bounded.
No they aren’t. Unique URLs are used all the time. Like every time you click “Share” on a document in Paper/Drive and it gives you some really long url.
We’re discussing “Capability URLs” as defined by this W3C doc which says that
The use of capability URLs should not be the default choice in the design of a web application because they are only secure in tightly controlled circumstances. However, in section 3. Reasons to Use Capability URLs we outlined three situations in which capability URLs are useful:
To avoid the need for users to log in to perform an action.
To make it easy for those with whom you share URLs to share them with others.
To avoid authentication overheads in APIs.
and further dictates (among other constraints) that
Sure, it’s not the most massively secure thing in the world, but anyone using this service can be confident their client isn’t making plain HTTP requests else they’d pick something normal. I don’t know why my HTTPS termination would be at CloudFlare unless I’d set it up (or ntfy started using it), and even if it were of all people I trust CloudFlare to not-spam me the most. It’s not that big a deal.
To clarify, an application developer using this service, being the type of developer to use a service like this, would be able to feel confident an application request to this web service is via HTTPS.
I’ve got plans, and hardware, to put together a system that would notify me of my non-smart washer and dryer finishing based on accelerometers that I’d stick to the back of it, and then firing off a message to ntfy.sh. I haven’t built it yet, because there’s never any time, but someday.
This is how I do it; I have an Aeotec Z-Wave power switch between the dryer and the outlet. It’s hooked up to Home Assistant, which sends me a text message when the power usage drops back down after being up for at least a few minutes. It works pretty well. I was going to do the same for my dishwasher but my local building code requires that they be hardwired, so I’m going to have to put a clamp on the circuit or something instead.
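The detection logic itself is tiny; roughly something like the sketch below, where the thresholds are guesses that would need tuning per appliance, and the Home Assistant and notification plumbing is omitted:

```python
import time
from typing import Optional

class CycleDetector:
    """Fire once when power was 'high' for at least a few minutes and has now dropped back down."""

    def __init__(self, on_watts: float = 10.0, min_run_seconds: float = 180.0) -> None:
        self.on_watts = on_watts
        self.min_run_seconds = min_run_seconds
        self._started: Optional[float] = None

    def update(self, watts: float) -> bool:
        now = time.monotonic()
        if watts >= self.on_watts:
            if self._started is None:
                self._started = now      # a cycle has started
            return False
        finished = self._started is not None and now - self._started >= self.min_run_seconds
        self._started = None             # below threshold: reset either way
        return finished                  # True -> send the "dryer is done" notification
```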
Codes change and are weird. In our bathroom (late 80s vintage) the washer is connected to a hardwired panel (protected by a rubber seal). In our vacation home bathroom, recently rebuilt, the washer connection is a socket[1] - albeit placed high on the wall. In both cases there are concerns about moisture but somehow they did a 180 regarding what’s considered safe.
Dishwashers are socketed here too, the outlet has to be a bit higher than normal though.
[1] possibly the socket is specifically moisture-rated.
Remember that the US uses 110V mains, which roughly doubles the current that a device needs to draw for the same power relative to most of the rest of the world.
While this is a fun read I find the idea that we are sending washing machine timings to the cloud so that we can read them in some app still completely bonkers. So much infrastructure for so little value. It just feels so wasteful.
I am not criticizing the author and their hack; I am criticizing the trend to run massive cloud operations to store the time when the washing machine is done. That is a complete waste of resources in these times.
I think it’s more likely they already had a massive cloud operation to manage supply chains, invoicing, employee data security, and product research, and figured “why not use a bit of it to add some IoT features”.
I'd bet even the 3rd party API is a throw-in, and the actual reason they have IoT is for telemetry. It'd make it easier for the company to learn, for example, how long each part lasts under light vs. heavy loads.
Miele being an old (123 years!) German company, I doubt that. They are an old-style manufacturing company, not a young cloud shop. I highly doubt they run their business from the cloud, and if they do, that is a recent thing. These kinds of companies move slowly, and German companies are extra conservative.
Googling "Miele SAP" shows a promo page on SAP's site which states
how Miele improved their sales processes with the support of the SAP Cloud Success Services Team.
Now, sales != manufacturing, but it would not surprise me if Miele and SAP are tight, and that this kind of stuff (consumer connection) could be part of that.
Ah! Yeah I will admit I’m struggling to think of a time when I’d have found cloud access to my washing machine useful. Like … if I’m at home I’ll hear it; if not, … what action can I take?
Also, what happens when the manufacturer decides this washing machine is EOL and they pull the plug on the servers? And what about the security of having a washing machine connected to the internet?
It would be so much better if these “smart devices” would be LAN-only. This shouldn’t even be all that difficult to set up - printers have been LAN-only for a long time and despite the shitty reputation of printers, people have been making this work. Or perhaps bluetooth - people know how to make Bluetooth stuff work from their phones. You could even have an app. And if spying is really important for them (of course it is), they could still do it by making the app phone home.
I think the biggest problem is the bundling of the device and the cloud service. Now, the washing machine is dependent on some remote cloud service for some of its features. I’d mind a lot less if the washing machine spoke MQTT (over TLS) and defaulted to the cloud provider’s endpoint, but also had an option to connect wherever I pointed it.
It is so refreshing to see an accessibility article where the author acknowledges that the correct solution is to fix bugs in 7 pieces of software, rather than expecting everyone who has ever typed anything into a computer to change how they act.
Except Unicode latin numerals aren’t exactly in common usage, so it’s not clear how this would affect “everyone who has ever typed anything into a computer”
This is just an example of a bigger pattern. Similar problems affect math symbols that some people use for fake bold/italic, or even use of multiple emoji.
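One plausible fix on the software side is plain compatibility normalization, which folds Roman-numeral characters and "mathematical bold" fake styling back to ordinary letters (it doesn't help with runs of emoji); a sketch:

```python
import unicodedata

def normalize_for_speech(text: str) -> str:
    # NFKC maps compatibility characters (Roman numeral signs, "mathematical bold"
    # letters used as fake styling, etc.) to their plain equivalents.
    return unicodedata.normalize("NFKC", text)

print(normalize_for_speech("Ⅻ"))     # -> "XII"
print(normalize_for_speech("𝐛𝐨𝐥𝐝"))  # -> "bold"
```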
What’s the upside of an Android user being recognised as an iMessage user?
As a European, I see little use for iMessage because everybody is using Signal or WhatsApp already. In US the blue bubble has been important culturally.
There’s that social “blue bubble” thing that @pimeys mentioned.
But the real upside is that if your iMessage-using contacts’ devices see that you’re an iMessage user, their messages to you will be encrypted end-to-end. They will also have an easier time creating group messages that include you, because iMessage has some interface deficiencies creating SMS group threads.
Thanks. I was hoping that the social capital aspect wasn’t the end goal.
This glosses over the biggest advantage of iMessage vs. SMS: iMessage removes the carrier from the equation entirely, and doesn’t require the user to have a phone. iMessage users can communicate with other iMessage users over the IP network, rather than over the carrier network, which means I can text my friends from my MacBook, my iPad, or my iPhone, without needing to have a SIM for any of them. SMS messages (green bubbles) breaks this feature, requiring me to have my SIM-enabled phone within some small distance of my Macbook for message-forwarding to work. This, in a word, sucks. It means I can’t text on a plane, for instance.
iMessage exclusivity is anti-competitive and bad, but as a technology, iMessage is far, far, far superior to SMS and RCS. Blue bubbles isn’t a social stigma because people like the color blue, it is because texting with a green bubble is a significantly worse user experience than texting with blue bubbles.
This is true, but not exclusively so. At least here in the US, green bubbles also carry social stigma around (perceived) wealth. Google claims that iMessage dominates because of bullying. I don’t think it’s fair to claim that the blue/green bubble dynamic is solely due to iMessage being a superior experience (which to be clear, it is).
That first part is certainly one reason that I like end-to-end encryption. I didn’t mean to omit the second part. It just wasn’t front of mind for me because it still requires an Apple device right now. And while I do use a macbook much of the time, I still spend enough time on a Linux laptop and desktop (which don’t get to participate in this device- and carrier-independence) that I wasn’t thinking about that side of it.
For people who do use devices that can participate in that, I’m sure that makes texting with blue bubbles better than texting with green bubbles.
For others, it is more related to the group chat UX, the tapback annoyances, and a few other papercuts. And I’d like my chats to be encrypted because carriers are happy to mine them for creepy purposes, given the opportunity.
I’m sorry, it is the year 2023. It is trivial to identify the language of a paragraph of text, and, if you fail and just use the default voice, any screen reader user will be either a) as confused as I would be, reading a language I clearly don’t understand, or b) able to determine that they are getting German with a bad Spanish accent, assuming they speak both languages. Please, please, please, accessibility “experts”, stop asking literally millions of people to do work on every one of their pieces of content, when the work can be done trivially, automatically.
These are heursics, and not always correct. Especially for shorter phrases it is very possible that it is valid in multiple languages. I think it is of course good they threse heuristics exist but it seems that it is best to also provide more concrete info.
The ideal situation is probably both. Treat the HTML tags as a strong signal, but if there is lots of text and your heuristics are fairly certain that it is wrong consider overriding it, but if it is short text or you aren’t sure go with what it says.
Makes me wonder if there is a way to indicate “I don’t know” for part of the text. For example if I am embedding a user-submitted movie title that may be another language. I could say that most of this site is in English, but I don’t know what language that title is, take your best guess.
From https://www.loc.gov/standards/iso639-2/faq.html#25:
Also in fun ISO language codes: You can add
-fonipa
to a language code to indicate IPA transcription:From my resume:
It’s an AGI-hard problem…
Consider my cousin Ada. The only way a screen reader (or person) can read that sentence correctly without a
<span lang=tr>
is by knowing who she is.What is possible, though far from trivial, is to apply a massive list of heuristics, which is sometimes the best option available, i.e. user-generated content. However, When people who do have the technical knowledge to take care of these things don’t, responsible authors who mark their languages will then have to work around them.
But never, in all of human history, has a letter, or book, or magazine article ever noted your cousin’s name language in obscure markup. That’s not how humans communicate, and we shouldn’t start now.
I write
lang="xy"
attributes. I, for one, certainly would prefer that the relatively small number of HTML authors take the small amount of care to writelang="xy"
attributes, so that user agents can simply read those nine bytes, than that the much larger number of users spend the processing power to run the heuristics to identify the language (and maybe fail to guess correctly). Consider users over authors. Maybe, if one considers only screen readers, the effect shrinks away, but there are other user agents that care in what language is the text on the Web, as common as Google Chrome, which identifies the language so that it can offer to Google-Translate it.This is the fundamental disconnect. You are not making this ask of the “relatively small number of HTML authors”. You are making this ask of literally every single person who tweets, posts to facebook or reddit, or sends an email. This is an ask of, essentially, every person who has ever used a computer. The content creator is the only person who knows the language they are using.
“Fly” as slang was dated even in the 90s - I think Offspring’s Pretty Fly (for a White Guy) was the last gasp of that word in pop culture, and in that context deliberately using a piece of dated and black-coded slang was part of the message of the song.
I have nothing against webrings, personally, but I don’t particularly have nostalgia for them either. I remember them from some of the websites I visited as a kid in the late 90s/early 2000s, but I don’t remember taking them very seriously as a way to find new interesting websites. Hand-curated pages of links to other websites by contrast were a good way to find content another human thinks is good, and of course those never went away.
Ouch.
Also, remembering the terrible internet speeds of the 1990s, traversing a webring meant you had to click and wait (paying by the minute) for another site to load, which might be random, or only vaguely related to the one you were on now. It was really a worthless navigation interface back then. With faster and unmetered connections, they might be slightly less awful. But still not very respectful of the readers’ time and attention.
I like idea of wearings (I need to add one to my website), but for me it looks painful, that it requires so much manual steps, often involving GitHub repository or other. I believe that most of these could be automated to some degree.
The whole point is to remove the automation, though. Webrings should be manually vetted collections of content that one or more humans have decided is cool.
I love these little tools. They present a new way of creating something, and make it easy to try things out, even if you aren’t “trained” as a visual artist or musician.
The real killer feature for these tiny apps is being able to share creations via URLs. You can encode the state as a base64 value in the URL itself.
I can’t actually find the really tiny ones, but these ones I like none-the-less:
Please share more of your favorites in this universe.
PuzzleScript is a domain-specific language for making sliding-block puzzles
I’ve also made a few things like this myself; they don’t encode programs in URLs, but either provide the ability to download a self-contained single-file html export, include a built-in pastebin service, or both:
These are fantastic - especially Octo - I just went down a Chip-8 rabbit hole. It seems feasible to make a Chip-8 emulator for the Arduboy I bought recently.
EDIT: ah, the Chip-8 requires more buttons (basically a num-pad) than the Arduboy has (more like a Gameboy).
There’s actually a chip8-based game jam going on right now- plenty of time left to participate if you’re so inclined.
tixy.land - 16x16 javascript tiny art environment
Oh yes! This one’s clever - it saves the code as plain URL-encoded text in the URL when you press enter - frictionless!
Email should be a human-to-human discussion stream only. We have better technologies for notifications, file exchanges, collaboration, etc. Humans should write human-readable text, and then read those messages when other people send them.
Arguably, code and patches are human-readable text.
(Still don’t like git-send-email, personally.)
The point of mailing the patch, though, is that a computer will process the text in the patch, from the email, and apply it. It’s layering a protocol on top of a protocol.
The point of emailing the patch is for it to be read and reviewed by other developers.
An when email was the only available tool, that made sense. Things like ReviewBoard were available 20 years ago, Phabricator, GitHub, and so on have been around for a big chunk of that time, and have big advantages:
Arguably layering protocols on top of each other is what got us to where we are in the first place, I don’t see why would it be a problem to layer them some more. Especially since IMO email is way more suited for discussions over internet than whatever half of an mailing archive and email agent “modern” forges cook up.
You wouldn’t normally apply a patch without reading it first and you might reject it or ask for changes instead of applying it. Patch emails are just as much for humans as they are for computers.
I would certainly apply a patch without reading it first. I need to apply the anyways as part of review, e,g. to make sure that it compiles and passes tests. The email rendering of the patch may not include all the desired context, syntax highlighting, whitespace ignoring, IDE functionality, etc., and overall it’s generally painful.
Since the review functionality that I can do by reading a patch is a subset of that which I can do after applying it, applying the patch is my default mode of review. The workflow is similar to checking out a remote branch: the transport protocol is ultimately irrelevant as long as I can get the commit into my repo for review somehow.
Plus, you can’t represent certain changes in Git patches anyways (merge commits), so you have to use a different protocol if you want to transmit them.
What better technologies did you have in mind?
For what?
For collaboration on code: Git branches and a dedicated service like GitLab, sourcehut, GitHub, Gitea, or any of the other similar tools.
For file sharing: links to an FTP, SFTP, HTTP, or other central storage server that hosts the file.
For machine-generated notifications: RSS, Web Push, or any of the various PubSub solutions like ntfy or what have you.
It’s funny that you list sourcehut in this list since git-send-email.io was created by Drew DeVault to evangelize for email based workflow over using a web based UI to prepare a pull request.
Yes, I’m aware. Drew is a very opinionated person, and I tend to agree with many of his opinions. I like sr.ht. I’m a paying member.
But his opinion on this is simply wrong, as evidenced by “You will probably also want to link to the archives so that potential contributors can read other people’s work to get a feel for your submission process.” in the sr.ht email list documentation. Moving ‘research how people did things in the past’ out of an email client, but ‘contribute in the future’ INTO an email client means users must have two separate interfaces to handling patches depending on when they began participating in a project, and That’s Bad.
That’s a fair point. I doubt anyone would actually use it, but it’d be interesting from a theoretical standpoint if there was a gateway that let you receive historical emails from a list in your mail client.
Do any of the other services offer a way for you to contribute to a repo that’s not on your platform? The only one I’m aware of is GitLab’s in progress work on using ActivityPub for notifying remote repos about pull requests. I think some of the value mismatch is that you seem to prefer an integrated UI over having a workflow that favors decentralization like Drew’s email setup.
FWIW, you can download a mail archive from most mailing list software and import it into your mail client.
Why not just upload a branch to the remote with your changes? There’s no reason you can’t configure a repo with public branch creation, and a separate repo that is the actual project, and pull from one repo to another after the branch/patch has been approved.
I’m not sure about “over” since sourcehut implements a web based UI to send patches. This how to site is useful for people who prefer command line, but not required or even the most obvious way to do things with sourcehut
For the Fennel project we accept patches on the mailing list (or PRs on the read-only github mirror for people who prefer that) but most review for small changes happens on IRC/Matrix. Someone will drop a link to either a commit on a git hosting site or a patch in a pastebin and we’ll talk it thru there. Obviously that’s bad for cases where the discussion is important to refer back to later, but for most minor changes it’s the nicest flow I’ve ever used.
it is nice but it’s not really a replacement for email, more of a supplement
Yeah. Honestly I think email is handy for discussing patches, and it’s convenient if the email contains the changes so you can talk about them, but I think the actual changes being propagated over git+ssh or git+https is fine. You can paste the patch into your message using your MUA for purposes of discussion but also include a remote+branch as well for purposes of actually applying the changes.
I’ve been using sourcehut “the right way” for a few years now and while I like it better than github for the discussions, the process of sending the patch sucks! Trying to get
git send-email
to make your v2 patch show up as a reply in an existing thread (which is trivial to do in your MUA) is a profoundly stupid experience.Unfortunately sorucehut also doesnt detect the new patch when it’s done as a reply this way, etc, so I’ve resorted to having v2s etc be ner threads. This seems to be how the tool prefers to be used.
I just let pinboard suggest tags for all of my links. It makes them pretty searchable. It’s definitely not perfect, but it’s a lot less work than doing the work on my own.
$0.5 for 10GB of blob storage? $30 for a single vCPU?
Norvig’s numbers were at least somewhat grounded in reality; these prices and the hand-wavey calculations driving them are incredibly goofy. Questionable if anybody should bother “knowing” them.
Also the idea that “every programmer” should know about this kind of stuff is very silly. Apparently it’s some kind of forgotten knowledge that other kinds of programming exist beyond “send your program to a big company’s computers to run them for you”.
Also, I work for a big company. I could not possibly care less about fifty cents.
The important thing to know is probably the ratio between cloud costs and developer salary. If using some cloud thing costs $10, how much of your time does it need to save to be worthwhile? If every developer is using $100/month of cloud things, it probably won’t even show up in accounting noise. If they’re using $10,000/month, that’s a different question. If they need to spend a day justifying spending $100, your process is costing more than it’s saving.
What exactly isn’t grounded in reality?
I pulled up the ec2 pricing and filtered to 1 vCPU. The monthly cost ranges from $4.23 to $60.96.
The cheapest 32-vCPU machine is $26.38/mo/vCPU.
10gb of S3 is $0.23/mo which is much cheaper than $0.50, but Azure has a premium blob storage tier at $1.50/mo. But there are also cheaper tiers at both providers so $0.50 is not wildly off but probably not a good average either.
Prices change and if you’re on any non-AWS offering you can get a whole cloud VM with 1 CPU for 5-10 and not 30.
“hey this is an interesting napkin math overview of the current market” - yes, good
“things everyone should know” - no
AWS prices is what isn’t grounded in the reality of its market!
I’m a happy hetzner customer and these prices don’t sound very relatable. To this date I still don’t undersant why people pay well over 2x the price for a worse service, especially in terms of its awful UX.
I was considering them but their reviews are awful. 39% one-star is one of the worst scores I’ve seen of any company, though not as bad as OVH’s 67%. A lot of the complaints are about accounts being suspended with no information about why and no recourse, which makes me incredibly nervous about using them.
I’m a happy OVH customer for the past two years for what it’s worth. Just $80/mo for a dedicated machine:
Also builtin anti-DDoS. And cheap bandwidth.
Their dedicated offerings from look very interesting but those reviews scare me. You can get a moderately decent quad-core machine with some big disks and a decent amount of RAM for the same price as an AWS or Azure two-VCPU instance.
Why am I passing x and y to that rectangle at all? They’re never used.
The “// Getters omitted” line implies there will be a getX() and getY() that read x and y.
One benefit of the “smart” features that this article doesn’t mention is accessibility for blind people. I have a Cosori convection oven, and by connecting it, I can adjust its controls through the Vesync app on my phone, which is accessible through a screen reader. It’s unfortunate that the oven and my phone have to go all the way out to some remote server to talk to each other, but I’ll take the benefit. At least the oven requires that I physically press the “Start” button, so being connected isn’t dangerous.
This feels like it could be better accomplished with physical dials on the stove, though.
Basically no-one makes household devices that accommodate non-sighted people.
But it has capacitive touch spot “buttons” that a blind person can’t learn to use, right?
I recently wanted a shopping cart that would take itself back to the rack after you’d unloaded it. It struck me as the most fun-to-build and least cost-effective robotics project I’d ever imagined.
No really, you could make your cart just follow you around, you could have swarm behaviors to make piles of them organize themselves nicely, they’d need to be able to find their chargers sensibly, they’d have to deal with inclement weather and temperatures and traffic… and when inevitably one gets lost or stolen you would end up with the very cyberpunk scene of a lost shopping cart wandering forlornly down a sidewalk, looking for its home until it’s eventually adopted by a stray car or UAV and taught to survive in the harsh world of the semi-feral city-bot.
Slightly related: https://wiki.hackerspace.pl/projects:w00zek
This is a very hard problem to solve. Even Busytown’s best engineers have failed to solve it.
I think these are produced by the same people as the Mayday aircraft crash series, so thanks for the timestamped link halfway into the episode. The first half of the episode is always padding; the explanation only comes in the second half.
For a while, years ago, some stores tried out a shopping cart system that had some kind of geofencing running on it, probably just measuring the signal strength of a transmitter in the store. If the cart got too far away from the store, it was supposed to flip up the wheels, making it hard to push and mostly useless to likely shopping cart takers.
That system seemed short-lived, and stores went back to regular (dumb) shopping carts. There are some obvious failure modes for the system as you might imagine. Sometimes it is better to just deal with a bit of shrinkage than trying to fight the problem directly.
I want an app that detects people who don’t return their shopping carts to the corral. Then we can post their pictures to a website naming and shaming lazy jerks.
I’ve seen grocery stores in my area that use a version of that. I believe they use an “invisible fence” like dog collars, a wire buried around the area that the system detects when it crosses it. Then (I assume based on signs, I haven’t tried) little mechanisms in the wheels lock them so they can’t turn.
yes
Yep, still present in my area. I feel like I haven’t seen a grocery store without them in a while.
This was the most interesting chunk for me:
I found this super interesting on a number of counts.
I’m a recently ex-Amazonian, and my perspective on NIH culture is just the opposite of what they’re citing here. To my mind, many of the internal only tools WERE best of breed ~10 years ago when they were built but the unending emphasis on MVP means that very often they’re built and then left to rot, leading to a really unpleasant experience for those of us who are still manacled to them, largely unchanged, 10 years later.
From what I know of GOOG culture things are very different over there, and there’s much more emphasis on taking the time to do it right and pay down technical debt so it’s interesting to see that made manifest this way.
Also? They’re right. I know they’re trying not to say the quiet bit out loud but I’ll say it: IAM is a raging tire fire. It’s incredibly powerful, but human brains just aren’t designed to handle that many layers of overlapping complexity. It’s FAR too difficult to reason about.
Sure, there’s a TON of tooling out there to combat this, but it’s a problem IMO that we need it.
Interesting article, great food for thought!
I’ve been at both, and in my experience, AMZN has far better internal tools than GOOG. One thing about Google is that at every technical decision point, I think Google has made the wrong technical decision. But then they have executed on that flawed decision flawlessly. There’s so much stuff that is invented at Google specifically because their previous decisions forced them to invent something new. And that’s seen as great, internally, but it’s really, really not.
It’s all about what your forcing factors are, right?
There is some wisdom in allowing profit motive to drive your engineering decision making if you’re a for-profit company.
Personally I’m delighted to be out of that particular game for the moment. It’s not my thing :)
My impression at Google was that there definitely was a tendency to produce highly complex, highly vertically integrated systems.
With perf happening every half a year and some distance from actual customer needs, this gets waterfall-ish, producing solutions that in many cases could be better with a more ad-hoc, more chaotic process. Actual engineering feedback and true iteration might get lost along the way.
A few years ago, I realized you can easily solve NYT puzzles just by taking a photo of the blank grid, identifying the grid shape, and looking it up in a database.
For whatever reason, my employer (the world’s largest Brazilian themed bookstore) didn’t think that was a fun party trick to include in their garbage telephone. Oh well.
What database is that? I’d love to hear more about this.
https://www.xwordinfo.com/Grids
This took a weird turn from GPS to UTC, but otherwise I liked it.
heh, I thought the weird turn was from the present tense (how things work now) to the past (how they came to be) but it seems to have been subtle enough :-)
This seems kind of dumb. $88,000 is the annual salary of a junior software engineer in the US. If it will take more than 1/4 of the time of a senior engineer to make monitoring work as well as it does now without datadog, that’s probably a net loss. Certainly you’ll pay the opportunity cost of spending engineering resources on redoing all your monitoring.
I’m surprised by your stats of $88k for a junior developer. Do you have a source for that? I can believe that might be the case in California or New York, but it feels off for a national average. Our junior devs make less than half that. Heck, I’m only making a little over half that. My boss’s boss’s boss isn’t making $88k and I’m not entirely sure her boss is making that much.
Don’t get me wrong, I know we’re underpaid, but, out of the last three places I interviewed at, no one was offering more than $60k for senior devs. And one of those was in New York.
I made $65k my first year out of school working a government-ish job at a university research center 20 years ago. $45k for a dev in the US in 2023 is WILDLY underpaid.
Yes, junior devs here in Minnesota (low cost of living area) are routinely getting offers for $100k or so. There’s a ton of data on levels.fyi.
You should be more ambitious about compensation.
A junior dev at a big bank in a random city got that within a year of learning to code.
How do I ensure that only authorized publishers are able to submit notifications to ntfy.sh/mytopic?
Keep mytopic a secret and only give it to authorized publishers.
Even if you do that, can’t any middle-box or intermediating agent see mytopic as part of a request URL?
Ultimately I think this is fine! It just means that ntfy.sh is basically a demo server, that topic names underneath that domain provide no form of access control or delivery guarantees, and that any actual user will need to self-host with appropriate security measures. Which is, I guess, important to make clear in the docs. Specifically, there is no way for users to “choose a unique topic name” in a way that “keeps messages private”.
An intermediating agent shouldn’t be able to see anything w/ https.
you could think of that URL as being analogous to an API key
It would be incorrect to think of any URL as equivalent (in the security sense) to a secret like an API key.
Discord webhooks work the same way ¯\_(ツ)_/¯
i think any webhook works the same way 😅, as do many cloud file providers that have things like “Anyone with this link” on things like google docs or dropbox… invitations to chats on systems like whatsapp for anyone with the link (or QR code)…
it really all depends on what you do with the URL, and the administrative practices of the people running the site that utilizes this method of security
as long as you don’t misuse it, and it’s using https, and the people running the site do it knowing this is how the security works, it is absolutely secure… and, as long as everyone is aware, as secure as using an unencrypted API key…
The consequence of an unauthorized POST to a Discord webhook is an unauthorized message in, e.g., a Discord channel. No downstream consumer would assume that an arbitrary message, whether submitted via webhook or otherwise, is actionable without additional authn/authz. So I don’t think this, or any other kind of webhook, is directly comparable. I could be wrong! If ntfy.sh topic notifications are understood by consumers to be un-trusted, then no problem, and mea culpa! But that’s not what I took away from the docs.
You seem to be dead set on finding a fatal flaw in ntfy, with quite the dedication. :-) I disagree with your assessment that the security of an API key and a secret URL are fundamentally different. And with that fundamental disagreement, our argument comes to an end.
On the wire, an HTTP request with an API key looks like this:
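(sketched here with a made-up endpoint and token)

    POST /v1/messages HTTP/1.1
    Host: api.example.com
    Authorization: Bearer tk_AbCdEf123456
    Content-Type: text/plain

    Backup done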
A request against the ntfy API looks like this (excluding the JSON endpoint, which is more like the above):
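(again sketched, with the topic itself acting as the secret)

    POST /mytopic-AbCdEf123456 HTTP/1.1
    Host: ntfy.sh
    Content-Type: text/plain

    Backup done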
The only difference is that the secret is in a different spot in the HTTP request.
You made an argument that you cannot rely on TLS: That is completely flawed, because if you cannot trust TLS, then your header-based auth also falls apart.
You also made an argument saying that you cannot rely on people making HTTPS requests. That also applies to the traditional Bearer/Basic/whatever auth.
IMHO, the only valid argument to be made is the one that the HTTP path is cached and prominently displayed by browsers. That’s correct. That makes it less secure.
ntfy is a tool usually used for casual notifications such as “backup done” or “user xyz logged in”. It is geared towards simplicity: simple simple simple. It doesn’t do end-to-end encryption, and the examples are (partially at least) suggesting the use of HTTP over HTTPS (for curl). So yes, it’s not a fort-knox type tool. It’s a handy tool that makes notifying super simple, and if used right, is just as secure as you’d like. But yes, it can also be used in a way that is less secure and that’s okay (for me, and for many users).
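For example, publishing and subscribing are each a one-liner with curl (the topic name is invented here; in practice you’d pick something long and random):

    topic="backup-alerts-$(uuidgen)"                 # the topic itself is the secret
    curl -d "backup done" "https://ntfy.sh/$topic"   # publish a notification
    curl -s "https://ntfy.sh/$topic/json"            # subscribe; messages stream as JSON lines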
I really didn’t want to get into such a (what it feels like) heated discussion. I just wanted to show off a cool thing I did …
Technically, I agree with you that secret links and API keys are the same. I also agree that secret links are a simple, adequate solution for a simple service like ntfy.
When reasoning about the security of secret links, I’d encourage you to also think about the practicalities of how people tend to use links: It’s extremely easy to share them and people see them more as public information. This can be seen in the behavior of some tools that automatically upload and store them elsewhere without encryption, e.g. browser history sync. IIRC this also led to leaked password reset links when Outlook automatically scanned users’ emails for links and added them to the Bing index.
Sorry! My intent is definitely not to find some fatal flaw. I’m providing feedback as requested:
Haha. I suppose I got what I asked for :-)
Which can be catastrophic. I’ve heard many stories of crypto scams that were fueled by a hacked “official” project Discord account sending out a scam phishing link or promoting a pump-and-dump scheme.
You can also DELETE a discord webhook
why?
In what way could/would a URL (containing a long random string) not be a secret in that sense?
Or a user is sending notifications like “The dishwasher is done” or “The Mets lost” and there’s no need for security.
Sending notifications to what? If you can post to a topic, then I can post to that same topic, right?
Sure. But who cares? Then I abandon my channel and switch to a new one. They can’t find it, because I’m using https and they can’t MITM anything useful.
If your use case allows you to abandon one topic and switch to a new topic on an ad-hoc basis, that’s great, but it’s not something that most applications are able to do, or really even reliably detect. This is all fine! It just means that any domain that provides unauthenticated write access to topics is necessarily offering a relatively weak form of access control, and can’t be assumed to be trusted by consumer applications. No problem, as long as it’s documented.
I should really write this up as a FAQ, because it comes up so much. :-) First off, thanks for the vivid discussion on ntfy. I love reading feedback on it. Much appreciated.
The original premise of ntfy is that topics are your secret, so if you pick a dumb secret, you cannot expect it to be or remain private. So ntfy.sh/mytopic is obviously just a demo. Most people in real life pick a unique-ish ID, something like a UUID or another password-like string. Assuming your transport is encrypted, this is no less or more secure than using an Authorization header with a bearer token (other than the notable difference that it’s in the server logs and such).
If you want your topics to be password-protected in the traditional sense (username/password or a token), you can do so by using the fine-grained access control features (assuming a self-hosted instance), or by reserving a topic (screenshots). This can also be done on the official instance, assuming you pay for a plan.
It simply is not true that a URL containing a “unique-ish ID” like a UUID is “no more or less secure” than using an authorization header, or any other form of client auth. URLs are not secrets! Even if you ensure they’re only requested over HTTPS – which you can’t actually do, as you can’t prevent clients from making plain HTTP requests – it’s practically impossible to ensure that HTTPS termination occurs within your domain of control – see e.g. Cloudflare – and in any case absolutely impossible to ensure that middleboxes won’t transform those requests – see e.g. Iran. There are use cases that leverage unique URLs, sure, like, login resets or whatever, but they’re always time-bounded.
If you pay to “reserve” a topic foo, does that mean that clients can only send notifications to ntfy.sh/foo with specific auth credentials? If so, all good! 👍
Well, that’s the client’s fault? A client leaking its secrets is just as possible with an authorization header.
It’s trivial to do this. I don’t understand; I don’t see how an authorization header is different.
No they aren’t. Unique URLs are used all the time. Like every time you click “Share” on a document in Paper/Drive and it gives you some really long url.
We’re discussing “Capability URLs” as defined by this W3C doc which says that
and further dictates (among other constraints) that
I don’t really care about that doc tbh
edit: To elaborate slightly, I’m extremely familiar with capabilities
Sure, it’s not the most massively secure thing in the world, but anyone using this service can be confident their client isn’t making plain HTTP requests, or else they’d have picked something normal. I don’t know why my HTTPS termination would be at CloudFlare unless I’d set it up (or ntfy started using it), and even if it were, of all people I trust CloudFlare to not-spam me the most. It’s not that big a deal.
Looks like plain HTTP to me.
To clarify, an application developer using this service, being the type of developer to use a service like this, would be able to feel confident an application request to this web service is via HTTPS.
You can either refuse to listen on port 80, or you can detect they’ve transmitted it in the clear and roll that key.
I’ve got plans, and hardware, to put together a system that would notify me of my non-smart washer and dryer finishing based on accelerometers that I’d stick to the back of it, and then firing off a message to ntfy.sh. I haven’t built it yet, because there’s never any time, but someday.
That sounds cool. Wouldn’t it be easier, though, to use some “smart” socket which measures power usage?
This is how I do it; I have an Aeotec Z-Wave power switch between the dryer and the outlet. It’s hooked up to Home Assistant, which sends me a text message when the power usage drops back down after being up for at least a few minutes. It works pretty well. I was going to do the same for my dishwasher but my local building code requires that they be hardwired, so I’m going to have to put a clamp on the circuit or something instead.
That seems odd to me (Australian). Ours are all just socketed; they don’t draw that much current do they?
Codes change and are weird. In our bathroom (late 80s vintage) the washer is connected to a hardwired panel (protected by a rubber seal). In our vacation home bathroom, recently rebuilt, the washer connection is a socket[1] - albeit placed high on the wall. In both cases there are concerns about moisture but somehow they did a 180 regarding what’s considered safe.
Dishwashers are socketed here too, the outlet has to be a bit higher than normal though.
[1] possibly the socket is specifically moisture-rated.
I think the idea is to discourage sockets underneath the dishwasher that could potentially be flooded
Remember that the US uses 110V mains, which roughly doubles the current that a device needs to draw for the same power relative to most of the rest of the world.
That feels like it involves deadly amounts of current and more than $15 worth of parts. I do software, not electricity.
While this is a fun read, I still find the idea that we are sending washing machine timings to the cloud so that we can read them in some app completely bonkers. So much infrastructure for so little value. It just feels so wasteful.
It’s cool that Miele allows you to extract some of the data for your own use, though.
I would be impressed with a clothes line that could send me a text message though.
Just tape a moisture sensor to the inside of a clothes pin and you’re about 80% of the way there.
I believe these are called a dryer
By definition it’s not waste if you’re deriving enough value from it. You may not enjoy this particular sort of hacking but the author clearly does.
I am not criticizing the author and their hack; I am criticizing the trend of running massive cloud operations to store the time when the washing machine is done. That is a complete waste of resources in these times.
I think it’s more likely they already had a massive cloud operation to manage supply chains, invoicing, employee data security, and product research, and figured “why not use a bit of it to add some IoT features”.
I’d bet even the 3rd party API is a throw-in, and the actual reason they have IoT is for telemetry. It’d make it easier for the company to learn, for example, how long each part lasts under light vs. heavy loads.
Miele being an old (123 years!) German company, I doubt that. They are an old-style manufacturing company, not a young cloud shop. I highly doubt they run their business from the cloud, and if they do, that is a recent thing. These types of companies move slowly, and German companies are extra conservative.
Googling “Miele SAP” shows a promo page on SAP’s site which states
Now sales != manufacturing but it would not surprise me if Miele and SAP are tight, and that this kind of stuff (consumer connection) could be part of that
Ah! Yeah I will admit I’m struggling to think of a time when I’d have found cloud access to my washing machine useful. Like … if I’m at home I’ll hear it; if not, … what action can I take?
Also, what happens when the manufacturer decides this washing machine is EOL and they pull the plug on the servers? And what about the security of having a washing machine connected to the internet?
It would be so much better if these “smart devices” would be LAN-only. This shouldn’t even be all that difficult to set up - printers have been LAN-only for a long time and despite the shitty reputation of printers, people have been making this work. Or perhaps bluetooth - people know how to make Bluetooth stuff work from their phones. You could even have an app. And if spying is really important for them (of course it is), they could still do it by making the app phone home.
I think the biggest problem is the bundling of the device and the cloud service. Now, the washing machine is dependent on some remote cloud service for some of its features. I’d mind a lot less if the washing machine spoke MQTT (over TLS) and defaulted to the cloud provider’s endpoint, but also had an option to connect wherever I pointed it.
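Something like this is all the client side would need if the machine just published status to a broker I control (the broker address and topic are invented here):

    # Subscribe to a hypothetical washer status topic on a local MQTT broker, over TLS.
    mosquitto_sub -h 192.168.1.10 -p 8883 --cafile ca.crt -t "home/washer/status"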
My dude.
Thank you.
Perhaps the whole site is a subtle trolling of people who don’t use Reader Mode in their browser.
It is so refreshing to see an accessibility article where the author acknowledges that the correct solution is to fix bugs in 7 pieces of software, rather than expecting everyone who has ever typed anything into a computer to change how they act.
Except Unicode latin numerals aren’t exactly in common usage, so it’s not clear how this would affect “everyone who has ever typed anything into a computer”
This is just an example of a bigger pattern. Similar problems affect math symbols that some people use for fake bold/italic, or even use of multiple emoji.