This feels like a bit of a gotcha, but I’m not shocked the guy whose blog begs you for your email address in a big popup halfway down is fine with software having small annoyances.
It’s not his blog software. It’s Substack, a well-known newsletter-hosting company that has been doing this annoying pop-up thing for the last six months at least. I’m surprised you haven’t been annoyed by it before. I see it like three times a day.
Once Medium started doing that (complete with smarmy “pardon the interruption” language) I moved all my posts to a static site generator hosted on GitLab and stopped clicking on Medium links. It’s a shame Substack links are not similarly easy to identify.
It’d be cool if we posted fewer of these. They seem like a content farm or something, with many of them following the pattern: “I had a thought. It may shock you. I think [thing]. Maybe you disagree. Thanks for reading my blog.” and an annoying popup invitation to subscribe for more of the same porridge.
Interesting. Indeed, looks like he successfully registered @s3.amazonaws.com
… luckily he indeed works as a security engineer at AWS so there’s that. Could it be that he, by being able to do this, pointed out a potential problem with the protocol?
This is exactly what he did: pointed out a problem with the protocol. As far as I can tell, it doesn’t require any special access to be able to do that.
Yeah, as long as you could write files to the `xrpc` bucket you could do that (s3.amazonaws.com serves path-style requests, so whoever owns the bucket named `xrpc` controls every URL under https://s3.amazonaws.com/xrpc/, including the handle-resolution endpoint). I’m frequently surprised at how generic some of my bucket names are, so it’s not entirely shocking that `xrpc` was free.
Wait… analog buttons? Yep, it turns out the original Xbox controller has a hardware feature not seen since the PS3 era: the A, B, X, Y, white, and black buttons have 256 pressure levels.
A capability that led to many unfortunate guard stabbings in Metal Gear Solid 3 on the PS3. To hold a guard and interrogate them you had to slightly hold down one of the buttons, but hold it down too much and you stabbed them! I’ve never seen another game use that “feature”, thankfully.
MGS2 had it initially with “slowly lowering the weapon” vs. “firing the weapon.” That caused, uh, similar outcomes when trying to hold up guards.
It’s Kojima or someone who only reports to Kojima. Death Stranding’s controller interactions are similarly complex and fuzzy. You use the same pressure-sensitive triggers to hike up the straps of Sam’s backpack to keep balance better and also throw piss at ghosts, with only some really detailed semantics about timing and pressure to differentiate them.
Gran Turismo 3 used it for acceleration. I also felt like some platformers (Jak and Daxter perhaps?) used it, unless that was just me hitting the button harder in a vain attempt to jump higher.
(edit: I figured out the list and I am disappointed to find I wore out my controller for nothing)
Not a regression since it was basically greenfield code, but the first protobufs pass on Riak Time Series didn’t have a float64 field, just an ASCII-encoded numeric field. I begged and begged for feedback from existing protobufs experts since it was my first work on that, but didn’t hear anything until after it was merged into `develop` with field numbers assigned. We did fix it before it made it to customer-world, but still.
I’m not sure any startup really needs a CDN. The only scenario might be when it is serving large volumes of media like videos or audio.
Every site which works internationally could use a CDN. The round-trip between some common areas can take up to half a second with good connections on both sides. That’s per-packet so now you have actual seconds before images even start to download.
Then there are server costs - why would you spend compute time/bandwidth serving images if you can offload that for next to nothing?
Both as a hobbyist and as someone building apps I’d much rather use a CDN when the alternatives are running my own web server or using expensive Rails requests to send assets. It’s not a thing where you have to get a contract or buy a huge appliance, it’s something GitHub Pages doesn’t even let you opt out of, and it’s not a huge mission to set up AWS CloudFront with backend frameworks that have an “asset pipeline” concept.
Maybe the argument is not against a CDN, but about the stage in a company’s growth at which it becomes useful. If one uses, say, TornadoVPS, which does not charge for network traffic, one’s user base is relatively small, and the cheapest host is good enough performance-wise for their dynamic (not static pages) app, then why is a CDN needed?
A first-time user accessing the site from an international location will benefit from CDN speed if the JS or the images are big. But if the compressed JS is, say, under 500 KB, and the images are 80 KB on average, with fewer than 1-5k hits per day (for a dynamic app), then the CDN benefits are not clear (unless a CDN is literally already built into the hosting).
Don’t forget CDNs often do TLS termination close to users, which makes a visible improvement in perceived latency for end users. It matters even more for very small payloads where the TLS handshake might represent up to 50% of overall latency.
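As a rough back-of-the-envelope (assuming one round trip for TCP, one for a TLS 1.3 handshake, and one for the HTTP request itself; the RTT figures are illustrative, not measurements):

$$ t_{\text{first byte}} \approx 3 \times \text{RTT}, \qquad 3 \times 150\ \text{ms} = 450\ \text{ms (distant origin)} \quad \text{vs.} \quad 3 \times 15\ \text{ms} = 45\ \text{ms (nearby edge)} $$

Since the handshake cost is fixed, the smaller the payload, the larger the share of total latency it eats.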
Because CDNs have a very low failure rate. My personal VPS goes down for a few random hours every year as Vultr does whatever it’s doing to the boxes, but CloudFront is basically always up.
I don’t understand what’s cursed about how they switch OSs on sleep. What could go so very wrong with it?
The filesystem sharing does sound quite cursed and fragile. I imagine they used a journal rather than direct access in part so that Windows would be notified that the files were changing, so it could update any internal filesystem state and potentially also pass notifications to any listening apps.
It’s a testament to someone’s ego that products ever launched with “fast booting” OSs that weren’t actually fast.
At least fifteen years ago, the OS going to sleep assumed it’d be the next thing awake, and those assumptions are coded into things like open filehandles and such being saved and then restored. You’re basically right about the journal being used to let Windows keep up to date on any file state changes, but I think a lot of engineers wouldn’t want to get to the point where they need that.
I had a coworker who got in the habit of (some sequence of) hibernating Linux in a VM, hibernating Windows, un-hibernating Linux on the host, and then un-hibernating Windows in a VM. It all came crashing down when something got shut down out of sequence: the un-hibernated OS still assumed stuff on disk was laid out the way it expected, and it completely trashed the filesystem.
Yeah, mounting the filesystem read/write on two OSs is obviously a problem. I just don’t see why the other idea is bad. I guess even if you don’t edit any data outside your partitioned disk and memory space then you’re still going to be changing state in the hardware devices that might confuse drivers?
Similar ones were handed out at the SHA-2017 camp: https://wiki.sha2017.org/w/Projects:Badge
They were pretty fun, and well thought out! Readable in direct sunlight, a kit of tricolor LEDs you could learn to solder on to have fun with after dark, a collection of apps, and best of all, they still work.
The `%n` format specifier is nice because you can use it to overwrite arbitrary memory if you control the format string: https://formatstringexploiter.readthedocs.io/en/latest/examples/hacker_level.html
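For anyone who hasn’t run into it: `%n` writes, through a pointer argument, the number of characters printed so far. A minimal benign demo (plain C, nothing from the linked exploit):

```c
#include <stdio.h>

int main(void) {
    int count = 0;
    /* %n consumes an int* argument and stores into it the number of
       characters printed so far by this call -- here, 5 for "hello". */
    printf("hello%n, world\n", &count);
    printf("characters printed before %%n: %d\n", count);
    return 0;
}
```

The linked write-up builds on exactly this: width specifiers like `%123x` let an attacker control how many characters count as “printed”, so a controlled format string plus `%n` becomes a write of a chosen value to a chosen address.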
I’m somewhat surprised to see Bitwarden left out of the comparison here. It’s open source and (I think) a very popular alternative to 1Password and LastPass within the tech world. Perhaps there weren’t any security vulnerabilities found, and the quality of their response could therefore not be compared?
I hadn’t found any flaws in Bitwarden at the time, but I need to do a thorough review of it. Since it’s open source, it’s one of the few I can thoroughly review.
Chromium has one of the longest compile times of any open source project; I’m not sure Rust would make it substantially worse.
As a Gentoo user I regularly witness hours-long Chromium compile times and know the pain all too well, even when it’s running in the background. Isn’t it scary to think that we might reach new extremes in Chromium compile times now?
From what I’ve heard, this might not be a relevant metric for the Chromium project. AFAIU most of the contributors compile “in the cloud” and don’t care about downstream compilation :/
I have first-hand experience with this. My compiling machine was a jaw-droppingly beefy workstation which sat at my desk. A cached compilation cycle could take over an hour if it integrated all of Chromium OS, but there were ways to only test the units that I was working on. Maybe this is a disappointing answer – Google threw money at the problem, and we relied upon Conway’s Law to isolate our individual contributions.
Chromium’s build tool Goma supports distributed remote builds. So as long as you have a server farm backing Goma, the build is actually pretty fast.
Similarly, at Mozilla they used distcc+ccache with Rust. So the compilation has always been distributed to the data center instead of running locally.
Either you own your computers or you do not. If I need a datacenter/cluster to make software compilation even half-bearable, the problem is software complexity/compilation speed and not too little computational power. And even in the cloud it takes many minutes, even for incremental builds.
The first step to solving a problem is admitting there is one.
The reason Chrome exists is to allow a multinational advertising company to run their software on your computer more efficiently. Slow compile times are not a problem they experience and they are not a problem they will address for you.
I agree that the software complexity has grown. I don’t necessarily think of it as a problem, though. Chromium to me is the new OS of the web. You build apps running on top of this OS, and there are complex security, performance, and multi-tenancy concerns.
IMO, modern software has gotten complicated but that simply follows a natural growth as technologies mature and progress over time. You could solve this “problem” by inventing better tools, and better abstractions… that help you navigate the complexity better. But I don’t think reducing the complexity is always possible.
This is where I fundamentally disagree. There are many good examples which show that complexity is often unnecessary. Over time, software tends to build up layer upon layer, often for no reason other than historic growth and cruft.
The system will not collapse, I think, but fall into disrepair. Currently, there is enough money in companies to pay armies of developers to keep this going, but now that we are already deep in a recession and going into a depression, it might be that the workload to feed these behemoths exceeds the available manpower. This might then motivate companies to give more weight to simplicity, as it directly affects the bottom line and competitiveness.
Systems tend to collapse, replacing complex mechanisms with simpler equivalents. This used to be called “systems collapse theory” but apparently is now called collapsology. For example, we are seeing an ongoing migration away from C, C++, and other memory-unsafe languages; the complexity of manual memory safety is collapsing and being replaced with automatic memory management techniques.
This is a bit like saying “well the browser already has so many features: versions of HTML and CSS to support, Bluetooth, etc. – could adding more make it substantially worse?”
Yes, it could – there’s no upper bound on compile times, just like there’s no upper bound on “reckless” features.
That said, I only use Chrome as a backup browser, so meh
Rust compile times are really good now. It’s not C fast but a lot better than C++. At this point it’s a non-issue for me.
(YMMV, depends on dependencies, etc etc)
Is there any evidence to support the claim that replacing C++ with Rust code would substantially slow down compile times? As someone who writes C++ code every day and also has done some Rust projects that see daily use at work, I really don’t see much of a difference in terms of compile time. It’s slow for both languages.
In the context of Gentoo, you now need to compile both clang and rustc; that probably is a substantial increase.
It appears to be about the same, perhaps slightly worse.
Builds would be way quicker if they just ported everything from C++ to C.
Memory safety problems would be way worse, but this thread seems unconcerned with that question.
I wonder if there’s a reason they didn’t use a fleet of macOS machines running multiple iOS simulators.
I was wondering that as well, but it’s likely a cost/performance tradeoff rather than lack of functionality on macOS.
My preliminary speed tests were fairly slow on my Macbook. However, once I deployed the app to an actual iPhone the speed of OCR was extremely promising (possibly due to the Vision framework using the GPU). I was then able to perform extremely accurate OCR on thousands of images in no time at all, even on the budget iPhone models like the 2nd gen SE.
I wonder if his MacBook is an Intel one or an Apple Silicon one. The latter has an architecture closer to the iPhone’s.
Or run the Vision APIs on macOS, where they’re documented as supported…
I’m aware of ocrit, a command-line tool that uses Apple’s Vision for OCR.
The article says:
My preliminary speed tests were fairly slow on my Macbook. However, once I deployed the app to an actual iPhone the speed of OCR was extremely promising (possibly due to the Vision framework using the GPU).
Not clear why the MacBook was so much slower, however!
Sounds like it’s because of COVID-19 infection risk for attendees though they don’t say it explicitly.
It’s fascinating that this is still such a concern after everyone who wanted shots (and then some) got them.
It’s not surprising: the Congressseuche (“congress plague”) was a kind of flu that was common during previous congresses, and while being out sick for a week was already not great, with Covid and the risk of long-term damage the tradeoff has changed quite a bit. With vaccines, the morbidity risk of Covid is mostly solved and long-term damage has been reduced, but it’s still not entirely gone.
I guess the bigger issue is COVID is never going away, so the tradeoff at this point is do you want to do something now with reasonable precautions like wearing masks in crowded halls or just never ever do it in person again. The never do it in person option makes sense for lots of things. There are tons of conferences that could just be webinars. But if you think doing it in person is good, the risks from COVID are going to be more or less identical in 2023, 2024, etc. Like I hope they do come out with that vaccine that’s nasal and addresses all variants, but uh, even after that it’s not realistically going to get 100% uptake.
DEF CON (similar size) in August had close to 700 of 25,000 people report positive cases; among the volunteer “goons”, who had a better reporting rate, over 12% reported positive.
The main Congress event isn’t held in a wildly different space (big convention center), and while it does have fewer cramped, hot, and sweaty hotel room parties than DC (I’m pretty sure I got COVID at one this year), instead it has more mixing of attendees with the general public in public transport.
By contrast, Camp is entirely outdoors (to the point that during a thunderstorm there’s nowhere really safe to go), with lots of fresh air and space for everyone.
Yeah, after Oktoberfest in Munich the numbers were spiking. Hospitals are full and they assume it will only get worse later this year. I think it is the right move, but I am still infinitely sad about it being cancelled.
Wired earbuds are also not repairable! The wire or connector often breaks! I’ve tried to solder them back together, but they are almost microscopic enamel-coated wires which are pretty much impossible to fix at home.
My airpods have lasted over 3 years, which is far longer than any pair of wired earbuds I ever had.
Wired earbuds start at, like, $1 and don’t have chips or lithium-ion batteries at that price point. That they are fragile and disposable is a problem, yeah, but you’d have to shithouse a lot of even the fancy Lightning headphones with the mic and controls on the cord to equal the aggregate waste of running a single pair of AirPods through the laundry.
At my work, our backend uses LISTEN/NOTIFY to listen for database changes and inform the UI over the websocket connection if the user’s view (in the browser) needs to be refreshed.
I think the idea is good but our implementation is not good. Would love to see better working examples of something like this.
I’m doing the same in a Phoenix app. I start a `GenServer` as part of the app that handles the listens and sends out an `Endpoint.broadcast/3` when a relevant one comes in (on the busy one that broadcast includes the query results the clients crave). The `LiveView` instances clients are on subscribe to the endpoint channel when they start up, so the updated query results generate new HTML and bang it out to the browsers.
I dunno about your implementation, but like you said the idea sounds fine. We did something similar at a previous job. We had a 3rd party integration which made changes to certain models in the background, which would then trigger `LISTEN/NOTIFY` to tell other parts of the software to restart a computation.
In my current job, we also have a “main” server which runs and sends updates to clients, and when a command line task or cron job makes some changes to the db, it informs the main server about the changes so that it can send updates to the clients.
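For anyone curious what the listening side looks like at its most bare-bones, here’s a minimal libpq sketch (the `model_changed` channel name is made up for illustration; this is not anyone’s actual code from the thread):

```c
/* Block on the connection's socket, then drain any NOTIFY messages.
   Build with: cc listen.c -lpq */
#include <libpq-fe.h>
#include <stdio.h>
#include <sys/select.h>

int main(void) {
    PGconn *conn = PQconnectdb(""); /* connection params from PG* env vars */
    if (PQstatus(conn) != CONNECTION_OK) {
        fprintf(stderr, "%s", PQerrorMessage(conn));
        return 1;
    }
    PQclear(PQexec(conn, "LISTEN model_changed"));

    for (;;) {
        /* Wait until the server sends something on our socket. */
        int sock = PQsocket(conn);
        fd_set readable;
        FD_ZERO(&readable);
        FD_SET(sock, &readable);
        if (select(sock + 1, &readable, NULL, NULL, NULL) < 0)
            break;

        PQconsumeInput(conn);
        PGnotify *note;
        while ((note = PQnotifies(conn)) != NULL) {
            /* relname is the channel; extra is the optional payload. */
            printf("channel=%s payload=%s\n", note->relname, note->extra);
            PQfreemem(note);
        }
    }
    PQfinish(conn);
    return 0;
}
```

The sending side is just `NOTIFY model_changed, 'payload'`, or `SELECT pg_notify('model_changed', 'payload')` from a trigger function.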
What do we think the cause is? It seems unlikely to me that physical bit flips would cause this. Lossy compression software gone wrong?
One thing I’ve seen mentioned is one of those JPEG sequels like HEIF or whatever, so I could see that they’re trying to optimize the hot thumbnails and proxies you get to see and mess with before the original has time to be retrieved.
In Lightroom non-classic, I jumped to a photo from Dec. 31, 2020, noticed it was a “Smart Preview” proxy, made some edits, and at some point it had finished downloading the 30 MB raw. I didn’t notice any visible change once it quit working with the proxy and started on the raw directly, but the original is bigger than my screen and may not be in the same color space either, so ¯\_(ツ)_/¯
My first thought was that Google has been experimenting with running the images through some sort of lossy NN compression system, but on a second glance I think that’s less likely.
Some people report that clicking edit on the photo helps. Not really sure what that implies, but maybe it’s just the exported/cached format causing issues? Some conversion from a format used back then, maybe?
This is great! One issue that recently came up for me is that the Ubuntu 20.04 LTS docker.io package omits the /etc/init.d/docker script from upstream, so there was no (easy) way to start the docker daemon in WSL. Honestly, a super odd packaging decision given how much Ubuntu seems to be the first-est class citizen of WSL.
From my experience, WSL would really rather you run Docker Desktop (which runs the docker-machine in Hyper-V) and let it push the docker CLI into the WSL guest.
I’m not sure I understand. If you read only the key ID from the authenticated payload in order to authenticate it, is there a problem? Or is the problem that this is error-prone to implementers? I’m no crypto expert, but I suppose I care about security more than average, and I thought it was obvious that nothing but the key ID should be used before authentication.
My interpretation is that reaching in and grabbing just one thing from the untrusted payload is bad spec design, since it means that API developers are going to want to implement grabbing any ol’ thing out of the untrusted payload.
(Facetiously) I’m beginning to think JWT is bad?
Meanwhile, I’m beginning to think you can have an implementation of JWT that is non-compliant but good: ignoring any specified algorithm, and only verifying but never encrypting.
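Concretely, such a verifier never even parses the header: hardcode the MAC, recompute it over everything before the last dot, and compare. A rough sketch of the idea, assuming HS256 and OpenSSL (`verify_hs256` and `b64url` are made-up names for this sketch, not any library’s API):

```c
#include <openssl/crypto.h>   /* CRYPTO_memcmp */
#include <openssl/evp.h>
#include <openssl/hmac.h>
#include <stdio.h>
#include <string.h>

/* base64url-encode without padding; output buffer must be big enough. */
static void b64url(const unsigned char *in, size_t n, char *out) {
    static const char tbl[] =
        "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789-_";
    size_t i = 0, j = 0;
    for (; i + 2 < n; i += 3) {
        out[j++] = tbl[in[i] >> 2];
        out[j++] = tbl[((in[i] & 3) << 4) | (in[i + 1] >> 4)];
        out[j++] = tbl[((in[i + 1] & 15) << 2) | (in[i + 2] >> 6)];
        out[j++] = tbl[in[i + 2] & 63];
    }
    if (n - i == 1) {
        out[j++] = tbl[in[i] >> 2];
        out[j++] = tbl[(in[i] & 3) << 4];
    } else if (n - i == 2) {
        out[j++] = tbl[in[i] >> 2];
        out[j++] = tbl[((in[i] & 3) << 4) | (in[i + 1] >> 4)];
        out[j++] = tbl[(in[i + 1] & 15) << 2];
    }
    out[j] = '\0';
}

/* Returns 1 iff the token verifies under HS256 with `key`. The header's
   "alg" field is never read, so alg-confusion tricks ("none",
   RS256-vs-HS256, ...) simply don't apply. */
static int verify_hs256(const char *token, const unsigned char *key, int keylen) {
    const char *sig = strrchr(token, '.');
    if (!sig) return 0;
    unsigned char mac[EVP_MAX_MD_SIZE];
    unsigned int maclen = 0;
    /* MAC exactly the bytes before the last dot: "header.payload". */
    HMAC(EVP_sha256(), key, keylen,
         (const unsigned char *)token, (size_t)(sig - token), mac, &maclen);
    char expected[2 * EVP_MAX_MD_SIZE];
    b64url(mac, maclen, expected);
    sig++; /* skip the dot */
    return strlen(sig) == strlen(expected) &&
           CRYPTO_memcmp(sig, expected, strlen(expected)) == 0;
}

int main(int argc, char **argv) {
    if (argc != 3) {
        fprintf(stderr, "usage: %s <token> <secret>\n", argv[0]);
        return 2;
    }
    puts(verify_hs256(argv[1], (const unsigned char *)argv[2],
                      (int)strlen(argv[2])) ? "valid" : "INVALID");
    return 0;
}
```

It rejects anything that isn’t exactly the one algorithm you chose, which is the whole point of being non-compliant here.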
I agree with you… but what’s the point of JWT if the only good implementations are non-compliant? I remember reading good things about paseto but I’ve never actually used it.
The point is to have a tool that can be used to track sessions without maintaining much state on the server (revocation list is an obvious but, depending on your environment, plausibly optional thing). That’s all I need.
I’m really not a fan of JWT, but I have questions here. X.509 certificates also have an issuer field that is part of the signed data even though it doesn’t strictly need to be. Would X.509 be better if we stopped signing the issuer?
It has some of the other problems that have gotten JWT in trouble, too: certificates identify their own issuer, leaving it to the application to decide whether that issuer is acceptable, and their own signature algorithm.
Of course X.509 is much more tightly specified, and includes a standard way for issuer certificates to identify what kind of key they have. It also doesn’t mix asymmetric and symmetric cryptosystems. But I wonder if the main reason we consider it a reasonable security standard isn’t exactly the same reason developers might prefer JWT—the bar to implement X.509 at all is so high that people aren’t tempted to roll their own.
A lovely little diversion into physical machines. And also a little surreal how much expertise goes into it while at the same time never thinking of ‘turning off’ their solar panels by just putting them face-down on a towel or something instead of carefully moving them somewhere dark. Everyone will always have these “oh why didn’t I think of that” moments sooner or later.
They’re probably worried about light passing through the back of the panel making a current, and also sunlight can absolutely be too bright to work in. If you’ve got a shipping container workshop set up for solar panel work, why not?
I’ll bite: this was a ridiculous situation before `left-pad` broke the world and it’s only more ridiculous since then. That checking for numeric properties is expedited instead of hindered by pulling in multiple dependencies over multiple HTTP requests each is kind of an indictment of JavaScript and its curation over the last thirty years.
This is a post sponsored by Redis Enterprise, and I’m not sure how relevant it is to Mastodon performance. My guess is Masto’s queue workers spend basically no time doing Redis stuff and almost all their time talking to remote machines.