This is what happens when people don’t update Windows!
How can you determine that? Are you joking?
[Comment removed by author]
It was specifically this patch, released just under 2 months ago.
So, if I don’t update, I will get this virus?
Do you have a source for that? The trackers seem to still be counting up for infections. I don’t see anything in a quick scan of the news about a kill switch?
Thanks! Wouldn’t it be relatively easy for someone to repackage this without that vulnerability though?
Incredibly. It’ll be a long weekend for ops folks everywhere.
Remember folks, The Cloud is just somebody else’s computer. :)
They’re still better at managing it than me.
Sure, but what if you don’t need something as complex as s3? If I just want to serve static files I can probably manage that just fine - probably better than AWS can manage something as complex as s3.
Internally I’m sure S3 is super complex, but none of that is exposed to me.
If you were using the AWS-recommended setup for a static site on s3 (that is, putting a CDN in front of it) then you likely didn’t notice the outage at all (for a few hours you couldn’t post new stuff but existing content is served out of your CDN).
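A rough sketch of that setup in Python. Bucket name and region here are hypothetical, and real deployments would pass these payloads to boto3's `put_bucket_website` / `create_distribution` (or just use the console); this only shows the shape of the configuration:

```python
# Sketch of the S3 static site + CDN setup described above.
# Names are made up; the payload shapes follow the S3/CloudFront APIs.

def website_config(index="index.html", error="error.html"):
    """Payload for s3.put_bucket_website(WebsiteConfiguration=...)."""
    return {
        "IndexDocument": {"Suffix": index},
        "ErrorDocument": {"Key": error},
    }

def s3_website_endpoint(bucket, region="us-east-1"):
    """The S3 website endpoint that the CDN uses as its origin."""
    return f"{bucket}.s3-website-{region}.amazonaws.com"

def cloudfront_origin(bucket, region="us-east-1"):
    """Origin block for cloudfront.create_distribution(DistributionConfig=...).
    The CDN keeps serving cached objects even while this origin is down,
    which is why readers wouldn't notice an S3 outage."""
    return {
        "Id": f"S3-{bucket}",
        "DomainName": s3_website_endpoint(bucket, region),
        "CustomOriginConfig": {
            "HTTPPort": 80,
            "HTTPSPort": 443,
            "OriginProtocolPolicy": "http-only",
        },
    }

print(cloudfront_origin("example-static-site"))
```

During the outage this buys you stale-but-served content: reads come from CDN edge caches, only uploads of new objects fail.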
Had a static site setup exactly this way on us-east-1 and it went down.
Was able to get a backup working on Firebase in about 30 minutes.
Although I’m not authorized to speak for Amazon, I can confirm that Amazon has multiple servers.
There were still other availability zones up for S3; most devs are just not interested enough in HA to use them.
Regions. All availability zones in east-1 were down.
For S3, it’s difficult and expensive (you pay again for each region you duplicate into, and the automatic duplication features aren’t up to symmetric multi-region setups) to go multi-region. For most other things, it’s either impossible or seriously kills the benefits of a cloud platform. (You don’t have to deal with renting remote server space or setting up a VPN; all the other technical issues with setting up a widely-distributed system remain your problem.) And finally, when Amazon goes down, your particular outage is probably not the most severe one your customers are facing.
So yeah, you’re not wrong, it would certainly be possible with good distributed design to have stayed up through this outage. It’s a hell of a lot more difficult and expensive than just flipping a switch, though.
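For the S3 case, the "pay again and wire it up yourself" part looks roughly like this. Bucket names and the role ARN are hypothetical; a real setup also needs versioning enabled on both buckets and an IAM role, and the payload goes to boto3's `put_bucket_replication`:

```python
# Sketch of S3 cross-region replication, per the discussion above.
# All names are hypothetical; you pay storage again in the destination region.

def replication_config(dest_bucket, role_arn):
    """Payload for s3.put_bucket_replication(ReplicationConfiguration=...).
    Note this is one-way: writes to the destination are NOT copied back,
    which is why it isn't a symmetric multi-region setup on its own."""
    return {
        "Role": role_arn,
        "Rules": [
            {
                "ID": "replicate-everything",
                "Status": "Enabled",
                "Prefix": "",  # empty prefix = replicate all keys
                "Destination": {"Bucket": f"arn:aws:s3:::{dest_bucket}"},
            }
        ],
    }

cfg = replication_config(
    "my-site-us-west-2",                          # hypothetical destination
    "arn:aws:iam::123456789012:role/replication", # hypothetical role
)
print(cfg["Rules"][0]["Destination"]["Bucket"])
```

Failing over reads to the second region during an outage is still your problem (DNS, client config, or an app-level fallback).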
It’s worth noting also that you can get better uptime even within a single region by not making that one region us-east-1. As far as I can tell it’s the oldest, most crowded, and least stable one out there.
They could just do good clustering on a reliable OS, like VMS did in the '80s. Those clusters often had uptimes that exceeded the lifetime of current clouds.
Compared to all the piles of tech I see in cloud-style deployments that are constantly changing, of varying quality, and with varying docs?
Yes, “just” install this OS on a few boxes, follow the manual on setting up clustering, set up the networking side, and you’re good to go. There are even consultants available who can do it for you for reasonable fees. Old timers call such tech a “known quantity,” where most surprises have been ironed out.
OpenVMS won’t even have x86-64 support until 2019. Not sure it’s a super great candidate for a modern business operating system unless you’re in it out of necessity. (http://vmssoftware.com/pdfs/VSI_Roadmap_20161205.pdf)
Remember, I said “like VMS did in the '80s.” I’m not saying you have to use that specific product; I meant it more open-ended. You’d have to be OK with Itanium servers for VMS itself. There are other clustering solutions out there that are similar, there’s potential for FOSS to clone some of them, and so on.
So I was around in the 80s. The clustering options were super proprietary, fragile, and rarely survived an upgrade. Although we had high reliability systems, we had exactly zero distributed, globally accessible, low latency, highly available, geographically disparate systems. And even fewer storage systems of that type. In fact, I don’t remember anything that could survive the loss of 5U of a rack, much less a whole rack, much less a cage, much less a datacenter. In fact, I could tell you stories about various hilarious attempts to make SCSI reliable that would make your hair stand on end.
The reason why you don’t see a proliferation of that type of system today, and in fact reach to find any example, is that it turned out to be an evolutionary dead end. The pattern of using many commodity components and tolerating failure turned out to be far more successful than using a smaller number of highly engineered components that armor themselves against failure. It turns out that in the presence of rapidly mutating state, and arbitrary threats, there are no tradeoff-free solutions.
In other words, nobody read the RAID paper. ;) (ok, so that was 1988, too late to save the 80s)
Appreciate the perspective. Many others told me something different, with VMS admins most of all saying their stuff was quite resilient. As far as 5U goes, one bank lost a whole site of VMS servers in the WTC with failover happening and a claimed loss of no transactions. Who knows what the story is for remote filesystems.
Is there a modern OS that can provide the strongly reliable clustering you describe at a relatively comparable cost to S3? Honest question.
I haven't surveyed them in a while. I doubt it will be that cheap, though.
%CLOUD-F-NTCMFL, cloud network connection failure
that will not help you if it is a network problem.
That’s why standard practice in highly-available clusters is redundant networking links over different providers. Many used leased lines, too.
If you’re doing it right it’s several people’s computers.
Possession is an obsolete concept. If you can ssh into it it might as well be yours.
I’m just going to assume you don’t run multi-user systems.
Let’s be honest - is anyone running them outside of shell services and webhosting?
Uh…yes? Think universities, engineering companies (real ones, not YCStartupprgonnabegoneintwomonths.ly), research labs…
Could use a forgot password function.
Yeah. I’m working on that. I’ve had a couple users email me, and I’ve manually reset their passwords. If you need me to manually reset yours, just let me know.
“From the negative perspective, people can use our cross-browser tracking to violate users' privacy by providing customized ads,” Yinzhi Cao, the lead researcher who is an assistant professor in the Computer Science and Engineering Department at Lehigh University, told Ars. “Our work makes the scenario even worse, because after the user switches browsers, the ads company can still recognize the user. […]”
The value of work like this is evident: certainly there are advertisers looking for privacy vulnerabilities like this one, and if they find a hole they’ll keep it secret and exploit it. It’s good to have people finding these holes on behalf of the advertised-to, and publishing them so they can be fixed.
A question: is it usual to publish immediately when one discovers a privacy vulnerability? Would it be good to treat privacy vulnerabilities like (other) security vulnerabilities, and give browser vendors a head start to fix the vulnerability before it is published?
I’m not sure what could be done to fix this vulnerability. Scanning for WebGL capabilities isn’t exactly a bug, nor is checking the system font list.
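That's the core of the problem: each feature is a legitimate capability query, and it's only the combination that identifies you. A conceptual sketch (not the paper's actual method) of how machine-level features could be hashed into a browser-independent fingerprint:

```python
# Conceptual sketch of a cross-browser fingerprint: hash features that
# come from the hardware/OS rather than the browser. Feature values below
# are made up for illustration.
import hashlib

def fingerprint(features):
    """Hash a dict of machine-level features into a stable identifier.
    Because fonts, GPU strings, etc. belong to the machine, the same
    value can show up in two different browsers on the same box."""
    canonical = "|".join(f"{k}={features[k]}" for k in sorted(features))
    return hashlib.sha256(canonical.encode()).hexdigest()

machine = {
    "fonts": "Arial,Courier,Times",   # hypothetical system font list
    "gpu": "ANGLE (Intel HD 620)",    # hypothetical WebGL renderer string
    "cores": 4,
}
print(fingerprint(machine)[:16])
```

Any single feature is defensible; the fix has to target the aggregate (blocking, noising, or permission-gating the queries), which is why it's a design problem rather than a patchable bug.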
If you did you wouldn’t even be able to post that.
True, I use a selective whitelist. You could also disable just WebGL.
Selectively enabling WebGL for sites that request it would be nice as a privacy option, sort of like how browsers treat Java and Flash.
Hmm, I did find CanvasBlocker for Firefox:
“Users can choose to block the <canvas> API entirely on some or all websites.”
If it was a really specific bug with a clear fix, I’d treat it like a security vulnerability and give the vendor a chance to fix it first. But this is more like a design flaw than a specific exploit, and I think it’s unlikely those can be fixed without substantial public discussion, because you need to build consensus around a design change and argue about the tradeoffs. For example, that’s how the privacy leak through the CSS :visited selector was eventually fixed.