I think it’s interesting for relative comparison, as they are two startup sales just a few weeks apart. Also, considering that Google offered $200m for Digg (0.20 Instagrams), which was considered outrageously high just a couple of years ago, it’s striking to see the rate of inflation apparent in these sales.
For the record, I was merely being facetious. I do agree that it’s quite interesting to note the inflation; what’s also interesting is to see just how much value Digg has lost in the world. There was a time when it might have fetched a much handsomer price. It’s fascinating how the tides of the Internet change.
jcs, would you care to elaborate on the sorting a bit? Will clicking Home give us the highest-ranked stories within the last couple of days, with same-rank stories sorted by date? And how is everything outside the two-day range displayed? It doesn’t seem to be sorted by submit time.
It uses the same algorithm as Reddit for ordering stories. I’ve just changed the window on which it operates to 36 hours. As the site grows it will shrink down to 24 or 12 hours.
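For reference, here’s a rough Ruby sketch of the published Reddit “hot” formula this describes. The method name and constants are illustrative, and reading “the window” as the time divisor in the formula is my assumption, not the site’s actual code:

```ruby
# Reddit's published "hot" ranking: vote score counts logarithmically,
# while submission time adds a steadily growing bonus, so older stories
# need exponentially more votes to stay near the top.
EPOCH  = 1134028003  # Reddit's reference epoch (seconds since the Unix epoch)
WINDOW = 36 * 3600   # assumed: the 36-hour window mentioned above (Reddit uses 45000s)

def hot(upvotes, downvotes, submitted_at)
  score = upvotes - downvotes
  order = Math.log10([score.abs, 1].max)
  sign  = score <=> 0
  age   = submitted_at.to_i - EPOCH
  (sign * order + age.to_f / WINDOW).round(7)
end
```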
No, all custom Ruby on Rails code. I had a desire to rewrite it in Go, but as features have piled on it has become more complex, and it looks less and less likely that will happen.
I used Reddit’s “hot” sorting algorithm for stories, and Evan Miller’s confidence algorithm for comments (which Reddit also uses).
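For comments, that confidence sort ranks each one by the lower bound of the Wilson score interval on its upvote proportion. A minimal Ruby sketch, with an illustrative method name and a 95% z-value (Reddit’s version picks a different z):

```ruby
def confidence(upvotes, downvotes)
  n = upvotes + downvotes
  return 0.0 if n.zero?

  z    = 1.96             # z-score for a 95% confidence level
  phat = upvotes.to_f / n # observed upvote proportion

  # Lower bound of the Wilson score interval: a pessimistic estimate of
  # the "true" upvote proportion given how few votes we've seen so far.
  (phat + z * z / (2 * n) -
    z * Math.sqrt((phat * (1 - phat) + z * z / (4 * n)) / n)) /
    (1 + z * z / n)
end

confidence(10, 2)    # => ~0.55
confidence(100, 20)  # => ~0.76  (same ratio, more votes, higher confidence)
```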
https://lobste.rs/s/JwhCfO/plesk_0day_for_sale_as_thousands_of_sites_hacked
I need to tweak the duplicate story finder to match URLs with varying http/https and trailing slashes…
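Something like this hypothetical canonicalization pass before comparing submissions would cover those cases (the method name and exact rules are assumptions, not the actual matcher):

```ruby
require "uri"

# Normalize scheme, host case, and trailing slashes so that obvious
# variants of the same URL compare equal.
def normalize_url(url)
  uri = URI.parse(url.strip)
  uri.scheme   = "https"                    # treat http and https as the same story
  uri.host     = uri.host.to_s.downcase
  uri.path     = uri.path.sub(%r{/+\z}, "") # ignore trailing slashes
  uri.fragment = nil
  uri.to_s
end

normalize_url("http://Example.com/story/") == normalize_url("https://example.com/story")
# => true
```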
I am unconvinced that this is a good idea. Shouldn’t the default deployment behavior minimize surprise, which is almost impossible with automatic updates?
I’m not sure the tail of cloud deployments should be wagging the dog.
They also mention IoT. Not sure how much of that runs Debian but such devices tend to run unattended anyway.
These devices are essentially appliances and fall under the model of the manufacturer ensuring firmware updates. It’s still something that needs to be planned and engineered, not left to chance.
Moreover, the trend in cloud is for immutable, stateless servers that never have an uptime of more than a few days before they’re destroyed. In this context, it makes even less sense.
In that situation, automatic upgrades are irrelevant. They’ll be blown away / rolled up in the next release. So it’s not really an argument for or against.
You could certainly end up getting unexpected updates while the servers are alive if you don’t proactively disable it in your image, so yes it is still a concern.
A compromised machine is surprising.
Agreed. I’ve thought about adding an “apt-get update && apt-get dist-upgrade” cron job for years now, but I never have, because I like to know when and what things are changing.
I’d think so, but this is Debian. They do a lot of surprising things already, so it fits the mindset. People who don’t like surprises probably aren’t running Debian.
What you consider “surprising” is consistency and reliability, which are exactly why people who don’t like surprises do run Debian en masse.
I’m suspicious of automatic updates, but less so with Debian than with anything else, because their release management is so good that their unstable branch is more stable than some people’s releases. But I think the current system of just auto-installing security updates and packages from the updates suite (which contains things like tzdata, spamassassin, clamav… packages that are only useful if they’re up to date) is good enough for me :)