hai! i work at DO (as an engineer) & asked about this internally.
it looks like we ported all 670 scotch.io articles to digitalocean.com/community/tutorials/<article>, and planned on redirecting all of the scotch.io URLs to the community site. this plan was executed, but for some reason we only redirected a portion of the URLs - the rest of them hit our fallback redirect (to digitalocean.com/community).
we’re pushing a fix to redirect scotch.io/tutorials/* to digitalocean.com/community/tutorials/$1, which should correct most of the old scotch.io links.
while looking into this we noticed the same issue with alligator.io (a site we also acquired & redirected to the community docs site), and we’re pushing for the same correction there.
tldr; this wasn’t malicious - we just overlooked some redirects. sorry! most of these old links should work soon. ❤️
Thanks for the clarification!
np!
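For the curious, the wildcard fix described above (scotch.io/tutorials/* → digitalocean.com/community/tutorials/$1) is a capture-group redirect. Here is a minimal, purely illustrative Python sketch of that mapping — this is not DO’s actual configuration, just the shape of the rule:

```python
import re

# Illustrative sketch of the redirect described above: capture everything
# after /tutorials/ and reuse it (the "$1") on the community site; any
# other scotch.io URL falls through to the generic community fallback.
TUTORIAL = re.compile(r"^https?://(?:www\.)?scotch\.io/tutorials/(.+)$")

def redirect(url: str) -> str:
    m = TUTORIAL.match(url)
    if m:
        return "https://www.digitalocean.com/community/tutorials/" + m.group(1)
    return "https://www.digitalocean.com/community"  # fallback redirect
```

The earlier breakage, as described, was effectively every URL taking the fallback branch.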
Isn’t the point of buying a blog to continue to host it and add more things?
People (including me!) prefer to link to technical documentation, but when the editors of MDN are laid off, it’s helpful to at least be able to reference blog posts when doing something.
Isn’t the point of buying a blog to continue to host it and add more things?
Exactly!
The value of a blog is its content, not the clicks (at least not directly). People click on the links because they want to see the content. They can’t remove the content and just hijack the clicks with a redirect; people won’t stay after being misdirected to their website. How expensive is it to just host scotch.io, really, considering the buyer is a web hosting provider? Add a banner to each page and they could get even more juice out of it.
They can’t remove the content and just hijack the clicks with a redirect; people won’t stay after being misdirected to their website.
I am being a little cynical here, but:
Yes, yes they can. They bought the damn thing, so they may do as they please. That’s sad from my point of view, and people will probably not stay in the long run :(
Let’s be grateful for archive.org, which will still allow us to read those very, very good articles from CSS-Tricks.
I think the meaning was “They can’t remove the content and hijack the click and thus extract value.” Unless they’re doing click fraud with ad impressions, I guess.
There are two questions:
1. Are they entitled to do it?
2. Should they do it?
I think the answer to 1 is yes and to 2 is no. We can acknowledge the former while still decrying the latter.
I got furious following this series of blog buyouts by Digital Ocean and seeing them destroy the blogs afterwards. I want to write down a record of what they have done over the years and hold them accountable for their unethical business practices, even after they correct some of the things I pointed out directly. I am curious what others here think of what they did.
Have you tried contacting them? Maybe they just didn’t think about it at all (which is clearly still very bad).
I want to write down a record of what they have done over the years and hold them accountable for their unethical business practices
I’m confused about what’s supposed to be unethical here. They don’t seem to be buying these sites/businesses specifically in order to shut them down (as you point out in the blog, that would only harm their reputation).
We’re not entitled to DO continuing to host content for free forever, or for them to pay people to edit it. It would be nice if they did, but not doing so is not unethical, even if we don’t like it.
The title was changed from “Don’t Sell Your Indie Business to Digital Ocean!” to “Digital Ocean has shut down two technical sites” by a mod, but I want to clarify that CSS-Tricks is not officially shut down yet, and I am still hoping Digital Ocean will do the right thing with it: keep it live and free.
When to use target="_blank" on CSS-Tricks was one of those posts I’ve had to reference so many times in my career, especially when giving it to product teams, project managers, and junior developers who didn’t understand that you don’t mess with user agent defaults without a good reason. It’d be a shame to see CSS-Tricks go away.
I’m really desperate for a tool to preserve these websites in an “open web” way. Of course, archive.org exists, but if (when?) their datacenter catches fire or similar, everything on it might be lost as well.
I think solutions like ArchiveBox handle the archiving part well, but there’s no clear story on how to easily host archived sites and make them discoverable.
Of course, archive.org exists, but if (when?) their datacenter catches fire or similar, everything on it might be lost as well.
Maybe it’s a good idea to donate, then, so that doesn’t happen.
However, I agree that decentralizing these things is a good idea. I know archive.org had a browser extension or something at some point to help with indexing things that crawlers have a hard time reaching. Maybe it would be worthwhile to build on that so both benefit?
I want to move to a world where an entire web site, as of a particular moment in time, exists as a snapshot in a distributed content-addressed storage system, and your browser can be readily directed to download the entire thing.
This would of course necessarily entail having fewer features that depend on server interaction, but I think uh… most sites should not be apps, heh.
I’m aware that this is sort-of throwing a technical solution at a social problem, but I think in this case the technology could dovetail well with a cultural change where site owners want to do something about preservation: it would give people who care an easy, immediately actionable thing to do that makes a real difference.
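The distributed snapshot idea above rests on content addressing: a snapshot’s address is the hash of its bytes, so anyone holding a copy can serve it and anyone fetching it can verify it. A toy sketch, with an in-memory dict standing in for the network (this is the general idea, not IPFS itself):

```python
import hashlib

# Toy content-addressed store: blobs are keyed by the SHA-256 of their
# bytes, so the address is derived from the content itself, not from
# who happens to host it.
store: dict[str, bytes] = {}

def put(data: bytes) -> str:
    key = hashlib.sha256(data).hexdigest()
    store[key] = data
    return key

def get(key: str) -> bytes:
    data = store[key]
    # Verification falls out for free: re-hashing must reproduce the
    # address, so a tampered copy is detectable no matter who served it.
    if hashlib.sha256(data).hexdigest() != key:
        raise ValueError("content does not match its address")
    return data
```

This is also why mirrors are cheap in such a system: any node can re-host a blob without being trusted, since the address itself authenticates the content.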
Have you looked into IPFS? https://ipfs.tech/
I have, yes! I think IPFS is a very solid architecture; it should definitely be the basis of anything like this, and it probably solves about 90% of the problem. Of the part that’s left, most is documentation explaining what people might want to do, why, and how; the smallest part is whatever glue code is needed to make that easy.
One idea I had was for an appliance thing that could bring static IPFS blog / site publishing to the masses. Something like:
A SBC (RPi, Rock64, whatevs) running a Free OS.
Some sort of file share on it that was mDNS discoverable.
Each of these appliances has a (changeable, but default) unique IPNS identifier, with a QR code sticker on it that you can scan and share however you want (social media, IRL, as text, as an image, etc.)
Then you just write your content, copy it to the box (Samba? SFTP? …?), it generates a static site, you eyeball it, then hit ‘publish’ when you’re happy.
Aim would be for it to be simple enough for non-techies to use. There is a lot of devil in that detail, though. Some things I was spiking:
How to trigger the static generation? Samba is very bad at knowing when a file operation is “done”.
How to keep the thing updated and secure? I looked into Ubuntu’s IoT infra but there’s an entire herd of yaks to shave there.
How to support Windows? It still doesn’t do mDNS well, last I looked.
I’m all for this; it’s very similar to what I’ve been thinking about. I would personally choose SFTP over Samba, because managing ssh credentials is a skill that I think is very empowering and worth teaching, and because I never like tying my future to the whims of a megacorp. That does incur an additional documentation burden, though, since most people won’t know how to use it.
Your point 1 brings up another possibility though, which is using git-over-ssh. Then the generation can be kicked off by an on-push trigger in git.
With regard to your point 2 I personally lean very heavily towards NixOS as it’s good at this sort of thing, but teaching people how to manage appliances like this is a big writing task. I’m not a technical writer, and I’m not really the right person to take that on, although I’m always happy to chat with anyone who does.
Windows support does seem quite challenging, I don’t have good answers there.
Yeaaaahhhhhhhhhhh … I’m kinda reluctant to have to expose non-techies to Git. It’d be perfect for a coding-savvy market though.
but teaching people how to manage appliances like this is a big writing task.
I was thinking of something that wouldn’t have to be managed… updates would “just happen”. That turns out to be surprisingly difficult (cf. the herd of yaks).
It’s surprising to me that releasing an open-source appliance like this would still be a lot of work, but honestly, it really does seem like it would.
I’ve even started to build an archival app on top of it, but there are many thorny problems. How do you ensure the authenticity of archives published by other people? Where and how do you index archived content across the network? How do you get other people to re-host already archived content? How do you even get enough people interested in this to make it useful at all?
I’m definitely interested in this as well; I’ve started to believe that personal archives of sites/articles are the most resilient way to preserve information.
I’m very glad to see this post and learn about what’s happening. I had no idea.
Ugh, another site with a distracting, red no-JavaScript banner… *closes tab*
Wow, after dozens of visits to various Substack articles (and often giving up early because of the glaring red-on-white), it was only this comment that made me realise that’s what that was, and that it wasn’t just a branding choice.
Substack’s entire business model is tracking how people read their customers’ newsletters. Requiring JS is just part of that.
Oh, I get it; I just found it funny that being so bold in an attempt to get attention is what made me ignore it, like banner ad blindness.
I empathize with the OP, but once the blog is Digital Ocean’s property, they can do whatever they want with it, including shutting it down or hijacking the clicks.
The thing you said (that you can do whatever you want with a thing that is yours) is so obvious that I wonder why you said anything at all. What were you going for here?
Exactly. The assumption that DO or anyone else is buying up hosted blogs as an act of unadulterated altruism is flawed. There is nothing unethical about any of this. Incompetent? Perhaps, but certainly nothing to warrant the “DO tyranny be thy name” rant captured by the OP.
One further point I might add: the internet is not immutable. Things are in constant flux. I feel like this doesn’t need to be pointed out, but here we are.