It’s interesting: I think everyone goes through phases when it comes to maintaining a website, in parallel with their life:
Thanks for the talk, cadey :D
I just stuff my markdown files into zola (static site gen), copy them via SSHFS onto my host, and let nginx take over. It doesn’t look as nice as I’d like (my theme isn’t the best, still hoping for tree-sitter in zola, etc.), and nice features like stats are missing. But whatever, not gonna fall down the mentioned rabbit hole ;)
There’s also the people who just stay on step 0 (for any reason).
you go back to the basics
What would you say are the basics? I am going through similar phases. I would like to learn from the wisdom of those who have gone through these phases and I would like to know what the final phase looks like. Would you say that writing your posts and pages in plain HTML or Markdown and rendering them with a small static site generator is going back to the basics? Or do you mean something even less complicated?
The final phase looks different to everyone, but generally, yes, it’s something where you 1. write 2. upload. My current setup is: write a txt or html file in ~/Wiki/public/, then deploy.sh (which is just rsync really). My deployment script makes sure nginx is installed and the config file is there too.
Do you have another step where you add common header and footer to all pages? That’s the reason I am still using a static site generator but I guess it should be easy to solve using a simple shell script too. Like deploy.sh adding a header and footer to every file before rsyncing it. What do you do for common headers and footers?
This is mostly the template step.
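The template step above can be sketched in a few lines of Rust; the function name and the sample header/footer strings here are invented for illustration, not anyone’s actual setup:

```rust
// A minimal sketch of the "template step": wrap each page body in a
// shared header and footer. A real deploy script would read these from
// files (e.g. header.html / footer.html) and loop over every page
// before rsyncing the output directory.
fn render_page(header: &str, body: &str, footer: &str) -> String {
    format!("{header}\n{body}\n{footer}")
}

fn main() {
    let header = "<header><a href=\"/\">home</a></header>";
    let footer = "<footer>(C) me</footer>";
    // Prints the assembled page to stdout.
    println!("{}", render_page(header, "<article>hello</article>", footer));
}
```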
For me it was WordPress+MySQL. Then my bloody provider turned off the server after 10 years and, faced with the daunting task of migrating everything, I backed it all up and just never brought it back up.
There’s a 7th step where you abandon the Web for Gemini ;-P My website is now basically just a resume and a link to my Gemini capsule.
I had various systems over the years, but at a certain point I decided I wanted to take the Tumblr posts I had made about programming and separate them from the others, so I made a Hugo static site out of them. Then I started using Hugo for everything at work because I now knew Hugo really well. I think there’s definitely a lot of cross-pollination between work and personal projects.
I’m somewhere between 5 and 6. Right now I’m writing in Markdown, and I have a shell script that runs pandoc on all the markdown files to generate a static site, and I kinda like it. The only problem with it now is that it regenerates everything every time, so I’ve been toying with the idea of changing the shell script to something weird like a Makefile.
I’ve found that when I’m burning out, I gravitate toward very old school tools like I had when I was first beginning my career; I think that’s where the Makefile is coming from.
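The Makefile idea boils down to one check: only rebuild an output when its source is newer. A rough sketch of that in plain Rust (file names are hypothetical):

```rust
use std::fs;
use std::path::Path;

// Rebuild an output file only when its source was modified after it,
// i.e. the dependency rule a Makefile gives you for free.
fn needs_rebuild(src: &Path, out: &Path) -> bool {
    let mtime = |p: &Path| fs::metadata(p).and_then(|m| m.modified()).ok();
    match (mtime(src), mtime(out)) {
        (Some(s), Some(o)) => s > o, // source changed after last build
        _ => true,                   // output missing or unreadable: rebuild
    }
}

fn main() {
    // e.g. run pandoc on post.md only when this returns true
    println!("{}", needs_rebuild(Path::new("post.md"), Path::new("post.html")));
}
```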
I’m running through all the steps consistently and constantly, but I never do any actual blogging. It’s fun to set things up, but I rarely feel I have worthwhile thoughts, so I give up after the hello-world post, or maybe two more.
I don’t know the performance of Pandoc, nor the amount of content you have, but my simple microblog has almost 10K entries and renders in less than 10s. I use the Perl interface to CommonMark to generate HTML.
It’s slow-ish for me right now because I’m lazy: I’ve embedded the images with data: URLs until I can come up with something less brute-force. :) It seems the process of turning graphics into huge data: URLs (unsurprisingly) adds a lot of time.
The first part of the post, up to Maud, reads to me like more condemnation of Markdown as a format unsuitable for any real content. If you want definition lists, and not some garbage front matter that isn’t standardized across implementations, isn’t codified in the spec, and isn’t rendered properly by basic parsing tools, then don’t use Markdown. You’ll get more features like figures, admonitions, details/summary collapsible syntax, blockquotes with attribution, file imports, section numbers, tables of contents, ya know, stuff for actually posting rich text content for books and blogs. A lot of the hurdles encountered could have been circumvented for a long time with Asciidoctor alone. Slap on a standalone static binary tool like Soupault and you can do a lot of ‘dynamic’-looking things by pre/post-processing and augmenting that output, parsing metadata that already exists in elements on the page, without resorting to all of these custom front matter, custom element, shortcode, et al. syntax hacks that lock you into specific static site generators and break the lightweight markup spec.
I’m not calling out the author specifically for having gone down this route—I had too—but we as a community need to start choosing the right tool for the job and Markdown ain’t it.
But in the end the post endorses Nix in lieu of more complex tools (Kubernetes, Heroku/Dokku), so it’s okay.
I mean, markdown is suitable for “real content”, but I needed to extend it as things got more complicated. I assume I’d have to do this with any other template language too.
I do think it’s a valid criticism of Markdown that it doesn’t have clean, well-defined extension points. E.g. reStructuredText / LaTeX / HTML all allow you to add some additional tag-like things without leaving the format, to an extent that Markdown doesn’t.
It always needs extensions once you try to write a blog or a book… or even a semi-fancy README. The base language doesn’t give you enough for these tasks, and the extensions collide with other implementations pretty quickly because of how spartan the basics are.
I’ve been flailing a bit on it as a ~project, but I’ve been trying to write about this rough part of the map lately. I may poke some of the people in this subthread when I get the next post sorted…
Resisting the urge to vomit much context here, but a little: over the past ~decade, I’ve had multiple projects drag me through the pain of trying to automate things that deal with soft/idiomatic use of presentational markup.
Last year I started prototyping a ~toolchain for documentation single-sourcing on top of a very flexible markup language I’d wanted to try out for a while, D★Mark.
Both building it and figuring out how to write about it have made me wrestle with markup a lot. I have a nagging self-doubt that they’re all trite observations that everyone else knows, but it’s helped me make sense of a lot of things that felt unrelated. TLDR: everything sucks (at least, outside of narrow use-cases) but I have more empathy as I get a better handle on why :)
I use a custom markup language that is an unholy mix of Markdown, Org Mode and my own custom quirks but the messages aren’t stored in that format, but in the rendered HTML. That way, I’m not stuck with any one Markdown (or other) format. And it’s from the HTML format that I re-render the output for gopher and Gemini.
To each his own at the end of the day, but sticking to a spec takes a lot of burden off of yourself and lets you participate in a larger community. I’m curious what was missing from Org Mode for you? It seems it lacks some of the features of AsciiDoc or reStructuredText.
My main point was not to write custom markup languages, but rather: use whatever markup language you want, but don’t store the blog post in said markup language; store the final HTML output.
As to your question, my custom markup was inspired by Org Mode[1] (for the block-level elements) and Markdown[2] (more for the inline elements). And I wasn’t trying to use a format to replace HTML, just to make it easier to write HTML[3].
I am also able to gear my custom markup to what I write. I know of no other markup language (admittedly, I don’t know many) that allows one to mark up acronyms. Or email addresses. The table format I came up with is (in my opinion) lightweight enough to be tolerable for generating an HTML table (again, just to support what I need). I’ve also included some bits from TeX (using two grave accents (ASCII 96) to get real double quotes, for example), as well as other characters (like ‘(C)’, which will turn into the Unicode copyright character). And a way to redact information (as can be seen here).
Standards are good but they fall short in supporting what I need. And there is no way I would want to force my blogging quirks onto other people.
[1] I don’t use Emacs, nor any other tool that supports it.
[2] There are too many different Markdown standards to pick just one.
[3] Which is a point I think many people miss about Markdown: it wasn’t developed to replace HTML, but to make it easier for John Gruber to write his blog posts, which is why you can type HTML in his version of Markdown.
Interesting! Do you edit the HTML directly or re-render it after changes?
If I need to update an entry after it’s posted, I will edit the HTML directly.
Cool talk. Are there any more details about the JSON Feed automation thingy? That seems interesting, but I don’t know how it works. Did you also build the tools to read and act upon new stuff there? Third-party? Something else?
I built it myself. I’ll write an article on it after I rewrite part of it to be more modern. I made a mistake with its design and need to rethink a lot of it.
Cool, looking forward to it.
I’m also in the process of RiiR-ing my site, with a somewhat similar design, and I want to add “companions” like you and Cool Bear from fasterthanlime. Your explanation of what you did was helpful. I was planning on doing a markdown extension, but I’m going to skip that now.
iFrame works :)
(and now I feel really bad when looking at the header, but maybe it’s a fair warning to regular viewers of your blog)
It’s actually me talking to Hacker News readers. They get very pissy when you have “irrelevant” images as if I have committed a grave sin that will curse my family for generations (joke’s on them, I’m never having kids).
Yeah, but I did also comment something along the lines of having problems reading the previous entry with my ADHD brain, after you apparently invested so much time in the original talk. This time it worked better for me.
Neat. Making a mental note to try shifting my own blog to Rust as an experiment.
Oddly, on Safari 16.1 (186188.8.131.52.1), the slides are being picked up as the AVIF version, even though AVIF isn’t directly supported in mainline Safari, only the Technology Preview, which leads to them not being displayed (although you can download them individually).
Can you file a radar for that? I can mitigate it by removing the AVIF option from the list, but AVIF saves so much bandwidth.
I think it’s something peculiar to your HTML/CSS/CDN, because I’ve tried to create a test case for the Radar (https://rjp.is/test-avif.html) and it loads my AVIF perfectly for me. Which is a Bit Confusing. But filed anyway, since it shouldn’t break randomly.
Do the Mara pictures show up? I’m using a different AVIF encoder for the slides compared to the one that encoded those.
Ah, yes, two Cadeys and one Mara show up in the call-out things.
Can you link me to how you install the Safari Tech Preview?
You can download the disk images from Apple’s website here: https://developer.apple.com/safari/resources/
I’m not on my Mac currently, but I believe they run an installer to actually install the browser after mounting.
(neat website btw)
I’m having the same problem on iOS 16 if that helps. No tech preview or anything, just the normal iOS 16 Safari.
Also I was halfway through reading this post before I updated, and it didn’t happen on iOS 15.7.
Oh joy. It’s also not happening to me on Safari 16 technical preview on my MacBook. This is gonna be fun lol
Amazingly, I get no debug output to help. I’m going to file a radar and an open plea for help on my blog.
Update: there’s a webkit bug about this. I may have found a bug in iOS 16 Safari. https://bugs.webkit.org/show_bug.cgi?id=245097
Instead of a linked list, why not a hash table?
I don’t know of a Rust hash table that maintains insertion order. I rely on that for generating feeds.
Have you seen the indexmap crate?
I have now!
You could use a LinkedHashMap implementation (I don’t know if there’s one in Rust), but you’re right that you’d end up keeping a linked list anyway.
There’s also the option of a TreeMap with the sort order defined by the creation date of the article; that would at least make direct access to pages a bit faster.
But we’re talking about low volumes of elements kept in your data structure, so O(n) is not that bad, unless you decide to write hundreds of thousands of articles.
In any case, kudos for your blog; I am a frequent reader.
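The TreeMap suggestion maps directly onto `BTreeMap` in Rust’s standard library: key each post by (date, slug) and iteration comes out in publication order for free, with O(log n) direct lookup. A sketch, with made-up posts:

```rust
use std::collections::BTreeMap;

fn main() {
    // Keys sort lexicographically, so ISO 8601 dates order correctly;
    // the slug breaks ties between same-day posts.
    let mut posts: BTreeMap<(&str, &str), &str> = BTreeMap::new();
    posts.insert(("2022-09-15", "site-update"), "Site Update");
    posts.insert(("2021-01-02", "hello"), "Hello");

    // Iterates oldest-first; reverse it when building a newest-first feed.
    for ((date, slug), title) in &posts {
        println!("{date} /{slug} {title}");
    }
}
```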
Speaking of JSONFeed, have you seen actually simple syndication (ass)?
It’s the simplest thing that can work.
It looks like if you want a full-content feed from ASS you need to fetch the data from the URI and somehow render it?
The advantage of Atom and JSON Feed is that the content is actually in the feed itself, so the reader client doesn’t need to implement any extra steps to get it.
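For illustration, a minimal JSON Feed document with the post body carried inline in `content_html` (all URLs and values here are placeholders):

```json
{
  "version": "https://jsonfeed.org/version/1.1",
  "title": "Example Blog",
  "items": [
    {
      "id": "https://example.com/posts/hello",
      "url": "https://example.com/posts/hello",
      "content_html": "<p>The full post body ships in the feed itself.</p>"
    }
  ]
}
```

A reader can render the entry straight from the feed, with no second fetch of the page.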
I have now, I’ll have to think if I can use that.
Thanks for sharing yet another article in the long list of good articles.
Since you mention being price sensitive in it, and at the same time you mention:
I set up a Kubernetes cluster with Digital Ocean.
Did you end up using their managed offering, or setting up your own on top of it?
At the time I set up the Kubernetes cluster, I was no longer price sensitive. I used their managed K8s thing.