At first I thought “Oh it’s server-side includes again”, but if it lives up to its description then it’s actually significantly more powerful and made by someone with a good grasp of prior art.
What is the point of making a fancy website and a readme if I still have to check the source to see what it does?
This is an honest question. It is effectively worse than just linking to a source code browser.
The description is vague and ambiguous.
Thanks for the feedback. How can I improve “Use CSS selectors to find and replace elements on pages with content from other sources.”? Or the rest of the readme… suggestions for how to make it clearer welcome!
By describing what this software does and how it does it, rather than what a user does with it. The difference might be subtle but I still don’t know what this actually does.
Does it replace the contents of HTML elements at build time, prior to upload?
Does it include a server component to update the contents through, say, a script, image, or iframe?
A quick usage sample on the front page, with just the relevant piece of code and without the boilerplate, would help.
I opened the example and I have no clue what this does. I could spend a long time reading through the source of the whole project, but I am not a Go programmer, nor do I think that is a reasonable requirement for using a program.
I believe this would be a good description:
Stitcherd can automatically modify the element tree of a page before serving it to users. You can tell it where to insert new content using CSS selectors.
For example, if you want to be evil, inject a huge animated banner just before the <main> element of every page you serve.
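That evil-banner example could be sketched in Go. This is just a toy illustration of the idea using a string splice; stitcherd itself matches CSS selectors against a parsed DOM, so the function name and approach here are my own stand-ins:

```go
package main

import (
	"fmt"
	"strings"
)

// injectBefore inserts snippet immediately before the first occurrence of
// marker in doc. A real implementation would match elements structurally
// via CSS selectors; this string-based version only illustrates the idea.
func injectBefore(doc, marker, snippet string) string {
	i := strings.Index(doc, marker)
	if i < 0 {
		return doc // target not found: leave the page untouched
	}
	return doc[:i] + snippet + doc[i:]
}

func main() {
	page := "<body><main><p>Hello</p></main></body>"
	banner := `<div class="banner">BUY NOW</div>`
	fmt.Println(injectBefore(page, "<main>", banner))
	// → <body><div class="banner">BUY NOW</div><main><p>Hello</p></main></body>
}
```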
Thank you, that sounds awesome!
And done (with credit).
I had hoped the name (stitch == sew… the d for daemon), plus a bit of supporting copy showing HOW it works, would be enough of a hint; alas, clearly it’s not.
Let me see if I can explain it… but it’s wordy. Hopefully this is a bit clearer.
It’s a server that reads source content (typically a local static file, but it could come from a remote source), fetches one or more pieces of remote “dynamic” content, and injects them into the source document, using a CSS selector to find the place where each piece should be inserted. These can be nested. The remote dynamic content can be remote HTML, or the output of a Go template that can itself process remote HTML and/or remote JSON data.
The resulting content is then served to the client. You can have multiple routes, each with a different set of content to be replaced (and indeed a different source document).
In other words, it’s a server that does server-side includes, but with CSS/DOM manipulation, and so doesn’t require special directives in the source documents.
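A minimal stdlib-only sketch of that flow: fetch a remote fragment, splice it into a source document at the target element, and serve the result. The `httptest` upstream and the string-based `stitch` helper are stand-ins of my own; stitcherd actually matches CSS selectors against a parsed DOM and fetches real remote sources:

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"net/http/httptest"
	"strings"
)

// stitch inserts fragment just inside the element whose opening tag is
// selectorOpen. A string search stands in for real CSS-selector matching.
func stitch(source, selectorOpen, fragment string) string {
	i := strings.Index(source, selectorOpen)
	if i < 0 {
		return source
	}
	j := i + len(selectorOpen)
	return source[:j] + fragment + source[j:]
}

func main() {
	// A fake "remote dynamic content" endpoint standing in for the upstream.
	upstream := httptest.NewServer(http.HandlerFunc(
		func(w http.ResponseWriter, r *http.Request) {
			io.WriteString(w, "<ul><li>first comment</li></ul>")
		}))
	defer upstream.Close()

	source := `<html><body><div id="comments"></div></body></html>`

	// Fetch the remote fragment...
	resp, err := http.Get(upstream.URL)
	if err != nil {
		panic(err)
	}
	fragment, _ := io.ReadAll(resp.Body)
	resp.Body.Close()

	// ...and inject it into the source document before serving.
	fmt.Println(stitch(source, `<div id="comments">`, string(fragment)))
}
```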
A couple of use cases:
Yes, it’s a server that has to be hosted somewhere, and you need to decide whether that’s okay for your use case or not. But then so are most of the alternatives, except JS of course, though same-origin policy et al. make that harder (IMO) to use than this.
It could probably be used for styling, too. Diazo - a “new” style/theme solution for zope/plone (one of the very first object db/app servers) uses a similar technique:
Note that Diazo “compiles to” XSLT and can be deployed directly in Varnish, nginx, or other edge-side caches/proxies.
I remember Zope/Plone :) Looks interesting, I’ll give this a deeper look in the morning.
Did I accidentally start a trend for automated HTML manipulation tools? ;)
This sounds like a great idea. Element tree editing is much more powerful than Server/Edge Side Includes, which can only insert a piece of content in a fixed place, and CSS selectors make it reasonably simple to tell the tool what to do with the tree.
I’d be curious to see stress test results. I believe with modern libraries and hardware, automated HTML manipulation is not much slower than template rendering, but it would be nice to see a proof.
:)… In my limited, very unscientific testing so far, it looks like one or two insertions are (much) faster than executing Go templates. But the templates are probably of typical complexity, so I expect them to be slower than a straight find(”#elementid”).replace() call.
Note, my numbers right now are end to end, so they include the remote fetch time.
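For anyone curious, the shape of such a comparison can be sketched with `testing.Benchmark` from a plain `main`. This is not stitcherd’s actual benchmark: the template and page below are trivially small stand-ins, the `splice` function is a string-based substitute for a real DOM `find().replace()`, and remote fetch time is excluded:

```go
package main

import (
	"bytes"
	"fmt"
	"html/template"
	"strings"
	"testing"
)

// A tiny template and page; real-world ones would be far larger.
var tmpl = template.Must(template.New("page").Parse(
	`<body><div id="c">{{.}}</div></body>`))

const page = `<body><div id="c"></div></body>`

// renderTemplate produces the page via Go template execution.
func renderTemplate(content string) string {
	var buf bytes.Buffer
	tmpl.Execute(&buf, template.HTML(content)) // template.HTML: emit unescaped
	return buf.String()
}

// splice produces the same page via a straight find/replace, standing in
// for a find("#c").replace() style DOM operation.
func splice(content string) string {
	return strings.Replace(page, `<div id="c">`, `<div id="c">`+content, 1)
}

func main() {
	content := "<p>hello</p>"
	fmt.Println("template:", testing.Benchmark(func(b *testing.B) {
		for i := 0; i < b.N; i++ {
			renderTemplate(content)
		}
	}))
	fmt.Println("splice:  ", testing.Benchmark(func(b *testing.B) {
		for i := 0; i < b.N; i++ {
			splice(content)
		}
	}))
}
```

Both paths produce identical output, so the benchmark compares like for like.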
I’m all for less-complicated site hosting, i.e. no database, just content in files converted to markup and served.
But I’m also all for making the workflow less complex. As the author points out, including dynamic content in otherwise static files is not new, and there is IMO a huge benefit to having some separation of concerns.
Say you have, for example, a tool that converts your source material into static HTML (or SHTML, HTML with ESI tags, whatever) files, and either pushes them back into a different branch of a repo, or a separate repo, or directly to where the content is served, or whatever; then a pretty standard web server/cache handles the inclusion of said “dynamic” content. When (not if) something goes wrong, you’re going to have a much easier time working out what, because each step is doing just one thing.
Something like stitcherd has been floating around my head for a few years now, and I finally sat down and cranked out some code. It’s early days, but I am no longer totally embarrassed by the code and would like to start getting some feedback.
This looks super useful. I was in the market for something like this very recently for an idea, but what that project was eludes me at the moment. Such is the way of these things.
What are you using to populate the bot knowledge base?
I am currently using https://github.com/x-way/crawlerdetect (which tracks https://github.com/JayBizzle/Crawler-Detect) for the list of user agents of known bots. Right now the only policy is rate limiting them, but an option for outright rejection might be useful to some.
I also took a quick look at datadome.co and would not rule out supporting that at some point, for something that might be more accurate.
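The user-agent check can be sketched like this. To be clear, this is a toy stand-in of my own, not crawlerdetect’s API: the real package matches against a large, maintained pattern list, whereas the three substrings below are purely illustrative:

```go
package main

import (
	"fmt"
	"strings"
)

// botMarkers is a tiny illustrative list; a real deployment would rely on
// a maintained pattern set like the one crawlerdetect tracks.
var botMarkers = []string{"Googlebot", "bingbot", "crawler"}

// looksLikeBot reports whether the user agent contains any known bot marker
// (case-insensitively). Requests matching it could then be rate limited
// or rejected outright.
func looksLikeBot(userAgent string) bool {
	ua := strings.ToLower(userAgent)
	for _, m := range botMarkers {
		if strings.Contains(ua, strings.ToLower(m)) {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(looksLikeBot("Mozilla/5.0 (compatible; Googlebot/2.1)")) // true
	fmt.Println(looksLikeBot("Mozilla/5.0 (Windows NT 10.0)"))           // false
}
```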