1. 1

    For my own website I have a more complex system. By default it renders HTML, but I also generate ASCII and PDF versions.

    Another important aspect: my RSS feed contains my full articles. I also use a slightly more involved script that generates the RSS from the rendered files, rather than having my static site generator produce it directly.

    And a last but nice touch: I use a generic build system, Shake (https://hackage.haskell.org/package/shake), which is considerably smarter and faster than all the specialized static site generators I used before.
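    For readers who haven't seen Shake: a build script is ordinary Haskell — you declare the files you want and a rule for producing each one, and Shake tracks the dependencies. A minimal sketch of such a build file (the posts/*.org layout and the pandoc step are hypothetical stand-ins, not the actual rules of the site above):

    ```haskell
    -- Minimal Shake build sketch. The posts/*.org layout and the use
    -- of pandoc are hypothetical, not the site's actual rules.
    import Development.Shake
    import Development.Shake.FilePath

    main :: IO ()
    main = shakeArgs shakeOptions{shakeFiles = "_build"} $ do
      want ["_build/index.html"]

      -- one rule produces every HTML page from its Org source
      "_build/*.html" %> \out -> do
        let src = "posts" </> takeBaseName out <.> "org"
        need [src]            -- rebuild only when the source changes
        cmd_ "pandoc" ["-o", out, src]
    ```

    Because the rules form a dependency graph, adding ASCII or PDF targets is just more rules over the same sources.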

    That being said, I fully agree with the spirit of this blog post. We should self-host as much as we can, because you can always be locked out of big centralized products. But instead of just keeping things in my personal area, I duplicate them: I have GitHub and self-hosted Gitea, I use both Espial and Pinboard, I use Espial notes to tweet, etc…

    If you are curious, here is a post about it: https://her.esy.fun/posts/0004-how-i-internet/index.html

    I haven’t written about how I use shake yet.

    1. 8

      I built Org-roam, which I had initially intended to be a simple layer on top of Org-mode that added backlinks to regular Org-mode files. A bunch of tools, such as org-roam-bibtex and org-roam-server, have since been built by community users to work with citations and provide a graphical overview of the notes. My notes are automatically published via netlify here.

      Org-mode is unparalleled as a plain-text system: beginners can use it as a simple outliner, and power users can build complex workflows with it (GTD, literate programming, etc.). It’s simply a gift that keeps on giving.

      1. 2

        Thanks, jethro, for Org-roam. I’ve been using it daily for a few months now and I really love it.

        As far as I can tell, this is the ultimate note taking tool.

        Along with the rest of Org-mode (org-agenda, org-capture, etc…), it has been a life-changing tool for me.

        1. 3

          you’re welcome!

        2. 1

          After hearing about org-mode and org-agenda for a while and then org-roam yesterday, I’ve finally decided to dive into Emacs. I’m starting from the basics with a vanilla installation and reading through a few people’s config files and the docs before I attempt to use org-roam though; I’ve heard it’s a challenge to work with.

          My notes are automatically published via netlify here.

          That’s incredibly similar to what someone sent me yesterday, which was the final straw that convinced me to try Emacs. Is the output a template from a specific package or something you’ve created yourself?

          1. 3

            As a new user I was glad to start using Emacs with a configuration framework like Doom Emacs or Spacemacs. In fact, after a few years of getting used to Emacs, I now believe that Doom Emacs does a better job of putting together an Emacs configuration than I could ever do myself.

            That being said, let me just say that diving into Org-mode was probably one of the best uses of my time, and I hope you’ll enjoy it as much as I do.

            1. 2

              I’m starting from the basics with a vanilla installation and reading through a few people’s config files and the docs before I attempt to use org-roam though; I’ve heard it’s a challenge to work with.

              It’s hard if you fight it, easier if you are ready to learn – seeing that you have the right stance, you’ll probably be fine.

              1. 1

                I’m starting from the basics with a vanilla installation and reading through a few people’s config files and the docs before I attempt to use org-roam though; I’ve heard it’s a challenge to work with.

                Yes, it is. Emacs is a complex beast, and so are Org-mode and Org-roam; it really does take some time to get used to them. Maybe this guide can help you: https://github.com/nobiot/Zero-to-Emacs-and-Org-roam

                That’s incredibly similar to what someone sent me yesterday, which was the final straw that convinced me to try Emacs. Is the output a template from a specific package or something you’ve created yourself?

                That’s my Hugo theme, cortex, which the website in that link modified for use directly with org-publish. I have since taken some of the modifications (JavaScript, mostly) and folded them back into my theme :)

            1. 31

              I have recently gone down the rabbit-hole of note-taking apps, since none of them seem to meet my criteria:

              • works offline with my local files (ideally in a human-readable format, if not compliant Markdown)
              • excellent math support (see my prosemirror-math extension for an example)
              • wysiwym (this is crucial for documents with lots of math)
              • support for tags + bidirectional links, for easy categorization and interlinking between notes
              • citations
              • custom css themes
              • free and open source, for extensibility

              Here’s a summary of the different workflows I’ve tried over the years:

              • Initially, I simply took notes on paper. Writing math was easy, but it was too difficult to stay organized or to make changes later.
              • Next, I switched to a system where I’d take rough notes on paper, then polish them into a nice LaTeX document with Overleaf that I could reference later. This works out well for some things, but when I’m writing LaTeX I feel too pressured to make everything look pretty. This also isn’t ideal because of the lack of links, the difficulty of inserting images, etc.
              • For a while I used Jupyter notebooks, since they enable a nice mix of code / math / markdown. Eventually it just grew too cumbersome to start a notebook server every time I wanted to write. (However, these days I think notebooks are built into VS Code, so maybe it’s better now.)
              • Next, I started using Typora for rough notes, and I’d eventually synthesize the most important ones into a LaTeX document with Overleaf. This was fine, but I had a lot of trouble with organization. Typora doesn’t support anything like tags / wikilinks.
              • Next, I started using OneNote to keep a daily work journal, in linear order. If I’ve forgotten something, I can usually remember what else I had going on the same month/week, so having everything in a linear order really helped when I wanted to search over my past notes. It also helps remind me of my thought process when I go on a long depth-first tangent.
              • Unfortunately, OneNote has terrible math support. So at this point, my notes were spread between paper, OneNote, Typora, and Overleaf. I had no idea where to look for the most “up to date” version of anything.

              When the pandemic started, I found myself with a lot of free time, so I decided it was time to make my own note-taking app called Noteworthy! I’ve been using it exclusively for my notes the past 3-4 months and it’s almost ready for public release!

              In the process of making Noteworthy I’ve been inspired by all the other great note-taking apps out there. Here are just a few of my favorites:

              • Typora, a nicely polished Markdown editor – has the best support for math input I’ve seen
              • Obsidian, a split-pane Markdown editor focused on bidirectional linking
              • Zettlr, a Markdown editor focused on publishing / academics
              • RemNote, which converts your notes into spaced-repetition flash cards, similar to Anki
              • foambubble, a family of VS Code extensions to help search + organize your notes
              • logseq, a GitHub-hosted alternative to Roam
              • Neuron Notes, a neat Zettelkasten system written in Haskell, based on GitHub repos
              • R Studio, which includes an awesome Markdown publishing experience, similar to Jupyter Notebooks
              • (coming soon) Athens Research, an open-source alternative to Roam
              • (coming soon, made by me) Noteworthy, which aims to be an extensible, open-source alternative to Obsidian and Typora, with a focus on wikilinks and excellent math support

              Some honorable mentions:

              • Dendron, a hierarchical note-taking editor based on VS Code
              • kb, a minimal text-oriented command-line note manager
              • Notebag, a minimal Markdown app with tag support
              1. 4

                I would add to your list Joplin, which I’ve had very good experiences with. I think it ticks a lot of your boxes, and it also has quite good mobile support which can come in handy.

                1. 4

                  I can’t believe I left out Joplin! It’s multiplatform (mobile, PC, terminal) and has more features than practically any other note-taking app out there.

                  It’s been a while since I last tried Joplin, but I think I remember choosing not to use it for my own notes since it uses split-pane (code+rendered) editing instead of wysiwym, which isn’t ideal for notes with lots of math. I believe there’s also no support for bidirectional links, but I could be misremembering.

                  1. 4

                    I currently use Joplin, which works decently well for me. I have a few complaints about it: it’s a heavy Electron app that doesn’t exist in my distro’s package manager, so I have to build it from source. I don’t like the distinction it makes between “notes” and “notebooks” - I wish that notes could have arbitrarily-deeply-nested children, like some other notetaking software I’ve used has had. I do appreciate that it has a mobile app, but I’ve run into a few usability nits inputting text in that mobile app. And I wish there was a browser version.

                    This Noteworthy project looks interesting, and if it can solve some of these problems better than Joplin, while still doing everything that I do like out of Joplin, I would consider switching to it.

                    1. 4

                      I wish that notes could have arbitrarily-deeply-nested children

                      A quick skim of the Joplin site didn’t really make it clear what a “notebook” is – are you just talking about being able to easily define hierarchies of notes? Or do you mean a full-on, infinitely-nested-list style app like Athens / Logseq / Roam, where every list bullet is considered to be a separate note? And where all the notes are connected as a big graph?

                      With Noteworthy, the goal is not to impose too much structure on your notes – you shouldn’t have to change how you think just to work with a new app. I decided that an approach based on tags would give the most freedom, similar to how Obsidian does it.

                      • (done!) Include tags anywhere using [[wikilink]], #tag, or @[citation] syntax.
                      • (done!) Easily search for all documents referencing a specific tag.
                      • (done!) By default, filenames are tags and tags are filenames! Each file can additionally define a list of aliases, which allows for e.g. an abbreviation and its expansion to point to the same file.
                      • (planned) Define your own tag hierarchies for easier search / disambiguation.
                      • (planned) Use logical operations (and/or/etc.) in tag searches.
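
                      To give a sense of how lightweight that kind of scanning can be, here is a toy #tag extractor in a few lines of Haskell (a sketch only, not Noteworthy’s actual parser):

                      ```haskell
                      import Data.Char (isAlphaNum)

                      -- Toy #tag extractor; a sketch, not Noteworthy's actual parser.
                      hashtags :: String -> [String]
                      hashtags text =
                        [ tag
                        | '#' : rest <- words text            -- words that start with '#'
                        , let tag = takeWhile tagChar rest    -- drop trailing punctuation
                        , not (null tag)
                        ]
                        where
                          tagChar c = isAlphaNum c || c `elem` "-_"
                      ```

                      For example, `hashtags "see #math and #type-theory."` yields `["math","type-theory"]`.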

                      I’d like to experiment with a Datalog-esque syntax for tag search as well. Roam and Athens both use Datalog internally to facilitate searches, and I believe it has worked out well for them. It would be super cool to expose some kind of tagging system based on predicate logic to the user.

                      This Noteworthy project looks interesting, and if it can solve some of these problems better than Joplin, while still doing everything that I do like out of Joplin, I would consider switching to it.

                      What would you say are your most-used features? Regarding Electron vs native vs browser,

                      • Noteworthy is an Electron app, and I’m trying to keep it as lightweight as possible. This is mostly by necessity, since projects like KaTeX and ProseMirror have no native counterparts, afaik. I’ll happily re-write it if an alternative emerges – I’ve been following React-Native and Rust GUI for that reason. I also plan to delegate some of the heavy lifting to Rust programs like ripgrep.

                      • Browser version – I can definitely imagine Noteworthy running in a local server, accessible through a browser.

                      If all goes as planned, there will be a public beta of Noteworthy in a couple months, where I’ll try to gather feedback about what’s working / what’s missing. Keep an eye out :)

                      1. 1

                        In Joplin, a notebook is a collection of notes, which map to single Markdown files. Notebooks can be arbitrarily nested, but a notebook can only contain notes, not have raw text associated with it. So there are effectively two types of node in the tree structure it exposes to you. I would prefer it if you could have a tree of notes, all of which contain text, and may or may not have any kind of nesting under them.

                    2. 1

                      IIRC, they have an editor that lets you edit the markdown as it is rendered (one pane). I think this feature is still experimental though.

                      I’m not sure what you mean by bidirectional? In the sense that linking from note A to note B also creates a back-link in note B to note A? That’s not a thing in Joplin to my knowledge.

                      I’ve got the skeleton of a graph viewer inspired by Obsidian which talks to Joplin over its REST API, but it’s not currently working and I haven’t had the time to finish a PoC yet. I’m far enough into it to determine that creating such a companion app is definitely do-able – Joplin’s API is quite nice.

                  2. 1

                    I think org-roam fills all your checkboxes.

                    The author answered in this thread here

                    1. 1

                      I haven’t personally tried org-roam due to a phobia of emacs, but it looks like a great alternative to the other roam-likes. One thing that’s not clear – does it support some kind of instant math preview?

                      1. 2

                        Due to how ridiculously extensible emacs is, you can be certain that the answer will be “yes, with some elisp”.

                        1. 1

                          and if you want a packaged solution, org-fragtog :)

                    2. 1

                      This is a great resource, thank you.

                    1. 14

                      Why did Haskell’s popularity wane so sharply?

                      What is the source for the claim that Haskell’s popularity is declining so sharply? Is there really some objective evidence for this, I mean numbers, statistics, etc.?

                      It’s anecdotal and just my personal impression from observing the Haskell subreddit for 10 years, but I have never seen so many Haskell resources, conferences, books, and even job postings as now. I don’t at all have the impression that the language is dying. It has accumulated cruft, has some inconsistencies, and is struggling to get a new standard proposal out, but other than that I have the impression that it attracts quite a few people who come up with new ideas.

                      1. 2

                        Haskell had glory days when SPJ/Marlow were traveling to various conferences talking about the new language features. Milewski’s posts, LYAH, Parsec, STM, and lenses are from that era. The high-brow crowd was of course discussing lenses. Sure, these things drove adoption, and there’s a little ecosystem for the people who jumped on the Haskell bandwagon back then.

                        What innovation has it had over the last 5 years? The community couldn’t agree on how to implement any of the features of a respectable dependent-type system, so they invented a bunch of mutually incompatible flags, and destroyed the language. Thanks to the recent hacking, GHC is plastered with band-aids.

                        It’s true that you can’t explain these things with some points on a pretty graph, but that doesn’t make it anecdotal. Look at the commits going into ghc/ghc, and look at the activity on the bread-and-butter Haskell projects: lens, trifecta, cloud-haskell. Maintenance mode. Where are the bold new projects?

                        1. 23

                          These assertions about Haskell are all simply false. There are plenty of problems with Haskell, we don’t need to add ones that aren’t true.

                          The community couldn’t agree on how to implement any of the features of a respectable dependent-type system, so they invented a bunch of mutually incompatible flags, and destroyed the language. Thanks to the recent hacking, GHC is plastered with band-aids

                          The reason GHC didn’t just turn on all flags by default is that many of them are mutually incompatible, so your individual .hs file has to pick a compatible set of language features it wants to work with.

                          You keep saying this in multiple places, but it’s not true. Virtually no GHC extensions are incompatible with one another. You have to work hard to find pairs that don’t get along and they involve extremely rarely used extensions that serve no purpose anymore.

                          The community is also not divided on how to do dependent types. We don’t have two camps and two proposals to disagree about. The situation is that people are working together to figure out how to make them happen. GHC also doesn’t contain bad hacks for dependent types; avoiding this is exactly why building out dependent types is taking time.

                          That being said, dependent types work today with singletons. I use them extensively. It is a revolution in programming. It’s the biggest step forward in programming that I’ve seen in 20 years and I can’t imagine life without them anymore, even in their current state.
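
                          For anyone wondering what this style of Haskell looks like, here is the smallest classic example, using only promoted data kinds and GADTs (no singletons machinery; an illustrative sketch, not code from my projects):

                          ```haskell
                          {-# LANGUAGE DataKinds, GADTs, KindSignatures #-}

                          -- Type-level naturals, promoted by DataKinds.
                          data Nat = Z | S Nat

                          -- A list whose length is part of its type.
                          data Vec (n :: Nat) a where
                            VNil  :: Vec 'Z a
                            VCons :: a -> Vec n a -> Vec ('S n) a

                          -- A total head: passing an empty vector is a type error,
                          -- so the "head of empty list" crash cannot even compile.
                          vhead :: Vec ('S n) a -> a
                          vhead (VCons x _) = x
                          ```

                          The singletons library pushes this much further, letting term-level values be mirrored and computed with at the type level.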

                          Look at the commits going into ghc/ghc, and look at the activity on the bread-and-butter Haskell projects: lens, trifecta, cloud-haskell. Maintenance mode. Where are the bold new projects?

                          Haskell is way more popular today than it was 5 years ago, and 10 years ago, and 20 years ago. GHC development is going strong; for example, we just got linear types, a huge step forward. There’s been significant money lately from places like cryptocurrency startups. For the first time I regularly see Haskell jobs advertised. What is true is that the percentage of Haskell questions on Stack Overflow has fallen, but not the amount: the size of Stack Overflow exploded.

                          Even the community is much stronger than it was 5 years ago. We didn’t have Haskell Weekly news for example. Just this year a category theory course was taught at MIT in Haskell making both topics far more accessible.

                          Look at the commits going into ghc/ghc

                          Let’s look. Just in the past 4 years we got: linear types, a new low-latency GC, compact regions, deriving strategies & deriving via, much more flexible kinds, all sorts of amazing new plugins (type plugins, source plugins, etc.) that extend the language and provide reliable tooling that was impossible 5 years ago, much better partial type signatures, visible type applications (both at the term level and the type level), injective type families, type in type, strict by default mode. And much more!
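
                          To make one of those concrete: deriving-via lets a newtype borrow its instances from another representationally-equal type instead of hand-writing them (a small illustrative sketch, not from any particular package):

                          ```haskell
                          {-# LANGUAGE DerivingVia #-}

                          import Data.Monoid (Sum (..))

                          -- DerivingVia: Meters reuses the Semigroup/Monoid instances
                          -- of Sum Int (addition, with 0 as identity), zero boilerplate.
                          newtype Meters = Meters Int
                            deriving (Show, Eq)
                            deriving (Semigroup, Monoid) via Sum Int
                          ```

                          So `Meters 2 <> Meters 3` is `Meters 5`, and `mempty` is `Meters 0`.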

                          This totally changed Haskell. I don’t write Haskell the way I did 5 years ago, virtually nothing I do would work back then.

                          It’s not just GHC. Tooling is amazing compared to what we had in the past. Just this year we got HLS so that Haskell works beautifully in all sorts of editors now from Emacs, to vscode, to vim, etc.

                          look at the activity on the bread-and-butter Haskell projects: lens, trifecta, cloud-haskell. Maintenance mode. Where are the bold new projects?

                          lens is pretty complete as it is and is just being slowly polished. Haskell packages like lens are based on a mathematical theory and that theory was played out. That’s the beauty of Haskell, we don’t need to keep adding to lens.

                          I would never use trifecta today, megaparsec is way better. It’s seen a huge amount of development in the past 5 years.

                          There are plenty of awesome Haskell packages. Servant for example for the web. Persistent for databases. miso for the frontend. 5 years ago I couldn’t dream of deploying a server and frontend that have a type-checked API. For bold new ideas look at all the work going into neural network libraries that provide type safety.

                          I’m no fanboy. Haskell has plenty of issues. But it doesn’t have the issues you mentioned.

                          1. 1

                            Right. Most of my Haskell experience is dated: from over five years ago, and the codebase is proprietary, so there are few specifics I can remember. I’m definitely not the best person to write on the subject. In any case, I’ve rewritten the Haskell section of the article, with more details. Thanks.

                            1. 6

                              By my definition, a “dying language” is one that is losing popularity or losing interest. For Haskell this is absolutely not clear. Also, your section is about “why Haskell is bad”, not “why it is dying”. People do not talk about Haskell as they used to, in my opinion, but I still see a lot of activity in the Haskell ecosystem. And it doesn’t really look like it’s dying.

                              I think it is easier to agree about Clojure dying looking at Google trends for example: https://trends.google.com/trends/explore?cat=5&date=all&geo=US&q=haskell,clojure

                              But Haskell looks more like a language that will never die, yet will probably never become mainstream either.

                              1. 5

                                I’m definitely not the best person to write on the subject. In any case, I’ve rewritten the Haskell section of the article, with more details. Thanks.

                                Great! Although there are still many claims that are factually untrue.

                                I think this is just a sign that you’ve been away from the community for many years now, and don’t see movement on the things that were hot 5-10 years ago. Like “The high-brow crowd was obsessed with transactional memory, parser combinators, and lenses.” Well, that’s over. We figured out lenses and have great libraries; we figured out parser combinators and have great libraries. The problems people are tackling now for those packages are engineering problems, not so much science problems. Like: how do we have lenses and good type errors? And there, we’ve had awesome progress lately with custom error messages (https://kodimensional.dev/type-errors) that you would not have seen 5 years ago.

                                The science moved on to other problems.

                                The issue is that different extensions interact in subtle ways to produce bugs, and it’s very difficult to tell if a new language extension will play well with the others (it often doesn’t, until all the bugs are squashed, which can take a few years).

                                This still isn’t true at all. As for the release cadence of GHC, again, things have advanced amazingly. New test environments and investments have resulted in regular GHC releases. We see several per year now!

                                In Atom, the Haskell addon was terrible, and even today, in VSCode, the Haskell extension is among the most buggy language plugins.

                                That was true a year ago, it is not true today. HLS merged all efforts into a single cross-editor package that works beautifully. All the basic IDE functionality you would want is a solved problem now, the community is moving on to fun things like code transformations.

                                Then there’s Liquid Haskell that allows you to pepper your Haskell code with invariants that it will check using Z3. Unfortunately, it is very limited in what it can do: good luck checking your monadic combinator library with LH

                                Not true for about 3 years. For example: https://github.com/ucsd-progsys/liquidhaskell/blob/26fe1c3855706d7e87e4811a6c4d963d8d10928c/tests/pos/ReWrite7.hs

                                The worst case plays out as follows: the typechecker hangs or crashes, and you’re on the issue tracker searching for the issue; if you’re lucky, you’ll find a bug filed using 50~60% of the language extensions you used in your program, and you’re not sure if it’s the same issue; you file a new issue. In either case, your work has been halted.

                                In 15 years of using Haskell I have never run into anything like this. It is not the common experience. My code is extremely heavy and uses many features only available in the latest compiler, with 20-30 extensions enabled. Yet this just doesn’t happen.

                                There is almost zero documentation on language extensions. Hell, you can’t even find the list of available language extensions with some description on any wiki.

                                Every single version of GHC has come with a list of the extensions available, all of which have a description, most of which have code: https://downloads.haskell.org/~ghc/latest/docs/html/users_guide/glasgow_exts.html You can link to the manual that neatly explains everything, rather than to the git repo.

                                Looking at the big picture: first, this is a poor way to do software development; as the number of language extensions increase, your testing burden increases exponentially.

                                This is only true if you can’t prove how extensions interact, or more fundamentally, that they don’t interact.

                                Second, the problem of having a good type system is already solved by a simple dependent type theory; you study the core, and every new feature is just a small delta that fits in nicely with the overall model.

                                That’s totally untrue. There is no such general-purpose language today. We have no idea how to build one.

                                As opposed to having to read detailed papers on each new language extension. And yes, there’s a good chance that very few people will be able to understand your code if you’re using some esoteric extensions.

                                Again, that’s just not true. You don’t need to know how the extensions are implemented. I have not read a paper on any of the extensions I use all the time.

                                In summary, language extensions are complicated hacks to compensate for the poverty of Haskell’s type system.

                                That’s just the wrong way to look at language extensions. Haskell adds features with extensions because the design is so good. Other languages extend the language forcing you into some variant of it because their core is too brittle and needs fundamental changes. Haskell’s core is so solid we don’t need to break it.

                                However, PL research has shifted away from Haskell for the most part

                                That’s again totally factually untrue. Just look at Google Scholar, the number of Haskell papers per year is up, not down. The size of the Haskell workshop at ICFP is the same as 5 years ago.

                                Moreover, there are no tools to help you debug the most notorious kind of bug seen in a complicated codebase: memory blowups caused by laziness.

                                Again, that’s not factually true.

                                We have had a heap profiler for two decades, and in the past few years we got ThreadScope to watch processes in real time. We have systematic ways to find such leaks quickly: you just limit the GC to break when leaks happen (https://github.com/ndmitchell/spaceleak). We also got stack traces in the past few years, so we can locate where issues come from. And in the past few years we got Strict and StrictData.
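
                                For context, the canonical laziness leak and its fix fit in two lines: the leak is a chain of unevaluated thunks in the accumulator, and foldl' (or StrictData at the datatype level) forces it at each step.

                                ```haskell
                                import Data.List (foldl')

                                -- foldl builds a chain of (+) thunks as long as the input
                                -- before evaluating anything; foldl' forces the accumulator
                                -- at each step and runs in constant space.
                                leakySum, strictSum :: [Int] -> Int
                                leakySum  = foldl  (+) 0   -- space leak on large inputs
                                strictSum = foldl' (+) 0   -- constant space
                                ```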

                                As for the code examples: I can pick 2 lines of any language out of context and you’ll have no idea what they do.

                                Who cares what every extension does for every example? That’s the whole point! I have literally never looked at a piece of Haskell code and wondered what an extension does. I don’t need to know. GHC tells me when I need to add an extension and it tells me when an extension is unused.

                                How many more language features are missing?

                                Extensions are not missing language features.

                              2. 1

                                GHC also doesn’t contain bad hacks for dependent types, avoiding this is exactly why building out dependent types is taking time.

                                Honestly, I’d much rather prefer a simple core model, like that of HoTT.

                                1. 3

                                  Honestly, I’d much rather prefer a simple core model, like that of HoTT.

                                  I’d love that too! As would everyone!

                                  But the reality is, we don’t know how to do that. We don’t even know how to best represent computations in HoTT. It might be decades before we have a viable programming language. We do have dependent types that work well in Haskell today, that I can deploy to prod, and that prevent countless bugs while making code far easier to write.

                                  1. 1

                                    I think HoTT with computations is “cubical type theory”? It’s very active currently.

                                    As for the dependent types as the backend for advanced type level features, I think it’s what Dotty/scala 3 is about. It’s definitely not the only way to do it, but it’s also not decades away. Idris 2 is also an interesting effort.

                              3. 4

                                Dependent types aren’t that useful for production software, and full blown dependent types are really contrary to the goals of Haskell in a lot of ways. Any language that’s >20 years old (basically 30) is gonna have some band-aids. I’m not convinced that Haskell is waning in any meaningful way except that people don’t hype it as much on here/hn. Less hype and more doing is a good thing, imho.

                                1. 3

                                  Reminds me of the days when people said FP and complete immutability weren’t useful for production software. It is true that there is no decent general-purpose language that implements dependent types, but that’s beside the point.

                                  It’s true, hype is a poor measure.

                                  1. 4

                                    Yeah, that’s an interesting comparison, but I think it’s a totally different situation. Immutability and dependent types are both things you do to make certain guarantees about your code: immutability lets you know that some underlying value won’t change, while dependent types let you make more general statements/proofs of some invariant. The big difference is that immutability is a simplification: you remove complexity by asserting one assumption throughout your code. Dependent types, generally, add complexity: you have to provide proofs of some statement externally, or you have to build the proof of your invariants intrinsically into your constructions. IMHO, that’s a huge difference in the power-to-weight ratio of these two tools. Immutability is really powerful and fairly lightweight; dependent types are not really that powerful and are incredibly heavy. I’m not saying dependent types are worthless. Sometimes you really, really want that formal verification (e.g. compilers, cryptography, etc.). But the vast majority of code doesn’t need it, and you’re just adding complexity, something I think should be avoided in production software.

                                    1. 3

                                      Tl;dr: I have a good amount of experience with dependently typed languages, and I write Haskell for a living. After all of my experience, I have come to the conclusion that dependent types are overhyped.

                                      1. 1

                                        I’ve started writing a post on dependent types. Here’s early draft: https://artagnon.com/articles/dtt

                                      2. 3

                                        What about Ada?

                                1. 1


                                  I think this deserves a word here. I worked with it around 2014-2016. The project has since been passed to Apache.

                                  This was like a “secret weapon” for our team at that time. I worked on a social media analytics product (think a dashboard for Twitter/FB etc…).

                                  With it we were able to provide analysis in real time over a ridiculously large amount of data. It was very hard to configure correctly, but once done, it was incredible. You could click a button and hundreds of aggregations would be computed over tens of millions of big JSON objects in less than 200ms. It was a really cool tool.

                                  Here is a short presentation about it: http://yogsototh.github.io/mkdocs/druid/druid.reveal.html#/

                                  1. 1

                                    Very cool, I was thinking about building something similar.

                                  1. 1

                                    I use org-publish + a few scripts to generate an RSS feed and optimize the size of the website by taking advantage of a full HTML+CSS minimizer. I previously used hakyll, and nanoc. I tried to make a CSS that looks like markdown in a terminal.

                                    my blog: https://her.esy.fun

                                    1. 2

                                      Having tried just about all kinds of static site generators under the sun—from the more mainstream ones like Jekyll and Hugo to more exotic ones like ssg, ox-hugo, org-page, Org publish, org-static-blog, Haunt, and even a custom one written in Haskell—I’m now back to hand-written HTML files + SSI rules (for simple templating), and love the simplicity. The only thing missing right now is an Atom feed. I wonder if I could use GNU M4 for that, like @technomancy does.

                                      Result at https://bandali.eu.org, “sources” at https://git.bandali.eu.org/site.

                                      1. 2

                                        If your page structure is more or less consistent (looks like it is), you can extract metadata from pages with an HTML parser and generate a feed from it. That approach allows some things that are impossible or unwieldy in the traditional paradigm, such as using an arbitrary paragraph for post excerpt, not the first one.

                                        My own generator automates that process: the blog index of soupault.neocities.org/blog is produced by this config. It can dump exported metadata to JSON, which isn’t hard to produce Atom from. I still have to finish a JSONFeed/Atom generator script that’s good enough for public release.

                                        That said, making a custom page to Atom script using an HTML parsing library that supports querying the data with CSS selectors (BeautifulSoup, lambdasoup etc.) isn’t that hard if making it work for anyone else’s site isn’t a goal.
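For illustration, here is a rough sketch of such a page-to-feed extractor using only Python’s stdlib `html.parser` (a CSS-selector library like BeautifulSoup would make the queries much nicer; the tag choices here are assumptions about page structure, not anyone’s actual markup):

```python
# Sketch: pull feed metadata out of rendered HTML pages.
# Stdlib-only so the snippet has no dependencies; assumes posts
# expose a <title> and at least one <p> (hypothetical structure).
from html.parser import HTMLParser

class PostMeta(HTMLParser):
    """Collect the <title> text and the first <p> as an excerpt."""
    def __init__(self):
        super().__init__()
        self._in_title = False
        self._in_first_p = False
        self.title = ""
        self.excerpt = ""

    def handle_starttag(self, tag, attrs):
        if tag == "title":
            self._in_title = True
        elif tag == "p" and not self.excerpt:
            self._in_first_p = True

    def handle_endtag(self, tag):
        if tag == "title":
            self._in_title = False
        elif tag == "p":
            self._in_first_p = False

    def handle_data(self, data):
        if self._in_title:
            self.title += data
        elif self._in_first_p:
            self.excerpt += data

def extract_meta(html: str) -> dict:
    """Return {'title': ..., 'excerpt': ...} for one page."""
    p = PostMeta()
    p.feed(html)
    return {"title": p.title.strip(), "excerpt": p.excerpt.strip()}
```

Running each page through `extract_meta` gives dicts you can then serialize into Atom entries.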

                                        1. 1

                                          Indeed; that’s one of the approaches I’m considering. Also, soupault seems quite interesting, thanks for the links; I’ll check it out!

                                        2. 1

                                          Any particular reason you switched back to handwriting html files?

                                          1. 2

                                            Yeah, a few, actually:

                                            • I’d like my site setup to be very lightweight, both on the server and on my machine when editing, and to use tools that are nearly universally available on all GNU/Linux systems. This rules out things like my custom static site generator written in Haskell, since Haskell and its ecosystem are arguably quite the opposite of my two criteria above.

                                            • I’d like to have convenient complete control over the generated HTML output, and this is rather hard to come by in most existing static site generators and rules out most markup formats, since almost none are as expressive and flexible as HTML.

                                            Aside from the repetitive pre/postamble bits in each file, I find writing plain HTML quite alright, actually. The annoyance of HTML’s lack of built-in templating and simple conditionals can be solved by using SSI directives, which are fairly widely supported across web servers. Alternatively, I’m considering using GNU M4 in place of SSI, if it results in a simpler and cleaner setup. And it fits the two criteria in my first point above too.

                                            1. 3

                                              These are rather valid arguments. I used to write in plain html too, it’s perfectly fine, especially with an editor’s auto-completion like emmet. Nevertheless, I now mostly write in markdown and enjoy it. Whenever I need something more complex, I just embed html.

                                          2. 1

                                            I also couldn’t find a tool to generate RSS from a tree of HTML files. This is how I generate my RSS with a very basic shell script using html-xml-utils:


                                            1. 1

                                              Nice, thanks for sharing!

                                          1. 1

                                            My current main website is: https://her.esy.fun

                                            Interesting aspects:

                                            • org-mode instead of markdown (it’s way better IMHO)
                                            • use org-publish with a bit of magic but not too much (more info here)
                                            • different CSS choices (all support light/dark themes; I’m particularly fond of the sci dark one when there are images)
                                            • no tracker of any sort
                                            • no js (except for pages displaying formulas with MathJax)
                                            • RSS is generated via a self-made shell script (I was quite surprised no tool like that existed before; more info here)
                                            • optimized size by minifying all the CSS and HTML; surprisingly, even a quite naive approach was very efficient (~30% better than classic minimizers); more info here
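As an aside, a deliberately naive minification pass of the kind that last bullet hints at can be sketched in a few lines of Python (the author’s actual script is linked above and certainly differs; this version is unsafe for `<pre>` blocks and other whitespace-sensitive content):

```python
# Naive HTML "minification" sketch: collapse whitespace runs and
# drop blanks between adjacent tags. Illustrative only; a real
# minimizer must respect <pre>, <textarea>, inline spacing, etc.
import re

def naive_minify(html: str) -> str:
    out = re.sub(r"\s+", " ", html)    # collapse whitespace runs
    out = re.sub(r">\s+<", "><", out)  # drop space between tags
    return out.strip()
```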

                                            My older one was https://yannesposito.com. I used hakyll for this one.

                                            The hardest part is always producing content, not losing too much time optimizing one’s blog tech. Still, I love doing that from time to time :).

                                              1. 36

                                                I use fish on macOS. There’s occasional headaches due to its lack of bash compatibility, but I find the ergonomics of using it to be much nicer than bash, zsh, or any other bash-compatible shell I’ve tried.

                                                1. 2

                                                  I find the autocomplete and highlighting in fish amazing compared to any other shell I’ve used.

                                                  1. 2

                                                    Like you, I use fish for my shell (with oh-my-fish).

                                                    For short/quick scripts I use zsh generally using a nix-shell bang pattern like this:

                                                    #!/usr/bin/env nix-shell
                                                    #!nix-shell -i zsh
                                                    #!nix-shell -I nixpkgs="https://github.com/NixOS/nixpkgs/archive/19.09.tar.gz"
                                                    #!nix-shell -p minify
                                                    minify $1 > $2

                                                    And when I’m serious about a script, I switch to turtle.

                                                    1. 0

                                                      Sell me zsh. A few friends use it but they’ve not been able to clearly convey its advantages over ye olde bash that is just… everywhere.

                                                      1. 4

                                                        Auto-completion works even if you don’t type the string from the beginning, i.e. you have a series of folders




                                                        you can do an ls, start typing videos, and it will tab-autocomplete to the right folder. It was the one feature that signaled to me that I’d made a good decision to switch.

                                                        1. 2

                                                          If bash is good enough for you then you probably have no reason to switch.

                                                          In the past I used zsh for some of its fancier features: I could do more expressive expansions to make my prompt pretty or do clever directory chomping; I found its completion much faster than bash, but that’s only useful if you use flag or subcommand completion; menu completion can be useful. I think also the history settings in zsh were or are more featureful than bash, but I’m not entirely sure what bash’s current features are like.

                                                          I’ve been using bash just fine for the last ~10 years on personal systems but I do have zsh on some servers so that I can do more clever things with prompting.

                                                          1. 2

                                                            The main reason I used zsh was to correctly handle files with special chars in them ([ \t\n] for example). It also has “real” lists and associative arrays. Mainly, it was often better for writing scripts. I also found that print in zsh is often better than echo, for example print -P "%Bsomething bold%b". There are also things like ${fic:t} instead of basename $fic, ${fic:s/x/_/} instead of echo $fic | sed 's/x/_/', and a lot of small niceties.

                                                            I no longer use zsh as my main shell, I switched to fish. Still I always preferred zsh over bash. But it was a long time ago, perhaps bash is better now.

                                                            I use fish for basic usage (completion is great), but when I script I generally use zsh.

                                                            1. 1

                                                              I’d start with the zsh-lovers document.

                                                            1. 5

                                                              As someone who has occasionally played with Haskell for years and is finally considering using it for larger projects, this post concerns me. The complexity of monad stacks is a little scary, but I figure the type system makes it manageable. However, if it’s true that monad transformers end up being a source of memory leaks, then I’m back to thinking Haskell should only be used for larger, production-level projects by those knowledgeable about GHC internals and the edge-case language tricks needed to hack around inherent problems.

                                                              Can someone with experience comment on the author’s claims? They do seem weak when no specific examples of memory leaks (or abstraction leaks) are provided.

                                                              1. 4

                                                                Do not use StateT or WriterT for a long running computation. Using ReaderT Context IO is safe. You can stash an IORef or two in your Context.

                                                                Every custom Monad (or Applicative) should address a concern. For example, a web request handler should provide some means for logging, to dissect the request, query the domain model, and prepare the response. Clearly a case for ReaderT Env IO.

                                                                Form data parser should only access form definition and form data and since it’s short lived, it can be simplified greatly with use of ReaderT Form stacked with StateT FormData. And so on.


                                                                1. 3

                                                                  Yes, it is well known that you should never use RWST, or any stack with a Writer monad in it, because of space leaks.

                                                                  There are many options for big-code organisation in Haskell. You have:

                                                                  • run like in imperative language, everything in IO
                                                                  • split using the Handler pattern https://jaspervdj.be/posts/2018-03-08-handle-pattern.html
                                                                  • use the MTL, in that case you should not use WriterT (if I remember correctly)
                                                                  • use the ReaderT Context IO Pattern
                                                                  • use Free Monads, the paint is still fresh here apparently.

                                                                  I used MTL-style to make a bot with long living states and logs (using https://hackage.haskell.org/package/logging-effect). It works perfectly fine for many days (weeks ?) without any space leak.

                                                                  I’ve now started to go toward the simpler route of the Handler pattern I pointed out. And, in the end, I tend to prefer that style: very slightly more manual, but more explicit.

                                                                1. 39

                                                                  I’m really burning out on “simplicity” posts. I get it, simplicity is good. But that doesn’t actually inform me as a developer. Why do things become complex? What kinds of simplicity are there? How do we detect simplicity? How do we know when we shouldn’t simplify? None of these posts ever answer that.

                                                                  It’s like if I stood on stage and said “Be good! Don’t be evil! Being evil is bad!” Sure, everybody agrees with that, but does it actually help people make moral choices?

                                                                  (Also the analogy is dumb. Yes, we should totally base our engineering practice on a movie! A movie where the engineers are wrong because of magic.)

                                                                  1. 11

                                                                    Why do things become complex? What kinds of simplicity are there? How do we detect simplicity? How do we know when we shouldn’t simplify? None of these posts ever answer that.

                                                                    Because your questions are difficult and answers are dependent on a lot of factors.

                                                                    I’ll tell you what I do to detect simplicity, maybe you’ll find it useful. Let’s start with a real-life example.

                                                                    I needed tokens for authorization, I reviewed existing formats, JWTs look conservative and Macaroons look powerful.

                                                                    What do I do? I dissect the formats. For JWTs I read the RFCs and implemented software to create them and verify them (each in 2 languages) for various options (key algorithms).

                                                                    For Macaroons I read the whitepaper, then implemented a verifier based on it, reviewed existing implementations, and found differences between the whitepaper and the de-facto code, with explanations. While comparing my implementation I found some security issues in existing code. Additionally I implemented the rest of the stack (de/serialization, UI for manipulating Macaroons). After two months I knew precisely where the complexity lies in Macaroons, and of course those are exactly the spots all the blog posts don’t mention (spoilers: cycles in third-party caveats, no standards for encoded caveats…)!

                                                                    Then I looked at my JWT proof-of-concept code - it uses base64(url) and JSON, primitives that basically all programming environments have built in. After limiting the algorithms used, the entire verifier takes just a couple of lines of code! It’s vastly simpler than the Macaroon one.
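To make that comparison concrete, here is a sketch of such a verifier in Python, restricted to HS256 and using only the stdlib (no exp/aud checks, so not a full RFC 7519 implementation; it only illustrates the "base64url + JSON + HMAC" point above):

```python
# Minimal HS256-only JWT verifier sketch, stdlib only.
# Hard-coding the algorithm sidesteps the infamous "alg: none" confusion.
import base64, hashlib, hmac, json

def b64url_decode(s: str) -> bytes:
    # base64url without padding, as JWTs use; re-add padding to decode
    return base64.urlsafe_b64decode(s + "=" * (-len(s) % 4))

def verify_hs256(token: str, key: bytes) -> dict:
    """Return the claims dict if the signature checks out, else raise."""
    header_b64, payload_b64, sig_b64 = token.split(".")
    header = json.loads(b64url_decode(header_b64))
    if header.get("alg") != "HS256":
        raise ValueError("unexpected algorithm")
    signing_input = f"{header_b64}.{payload_b64}".encode()
    expected = hmac.new(key, signing_input, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, b64url_decode(sig_b64)):
        raise ValueError("bad signature")
    return json.loads(b64url_decode(payload_b64))
```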

                                                                    What’s the moral here? That you need a lot of time to see for yourself what is simple and what is complex. Now every time I see a post recommending Macaroons I can already see the author didn’t use them in practice (compare that with the Tess Rinearson post linked at the end of that article).

                                                                    That’s only the example, I routinely implement various protocols and re-implement software (ActivityPub, Mailing Lists, roughtime client) and each time I discover what’s simple or what’s complex in each one of them.

                                                                    (By the way your book is excellent!)

                                                                    1. 9

                                                                      I get it, simplicity is good.

                                                                      Alas, not everybody gets it. The best that these kinds of exhortations can do (all that they aim to do, as far as I can tell) is to persuade people to modify their own set of values. This doesn’t immediately result in better code… but I think it’s a necessary precondition. The only developers who will even ask the good questions you suggest (let alone look for good answers) are the developers who hold simplicity as a value.

                                                                      (The analogy is pretty dumb though, and not especially motivating.)

                                                                      1. 10

                                                                        I’ve never met a developer who does not claim to hold simplicity as a value. But as a concept it is so subjective that this is meaningless. It’s extremely common for two developers arguing for opposing approaches each to claim that their approach is the simpler one.

                                                                        1. 7

                                                                          I get the value of exhortations. I think more examples would be better. Pairs of solutions where the simple one meets requirements with a number of better attributes. Developers often prefer to see the difference and benefits instead of being told.

                                                                        2. 6

                                                                          Exactly. This is one of those things you can’t explain in a book. When to compose, when to decompose. When to extract methods, when to inline methods. When to add a layer of abstraction, when to remove one. When is it too flexible, when is it too simplistic?

                                                                          No amount of rules of thumb is going to answer those questions. I only know of one way to learn it: practice. Which takes effort and, most importantly, time, rendering this kind of post mostly useless.

                                                                          1. 3

                                                                            P.S. They do feel good to write though, so people will keep writing them, and there’s nothing wrong with it either.

                                                                          2. 5

                                                                            In my experience, a lot of commercial companies that develop under tight deadlines produce a lot of suboptimal and dreadful code. It often takes more time to produce less code, simply because the more time you spend on a difficult problem, the better you understand it. I think the reason a lot of software is bloated and complex is that it’s “good enough”, which is optimal from an economic point of view.

                                                                            The other day there was a discussion here on Lobsters about all the required pieces needed to run a Mastodon instance and the popular solution of abstracting all that away in a Docker container. There are alternative implementations that depend on a smaller number of components alleviating the need for dumping everything in a container (of course the question is, do these alternatives offer the same functionality).

                                                                            How do we detect simplicity?

                                                                            For me personally, simplicity has to do with readability, maintainability, and elegance of code or infrastructure. If someone’s solution involves three steps, and someone else can do it in two steps (with comparable cognitive load per step), I would say the latter is simpler.

                                                                            How do we know when we shouldn’t simplify?

                                                                            When it would cut features you cannot do without.

                                                                            1. 5

                                                                              I agree that anecdotes like this can get old, but I’ve been meaning to write a similar post… on something I’ve been calling the “too many buttons” syndrome. This issue pops up a ton in large pieces of software (though I’m specifically thinking of projects like JIRA and Confluence) where there’s an option for everything.

                                                                              Not everyone gets that simplicity is good because it can be harder to sell. “If a user wants it, we should do it” is something I’ve heard just a few too many times without bothering to look at the use case or if it could be done better. Sometimes it’s worth stepping back and looking at the complexity something will add to the project (in both code and testing… especially when it comes to options and how they interact with each other) rather than just adding all the little features.

                                                                              1. 5

                                                                                You are so right. After years of experience, I’m only starting to clarify my idea of “simplicity”. There are different kinds of simplicity, and most of them are not totally compatible. In my opinion some should be preferred to others, but there is no clear rule. To choose between different kinds of complexity I still use a lot of intuition and I debate a lot, and I am still unsure my choices are the best.

                                                                                • only using basic features of a language (avoiding advanced programming-language features) is certainly the most important aspect of simplicity. It makes your code easy to read for more people.
                                                                                • don’t use too many intermediate functions, and if possible don’t disperse those functions across many different files until you really feel you are copy/pasting too much. My rule of thumb is that 2 or 3 duplications are totally fine and preferable to centralising the code. It only becomes really clear that factoring code out is worthwhile when you start repeating yourself more than 6 to 10 times.
                                                                                • only use an advanced feature of the language after having tried to do without it for some time and really felt the lack of it. Some examples of what I call advanced features of a language are: class inheritance, protocols in Clojure, writing your own typeclasses in Haskell, metaprogramming (macros in LISP), etc…
                                                                                • prefer stateless functions to objects/services with internal state
                                                                                • prefer pure functions (side-effect free) over procedures (functions with side effects)
                                                                                • give a lot of preference to composable solutions; composable in the algebraic sense. For example, I do my best not to use LISP macros, because most of the time macros break composability. The same could be said when you start to deal with type-level programming in Haskell, or when you are doing metaprogramming in ruby/python.

                                                                                For now, all those rules are still quite artisanal. I don’t have any really hard metrics or strong rules. Everything I just said is “preferable”, but I’m pretty sure we can find exceptions to most of those rules.

                                                                                1. 5

                                                                                  Amen, +1, etc. “Simplicity” often just means that a concept fits cleanly in the maker’s head at a particular point in time. How many times have I returned to a project I thought was simple only to find I had burdened it with spooky magic because I didn’t benefit from critical distance at the time? When was the last time I deemed another person’s work “too complex” because I couldn’t understand it in one sitting and wasn’t aware of the constraints they were operating under? Answers: too often and too recently.

                                                                                  1. 3

                                                                                    What kinds of simplicity are there?

                                                                                    This is a good question (as are the others). Borrowing from Holmes, I’d say there’s a continuum from naive simplicity, to complexity, to simplicity on the other side of complexity (which is what is truly interesting)

                                                                                    For example, “naively simple” code would only cover a small subset (say, the happy path) of a business problem. Complex code would handle all, or most, of the business complexity but in a messy, complicated way. “Other side” simplicity refines that complex code into something that can handle the business complexity without itself becoming overly complicated.

                                                                                    1. 2

                                                                                      What happens to simplicity? We trade it for other things, of course. For example, you can have simple regular expressions, but most people prefer a less simple, more powerful implementation like Perl’s.

                                                                                      Simplicity is often a tradeoff against ease of use, performance, flexibility, reusability, usability, etc. So simplicity is good, but those other things are also good.

                                                                                      1. 1

                                                                                        Most people seem to agree that simplicity is best. However, when it comes down to simplicity for the user versus the developer, I have seen disagreement. Each trade off is going to be situation and implementation dependent, but at my job I’ve been pushing for a simpler developer environment.

                                                                                        In my office, there is a tendency to create exceptions to rules because it makes things simpler for the user. Since the environment has more exceptional circumstances, it tends to have more errors when people forget the undocumented exception case. In my opinion, this causes an uneven experience for the user despite being “simpler.”

                                                                                        My experience is coming from a medium sized, non-tech company. I work in the IT department so we are a cost center. There is an emphasis on white glove treatment of the revenue producing portions of the company. YMMV

                                                                                      1. 1

                                                                                        Hey guys, I’ve written some thoughts on working with complex software projects. I would appreciate your feedback!

                                                                                        1. 3

                                                                                          Nice article!

                                                                                          I think you forgot one very important aspect: politics. Most failed projects I saw involved someone lying. Generally there was at least one person, middle or high in the hierarchy, who wanted the project to fail while pretending the opposite during all those meetings. I even saw worse: two of the parties involved wanted the project to fail while their higher-ups pushed to make it a success.

                                                                                          You can detect such problems because the people involved start to adopt a very defensive style of communication. Mainly, each party plans for the failure, but none wants to be responsible for it, so each starts to accumulate evidence that the problem comes from the other party. During each meeting the same things are repeated again and again. Generally almost no written documents are involved, and when they are, they only address superficial issues, etc…

                                                                                          Also, I disagree with your “estimate” part. Estimating the time even a very small project will take is already almost impossible. Estimation is in fact a failed metric; only a fool relies on it (see Hofstadter’s law). That being said, I think the methodology of taking the mean of 3 to 4 people’s estimates is the wrong thing to do. If you want something closer to reality, take the worst estimate and double it (at least). The rule about +20% is ridiculous because most tasks will be estimated correctly, but there will be those 2 or 3 tasks that, instead of taking 1h, will take 3 months.

                                                                                          Also, as you pointed out, in a big project you can be almost certain that some part will fail to deliver at all. You’ll need to adapt and have a plan B for each potential sub-task failure.

                                                                                          1. 8

                                                                                            It’s been a very long time since I last updated my blog, which is mostly about functional programming & Haskell. Still, here it is:


                                                                                            1. 5

                                                                                              As much as I would like a decentralised web to take over, I see a major issue with ActivityPub: apparently, search isn’t specified. One advantage big centralised services will keep is the ability to search their whole network. For example, I would like to search the whole network for a specific username, keyword, etc… Without that, it’s like going back to the net before Google search.

                                                                                              1. 2

                                                                                                Yeah, I’m curious if ActivityPub can support arbitrary aggregation over multiple nodes. It seems to me that in this kind of architecture, maybe nodes ought to support publishing changes over websocket to interested listeners. You could have aggregation nodes doing things like search, or post rankings, which could attach themselves in this fashion. Plus this would have the added benefit that if you didn’t like a particular aggregator’s implementation (a hotness formula for example, or search indexing algorithm) you could switch to a different one.
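
                                                                                                A rough sketch of that idea (all names and formulas here are hypothetical, and the websocket transport is left out): an aggregation node merges posts from several instances and ranks them with a swappable scoring function, so switching aggregators just means switching the function.

```haskell
import Data.List (sortBy)
import Data.Ord (Down (..), comparing)

-- A post as seen by a hypothetical aggregation node.
data Post = Post
  { node     :: String
  , title    :: String
  , votes    :: Int
  , ageHours :: Double
  }

-- Two interchangeable "hotness" formulas; if you dislike one,
-- you could pick an aggregator that uses the other.
byVotes :: Post -> Double
byVotes = fromIntegral . votes

timeDecayed :: Post -> Double
timeDecayed p = fromIntegral (votes p) / (1 + ageHours p)

-- Merge the feeds of several nodes and rank with the chosen formula.
aggregate :: (Post -> Double) -> [[Post]] -> [Post]
aggregate score = sortBy (comparing (Down . score)) . concat

main :: IO ()
main = do
  let nodeA = [Post "a.example" "intro to ocaml" 10 48]
      nodeB = [Post "b.example" "shake tutorial" 4 2]
  -- With time decay the fresh post wins; byVotes would rank the other first.
  mapM_ (putStrLn . title) (aggregate timeDecayed [nodeA, nodeB])
```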

                                                                                                1. 1

                                                                                                  Usernames include the server, so I’m not sure that case makes sense.

                                                                                                  Not supporting keyword search means I don’t get random and bots sealioning my conversations.

                                                                                                  1. 1

                                                                                                    I was thinking about PeerTube, for example. It would be very nice if I could search either the current node only, or all of the fediverse, for the things I’d like to find, like niche programming languages. The goal of publishing something is to be read, even by bots. Also, I’m not sure a federated network would be more robust against bots; I’m pretty sure it would be the opposite, because each node would have less data to analyze for bot detection. Still, in the end, I’m pretty sure the only good solution would be a global “Web of Trust”.

                                                                                                    1. 2

                                                                                                      Ahh, I see. 99% of my social media use is to connect with people I already know (or have in my extended network). For that use case, being less easily discovered is a feature.

                                                                                                      For publishing content it’s definitely the opposite: you want it to be found. It’s difficult though, because now you’re competing for attention with spammers.

                                                                                                1. 6


                                                                                                  #!/usr/bin/env stack
                                                                                                  {- stack script
                                                                                                     --resolver lts-11.6
                                                                                                     --package protolude
                                                                                                  -}
                                                                                                  {-# LANGUAGE NoImplicitPrelude #-}
                                                                                                  {-# LANGUAGE OverloadedStrings #-}
                                                                                                  import Protolude

                                                                                                  main :: IO ()
                                                                                                  main = putText "Hello, world!"
                                                                                                  1. 4

                                                                                                    Mainly for that reason, I switched to a combo of bitlbee + weechat + (screen + mosh + weechat-ncurses).

                                                                                                    Setting everything up took a while, and there are still some rough edges. But now, with about 3 MB of RAM, I can chat with:

                                                                                                    • 3 Slack communities with lots of channels
                                                                                                    • some Gitter channels
                                                                                                    • many IRC channels on Freenode
                                                                                                    • HipChat

                                                                                                    I have manually set up alerts, and the text uses my preferred color theme. Actually, the other reason I wanted to get rid of Slack is that I couldn’t have light text on a dark background.

                                                                                                    Now my chat system feels like a calm place to discuss.