1. 2

Working on https://muboard.net/.

I wrote this tool initially for our analytic number theory book club. We meet roughly once every day and study a couple of pages of an analytic number theory book together. During these sessions, I often share my screen and use this tool to scribble some mathematics: expanding steps not explained in the book, or illustrating a new theorem we are trying to understand with examples. The tool also supports archiving the scribbled mathematics snippets as distributable, self-rendering board files.

After a couple of weeks of using this tool, I released it as an open source project at https://github.com/susam/muboard and received good feedback on it. This weekend, I intend to expose some of the tool’s internal properties as configurable options and expand the documentation to explain these options in detail.

1. 2

Nice work, I love seeing new tools for writing math on the web! My recent side project has been prosemirror-math, which is my attempt to make WYSIWYG math editing on the web a little more tolerable. It’s a plugin for ProseMirror that others can hopefully use or modify to add better math support to their markdown editors, etc.

1. 1

ProseMirror Math looks great. Thanks for sharing. Good luck with your project!

2. 2

Interesting. I just finished basic IntelliSense support for MathJax in a vscode extension that I use with my SSG to render math on the server side. Unfortunately, most of my math is handwritten, and it’s too painful to typeset, so I’m currently investigating a way to automatically convert handwriting from my tablet to the MathJax subset of LaTeX.

1. 2

For OCR, there’s some freemium software called Mathpix, which does a pretty great job. However, they limit your monthly snips on the free version.

1. 1

Just finished the final touches on my website, and I’m writing documentation for the SSG that powers it. The weekend will be dedicated to correcting and finishing some of the notebooks in the math section.

1. 3

Depends on what you want to do. If your dream is to become a highly paid full-stack engineer at some hot SV startup, then yes: ditch your grad-school dreams, and keep hacking away. The first job is always the hardest to land: past that point, everyone just looks for prior work experience. Why? Because you don’t learn how to write good maintainable code in grad school, and you can’t afford to treat your job like a grad-school project.

However, if you’re looking to learn about what computer science is, and explore your interests, I’d highly recommend at least a Masters in the subject. I had a ton of open source experience prior to grad school, and wondered what value a Masters in the subject would add, but jumped into it anyway. In grad school, I wrote my first compiler, my first scheduler for Linux, and learnt a ton about computer architecture. The algorithms course was a breeze, and I aced it without studying, but I enjoyed the project-driven courses the most. Before grad school, I thought I wanted to do systems engineering, because I was very good with C and had great attention to detail, but my career trajectory changed when I wrote my first compiler. My first job was in compiler engineering, and I enjoyed it immensely.

So, what does a good grad school education give you? The time to experiment and learn new things; the motivation to complete a difficult project, show it off in front of your class, and get graded for it. You’ll be surprised to suddenly find that you’re interested in things that you didn’t consider previously.

Sure, attending Columbia made my resumé look a little more impressive to potential employers, and the prestige does help boost your confidence when starting out, but a few years of extraordinary work eclipses that.

1. 3
1. Strengthen rings and modules fundamentals from Dummit & Foote.
2. Work out an exam paper on rings, and send it to a mathematician to check.
3. Revise homotopy fundamentals from Hatcher.
4. Read Friedman’s survey article on simplicial sets, in preparation for the next module.
5. Lightly browse through Goerss-Jardine to get a map of the contents.
6. Make minor progress on my cubical type theory research project in Coq.
7. [In downtime] Start writing a production-grade parser for claytext, a custom markup syntax I use to generate my website.
8. [In downtime] Make some progress on my xypic LaTeX parser, for commutative diagrams.
9. [In downtime] Improve my French by passively listening to podcasts and news.
10. [In downtime] Cook one elaborate meal.
1. 5

My site is very basic, but the colors seem to be very polarizing. I like ’em though. https://nickjurista.com

1. 2

I feel the content is too wide, and the complete change in theme between your homepage and blog is jarring. I like bold colours though, so I didn’t mind the ones you had chosen.

The background colour on links makes them difficult to identify too. I personally stick with different-coloured text, or the old favourite: underline.

1. 1

Oh yeah I don’t really use the blog and haven’t updated it in forever. There is also barely any content on the page so I get the wideness being annoying for sure.

2. 2

I also like these kinds of unusual colors. Moreover, yours don’t compromise contrast or legibility.

1. 1

If there’s a hidden toothpaste joke I didn’t find it ;)

It’s a bit colorful for me, but it’s not “argh, I need to leave”.

1. 1

The specific colors are too aggressive for my eyes to handle more than a few seconds.

1. 1

You might want to pick pastels that are close to the colors you’re currently using.

1. 1

Some nitpicks: selected text and links look the same; based on color cues it would seem that the title itself is a link, but it’s not; the links at the bottom have unequal margins on the left and right side.

1. 4

https://rubenvannieuwpoort.nl is my site, or maybe more a collection of articles. I tried to keep the styling minimalistic and use no JavaScript (math is statically rendered).

• What do you think of the layout on mobile? I like the style but feel that the text is maybe too tiny to read comfortably. I have spent some time trying to fix this but haven’t succeeded so far (it can’t be very hard, but web dev is not my niche).
• Would you prefer dynamically rendered math?

It is made with a static site generator I wrote myself (the source can be found on https://github.com/rubenvannieuwpoort/static-site-generator). Ironically, it uses Node.js (and bash) and renders Markdown to HTML.

I deliberately removed the dates from the blog posts since I tend to make large numbers of minor adjustments.

1. 4

Looks very simple, loads fast, and is pleasing to the eye. The equations look really cool and are well rendered on my machine.

A few things:

1. It doesn’t seem responsive on mobile. It needs the viewport meta tag:

<meta name="viewport" content="width=device-width, initial-scale=1.0"/>

to turn it into this: https://i.ibb.co/Cbn90jC/Screen-Shot-2020-10-26-at-01-16-34.png

2. The blog post layout led me to believe it was a PDF file, because the article sits inside a box surrounded by a sea of gray. It could be a deliberate design decision, but that’s how it felt.
1. 1
1. Thank you, this is exactly what I wanted but didn’t know how to do!
2. Indeed, the design is based on how PDFs are displayed. However, it’s not meant to be confusing. Maybe I’ll just get rid of the gray background.
2. 3

Out of the ones I saw so far I like yours the most.

One question: I noticed that you do not link back to your home page from within the articles. I’m curious whether you simply didn’t find an elegant way of doing it, or whether there is another reason.

1. 1

Thanks for the kind words. I basically wanted the layout to match that of a printed article as closely as possible. This also inspired the PDF-viewer-like look.

2. 3

This one I’m struggling to read. The font is too thin, and even scaled up a lot it stays too thin for me. I had to switch to Safari and activate Reader mode to read this. (I normally use Firefox, but for some reason I can’t activate Reader mode on Firefox for this website!)

1. 1

I’ve got the same issue: iPhone SE 2020 with large letters (accessibility) turned on. When I zoom in, I have to scroll right and left; there are no word breaks. Except for that, cool minimal style and interesting content!

1. 1

Thanks for mentioning the problem and the kind words!

2. 3

I really like the layout and style of the blog as it is; it looks similar to the aesthetic of Tufte’s work. My opinion is kind of the opposite of animesh’s, in that I think the color surrounding the article works well.

The statically rendered math is amazing; I would love a solution like that for myself. Without any experience in web development I’ve had no success with it, but your blog works really well!

1. 3

You could consider putting a “published date” and a “last updated date”. I use that approach personally.

1. 2

The thin font and low-contrast colour (?) make this quite hard for me to read on my phone (Firefox, Android).

1. 1

I assume that this is just for the overview page, not for the articles themselves?

1. 1

Yes. I didn’t click on the articles at the time. Trying now, the text is very small on mobile. It might be that you can’t fix that while maintaining the mathematical article presentation style.

2. 2

I like the design and the content. I’d subscribe if there was an Atom/RSS feed.

1. 1

Thank you! Your words make me really enthusiastic to write something :) I might consider making a feed, but I don’t feel like I will have time for it anytime soon.

2. 2

It’s absolutely beautiful, but there’s something wrong in the CSS. On my phone (Safari on iOS) it’s all “zoomed in” (I can’t even see the full width of the page) at certain zoom levels (for instance, when I set 115% with the “aA” button at the left of the address bar).

Edit: I think one of the main issues comes from here:

	article {
	  …
	  width: 715px;
	  …
	}


I would use max-width instead of width: max-width keeps the width flexible on small displays.

1. 1

Thank you! I will look into this :)

2. 2

I like the simple layout and overall clean feel. Saw someone already suggested adding the meta tag to make it responsive. Also, I feel the contrast of the article description text on the homepage is low: light gray + light font = hard to read.

1. 1

Mine is at https://secluded.site. I’d love to get some suggestions for improvements!

I recently started with Emacs and fell in love with Org mode; I’m seriously considering ditching Hugo and going much more minimal with a small, handwritten stylesheet and HTML pages generated from Org.

1. 2

Just as a heads-up, you can also write your blog posts in Org mode and have Hugo render them (if you weren’t aware).

1. 2

On Firefox android, the pages always start in dark mode and then pop to light mode after remembering my preference (set by tapping the icon once on the home page).

Seems too low contrast in dark mode.

I also read Butterick, and I like the circle links and gradients. I think they’re fun, though I don’t know how many people would miss them.

1. 2

A few nitpicks: not a fan of the gradients when the circle links expand; it seems like “home” and “about” could be merged into one page, since nothing much is going on at “home”. Also not a fan of how code blocks look in both light and dark themes.

1. 1

Also not a fan of how code blocks look in both light and dark themes.

You don’t like the colours of the highlighting or something else?

1. 1

I think they are too dark. The background is black and the text itself is quite dark within it. For me it stands out too much from your otherwise light theme. They do look considerably better in dark mode, now that I’ve taken a second look.

2. 2

Clean, and loads quickly, but the contrast seems a bit too low for my taste. I’m not sure if this even qualifies as an opinion, but I was expecting more on the front page, and tried to scroll. The blinking cursor animation is actually calming!

I’m seriously considering ditching Hugo and going much more minimal with a small, handwritten stylesheet and HTML pages generated from Org.

On that topic, there is ox-hugo, which can convert an Org document into a Hugo-compatible site and then render it.

1. 1

the contrast seems a bit too low for my taste

Yeah, I’ve been meaning to increase it a bit haha. Thank you for reminding me!

there is ox-hugo that can convert a org-document into a hugo-compatible site

I’m actually working on a blog post that will be exported with ox-hugo but there’s still the whole “Hugo workflow” as well as dependency on Hugo itself. Simply using Emacs and nothing else is pretty attractive.

2. 2

I enjoy your theme. The site is fast, which is really good.

I really like the way your blog posts are organized on the blog page. I like the tags at the top.

Overall the content is well formatted, with good margins.

If I had to suggest anything, it would be to do something with the homepage. Use it as an opportunity to shepherd the user to relevant content, and/or give them roads to walk down.

1. 2

Others have mentioned the all-but-blank homepage as well so I will very likely make some changes there.

I appreciate the suggestion and kind words!

2. 2

Reviewed on iPhone X.

To me, the low contrast and short line length make reading uncomfortable. I suggest a slightly smaller font so more words fit on a line in portrait on mobile, and increasing the line height and text contrast to compensate for the smaller size.

The link treatment was surprising, but delightful. A great touch for a personal site.

1. 2

Beautiful. I like the theme and the fonts. There is one annoyance though: the system fonts are displayed momentarily before being swapped out for your fonts, but I think that’s an unsolved problem today.

1. 2

I came back to this thread specifically to say I love the pipe section on your site. I hope you get around to reviewing the pipes and tobacco; I’m now reading your pipe origin articles.

I see you do RSS feeds per category; I like that a lot. I do the same on my site for each and every tag, but had never seen it elsewhere.

1. 1

Hugo is what I currently generate the site with, and it actually has feeds for every taxonomy, not just categories.

I hope you get around to reviewing the pipes and tobacco

Once I’m finished documenting my email setup, I’ll get to work on some of those!

1. 3

Hey! My site, which was originally based on @icy’s excellent page, is here. Web development isn’t my strong point, but I do like to think my site is at least passable :)

1. 1

The content is too narrow: on a desktop browser, I’m having to scroll horizontally to view your code.

1. 1

It looks really nice. I like narrow content; it’s easier for me to read without having to rely on reader view, but it does become a challenge when you have code snippets. Fortunately, your posts are not very code-heavy, so I don’t think that’s a problem.

1. 2

https://artagnon.com

The SSG that powers it is over 7 years old, and some of the content is even older, ported from another hand-written SSG; preserving backward compatibility while extending it is a big pain. The SSG doesn’t process Markdown: it’s a custom syntax that has evolved with time. The design is essentially hand-written CSS. Fighting with CSS is a real pain, and I wish there were an easier way to design webpages.

1. 2

I would work on the organisation. It’s hard to understand the different indexes at the top: some of them are abbreviations, others are cryptic. The pages themselves do not have that many entries, so why not put all of them under one page and separate your categories with headers?

1. 2

I really enjoy the personality of this site; the background gives it a warmth and friendliness. I found the top nav mysterious (less charitably: confusing). I think the loading progress / MathJax rendering thing is cute.

1. 2

There are a few things to fix in your design, but here are the first things I would look at, IMHO, in no particular order:

1. A max width for articles would improve readability.
2. Nav links could use some reorganization to improve discoverability.
3. Articles from different links (e.g. HOTT, at, etc.) need to be reorganized, using tags if possible with your SSG.

It takes a lot of time to make these kinds of changes, but it is a great pleasure (read: sweet, sweet torture) to do so. I would recommend doing this with a new SSG (Jekyll, Hugo, Zola) and going for a minimal, brutalist design. Regarding the design itself I couldn’t say much, as I suck at design myself, so I would suggest starting with a theme for one of these SSGs. Your site and articles seem very math- and compsci-oriented, and IMHO deserve some thought here.

1. 6

Hand-written notes on the reMarkable. No eye strain, so I can comfortably study on it for hours at a stretch. Zero distractions; it doesn’t even display the time, so I’m sometimes making study notes until 2am.

1. 1

Are they going to support existing models once the new version is out?

1. 14

Why did Haskell’s popularity wane so sharply?

What is the source for the claim that Haskell’s popularity is declining so sharply? Is there really some objective evidence for this, I mean numbers, statistics, etc.?

It’s anecdotal and just my personal impression from observing the Haskell reddit for 10 years, but I have never seen so many Haskell resources, conferences, books, and even job postings as now. I don’t at all have the impression that the language is dying. It has accumulated cruft, has some inconsistencies, and is struggling to get a new standard proposal out, but other than that I have the impression that it attracts quite a few people who come up with new ideas.

1. 2

Haskell had its glory days when SPJ and Marlow were traveling to various conferences talking about the new language features. Milewski’s posts, LYAH, Parsec, STM, and lenses are from that era. The high-brow crowd was of course discussing lenses. Sure, these things drove adoption, and there’s a little ecosystem for the people who jumped on the Haskell bandwagon back then.

What innovation has it had over the last 5 years? The community couldn’t agree on how to implement any of the features of a respectable dependent-type system, so they invented a bunch of mutually incompatible flags, and destroyed the language. Thanks to the recent hacking, GHC is plastered with band-aids.

It’s true that you can’t explain these things with some points on a pretty graph, but that doesn’t make it anecdotal. Look at the commits going into ghc/ghc, and look at the activity on the bread-and-butter Haskell projects: lens, trifecta, cloud-haskell. Maintenance mode. Where are the bold new projects?

1. 23

These assertions about Haskell are all simply false. There are plenty of problems with Haskell, we don’t need to add ones that aren’t true.

The community couldn’t agree on how to implement any of the features of a respectable dependent-type system, so they invented a bunch of mutually incompatible flags, and destroyed the language. Thanks to the recent hacking, GHC is plastered with band-aids

The reason GHC didn’t just turn on all flags by default is that many of them are mutually incompatible, so your individual .hs file has to pick a compatible set of language features it wants to work with.

You keep saying this in multiple places, but it’s not true. Virtually no GHC extensions are incompatible with one another. You have to work hard to find pairs that don’t get along and they involve extremely rarely used extensions that serve no purpose anymore.
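As a quick illustration (a toy file of my own, not taken from the thread): a single module can enable several unrelated extensions side by side, and they simply coexist without interacting.

```haskell
{-# LANGUAGE LambdaCase    #-}
{-# LANGUAGE TupleSections #-}
{-# LANGUAGE DeriveFunctor #-}

-- Each .hs file opts into the extensions it needs; these three
-- common ones have no interaction with one another.

data Tree a = Leaf | Node (Tree a) a (Tree a)
  deriving (Functor, Show)       -- DeriveFunctor

describe :: Int -> String
describe = \case                 -- LambdaCase
  0 -> "zero"
  _ -> "nonzero"

pair :: a -> (a, Bool)
pair = (, True)                  -- TupleSections

main :: IO ()
main = do
  print (fmap (+ 1) (Node Leaf (1 :: Int) Leaf))  -- Node Leaf 2 Leaf
  putStrLn (describe 0)                           -- zero
  print (pair 'x')                                -- ('x',True)
```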

The community is also not divided on how to do dependent types. We don’t have two camps and two proposals to disagree about. The situation is that people are working together to figure out how to make them happen. GHC also doesn’t contain bad hacks for dependent types, avoiding this is exactly why building out dependent types is taking time.

That being said, dependent types work today with singletons. I use them extensively. It is a revolution in programming. It’s the biggest step forward in programming that I’ve seen in 20 years and I can’t imagine life without them anymore, even in their current state.
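For a taste of what this style of programming enables, here is a minimal sketch of a length-indexed vector using only GADTs and DataKinds; the singletons library builds far more on top of this idea, but the example is my own illustration, not singletons code.

```haskell
{-# LANGUAGE DataKinds      #-}
{-# LANGUAGE GADTs          #-}
{-# LANGUAGE KindSignatures #-}

-- Type-level naturals, promoted by DataKinds.
data Nat = Z | S Nat

-- A vector whose length is tracked in its type.
data Vec (n :: Nat) a where
  VNil  :: Vec 'Z a
  VCons :: a -> Vec n a -> Vec ('S n) a

-- Taking the head of an empty vector is a compile-time error,
-- not a runtime one: the type demands a non-zero length.
vhead :: Vec ('S n) a -> a
vhead (VCons x _) = x

-- vhead VNil   -- rejected by the type checker

main :: IO ()
main = print (vhead (VCons (1 :: Int) VNil))  -- 1
```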

Look at the commits going into ghc/ghc, and look at the activity on the bread-and-butter Haskell projects: lens, trifecta, cloud-haskell. Maintenance mode. Where are the bold new projects?

Haskell is way more popular today than it was 5 years ago, 10 years ago, and 20 years ago. GHC development is going strong; for example, we just got linear types, a huge step forward. There’s been significant money lately from places like cryptocurrency startups. For the first time, I regularly see Haskell jobs advertised. What is true is that the percentage of Haskell questions on Stack Overflow has fallen, but not the amount: the size of Stack Overflow exploded.

Even the community is much stronger than it was 5 years ago. We didn’t have Haskell Weekly news, for example. Just this year a category theory course was taught at MIT in Haskell, making both topics far more accessible.

Look at the commits going into ghc/ghc

Let’s look. Just in the past 4 years we got: linear types, a new low-latency GC, compact regions, deriving strategies & deriving via, much more flexible kinds, all sorts of amazing new plugins (type plugins, source plugins, etc.) that extend the language and provide reliable tooling that was impossible 5 years ago, much better partial type signatures, visible type applications (both at the term level and the type level), injective type families, type in type, strict by default mode. And much more!

This totally changed Haskell. I don’t write Haskell the way I did 5 years ago, virtually nothing I do would work back then.
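Of the features listed above, visible type application is the easiest to show in two lines; a small sketch with values of my own choosing:

```haskell
{-# LANGUAGE TypeApplications #-}

-- Visible type application: pass the type argument explicitly
-- instead of relying on inference or an annotation.
main :: IO ()
main = do
  print (read @Int "42")        -- instead of (read "42" :: Int)
  putStrLn (show @Double 1.5)   -- instead of (show (1.5 :: Double))
```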

It’s not just GHC. Tooling is amazing compared to what we had in the past. Just this year we got HLS so that Haskell works beautifully in all sorts of editors now from Emacs, to vscode, to vim, etc.

look at the activity on the bread-and-butter Haskell projects: lens, trifecta, cloud-haskell. Maintenance mode. Where are the bold new projects?

lens is pretty complete as it is and is just being slowly polished. Haskell packages like lens are based on a mathematical theory and that theory was played out. That’s the beauty of Haskell, we don’t need to keep adding to lens.

I would never use trifecta today, megaparsec is way better. It’s seen a huge amount of development in the past 5 years.

There are plenty of awesome Haskell packages. Servant for example for the web. Persistent for databases. miso for the frontend. 5 years ago I couldn’t dream of deploying a server and frontend that have a type-checked API. For bold new ideas look at all the work going into neural network libraries that provide type safety.

I’m no fanboy. Haskell has plenty of issues. But it doesn’t have the issues you mentioned.

1. 1

Right. Most of my Haskell experience is dated: from over five years ago, and the codebase is proprietary, so there are few specifics I can remember. I’m definitely not the best person to write on the subject. In any case, I’ve rewritten the Haskell section of the article, with more details. Thanks.

1. 6

By my definition, a “dying language” is one that is losing popularity or losing interest. For Haskell this is absolutely not clear. Also, your section is about “why Haskell is bad”, not “why it is dying”. People do not talk about Haskell the way they used to, in my opinion, but I still see a lot of activity in the Haskell ecosystem. It doesn’t really look like it’s dying.

But Haskell looks more like a language that will never die, yet will probably never become mainstream either.

1. 5

I’m definitely not the best person to write on the subject. In any case, I’ve rewritten the Haskell section of the article, with more details. Thanks.

Great! Although there are still many claims that are factually untrue.

I think this is just a sign that you’ve been away from the community for many years now, and don’t see movement on the things that were hot 5-10 years ago. Like “The high-brow crowd was obsessed with transactional memory, parser combinators, and lenses.” Well, that’s over. We figured out lenses and have great libraries; we figured out parser combinators and have great libraries. The problems people are tackling now for those packages are engineering problems, not so much science problems. Like: how do we have lenses and good type errors? And there, we’ve had awesome progress lately with custom error messages (https://kodimensional.dev/type-errors) that you would not have seen 5 years ago.
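The custom-error-message machinery referred to here is GHC’s TypeError from GHC.TypeLits; a minimal sketch (the class and the message text are my own invention):

```haskell
{-# LANGUAGE DataKinds            #-}
{-# LANGUAGE TypeOperators        #-}
{-# LANGUAGE FlexibleContexts     #-}
{-# LANGUAGE UndecidableInstances #-}

import GHC.TypeLits (ErrorMessage (..), TypeError)

class Render a where
  render :: a -> String

instance Render Int where
  render n = "an Int: " ++ show n

-- Trying to render a function triggers a readable, domain-specific
-- error at compile time instead of a generic "no instance" message.
instance TypeError ('Text "Functions cannot be rendered."
                    ':$$: 'Text "Did you forget to apply an argument?")
      => Render (a -> b) where
  render _ = ""

main :: IO ()
main = putStrLn (render (3 :: Int))  -- an Int: 3
```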

The science moved on to other problems.

The issue is that different extensions interact in subtle ways to produce bugs, and it’s very difficult to tell if a new language extension will play well with the others (it often doesn’t, until all the bugs are squashed, which can take a few years).

This still isn’t true at all. As for the release cadence of GHC, again, things have advanced amazingly. New test environments and investments have resulted in regular GHC releases. We see several per year now!

In Atom, the Haskell addon was terrible, and even today, in VSCode, the Haskell extension is among the most buggy language plugins.

That was true a year ago, it is not true today. HLS merged all efforts into a single cross-editor package that works beautifully. All the basic IDE functionality you would want is a solved problem now, the community is moving on to fun things like code transformations.

Then there’s Liquid Haskell that allows you to pepper your Haskell code with invariants that it will check using Z3. Unfortunately, it is very limited in what it can do: good luck checking your monadic combinator library with LH.

The worst case plays out as follows: the typechecker hangs or crashes, and you’re on the issue tracker searching for the issue; if you’re lucky, you’ll find a bug filed using 50~60% of the language extensions you used in your program, and you’re not sure if it’s the same issue; you file a new issue. In either case, your work has been halted.

In 15 years of using Haskell I have never run into anything like this. It is not the common experience. My code is extremely heavy and uses many features only available in the latest compiler, with 20-30 extensions enabled. Yet this just doesn’t happen.

There is almost zero documentation on language extensions. Hell, you can’t even find the list of available language extensions with some description on any wiki.

Every single version of GHC has come with a list of the available extensions, all of which have a description and most of which have code examples: https://downloads.haskell.org/~ghc/latest/docs/html/users_guide/glasgow_exts.html You can link to the manual that neatly explains everything, rather than to the git repo.

Looking at the big picture: first, this is a poor way to do software development; as the number of language extensions increases, your testing burden increases exponentially.

This is only true if you can’t prove how extensions interact, or more fundamentally, that they don’t interact.

Second, the problem of having a good type system is already solved by a simple dependent type theory; you study the core, and every new feature is just a small delta that fits in nicely with the overall model.

That’s totally untrue. There is no such general-purpose language today. We have no idea how to build one.

As opposed to having to read detailed papers on each new language extension. And yes, there’s a good chance that very few people will be able to understand your code if you’re using some esoteric extensions.

Again, that’s just not true. You don’t need to know how the extensions are implemented. I have not read a paper on any of the extensions I use all the time.

In summary, language extensions are complicated hacks to compensate for the poverty of Haskell’s type system.

That’s just the wrong way to look at language extensions. Haskell adds features with extensions because the design is so good. Other languages extend the language forcing you into some variant of it because their core is too brittle and needs fundamental changes. Haskell’s core is so solid we don’t need to break it.

However, PL research has shifted away from Haskell for the most part

That’s again totally factually untrue. Just look at Google Scholar, the number of Haskell papers per year is up, not down. The size of the Haskell workshop at ICFP is the same as 5 years ago.

Moreover, there are no tools to help you debug the most notorious kind of bug seen in a complicated codebase: memory blowups caused by laziness.

Again, that’s not factually true.

We have had a heap profiler for two decades, in the past few years we got ThreadScope to watch processes in real time. We have systematic ways to find such leaks quickly, you just limit the GC to break when leaks happen. https://github.com/ndmitchell/spaceleak We also got stack traces in the past few years so we can locate where issues come from. In the past few years we got Strict and StrictData.
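A minimal sketch of the classic laziness leak and its strict fixes (the example functions are mine; foldl' comes from Data.List):

```haskell
{-# LANGUAGE BangPatterns #-}

import Data.List (foldl')

-- foldl builds a chain of unevaluated thunks for the accumulator,
-- which is the classic space leak on a long list.
leakySum :: [Int] -> Int
leakySum = foldl (+) 0

-- foldl' forces the accumulator at each step: constant space.
strictSum :: [Int] -> Int
strictSum = foldl' (+) 0

-- The same fix written by hand with BangPatterns.
strictSum2 :: [Int] -> Int
strictSum2 = go 0
  where
    go !acc []       = acc
    go !acc (x : xs) = go (acc + x) xs

main :: IO ()
main = print (strictSum [1 .. 1000000], strictSum2 [1 .. 1000000])
```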

As for the code examples. I can pick 2 lines of any language out of context and you’ll have no idea what they do.

Who cares what every extension does for every example? That’s the whole point! I have literally never looked at a piece of Haskell code and wondered what an extension does. I don’t need to know. GHC tells me when I need to add an extension and it tells me when an extension is unused.

How many more language features are missing?

Extensions are not missing language features.

2. 1

GHC also doesn’t contain bad hacks for dependent types, avoiding this is exactly why building out dependent types is taking time.

Honestly, I’d much rather prefer a simple core model, like that of HoTT.

1. 3

Honestly, I’d much rather prefer a simple core model, like that of HoTT.

I’d love that too! As would everyone!

But the reality is, we don’t know how to do that. We don’t even know how to best represent computations in HoTT. It might be decades before we have a viable programming language. We do have dependent types that work well in Haskell today, that I can deploy to prod, and that prevent countless bugs while making code far easier to write.

1. 1

I think HoTT with computations is “cubical type theory”? It’s a very active area currently.

As for dependent types as the backend for advanced type-level features, I think that’s what Dotty/Scala 3 is about. It’s definitely not the only way to do it, but it’s also not decades away. Idris 2 is also an interesting effort.

3. 4

Dependent types aren’t that useful for production software, and full-blown dependent types are really contrary to the goals of Haskell in a lot of ways. Any language that’s over 20 years old (basically 30) is going to have some band-aids. I’m not convinced that Haskell is waning in any meaningful way, except that people don’t hype it as much here/on HN. Less hype and more doing is a good thing, IMHO.

1. 3

Reminds me of the days when people said FP and complete immutability weren’t useful for production software. It is true that there is no decent general-purpose language that implements dependent types, but that’s beside the point.

It’s true, hype is a poor measure.

1. 4

Yeah, that’s an interesting comparison, but I think it’s a totally different situation. Immutability and dependent types are both things you use to make certain assumptions about your code. Immutability allows you to know that some underlying value won’t change. Dependent types allow you to make more general statements/proofs of some invariant.

The big difference is that immutability is a simplification: you’re removing complexity by asserting some assumption throughout your code. Dependent types, generally, are adding complexity. You have to provide proofs of some statement externally, or you have to build the proof of your invariants intrinsically into your constructions. IMHO, that’s a huge difference in the power-to-weight ratio of these two tools. Immutability is really powerful and fairly lightweight. Dependent types are not really that powerful and incredibly heavy.

I’m not saying dependent types are worthless. Sometimes you really, really want that formal verification (e.g. compilers, cryptography, etc.). The vast majority of code doesn’t need it, and you’re just adding complexity, something I think should be avoided in production software.

1. 3

Tl;dr: I have a good amount of experience with dependently typed languages, and I write Haskell for a living. After all of that experience, I have come to the conclusion that dependent types are overhyped.

1. 1

I’ve started writing a post on dependent types. Here’s an early draft: https://artagnon.com/articles/dtt

2. 3

1. 5

Rust may be important as a practical language filling a niche that desperately needed one, but I don’t think it has made history yet. Rust itself is a derivative language that merely popularized concepts from other languages.

Rust is a language that mostly cribs from past languages. Nothing new.

https://venge.net/graydon/talks/intro-talk-2.pdf

1. 1

Agreed. It’s the newest language I was willing to include (albeit, a little reluctantly).

1. 1

There are ways of making history other than by innovating. But even given that, I think Rust does innovate by combining those features.

I don’t know exactly what you’re talking about, though, since you’ve left some work for the reader. Time to watch that talk!

1. 1

I’m not sure Java should be included in this graph. The graph focuses on language design, and IMO the main point of Java is not the language itself but the runtime. The language was intentionally stripped of many features in order to “dumb it down”, so it’s probably true the language won’t influence anything, but it’s also true that the Java ecosystem has influenced the .NET environment, and both the JRE and .NET are being used to actually run businesses, unlike the majority of languages on the OP’s list. Of course the OP can think whatever he likes, but I don’t believe a “remarkably bad design” would have allowed the JRE to grow to such a size.

Again, I’m not a big fan of the Java language myself, but saying that Java is bad, slow, and poorly designed makes me think the person saying it doesn’t know Java that well.

1. 1

Agreed. I included Java precisely for the JVM. See the (*) on top of Java.

1. 7

That Simula 67 is missing makes this diagram potentially misleading for newcomers. The biggest influence on Smalltalk was Simula (see Alan Kay’s HOPL paper), and both C++ and Java derive directly from Simula 67 [1, 2]. Omitting it reinforces the false belief that Smalltalk is the foundation of the concepts of object orientation. Alan Kay didn’t get his Turing Award for the ideas of object orientation (see its citation); Nygaard and Dahl got their Turing Awards for exactly that.

———

[1] Java object model https://dl.dropboxusercontent.com/s/lmfmvdsgw3lqoew/2017-10-03_15-33-11.png

[2] “And my idea was very simple: to take the ideas from SIMULA for general abstraction for the benefit of sort of humans representing things… so humans could get it with low level stuff, which at that time was the best language for that was C, which was done at Bell Labs by Dennis Ritchie. And take those two ideas and bring them together so that you could do high-level abstraction, but efficiently enough and close enough to the hardware for really demanding computing tasks. And that is where I came in. And so C++ has classes like SIMULA but they run as fast as C code, so the combination becomes very useful.” — Bjarne Stroustrup

1. 3

1. 2

Merci bien for a thought provoking diagram/article :)

1. 7

Algol, Simula and COBOL are glaring omissions. Algol has basically influenced every C-like language out there. Simula was the start of the object idea. COBOL is still running and written today.

Fine. It’s opinionated. It’s a poorly informed opinion, then.

1. 1

Thanks to you, and others on this thread, I’ve been doing some reading on ALGOL and Simula. Indeed, they are glaring omissions, and I’ve included them now. I’m still not convinced COBOL is worth including though.

1. 4

I think omitting COBOL would be a mistake. It’s still in use, with literally billions of lines of code running in production; whether this is a good or bad thing is up for discussion. COBOL as a language was updated as recently as 2014, with companies such as IBM and Micro Focus still producing and supporting compilers and other tooling for it.

How it fits into any kind of lineage map is another point of discussion, but perhaps it should be a minor influence on another omission: BASIC. BASIC was on pretty much every home computer for over a decade and was the shell for many of those systems. It’s how an entire generation of computer users learned to program, for better or worse.

1. 6

I can’t visually parse the chart. The distinctions between color and line thickness require more visual acuity than I have.

You’re by no means required to make your publications accessible, but you might consider making it a little easier on those of us who lack your visual acuity :)

Also, C as a novel root language with no predecessor? BCPL? B? Algol? :)

1. 2

Hehe, sorry :) I’m not sure how to improve the readability though: it’s quite a dense chart, and I just went for the path of least resistance: much of my site is filled with commutative diagrams in xypic, so I figured why not use that? Any other solution would require a significant time investment :(

Pretty much all the root languages have some predecessors, but I’ve got to cut the chart off at some point. If it’s any consolation, I think all the root languages are sufficiently novel to appear in bold. I’ve tweaked the wording to “no significant predecessor” though.

1. 6

No significant predecessor? ALGOL is probably the most influential language of all time.

1. 2

No worries, I do totally understand, and it’s a nice piece of work. Good on you for publishing it!

FWIW, I love this stuff. I actually own the three-volume ACM HOPL (now four, but I haven’t bought the fourth yet because it’s ungodly expensive), and they’re great :)

1. 5

Obviously missing a lot of languages, but Prolog seems like a major omission. It was one of the inspirations for Erlang; in fact, Erlang was prototyped in Prolog before being rewritten in C for performance reasons.

Joe Armstrong even helped the author of Seven Languages in Seven Weeks learn Prolog as part of writing the book:

1. 1

Historically speaking, I’m of the opinion that Prolog isn’t that important. I know it’s at least mentioned in many university-level PL courses, but why? It didn’t influence the design of any of the other languages on that chart. I didn’t know Erlang was prototyped in Prolog, but as far as language design goes, I don’t think Erlang derives inspiration from Prolog.

What other omissions did you have in mind?

1. 4

Well, lots of languages: Algol, Scheme, Self, Simula, SQL, C#, COBOL, Pascal, etc. (Edit: didn’t see Self before! I originally looked at the graphic on mobile. My bad.)

I bring up Prolog because it is an early logic-programming language, representing another (major? minor?) paradigm, and it is a parent to Erlang, which is on the list.

Ultimately, this is an opinionated list. So whatever floats your boat.

I don’t think Erlang derives inspiration from Prolog.

Erlang started out as a modified Prolog.

1. 3

Scheme is arguably a major language, but it didn’t have the kind of influence that I imagined it would. The chart does show LISP, CL, and their influences, if it’s any consolation. The others I still can’t justify crowding the chart with.

Self was added after I got some comments about it. Sorry, the site isn’t really made for mobile: there are way too many large commutative diagrams that don’t render well on mobile.

2. 3

don’t think Erlang derives inspiration from Prolog

Seeing how it’s almost the weekend, I’m going to leave this here, but I’ll also say that when Joe Armstrong was thinking about this new language, he wrote down the rules, and somebody else told him he had done “an algebra” and that he could embed it in Prolog easily (not sure where the quote comes from).

1. 1

1. 4

The graph gave me some good chuckles. :-)

The only reason I don’t object to C++ not being categorized as “Unlikely to influence anything in the future, due to remarkably poor design” is that its remarkably poor design does have influence: it tells people what not to do in future languages.

I think the author is missing the point a bit in the Rust to C++ comparison though, with its “look at what C++ can do, but Rust can’t!”.

I mean, … yeah, that’s the idea of not adding everything to the language?

1. 4

C++ wasn’t designed from the beginning to be the way it is today; it has evolved to stay relevant over its 30+ year history. As much grief as the C++ standards committee gets, they’ve done a good job of significantly improving the language over the last decade while maintaining compatibility, even for projects with 5–6 million lines of existing code. In terms of influence, RAII and the const/non-const member-function distinction (cf. &self and &mut self in Rust) came to Rust directly from C++.

As many features as it has, quite a few of them are vestigial for practical purposes. Every team I’ve been on has actually used a subset of the language: very limited inheritance, no exceptions, no RTTI, and few templates (for specific purposes). Since the downturn of the OOP hype, a lot of the C++ you find these days is much closer to FP than to OOP.

1. 2

laughs

I must admit that I have a soft spot for C++. Pragmatically speaking, it’s a very effective language for engineering, for people willing to invest the time to read the standard library documentation. Yes, I agree that no future language would want to keep wedging in features like this, but I’m impressed that C++ is able to do it at this pace without breaking the language.

1. 2

In ZFC-based mathematics, say in abelian groups, A⊕B=B⊕A, where the equality is a set-based equality. In modern mathematics based on category theory

Is this implying that modern mathematics is not based on ZFC?

1. 4

Corrected; thanks.

1. 3

Agreed; the phrasing “modern mathematics” is completely unnecessary. To me it seems the point is to draw a distinction between set-based and category-based foundations.
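For what it’s worth (my own gloss, not from the article): as sets, A⊕B has carrier A×B while B⊕A has carrier B×A, which are literally different sets of ordered pairs, so the set-based equality does not hold on the nose. What does hold is a canonical isomorphism given by the swap map:

```latex
\sigma \colon A \oplus B \longrightarrow B \oplus A,
\qquad \sigma(a, b) = (b, a),
% \sigma is a group homomorphism with inverse (b, a) \mapsto (a, b), hence
A \oplus B \;\cong\; B \oplus A
% canonically, even though the two are not equal as sets.
```

The category-based viewpoint records this canonical isomorphism rather than asserting equality, which seems to be the distinction the article is after.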

1. 1

I think they’re drawing a distinction between modern category-theory-based mathematics and older techniques.