I personally hope the Matrix protocol continues to improve and become more popular. They’ve been working on improving their privacy lately, and it seems to me that they’re the only one with E2E, federated, decentralized group chats seeing growing adoption (e.g. France, KDE, etc.).
Definitely not an easy issue from what I’ve seen, especially compared to something like Signal where iirc they just use phone numbers as IDs. Right now E2E isn’t enabled by default, since supporting so many platforms and devices results in the hassle of having to verify each device ID. However, it’s been improving over the years and I believe one of their long term goals is to make this as convenient as possible.
I encourage people to check out the various homeserver implementations and contribute to them if interested; the reference implementation is Synapse written in Python, there’s Dendrite written in Go, Ruma written in Rust, and probably others as well. The various project members are also very easy to reach through the official rooms like MatrixHQ (#matrix:matrix.org), and Matthew the project lead is on lobste.rs as well (/u/arathorn).
The problem, as usual, is that desktop clients aren’t great, from my limited experience. I’m still missing something like Gajim or Pidgin or Miranda (from a usability/non-Electron point of view). I’m in the process of looking into Spectral and Quaternion right now, in order to replace XMPP. (I need proper, small clients for Linux and Windows, not necessarily the same for each.)
Definitely agree with you on that; right now Riot is so far ahead of the alternative clients that it basically forces you to use either Electron or WeeChat. It’s getting better over time though, especially with something like Fractal, which is developed by GNOME. I’m personally waiting for a nice Qt client to mature and reach parity with Riot. But I have faith in the underlying ideas and spec, and I expect that in the future users will have great freedom in choosing whatever client they want to use.
I actually got nheko, Quaternion and Spectral to run on Windows yesterday. I think Spectral currently looks best, but Quaternion just needs some more polish (maybe take some hints from Quassel). nheko is a little too non-native for me.
Laughed when I got to C++. Unfortunately it’s still very popular in the industry, I hope Rust replaces it in the future.
There are also alternatives to using browsers like Firefox or Chromium with a vim plugin, personally I use qutebrowser as my daily driver.
personally I use qutebrowser as my daily driver.
While I have pi-hole, being unable to have uBlock Origin in my browser is what drove me away from qutebrowser.
Can you try to “sell” it to me, i.e. what are qutebrowser’s advantages over my current browser, Vivaldi?
Thanks to qutebrowser, I pretty much never have to use a mouse. Add to that its integration with gopass (including TOTP), extremely customizable config settings, and custom keybindings to integrate tools like mpv for playing videos or aria2 for opening magnet links, and it’s incredible.
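For anyone curious, bindings like those are just a few lines in qutebrowser’s `config.py`. A rough sketch from memory (check the qutebrowser docs for the exact command syntax on your version; mpv and the qute-pass userscript are external dependencies you install separately):

```python
# ~/.config/qutebrowser/config.py (fragment; `config` is provided by qutebrowser)

# Play the current page's video in mpv instead of the browser.
config.bind(',m', 'spawn mpv {url}')

# Hint links and open the selected one in mpv.
config.bind(',M', 'hint links spawn mpv {hint-url}')

# Fill login forms via the qute-pass userscript (works with pass/gopass).
config.bind(',p', 'spawn --userscript qute-pass')
```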
I’m not a very good salesman, and I don’t know much about Vivaldi, but for me what makes qutebrowser the best I’ve used so far is the extensibility and keyboard-focused workflow. d_ listed a few possible uses, and really you can do anything you want with it.
Wow, this is really nice. Really user friendly (login was so easy) and intuitive. Very sane defaults. Thanks for following the XDG Base Dir spec btw.
This is pretty cool. I just don’t understand your insistence that it’s all about the tree notation. I’d love to see a proper description of the grammar. It seems to have echoes of OMeta.
I just noticed the link to https://github.com/treenotation/jtree/blob/master/languageChecklist.md. The notation for grammars seems quite barbaric. But maybe I just don’t understand. A more elaborate write-up than a checklist may help.
Tree Notation is helpful for lexing. But your demos inevitably have a lot more than lexing. Either you need to treat them as first-class research objectives, or you need to make it easy to tease apart all the moving parts, by, say, showing how the same language is implemented with the same grammar language in tree notation vs a more conventional lexer.
I just don’t understand your insistence that it’s all about the tree notation.
I know I’ve done a lousy job so far at explaining what I see. I gave a talk 2 years ago about thinking about source code in 2 and 3 dimensions. You can definitely do this type of language in the traditional manner with parsers that have colons and semicolons and brackets etc, but my insistence is that none of that is necessary. Our mission is to rid the world of unnecessary complexity (while leaving art untouched :) ). For reference, Dumbdown took me approximately 50 minutes from idea to shipping an MVP with autocomplete, syntax highlighting, parsing, type checking, compiling, and unit tests. And the code is relatively timeless. I know I am not doing a good job at explaining why Tree Notation makes this possible, but I hope empirically the data points will start adding up, and maybe people smarter than I am can explain it better.
I’d love to see a proper description of the grammar.
The Dumbdown grammar is 84 lines long. It is written in a Tree Language called Grammar. Grammar is written in itself and is 300 lines long: https://treenotation.org/designer/#standard%20grammar (though at the moment the compiler for Grammar is still written in TypeScript).
The notation for grammars seems quite barbaric.
I’ve gotten super fast with it but there’s lots to be improved. The “Infer Grammar” button on the designer app can be helpful. But really I should record some demo videos or something for now, until it’s good enough that it is self-explanatory.
But your demos inevitably have a lot more than lexing. Either you need to treat them as first-class research objectives, or you need to make it easy to tease apart all the moving parts, by, say, showing how the same language is implemented with the same grammar language in tree notation vs a more conventional lexer.
An immense amount of work to do. The goal is to build a movement that can do this sort of thing. Tree Notation now is about where the Web was in ~1989. Just at the very beginning. There are endless applications of this. We need help with all the things. Perhaps the biggest help of all would be someone better than me at organizing the helpers :).
I think you should set aside time to explain what it is, how it works, and with examples. It’s worth having an accessible description of the thing you spent years on if you’re trying to present it to others. The submission makes me think you are. So, definitely make something that lets us know what Grammar is and how it works. Then, people can start making comparisons, experimenting, etc.
I think you should set aside time to explain what it is, how it works, and with examples.
This is very good advice and feedback. I think the problem is I’m trying to do this, and now have papers, FAQs, blog posts, demo apps, source code, over a dozen sample languages, and I’m still doing a terrible job at it. Here in person at the Cancer Center coworkers are able to explain it to new people better than me. I think we just need to find someone who is great at visuals, youtubes, or whatever, to lead a better explanatory site. I could work on improving my skills in that domain (and I will keep improving), but there’s still so much work to do on the programming end that if we could find the right collaborators to lead that sort of site, that might be the more efficient approach.
Yes, Ometa, TXL, and Rascal are all ones I studied. Thank you for pointing those out. Often when someone mentions something, even though I may have studied it at one point, it’s a great signal that I should go take another look to see what things I missed. Thanks.
What would you say is the type of person that can see what you see? I’m basically just a techy Joe Schmoe but I find it easier to understand M theory than your work on Tree Notation. Do I have to have a PhD in math/cs to understand this stuff?
Do I have to have a PhD in math/cs to understand this stuff?
Well I only have a bachelor’s in Economics (and flunked out of college once), so I guess that would mean no.
What would you say is the type of person that can see what you see?
I just stumbled upon this notation, which seemed a simple way to do things. I asked people much smarter than me for years for help and guidance and whether they thought it would be useful. No one thought it was interesting but no one could explain to me why it wasn’t. So I spent years and years trying to find flaws in it, and thinking of all the places where it could help. So I have about 7 years or so of thinking about this and researching it somewhat obsessively. I built a database of 10K+ notations and languages with 1k+ columns and there’s not a single instance found where another notation or language can do things simpler than Tree Notation (by simpler I mean with fewer parts). I built crude hardware that could operate directly on Trees (instead of our register based models of computing). I built 3-D models of programs (copying the research techniques of Watson and Crick). I even spent 2 months trying microdosing to figure it out.
And I’ve probably made some obvious dumb mistakes, but there’s something interesting here. I bet today’s version is garbage compared to the version that will be out in 1 year.
Anyway, so what type of person does it take to see what I see? A really dumb stubborn person I guess.
The Dumbdown grammar is 84 lines long. It is written in a Tree Language called Grammar. Grammar is written in itself and is 300 lines long: https://treenotation.org/designer/#standard%20grammar (though at the moment the compiler for Grammar is still written in TypeScript).
Are you familiar with the paper STEPS Toward The Reinvention of Programming and generally with VPRI?
Yes, I’ve read that paper a couple times and am familiar with VPRI. Alan Kay’s ideas have very much helped me along the way. I don’t believe I’ve talked to anyone from VPRI though (or the later YC backed lab that did similar work). Thank you for mentioning that again. I’ll give it another look.
“Well I only have a bachelors in Economics” “So I spent years and years trying to find flaws in it, and thinking of all the places where it could help. So I have about 7 years or so of thinking about this”
You’ve put about a Ph.D.’s worth of time into this, though. It might be harder out of the blue than you realize.
You’ve put about a Ph.D.’s worth of time into this
I laughed out loud at this. I never thought of it so succinctly. You got me thinking and I did a Google search for “Can you get a PhD without going to graduate school?”. Looks like such a thing might be possible, but only in the UK, where it looks like if you have some peer reviewed papers you can apply for a “PhD by merit”. Not sure if that’s accurate, just what a quick Google search turned up.
It might be harder out of the blue than you realize.
I realize it’s really hard. I’ve done thousands of experiments on Tree Notation and many more experiments that I’ve just “run in my mind” over the years. So I see what it will become. (And that’s if there isn’t some fatal flaw that I have overlooked, which I still think has at least a 10% probability of happening.) So I’ve envisioned all the use cases, the benefits, and how to overcome the challenges. Only now are things starting to get good enough where other people are seeing the potential.
Here at the Cancer Center I no longer explain it to people when I can help it, instead letting other people at the lab, who have far less experience with it than me, explain it. They do a better job at explaining it. I’m hoping we find one or more people who are good at visuals and communicating and could build us a new website (https://www.reddit.com/r/treenotation/comments/cyjfpy/wanted_someone_to_redo_the_treenotationorg/) to explain it better. Pull requests are also welcome.
I’ve gone through your website and previous posts in the past, and I legitimately had a false memory in my mind (until you corrected it) that your website said you were working on your PhD. Very strange.
Hmmm. I have toyed with pursuing it, but the way the system is set up in the U.S. doesn’t really make sense. Seems more like indentured servitude than science to me :)
This is kind of out of left field, but I’m very much against what some wise people call “Intellectual Slavery” laws of copyrights and patents, so it would be hard for me to stomach going along with a system of closed access journals, expensive textbooks, etc. And like I said, I already flunked out of school once so I just don’t think it’s a good fit :).
My wife has a PhD though and so do most of my coworkers (some of them are phd students). Do you have one? Would you recommend it? I’m kind of curious if there’s a way to do it that would make sense, perhaps outside of the US.
I’m nowhere near having a PhD so unfortunately I can’t give any advice from my own experience. From what I’ve seen though, a PhD is only really good for academia/research (even in CS) so I feel like you probably wouldn’t benefit from it, all things considered. You would probably get better advice from your wife and coworkers though :p
Below is a visualization that you have to look at with a grain of salt because it’s not quite accurate, but this is the type of thing I see when I look at Tree Languages. Basically, programs are like spreadsheets, 2D grids of cells. The location of a word is meaningful and grounded in geometry. The parsed version has the same shape.
You seem open to some feedback, so, here’s some of mine, as someone who’s looked a couple of times at Tree Notation and still hasn’t quite “got it”. I’ve just spent the last 20 minutes looking again at various parts of the site.
(Before I go any further, please be assured this is meant with the best of intentions, and no malice. I feel like there’s something here, and a lot of work has gone into it but somehow I’m not getting it - and I’m not sure what steps I would need to take to get there).
The “What is Tree Notation?” on the front page doesn’t explain what it is, except for the hint “Tree Notation is an error-free base notation like binary”. But I don’t really know what that means.
“Tree Notation […] is grounded in 3-dimensional geometry” (again, from the front page) doesn’t mean anything to me at this point. Honestly, it made me think of Time Cube. I’m sure there’s something there, but I feel intimidated by the idea that I need to understand something mathsy to use Tree Notation. I think I have completely misunderstood what this sentence means, but I also have no idea what it means.
The example with package.json
looks like an alternative syntax for JSON. I think the point is that you can write a syntax for JSON itself in Tree Notation? Which makes it like… JSON Schema? What’s the comparison to something I might be familiar with?
The Lisp example is basically the same. It just looks like an alternative grammar: S-Exps without the parens. It’s not obvious how or why this is useful.
On the Grammar section of the front page:
You can write new Grammar files to define new languages. By creating a grammar file you get a parser, a type checker, syntax highlighting, autocomplete, a compiler, and a virtual machine for executing your new language.
You know, I’d probably lead with this. I think it’s the most concrete information on the page
In the “Who this Library is For” section, I’m unclear what “this Library” is at this point. Is it everything I need to write a language? Is it a parser for tree notation?
The checklist is halfway to being a tutorial. It would be good if it explained the “why” as well as the “what” (and be careful about snipping previously-seen code out of the examples in the tutorial. Much better, IMO, to show all code and note what has changed from the previous block than to elide the code and expect people to remember what it was - don’t make me think ;) )
For reference, Dumbdown took me approximately 50 minutes from idea to shipping an MVP with autocomplete, syntax highlighting, parsing, type checking, compiling, and unit tests.
A walkthrough tutorial for putting this together would probably be super helpful and cool.
Here at the Cancer Center I no longer explain it to people when I can help it, instead letting other people at the lab, who have far less experience with it than me, explain it. They do a better job at explaining it.
Maybe get those written down? Or see if there are common ways it’s being communicated by these people that aren’t currently on the site.
as someone who’s looked a couple of times at Tree Notation and still hasn’t quite “got it”. I’ve just spent the last 20 minutes looking again at various parts of the site.
Thank you very much for taking the time. Truly grateful. This is such good data that the current docs are just not doing a good job. I’m thinking perhaps we need to take a much more “show vs tell” approach, with videos, screenshots, animated gifs, visualizations, demos up front, etc, and move the text and FAQ further down.
Before I go any further, please be assured this is meant with the best of intentions, and no malice.
You are too nice!
- on the front page doesn’t explain what it is,
Good point. The more I think about it the more I think the front page should all be show versus tell.
- Tree Notation […] is grounded in 3-dimensional geometry” (again, from the front page) doesn’t mean anything to me at this point.
This just means that programs have a 3-D representation. Think of them like molecules. Here’s a video of a talk I gave 2 years ago (https://www.youtube.com/watch?v=ldVtDlbOUMA) that might clarify that more. It was at JSConf 2017 in SF; I’m looking for the actual video and not just my screen recording, but can’t find it. With traditional languages, generally the first step in parsing is to strip whitespace and turn the code into a 1D sequence. Your code is not mapped into the 3D world. With Tree Notation it is: there’s a location for each word/line, and changing that changes the meaning of the program. It’s like a design constraint. Tree Notation programs in a sense need to obey the laws of physics (I know that’s an inaccurate cliche, but it might hint at what I’m getting at?).
it made me think of Time Cube
I’m not familiar with that one, but I totally understand the feedback that this is an “A New Kind of Science” type of thing. That’s why I spend most of my time on code, tooling, data gathering, products, user tests, demos, and putting the theory into practice, and comparatively little of my time on papers and theoretical work. I’ve tried to see where others who have stumbled upon something similarly simple and perhaps profound have erred. A New Kind of Science has been very influential to me, in particular as an experiment in what it would look like to go all in on the theoretical aspect. My takeaway from that is that it’s a brilliant 50-100 page book, but then it just goes way too far out on a limb. So the priority when dealing with a new library should be to slowly and incrementally build up the tooling to bring practical benefits from the work out early, while putting the theoretical implications at a lower priority. Not descending into madness, while not staying totally practical, while also not listening to people who are trying to disparage the work by saying you are descending into madness: it’s a tricky balance to maintain, but I’ve got some good strategies built up to walk that line :)
The example with package.json looks like an alternative syntax for JSON. I think the point is that you can write a syntax for JSON itself in Tree Notation? Which makes it like… JSON Schema? What’s the comparison to something I might be familiar with?
The Grammar Tree Language is sort of like JSON Schema. I just updated that example a bit with more information. Tree Notation is very low level (think binary or ascii). The Tree Languages can do anything (so you can have a Tree Language that maps to JSON like the Dug demo language, or you can make a Tree Language for building languages like Grammar, or you can make a general purpose Tree Programming language, etc). The idea is none of those things require parens or brackets or colons etc. Simple positioning in 3D geometry is the only thing we need to do all of the things that we traditionally use syntax for. Positioning gives you abstraction, scope, trees, etc. The building blocks for everything else.
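To make the comparison concrete, here is my own toy example (not taken from the official site) of how a small JSON fragment like `{"name": "mypackage", "version": "2.1.1", "scripts": {"test": "jest"}}` might look as a Tree Language:

```
name mypackage
version 2.1.1
scripts
 test jest
```

Nesting is expressed purely by indentation and word position; there are no braces, quotes, colons, or commas.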
The Lisp example is basically the same. It just looks like an alternative grammar: S-Exps without the parens. It’s not obvious how or why this is useful.
If you look at any one example you can say they are all only incremental improvements. It’s all about the network effects. I did not write syntax highlighting for my lisp-like Tree Language. I wrote it for my HTML Tree Language. But it works for both. Etc. If you master the basics of Tree Notation syntax (maybe there are like 5 rules? I’m not sure how many, but it’s fewer than 10 for sure), you understand the syntax for what is now dozens of languages (and hopefully in the future, thousands). The semantics you need to learn from that domain, but hopefully we eliminate one large category of mistakes and confusion.
- You know, I’d probably lead with this. I think it’s the most concrete information on the page
Good idea. Thanks! Will probably do that in the new version.
- In the “Who this Library is For” section, I’m unclear what “this Library” is at this point. Is it everything I need to write a language? Is it a parser for tree notation?
This is a common point of confusion. We should reorganize it so there’s the Tree Notation landing page, and then there’s the JTree Landing page (the tree notation library for TypeScript/Javascript). In the new redesign, we’ll remove the Jtree specific stuff and instead add links to all the different implementations, similar to json.org.
- The checklist is halfway to being a tutorial. It would be good if it explained the “why” as well as the “what” (and be careful about snipping previously-seen code out of the examples in the tutorial. Much better, IMO, to show all code and note what has changed from the previous block than to elide the code and expect people to remember what it was - don’t make me think ;) )
Good feedback! Thanks. Someone is volunteering to do a new one. Hopefully he’ll post that soon.
- A walkthrough tutorial for putting this together would probably be super helpful and cool.
Noted. Will add.
Maybe get those written down? Or see if there are common ways it’s being communicated by these people that aren’t currently on the site.
Great idea. I just sent an email to the lab folks. Will add.
Yeah, videos would be great.
How quickly you can use it is not a great signal, because it’s your baby and you’ve created it from scratch. Needs more hallway usability testing.
Needs more hallway usability testing.
Yeah, it’s painful in this regard. Yesterday’s designer launch addressed a number of the most common issues, but still the list of things to do and user requests is a mile long. The in person community here is very helpful, but hoping we’ll be able to get a community going of remote contributors in the months ahead.
How quickly you can use it is not a great signal
Agreed. It’s what I have though, until the tooling gets better. I guess the promising signal though is that this sort of thing would have taken me 10 hours or more 6 months ago (speaking in rough estimates), so the trend is in the right direction.
There’s a thing at work we’re currently representing rather clumsily with JSON that I’ve been wanting to find a better encoding for. Defining a tree language for it, and a back-and-forth converter between tree notation and the current JSON might be a cheap experiment to see what it’s all about.
If tree notation is as useful as you make it sound, that could be a huge productivity boost. It sounds like it could make it easier for people to edit these structures by hand, to write more advanced features for the visual editor, create automatic transformations of common patterns, and so on.
You have me intrigued.
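For what it’s worth, the core of such a back-and-forth converter is quite small. A toy Python sketch (my own, not official tooling; it only handles string leaves and nested objects, using one space per indent level):

```python
def to_tree(node, indent=0):
    """Serialize nested dicts with string leaves into tree-notation-style lines."""
    lines = []
    for key, value in node.items():
        prefix = " " * indent
        if isinstance(value, dict):
            lines.append(prefix + key)                  # branch: key on its own line
            lines.extend(to_tree(value, indent + 1))    # children indented one level
        else:
            lines.append(f"{prefix}{key} {value}")      # leaf: key and value on one line
    return lines

def from_tree(lines):
    """Parse the lines back into nested dicts (inverse of to_tree for this subset)."""
    root = {}
    stack = [(-1, root)]  # (indent level, dict) pairs
    for line in lines:
        indent = len(line) - len(line.lstrip(" "))
        word, _, rest = line.strip().partition(" ")
        while stack[-1][0] >= indent:  # pop back to this line's parent
            stack.pop()
        parent = stack[-1][1]
        if rest:
            parent[word] = rest
        else:
            child = {}
            parent[word] = child
            stack.append((indent, child))
    return root
```

Round-tripping the JSON structure through `to_tree`/`from_tree` should be enough to get a feel for whether the indented form is nicer to edit by hand.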
Please do let me know how it goes.
I apologize in advance for the difficulty as the current state of things is early, but appreciate any help toward making things better for the future community!
I use it myself and I really like it. Although I self host it, I still bought a subscription to support the dev. It used to be partly proprietary but he decided to just open source everything, which I thought was really cool. If anyone wants a dark theme scss file for it let me know.
I was originally planning on following the “open core” model a la Gitlab, but I became so disillusioned with proprietary software that I decided to never go down that path again. I’d never comfortably self host proprietary software in my personal servers, so why subject others to it?
Sometimes it’s perfectly possible to build a sustainable business with free software to support yourself; as an example, Commento is already profitable [1].
Btw, thanks for supporting the project!
[1] the same can’t be said of many IPO’d Silicon Valley companies, eh? ;)
Anyone have recommendations for other types of testing for C? Like property testing and mutation testing, etc.
For property-based testing, look at theft.
I think a lot of it is just the power of the status quo. Chrome took over back then because it was so much faster than anything else.
The market share is predominantly non-techy users that won’t go through the effort of switching without an obvious benefit. I got my friends to switch when I told them that Firefox is now faster (it is for me, but definitely not as clear cut as Chrome back then). But besides a small speed up, why would the average user switch? I have a hard time answering that question, and that’s the problem.
There must be a name for the saying, but I don’t know what it is so I’ll just state it: new technology must be vastly superior to succeed its predecessors. Otherwise the status quo bias will not be overcome.
Chrome took over back then because it was so much faster than anything else
ehh. It started as this new cool simple fast browser. But it took over because it was persistently advertised on the main page of google dot com and most other Google properties.
Very true, back then I thought it was nice that Google was successfully raising awareness for such a great browser, but looking back it seems kind of shady. A hint of the continuing attempt to vertically integrate the Web.
Do you know how the Nim ecosystem is these days with following the XDG Base Dir spec? I’ve been looking at the github issues about it but it’s kind of hard to tell how compliant Nim is or if I have to use workarounds.
Nim follows XDG_CONFIG_HOME and XDG_CACHE_HOME. If you see something broken please ping me or open an issue :)
Freedesktop.org made it.
Not sure if it matters what I think of it; it just happens to be confusing, from an outsider’s perspective going by GitHub alone, how Nim handles user directories and such, and I was hoping federico had the inside scoop.
I’m just asking because I genuinely don’t know what is the ‘right’ way to deal with dot files or why.
For Windows, %AppData% and similar directories should be used. Dotfiles/dotfolders aren’t even hidden on Windows and can cause issues, so they shouldn’t be used at all really. macOS, I think it’s just ~/Library? Take the last two with a grain of salt because I don’t use them.
For Linux, XDG Base Dirs is the ‘right’ way. I think the Arch Wiki page has the best summary of it. Most modern languages will have a package for it, older languages usually entail manual implementation. Dvisvgm is an example of a C++ program that recently followed it.
I hope that helps, not sure if that’s what you were looking for.
Edit: actually, now that I think about it, I’ve heard many macOS users prefer XDG Base Dirs, so if you’re lazy you can just use the XDG Base Dir spec for all Unixy OSes, I think.
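For languages without a ready-made package, implementing the lookup by hand is only a few lines. A minimal Python sketch (the fallback paths come straight from the spec, which also says to ignore relative values):

```python
import os
from pathlib import Path

def xdg_config_home() -> Path:
    """Return $XDG_CONFIG_HOME, falling back to ~/.config per the spec."""
    value = os.environ.get("XDG_CONFIG_HOME")
    if value and os.path.isabs(value):  # the spec says relative paths are invalid
        return Path(value)
    return Path.home() / ".config"

def xdg_cache_home() -> Path:
    """Return $XDG_CACHE_HOME, falling back to ~/.cache per the spec."""
    value = os.environ.get("XDG_CACHE_HOME")
    if value and os.path.isabs(value):
        return Path(value)
    return Path.home() / ".cache"
```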
I agree. While I have a ton of dotfiles in my home directory, putting things in ~/.config
is the standard for most newer software on my machine. On the other hand, considering how popular just adding a dotfolder to the home directory is, I doubt most users would care too much if your software did that instead.
I’m not sure about the proportion of users, but there are definitely many that do not appreciate it.
Rob Pike for example:
Second, and much worse, the idea of a “hidden” or “dot” file was created. As a consequence, more lazy programmers started dropping files into everyone’s home directory. I don’t have all that much stuff installed on the machine I’m using to type this, but my home directory has about a hundred dot files and I don’t even know what most of them are or whether they’re still needed. Every file name evaluation that goes through my home directory is slowed down by this accumulated sludge.
http://xahlee.info/UnixResource_dir/writ/unix_origin_of_dot_filename.html
Some Reddit threads about the frustration:
https://www.reddit.com/r/linux/comments/971m0z/im_tired_of_folders_littering_my_home_directory/
Similar frustrations on Windows:
https://www.reddit.com/r/pcgaming/comments/3jff7a/so_many_games_just_throw_their_save_crap_into/
https://www.rockpapershotgun.com/2012/01/23/stop-it-put-save-games-in-one-place/
https://www.reddit.com/r/valve/comments/60b8ld/could_valve_please_update_game_developer/
Debian seems to refer to it as a bug to be reported:
Debian does not require packages to conform to the XDGBDS and there is not (yet) a coordinated effort to encourage upstreams to do so. But to avoid duplication of effort we can collect upstream bug reports here regarding XDGBDS conformance.
https://wiki.debian.org/XDGBaseDirectorySpecification
GNOME has a long list of reasons why you shouldn’t:
Even ancient programs such as Emacs are seeing movement towards adopting the spec:
https://debbugs.gnu.org/cgi/bugreport.cgi?bug=583
There’s probably more but it’s my general experience that people do care.
The spec is followed by Qt, GTK, GNOME, KDE, LXDE/LXQT, Xfce, etc. (add Redhat and others and that’s most of what Freedesktop.org is, maybe minus the GUI toolkits). There is clearly an agreement of the biggest players in the Linux desktop world. The original Arch wiki page I linked has a long list of software and their compliance status.
Personally, for anyone reading this, I beg you to listen to the spec. Please.
I work in a large organization that employs a lot of programmers. There’s definitely pressure to do professional development outside of work hours. No one has ever said it’s mandatory, but people are encouraged to do Udemy courses or their ilk and are praised highly and publicly for completing them. No one has ever been fired for not doing that, but depending on how cynical you are this can come off a lot like “putting in hours outside of work is how you advance”. It’s a little different from what’s described in the article, as my employer actually discourages open source contribution (they issued a, in my opinion, fraudulent copyright complaint against one of my GitHub repos that was subsequently reversed), but the idea that you need to pick up new skills relevant to the company’s work on your own time is definitely there.
This definitely isn’t universal but I’ve heard similar stories from elsewhere fairly often.
You can see why it makes sense: training people at work means giving up productivity, it’s expensive, and it generally doesn’t work very well. If you can actually get people to do it on their own time, that’s a massive benefit you don’t have to pay a penny for.
And I’m someone who enjoys working on hobby projects and using new stuff for them at home, but even I loathe the de-facto policy. It makes something I do for fun feel like rendering unpaid services to my employer.
When I worked as a consultant, the only hours that counted as work were the ones I logged at the client. Besides that, meetings were in my own time. We also had some mandatory evenings for information, and some semi-mandatory evenings for learning new technologies (I did attend them at first, but things didn’t really work out between me and that employer for various reasons, and I stopped attending them).
At one client, there were a few eager people who shouted that they put in extra time at home to learn. This can create an atmosphere in which it’s expected that you work some more at home (though I never really experienced it this way).
Some other times some of the management hinted that you should put in more time than what’s in your contract. Sometimes subtly (“you should only log the hours you worked, unless you messed something up and need to repair it”), sometimes more blatantly (at an intake for a potential new client: “tell them that you may not be familiar with all the technologies they use but you will spend the evenings learning them if this happens”).
All together, it’s not that common in my experience. I’ve heard much worse stories in other lines of work. I was more annoyed by managers with manipulative tendencies (when I worked as a consultant – it might have to do with them literally getting paid for every hour I work, no matter the quality of my work).
I think people rely on JavaScript too much. With sourcehut I’m trying to set a good example, proving that it’s possible (and not that hard!) to build a useful and competitive web application without JavaScript and with minimal bloat. The average sr.ht page is less than 10 KiB with a cold cache. I’ve been writing a little about why this is important, and in the future I plan to start writing about how it’s done.
In the long term, I hope to move more things out of the web entirely, and I hope that by the time I breathe my last, the web will be obsolete. But it’s going to take a lot of work to get there, and I don’t have the whole plan laid out yet. We’ll just have to see.
I’ve been thinking about this a lot lately. I really don’t like the web from a technological perspective, both as a user and as a developer. It’s completely outgrown its intended use-case, and with that has brought a ton of compounding issues. The trouble is that the web is usually the lowest-common-denominator platform because it works on many different systems and devices.
A good website (in the original sense of the word) is a really nice experience, right out of the box. It’s easy for the author to create (especially with a good static site generator), easy for nearly anyone to consume, doesn’t require a lot of resources, and can be made easily compatible with user-provided stylesheets and reader views. The back button works! Scrolling works!
Where that breaks down is with web applications. Are server-rendered pages better than client-rendered pages? That’s a question that’s asked pretty frequently. You get a lot of nice functionality for free with server-side rendering, like a functioning back button. However, the web was intended to be a completely stateless protocol, and web apps (with things like session cookies) are kind of just a hack on top of that. Using a good web app without JavaScript can be a pain for many use cases (for example, upvoting on sites like this: you don’t want to force a page refresh, potentially losing the user’s place on the page). Security is difficult to get right when the server manages state.
I’ll argue, if we’re trying to avoid the web, that client-side rendering (single-page apps) can be better. They’re more like native programs in that the client manages the state. The backend is simpler (and can be the backend for a mobile app without changing any code). The frontend is way more complex, but it functions similarly to a native app. I’ll concede that a poorly-built SPA is usually a more painful experience than a poorly-built SSR app, but I think SPAs are the only way to bring the web even close to the standard set by real native programs.
Of course, the JavaScript ecosystem can be a mess, and it’s often a breath of fresh air to use a site like Sourcehut instead of ten megs of JS. The jury’s still out as to which approach is better for all parties.
(for example, upvoting on sites like this: you don’t want to force a page refresh, potentially losing the user’s place on the page)
Some of the UI benefits of SPA are really nice tbh. Reddit for example will have a notification icon that doesn’t update unless you refresh the page, which can be annoying. It’s nice when websites can display the current state of things without having to refresh.
I can’t find the video, but the desire to eliminate stale UI (like outdated notifications) at Facebook was one of the reasons React was created in the first place. There just doesn’t seem to be a way to do things like that with static, JS-free pages.
The backend is simpler (and can be the backend for a mobile app without changing any code).
I never thought about that before, but to me that’s a really appealing point in favor of a full-featured frontend design. I’ve noticed some projects with the server-client model where the client side was built with Vue/React, and they were able to make an Android app easily because the same backend could be reused without changes.
The jury’s still out as to which approach is better for all parties.
I think, as always, it depends. In my mind there are some obvious choices for obvious use cases. Blogs work great as just static HTML files with some styling. Anything that really benefits from being dynamic (“reactive” I think is the term webdevs use) confers nice UI/UX benefits to the user with more client-side rendering.
I think the average user probably doesn’t care about the stack and the “bloat”, so it’s probably the case that client-side rendering will remain popular anytime it improves the UI/UX, even if it may not be necessary (plus cargo-culting lol). One could take it to an extreme and say that you can have something like Facebook without any javascript, but would people enjoy that? I don’t think so.
But you don’t need to have a SPA to have notifications without refresh. You just need a small dynamic part of the page, which will degrade gracefully when JavaScript is disabled.
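As a sketch of that idea (the endpoint and element names are my own invention): the server renders the badge with a count, and a few lines of JS keep it fresh when available; with JS disabled, the page still shows the server-rendered count.

```javascript
// Pure helper: how the badge text is derived from a count.
function formatBadge(count) {
  return count > 99 ? '99+' : String(count);
}

// Progressive enhancement: only runs in a browser. The page already
// contains a server-rendered <span id="notif-count">, so nothing breaks
// without JavaScript; this just keeps it current.
if (typeof document !== 'undefined') {
  const badge = document.querySelector('#notif-count');
  setInterval(async () => {
    const res = await fetch('/api/notifications/count'); // hypothetical endpoint
    const { count } = await res.json();
    badge.textContent = formatBadge(count);
  }, 30000);
}
```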
Claim: Most sites are mostly static content. For example, AirBNB or Grubhub. Those sites could be way faster than they are now if they were architected differently. Only when you check out do you need anything resembling an “app”. The browsing and searching is better done with a “document” model IMO.
Ditto for YouTube… I think it used to be more a document model, but now it’s more like an app. And it’s gotten a lot slower, which I don’t think is a coincidence. Netflix is a more obvious example – it’s crazy slow.
To address the OP: for Sourcehut/Github, I would say everything except the PR review system could use the document model. Navigating code and adding comments is arguably an app.
On the other hand, there are things that are and should be apps: Google Maps, Docs, Sheets.
edit: Yeah now that I check, YouTube does the infinite scroll thing, which is slow and annoying IMO (e.g. breaks bookmarking). Ditto for AirBNB.
I’m glad to see some interesting ideas in the comments about achieving the dynamism without the bloat. A bit of Cunningham’s law in effect ;). It’s probably not easy to get such suggestions elsewhere since all I hear about is the hype of all the fancy frontend frameworks and what they can achieve.
Yeah SPA is a pretty new thing that seems to be taking up a lot of space in the conversation. Here’s another way to think about it.
There are three ways to manage state in a web app:

1. No real state: static documents.
2. State lives on the server (server-rendered pages, with at most a little JS on top).
3. State lives on the client (the SPA model).
As you point out, #1 isn’t viable anymore because users need more features, so we’re left with a choice between #2 and #3.
We used to do #2 for a long time, but #3 became popular in the last few years.
I get why! #2 is legitimately harder – you have to decide where to manage your state, and managing state in two places is asking for bugs. It was never clear if those apps should work offline, etc.
But somehow #3 doesn’t seem to have worked out in practice. Surprisingly, hitting the network can be faster than rendering in the browser, especially when there’s a tower of abstractions on top of the browser. Unfortunately I don’t have references at the moment (help appreciated from other readers :) )
I wonder if we can make a hybrid web framework for #2. I have seen a few efforts in that direction but they don’t seem to be popular.
edit: here are some links, not sure if they are the best references:
https://news.ycombinator.com/item?id=13315444
https://adamsilver.io/articles/the-disadvantages-of-single-page-applications/
Oh yeah, I think this is what I was thinking of. Especially on mobile phones, an SPA can be slower than hitting the network! The code to render a page is often bigger than the page itself! And it may or may not be amortized depending on the app’s usage pattern.
https://medium.com/@addyosmani/the-cost-of-javascript-in-2018-7d8950fbb5d4
https://news.ycombinator.com/item?id=17682378
A good example of #2 is Ur/Web. Pages are rendered server-side using templates which look very similar to JSX (but without the custom uppercase components part) and similarly desugar to simple function calls. Then at any point in the page you can add a `dyn` tag, which takes a function returning a fragment of HTML (using the same language as the server-side part, and in some cases even the same functions!) that will be run every time one of the “signals” it subscribes to is triggered. A signal could be triggered from inside an onclick handler, or even from an event happening on the server. This list of demos does a pretty good job at showing what you can do with it.
So most of the page is rendered on the server and will display even with JS off, and only the parts that need to be dynamic will be handled by JS, with almost no plumbing required to pass around the state: you just need to subscribe to a signal inside your `dyn` tag, and every time the value inside changes it will be re-rendered automatically.
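The signal mechanism described above can be approximated in a few lines of JavaScript. This is a toy analogy, not Ur/Web’s actual implementation:

```javascript
// A toy "signal": subscribers re-run whenever the value changes,
// roughly how a dyn fragment re-renders in Ur/Web.
function makeSignal(initial) {
  let value = initial;
  const subscribers = [];
  return {
    get: () => value,
    set(next) {
      value = next;
      subscribers.forEach((fn) => fn(value));
    },
    subscribe(fn) {
      subscribers.push(fn);
      fn(value); // render once immediately, like the initial page load
    },
  };
}

// Usage: a "dynamic fragment" that re-renders on every change.
const upvotes = makeSignal(0);
let fragment;
upvotes.subscribe((n) => { fragment = `<span>${n} upvotes</span>`; });
upvotes.set(1); // fragment is re-rendered automatically
```

The appeal of the real thing is that the server-rendered and client-rendered parts share one language and one state-passing mechanism, so there is no duplicated plumbing.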
This link may interest you as well: https://medium.com/@cramforce/designing-very-large-javascript-applications-6e013a3291a3
Reddit for example will have a notification icon that doesn’t update unless you refresh the page, which can be annoying. It’s nice when websites can display the current state of things without having to refresh.
On the other hand, it can be annoying when things update without a refresh, distracting you from what you were reading. Different strokes for different folks. Luckily it’s possible to fulfill both preferences, by degrading gracefully when JS is disabled.
I think the average user probably doesn’t care about the stack and the “bloat”, so it’s probably the case that client-side rendering will remain popular anytime it improves the UI/UX, even if it may not be necessary (plus cargo-culting lol).
The average user does care that browsing the web drains their battery, or that they have to upgrade their computer every few years in order to avoid lag on common websites. I agree that we will continue to see the expansion of heavy client-side rendering, even in cases where it does not benefit the user, because it benefits the companies that control the web.
Some of the UI benefits of SPA are really nice tbh. Reddit for example will have a notification icon that doesn’t update unless you refresh the page, which can be annoying. It’s nice when websites can display the current state of things without having to refresh.
Is this old reddit or new reddit? The new one is sort of SPA and I recall it updating without refresh.
Old reddit definitely has the issue I described, not sure about the newer design. If the new reddit doesn’t have that issue, that aligns with my experience of it being bloated and slow to load.
example, upvoting on sites like this: you don’t want to force a page refresh, potentially losing the user’s place on the page
There are lots of ways to do this. Here’s two:
Security is difficult to get right when the server manages state.
I would’ve thought the exact opposite. Can you explain?
In the case where you have lots of buttons like that, isn’t loading multiple completely separate DOMs and then reloading one or more of them somewhat worse than just using a tiny bit of JS? I try to use as little as possible, but I think that kind of dynamic interaction is the use case JS was originally made for.
Worse? Well, iframes are faster (marginally), but yes I’d probably use JavaScript too.
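That tiny-bit-of-JS approach can be sketched like this (the form markup, endpoint, and class names are assumptions): the form works as a normal POST without JavaScript, and the script only intercepts it when it can.

```javascript
// Pure helper: build the vote endpoint for a story id (path is hypothetical).
function upvoteUrl(storyId) {
  return `/upvote?story=${encodeURIComponent(storyId)}`;
}

// Enhancement layer: with JS enabled, vote in place instead of
// submitting the form and losing the scroll position.
if (typeof document !== 'undefined') {
  document.querySelectorAll('form.upvote').forEach((form) => {
    form.addEventListener('submit', (event) => {
      event.preventDefault(); // no page refresh
      fetch(upvoteUrl(form.dataset.storyId), { method: 'POST' });
      form.classList.add('upvoted'); // immediate visual feedback
    });
  });
}
```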
I think most NoScript users will download tarballs and run `./configure && make -j6` without checking anything, so I’m not sure why anyone wants to turn off JavaScript anyway, except maybe because adblockers aren’t perfect.
That being said, I use NoScript…
I’m not sure if this would work, but an interesting idea would be to use checkboxes that restyle when checked; by loading a background image whose URL carries a query or fragment part, the server is notified of which story was upvoted.
That’d require using GET, which might make accidental upvotes harder to prevent. Could possibly devise something, though.
One thing I really miss with SPAs (when used as apps), aside from performance, is the slightly more consistent UI/UX/HI that you generally get with desktop apps. Most major OS vendors, and most OSS desktop toolkits, at least have some level of uniformity of expectation. Things like: there is a general style for most buttons and menu styles, there are some common effects (fade, transparency), scrolling behavior is more uniform.
With SPAs… well, good luck! Not only is it often browser dependent, but matrixed with a myriad JS frameworks, conventions, and render/load performance on top of it. I guess the web is certainly exciting, if nothing else!
I consider the “intended use-case” argument a bit weak, since for the last 20 years web developers, browser architects, and our tech overlords have been working on making the web work for applications (and data collection), and to be honest it does so most of the time. They can easily blame annoyances like pop-ups and cookie banners on regulations and people who use ad blockers, but from a non-technical perspective, it’s a functional system. Of course, when you take a look underneath, it’s a mess, and we’re inclined to say that these aren’t real websites, when it’s the incompetence of our operating systems that created the need to off-load these applications to a higher level of abstraction – something had to do it – and the web was just flexible enough to take on that job.
You’re implying it’s Unix’s fault that the web is a mess but no other OS solved the problem either? Perhaps you would say that Plan 9 attempted to solve part of it, but that would only show that the web being what it is today isn’t solely down to lack of OS features.
I’d argue that rather than being a mess due to the incompetence of the OS, it’s a mess due to the incremental adoption of different technologies for pragmatic reasons. Sadly it seems to be this way: even if Plan 9 was a better Unix from a purely technological standpoint, Unix was already so widespread that it wasn’t worth putting in the effort to switch to something marginally better.
No, I don’t think Plan 9 would have fixed things. It’s still fundamentally focused on text processing, rather than hypertext and universal linkability between objects and systems – i.e. the fundamental abstractions of an OS rather than just its features. Looking at what the web developed tells us which needs went unformulated and were ultimately ignored by OS development initiatives, or rather set aside for their own in-group goals (Unix was a research OS, after all). It’s most improbable that anyone could have foreseen what developments would take place, and even more so that anyone will be able to fix them now.
From reading the interviewer’s question, I get the feeling that it’s easy for non-technical users to create a website using WordPress. Adding many plugins most likely leads to a lot of bloaty JavaScript and CSS.
I would argue that it’s a good thing that non-technical users can easily create websites, but the tooling to create them isn’t ideal. For many users a WYSIWYG editor which generates a static HTML page would be fine, but such a tool doesn’t seem to exist, or isn’t well known.
So I really see this as a tooling problem, which isn’t for users to solve but for developers, by creating an excellent WordPress alternative.
I am not affiliated with this in any way, but I know of https://forestry.io/ which looks like what you describe. I find their approach quite interesting.
for example, upvoting on sites like this: you don’t want to force a page refresh, potentially losing the user’s place on the page)
If a user clicks a particular upvote button, you should know where on that page it is located, and can use a page anchor in your response to send them back to it.
It’s not perfectly seamless, sadly, and it’s possible to set up your reverse proxy incorrectly enough to break applications relying on various http headers to get exactly the right page back.
I don’t use Github and only use Gitlab as a mirror. In general it’s better to avoid features which get you stuck to the platform in a manner where you can’t easily move away later.
Since they were acquired by Microsoft, GitHub is doubling down on their “value-added” model. There should come a point where those additions are standardised to some extent though, because that lock-in might become a big issue in the future.
I don’t think it’s in Microsoft’s best interest to ‘standardize’ with other CI services. They want to lock you in.
There’s a book out there about how big change won’t occur until a disaster strikes. It might be “Lessons of Disaster” but I’m not sure if that was it. It was pretty convincing and gave good examples in history. Most importantly, the book showed how a lot of safety laws are implemented, not when people raise concerns, but after many people die from the lack of such laws. It takes a disaster to implement disaster preventions.
I think that might happen to a lot of FOSS communities, where people talking about how it’s bad to get locked-in to a proprietor/vendor won’t be taken seriously (to the point of action) until disaster strikes. It probably won’t happen for a while and won’t be as dramatic, but I think there’s a good possibility that without standardization/decentralization, many will eventually be confronted with the pain that is vendor lock-in.
I think Fossil has the right idea about including the issue tracker, wiki, etc. in the decentralized repos. I hope we see more solutions like that come up and see adoption.
There should come a point where those additions are standardised to some extent though, because that lock-in might become a big issue in the future.
For you, or for the org tasked with maximizing the number of mouths at the feeding trough?
features which get you stuck
Are you talking about GitHub actions or GitLab CI here?
Because I don’t think that is much of a problem for GitLab CI. Since your jobs are purely script based, it’s quite easy to transition to different platforms. Yes, you can create stages, job dependencies and what not, but still.
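That script-based model is visible in even a minimal `.gitlab-ci.yml` (job names and commands below are just examples): each job is an ordinary shell script, which is what makes migrating to another platform mostly a matter of copying the commands over.

```yaml
# Hypothetical minimal pipeline: nothing here is GitLab-specific
# except the file format; the scripts run anywhere a shell does.
build:
  stage: build
  script:
    - ./configure
    - make -j6

test:
  stage: test
  script:
    - make check
```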
GitLab has had CI/CD for ages and it works great. I get that GitHub CI is also nice, but it feels overly hyped? Does it bring something that other providers lack?
I don’t get how OP can write such a post and not mention GitLab. Reminds me of Apple’s habit of adopting old tech (e.g. NFC) and calling it “innovative”.
This reads a lot like a paid advertisement. Fail to consider alternatives that have existed for years? Check. Do not mention anything bad/negative? Check.
Yeah, it doesn’t look to me like there’s anything here that GitLab hasn’t been doing for a while already. I guess I can understand the hype though; a few months ago I set up GitLab CI for one of the projects I’m working on at my current job, and to somebody who’s never done this before it looks cool and exciting.
Something that surprised me was the ability to schedule workflows to run regularly – it eliminated a cronjob from a VPS and keeps the schedule with the code.
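For reference, that trigger looks something like this in a workflow file (the schedule itself is an arbitrary example):

```yaml
# Runs on a cron schedule instead of push events, replacing a
# crontab entry on a separate server.
on:
  schedule:
    - cron: '0 6 * * 1'  # every Monday at 06:00 UTC
jobs:
  weekly-task:
    runs-on: ubuntu-latest
    steps:
      - run: echo "scheduled run"
```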
I mean, the whole point of “Open Source” was to escape the baggage of the social/political/economic philosophies and ideologies of “Free Software”. So anyone that says “Open Source” is more than licenses is really just helping to propagate the redefinition attempted by the OSI. The term has existed before the OSI and its definition is pretty obvious. The source is open. That’s pretty much it.
I do understand that “Open Source” is a nicer and more mainstream term than “Free Software”, but you can’t have your cake and eat it too. I think the use of that term as “source is open” + “a bit of free software stuff” is disingenuous.
Free Software is more than licenses.
That is so weird, I opened an issue 5 days ago asking about variable width, and I couldn’t find any discussion about it whatsoever. Then suddenly I see it on the frontpage of lobsters.
I’ve been reading the FSF’s advice for making money from free software, the lemonade stand repo’s advice, some of the ideas from the heated Github issues, and I feel we’re still really far from making open source a financially stable thing for most. It’s really sad, I personally wish I could just contribute to FOSS all day, but that’s just not practical these days.
It’s kind of like being an artist. You either need a trust fund or a wealthy patron or a day job or a willingness to live in poverty.
Things like this are why I don’t trust Martin’s opinions. He didn’t say a single bad thing about Clojure, he didn’t have any nuance, he doesn’t respect the other viewpoints. He does the same thing with TDD and clean code, where it’s impossible for them to be the wrong tool. He’s been programming for five decades and still thinks in silver bullets.
For the record, his Clojure example is even shorter in J. It’s just `*: i. 25`.
His example is shorter even in Clojure:
(map sqr (range 25))
but that misses the point. Both Clojure examples are easy to explain and understand, whereas in J it is not obvious what `*:` and `i.` stand for, or how these should be changed if we wanted to compute something different. But even that is not the point.
The point is that Uncle Bob is writing about his own experience with the language that he finds fascinating. He writes about his experience and he gets to choose how he writes it. If anyone disagrees (plenty of people do, I suppose), they are very well entitled to write about their experience themselves.
I don’t want to sound like an asshole, but what exactly is his experience besides teaching and writing books? We see so many people advocating for a specific language or technology without any substantial real-world experience.
As professional advocates go, he’s well known and (at least by me) well regarded.
A professional advocate advocating for something is a signal too… and a lot of the things he was advocating 25 years ago are still relevant today.
http://web.archive.org/web/20000310234010/http://objectmentor.com/base.asp?id=42
A professional advocate advocating for something is a signal too
Yes, it’s called Appeal to Authority.
I’m also not convinced he’s much of an authority. I’d say he’s a zealot. His tirades against types are tired. His odes to discipline are masturbatory. His analogies… well… This is the same guy who said C++ is a “man’s language” and that you need big balls to write it.
His analogies… well… This is the same guy who said C++ is a “man’s language” and that you need big balls to write it.
This is called an ad hominem. If you’re going to be a stickler about logical fallacies I’m surprised that you can’t even make it a few sentences without contradicting yourself. Are they important or not?
A professional advocate advocating for something is a signal too
This is called inductive reasoning. Given some evidence, such as a well-regarded professional advocating for some tool, we can try to generalize that evidence, and decide the tool has a good chance of being useful. You’ve surely heard of Bayesian probability; signals exist and they’re noisy and often incorrect but minding them is necessary if you want to make any sense of the world around you.
Yes, it’s called Appeal to Authority.
Logical fallacies only really apply when you’re working in the world of what’s called deductive reasoning. Starting from some premises which are assumed to be true, and moving forward using only techniques which are known to be sound, we can reach conclusions which are definitely true (again, assuming the premises). In this context, the one of deductive reasoning, appeal to authority is distinctly unsound and yet quite common, so it’s been given a nice name and we try to avoid it.
Tying it all together, the parent is saying something like “here’s some evidence”, and you’re interjecting with “evidence isn’t proof”. Great, everybody already knew that wasn’t proof, all that we’ve really learned from your comment is that you’re kind of rude.
Fallacies can apply to inductive arguments too, but you are right in that there’s an important distinction between the two types and how they differ. I would say that the comment you’re replying to is referring to the idea of informal fallacies in a non-academic context. The Stanford encyclopedia has a good in-depth page about the term.
Also, not all fallacies are equal; appeal to authority may be seen as worse than ad hominem these days.
This thread started with, “Things like this are why I don’t trust Martin’s opinions.” Uncle Bob’s star power (or notoriety), and whether that qualifies as social proof or condemnation, is the point of the discussion, not a distraction.
The point is that Uncle Bob is writing about his own experience with the language that he finds fascinating. He writes about his experience and he gets to choose how he writes it.
I wouldn’t be complaining if he was just sharing a language he liked. The problem is he’s pushing clojure as the best language for (almost) everything. Every language has tradeoffs. We need to know those to make an informed decision. Not only is he not telling us the tradeoffs, he’s saying there aren’t any! He’s either naïve or disingenuous, so why should we trust his pitch?
The problem is he’s pushing clojure as the best language for (almost) everything.
That’s not what he said though. The closest he came to that is:
Building large systems in Clojure is just simpler and easier than in any other language I’ve used.
Note the qualification: ‘… than any other language I’ve used’. This implies there may well be languages which are easier for building large systems. He just hasn’t used them.
Not only is he not telling us the tradeoffs, he’s saying there aren’t any!
He repeated, three times for emphasis, that it doesn’t have static typing. And that it doesn’t give you C-level performance.
Note the qualification: ‘… than any other language I’ve used’. This implies there may well be languages which are easier for building large systems. He just hasn’t used them.
We need to consider the connotations and broader context here. He frames the post with
I’ve programmed systems in many different languages; from assembler to Java. I’ve written programs in binary machine language. I’ve written applications in Fortran, COBOL, PL/1, C, Pascal, C++, Java, Lua, Smalltalk, Logo, and dozens of other languages. […] Over the last 5 decades, I’ve used a LOT of different languages.
He doesn’t directly say it, but he’s strongly implying that he’s seen enough languages to make a universal judgement. So “than any other language I’ve used” has to be seen in that context.
Nor does he allow special cases. Things like
But what about Javascript? ClojureScript compiles right down to Javascript and runs in the browser just fine.
Strongly implying that “I’m writing frontend code for the web” is not a good enough reason to avoid Clojure, and he brushes off the lack of “C-level performance” with
But isn’t it slow? … 99.9% of the software we write nowadays has no need of nanosecond performance.
If Clojure is not the best choice for only 0.1% of software, or even 5% of software, that’s pretty darn close to “best language for (almost) everything.”
He repeated, three times for emphasis, that it doesn’t have static typing.
He repeats it as if the reader is hung up on that objection and not listening to him as he dismisses it. Note the increasing number of exclamation marks he uses each time. And he ends with
OK, I get it. You like static typing. Fine. You use a nice statically typed language, and I’ll use Clojure. And I’ll be in Scotland before ye.
Combined with his other posts (see “The Dark Path”), he doesn’t see static typing as a drawback. We can infer it as a drawback, but he thinks we’d be totally wrong in doing so.
You have to explain both examples for them to make sense. What does `map` do? How do you change `sqr` out for a different function? If you learn the purpose of the snippet, or the semantics of each of the individual elements, you can understand either the J or Clojure example just as well as the other (if your understanding of both languages is equal).
Also the meat of the article is trying to convince the reader to use Clojure (by explaining the syntax and semantics, comparing its syntax to two of the big 5 languages, and rebutting a bunch of strawman arguments - nothing particularly in-depth). I don’t see a balance of pros and cons that would be in a true account of an experience learning and using the language, including more than just a bullet point on the ecosystem, tooling, optimisation, community, etc.
I am sure that any programmer that has any experience in any language would guess that you change sqr out for a different function by typing the name of that other function. For example, you compute exp instead of sqr by, well, typing “exp” instead of “sqr”.
The same with `map`. Of course someone has to know what a particular function does to be able to use it effectively. The thing with Clojure (and other Lisps) is that it is enough to know that. You don’t need special-case syntax rules. Any expression, however complex its semantics, is easy to write following a few basic rules.
I understand the benefits of the uniformity of Lisp, but my point was just that you can’t really say that `(map sqr (range 25))` is any more or less understandable than `*: i. 25` if you know the purpose of the expressions and the semantics of their constituent parts. And given that knowledge, you can reasonably make substitutions like `exp` for `sqr` or `^:` for `*:` (though I would end up consulting a manual for the exact spelling).
Further experimentation would require more knowledge of either language. For instance, why `if` isn’t a function in Clojure, or why lists don’t have delimiters in J. It’s all apples and oranges at this superficial level.
My version of Clojure doesn’t define `sqr`. Is that built in?
That aside, I don’t find either version very easy to explain to someone who isn’t already experienced with functional programming. What does “map” mean? How does it make sense that it takes a function as an argument? These seem obvious once you’ve internalized them, but aren’t easy to understand from scratch at all.
If I were reviewing this code, I would suggest they write `(for [x (range 25)] (* x x))` instead.
Of course one has to understand the semantics of what they’re doing. But in Clojure, and Lisps generally, it is enough to understand the semantics, while in most other languages one has to additionally master many syntax rules for special cases.
Clojure has quite a lot of special syntax compared to many Lisps. For example, data type literals and other reader macros like literal lambdas, `def` forms, `let` forms, `if` forms, and other syntax macros like `->` are all built in. Each of these has its own special rules for syntax and semantics.
We’re on the same page I think, except that I think knowledge of semantics should be enough to understand any language. If you see a verb and a noun in close proximity, you’d be able to make a good guess as to what’s happening regardless of the glyphs representing their relationship on screen.
If you want a language that emphasizes semantics over syntax, then APL is the language for you! There are just a few things to understand about syntax, in order of importance:
1. Number literals, which write negatives with the high minus ¯. Some dialects have special-case syntax for complex or rational numbers: 42 3.14 1J¯4
2. Character arrays, delimited with '' quotes. Doubling the quote inside an array escapes it: 'is' or 'isn''t'
3. Indexing with [] brackets: 'cafe'[3 2 1 4] ←→ 'face' (Many APLers have a disdain for this form because it has some inconsistency with the rest of the language.)
4. {} braces.
5. The statement separator ⋄ (mainly useful for jamming more code into a single line).
From there, the grammatical rules are simple and natural, in the form of verb noun or noun verb noun or verb adverb noun, etc. Probably the most difficult thing to learn and remember is that there is no operator precedence and evaluation reduces from right to left (so 2×3+4 is 14, not 10).
When I’m programming in APL, I rarely think about the syntax. When I’m programming in Clojure, syntax is often a concern. Should I use map or for? Should I nest these function calls or use ->?
When I’m programming in Clojure, syntax is often a concern. Should I use map or for? Should I nest these function calls or use ->?
None of those are syntax. map is a function and the rest are macros. They’re all inside the existing Clojure syntax.
Macros can be used to define syntactic constructs which would require primitives or built-in support in other languages. [my emphasis]
True enough. However, at least in Clojure, macros are pretty deliberately limited so as not to allow drastically changing the look-and-feel of the language. So I’m pretty sure every macro you’ll come across (except, I guess, reader macros) will have the same base syntax, (a b ...).
10+ years ago, I switched to git because I considered it better than Subversion. Nowadays, I use git primarily because it’s very popular. I skimmed the article, and didn’t see anything convincing. There’s too much social cost to switching myself and all the teams I’m a member of to something other than git. It almost doesn’t matter how superior any other solution is from a technical standpoint.
I believe there is a “rule” out there that says that for something technologically superior to overtake its more popular competitor, it must be vastly superior. Not sure fossil is there in that regard.
Fossil doesn’t just have to be vastly superior to git (not that difficult given git’s very low bar on UI consistency); it has to be vastly superior to “git plus whatever tooling people have added on to make it actually usable” such as magit; and that’s a lot harder.
Although git’s UI is terrible, it may be difficult to vastly improve upon its performance and reliability, especially since others have pointed out that fossil has scaling issues. I do think the idea of fossil is great, though; our reliance on proprietary and siloed services for things like issue tracking makes me uncomfortable.
I’d highly recommend this book on the economics of technology and information: https://en.wikipedia.org/wiki/Information_Rules
Part of the ‘value’ of git is that it’s very widely used, beyond just the functionality of the software itself. Those are network effects.
I agree. We’ve finally arrived at a standard solution that’s both free and works OK. Next stop: \n line-endings everywhere!
In my experience the fastest way to navigate is a combination of fish completions and nnn. I even have a static binary of nnn that I use for remote machines.