What a coincidence, just yesterday I came across ATS again and figured it would be a good time to learn it. Anybody crazy enough to join me?
I’m procrastinating learning ATS right now. Btw, how’s your followup to part 1 of “Reading Ur/Web signatures”?
Yeah, about that… I kind of sketched out the ToC and then left it there to rot. I should finish that first, maybe refreshing my thoughts on it will help me with learning ATS, too.
I saw the youtube video - while ATS strikes me as extremely interesting, I struggle to see a professional use case for it. Would you mind elaborating on your motivation behind it, other than, of course, intellectual curiosity? I am already licking my wounds from my excursion into the jungles of Haskell, where Monads, Functors and other monsters bit me…
In my case I used it for writing network services. I used linear types and proofs to help ensure program correctness and no memory leaks without a GC. Similar to what you’d use Rust for now, but with the ability to do proofs and dependent types.
Thanks for listing your project.
Similar to what you’d use Rust for now,
Could you give a comparison between ATS and Rust? My understanding is that Rust’s ecosystem and UI/tooling is way more developed, while ATS seems to give more freedom in how programs can be written. Or does Rust replace ATS in form and function?
Rust is definitely more developed and user friendly. Rust doesn’t replace ATS in that ATS has a more advanced type system. Things you need to do in unsafe Rust, like pointer arithmetic and dereferencing, can be done in safe ATS. ATS feels more low level - at the level of C but with an ML feel. ATS compiles to C, so you can use any C compiler to target another platform. Embedding C and calling C is easy. There are very few libraries though. I’ve written more about ATS here: https://bluishcoder.co.nz/tags/ats/
Do you think ATS is used anywhere in industry? Drivers would be my first guess -
That is quite an impressive post record. Thanks!
A “professional use case” isn’t exactly what I’m after by learning it, but if you were to turn a blind eye to ergonomics and other stuff that supposedly could be fixed, I don’t think it’s such a hard sell. C with a great type system + all the nice ML features sounds like the best thing after sliced bread to me, and if I get to learn more about proofs and linear types in the process all the better. And even if I never get to use it I can still steal its ideas for the language I’ll never get around to implement.
There’s a C compiler mathematically proven not to change program semantics when doing optimizations—CompCert. It’s the dependently typed language/proof assistant (Coq) that made that possible.
I guess one problem with dependent types isn’t that they are inherently hard, but that they are not in the “mass consciousness”. Simple algebraic types and pattern matching are slowly getting into it, even though languages like Swift shy away from calling them what they are.
At one point in history, the concept of variables was revolutionary, but now it’s completely intuitive to everyone. Dependent types probably will be too.
Good luck. I dismissed learning ATS after watching this video:
It’s not as hard as its reputation. Especially if you start with the ML side of things and slowly add dependent types and proofs as you get more familiar.
I’m watching that right now. My confidence comes from having already learnt Ur/Web, which is similar (derived from SML, bad ergonomics, arcane type system, lots of syntax, small community, sparse documentation, etc.) though a bit less threatening than ATS.
Man, I loved that talk - one of my top 3 from that year’s strange loop.
I went through a (long) period of OCaml development. It’s surprisingly common in the enterprise, or at least was when I was using it. It’s a truly beautiful language.
That being said, I’ve always fallen on the Caml side of the Caml-vs-OCaml debate. I’ve always been loath to mix paradigms, and the “O” part of OCaml is, while absolutely done in the right way, a lot to add on top of Caml (or ML).
(Again, it’s easy to criticize. I’ve never written a beautiful language used by millions. It’s solely a personal qualm of mine.)
Are there advantages to Caml over Standard ML?
Countless advantages. It’s purely a minimalism thing on my part.
Could you sketch out the type of projects you were building in OCaml? If starting over, would you consider Haskell a suitable replacement?
Does the “O” part leak into everything else or can you easily work more or less only with the Caml side of OCaml?
In my experience (YMMV), you can pretty much ignore the O part…but (a) that was years ago and (b) I’m really bad at ignoring things even when I’m not using them.
It is still the same. In fact you almost never need to use the O part, and even when you think you do, in most cases it is more idiomatic to use modules and module functors (parametrised modules).
Though there are cases where the O part might be useful. Personally I don’t use it, but just as @lorddimwit noted it is quite an interesting take on “objects” that might even benefit predominantly functional programmers.
Not to be that guy, but they didn’t list FastMail under GMail alternatives. I’ve used FastMail for a while, and it’s pretty cool. It’s also been around for a pretty long time, which makes me feel better.
I have to say that it is not necessarily email that I am after when it comes to Google alternatives. Email, after all, is one of the most insecure communication types in use (emails are stored on servers operated by people who are often unknown to and untrusted by the user, and they are often passed between servers without encryption).
Upgrades to email are either not on point in terms of security, or inconvenient. PGP, for example, has several key shortcomings: 1) it’s ugly, 2) it leaves open a malicious “man in the middle” option and involves confusing key-exchange rituals, and 3) people don’t really bother about [their] human rights.
Perhaps a new communications standard e.g. Bitmessage or something else will come up that can prove that it can improve the situation and kill email.
I’m using mailbox.org.
Unfortunately, a while ago they killed their email support for non-business customers.
mailbox.org is listed as a Google Calendar alternative but not e-mail, interesting. Are you satisfied with mailbox.org? I’m tempted to switch from fastmail.
Off the top of my head:
Been on mailbox.org for about a year, so far very happy. Particularly nice that I can have a shared calendar with my wife.
+1 - their web interface is excellent and doesn’t hate partially blind people, and they’re super interested in standards compliance. Big fan.
An interesting article about the topic of de-googling one’s life can be found at https://restoreprivacy.com/google-alternatives/
It is indeed an important topic when it comes to the control of the common/open space. And so is open source hardware, and open access to scientific publications. We live in interesting times.
This is not a very good idea, IMO. Many utilities/tools tend to create folders and files in the home directory; Ghidra, for example, creates a ~/ghidra_scripts. It’s bound to get messy fast.
It’s probably a better idea to maintain a separate “dotfiles” folder where you symlink config files/folders that you want to persist across installs, and version that using git instead.
What do you mean?
The first line in the ~/.gitignore file causes it to ignore everything. Anything you want to track you need to add explicitly and new files won’t even show up in git status.
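Concretely, a minimal setup along those lines might look like this (a sketch, assuming a home-dir repo; the file names are just examples):

```
# ~/.gitignore for a home-dir repo: the first line ignores everything
*
# tracked files are then added explicitly, overriding the ignore, e.g.:
#   git add -f ~/.vimrc ~/.bashrc
```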
Oh right, yes. My bad. I’d failed to notice the first line in your .gitignore.
Not mine, but yeah I wondered :-). I’m using the more traditional dotfiles approach as well.
I had actually tried to put my home dir under git ages ago, running exactly into the issues you’ve mentioned. Ignoring everything by default and only adding files explicitly does sound like an interesting alternative to consider at least.
This argument makes sense to me when it comes to synchronizing two or more different computers.
What kind of version control system is preferable for the synchronization/backup of the ~/Documents folder alone? I guess syncing Windows/Linux machines should not be a problem here, no?
I’m using rsync for backups these days, and never thought about looking into version control systems. Any experiences here?
It depends. If your files for backup are generally text based, i.e. config files, then using a VCS is a good idea. Something like git offers great ease in managing them. But if your files are binaries, like pictures for example, then rsync is alright, I guess.
Has anyone here used it in earnest? I had reason to be upset with git again last week and was considering making the jump back to mercurial, but if this is as close to ready as it sounds, I might give pijul a shot.
I’m one of the authors of Pijul. It’s obviously not as “ready” as Git, and you can probably expect some hiccups, but they should be quite minimal, and even more minimal if your repositories are stored on nest.pijul.com: the current repository format is probably not going to change much, but I did convert repositories on the Nest between versions in the past, every time a change happened.
The disk space usage of Pijul is still relatively high compared to Git, and (in part because of that), super large files are not very well supported yet. But this should be mitigated in the next release.
I believe people have found the SSH interface to be quite stable: we were hearing lots of complaints a year ago, they’ve mostly stopped now (maybe these users have also stopped using Pijul, I don’t know ;-) ). The HTTP interface is still relatively slow, and breaks sometimes (I believe this is because nest.pijul.com runs on one small machine).
Another pain point is the tooling: I’d love to be able to use Pijul from within Emacs, and also have a fetchPijul command in NixOS, but no one has written them yet, and these things are out of my focus at the moment (I try to focus on Pijul’s development, among other things).
Edit: also, yes, we’ve been using it ourselves for two years now. It was much harder in the beginning than it is now. Also, bootstrapping is not always completely obvious: in the last few weeks, I’ve authored patches named “This patch contains a fix that allowed me to record it”. The latest simplifications in the algorithms gave me reasons to believe that this is mostly behind us now.
Thank you for writing about your project. Looking around your website, I could only get a rough conceptual idea of how Pijul would be advantageous compared to, say, git or other versioning systems. I read a bit through your category-theory-based explanations of your merging algorithm; unfortunately, at this point a real-world example would help me more, e.g. a Python program or something similar. Would you perhaps have such an example at hand?
This is a good series of blog posts that explain why patches are fundamentally different from commits:
The fundamental idea is that each line has its own identity, and a patch is the addition and removal of identified lines. This means that to get your repository to a certain state, you look at the patches that get you there and apply them, reconstructing the file out of uniquely identified lines. It also means that in some cases you can reorder your patches with no problem, and that you have better tools for resolving merge conflicts.
In snapshot-based systems like git and Mercurial, every reordering of commits requires a merge. Most of these are automated, but when they’re not, the user is dropped into a dreaded merge conflict, a situation they would really like not to be in.
Patches don’t guarantee that there won’t be any merge conflicts, but patches have more information to help you resolve conflicts when they happen. Since each line has its own identity, Pijul can make merge conflicts less likely to happen and easier to resolve when they do.
Here is a concrete example of a Git merge doing the wrong thing on a C program: https://tahoe-lafs.org/~zooko/badmerge/concrete-good-semantics.html
Quick question, I have many repos that use darcs. Is Pijul stable enough to replace darcs yet? Is there any reason for me to stick with darcs for now?
It is not stable enough to replace darcs completely, in the sense that it is very unlikely that darcs will ever change its repository format, whereas we might still change that a little bit in Pijul. The patch format is almost certainly stable, which means that conversion will be just a matter of re-cloning existing Pijul repositories, but there will still be a conversion step.
For full disclosure, I still use darcs for some parts of the development of Pijul, such as the Nest, to avoid bootstrapping issues whenever possible.
Still, it’s exciting to see that darcs’s ideas evolved and matured to make Pijul. Is there a darcs->pijul import? I guess if there is a git->pijul import then I can do darcs -> git -> pijul.
A side note, I tried compiling pijul. I have rust 1.32 and I’m stuck at downloading crates! Cargo actually segfaults, so I’m kind of stuck until I update rust. But it makes me sad to see segfaults in rust since, like, it tries really hard to avoid them as a language!
Pijul is indeed tested on the latest stable Rust. I’ve seen segfaults in Cargo and Rustc before, they were due to a linking problem in NixOS. Can you try to find out which crate causes a segfault?
I just tried again (after many tries) and now it just works. I guess I had the right bits in memory this time. It’s built now, thanks.
I am in week 3 of the OCaml MOOC: https://bit.ly/2y2V989
Very interesting course. I currently find OCaml interesting because it influenced F# & Rust. I tried F# but find the CLR slow, and the language tools are not a good fit when you are on a Unix-like system and prefer a plain editor rather than an IDE. I still find Rust very interesting, but less so for business type software. Also, with OCaml you can easily apply the patterns described in https://pragprog.com/book/swdddf/domain-modeling-made-functional
Hello @eterps -
Very interesting statement: *“I still find Rust very interesting, but less so for business type software.”*
I am curious why this is the case? Is it the state of the library ecosystem, or the fact that it is too much overkill to use Rust’s version of memory management for business software?
The second one, for most business type software I doubt that memory management is the foremost concern. I do like Rust’s library ecosystem, its package management and its modern tooling in general. Rust’s type system is also flexible enough to describe business problems in a similar way as: https://fsharpforfunandprofit.com/posts/no-uml-diagrams/ , if there was something like Rust with garbage collection I would be very interested. For low-level programming Rust is perfect as it is. Maybe something like http://gluon-lang.org can fulfill that role in the future, however currently its type marshalling is a productivity killer.
Thank you for clarifying your comment. Pity that there is no equivalent of a garbage-collected Rust that has all the bells and whistles you would like, without the baggage of the JVM/Mono juggernaut.
Over a decade ago one of my professors told me that the main barrier to implanted stimulation/recording devices was materials science. The brain tends to be very paranoid about foreign matter inserted in it and quickly encapsulates it. There have been numerous attempts to put neurotrophic factors on implanted electrodes (to encourage neurons to grow towards them) and to come up with conducting but more neutral materials (like diamond, or other funky carbon forms) and I don’t know if there has been a proper breakthrough.
Neural interfaces are interesting because you start out feeling you need to understand how the brain codes information, and it turns out the brain can adapt its code (to an extent) to what you can read/write, just like it adapts to all other situations.
There was an article not too long ago mentioned on the Skeptic’s Guide to the Universe podcast:
Flexibility was key to reducing scar tissue formation.
One of the many troubles of understanding the brain is the fact that it is a bio-electric machine, which introduces an incredible depth of biochemical, biological and biophysical aspects. We are speaking here about interdependent problem domains, from the lock-and-key principle and neurotransmitters, through energy/matter flow, to all the horrors of cell biology and the fun protein circus. We have very little understanding at this stage due to the depth and breadth of the challenge. I guess it is one of the great puzzles of life; and looking at the state of our society, perhaps it is good we do not understand it yet, even if it means that we cannot heal certain psychological diseases such as depression, psychosis, et cetera.