I guess the “true hacker” would be using a shell, tmux, emacs and what have you. But for someone like me, I like the idea of running something like VSCode in the browser and having an online IDE I can use from anywhere (maybe underneath it’s just running Linux on a VM and I have access to the shell for npm i etc.). Does anyone use anything like that? And are they doing it on an iPad?
I develop with VSCode on a Surface Go, which is quite small. I enjoy it, even if the type cover is a bit cramped.
TIL Joe Armstrong has a lobste.rs account that he uses to repost from beyond the grave.
clicked the wrong button, muscle memory as I’m used to submitting my own work. fixed.
“muscle memory as I’m used to submitting my own work”
That’s the best reason I’ve ever read for this mistake. I always look forward to your articles. :)
Brings a whole new meaning to “necroposting”.
(I’m so sorry.)
Performance numbers look really impressive! I love everything about Erlang except that I don’t have any experience writing it! Actors are just such a beautiful idea and really show the benefit of true OOP, which is why I used them in Firestr. Are there any performance benchmarks compared to something like CAF, the C++ Actor Framework?
Any system that shares the same design as Distributed Erlang (e.g., Akka Cluster, Microsoft Orleans) should be able to benefit from our design. We just evaluated our techniques using Erlang only.
This is awesome. The performance numbers look really impressive. Is there any place I can find up-to-date documentation or a usage guide for Partisan?
Documentation is a bit lacking at this point – I don’t have numbers, and as I’ve unfortunately become used to saying, you don’t get a Ph.D. for writing documentation for the software you write. We’ve got a little bit up in the Lasp docs at http://lasp-lang.org.
Nice work @cmeiklejohn. Congrats on the awesome results!
How is this different from other actor solutions?
For a fair comparison, you should be considering actor systems that are distributed, because the architectural decisions made here are specific to distribution. That means we would be comparing only to Distributed Erlang and Akka.
Briefly highlighting some of the differences:
I also wrote a summary on how Orleans differs from using Basho’s Riak Core (built on Erlang) for building fault tolerant, highly-available, distributed applications. It provides a line-by-line comparison of the paper.
I found that once I switched from a computer to a notebook for note-taking and general research, I became a lot more productive. I find having the computer in front of me makes me more distracted and less focused, and I love the ability I have to just stick my notebook in my pocket with a pen in my jacket at all times of the day. I bring it to bars, cafes, train rides, the park, etc.
I used to be a big fan of the Ogami Stone notebooks; the paper is wonderful to write on with a ballpoint pen and is waterproof. However, once I realized that the paper decomposes at around the five-year mark, I switched back to a Moleskine.
The Moleskine has worked well for me – I buy the exact same size and model every time, and I can buy them almost anywhere: abroad, in an airport, in basically every city I may visit for school or work. I like the consistency, because I can keep them all together and date them, and have a record of whatever I was working on at a given moment related to my research.
It’s an interesting idea, but in my opinion, it reaches a little in service of bolstering the AP story. The observable local real world isn’t eventually consistent, and that’s where we spend all of our time and form all of our intuitions. Even when we see a far off event happening on TV, we perceive it as being perfectly contemporaneous and admit zero possibility that the experience will be altered retroactively to include updated distant information. So I feel like the argument is unconvincing, and I still feel like AP is a great idea waiting for an actual use case.
Personal anecdotal example of correspondingly low value: having dealt with global leaderboards myself for a certain (in)famous, well-trafficked multiplayer shooter, I was super interested to read an article on using AP & CRDTs with leaderboards (https://christophermeiklejohn.com/lasp/erlang/2015/10/17/leaderboard.html). But the benefit of AP was still a mystery to me after reading it; we were able to keep the leaderboards for even this game, which experiences significantly more traffic than all but possibly 2 mobile games (and maybe all), in a single centralized sql instance with a few read slaves. Certain very high update rate stats went into a nosql store. The insert, select and update statements for the sql and nosql stores were the predictable one-liners. Data didn’t have to be specially structured. All of our availability problems had nothing to do with network partitioning between members of the cluster over the course of several years. The architecture and problem-solving around the architecture were so standard I could get junior developers up to speed in hours. Comparing and contrasting that with LASP, even as an erlang fan, …
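To make the “predictable one-liners” point concrete, here is a minimal sketch of that traditional centralized approach, using SQLite in place of the actual SQL instance (the table name, schema, and keep-the-best-score policy are my assumptions, not details from the real game backend):

```python
import sqlite3

# Hypothetical leaderboard table in a single centralized SQL store.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE leaderboard (player TEXT PRIMARY KEY, score INTEGER)")

def record_score(player, score):
    # The predictable one-liner: upsert, keeping each player's best score.
    conn.execute(
        "INSERT INTO leaderboard (player, score) VALUES (?, ?) "
        "ON CONFLICT(player) DO UPDATE SET score = MAX(score, excluded.score)",
        (player, score),
    )

def top_n(n):
    # Another one-liner: read the top of the board.
    return conn.execute(
        "SELECT player, score FROM leaderboard ORDER BY score DESC LIMIT ?",
        (n,),
    ).fetchall()

record_score("alice", 100)
record_score("bob", 250)
record_score("alice", 180)
print(top_n(2))  # [('bob', 250), ('alice', 180)]
```

The serialized writes through one primary are exactly what makes the ordering problem disappear here – at the cost of that primary being a single point of coordination.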
The observable local real world isn’t eventually consistent, and that’s where we spend all of our time and form all of our intuitions.
I don’t think this is a true statement. Knowledge is a function of perception, and is not immediate unless we’re the agent. Reasoning about truth external to us is bound by the laws of math and physics. For example, if a star goes supernova, I will eventually receive that update – but someone closer to the event will undoubtedly see it first. We can compute, based on relative distance, when that event occurred, but it might take some time to form consensus.
I agree that it’s only a true statement insofar as I included the word “local”. I drop a hammer; the hammer is on the ground simultaneously for everyone in the room. There’s no possibility that a larger quorum of people are going to barge in and declare that the hammer did not in fact land on the floor, because it was replaced by a screwdriver before it could, and so now we should all amend our experiential beliefs to account for this new factual evidence. I’ll agree with you as far as cosmological events over interplanetary distances, but the market for those databases may not be very big at the moment.
But the benefit of AP was still a mystery to me after reading it; we were able to keep the leaderboards for even this game, which experiences significantly more traffic than all but possibly 2 mobile games (and maybe all), in a single centralized sql instance with a few read slaves.
The benefit of the CRDT-based leaderboard, using Lasp, is that it allows peer-to-peer synchronization without coordinating with a central MySQL instance: every member of the system is guaranteed to converge to the same state, without the risk of message ordering introducing nondeterminism.
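The convergence guarantee rests on the merge function being commutative, associative, and idempotent. A toy sketch of that property (this is my illustration of the general CRDT idea, not Lasp’s actual API) for a per-player max-score map:

```python
# Each replica holds a map of player -> best score seen locally.
# Merge takes the pointwise max. Because max is commutative,
# associative, and idempotent, replicas converge to the same state
# regardless of the order (or duplication) of state exchanges.

def merge(a, b):
    return {p: max(a.get(p, 0), b.get(p, 0)) for p in set(a) | set(b)}

replica1 = {"alice": 180, "bob": 250}
replica2 = {"alice": 100, "carol": 90}

# Deliver states in either order, or redeliver one: same result.
assert merge(replica1, replica2) == merge(replica2, replica1)
assert merge(merge(replica1, replica2), replica1) == merge(replica1, replica2)
print(sorted(merge(replica1, replica2).items()))
# [('alice', 180), ('bob', 250), ('carol', 90)]
```

No replica ever has to ask a central node what the “real” score is; gossiping states in any order reaches the same answer.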
Doh, I wasn’t clear, my apologies. I understood the paper, and Lasp looks pretty nifty. What I didn’t understand was the motivation or the payoff, given the triviality of the problem space, the low cost-to-implement and cost-to-operate of the “traditional” solution, and the added complexity of the Lasp-based solution. What factors, if any, got a lot better and made the cost-benefit analysis worth it to do it this way? Or was this pure thought experiment?