I went to WebAudio Conf in Berlin pre-pandemic, and at the conference party there were a few live performances. One was a piece where people picked random sounds from an open source database by searching for words from a poem as it was read. I appreciated the performance aspect, but the resulting sound wasn’t my jam. The person that closed the night gave an amazing live coding performance using Gibber.
And in between was a couple who performed by surfing the internet with a browser plugin that made a specific sound for each of the major trackers. They searched for stuff, bought something, chatted with each other on Facebook, and the pings became so common that they were practically background music by the end. Very similar concept to this, and the mundaneness of the performance aspect was pretty sobering.
I love the vibe of this post. It feels more like a candid conversation with a colleague about that crazy thing they just did/figured out than the usual stilted marketing speak of an announcement like this.
Also, this is technically very cool.
I wonder if something like Phoenix LiveView will end up being the technology that unseats React?
Interactivity is definitely a strong attractor. For me, recent projects have definitely been JS-focused, as opposed to rendering on the server side, though that is mostly due to finding gotoB.js.
The one problem with technologies like Phoenix LiveView is that, absent a very easy way to keep track of clients on the server (which LiveView provides), they’d be hard to graft onto existing web frameworks. C# might be able to graft something like that in, but Python and Ruby would struggle a great deal with all the multiprocessing/async that would be involved for them, at least I’d think so.
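To make that per-client bookkeeping concrete, here’s a minimal LiveView sketch (module name, event, and markup are made up, using current Phoenix.LiveView conventions): the state lives in a server-side process for each connected client, which is exactly what’s hard to bolt onto frameworks without cheap long-lived connections.

```elixir
# Hypothetical counter LiveView: state is held in the socket assigns on the
# server, events arrive over the websocket, and only rendered diffs are pushed
# back to the browser - no client-side framework code involved.
defmodule MyAppWeb.CounterLive do
  use Phoenix.LiveView

  def mount(_params, _session, socket) do
    {:ok, assign(socket, count: 0)}
  end

  def handle_event("inc", _params, socket) do
    {:noreply, update(socket, :count, &(&1 + 1))}
  end

  def render(assigns) do
    ~H"""
    <button phx-click="inc">Clicked <%= @count %> times</button>
    """
  end
end
```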
I’ve been thinking about this one a lot, so much so that I decided to do a medium-ish project using primarily Phoenix templating, including LiveView. My takeaway so far has been that, while the experience overall has been positive for me, I think “unseating” is going to be an uphill battle (although not impossible).
One issue is that React just has so much momentum right now. The ecosystem is huge, and more and more developers joining a team or starting a project are going to reach for React quickly for reactive interfaces (or at least be willing to). One consequence of that is that the ecosystem of code, utilities, and even tutorials and documentation feels more and more weighted towards React and its ilk.
An example of the ecosystem point above: headless CMS options. All of the options out there provide dead simple utilities for React + JavaScript integrations. If they do have server-side rendered examples, Elixir isn’t on the list (at least I haven’t seen it). Granted, this specific tooling was often designed for use with the “JAMstack”, but it’s still indicative of the hurdle in front of any other piece of client-side technology trying to take a sizeable piece of market share from the incumbents.
I don’t like all this centralization.
All the packages were already centralized, though, and TBH I think Microsoft / Github are likely to be better stewards of the npm system, given all their resources.
This. One can make the argument that there should be a fundamental shift in how we do package management, but that feels like a very different conversation. This is a critical piece of centralized architecture changing hands to an organization that is objectively better equipped to manage it, and that feels like a net win for the ecosystem.
That doesn’t mean we can’t still have that conversation about shifting away from that centralized architecture, but I think we can still take this win.
…that has a history of pulling crazy stunts just to make money. Uncomfortable indeed.
Honestly, I was always kind of concerned that NPM, Inc. would do something insane / evil to make money. MS / GitHub don’t actually need to make money on this, that’s the benefit as I see it.
What stunts are you referring to? None really come to mind in the last decade, and the Github acquisition, while admittedly still in or close to the honeymoon phase, has overall seemed to go well. On the open source side of things, Microsoft’s management of TypeScript has been fine, and I haven’t heard too many complaints about how they’ve been doing in terms of maintaining VSCode.
I agree that recently, I assume after Nadella started as CEO, MS has been doing a lot of great work to clean up their track record.
So perhaps (hopefully!) things have structurally changed since the times they introduced their own version of Java, or since their tricks to retain a monopoly over internet browsing, or indeed, originally, their repackaging of other people’s work just to sell IBM an OS they had no first-hand experience building.
Except for testing the waters to see whether it is time to be evil again.
The tools for decentralized package management already exist, to some extent. Both npm and pip, for example, support installing dependencies directly from source code repos (i.e. git). Granted, this means you have to ‘compile’ as part of your install process, which isn’t always feasible, but most of the time that’s fine.
From working with Go, installing dependencies from repos leads to less reliable builds, because when a single fetch fails Go’s module resolver will abort. We ended up having to wrap lots of build tasks in retries in our tooling to handle network hiccups, and that still didn’t help with the fact that a single third-party server being down can break everything.
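For what it’s worth, the same pattern exists in Elixir’s mix, which makes for a compact illustration in one place: a deps entry can point straight at a git repo instead of the central Hex registry. A hypothetical example (package name and URL made up):

```elixir
# Hypothetical mix.exs pulling one dependency straight from a git repo rather
# than the central Hex registry (package name and URL are made up).
defmodule Demo.MixProject do
  use Mix.Project

  def project do
    [app: :demo, version: "0.1.0", deps: deps()]
  end

  defp deps do
    [
      # Fetched by cloning the repo at the given tag; no registry involved.
      {:my_lib, git: "https://github.com/example/my_lib.git", tag: "v0.1.0"}
    ]
  end
end
```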
The solution seems to be to run a centralized proxy that itself calls out to the repos, to insulate your build from this problem. That is what Go is doing now, and it seems to work pretty well. That gets you (theoretically) the best of being both distributed and reliable, but it’s more involved than a centralized system.
We use vendoring with Go. imho that’s the best: you do get decentralized repos, but you don’t need to download anything on dev machines or the CI server.
Another benefit of the Go proxy approach is that it does not require git or hg to be installed.
Yep, totally agree! npm can already be run without needing an npm registry at all, but you can also run your own registry if you’d like (or use someone else’s). I was more trying to address the calls for things like Entropic that always happen when npm, inc. news comes up.
In these trying times it’s important to remember that we need letrec.
The alternative to npm is yarn, which is owned by Facebook. I’m not super comfortable with Github owning NPM, but I also think it will be fine considering you can run your own private repositories.
yarn is still based on the NPM registry.
This is refuted in their own Q&A: https://yarnpkg.com/advanced/qa#is-yarn-operated-by-facebook
A more relevant alternative is Entropic, which is actually decentralized and integrates with npm as a legacy source and was specifically developed to address the SPOF of an investor-backed startup.
However, development seems to have stalled at the end of last year: https://github.com/entropic-dev/entropic/commits/master
Two of the core maintainers made statements this week on twitter that they cannot really work on it for a multitude of reasons.
At the end of the day, you’re still using the Node.js ecosystem with all of its problems, one of which is how deeply entrenched npm and its registry are. The solution would be an alternative to Node.js.
If server-side JS is a requirement, then Deno looks interesting.
I’m sorry if I’m being dense or missing something, but how is this better than something like RRWeb or one of the SaaS alternatives? An entire browser dedicated to session recording feels like a huge ask to collect bug reports from end users.
Hi, the difference here is in the level of detail of data being collected. RRWeb records DOM mutations, and other tools generally record screen videos and/or console messages. This is only a small fraction of what is going on in the browser, and for example none of these tools can show all the JS that is running or how it affects the page. Web Replay records absolutely everything the browser does. When debugging a recording developers can see everything that happened when the recording was made, as if they were debugging a tab on their own machine. This takes the guesswork out of the bug reporting process.
Asking users to download a browser to submit a bug report is definitely a big ask, and this product is a better fit for developers and dedicated QA staff. Still, we feel that after developers have some experience tracking bugs down with Web Replay (especially by using its time traveling features), other tools will feel pretty limited and asking users to download the browser and submit recordings won’t seem like such a stretch.
Thanks for the clarification!
Tiny nit: For what it’s worth, if you’re involved in the project, a small piece of feedback for the marketing site is to include some kind of one-liner or project description on the home page. I think there are a lot of folks like me who aren’t going to want to watch a video with sound to get context.
I read several paragraphs and couldn’t figure out what this article is about. Something to do with live streaming video I think?
Ah yeah, there’s some jargon in there. Apple has a streaming format called HLS, and they recently announced their plan to support lower latency streaming via that format. That flew in the face of some community extension efforts and is going to be quite problematic for a lot of the major CDNs to support, so it’s a big discussion in the online video community right now.
I appreciate the clarification!
Gun also supports websockets, something I’d like to explore more. It’s good to see an example of some of its features used in Elixir!
Your comment made me realize I totally forgot to tag Elixir too! We mostly use Hackney right now (via Tesla) and we’ve generally been happy, but I’d be pretty excited to see a Gun adapter.
Tesla is AWESOME and so is Hackney.
I just happened to check recent activity in the Tesla repo and saw a Gun adapter PR was recently merged! Figured I’d update this thread for anyone else still actually paying attention :)
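For anyone else still following along, a rough sketch of what switching a Tesla client to that adapter might look like (client module and base URL are made up; check the adapter docs for the exact options):

```elixir
# Hypothetical Tesla client switched from the default adapter to Gun.
# Requires :gun alongside :tesla in the mix.exs deps.
defmodule MyApp.ApiClient do
  use Tesla

  adapter Tesla.Adapter.Gun

  plug Tesla.Middleware.BaseUrl, "https://api.example.com"
  plug Tesla.Middleware.JSON

  def ping, do: get("/ping")
end
```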
Some of the NewPipe features really do look great, but this feels like a lot of gushing over a GUI on top of an actual platform. A nontrivial amount of the post could be distilled down to:
NewPipe is great because it allows you to circumvent monetization for both the creators and their platform because I don’t agree with the platform’s price point (oh also, consider using a random 3rd party to donate what you feel is fair to the creators and not said platform).
I’m not saying we should all shed a tear for Google’s lost revenue or anything, but is that really “the best of FOSS”?
Supporting creators directly almost always nets them 10x the revenue that you’d generate by pointing your eyeballs at ads, and it’s far less annoying for users, and far more personal and genuine for both parties.
As for the platform, bandwidth isn’t cheap and that’s worth something. I hope that PeerTube continues to grow into a viable alternative.
Yep, I agree! Ads are inefficient and terrible for basically everyone involved (except for exchanges/brokers), which is why I almost always welcome a way to simply pay for services, especially ones I use all the time (like YouTube).
Edit: Just to clarify, by the way, I think you make a lot of good points about what makes NewPipe great (and it does look great). I’m not trying to be Smart Negative Guy™, more just trying to think through the monetization side of things and how that relates to “the best of FOSS.” If we value free, open delivery of video with a great featureset, shouldn’t NewPipe just front PeerTube?
We do value that, and for important reasons. However, this gets back to a point I made in the article:
There are a lot of political and philosophical reasons to use & support free and open source software. Sometimes it’s hard to get people on board with FOSS by pitching them these first. NewPipe is a great model because it’s straight up better, and better for reasons that make these philosophical points obvious and poignant.
People are already using YouTube, already follow their favorite creators there, and can use NewPipe to get a better experience with the platform and content they’re familiar with - and, in so doing, happen to get a great introduction to why free software is important.
Yes, the utopian ideal PeerTube represents is something to strive for and I hope that we get there (and will personally help where I can to get us there) - but it’s harder to understand why it’s important for someone who doesn’t already understand free software. PeerTube isn’t necessarily objectively better, either: there are still some streaming problems, it’s missing a lot of content creators, there’s little mobile support, etc. NewPipe, on the other hand, makes these arguments immediately self-evident and is a compelling piece of software in its own right.
The point with a lot of this is that it’s the creator that gets to choose how to get money. Like yeah “you’re choosing against your own interests” but the creator is choosing the terms, and I feel like it’s not really up to us to say “actually no I want your stuff but I refuse to look at the ads”
I’d like to support creators directly, and I’d like it to be easier for creators to survive. I just think respecting the creators’ choices is important as well.
Do you consider ad blockers to be stealing?
I, at least, pay a subscription for YouTube. If they don’t like the fact that I use an ad blocker, they can stop taking my money.
If you’re paying for YouTube, do you even need an ad-blocker for it? I was under the impression that paying removes the ads…
I think that gets tricky, but keeping to YouTube’s case, I pay $12/mo for the service and still use a privacy blocker.
Ads undeniably are a privacy nightmare, which is why I personally use one. I also opt for a pay option when I can, such as blendle for news. What I do think gets less ethically ambiguous on that front is when a service does offer a paid option without ads, but the response is simply “I don’t want to pay for it.”
YouTube does not forbid blocking ads. Blocking ads without paying is allowed by YouTube, so it can’t be stealing. At best it is diligently collecting coupons to take advantage of price discrimination.
From the YouTube API terms:
You and your API Clients must not, and must not encourage, enable, or require others to:
modify, interfere with, replace, or block advertisements placed or served by YouTube or by YouTube API Services including in API Data, YouTube audiovisual content, or YouTube players;
That term is about YouTube API Services, which is explicitly not YouTube websites. See section IV.
The comment was not really specific to YouTube. Collecting discount coupons is not stealing from Walmart. Usually, blocking ads is also not stealing from websites.
Looks pretty cool! https://gleam.run/
Agreed. I’m very up on the idea of getting more languages to run on the BEAM. I miss static types and, frankly, I wish that Rust could compile down to run on the BEAM!
Yeah, I love Elixir to death, but sometimes I find myself wishing for a real type system. Some folks swear by Dialyzer, but it feels a bit like a kludgy piece of typing duct tape.
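For anyone who hasn’t tried it, the duct-tape feel comes from specs being advisory: Dialyzer checks them after the fact with success typing and only reports what it can prove wrong. A tiny made-up example:

```elixir
defmodule MyApp.Pricing do
  # The @spec is documentation plus input for Dialyzer; the compiler ignores it.
  @spec total(non_neg_integer(), float()) :: float()
  def total(quantity, unit_price), do: quantity * unit_price
end

# Running Dialyzer (for example via the dialyxir mix task) flags calls it can
# prove will fail, like MyApp.Pricing.total("three", 9.99), while anything it
# cannot disprove passes silently - that is the success-typing trade-off.
```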
The dynamically-typed nature of Erlang and Elixir and BEAM comes from a design requirement: that the systems built in Erlang can be upgraded at runtime. Strong typing gets quite a bit more complicated when you need to be able to have multiple versions of a type coexist at runtime.
Side note, this took me a while to absorb when beginning to write Elixir. My instinct was to use structs instead of generic maps for GenServer state, since better-defined types are better, right? But that imposes hard requirements on hot upgrades that wouldn’t have been there if I’d used untyped maps from the start; removing fields from a struct breaks upgrades. This knowledge was somewhere between “esoteric” and “esoteric to Ruby assholes who just showed up, but well-known to wonks”. The Erlang Way is a lot more than “let it crash”. :)
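A hypothetical sketch of that trade-off - plain map state plus a code_change/3 that tolerates whatever shape the previous version left behind, which is exactly the flexibility a struct takes away:

```elixir
# Hypothetical GenServer keeping its state as a plain map. A map tolerates old
# and new fields coexisting during a hot upgrade; a struct would force every
# loaded version of the code to agree on the exact field set.
defmodule MyApp.Session do
  use GenServer

  def start_link(opts), do: GenServer.start_link(__MODULE__, opts, name: __MODULE__)

  @impl true
  def init(_opts), do: {:ok, %{visits: 0}}

  @impl true
  def handle_call(:visit, _from, state) do
    new_state = Map.update(state, :visits, 1, &(&1 + 1))
    {:reply, new_state.visits, new_state}
  end

  @impl true
  def code_change(_old_vsn, state, _extra) do
    # During a hot upgrade, backfill fields the old version never wrote instead
    # of crashing on a struct whose definition no longer matches.
    {:ok, Map.put_new(state, :last_seen, nil)}
  end
end
```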
Yeah, I really wish there was more type system research going into figuring out how to use them effectively in upgradable, always-on systems, where you might have heterogeneous versions across a cluster. I actually think static types could be super helpful here, but as far as I’m aware there doesn’t seem to be much work put into it.
It’s very difficult. It’s not like nobody tried — https://homepages.inf.ed.ac.uk/wadler/papers/erlang/erlang.pdf
And when people talk about “I wish there was a type system” they probably don’t realise that Erlang is a very different animal (one that can do things other animals have no concepts for). Just bolting on types is not an option (if you want to know what happens if you do, look at Cloud Haskell: you have to have the exact same binary on every node in the entire cluster, or else).
That’s what I mean. I see Cloud Haskell as interesting, but really not the distributed type system I want. It would be super cool to see more new ideas here (or rediscovery of old ones, if they’re around). Eg. you may need some kind of runtime verification step to ensure that a deployment is valid based on the current state of the world. Perhaps some stuff from databases and consensus would help here. Doing that efficiently could be… interesting. But that’s why research is important!
I think protocol buffers (and similar systems like Thrift / Avro) are pretty close to the state of the art (in terms of many large and widely deployed systems using them). When you write distributed systems using those technologies, you’re really using the protobuf type system and not the C++ / Java / Python type system. [1] It works well but it’s not perfect of course.
I also would make a big distinction between distributed systems where you own both sides of the wire (e.g. Google’s), and distributed systems that have competing parties involved (e.g. HTTP, e-mail, IRC, DNS, etc.). The latter case is all untyped because there is a “meta problem” of agreeing on which type system to use, let alone the types :) This problem is REALLY hard, and I think it’s more of a social/technological issue than one that can be addressed by research.
[1] This is a tangent, but I think it’s also useful to think of many programs as using the SQL type system. ORMs are a kludge to bridge SQL’s type system with that of many other languages. When the two type systems conflict, the SQL one is right, because it controls “reality” – what’s stored on disk.
Seriously? PB, where you can’t even distinguish between (int)-1 and (uint)2, is state of the art?
Alice ML is a typed programming language designed to enable open extensions of systems. Objects can be serialized/deserialized and retain their types, and it’s possible to dynamically load new code.
I am so with you on this one, and I’ve got so much to learn!
You might find ferd’s intro helpful. For historical perspective with some depth, you might like Armstrong’s thesis from 2003 that describes everything in deep detail.
Yup, this is related to the point I was making about protobufs and static “maybe” vs. dynamic maps here. In non-trivial distributed systems, the presence of fields in a message has to be checked at RUNTIME, not compile time (if there’s a type system at all).
https://lobste.rs/s/zdvg9y/maybe_not_rich_hickey#c_povjwe
I think of protobufs/thrift as trying to “extend your type system over the network”. It works pretty well, but it’s also significantly different from a type system you would design when you “own the world”. Type systems inherently want a global view of your program and that conflicts with the nature of distributed systems.
edit: this followup comment was more precise: https://lobste.rs/s/zdvg9y/maybe_not_rich_hickey#c_jc0hxo
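In BEAM terms, that runtime check usually ends up as the receiver treating message fields as optional, since the sender may be a node still running older code. A made-up sketch:

```elixir
# Hypothetical handler for a metrics message that may come from nodes running
# older code, where some fields simply do not exist yet.
defmodule MyApp.MetricsHandler do
  def handle({:metrics, payload}) when is_map(payload) do
    latency = Map.get(payload, :latency_ms)          # absent from old senders
    region  = Map.get(payload, :region, "unknown")   # default when missing
    {:ok, %{latency_ms: latency, region: region}}
  end

  def handle(_other), do: {:error, :unknown_message}
end
```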
So this is really interesting. I read the paper on success typing and it seems pretty cool. It still, however, doesn’t guarantee soundness. Then on the other hand, neither does TypeScript, so it’s hard for me to make up my mind about what I want.
That would be cool. There’s at least Rustler.
Static types per se — that’s easy. But think about the distributed system with different versions of VMs. Think about live upgrades.
Going to a Vince Staples show in Oakland tonight! Otherwise, I’m unreasonably excited about doing laundry and hanging out with my dog. I’ve been traveling a lot lately with more coming up, so it’ll be nice to just…not.
i blinked a bit at the fact that this was all expected to be done in a week, but i guess the senior engineer who asked to use reason was aware of the scope of the project.
I had the same reaction. I’m really curious about ReasonML, but the learning curve (particularly for a junior engineer) feels like it would be pretty wild.
My guess is the POC was done in a week. She does mention spending at least a month working on ReasonML in both backend and frontend, so it sounds like it was an ongoing effort.