This article really resonates with me. We should be building purposed computing so that it is cheaper and more performant for its given task. I’m not saying we should get rid of general-purpose computing altogether, but it seems like there are specific uses for computers that whole blocks of consumers never go beyond. Do devices like Chromebooks need to be built on Gentoo to be a competent web OS?
I’m glad they preserved this. I once ran WebOS on an old Android tablet and it worked out great! I left it on there until the tablet died. The OS UI/UX is fantastic.
The author doesn’t even (AFAICT) go into the most evil aspect of OAuth— that it acts as a gatekeeper that allows service providers to police what client programs are allowed to connect to the service. It isn’t just authenticating the user, it’s authenticating the client developer. This allows services like Twitter and Facebook to become walled gardens while still claiming to offer open APIs.
Before OAuth, if a service had a published (or reverse-engineered) API, you could write a third-party client for it and the user could choose to use your client. After OAuth, the user can only choose a client that has been approved by the service provider. If the client does something the provider doesn’t like, such as blocking ads, they just withdraw their approval and kill it.
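But there’s an even darker aspect to this requirement that is perhaps not immediately apparent: what happens if a site declines to accept your registration, or revokes your registration at a later time? Suddenly, a corporation (because the large players are all corporations) has control over whether or not your program is useful to your users.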
Hm; isn’t the “Issue 2” in the article all about exactly this?
…yes, it states that point very clearly.
There’s also the whole “Publish App” part at the bottom.
Oops, missed that, sorry.
Oh yeah, totally, the problem with the internet is not enough “autonomy”. As opposed to having our personas traded by data brokers, brains melted by social media, behaviors modified by targeted advertising and influencer culture.
These techno-libertarian 80s MIT hacker derivative “autonomy” and “freedom” arguments have been such a massive distraction from a material analysis of how power actually uses technology to exploit people for profit. The spectacle of the individual atomizes people and alienates them from common cause.
But if only we were to perfect our APIs and network protocols. Then we would not be exploited by internet companies. We would have autonomy 🙄
I would like to subscribe to your newsletter
I mean, we already have “autonomy”: just layer your protocol on IP and away with you, beasties!
Back in the 00’s I printed out copies of _why’s guide to use as a textbook for teaching kids programming. It’s the most successful literature I’ve ever used for this purpose.
Trying to wrap my head around Lisp macros. I have long had a misconception that Lisp-2s existed as a weird compromise to allow macros to still be fairly useful in the face of lexical scoping. But I have recently seen evidence that, in fact, the opposite is true, and it is much easier to write correctly behaved macros in a Lisp-1.
I am not a Lisp person, so I’m coming at this pretty blind. I’ve been reading papers about the history of Lisp, and trying to understand where my misconception came from. So far I’ve seen this claim repeated in a few places, but nowhere that includes an example of the “right” way to reconcile lexical scope and quasiquoting. So I have a lot more reading to do…
This really doesn’t have anything to do with Lisp-1 vs Lisp-2 so much as it has to do with hygienic vs non-hygienic macros. Your misconception might stem from the fact that the most common Lisp-2 (Common Lisp) has a non-hygienic macro system and the most common Lisp-1 (Scheme) tends to have hygienic macro systems. I think the idea that a Lisp-2 makes it “easier” to deal with non-hygienic macros probably has to do with the fact that if you separate the function environment from the regular variable environment, then it is often the case that the function environment is much simpler than the variable environment. Typical programs don’t introduce a lot of function bindings except at top level or package level.
This is a very reasonable assumption, but in this case I was only thinking about “classic” quasiquote-style macros, and how they differ in Lisp-1s and Lisp-2s.
Yeah, that matches my prior assumption. I was very surprised when I learned how a modern Lisp-1 with quasiquote handles the function capture problem – far more elegantly than the separate function namespace. Then I learned that Common Lisp can do the same thing (in a much uglier way), and I was very surprised that it is not just the canonical way to deal with unhygienic macros. Now it seems like more of a historical accident that Lisp-2s are considered (by some people) “better” for writing unhygienic macros than Lisp-1s.
I’m probably not explaining this well. I ended up writing a blog post about my findings that is very long, but does a better job of explaining my misunderstanding.
https://ianthehenry.com/posts/janet-game/the-problem-with-macros/
Have you had a look at Common Lisp yet? I’m learning macros there and it seems straightforward.
Yep! I’m using Common Lisp as my prototypical Lisp-2 as I try to work through and understand this.
The thing I’m having trouble with is that if you want to call a function in a macro expansion, you have to do the whole funcall-unquote-sharp-quote dance, or risk the function being looked up in the calling code’s scope. It seems CL tried to make this less necessary by saying there are certain (standard) functions you cannot shadow or redefine, so you only actually have to do this with user-defined functions, but that seems like such a big hack that I must be missing something.
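To make that concrete, here’s roughly the pattern I mean; DOUBLE, NAIVE-TWICE, and SAFE-TWICE are throwaway names I made up, and this is only my current understanding:

```lisp
;; A made-up helper the macro wants to call in its expansion.
(defun double (x) (* 2 x))

;; Naive version: the expansion refers to DOUBLE by name, so the
;; call site can capture it with a local function binding.
(defmacro naive-twice (form)
  `(double ,form))

;; The "funcall unquote sharp quote" dance: #'DOUBLE is evaluated at
;; macroexpansion time, so the expansion contains the function object
;; itself rather than a name to look up at the call site.
(defmacro safe-twice (form)
  `(funcall ,#'double ,form))

(flet ((double (x) (declare (ignore x)) :captured))
  (list (naive-twice 21)    ; => :CAPTURED -- picks up the local DOUBLE
        (safe-twice 21)))   ; => 42       -- calls the global DOUBLE
```

As far as I can tell, the unquoted sharp-quote is what makes the difference: the expansion carries the function object itself instead of a symbol to be resolved in the caller’s scope. (I also gather literal function objects don’t survive COMPILE-FILE, which might be part of why this feels like such a hack to me.)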
It’s the same thing with variables. Common Lisp macros just don’t know anything about lexical scope. In fact, arguably, they don’t even operate on code at all. They operate on the union of numbers, symbols, strings, and lists of those things. Code denotes something, but without knowledge of the lexical context, the things CL macros transform cannot even come close to being “code”.
This is why I like Scheme macros so much. They operate on a “dressed” representation of the code which includes syntactic information like scoping, as well as useful information like line numbers and source locations. By default they do the right thing, and most Schemes support syntax-case, which gives you an escape hatch as well. I also personally find syntax-case macros easier to understand.
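A contrived sketch of the variable side of this, in plain CL (SWAP-BAD and SWAP-OK are throwaway names):

```lisp
;; CL macros see only raw symbols, so an expansion can accidentally
;; shadow a variable at the call site.
(defmacro swap-bad (a b)
  `(let ((tmp ,a))
     (setf ,a ,b
           ,b tmp)))

(let ((tmp 1) (x 2))
  (swap-bad tmp x)        ; expansion rebinds TMP, so the swap silently fails
  (list tmp x))           ; => (1 2)

;; The usual workaround: mint a fresh uninterned symbol per expansion.
(defmacro swap-ok (a b)
  (let ((tmp (gensym "TMP")))
    `(let ((,tmp ,a))
       (setf ,a ,b
             ,b ,tmp))))

(let ((tmp 1) (x 2))
  (swap-ok tmp x)
  (list tmp x))           ; => (2 1)
```

Hygienic macro systems make both directions of capture a non-issue by default, which is the point I was trying to make.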
Yeah, I really hate that approach
I’m trying to get polynomial commitments solidified in my head
Good riddance!
This makes sense when seen alongside the death of GO111MODULE. The two having separate behaviors has caused a lot of problems for those who don’t understand the history of Go before it had package management. I may finally get to stop having to help my developers understand how to get things into the $GOPATH versus their module.
I love this! Great project concept!
One question. In many frameworks, multiples of the same param key are treated as an array of values. Did you get your approach, taking the last value as canonical, from a standard out there? I always wondered if there is something that tells us how to handle that.
Thank you for the kind words! I’m not familiar with a specification for the content of a query string. I see that RFC 3986 mentions “query components are often used to carry identifying information in the form of key=value pairs.”
I’ve seen it handled three ways: first-value, array, or last-value. First-value may have some security benefit in particular contexts for resisting additional parameters being added to the end. Array, of course, is handy if you want to accept an array. Last-value is easy to implement. I’ve also seen conventions like “array[]=foo;array[]=bar;array[]=baz” or “foo[bar]=baz” used to encode more complex data structures.
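Not from any spec, but here is a toy sketch just to make the three policies concrete; it is hand-rolled Lisp with no percent-decoding, and SPLIT-PAIRS, FIRST-VALUE, LAST-VALUE, and ALL-VALUES are made-up names:

```lisp
;; Split "a=1&a=2&b=3" into an ordered alist; assumes every pair has an "=".
(defun split-pairs (query)
  (loop with start = 0
        for amp = (position #\& query :start start)
        for pair = (subseq query start amp)
        for eq = (position #\= pair)
        collect (cons (subseq pair 0 eq) (subseq pair (1+ eq)))
        while amp
        do (setf start (1+ amp))))

;; The three duplicate-key policies.
(defun first-value (key pairs) (cdr (assoc key pairs :test #'string=)))
(defun last-value  (key pairs) (cdr (assoc key (reverse pairs) :test #'string=)))
(defun all-values  (key pairs)
  (loop for (k . v) in pairs when (string= k key) collect v))

;; (let ((q (split-pairs "a=1&a=2&b=3")))
;;   (list (first-value "a" q) (last-value "a" q) (all-values "a" q)))
;; => ("1" "2" ("1" "2"))
```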
One thing that Erlang gets right that other people miss is hot reloading. A distributed system that is self-healing has to be able to hot-reload new fixes.
That’s my biggest frustration with the new BEAM compilers in Rust and so on: they choose not to implement hot reloading; it’s often in the list of non-goals.
In a different video, Joe says to do the hard things first. If you can’t do the hard things, then the project will fail, just at a later point. The hard thing is isolated process hot reloading; getting BEAM compiled into a single binary is not.
Hot reloading is one of those features that I have never actually worked with (at least, not like how Erlang does it!), so for possibly that reason alone I don’t see the absence of the feature as a major downside of the new BEAM compiler. I wonder if the lack of development in that area is just because it is a rare feature to have, and while it seems like a nice-to-have, it isn’t a paradigm shift in most people’s minds (mine included!).
The benefits of it do seem quite nice though, and there was some other lobste.rs member who had written a comment about their Erlang system which could deploy updates in < 5min due to the hot reloading, and it was as if nothing changed at all (no systems needed to restart). This certainly seems incredible, but it is hard to fully understand the impact without having worked in a situation like this.
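Is it me, or does this seem like an excessive amount of deployments?
How would any amount be excessive? 100 is honestly not that big.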