This feels like that time Paul Graham said over 25% of ViaWeb code was macros: it was intended as a boast, but everyone who saw it was just horrified.
Found a source.
It was apparently Lisp macros, which is at least a little less cursed than pre-processor ones (AFAIU Lisp).
Eh, you can still do very cursed things with reader macros, which allow you to write non-sexp language features.
Why horrified?
I’ll let you read that again:

> over 25% of ViaWeb code was macros
I don’t see the problem. He made a DSL in Lisp, so the actual code was smaller and denser, describing the problem space well. In this particular case, the users would basically be creating configs, similar to knowledge bases or constraint lists describing what they wanted. The macro-heavy code would then transform that into an end result. Isn’t that good?
Graham has been riding on his “we built an ecommerce site in Lisp and it was so awesome” for nearly 30 ~~decades~~ years now. Sadly the followup was Hacker News.

I’m not sure if you’ve ever had the chance to use Yahoo! Store (what Viaweb became)—it was terrible—but it was also extremely innovative for the time. It used an actual programming language, RTML, that you edited in the browser. It was sandboxed, and built on continuations (for which Graham received a patent).
So, yeah. Maybe he’s been bragging about this for 30 years, but this success ultimately paved the way for YC which changed the world (I won’t make a value judgement on which way).
I’d posit that Graham, like a lot of other people, was lucky to be in the right place at the right time. Sure, he and his team worked hard - creating a company isn’t easy - but lots of people made a lot of money in the first internet bubble, and the smarter ones kept the money and reinvested it. Running a VC firm was essentially a license to print money for a while. And YC itself “just” picks companies to invest in; they don’t run the companies themselves.
In other words, while PG attributes a lot of his success to the use of Lisp for Viaweb, it was probably not a game changer. The real product was making a web storefront, making it a success, and selling it to ~~idiots~~ investors with more money.

I think Graham himself attributes using Lisp as helping him get to the finish line first, ahead of the competitors. And it’s a fair point, as it appears to have worked for him. But there is little doubt that user-editable online stores would have appeared otherwise, and I don’t think he was ever alluding to that.
While every exit involves some luck and some “right place, right time,” there likely weren’t other storefront builders of the kind Viaweb was. That’s something.
Additionally, the choice of Lisp was definitely partly familiarity, but he’s also told stories about why it was advantageous. He wasn’t skilled in running software, so deployment, bug fixes, and reporting were all “run this thing in the REPL.” You can’t do that outside of Lisp—maybe Erlang.
As for YC, it’s never been a traditional VC, and it’s only scaled up as a result of iteration on the model. The first group was like 6 companies. They all lived in Boston. They all had dinner every Sunday (or whatever), they all got introduced to other investors, etc. YC built a community of like-minded startup founders, and they know each other and, when possible, help each other. Does that happen in traditional VC? Probably to some degree. YC took it further, and it should be considered an innovation.
Note: I am not a Paul Graham stan. I do think the man wasn’t “just lucky,” and actually created things that people wanted.
Either 3 decades or 30 years, not 30 decades. 30 decades ago was 1724 ;)
You’re correct, it only feels like he’s been banging the drum for 300 years…
I can happily claim that I have never read any of his essays or whatever. I think I made the right choice.
In terms of how the software works, Hacker News is one of the best sites I’ve used … like, ever.

I mean, lobste.rs was directly inspired by it, and judging by my logs, Hacker News drives 5x to 10x the traffic.

So oddly enough, the Lisp way of writing web apps actually did and does work. I think they had to do a bunch of work on the runtime too, but so did Stack Overflow, Instagram, etc.
Well, I think the point here is largely tongue-in-cheek, but there’s truth in it too, so I’ll focus on the truth: yes, building layers is good, declarative code is good, interpreting specifications is good. But none of these things necessitates excessive amounts of metaprogramming.
My personal experience is that I dig myself into metaprogramming holes when I spend too little time thinking about simpler ways to achieve my goals, so I’ve developed a reflex of stopping myself and reflecting a little whenever the urge to metaprogram comes. So, naturally, when somebody says their codebase is 25% metaprogramming, the same reflex kicks in.
A good deal of standard Common Lisp language constructs are actually macros, and nobody flinches at using them. The language integration for that is really good. So I agree with u/veqq, it’s a nothingburger in general, although naturally a question of taste applies here just as much as to programming in general.
Metaprogramming is as normal as defining functions in Lisp. In Common Lisp, core things like when, and, loop, dotimes, let, defun, defvar, defmacro or cond are macros. Code is data is code. Perhaps you are thinking of metaprogramming in other languages, where it has its own strange system and syntax and complects the system.
Honestly, thinking about it, 25% must mean new macro definitions as typical code probably involves more macros (just from using control flow etc.).
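Since the discussion above keeps coming back to how ordinary macros are in Common Lisp, here is a minimal sketch (my-when is a made-up name, standing in for the standard when) of how a core construct boils down to plain code generation:

```lisp
;; A hand-rolled version of WHEN: three lines of code generation.
(defmacro my-when (test &body body)
  `(if ,test (progn ,@body)))

;; MACROEXPAND-1 shows the ordinary code the macro produces:
(macroexpand-1 '(my-when ready (launch)))
;; => (IF READY (PROGN (LAUNCH)))
```

The backquoted template is just a list being built, which is what “code is data is code” means in practice; defining a macro like this feels no different from defining a function.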
Given this was said in an earnings call and Google is investing heavily in AI, the numbers are likely biased.
Oh yeah, I don’t necessarily think it’s true, but lying about it is in some ways even worse.
I wonder how much this is a case of chasing the metric rather than organic adoption.
Some large consultancies here really push the adoption of assistants onto programmers so that they can boast to their customers that they’re on the bleeding edge of development. The devs grumble, attend mandatory AI training, and for the most part pretend they lean on their now-indispensable Copilots. It is possible something like that is happening here. The VP of AI Adoption (and their department, which Google surely has) counts all the lines from AI-enabled editors. This is then communicated, with a good deal of wishful thinking, all the way up to the CEO.
Or who knows, maybe Google has a secret model which is not utter crap for anything non-trivial and is just holding it back for competitive advantage. I hope Googlers here will let us know!
If you read their report on it, it is definitely chasing metrics. All their other “AI tools for devs” initiatives have abysmal numbers in terms of adoption and results, and they are already saying that all their future growth is in other domains of development. Translation: we are out of things we can easily growth-hack.
FWIW, if this is anything like Copilot (which I do use for personal projects because I’ll take anything that’ll fry my brain a little less after 8 PM) it’s not even a particularly difficult metric to chase. I guess about 25% of my code is written by AI, too, as in the default completion that Copilot offers is good enough to cover things like convenience methods in an API, most function prototypes of common classes (e.g. things like constructors/destructors), basic initialization code for common data structures, hardware-specific flags and bitmasks and so on.
It’s certainly useful in that it lets me spend more of my very limited supply of unfried neurons on the harder parts, but also hardly a game changer. That 25% of the code accounts for maybe 1% of my actual mental effort.
I will give it that, though: it’s the one thing that generative AI tools have nailed. I’m firmly in the “AI should do the dishes and vacuum so I can write music and poetry, not write music and poetry so I can do the dishes and vacuum” camp. This is basically the one application where LLM tools really are doing the dishes so I can do the poetry.
I guess AI doesn’t complain when you cancel the project it’s working on.
Now how long before 25% of executive decisions are made by AI?
I guess 25% is underestimating the current situation.
However unlike with coders (in this example here) and many other fields equally affected, executives will still be allowed (by themselves and their buddies) to serve as the AI’s mouthpiece, and be comfortably redressed for the insult.
The question is how much of it makes it anywhere near to production. Someone fooling around on some small test project is technically also “new Google code”
Eh. It sounds like bullshit, but… if you think 40% of your code should be tests, it sounds like they’re using AI to generate a pile of unit test code.
In my opinion, that’s certainly better than having programmers write the tests for AI written code.
> press auto complete key
> “AI” generates a list of results any existing language server can already provide
> line of code is now written by “AI”
> ???
> Profit.
As you might be able to guess from my comment history, I’m a huge AI skeptic. But here, even though the meaning of your joke resonates with me, I find it a little disingenuous :P
If you start writing this:

    public int getCountPlusOne( <tab>
LLMs will actually generate code similar to:

    public int getCountPlusOne() {
        return this.count + 1;
    }
Now, should people write such repetitive and tedious code? My opinion is no, but unfortunately, in the real world many people do, and that’s where “AI” is better than a language server, even though I prefer using language servers personally.
That’s incredible. The last time I used an LLM to code, it invented 3 new features that don’t exist in the programming language I was using.
Sure it’s useful, but is it multi-billion valuation, nuclear power station-restarting useful?
I believe that ??? is: } and empty lines are kept, adding up to 25% of the lines of code.

Is that why more and more Google products fail or get worse quality-wise?
No, Google has been pioneering the art of making crappy products since even before it started using LLMs. This new tech will enable them to make worse products much faster!