Bummer to see them doubling down on AI, I’d hoped that the current integration was a bandwagon / securing funding thing.
I genuinely think that AI-assisted coding features are table stakes for a developer-oriented text editor in 2024 - just like syntax highlighting and language server autocomplete.
I’m not at all surprised to see Zed investing time in building these features.
Watching the demos in this blog post, I’m struck by how much this feature changes the process of software development from “writing code” into “reviewing code”. Reviewing code is not fun, especially when it’s walls of text spit out from an LLM that you know will require extra scrutiny. If this is the future of software development, I’m not excited for it. So, respectfully, I hope you’re wrong.
Fully agree, I’m never touching AI code assistants until they unequivocally become smarter than me (probably will happen, but not for years) and they get over their overwhelming Python bias, since I don’t use Python but it’s practically the only language I ever see it used with (probably will happen, but again, years from now). Until then, I am not going to be picking out bugs and incorrect types from autogenerated code like picking out lint from a lollypop I dropped on the floor.
However, I do strongly agree with the sentiment that text editors with AI assistants are the norm, and any text editor that wants to be taken seriously needs them, even if I personally detest the feature. I think it should be opt-in and, ideally, completely uninstallable like any other addon, not just for the sake of feeling “clean” but also because your editor shouldn’t favor built-in functionality over what plugins can and can’t do, plus it keeps the core codebase cleaner. It especially needs to be opt-in at the moment because we don’t yet know the legality of this generated code, and some companies do not feel legally confident in using it.
Anyone who needs it can install a 3rd party plugin or something. As soon as someone adds AI to something, I take that something and that someone much less seriously.
You’re welcome not to use these features, but that doesn’t mean other people don’t want them.
I’ve been tracking AI-assisted programming for a while now. Personally I’ve found it extremely beneficial. https://simonwillison.net/tags/ai-assisted-programming/
That’s not how this works. Building features takes time/money/resources/attention/know-how. Working on features like this which many people actively dislike (guilty as charged) is disappointing because it means other features have fallen by the wayside.
Depends if the developer of that feature actively wants it or enjoys working on it as opposed to another feature. Even in a company, people aren’t machines that just spit out arbitrary code, they’re more productive doing what they like. Can’t actually say anything about how the Zed team works internally, though.
I’m sure that AI assistance offers some convenience to programmers. But our convenience cannot justify the energy and natural resource usage which AI requires, now and for the foreseeable future.
It’s like fast-fashion, where the ecological and humanitarian cost is abstracted away from us by distance and time.
We know exactly where our inability to face the real cost of our convenience has led us. We should be wiser than this.
I’m less worried about that now that I can run a competent LLM on my own laptop.
Training costs are high but have been dropping dramatically - the Phi-2 series were trained for less than $50,000 in electricity costs as far as I know.
When I compare the carbon cost of training an LLM used by millions of people to the carbon cost of flying a single passenger airliner from London to New York I feel a lot less environmentally guilty about my use of LLMs!
Although there’s a lot more to it. Google and Microsoft are saying they will not reach their climate targets now because of AI investments. Sam Altman is going around lobbying for increasing energy production to power AI.
https://disconnect.blog/generative-ai-is-a-climate-disaster/
Also this ignores the ethical concerns of going around hoovering up data from all over the internet, not crediting anyone, and making a profit off it.
I’m inclined to think that the huge energy increases from those companies are more a factor of the AI arms race than something that’s required by the LLMs themselves. There is massive over-investment in this field right now - see NVIDIA stock price - it’s very bubbly.
The ethics of the training remain incredibly murky. I fully respect the opinion of people who opt out on that basis, just like I respect the opinion of vegans despite not being a vegan myself.
There was a recent report from Goldman Sachs which takes a less optimistic stance than yours. A summary courtesy of this article by Ed Zitron:
In an interview with former Microsoft VP of Energy Brian Janous (page 15), the report details numerous nightmarish problems that the growth of generative AI is causing for the power grid, such as:
Hyperscalers like Microsoft, Amazon and Google are increasing their power demands from a few hundred megawatts in the early 2010s to a few gigawatts by 2030, enough to power multiple American cities.
The centralization of data center operations for multiple big tech companies in Northern Virginia may potentially require a doubling of grid capacity over the next decade.
Utilities have not experienced a period of load growth — as in a significant increase in power draw — in nearly 20 years, which is a problem because power infrastructure is slow to build and involves onerous permitting and bureaucratic measures to make sure it’s done properly.
The total capacity of power projects waiting to connect to the grid grew 30% in the last year, and wait times are 40-70 months.
Expanding the grid is “no easy or quick task,” and Mark Zuckerberg has said that these power constraints are the biggest thing in the way of AI, which is… sort of true.
There’s this weird thing at the moment where if you work for a company in the AI space - Microsoft, Meta, OpenAI - you are strongly incentivized to argue that this is the future of all technology and will require vast amounts of resources, because that’s how you justify your valuation.
Meanwhile I’m watching as the quality of LLMs that run on my laptop continues to increase dramatically month over month - which is a little inconvenient if you’re trying to make the case that you need a trillion dollars to build and run a digital God.
I have to admit, I can only admire the audacity of claiming that everyone is just pretending that AI uses lots of energy. :-)
That’s not what I’m trying to say. My point here is that I take some of the claims of AI purveyors - like Sam Altman with his trillion dollar data center plans - with a healthy pinch of salt, because nobody raised a trillion dollars saying “demand for this is going to level off at a sensible level”.
I just ran into another attempt to quantify AI energy usage (the title indicates they plan a second part, but I don’t see that it’s available yet).
Thank you for the link! Part 2 is now available.
Table stakes to get VC funding, yes; for developers writing anything other than React apps or Python scripts, hardly.
I have a whole lot of experience and work with a bunch of different languages. I wouldn’t pick a text editor today that didn’t have Copilot-style autocomplete. So it’s table stakes for me at least.
That’s fine, I respect your choice and feel happy for you. But I think people here are arguing the usefulness in general, and not only for low hanging fruits like simple autocomplete or text generation, let’s not even get to the massive privacy concerns.
What are the privacy concerns if you’re using a local model?
I was thinking Zed would be a good editor to introduce to my partner who is beginning to code. But she despises AI so I’ll try to find something else. I also don’t think relying on them when trying to learn is a good idea.
Just about every major GUI editor is in the process of introducing some sort of AI feature at this point. Zed, JetBrains, VSCode, it’s pretty pervasive.
Yeah, I suppose it’d be either Sublime or Atom.
I think Atom is no longer maintained. And maybe we have different priorities, but I feel like using a proprietary paid-for editor to escape a feature you can simply disable in other text editors is a bit unreasonable. I use VSCode and I don’t even have any AI enabled in it. I don’t think VSCode would ever turn AI on by default, since what MS is pushing is Copilot, which is paid for, and I can’t see them making that free and on by default any time soon.
Ah I didn’t know Sublime was a paid product. VSCode I don’t particularly like for other reasons.
I googled around a bit and found GEdit, I think that will suffice.
Sincere, non-rhetorical question: Do you see the traditional programmer’s editors, Vim and Emacs, as too difficult to learn for beginners (even one who’s in deep enough already to “despise[] AI”)? Has she tried their tutorials?
[Comment removed by author]
I don’t really like AI either, but I think that Zed has been quite transparent about it:
Providing server-side compute to power AI features is another monetization scheme we’re seeing getting traction.
So even if I probably won’t use this feature, I don’t think that the rest of the editor will suffer too much from it (since it should have happened long ago).
They are an unprofitable (for now) VC-funded startup looking for new investment rounds. Investors continually ask “so what’s your AI strategy?”. It was only a matter of time.
For anyone wanting to disable the AI/assistant/telemetry features in Zed, here’s my config snippet:
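The snippet itself doesn’t appear to have survived the copy here, so as a rough sketch, the relevant settings.json entries look something like the following. The key names are assumptions based on Zed’s documentation and have moved around between releases, so treat them as approximate rather than authoritative:

```jsonc
// ~/.config/zed/settings.json (sketch; key names assumed, may differ by Zed version)
{
  // Turn off the assistant panel and hide its status-bar button
  "assistant": {
    "enabled": false,
    "button": false
  },
  // Disable inline AI completions (older releases used "features": { "copilot": false })
  "features": {
    "inline_completion_provider": "none"
  },
  // Opt out of telemetry
  "telemetry": {
    "diagnostics": false,
    "metrics": false
  }
}
```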
Perhaps you should enable the metrics so they can see that you’ve disabled the AI bits?
Thank you. I wish these features were opt-in.
not a single mention of “security”, “confidentiality” and so on. good good I guess
The feature includes ollama compatibility (not mentioned in this blog post, I found it in the settings and then tracked down this PR) so people who don’t want code to leave their machine can run local models instead. I’ve been hearing good things about DeepSeek-Coder-V2 recently.
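For anyone wanting to go the local route, a minimal settings sketch for pointing the assistant at an Ollama model might look roughly like this. The provider and key names here are assumptions based on Zed’s docs rather than taken from that PR, and may differ between versions:

```jsonc
// Sketch only: provider/key names assumed, not verified against the linked PR
{
  "assistant": {
    "version": "2",
    // Use a locally served model instead of a hosted API
    "default_model": {
      "provider": "ollama",
      "model": "deepseek-coder-v2"
    }
  },
  "language_models": {
    "ollama": {
      // Ollama's default local endpoint
      "api_url": "http://localhost:11434"
    }
  }
}
```

The model would need to be pulled first (e.g. `ollama pull deepseek-coder-v2`), and Ollama serves on port 11434 by default.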
Thinking a moment longer though, this “absolutely zero guarantees or DIY” approach most GPT wrappers take is not so nice in 2024. You would hope for a little concern at least on paper for user data, like, we relay them to openai/anthropic/whatever /but/ at least we are mindful it’s sensitive stuff. It’s this “yolo” approach to product development that I’m calling out.
good to know thanks!
We really need domain level story filtering, or at least a zed tag.
Good threads here about AI’s energy usage, but I also hope we’re all keeping a black book of “open source”-y and dev tooling companies that are buying into this. Everything else aside, LLMs seem to be, at least in part, an attempt to do an end-run around the difficulties of open source licensing, under the legal theory that training an LLM is fair use and thus doesn’t trigger copyright-based licensing at all. I understand why businesses want that, but those difficulties exist because people who write open source code often want to get something back for their work, whether that’s simple attribution or GPL-like recontribution of changes. Companies buying heavily into plagiarism-based LLMs should never be trusted again in this space.
It’s not sufficient to fix this now, to build models that provide that attribution, for instance. That’s something that should be done, if we’re really stuck with this tech, but that trust is still broken; they started out by screwing us over, and that’s not something we should forget.
And today is the day I un-subscribed from their newsletter
Despite working for an LLM company, I did not use AI much in my work. However, the Zed integration actually changed this completely. I am using it on a day-to-day basis and it has made me probably 2x more productive. It’s not magic, but a tool that, once learned, can be quite helpful. I found the Zed integration particularly good because it doesn’t try to be smart or hide prompts, but just gives you a raw experience with a very tight loop.
Over time I learned the patterns where the model is good and where it makes mistakes. I tuned my prompts and now get a very high success rate of getting what I want. Does it one-shot everything? No, but repeated prompts with the right detail get me to the result much quicker than if I wrote it myself.
Are you able to share any specific examples? I see “it has made me 2x more productive” kinds of statements and I keep wondering what I’m doing differently because while I do use LLM tools in my work, I’d say they make me more like 5% more productive at the “writing code” part of my job: enough to be noticeably beneficial, but not enough to fundamentally alter how I work.
The win for me is mostly in simple situations: single-line autocomplete or tiny code blocks that can be described very tersely. Every time I’ve tried to do something more complex, I have ended up spending 3-5x longer wrestling with prompts and fixing up the generated code than it would have taken me to write the code myself.
I have not, however, tried Zed. So I’m curious to hear concrete examples of how much better it is.
The way I work with LLMs is Zed + Claude Sonnet 3.5 (I assume GPT 4o will do fine as well).
What I generally found useful is the following approach: I write a function definition in a class, then select the whole class. I add context to the assistant panel in Zed, usually the most recent tabs I opened, that provides additional context for the implementation. Then I use inline assist to prompt for implementing the given function and give it some basic requirements. Usually this works very well. Similarly, I make changes to the whole class by selecting the whole class, adding context to the assistant panel and then using the inline prompt (CTRL + Enter).
What I’ve found is that trying to one-shot large files doesn’t work well. What works well is targeted changes and implementations with the right context in the assistant panel. There are clear limitations in what models can do and, most importantly, in how good retrieval methods are. In general I found any sort of magic retrieval, “Ask the codebase”, etc. not very good.
One example very recently: I regenerated some pydantic types from a JSON schema. I then added all the new types into the context window, selected a whole file at a time and prompted the model with: I’ve changed the types in types.py, but have yet to change the usage of said types. Please go ahead and change the usages for me appropriately.
In these examples the initial pass gets about 90% right, the last 10% I either fix or select and make specific prompts.
So that’s all to say, what I feel people try is big one-shot examples, or relying on fancy retrieval methods, which rarely work. What does work well is targeted prompting with the human selecting the right context. So effectively I use the model as a superhuman writer, not as superhuman understanding.
I should probably make a video about this.
I hope that helps.
I found the easiest way to play with this (all locally) was to:
Then push the Assistant Panel button (lower right corner). Configure the API key: sk-000000000000000000000000000000000000000000000000
Since I’m working in Python at the moment, I downloaded WizardCoder-Python-34B. So far it’s earned its 22GB on disk (if only as a novelty so far). I only have 30m in on playing with my “WizardCoder” and I’m not sure it’s a wizard, but I haven’t freed up the 22GB of space yet :).
The slash-commands are super nice & explicit - I love how little abstraction there is between you and what is in the context window of the model.
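For context, the all-zeros key works because a local OpenAI-compatible server generally doesn’t validate it; the editor just needs something in the field. A sketch of pointing Zed at such a server, assuming the current language_models settings shape (which may not match the Zed version used above) and a placeholder local model id:

```jsonc
// Sketch only: assumes an OpenAI-compatible server (llama.cpp's server, LocalAI, etc.)
// already running on localhost and serving the downloaded model.
// Key names follow current Zed docs and may differ in older releases.
{
  "language_models": {
    "openai": {
      "api_url": "http://localhost:8080/v1",
      "available_models": [
        // "wizardcoder-python-34b" is a placeholder id; use whatever the local server exposes
        { "name": "wizardcoder-python-34b", "max_tokens": 16384 }
      ]
    }
  }
}
```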
This is a really good example of UI for LLMs where the use case is lowering friction for power users, which is what I think they’re most suited for
I’m really excited for this! I’ve been using the AI features for Cursor (an AI-oriented VS Code fork) + Supermaven (a GitHub copilot alternative), and I’ve come to believe that for many development tasks, I can deliver the same quality code twice as fast or more. These tools are changing the discipline of programming just like IDEs like Eclipse did way back when