Looking at today’s instant messaging solutions, I think IRC is very underrated. The functionality of IRC clients made years ago still surpasses what “modern” protocols like Matrix have to offer. I think re-adoption of IRC is very much possible, simply by introducing a good UI, nothing more.
About a year ago I moved my family/friends chat network to IRC. Thanks to modern clients like Goguma and Gamja, plus the IRCv3 chathistory support and other features of Ergo, this gives a nice, modern-feeling chat experience even without a bouncer. All of my users other than myself are at a basic computer literacy level; they can muddle along with mobile and web apps, but not much more. So it’s definitely possible.
I went this route because I wanted something that I can fully own, understand and debug if needed.
You could bolt on E2EE, but decentralization is missing—you have to create accounts on that server. Built for the ’10s, XMPP + MUCs can do these things without the storage & resource bloat of Matrix + eventual consistency. That said, for a lot of communities IRC is a serviceable, lightweight, accessible solution that I agree is underrated for text chat (even if client adoption of IRCv3 is still not where one might expect relative to server adoption)—& I would 100% rather see it over some Slack/Telegram/Discord chatroom exclusivity.
I dunno. The collapse of Freenode 3 years ago showed that a lot of the accounts there were either inactive or bots (because the number of accounts on Libera after the migration was significantly lower). I don’t see any newer software projects using IRC (a depressingly large number of them still point to Freenode, which just reinforces my point).
I like IRC and I still use it but it’s not a growth area.
There’s an ongoing effort to modernize IRC with https://ircv3.net. I would agree that most of this evolution is just IRC catching up with the features of modern chat platforms.
Calling IRCv3 an “ongoing effort” is technically correct, but it’s been ongoing for around 8 to 9 years at this point and barely anything came out of it - and definitely nothing groundbreaking that IRC would need to catch up to the current times (e.g. message history).
The collapse of Freenode 3 years ago showed that a lot of the accounts there were either inactive or bots (because the number of accounts on Libera after the migration was significantly lower).
I don’t know if that’s really the right conclusion. A bunch of communities that were on Freenode never moved to Libera because they migrated to XMPP, Slack, Matrix, Discord, OFTC, and many more alternatives. I went from being on about 20 channels on Freenode to about 5 on Libera right after Freenode’s death, and today that number is closer to 1 (which I’m accessing via a Matrix bridge…).
I guess it just depends what channels you were in; every single one I was using at the time made the jump from Freenode to Libera, tho there were a couple that had already moved off to Slack several years earlier.
It’s “opt-in” in the sense that if you send an OTR message to someone without a plugin, they see garbage, yes. OTR is the predecessor to “signal” and back then (assuming you meant “chats” above), E2EE meant “one-to-one”: https://en.wikipedia.org/wiki/Off-the-record_messaging – but it does support end-to-end encrypted messages, and from my memory of using it on AIM in the zeros, it was pretty easy to set up and use. (At one point, we quietly added support to the hiptop, for example.)
Someone could probably write a modern double-ratchet replacement, using the same transport concepts as OTR, but I bet the people interested in working on that are more interested in implementing some form of RFC 9420 these days.
Seems like it’s based on tracking which Signals are accessed when a given Signal is evaluated:
Computed Signals work by automatically tracking which other Signals are read during their evaluation. When a computed is read, it checks whether any of its previously recorded dependencies have changed, and re-evaluates itself if so. When multiple computed Signals are nested, all of the attribution of the tracking goes to the innermost one.
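To make the mechanism in that quote concrete, here is a minimal TypeScript sketch of the auto-tracking idea. It is not the proposal’s actual API; State, Computed and activeComputed are illustrative names only, and a real implementation would also drop stale dependencies on re-evaluation.

```ts
// Minimal sketch of auto-tracked computed signals (illustrative names only).
let activeComputed: Computed<any> | null = null;

class State<T> {
  private dependents = new Set<Computed<any>>();
  constructor(private value: T) {}

  get(): T {
    // If a computed is currently evaluating, record this read as a dependency
    // of that (innermost) computed.
    if (activeComputed) this.dependents.add(activeComputed);
    return this.value;
  }

  set(next: T): void {
    this.value = next;
    // Invalidate everything that read this state during its last evaluation.
    for (const computed of this.dependents) computed.invalidate();
  }
}

class Computed<T> {
  private dirty = true;
  private cached!: T;
  constructor(private fn: () => T) {}

  invalidate(): void {
    this.dirty = true;
  }

  get(): T {
    if (this.dirty) {
      const previous = activeComputed;
      activeComputed = this; // with nested computeds, the innermost gets the attribution
      try {
        this.cached = this.fn(); // any State.get() calls in here are recorded
      } finally {
        activeComputed = previous;
      }
      this.dirty = false;
    }
    return this.cached;
  }
}

// Usage: `doubled` re-evaluates only after `count` changes.
const count = new State(1);
const doubled = new Computed(() => count.get() * 2);
doubled.get(); // 2, and `count` is now recorded as a dependency
count.set(5);  // marks `doubled` dirty
doubled.get(); // re-evaluates to 10
```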
I feel like I’m taking crazy pills whenever I read one of these articles. CoPilot saves me so much time on a daily basis. It just automates so much boilerplate away: tests, documentation, switch statements, etc. Yes, it gets things wrong occasionally, but on balance it saves way more time than it costs.
Comments like this always make me wonder: How much boilerplate are you writing and why? I generally see boilerplate as a thing that happens when you’ve built the wrong abstractions. If every user of a framework is writing small variation on the same code, that doesn’t tell me they should all use an LLM to fill in the boilerplate, it tells me that we want some helper APIs that take only the things that differ between the users as arguments.
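As a concrete, made-up illustration of that point: if every call site repeats the same fetch-and-check dance, the fix is a small helper that takes only the parts that differ.

```ts
// Hypothetical example of replacing repeated boilerplate with a helper.
// The endpoints and response types are made up for illustration.

// Before: every call site repeats the same request/check/parse boilerplate.
async function getUserBefore(id: string): Promise<unknown> {
  const res = await fetch(`/api/users/${id}`, { headers: { Accept: "application/json" } });
  if (!res.ok) throw new Error(`request failed: ${res.status}`);
  return res.json();
}

// After: the shared part lives in one helper; callers pass only what differs.
async function fetchJson<T>(path: string): Promise<T> {
  const res = await fetch(path, { headers: { Accept: "application/json" } });
  if (!res.ok) throw new Error(`request failed: ${res.status}`);
  return (await res.json()) as T;
}

const getUser = (id: string) => fetchJson<{ name: string }>(`/api/users/${id}`);
const getPost = (id: string) => fetchJson<{ title: string }>(`/api/posts/${id}`);
```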
“It should be noted that no ethically-trained software engineer would ever consent to write a DestroyBaghdad procedure. Basic professional ethics would instead require him to write a DestroyCity procedure, to which Baghdad could be given as a parameter.” — Nathaniel Borenstein
What on earth are you talking about? How could “tests, documentation, and switch statements” possibly be a questionable example? They’re the perfect use-case for automated AI completion.
I’ve found it useful when I want to copy an existing test and tweak it slightly. Sure, maybe I could DRY the tests and extract out common behavior but if the test is only 10 LoC I find that it’s easier to read the tests without extracting stuff to helpers or shared setup.
That was one of the places where Copilot significantly reduced the amount I typed relative to writing it entirely by hand, but I found it was only a marginal speedup relative to copying and pasting the previous test and tweaking it. It got things wrong often enough that I had to carefully read the output and make almost as many changes as if I’d copied and pasted.
IME the cumulative marginal savings from each place it was helpful was far, far, far outweighed by one particular test where it used fail instead of error for a method name and it took me a distressingly long time to spot.
I think I’ve only wasted a cumulative five minutes of debugging test failures caused by Copilot writing almost the right test, but I’m not sure I could claim that it’s actually saved me more than five minutes of typing.
I think the general answer is “a lot”. Once you have a big codebase and several developers, the simplicity you get from NOT building abstractions is often a good thing. Same as not DRYing too much and not making too many small functions to simplify code flow and local changes. Easy-to-maintain code is mostly simple, and reducing “boilerplate”, while great in theory, always means macros or metaprogramming or some other complicated thing in practice.
I don’t think you are taking crazy pills! Copilot could totally be saving you time. That’s why I prefaced by saying the kind of project I use Copilot with is atypical.
But I also want to say, I once believed Copilot was saving me time too, until I lost access to it and had some time to compare and reflect.
I’ve used Copilot for a while and don’t use it anymore. In the end, I found that most boilerplate can better be solved with snippets and awk scripts, as they are more consistent. For example, to generate types from SQL, I have an awk script that does it for me.
For lookup, I invested in good offline docs that I can grep, that way I can be sure I’m not trusting hallucinations.
I didn’t think Copilot was useless but my subscription ran out and I don’t really feel like I need to resubscribe, it didn’t add enough.
Same here. One of the biggest ways it helps is by giving me more positive momentum. Copilot keeps me thinking forward, offers me an idea of a next step to either accept, adjust, or reject, and in effectively looking up the names and structure of other things (like normal IDE autocomplete but boosted) it keeps me from getting distracted and overfocusing on details.
It does help though that I use (somewhat deliberately) pretty normal mainstream stacks.
Ditto. Especially the portion of the article that mentions it being unpredictable. Maybe my usage is biased because I mostly write python and use mainstream libraries, but I feel like I have a very good intuition for what it’s going to be smart enough to complete. It’s also made me realize how uninteresting and rote a lot of code tends to be on a per-function basis.
If you are trying to prescribe something new for front-end web but your demo is riddled with questionable practices, there’s an irony folks can’t help but point out. Like pitching a new restaurant with the musk of rotten food hitting you as you open the door: why trust this establishment?
I had a tangential question if that’s allowed. Has anyone here been using these LLMs and if yes, how have they helped you?
I missed the chatgpt train because I wasn’t interested. Recently I found out about llamafiles, which make running these easier, but the large variety of models and the unintuitive nomenclature dissuaded me. I still wanna try these out, and it looks like I have enough RAM to run Mistral-7B.
I have played around with stable diffusion but the slowness due to weak specs and the prompt engineering aspect made me bounce.
I’ve been using LLMs on almost a daily basis for more than a year. I use them for a ton of stuff, but very rarely for generating text that I then copy out and use directly.
Code. I estimate 80% of my LLM usage relates to code in some way - in Python, JavaScript, Bash, SQL, jq, Rust, Go - even AppleScript, see https://til.simonwillison.net/gpt3/chatgpt-applescript - it’s like having a tutorial that can produce exactly the example you need for the problem you are working on, albeit with occasional mistakes
Brainstorming. This one surprised me, because everyone will tell you that LLMs can never come up with a new idea, they just spit out what they’ve been trained on. The trick with brainstorming is to ask for 20 ideas, and to prompt in a way that combines different things. “20 ideas for Datasette plugins relevant to investigative reporting” for example: https://chat.openai.com/share/99aeca01-62c7-4b7c-9878-7ce055738682 - you’ll rarely get an idea that you want to use directly, but some of them may well genuinely spark something interesting
World’s best thesaurus: you will NEVER be unable to find that word that’s on the tip of your tongue ever again.
Entertainment. There are so many weird dystopian games you can play with these things. I recently got ChatGPT to tell me “Incorporating the spirit of Fabergé eggs into your orchestrion project is a brilliant and deeply evocative concept” and I’ve been chuckling about that for days - it’s a consummate “yes man”, so one game is feeding it increasingly ludicrous ideas and watching it breathlessly praise them.
I do most of my work with GPT-4 because it’s still a sizable step ahead of other LLM tools. I love playing with the ones that run on my laptop but I rarely use them for actual work, since they are far more likely to make mistakes or hallucinate than GPT-4 through paid ChatGPT.
Mistral 7B is my current favourite local model - it’s very capable, and I even have a version of it that runs on my iPhone! https://llm.mlc.ai/#ios
Generating simple programs in languages I barely know, like C++ and javascript, which I can then kludge into something that does what I want.
“Here is a some data in this format, convert it to data in this other format.”
Generating regular expressions
Bulk searching: “here’s a list of ten libraries, give me the github or gitlab page for each of them.” or “Take this snippet of code, find all of the references for the NetworkX library, and link the docs page for each of them.”
Summarizing youtube videos
LLMs are good in cases where it’s hard to solve a problem, but easy to verify a solution.
This was a challenge and I’m proud of how I approached it. Most of my friends and family know that “I’m a programmer”, but they have no idea what I do. I made a video that takes a non-technical person through python, numpy, matplotlib, pandas and jupyter in a very concise way so that I can show my data table built on top of those projects.
How do you explain your projects to non-technical loved ones?
I think it’s very worthwhile to work to explain your work. What’s your niche? Give it a try here.
I really want to make individual videos explaining NumPy, pandas, Matplotlib, Jupyter and polars to programmers. Not explaining how to use them, but a quick overview of what they are capable of in the hands of an experienced practitioner. I have worked at a couple of places where the devs only know Node/TypeScript; there’s a lot going on outside of that world.
“I help make sure that servers keep running while we’re asleep, as well as when we’re awake. Usually I succeed.” They usually don’t ask more than that.
At a certain level I don’t think they need to fully understand things. I love it when someone close to me geeks out over their hobby and gets excited explaining it to me, even if at the end of the day I probably won’t fully understand it.
All this is within reason of course, I knew someone that would talk nonstop for 15+ min about Dungeons & Dragons and even though I play it, those conversations were still exhausting.
If it’s not possible to rewrite it in TypeScript in a way that is as fast as the Go version, that really sheds a bad light on V8 and its performance, and begs the question of why you should write any non-browser code in TypeScript.
lacking nuance imo
If it’s not possible to rewrite it in Go in a way that is as fast as the assembly version, that really sheds a bad light on Go and its performance, and begs the question of why you should write any code in Go.
Is this the first big Go project at Microsoft? I had no idea they could choose Go, I would assume that it just wouldn’t be acceptable, because they wouldn’t want to unnecessarily legitimize a Google project.
Their web browser runs on Chromium, I think it’s a little late to avoid legitimizing Google projects
They had already lost the web platform by the time they switched to Chromium.
When I think of MS and ‘developers developers developers develop’, I was led to believe that they’d have more pride in their own development platforms.
I wonder if the switch to Chromium, the embrace of Linux in Azure, the move from VS to VSCode, any of these, would have happened under Ballmer or Gates. I suspect Gates’ new book is like Merkel’s: retrospective but avoiding the most controversial questions: Russian gas and giving up dogfooding. Am I foolish to expect a more critical introspection?
I think corporate open source is now “proven”, and companies don’t worry so much about this kind of thing.
Google has used TypeScript for quite a while now (even though they had their own Closure compiler, which was not consistently staffed internally)
All of these companies are best thought of as “frenemies” :) They cooperate in some ways, and compete in others.
They have many common interests, and collaborate on, say, lobbying the US govt
When Google was small, and Microsoft was big, the story was different. They even had different employee bases. But now they are basically peers, and employees go freely back and forth
Microsoft has actually maintained a security-hardened build of Go for a while now: https://devblogs.microsoft.com/go/
I can see a blog post saying they made changes for FIPS compliance, which is usually understood to be for satisfying government requirements and not a security improvement. I can’t see a summary of any other changes.
MS is a pretty big company, so I wouldn’t be surprised if they have a ton of internal services in Go. Plus with Azure being a big part of their business, I doubt they care what language people use as long as you deploy it on their platforms.
Can confirm that some internal services were written in go as of a couple years ago. (I worked for Microsoft at the time.) I didn’t have a wide enough view to know if it was “a ton,” though.
We embrace and we extend!
I’m too lazy to look up a source but some of the error messages I’ve seen in azure make it clear Microsoft uses go to build their cloud.
Message like this one:
dial tcp 192.168.1.100:3000: connect: timeout
Keep in mind Microsoft also maintains their own go fork for fips compliance. I don’t know exactly their use case but I can tell you at Google we did the same thing for the same reason. I think the main fips compliant go binaries being shipped were probably gke/k8s related.
(Edit removed off topic ramble, sorry I have ADHD)
Could this message just be coming from Kubernetes?
Doesn’t seem to work on Windows firefox
works on windows+firefox here
Nor Linux FF
Alternatively you can do return fn().catch((e) => ...); if you don’t like using try+catch.
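For anyone skimming, the two forms are equivalent for a single awaited call; in this sketch, fn is just a hypothetical async helper.

```ts
// Hypothetical async helper, only for illustration.
async function fn(): Promise<string> {
  return "ok";
}

// try/catch with await:
async function withTryCatch(): Promise<string> {
  try {
    return await fn();
  } catch (e) {
    return `failed: ${e}`;
  }
}

// The .catch() form from the comment above: same behaviour, no try block.
function withCatch(): Promise<string> {
  return fn().catch((e) => `failed: ${e}`);
}
```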
I like this idea! Do you think it’s extreme to try and implement dark/light mode using static HTML? I can’t seem to find a good workaround for a javascript-less solution to give people the option to choose to deviate from their system preference.
But it sure feels like overkill to generate a copy of each page just to avoid making someone enable JS to change the colors on their screen… which I don’t even do because I prefer everything in dark mode anyway.
There’s a CSS-only way (using a heavily restyled checkbox) to toggle other CSS attributes:
Today I learned that light-dark() is a thing! Thanks!
I’m using a similar idea for my own dark mode checkbox: https://isuffix.com (website is still being built).
GP comment might enjoy more examples of CSS :has() in this blog post: https://www.joshwcomeau.com/css/has/
I don’t understand why so many web sites implement a dark mode toggle anyway. If your page uses CSS conditionally on prefers-color-scheme to apply a light theme or dark theme depending on the user’s system preference, why isn’t that enough?
For example, if the user is looking at your page in light theme and suddenly they think their bright screen is hurting their eyes, wouldn’t they change their system preference or their browser’s preference to dark? (If they don’t solve the problem by just lowering their screen brightness.) After they do so, not only your page but all their other apps would look dark, fixing their problem more thoroughly.
For apps (native or web) the user hangs around in for a long time, I can see some reasons to allow customizing the app’s theme to differ from the system’s. A user of an image editing app might want a light or dark mode depending on the brightness of the images they edit, or a user might want to theme an app’s windows so it’s easily recognizable in their window switcher. But for the average blog website, these reasons don’t apply.
I am curious about how many people use it as well. But it certainly is easier to change by clicking a button in your window than going into your system or browser settings, which makes me think that it would be nice to add. Again, for the imagined person who decides to deviate from their system preference.
Although you’ve made me realize that even thinking about this without putting work into other, known-to-be-used accessibility features is kind of ridiculous. There is lower hanging fruit.
Here’s a concrete example. I generally keep my browser set to dark mode. However, when using dark mode, the online training portal at work switches from black text on a white background to white text on a white background. If I wanted to read the training material, I would need to go into my browser settings and switch to light mode, which then ruins any other tab I would switch to.
If there was a toggle button at the training portal, I could switch off dark mode for that specific site, making the text readable but not breaking my other tabs. Or, if the training portal at work won’t add the button, I could at least re-enable dark mode in every tab whose site had added such a toggle.
Or, hear me out, instead of adding javascript to allow users to work around its broken css, the training portal developers could fix its css?
(Browsers should have an easy per-site dark mode toggle like the reader mode toggle.)
I feel like this is something to fix with stylus or a user script, maybe?
sounds like the button fixes it
Sure, but only on sites that provide a button. It seems a little silly that one bad site should mean that you change your settings on every other site / don’t have your preferred theme on those sites.
Or the DarkReader extension or similar.
Given how widely different colour schemes can vary, even just within the broad realms of “light” and “dark”, I can imagine some users would prefer to see some sites in light mode, even if they want to see everything else in dark mode. It’s the same reason I’ve set my browser to increase the font size for certain websites, despite mostly liking the defaults.
It would be nicer if this could be done at the browser level, rather than individually for each site (i.e. if there was a toggle somewhere in the browser UI to switch between light/dark mode, and if the browser could remember this preference). As it is, a lot of sites that do have this toggle need to either handle the preference server-side (not possible with static sites, unnecessary cookies), handle the preference client-side (FOUC, also unnecessary cookies), or not save the preference at all and have the user manually toggle it on every visit. None of these options are really ideal.
That said, I still have a theme switcher on my own site, mostly because I wanted to show off that I made two different colour schemes for my website, and that I’m proud of both of them… ;)
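For what it’s worth, here is a rough sketch of the client-side option mentioned above, using localStorage rather than a cookie. The storage key, class name and button id are made up, and the first function would need to run before the page paints to avoid the FOUC mentioned above.

```ts
// Sketch of a client-side theme toggle persisted in localStorage.
// Key, class name and button id are illustrative, not from any real site.
const THEME_KEY = "preferred-theme";

// Run this as early as possible (e.g. an inline script in <head>) to avoid a
// flash of the wrong theme before the rest of the page loads.
function applyStoredTheme(): void {
  const stored = localStorage.getItem(THEME_KEY); // "light" | "dark" | null
  const systemDark = window.matchMedia("(prefers-color-scheme: dark)").matches;
  const dark = stored ? stored === "dark" : systemDark;
  document.documentElement.classList.toggle("dark", dark);
}

// Wire up the toggle button; clicking stores an explicit override.
function setUpToggle(): void {
  const button = document.getElementById("theme-toggle");
  button?.addEventListener("click", () => {
    const dark = !document.documentElement.classList.contains("dark");
    localStorage.setItem(THEME_KEY, dark ? "dark" : "light");
    document.documentElement.classList.toggle("dark", dark);
  });
}

applyStoredTheme();
setUpToggle();
```

The stylesheet then keys off the dark class on the root element, in addition to (or instead of) prefers-color-scheme.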
I remember the days when you could do <link rel="alternate stylesheet" title="thing" href="..."> and the browser would provide its own nice little ui for switching. Actually, Firefox still does if you look down its menu (view -> page style), but it doesn’t remember your preference across loads or refreshes, so meh, not a good user experience. But hey, page transitions are an IE6 feature coming back again, so maybe alternate stylesheets will too someday.
The prefers dark mode css thing really also ought to be a trivial button on the browser UI too. I’m pretty sure it is somewhere in the F12 things but I can’t even find it, so woe on the users lol.
But on the topic in general too, like I think static html is overrated. Remember you can always generate html on demand with a trivial program on the server with these changes and still use all the same browser features…
I’ve been preparing something like this. You can do it with css pseudo selectors and a checkbox: :root:has(#checkbox-id:checked) or so; then you use this to either ‘respect’ the system theme, or invert it.
The problems I’m having with this approach:
navigating away resets the checkbox state
svg and picture elements have support for dark/light system theme, but not for this solution
Yeah, I think I saw the checkbox trick before, but the problems you outline make the site/page/dark and site/page/light solution seem more enticing, since they can avoid especially the state reset issue. I like the idea of respecting/inverting the system theme as a way of preserving a good default, though!
Yeah, as an alternative, for the state issue I was thinking of using a cookie + choose the styles based on it, but that brings a whole host of other “issues”
Ah yes, one of the super toxic throw away comments anyone can make which alienates one participant and makes the other feel super smug.
These things are mostly going away: phone keyboards killed the “grammar nazi”; no one can spell anymore, and no one knows who is at fault. I’m looking forward to a world where our default isn’t trying to win the conversation, but to move it forward more productively.
Ah yes, “the good old days”
The 1990s were so good that some people’s biggest problem was other people wasting a process in their shell scripts. 🙃
I’m looking forward to a world where our default isn’t trying to win the conversation, but to move it forward more productively.
Probably a bit optimistic
On UUOC: I suspect it was originally in the spirit of fun. It becoming a cultural gatekeeping tool (as many, many in-group signifiers become) is a separate phenomenon and I think it’s probably best to view it as such
IMO the issue is that pointing at other people’s code and saying “lol that’s dumb” is really only something you can do with friends, with strangers over the internet it’s a very different thing
Often, people in these forums are friends to some extent and I’d assume that part of this that became toxic is that since you’re not ALL friends, it’s hard to tell who knows who and what’s a joke and what’s a dig.
How about a well-priced consumer-grade card that gives similar performance? Is there any alternative to the Nvidia Tesla P40? Slightly more modern, less power, without all the hacky stuff?
the RTX series have a desktop form factor and comparable memory, but ain’t exactly cheap.
You can run inference using CPU only, but you’ll have to use smaller models since it’s slower. But the P40 is the best value right now given the amount of VRAM it has.
There are several options for a consumer grade card, but it all gets incredibly expensive really fast. I just checked for my country (The Netherlands) and the cheapest 24GB card is 949 euros new. And that is an AMD card, not an Nvidia. While I am sure the hardware is just as good, the fact is that the software support for AMD is currently not at the same level as Nvidia.
Second-hand, one can look for RTX 3090’s and RTX 4090’s. But a quick check shows that a single second-hand 3090 would cost over 600 euros at minimum here. And this does not consider that those cards are really power hungry and often take up 3 PCIe slots, to make space for the cooling, which would have been an issue in this workstation.
Since I could only accommodate speeds to what PCIe 3.0 offers anyway, a limitation of the workstation, this seemed the best option to me. But of course, check the markets that are available to you to see if there are better deals to be made for your particular situation.
Is the future moving away from FB/Insta/etc and towards easy-to-deploy Discord-alikes for our circles of friends, perhaps all based on open standards and interoperable?
I can’t imagine this happening, because I have no interest in running a chat server for my friends. The last thing I want to do on game night is figure out why message delivery is failing for some people, or debug why my server’s swap usage is at 100%. I already deal with production servers at work, I’d rather let other people manage that stuff for me.
Billions of people around the world use Google’s products every day, and they count on those products to work reliably
Really? The only thing that still feels reliable to me is search. Maps was kind of reliable, but recently shops have disappeared randomly and wayfinding works very strangely (well, I’m in Japan, but it’s still Google). For every other product, I’ve long since given up on relying on them. Heck, for most products I would even assume they’ll cease to exist within the next couple of years.
Think of what a mom in India or a dad in Indonesia use: search, youtube, gmail, photos, drive. These Google products have worked wonders for the majority of cases.
Thanks to the GenAI search results, I wouldn’t even call Search reliable.
I suspect data integrity and systems uptime are treated separately at google.
Gmail is extremely commonly used I think. Interestingly, I never use Google for search. I find it to be pretty bad.
Right. I don’t use Gmail so can’t judge that.
maps navigation is fine in toronto but i don’t drive
incinerating the planet for fun.
I feel like this is isolated demands for environmentalism. I’ve never heard of anyone say that the taylor swift eras tour is “incinerating the planet for fun”, even though it plausibly has resulted in a similar order of magnitude of CO2 emissions and fun. How many emissions have come from all nascar races? If you look at the energy consumption of youtube’s data centers, I bet it’s a lot.
swift burning entire forests is a common meme on instagram https://knowyourmeme.com/memes/events/taylor-swifts-private-jet-emissions-controversy
“Private jets are bad” is not a hot take. “Going to a concert is bad” is a bizarre take.
A Taylor Swift concert (or a Rolling Stones one, or any other big act) is a huge logistical enterprise. Potentially hundreds of thousands of people are relying on the star showing up on time and in the right place. Having them use a private jet is not just an affectation, it’s a sound business decision compared to relying on commercial flights.
Edit: speaking of carbon emissions, at the recent TS concert in Stockholm there were audience members who literally flew in from the US because the tickets were cheaper than at any American venue.
I saw an estimate for the CO2 impact of the NeurIPS 2024 conference - given the thousands of people who flew in for the event - which suggested it may have had significantly more CO2 impact than a training run for one of the larger models.
You may underestimate the footprint. There has been an editorial in the CACM https://cacm.acm.org/opinion/genai-giga-terawatt-hours-and-gigatons-of-co2/ that does some math and demands that energy be considered in every IT decision.
Llama 3.3 70B reported using 11,390 tons CO2eq for training and the model itself runs on my laptop.
As far as I can tell, 11,390 tons CO2eq is equivalent to twenty fully loaded passenger jets from NYC to London.
That number is for just that one model though - it doesn’t include many other research training runs Meta’s AI team would have performed along the way - or the impact of many AI labs competing with each other.
I still think it’s a useful point of comparison for considering the scale of the CO2 impact of this technology.
There are papers out there saying that 10% of total emissions are for training and 90% for inference (and the longer you wait, the more the training cost is dwarfed). Also, the inference cost could apparently vary from 1 to 100 orders of magnitude, so that would make for a lot of twenty-fully-loaded-NYC-London flights.
Like I said, the Llama 3.3 70B model runs on my laptop. It’s hard for me to get too worried about the CO2 impact of software that runs on the laptop that’s already sitting on my desk - I guess it runs a little hotter?
I also have trouble imagining that an LLM running on a server in a data center where it’s shared with hundreds of thousands of other users is less efficient than one that runs on my own laptop and serves only me.
Which papers?
Oh don’t be so dramatic. We’re also generating a lot of shareholder value!
As a maintainer of 3 React apps at work, I have to deal with this dependency churn all the time. Moving to a different stack is not an immediate possibility. One thing that I have started doing is removing dependencies in favour of having our own implementation, e.g. we have our own router, our own form handling lib.
We also have another app in Elm, which has been great, none of this dependency fatigue.
Does form handling need a separate library? Isn’t it automatically done by browsers?
You’d think viewing text would be automatically done by a browser, but many websites disagree and shove megabytes of JavaScript down the pipe to “enhance” the viewing pleasure.
It is, until you decide that you need to handle forms completely differently from the browser, in which case you have to implement it from scratch or pull in a library. This is part of what makes front-end development so uniquely cursed.
Tbh, if you decide you need to handle forms completely differently from the browser, you are likely doing something wrong, and almost certainly breaking accessibility requirements.
browsers don’t have a way to render server-side form submission errors without a full-page refresh
There’s quite a few options, but one i wanna call out is setting target on the form to an iframe.
Sure, because the browser by itself can’t know which SPA framework is being used and automatically work with that. But this is an easy problem to solve: set up an onSubmit handler for the form, and write a tiny bit of JS to prevent default, grab the FormData, submit it with a fetch() request, get the response, and update some state in the app. Boom, you have partial page update again.
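A minimal sketch of that handler, with a placeholder endpoint and a placeholder applyUpdate callback standing in for whatever state update the app actually does:

```ts
// Sketch of the onSubmit-plus-fetch approach described above.
// "/api/comments" and applyUpdate are placeholders.
function applyUpdate(data: unknown): void {
  console.log("update app state with", data);
}

const form = document.querySelector<HTMLFormElement>("#comment-form");

form?.addEventListener("submit", async (event) => {
  event.preventDefault();          // stop the full-page navigation
  const body = new FormData(form); // grab the form's current values
  const response = await fetch("/api/comments", { method: "POST", body });
  if (!response.ok) {
    // Server-side validation errors can be rendered here without a refresh.
    applyUpdate(await response.json().catch(() => ({ error: response.status })));
    return;
  }
  applyUpdate(await response.json()); // partial page update
});
```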
well that’s why people use libraries, to avoid doing that by hand
Couldn’t tell if this was a subtle stab at the «dependency fatigue» fatigue or not, but if not: the last Elm release was in 2019 (I think?), so chances are that those Elm dependencies aren’t very… actively… maintained. I don’t have any dependency fatigue on my Pascal project either.
I’ve come to realise that having your own router is often quite straightforward. Often they don’t need to be that complicated.
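In that spirit, a hand-rolled client-side router can be about this small; the route table and show() renderer below are stand-ins, and a real one would add details like scroll restoration.

```ts
// Sketch of a tiny hand-rolled client-side router. Routes and show() are
// stand-ins; a production router would add scroll handling, guards, etc.
type Route = { pattern: RegExp; render: (params: string[]) => void };

function show(view: string): void {
  document.body.textContent = view; // placeholder for real rendering
}

const routes: Route[] = [
  { pattern: /^\/$/, render: () => show("home") },
  { pattern: /^\/posts\/(\d+)$/, render: ([id]) => show(`post ${id}`) },
];

function resolve(path: string): void {
  for (const { pattern, render } of routes) {
    const match = path.match(pattern);
    if (match) return render(match.slice(1));
  }
  show("not found");
}

// Intercept same-origin link clicks and use the History API instead of a full load.
document.addEventListener("click", (event) => {
  const anchor = (event.target as Element | null)?.closest("a");
  if (anchor && anchor.origin === location.origin) {
    event.preventDefault();
    history.pushState(null, "", anchor.pathname);
    resolve(anchor.pathname);
  }
});

window.addEventListener("popstate", () => resolve(location.pathname)); // back/forward
resolve(location.pathname); // initial render
```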
our own form handling lib
I’d seriously consider this next time, too, though it’s less clear to me where the edge-cases and splinters might be for forms, having not tried to implement my own handling for a long time.
I do like them, but at the same time why do I have to encrypt my recipe site? I would like the option in my browser to not warn about sites that don’t use TLS. Or at least to be presented with an option: oh, this is a reference recipe site, would you like not to use encryption? Encryption is such a pita for simple things. I do think that sites that accept credentials always need to be encrypted, but why go through the hassle for things that are public? I am very thankful to Let’s Encrypt and the Caddy web server for making certificates a non-issue, but at the same time I kind of get tired of the “oh no it’s not encrypted properly” warnings, which everyone will ignore anyway.
Because your viewers don’t want their ISP to serve them ads in the content.
Back in ~2012 users of our startup’s iPhone app complained that it crashed when they were on the London Underground (I think, I may be misremembering the details).
It turned out the WiFi down there was modifying HTML pages served over HTTP, and our app loaded HTML pages that included comments with additional instructions for how the app should treat the retrieved page… and those comments were being stripped out!
We fixed the bug by switching to serving those pages over HTTPS instead. I’ve used HTTPS for everything I’ve built since then.
I can sort of understand that, since bandwidth was at a premium in 2012: if they could remove as many bytes from the payload as possible, they’d increase their network bandwidth overall. Still surprising, but I could at least rationalize it.
Bandwidth was at a premium in 2012? That can’t be right, I feel like 2012 had plenty of bandwidth.
No matter how much bandwidth you (an ISP) have, there are always schemes which promise to reduce your usage and thus improve the end-user experience – or invade their experience and make you money.
(Some of those schemes actually work. CDNs, for example.)
Of course, but in 2012 I’m pretty sure even homes could get gigabit networking. I don’t think of it as being a bandwidth constrained time.
I lived in Cleveland at the time (major US city) and was still limited to sub-5 megabit ISP service.
Interesting. I wonder if my memory is just off. NYC had really bad internet back then, as I recall, because our infrastructure is buried and expensive to upgrade. But I could swear we had like 100Mbps.
Dunno. Crazy to think that 2012 was so long ago.
I looked through my inbox to find what speeds I have had over time.
I both understand and resent this. Bad actors are making my life worse, and for some unfathomable reason it’s legal?!
If your ISP is manipulating your data it should be sued into oblivion, in a just world.
It’s not just ISPs, it’s any malicious actor, such as the operator of the wireless access point you’ve connected to (which may not be the person you think it is). You have a choice of either protecting visitors to your site from trivial interception and tampering or leaving them vulnerable. No one is forcing you to choose either way.
Well, it’s not a just world in every country.
I originally chose not to enable TLS for our game’s asset CDN because checking certs on arbitrary Linux distros is ~unsolvable, and we have our own manifest signing so we don’t need TLS’s security guarantees anyway. Then we found some ISPs with broken caching that would serve the wrong file for a given URL, so I enabled TLS and disabled cert verification in the Linux client instead.
ISPs don’t even have to be malicious, just crappy…
Why didn’t you just ship your own root certificate? :p
It’s sort of self-explanatory. Confidentiality and Integrity.
If you aren’t willing to give those two things to your users I’m really convinced that you just aren’t in a position to host. Recipe site or not, we all have basic obligations. If you can’t meet them, that’s okay, you don’t have to host a website.
https://doesmysiteneedhttps.com/
Because IPsec failed, it’s up to application protocols to provide secure communication instead of the network layer.
Because some entities are passively monitoring all traffic worldwide.
Other than ad networks?! /s
But then again, those entities only really need metadata.
HTTPS leaks a lot less metadata than HTTP. With HTTP, you can see the full URL of the request. With HTTPS, you can see only the IP address. There’s a huge difference between knowing that I visited Wikipedia and that I read a specific Wikipedia page (the latter may be possible to determine based on the size of the response, but that’s harder). With SNI, the IP address may be shared by hundreds of domains and so a passive adversary doesn’t even see the specific host, let alone the specific page.
Usually SNI is sent in the clear, because the server needs to know the server name to be able to choose the right cert to present to the client, and it would require an extra round trip to do key exchange before certificate exchange.
There’s ongoing work on encrypted SNI (ESNI) but it requires complicated machinery to establish a pre-shared key; it only provides meaningful protection for mass virtual hosters (ugly push to centralize); and it’s of limited benefit without encrypted DNS (another hump on the camel).
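To make the “sent in the clear” part concrete from the application side: the server name is something the client supplies before any keys exist. A minimal sketch using Node’s TLS API (the hostname is purely illustrative; this shows where SNI enters the handshake, not the wire bytes):

```typescript
import * as tls from "node:tls";

// The servername option ends up in the ClientHello's SNI extension, which is
// sent before any key exchange has happened.
const socket = tls.connect(
  { host: "example.com", port: 443, servername: "example.com" },
  () => {
    console.log("negotiated", socket.getProtocol()); // e.g. "TLSv1.3"
    socket.end();
  }
);
```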
Thanks, SNI does not work how I thought it worked. I assumed there was an initial unauthenticated key exchange and then the negotiated key was signed with the cert that the client said it wanted. I believe QUIC works this way, but I might be wrong there as well.
Gosh, I thought QUIC is basically TLS/1.3 with a different transport, but it’s weirder than either of us believed!
TLS/1.3 illustrated shows the SNI in the client hello in the clear
QUIC illustrated shows that the initial packet is encrypted with keys derived from a nonce that is sent in the clear in the initial packet; inside the wrapper is a TLS/1.3 client hello
I suppose this makes sense in that QUIC is designed to always encrypt, and it’s harder to accidentally send a cleartext packet if there aren’t any special cases that need cleartext. RFC 9000 says, “This protection does not provide confidentiality or integrity against attackers that can observe packets, but it does prevent attackers that cannot observe packets from spoofing Initial packets.”
Browsers are application runtimes, and plenty of bad actors are all too happy to include their JS software in your pages
I mean if people are going to ignore the warnings it sounds like you don’t need to enable encryption anyways
Looking at today’s instant messaging solutions, I think IRC is very underrated. The functionality of IRC clients made years ago still surpasses what “modern” protocols like Matrix have to offer. I think re-adoption of IRC is very much possible simply by introducing a good UI, nothing more.
aka drawing the rest of the owl
More like upscaling an image drawn before the average web developer was born.
no UI will add offline message delivery to IRC
Doesn’t the “IRCToday” service linked in this post solve that? (and other IRC bouncers)
sure but that’s more than just a UI
Specs and implementations on the other hand…
I think “Lounge” is a really decent web-based UI.
About a year ago I moved my family/friends chat network to IRC. Thanks to modern clients like Goguma and Gamja and the v3 chathistory support and other features of Ergo this gives a nice modern feeling chat experience even without a bouncer. All of my users other than myself are at basic computer literacy level, they can muddle along with mobile and web apps not much more. So it’s definitely possible.
I went this route because I wanted something that I can fully own, understand and debug if needed.
You could bolt on E2EE, but decentralization is missing—you have to create accounts on that server. Built for the ’10s, XMPP + MUCs can do these things without the storage & resource bloat of Matrix + eventual consistency. That said, for a lot of communities IRC is a serviceable, lightweight, accessible solution that I agree is underrated for text chat (even if client adoption of IRCv3 is still not where one might expect relative to server adoption)—& I would 100% rather see it over some Slack/Telegram/Discord chatroom exclusivity.
I dunno. The collapse of Freenode 3 years ago showed that a lot of the accounts there were either inactive or bots (because the number of accounts on Libera after the migration was significantly lower). I don’t see any newer software projects using IRC (a depressingly large number of them still point to Freenode, which just reinforces my point).
I like IRC and I still use it but it’s not a growth area.
There’s an ongoing effort to modernize IRC with https://ircv3.net. I would agree that most of these evolutions are just IRC catching up with the features of modern chat platforms.
The IRC software landscape is also evolving with https://lobste.rs/s/wy2jgl/goguma_irc_client_for_mobile_devices and https://lobste.rs/s/0dnybw/soju_user_friendly_irc_bouncer.
Calling IRCv3 an “ongoing effort” is technically correct, but it’s been ongoing for around 8 to 9 years at this point and barely anything has come out of it - and definitely nothing groundbreaking of the sort IRC would need to catch up with current platforms (e.g. message history).
Message history is provided by this thing (IRC Today), and it does it through means of IRC v3 support.
I don’t know if that’s really the right conclusion. A bunch of communities that were on Freenode never moved to Libera because they migrated to XMPP, Slack, Matrix, Discord, OFTC, and many more alternatives. I went from being on about 20 channels on Freenode to about 5 on Libera right after Freenode’s death, and today that number is closer to 1 (which I’m accessing via a Matrix bridge…).
I guess it just depends what channels you were in; every single one I was using at the time made the jump from Freenode to Libera, tho there were a couple that had already moved off to Slack several years earlier.
IRC really needs end-to-end encrypted messages.
Isn’t that what OTR does?
Not really. It’s opt-in and it only works for 1:1 chats, doesn’t it?
It’s “opt-in” in the sense that if you send an OTR message to someone without a plugin, they see garbage, yes. OTR is the predecessor to Signal, and back then E2EE meant “one-to-one”: https://en.wikipedia.org/wiki/Off-the-record_messaging – but it does support end-to-end encrypted messages, and from my memory of using it on AIM in the zeros, it was pretty easy to set up and use. (At one point, we quietly added support to the hiptop, for example.)
Someone could probably write a modern double-ratchet replacement, using the same transport concepts as OTR, but I bet the people interested in working on that are more interested in implementing some form of RFC 9420 these days.
I’m very curious about how the “automatic dependency tracking” would work.
Seems like it’s based on tracking which Signals are accessed while a given Signal is being evaluated.
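Roughly the trick, as a minimal sketch (not any particular library’s API; the names here are made up): reading a signal while a computation is running registers that signal as a dependency of the computation.

```typescript
type Effect = () => void;

// The effect currently being evaluated, if any.
let currentEffect: Effect | null = null;

function createSignal<T>(value: T) {
  const subscribers = new Set<Effect>();

  const read = (): T => {
    // Reading during an effect's evaluation records the dependency.
    if (currentEffect) subscribers.add(currentEffect);
    return value;
  };

  const write = (next: T): void => {
    value = next;
    // Re-run everything that read this signal.
    for (const effect of subscribers) effect();
  };

  return [read, write] as const;
}

function createEffect(fn: Effect): void {
  currentEffect = fn;
  try {
    fn(); // Dependencies are collected during this first run.
  } finally {
    currentEffect = null;
  }
}

// Usage: the effect re-runs when `count` changes, with no explicit
// dependency list anywhere.
const [count, setCount] = createSignal(0);
createEffect(() => console.log("count is", count()));
setCount(1); // logs "count is 1"
```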
this is so cursed, I love it
I feel like I’m taking crazy pills whenever I read one of these articles. CoPilot saves me so much time on a daily basis. It just automates so much boilerplate away: tests, documentation, switch statements, etc. Yes, it gets things wrong occasionally, but on balance it saves way more time than it costs.
Comments like this always make me wonder: How much boilerplate are you writing and why? I generally see boilerplate as a thing that happens when you’ve built the wrong abstractions. If every user of a framework is writing small variation on the same code, that doesn’t tell me they should all use an LLM to fill in the boilerplate, it tells me that we want some helper APIs that take only the things that differ between the users as arguments.
“It should be noted that no ethically-trained software engineer would ever consent to write a DestroyBaghdad procedure. Basic professional ethics would instead require him to write a DestroyCity procedure, to which Baghdad could be given as a parameter.” — Nathaniel Borenstein
Yeah, it’s definitely that I don’t know when to add abstractions, not that the tool is useful in some specific circumstances 🙄
You created that perception by choosing such a questionable example. It’s reasonable pushback.
What on earth are you talking about? How could “tests, documentation, and switch statements” possibly be a questionable example? They’re the perfect use-case for automated AI completion.
I’ve found it useful when I want to copy an existing test and tweak it slightly. Sure, maybe I could DRY the tests and extract out common behavior but if the test is only 10 LoC I find that it’s easier to read the tests without extracting stuff to helpers or shared setup.
That was one of the places where Copilot significantly reduced the amount I typed relative to writing it all out, but I found it was only a marginal speedup relative to copying and pasting the previous test and tweaking it. It got things wrong often enough that I had to carefully read the output and make almost as many changes as if I’d copied and pasted.
IME the cumulative marginal savings from each place it was helpful were far, far, far outweighed by one particular test where it used “fail” instead of “error” for a method name, and it took me a distressingly long time to spot. I think I’ve only wasted a cumulative five minutes of debugging test failures caused by Copilot writing almost the right test, but I’m not sure I could claim that it’s actually saved me more than five minutes of typing.
I think the general answer is “a lot”. Once you have a big codebase and several developers, the simplicity you get from NOT building abstractions is often a good thing. Same as not DRYing too much and not making too many small functions, so that code flow and local changes stay simple. Easy-to-maintain code is mostly simple, and reducing “boilerplate”, while great in theory, always means macros or metaprogramming or some other complicated thing in practice.
I don’t think you are taking crazy pills! Copilot could totally be saving you time. That’s why I prefaced by saying the kind of project I use Copilot with is atypical.
But I also want to say, I once believed Copilot was saving me time too, until I lost access to it and had some time to compare and reflect.
Programming in Lisp, I rarely have boilerplate, because any repeated code gets abstracted away.
I’ve used Copilot for a while and don’t use it anymore. In the end, I found that most boilerplate can be better handled with snippets and awk scripts, as they are more consistent. For example, to generate types from SQL, I have an awk script that does it for me.
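For what it’s worth, that kind of generator fits in a few lines of TypeScript as well. A toy sketch (not the awk script described above; the SQL, columns, and type mapping are made up):

```typescript
// Toy SQL-to-TypeScript type generator: parse one CREATE TABLE statement
// and print a matching interface.
const sql = `
CREATE TABLE users (
  id integer,
  name text,
  created_at timestamp
);
`;

const typeMap: Record<string, string> = {
  integer: "number",
  text: "string",
  timestamp: "Date",
};

const [, table, body] = sql.match(/CREATE TABLE (\w+) \(([\s\S]*?)\);/)!;

const fields = body
  .split(",")
  .map((column) => column.trim().split(/\s+/))
  .map(([name, sqlType]) => `  ${name}: ${typeMap[sqlType] ?? "unknown"};`);

console.log(`interface ${table[0].toUpperCase()}${table.slice(1)} {\n${fields.join("\n")}\n}`);
```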
For lookup, I invested in good offline docs that I can grep, that way I can be sure I’m not trusting hallucinations.
I didn’t think Copilot was useless but my subscription ran out and I don’t really feel like I need to resubscribe, it didn’t add enough.
Same here. One of the biggest ways it helps is by giving me more positive momentum. Copilot keeps me thinking forward, offers me an idea of a next step to either accept, adjust, or reject, and in effectively looking up the names and structure of other things (like normal IDE autocomplete but boosted) it keeps me from getting distracted and overfocusing on details.
It does help though that I use (somewhat deliberately) pretty normal mainstream stacks.
Ditto. Especially the portion of the article that mentions it being unpredictable. Maybe my usage is biased because I mostly write python and use mainstream libraries, but I feel like I have a very good intuition for what it’s going to be smart enough to complete. It’s also made me realize how uninteresting and rote a lot of code tends to be on a per-function basis.
Yeah, I feel like I type too slowly, so sometimes I’ll just let Copilot generate something mostly in line with what I’m thinking and then refine it.
why are comments on this website so snarky
If you are trying to prescribe something new for front-end web but your demo is riddled with questionable practices, there’s irony folks can’t help but point out. …Like pitching a new restaurant with the musk of rotten food greeting you as you open the door: why trust this establishment?
indeed, pretty disappointing. starts to look like HN :/
That’s because we live in snarky times.
I had a tangential question if that’s allowed. Has anyone here been using these LLMs and if yes, how have they helped you?
I missed the chatgpt train because I wasn’t interested. Recently I found out about llamafiles which makes running these easier but the large variety of models and the unintuitive nomenclature dissuaded me. I still wanna try these out and looks like I have enough RAM to run the Mistral-7B.
I have played around with stable diffusion but the slowness due to weak specs and the prompt engineering aspect made me bounce.
I’ve been using LLMs on almost a daily basis for more than a year. I use them for a ton of stuff, but very rarely for generating text that I then copy out and use directly.
I do most of my work with GPT-4 because it’s still a sizable step ahead of other LLM tools. I love playing with the ones that run on my laptop but I rarely use them for actual work, since they are far more likely to make mistakes or hallucinate than GPT-4 through paid ChatGPT.
Mistral 7B is my current favourite local model - it’s very capable, and I even have a version of it that runs on my iPhone! https://llm.mlc.ai/#ios
Some of my uses:
LLMs are good in cases where it’s hard to solve a problem, but easy to verify a solution.
Maybe I’m just not thinking of the right examples, but is verifying that usually much easier than doing the conversion?
IIRC the formats were
and
So pretty easy to verify (just match the numbers and tokens up) but I was not looking forward to writing all those {{}}s by hand
Also it was only one (very convoluted) instance, so I couldn’t justify the time to write a regex or parser.
I use them to generate weird recipes and memes. At work we pay for copilot so I’m not sure how good they are at writing code.
My god, this feels too real. Excellent work.
Thank you, I try!
This was a challenge and I’m proud of how I approached it. Most of my friends and family know that “I’m a programmer”, but they have no idea what I do. I made a video that takes a non-technical person through python, numpy, matplotlib, pandas and jupyter in a very concise way so that I can show my data table built on top of those projects.
How do you explain your projects to non-technical loved ones?
Geeze I struggle to explain my niche to other programmers, I’ve given up on trying to explain it to nontechnical people.
Does this make you feel lonely? I spent a couple of years working alone on a graph synchronization protocol and the loneliness was brutal…
I think it’s very worthwhile to work to explain your work. What’s your niche? Give it a try here.
I really want to make individual videos explaining NumPy, pandas, Matplotlib, Jupyter and Polars to programmers. Not explaining how to use them, but a quick overview of what they are capable of in the hands of an experienced practitioner. I have worked at a couple of places where the devs only know Node/TypeScript; there’s a lot going on outside of that world.
“I help make sure that servers keep running while we’re asleep, as well as when we’re awake. Usually I succeed.” They usually don’t ask more than that.
At a certain level I don’t think they need to fully understand things. I love it when someone close to me geeks out over their hobby and gets excited explaining it to me, even if at the end of the day I probably won’t fully understand it.
All this is within reason of course, I knew someone that would talk nonstop for 15+ min about Dungeons & Dragons and even though I play it, those conversations were still exhausting.