I only recently came across an interview with Arthur Whitney about the language k, and only then did array-based languages really start to draw me in. I now wish I knew more about them and am still looking for a nice intro to APL and the like.
That said, the best way to learn APL is to write it. Dyalog has a competition that starts with 10 short exercises. There is also APL Quest, which you can solve on your own and then compare with Adám’s solutions.
So apparently a subset of MathML was extracted, named “MathML Core”, and is now generally available for use! This is news to me! I’ve been looking at MathML every couple of years and walked away each time, as I wasn’t a fan of adding heavy runtime polyfills like MathJax. But it seems now you can just use the thing?
What is currently the recommended shorthand syntax for authoring math and lowering it to the MathML Core subset?
From the project description of Temml it seems that it’s basically the code for MathML export ripped out and improved upon, so it makes sense that it works a bit better.
This idea reminds me of my statistical genetics days. There’s an (in)famous piece of software, PLINK, which is just gobsmackingly fast. It achieves this by “compressing” genotypes into two bits. The high school understanding of genetics as AA, Aa, aA, aa can roughly be made to work even with modern sequencing datasets. Usually you can’t tell Aa from aA, so you really just have AA, aA, aa (hom-ref, het, hom-alt). You can clearly stuff that into two bits: 00, 01, and 10. We can repurpose 11 for “no data” or N/A.
Now that you’ve got 32 genotypes per u64, you can start implementing all sorts of operations very quickly. Mean genotype? You can compute that with a couple masks and pop counts, followed by one division. Count of homozygous alternates? Masks and pop counts. Correlation between two genomic positions? I bet PLINK even has a linear least squares implementation written in terms of bitfiddles.
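To make the trick concrete, here’s a minimal C++ sketch of the mean-genotype computation, assuming the 2-bit encoding described above (00 hom-ref, 01 het, 10 hom-alt, 11 missing); PLINK’s actual on-disk encoding differs in its details:

#include <bit>
#include <cstddef>
#include <cstdint>

// 32 genotypes per 64-bit word, 2 bits each:
// 00 = hom-ref (0), 01 = het (1), 10 = hom-alt (2), 11 = missing.
double mean_genotype(const uint64_t *words, size_t nwords) {
    const uint64_t LO = 0x5555555555555555ULL; // low bit of every 2-bit field
    uint64_t sum = 0, present = 0;
    for (size_t i = 0; i < nwords; ++i) {
        uint64_t lo = words[i] & LO;        // low bit of each genotype
        uint64_t hi = (words[i] >> 1) & LO; // high bit of each genotype
        uint64_t miss = lo & hi;            // fields equal to 11 (missing)
        lo &= ~miss;
        hi &= ~miss;
        // each genotype's value is 2*high + low, so two popcounts sum them
        sum += 2 * std::popcount(hi) + std::popcount(lo);
        present += 32 - std::popcount(miss);
    }
    return present ? double(sum) / double(present) : 0.0;
}

Counting homozygous alternates is even cheaper: hom-alt is the only code with the high bit set and the low bit clear, so it’s one popcount(hi & ~lo) per word.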
The trouble is PLINK is a special purpose binary tool. What I actually want is a library of tools for working with compressed arrays so I can use these ideas to build new tools. I hope Vortex becomes such a toolkit.
There is the BitArray type in Julia, which is supposed to work directly with the underlying bits as bools while abstracting that away so you can write code as if it were a normal array. I can’t vouch for how well optimized the code is in practice, though.
As a warning, in C++ there’s a specialization of std::vector<bool> that does something similar but breaks many of the assumptions you expect to hold for a generic std::vector<T>. The major one: it isn’t necessarily stored as a contiguous array, whereas the generic version is. This means you can’t pass the addresses of a start and end element to a function that expects them to be contiguous, among other things. It was more than a decade ago that I ran into these kinds of issues (before I’d learned to very carefully read cppreference.com all the time), but it was definitely surprising to learn about, even just to learn that there was a specialization for it.
The addressing problem is not from a non-contiguous array; it’s from being a bitset and the obvious impossibility of addressing individual bits. As a result, indexing into a vector<bool> returns a proxy object, which breaks all kinds of assumptions about the interaction between vector and its T.
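A small standard-C++ example of what that proxy breaks:

#include <cstdio>
#include <vector>

int main() {
    std::vector<int>  vi{1, 2, 3};
    std::vector<bool> vb{true, false, true};

    int *p = &vi[0];     // fine: ints are addressable elements
    // bool *q = &vb[0]; // won't compile: operator[] yields a proxy
                         // (std::vector<bool>::reference), not bool&

    auto r = vb[0];      // with a normal vector<T>, auto would copy the
    vb[0] = false;       // element; here r is a proxy still tied to bit 0,
    std::printf("%d\n", static_cast<bool>(r)); // so this prints 0
    (void)p;
}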
Having a separate, dedicated bitset causes none of these issues, because people don’t expect it to behave exactly like an array and, more importantly, cannot use it in generic contexts.
Yeah, I wish they’d made it a separate type in C++ to make it clear that you can’t expect it to behave the same as the generic version. It’s a tricky design issue, because lots of stuff does work the same, like iterating over it or accessing elements by index, but then other parts… don’t.
For genetic data, the work for other kinds of packed arrays is already done by the BioJulia team here.
I think their code is flexible enough that you could define a new alphabet for a different domain and still use the optimised implementations for LongSequence.
My only experience with Tcl was with setting up Vivado projects for some hardware programming. This was quite a jarring experience, since the Eclipse-like Vivado IDE would corrupt itself and get into some weird, unmanageable state. It was nice that Tcl could be used to properly bootstrap the IDE, so it offered some respite. But I still somehow associate the unpleasant experience of that IDE with Tcl, albeit unfairly. (All of this was in 2016, so I don’t know if Vivado has improved its VHDL programming experience since then.)
Oh wow, that sounds very cool! I have recently fallen back into my parenthesis addiction and looked into Scheme languages again. I was actually wondering if/when there would be a Scheme written in Rust (there probably already exist several, but I never bothered to actually look).
Is there a reason to prefer R6RS over R7RS? I know that not everyone is happy with the newer standard, but I was wondering if you have an opinion on that matter.
Thanks! There are a few other Scheme implementations written in Rust, I think, but they are less complete and not actively worked on (and of course they don’t support interoperability with async Rust).
The reason R6RS was chosen is that it’s a much more complete spec. As of right now, R7RS small is the only part of R7RS that has been released, and it does not include some desirable things like syntax-case. When R7RS large is fully released, the intention is to support it.
Unrelated to Rust Scheme implementations, but when you stated that the primary reason was async, I was reminded of lispx, which I think is a pretty cool project. It was also created to solve the ergonomics of async in its host language. Maybe there’s something there that could be of inspiration. I enjoy a Scheme as much as the next parenthesis-loving person, but I’d be happy to see more Kernel implementations.
By way of brief comparison, besides the async compatibility, Scheme-rs currently has a much stronger macro system than both of these impls, which are attempting to be compatible with R7RS small or R4RS. For example, steel cannot define a macro of the following form:
(define-syntax loop
  (lambda (x)
    (syntax-case x ()
      [(k e ...)
       (with-syntax
           ([break (datum->syntax #'k 'break)])
         #'(call-with-current-continuation
             (lambda (break)
               (let f () e ... (f)))))])))
To be fair to these implementations, they have a lot of features that scheme-rs lacks. But all of the ones present in R6RS I intend to add sooner rather than later.
This is very exciting to see - native compilation is something I’ve been wanting to work on for steel for a long time, and I’m happy to see someone else take a stab at it for their Scheme. I’ve been toying around with Cranelift for years but haven’t had the dedicated time to push through the work on it.
To address two of the points on syntax-case and async:
As of last night, steel can define this:
(define-syntax (loop x)
  (syntax-case x ()
    [(k e ...)
     (with-syntax ([break #'k])
       #'(call-with-current-continuation
           (lambda (break)
             (let f ()
               e
               ...
               (f)))))]))
Just have to work on datum->syntax and then there is parity. The syntax-case implementation was slightly broken in mainline for a while (and also undocumented).
For async, steel does have support for Rust async functions registered against the runtime, and I have a fun way of integrating with an external executor instance via continuations, but it does not have a public async API (yet) like you do. Best of luck with this, and I’m excited to see where it goes.
Oh that’s sweet! Very cool to see syntax-case in steel!
I hope I wasn’t misrepresenting your project! Let me be clear here: Steel is faster and more featureful than Scheme-rs by a long shot. I brought up syntax-case because that’s where the initial bulk of my effort went (the remainder went to reimplementing the project as a compiler).
It absolutely is the hardest thing to properly implement, in my opinion (and I’m technically not even fully compliant: I don’t add any continuations to the expander (soooo much work)).
I’m thankful I spent so much time considering it while the eval logic was interpreted; it was a weight off my shoulders when moving to compiling. That being said, compiling it also led to a new set of challenges: I had to somehow figure out a way to capture the environment, something that I was just throwing out after compilation.
As an aside, have you noticed how there are so many people who work with Scheme named “Matt”? It’s starting to freak me out a bit
Thanks for the script to disappear the Google sign-in popup! I never got around to fixing it myself.
I love Stylus! It’s much more than just disappearing elements. You can eliminate daily inconveniences by shaping your regular webpages to your liking and making them more accessible.
In that context, I’d like to add another fix - headings on tanstack.com.
My mouse/trackpad is configured to auto-click on rest/hover. Over time, I’ve developed a habit of resting the cursor on any white space of a webpage. Now, on the tanstack docs, each heading is a clickable anchor, but the clickable area also spans the whole width of the content area. When I subconsciously rest my cursor on the white space beside a heading, the heading and contents are suddenly yanked to the top.
Here is the Stylus fix:
a.anchor-heading {
  display: inline;
}
Now, the heading still remains an anchor, but the white space beside the heading text stays out of the clickable area.
uBlock can remove the Google login using the “social” lists. They also remove the social media share buttons that track you.
Generally, any Stylus display: none can be done in uBlock and there’s likely a list for it. With uBlock you might even be able to block the script that inserts the element to stop the root cause.
The default settings are pretty conservative. You need to enable extra “filter lists.”
I thought it was blocked by something in the “Social” category, but it seems it’s actually the “Annoyances > Adguard > Popup Overlays” list that does it for me.
Update: Tanner Linsley, the founder of the tanstack ecosystem, saw this post on Bluesky, and prompted me to submit a PR to fix this problem. The PR is now merged.
It’s an accessibility feature. All 3 desktop OSes I’ve used provide a hover-to-click feature (Dwell to click on macOS) in some capacity.
I resorted to using it during a phase of undiagnosed, severe vitamin B12/D3 deficiency, during which clicking mouse buttons genuinely hurt my fingers.
Fast forward to today: it’s a must-have for me now, a feature that feels indispensable once you get used to it.
Wow, that’s something I would not have expected! I am glad that you were able to work around it. Cool that this accessibility feature seems to be so widely available.
If you don’t mind me asking, wouldn’t it be more painful to type on a keyboard? How did you manage to do that back then?
In any case, thanks for sharing and I hope you’re doing better now!
If you don’t mind me asking, wouldn’t it be more painful to type on a keyboard?
Yep. I had to resort to Dragon NaturallySpeaking. Thankfully, I was on a break from my programming career during those days.
It was all self-inflicted idiocy, though. I was young, lived in a flat devoid of all sunlight, and self-diagnosed the issue as RSI (eerily similar symptoms). Only after a month of that buffoonery did I get it diagnosed by a physician. After just a simple physical check, she ordered deficiency tests, while politely nodding at my self-diagnosis. The test results showed an extremely severe deficiency (the stage at which RSI-like symptoms show up), and my levels had to be replenished with a series of injections.
I am alright now.
Anyway, this whole thing taught me the value of accessibility. We are all temporarily fully enabled.
The demos do look interesting, but the space of libraries that make Python performant feels very saturated to me, with several deep learning libraries and e.g. Mojo tackling this. Then there are also, of course, the previous approaches, numba and Cython. In the end, all of those add some friction to working with it.
What’s interesting to me is that it seems like Python is becoming more and more a way of expressing some computation, but is used less and less for actually carrying out the computation. I wonder if it’s the right language for that, lacking a macro system and all of that. But in such a big ecosystem there is of course also a lot of inertia. In some sense, I feel like it’s similar to C: the network effects are just very strong and make moving to another language maybe not worth it.
(That being said, I really liked the improvements over the last few versions, say from 3.11 to 3.13.)
Downloads? To-do lists? Not sold. I prefer to throw things away deliberately, rather than when I just happen to reboot, which is normally quite an infrequent event.
How often do people here reboot? Since moving to the 6.8 Linux kernel, I’ve started encountering regular memory leaks and have begun to restart every few days. (Battery life is great, so I stay.)
I have also been doing that basically all my life. I have no control over my browser tabs so one of the best things to keep that in check is to close everything at the end of the day (and I deliberately do not want them restored).
I can only recommend it. Over the years I feel like browser history navigation has degraded due to keyboard hijacking, and more and more links just open in a new tab (or window) anyway, which makes going back unreliable. So I just end up with loads of tabs, and I think it’s easier to close them all at the end of the day.
Part of the problem is that it’s not predictable. It may be months away, it may be tomorrow. I reboot when either of the following happens:
A new OS has features I care about.
A kernel (or libc or some sufficiently low-level library that restarting everything that uses it means remastering almost everything) security vulnerability is disclosed.
The kernel crashes.
Of these, only the first is something I can plan for in advance. If I create something in /tmp, it may last for months, or it may go away almost instantly if there’s a crash. For downloads, going away almost instantly is probably fine, but a lot of systems make /tmp a RAM disk (tmpfs literally has it in the name!), and accumulating a few GiBs of rarely-accessed things in a RAM disk is not ideal.
I feel like this is getting more and more popular; I appreciate the blog post. It feels a bit unfortunate, because the way it is implemented requires every library to build up its own classes that essentially perform the computation lazily. This is something that should be doable more elegantly with macros. I wonder if the approach that Julia takes will eventually catch up in popularity, because it allows for better expressiveness in this regard.
I just finished a research project that involved running quite a few methods and comparing them to each other. For caching my experiments I used redo (a build system, but can also be abused for this purpose). The way I structured my project, it meant that the naming of the experiments had to follow a somewhat rigid structure, which worked for my intended purposes, but I would really like to make the entire approach more flexible.
I came across mandala, which seems quite interesting, and I am now looking a bit more into its code. Overall quite an interesting approach. One problem is that the caching seems to be done via SQLite, so that is no good when accessing the same store over NFS. So something that makes this approach scale across compute nodes would be great. I also haven’t yet kicked the tires properly, so I am not sure how well this all scales.
Another thing I would like to have is something like Scribble, but tuned for blog posts, including cross-referencing and pulling in content from other modules. This is actually something I tried to approach with an LLM (Claude), but it failed miserably. I think in the end it would also be nice to have some literate programming component there as well, which at some point would need some caching too (tying it back to the caching approach outlined above).
I’d like to hear from the Lobsters developer community on this topic - did any of the tools you have built provide some surprising benefits long after you’ve started using them?
The thing I’ve most recently written is a compiler for books. It parses a TeX-like markup, builds a tree, and then lets me run Lua passes over it to transform the tree and then output it. My first four books were all written in LaTeX but for the second one the publisher wanted ePubs and outsourced this to a company that completely screwed up the formatting. For the third one I wrote a prior iteration of the tool so that I could write semantic markup and have some LaTeX macros that would expand it, or parse it with another tool and generate HTML.
For the book that I’m just finishing, I wanted to have a single flow for:
HTML for online reading.
ePub (HTML + metadata) for eBooks.
PDFs for print.
PDFs for online reading (fixed-layout eBooks - some people prefer these).
I started writing this one in AsciiDoctor and eventually wrote my own tool. Extending AsciiDoctor is supposed to be possible, but the docs are totally lacking. I wanted to write my code examples as separate files that I could compile and test and then have libclang mark them up and add them to the text, so my document source code is a thing saying ‘pull in the things between these markers in this file and give it this caption’.
One of the things that’s really helped is that there’s no privileged syntax for syntactic markup. Defining a new thing that’s lowered to a span (HTML) or some format (SILE for PDF) is trivial, so I don’t fall back to using backticks; I mark things up with the right language. And then, later, because that markup is there, I can go and use TreeSitter to do syntactic labelling for inline snippets.
I remember that you were talking about this topic in another post a while ago. I wanted to follow up on this and ask if you’re planning on making the code available. I would definitely be interested in taking a look (and not just because I want to procrastinate on writing my thesis).
One thing that I am not sure about is how to do cross-referencing properly. I really love how Scribble resolves the functions used in code and is able to link directly to their documentation in code snippets. I would love if something like this would be available in a language-agnostic way.
Management asked us to keep track of the time we spend on each task “type” every day (R&D, exploitation, support, …).
So I wrote a program to keep track of my time, that I run whenever I switch tasks (it’s bothersome, but wait for it…).
Thanks to it, my time is tracked very accurately, and it was able to highlight that I spend way too much time doing support, leading to actions being taken by management.
On the other hand, my coworkers do it the “old” way, by guessing approximately how much they did for each task during the week. Which is inaccurate and not representative at all, as you might guess.
I have written small Firefox extensions that scratch an itch or two, or a bookmarklet if it’s really small: a simple tab switcher and manager, one to switch focus between tabs when I press capslock + a or d, one to make the docs on a site prettier, one to open up a folder of bookmarks, say, 5 or 10 at a time. All of them I use daily.
Hey, “long after you’ve started using them” is kind of a hard requirement to track (since we rarely record the root cause of an improvement), and one you did not apply to your own examples!
The impression/fear I get is that the more code is generated by an LLM, the harder it gets (for a human) to sift through it all. I wonder if at some point it will become effectively impossible to code without chat assistance. Line count of a project will lose even more meaning (if it ever had any), as “code is cheap”. I do wonder if problem-solving skills will also be neglected, because you can just get an LLM to fix your code.
Sometimes I get the impression that the situation has parallels to the early days of computing, when people were sceptical whether a high-level language such as C would be able to keep up with hand-written assembly code. Of course it cannot, but it frees up the code author to think about different problems, and that usually seems to have a better return on investment. So maybe this will also happen with LLMs: they will simply free up time for developers to devote to other topics.
That being said, I’m still not really using LLMs regularly. My last attempt a few weeks ago was at trying to generate some matplotlib code, and it failed miserably, generating code that accessed properties that did not exist. Maybe it will improve for me as well next time and I’ll have a similar moment as the author of this article.
C would be able to keep up with hand-written assembly code. Of course it cannot
Aren’t compilers generally far better than hand written assembly today, with esoteric optimizations no one could remember and coordinate? (And the loss coming from the unrelated resource use in modern programs, like graphics, fonts or random tangential operations.)
Not exactly. Compilers still regularly produce suboptimal code, and a human can still make a (sometimes very noticeable) improvement. It’s just that it requires a more specially-trained human than it used to, and it might take the human hours or days to come up with a winner, while the compiler does “good enough” in a millisecond. So cost/benefit is what it’s about.
I’ve recently optimized a voxel raycaster targeting the RP2040, and I had to dive into asm. Not to write better code directly, but to determine why my code was slow: register pressure, suboptimal math, and so on.
Sometimes there were tradeoffs: a slower hash function (phi multiply, shift right) vs. 3 xor-shifts, i.e. a faster hash vs. quality of spreading. A 1-2 fps difference.
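For reference, the two hash shapes being traded off look roughly like this (the constants are the usual textbook ones, not necessarily the values used in the raycaster):

#include <cstdint>

// Fibonacci-style hash: multiply by 2^32 / golden ratio, keep the top
// bits. One multiply, good spreading of clustered keys.
static inline uint32_t hash_phi(uint32_t x) {
    return (x * 2654435769u) >> 20; // e.g. a 12-bit table index
}

// Three xor-shifts: the variant found faster here, at the cost of
// spreading quality.
static inline uint32_t hash_xorshift(uint32_t x) {
    x ^= x << 13;
    x ^= x >> 17;
    x ^= x << 5;
    return x;
}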
Sometimes storing multiple intermediate results and keeping them in registers was faster than fusing the loop. Sometimes fusing the loop proved faster. 4 fps.
Packing grid coordinates for the DDA into a single register helped alleviate register pressure significantly. Aligning the packed data on byte boundaries helped a lot. Keeping the most-accessed data in either the LSB or MSB (thus a single instruction to unpack) also helped a lot. About 4 fps.
Compiler can only do so much. And honestly, Sonnet helped with geometry, but not really with the code.
Yeah, a related issue is that it depends on how much experience you have.
LLMs help experienced programmers more than less experienced ones
I guess this will make person-to-person mentoring a lot more valuable? Or at least training and education. I have noticed that there is a real demand for that
I have not seen how LLMs help non-programmers … If I had to guess, I would imagine it can help them pass exams, but maybe not build large applications.
I guess it depends what you mean by “help”.
These articles have experiences / caveats that are consistent with my experience – they are noting the effect that LLMs are more helpful when you know a domain, and not as helpful if you don’t know it:
I meet people (or sometimes read articles by them) every month who have built something useful to them using an LLM that they simply could not have built without it, because without it they can’t code at all and never have.
My gf is now learning to code; sometimes while I’m busy she asks the LLM about topics or things she doesn’t understand. She uses it more as a tutor than as a code generation tool in general, and she reads everything it says. Overall she has progressed quite quickly, since she gets a decently quick feedback loop on what she is doing wrong, or can expand on something if she is interested or didn’t understand.
I’m an experienced programmer, but everyone needs to work with languages and frameworks that they’re not familiar with from time to time. I tend to use an LLM as a big book of examples, showing me techniques I didn’t know I could do in whatever new thing I’m learning. It creates many lovely a-ha moments that I can then build upon.
It’s invaluable (to me) for accelerating the rate of learning new and unfamiliar things, (backed by 4 decades of programming experience of course)
If you’re an experienced programmer, I don’t see how LLMs can make you worse :) Presumably you will notice yourself getting worse pretty quickly, and stop using them. I mentioned in my experiences below how I tend to use them.
The OP’s article is very credible to me, and from an experienced programmer
Though I also acknowledge the hazard of people being under pressure at their jobs, and trying to churn out a bunch of code with an LLM … From the articles I linked, it seems like there is a pretty clear understanding of these problems now. Likewise, LLMs are worse at modifying existing code than generating new code
when people were sceptical whether a high-level language such as C would be able to keep up with hand-written assembly code.
That’s different though. It would only be comparable if 1.) all the prompts to create the “code” were persistent and the code could not be manually changed, and 2.) the LLM(s) used to create the code were deterministic and not random.
If you use LLMs “manually” to generate code, then this is a totally different thing.
A few years ago, we hit a performance regression in FreeBSD as a result of switching from gcc to clang. GCC and LLVM’s inliners started at opposite ends of the call graph and so made very different decisions. Something on a hot path in the kernel was no longer being inlined and that resulted in a measurable performance drop.
We were able to fix it by slapping a couple of always- and never-inline attributes on functions, but this kind of thing still shows up in other places. Your code performance relies, for example, on specific behaviour in an autovectoriser or loop rerolling that later enables vectorisation (can be a factor of 4 speedup).
Yes, the code still exists, but trying to reason about performance of software from the level of C code is increasingly hard.
That’s different though. You switched from gcc to clang. The equivalent would be switching from one LLM to another.
But for LLMs, randomness and non-persistent prompts come on top. The equivalent would be gcc/clang generating assembly with a certain randomness, with the C code thrown away afterwards and only the assembly persisted.
This looks really interesting, but also a bit confusing. I would love it if something like this took off; structural editing is something that sounds too good to be true.
I think that “editing broken structure” shouldn’t be too much of a concern, because once you are committed to structural editing, you will rarely escape insert mode while the tree is malformed; you’ll always want to ensure that the tree is well-formed before entering normal mode again.
So, this is more of a disciplinary issue than a feature issue.
To enjoy structural editing, we have to do the parser a favor by always ensuring the validity of the tree before attempting structural operations. It’s similar to how you can’t enjoy the full cycling experience if you rely on training wheels: you have to do the bicycle a favor by keeping it balanced the whole time it gives you a ride. True cycling enthusiasts don’t focus on training wheels.
I build things and program because I genuinely enjoy taking things apart and understanding, truly understanding them.
AI may be able to solve the task faster, and maybe it can explain the solution, but it won’t help me intuitively understand. Using AI is like copying the teacher’s answer sheets for your homework.
I’ve tried AI tools, and when a new version comes out I try that too, but so far I’ve had no reason to actually use them.
Often enough the only Google search results for what I’m working on are my own posts. In that case ChatGPT isn’t any more helpful either.
Not only are good programmers curious, but they are also responsible. Say we were lawyers and the entrenched culture was to use AI and that this was not considered malpractice. The same question the author asks, flipped on its head as you have done, would be, “Don’t you want to see for yourself what the actual case law is, you know, like, from the court records—the source?” When some junior dev comes to me with questions about a Git commit with my name on it, I need a better answer than to shrug and say, “Copilot wrote it. LGTM.”
I admit, sometimes I’m fresh out of ideas. I would actually occasionally rather have a dialog with a BS generator than to generate bad ideas on my own. In the words of Don Draper, I need someone to fail faster. But it doesn’t happen often enough that I want that BS generator looking over my shoulder constantly volunteering solutions.
Another aspect of this curiosity argument is internships. The kinds of tasks one assigned two years ago to interns (people who were paid very little to be very curious) are now often assigned to LLMs. I don’t have any data, but my impression is that the appetite for hiring software engineering interns has rapidly diminished since ChatGPT. I never enjoyed correcting interns, but talking with them about what they learned and watching their careers progress was immensely gratifying.
I build things and program because I genuinely enjoy taking things apart and understanding, truly understanding them.
I’m exactly the same way… and that’s one of the reasons I’m so enthusiastic about LLM-assisted programming. I can do SO MUCH more exploratory programming with Claude by my side.
I’ve been wanting to figure out WebSockets and SSEs and <iframe sandbox> for years. Thanks to Claude I’m finally getting stuck in to all three of those, learning at a furious pace where previously the friction involved stopped me from getting started with them.
If you’re insatiably curious, LLMs are a gift that keeps giving.
Funny you mention SSEs — I also learned about those recently, and find them cool!
For my approach, I read some blog posts, looked at some libraries, and then wrote an SSE-based application without using any LLM-based tools. Along the way, I found interesting articles about how Wikimedia uses SSEs, as well as a library to expose Kafka via SSE. Reading through such libraries’ GitHub issues and code was also interesting and useful. Coding was a small part of my journey.
To each their own (I won’t knock anyone who benefits from LLM-based tools!), but I struggle to see where I’d fit it into my workflow. For doing the research? For writing the code? I like both of those things… What do you do?
I actually think I’m more likely to ask ChatGPT about something I’m not curious about and just need a solution for (like copying files out of a Docker container).
yes! additionally, llms allow me to postpone learning about details i don’t care about in that moment, while providing an okay-ish implementation of them. i feel much more in charge when exploring, frictionless is truly the right word.
I use them when I don’t know what something is called, because sometimes they can tell me the right keywords. They look like they work when you ask them to do things that have been done a million times before, but if you ask them to do something new, they mostly give you garbage. When I point out their errors, they give me even worse garbage. Better to spend the time learning the problem or the tool than programming the AI assistant.
I noticed something similar as well. It’s incredibly hard to get useful results out of an LLM for a fringe language or library. I guess as a rule of thumb, if you don’t find promising results when searching for your problem online, then an LLM might not be so useful either.
First, I had it build me a little interactive visualization of the problem. I could drag points around and see the relevant regions of space. This allowed me to have the breakthrough that reframing the problem a certain way made it much easier to solve.
I needed regular google, and pen and paper, to find the final equation, but once it came to actually integrating this equation Claude helped me write the majority of the code and saved me from at least an hour of reading docs.
Sure, using AI here is like “copying the teacher’s answer sheets for your homework” but the thing I’m trying to do here is not “learn frontend” or “memorize the interface to a numeric integration library”, I am completely fine with not truly understanding those components. Claude allowed me to mostly ignore those components in service of actually truly understanding this fun math problem.
The current standards of evaluation in the PL community are set from a masculine perspective as well, valuing formalism and formal methods over user studies, quantitative over qualitative work, and the examining of technical aspects over context and people.
I just can’t connect the dots between “masculine” and valuing formalism, formal methods, quantitative over qualitative work, and the examining of technical aspects. Can someone ELI5?
It reminds me a bit of the “punctuality, standardized testing, etc are white supremacy” stuff from the previous decade. Honestly this stuff always seems a lot more like harmful stereotyping and I would appreciate a clear explanation for why it’s not as toxic as it seems.
Except the stereotype is true: women are raised to care more about people than men are, at least in the Western countries I’m aware of. This whole thing about formalism being more masculine than user studies does match my intuition about why we find so many more men in “hard” sciences than in the humanities. (Note: I hear that Asia has different stereotypes, and many more women in hard sciences. In Japanese animation, for instance, a typical support character is a female scientist or technician — though the main/support gender bias is still there, I think.)
Now, the famous “Programs must be written for people to read, and only incidentally for machines to execute” was indeed coined by men (Harold Abelson and Gerald Jay Sussman), and on that same page I learned that another man, Martin Fowler, said “Any fool can write code that a computer can understand. Good programmers write code that humans can understand.” Clearly human stuff is important to men too. And there are some crazy impressive low-level women programmers out there too.
And yet, I can’t shake the feeling that indeed, we dudes tend to ignore the human element more than women do. One striking example would be the Linux kernel, and the recent Rust debacle. It did reek of not trying particularly hard to take care of humans. Or take my own readability article: the whole point is to cater to humans (increase readability), but then I reduce humans to a single limitation (working memory), and evaluate various advice from that alone. Which, when I think about it, is only possible because I have such a mechanical outlook of the human mind — and I don’t even have any expertise there!
Now, that paragraph in the paper could probably have benefited from a couple of references showing the prevalence, or at least existence, of the stereotypes it uses in its exposition. Without them, it feels like the author has internalised those stereotypes more than she would like to admit.
While an inaccurate stereotype may be more harmful than an accurate one, an accurate stereotype is still pretty bad. Even if it happened to be true that men are better drivers (whether by nature or nurture), it seems pretty harmful to refer to good driving as a masculine trait. Also, a thing can be true for a group of people and not be true for all individuals in that group, so applying stereotypes (even accurate ones) to individuals is usually unjust. That’s probably not what the paper is advocating, but it’s an inevitable consequence of trading in stereotypes, and moreover I don’t see how stereotyping improves the paper.
For a specific idea of a harmful true stereotype see also “black people don’t swim”. There’s lots of stuff in society which comes from the environment and not conscious choice. It doesn’t mean we should assume that’s just how things are and expect them to continue. Sometimes things are more universal if we allow it / enable everyone to do better in whatever way they choose to approach things.
There was a fascinating experiment on stereotype threat where they got a bunch of Asian and Black Americans to try golf. They told half of them that golf was very mathematical and the other half that it was very athletic. There was a statistically significant difference: the Black folks who were told it was athletic did better, and the Asians who were told it was mathematical did better. Both groups did better at the thing they’d been stereotyped as being better at, even when it was the same task. So reason one that even accurate stereotypes are bad is that they impact the people being stereotyped.
Reason two is that they are still about probabilities. If a stereotype is 90% accurate, it will not apply to all of the members of a group that you can fit in a fairly small room. If 90% of green people are bad at programming, you’ll see fewer green people in software development, but all of the ones you see will be statistical outliers, and the stereotype tells you nothing about that subset, yet people interpret it as if it does. And even the most accurate stereotypes apply to far fewer than 90%. So that’s reason two that they’re bad.
Reason three is that a lot of accurate stereotyped behaviour is due to how people interact with members of a group at an early age. People subconsciously reward behaviours in children that conform to expectations. Boys and girls are praised for different things by every adult around them. This makes stereotypes self perpetuating.
Couldn’t you just as easily argue that formal methods (conscientious, detail-oriented, theoretical) are feminine, whereas user-studies (pragmatic, empirical, utilitarian) are masculine?
…it was once, not even so long ago, seen as women’s work. People used to believe that women are good at programming because they are fastidious, “they worry how all the details fit together while still keeping the big picture in mind” [22] and “programming requires patience and persistence and these are traits that many girls (sic.) have” [47].
One striking example would be the Linux kernel, and the recent Rust debacle.
But the C advocates were arguing against formalism and formal methods, qualities that the quote describes as “masculine”. In fact, I’ve seen people argue that the “C is fine, just write good code” mindset is an example of masculine arrogance (or what have you). So we’re in a situation where emphasis on formal methods is masculine and disdain for them is also masculine.
Except the stereotype is true: women are raised to care more about people than men are,
I’d say the stereotype was true: we’ve observed much change in society over the last ~15 years, particularly in gender roles. I don’t think the stereotype is true anymore. If it is, it’s decreasingly so year after year, and will soon be definitively false.
The ground of the feminist critique is what we could call a critical history. Traditionally, we think of philosophy or science arising from a neutral form of “reason”, detached from social or historical context. A critical history challenges this claim and views thinking as necessarily embedded in a place and time. Thought is not considered to be a universal means of finding objective truth, but rather as arising out of a specific place and time, under regimes of thinking that ground our understanding of what is true or false. This is not subjectivism (“everyone’s perspective is valid”) but rather a sort of uncovering of the origins of ways of thinking which we take for granted.
Given the historical context of Western civilization, and more specifically the 20th-century development of computers, feminists can critique the development of computing through the lens of a specifically feminist history: uncovering the manner in which thinking about computers came out of a specifically patriarchal context and mode of thinking. The point is not that early computing pioneers were men, but rather that they were embedded in a society which prioritized ‘masculinist’ forms of thinking and potentially reproduced those structures: structures that exist independent of any individual’s identity, and instead have a sort of independent existence.
It’s important to understand that it’s not a universalizing critique, ie we can’t understand everything about computing through feminist critique, but it is one lens through which we can look at the history of computing
The point is not that early computing pioneers were men, but rather that they were embedded in a society which prioritized ‘masculinist’ forms of thinking and potentially reproduced those structures: structures that exist independent of any individual’s identity, and instead have a sort of independent existence
Put that way, it makes more sense, at least to me: “masculine” refers to the patriarchy-way-of-thinking. I would have used another term, but that’s nitpicking.
The original text in the paper sounds too much like a stereotype applied to men as a gender, maybe because I’m mixing, in my head, my own perspective with the past and everything in between: “men molded the field like men do, because they all behave the same way”.
Based on your description, it seems like critical theory and feminist critique is principally about ideological framing? As someone who is interested in programming language design, I probably don’t have a lot of interest as to whether we should believe that computing originated out of something called “patriarchy” or whether it originated out of some neutral reasoning, I just want tools that help me accomplish my goals more easily. These ideological frames feel unhelpful at best and divisive or even toxic at worst (I made another comment about how claims like “formalism is masculine” and “objectivity is white supremacy culture” seem like really harmful stereotypes and yet they seem to closely orbit critical theory).
I just want tools that help me accomplish my goals more easily
This is not a statement free of ideology. First off, you are conveying a form of instrumentalization: ie, you view programming languages as a tool to be utilized. I’m not saying this is incorrect or bad (obviously, there are many cases in which we do use PLs this way), but it is not the only way of conceiving of programming language design. We could think of programming languages as toys to play with, forms of expression, or places to try out new ideas. Consider the whole field of esoteric programming languages – almost all of them totally fail as “tools to accomplish goals more easily”.
I probably don’t have a lot of interest as to whether we should believe that computing originated out of something called “patriarchy” or whether it originated out of some neutral reasoning
The point is, there is no “neutral reasoning”. Everyone is always thinking within a culture that has a certain ideological frame. This not a moralistic point (although sometimes this sort of analysis does end up distorted into popular culture that way), it just means that you can’t separate computing from its historical, cultural, and ideological history if you want to understand it fully. I mean, consider the basic point (referenced in the paper) that most PLs are monolingual in English – this isn’t because English is the “objectively best” language for programming, it’s necessarily tied to a long history of British and American cultural imperialism. And again, I want to emphasize, the point isn’t that it is “morally wrong” that programming languages are in English. It’s the academic point that this development is inseparable from the history of colonialism. And this critical history can inform experiments, development, and research into non-English based programming languages.
I made another comment about how claims like “formalism is masculine” and “objectivity is white supremacy culture” seem like really harmful stereotypes and yet they seem to closely orbit critical theory
Your issue appears to be with specific applications of critical theory, yet you expand this to include the entire field itself. One could certainly make a bad feminist critique of PLs, but that doesn’t mean that feminist critique is a useless enterprise, which is what you seem to be claiming.
We could think of programming languages as toys to play with, forms of expression, or places to try out new ideas. Consider the whole field of esoteric programming languages – almost all of them totally fail as “tools to accomplish goals more easily”.
Nit: the “goal” in those cases is to have something to play with, something to express oneself with, or some place to try new ideas. You aren’t really disagreeing.
I mean, consider the basic point (referenced in the paper) that most PLs are monolingual in English – this isn’t because English is the “objectively best” language for programming, it’s necessarily tied to a long history of British and American cultural imperialism.
I’ve seen a good chunk of listings for game code written in German that would disagree with this premise. I’d imagine that the Soviet code listings in Cyrillic probably also disagree–Drakon, for example.
Re: imperialism, it’s well-known that CS is rife with accomplished Englishmen like Alain Colmerauer and Americans like Edsger Dijkstra. Similarly, we all run on silicon manufactured in the United States by red-blooded American companies like TSMC and using phones made by god-fearing businesses in Western countries like Huawei.
The thing is, that lens means a particular thing in academia, and outside of academia–where we are here–it’s easy to “disprove” using common sense and rudimentary historical knowledge by the casual practitioner. That’s not to say it’s a bad or even incorrect lens, but merely that it requires more nuance in application and context than is usually found here–people spend their entire doctorates haggling over narrow aspects of imperialism and feminism, after all.
EDIT: After talking with a colleague, I’ll concede that if by “forms of expression” you mean a programming language itself as a sort of work of art then my lumping it in as a tool is a bit unfair. Paint vs paintbrush.
I’m not sure if I understand your point here or if we really disagree. I said most programming languages are English monolingual, and that this fact is tied to the history of British and American cultural imperialism. It is not that we can understand technology solely through that lens, just that a purely neutral, technical, ahistorical viewpoint is not the correct one. The example of Huawei is particularly interesting here – it’s been sanctioned by the US since 2020, and for geopolitical, not technical reasons.
But isn’t it the case that certain fields are dominated by a particular language? Musical terms are Italian; cooking terms and techniques are French; large portions of philosophy are French and German, not to mention the endless Latin found in medicine and law. Could it not be that computers are largely English because English-speaking countries dominated the space early on?
Another thing I’ve heard (or read; sorry, no citations available) is that non-English-speaking programmers tend to prefer programming languages not in their native language, for whatever reason. Maybe a simple “translation of keywords into the target language” just doesn’t work linguistically? Not enough non-English documentation/learning materials?
The ability to share code and libraries with the rest of the world. Imagine if you had to learn French to use sqlalchemy, German for numpy, Japanese for pandas, and Russian for requests.
consider that constructed languages are (with few exceptions) designed to be much simpler to learn than natural languages. it’s a lot easier to learn 200 sitelen pona than 2000 kanji. (and it’s a lot easier to learn a phonetic spelling than whatever english is doing)
After we’ve finally finished migrating everything to Esperanto, it will be found to be problematic for one reason or another. Then we will need to migrate to a newly created human language as the base for programming languages. I vote for Tolkien’s Quenya.
TBH it’s a great case for programming languages in English, which is the most popular spoken language in the world thanks to its strong popularity as a second language globally.
https://www.ethnologue.com/insights/ethnologue200/
He spent a lot of time at the University of Texas in Austin, hence I think it makes sense that the quotes are in English (as that is the actual quote). And yes, I guess most Dutch readers are also familiar enough with English that they do not need a translation.
I just want tools that help me accomplish my goals more easily
This is not a statement free of ideology.
Yeah, it pretty much is. It is a statement of personal preference, not a set of beliefs.
but it is not the only way of conceiving of programming language design.
And it does not claim that it is “the only way of conceiving of programming language design.” It is an expression of a personal preference.
this isn’t because English is the “objectively best” language for programming
And nobody claims that that is the case, so this is just a straw man.
inseparable from the history of colonialism
It has almost nothing whatsoever to do with “colonialism”.
critical theory, yet you expand this to include the entire field itself.
Critical “theory” is not really a “field”, and certainly not a scientific one. It (accurately) self-identifies as ideology, and then goes on to project its own failings onto everything else. With “critical theory” the old saying applies: every accusation is a confession.
what goals do you want to accomplish? why do you want to accomplish them? in what socio-politico-technical context are the tools that are more or less fit for that purpose developed, and how does that context influence them?
What goals or socio-political contexts lend themselves more to more or less formal programming methods? Like even if I want to build some “smash the patriarchy” app rather than a “i ❤️patriarchy” app, would I benefit more from languages and tools that are less “formal” (or other qualities that the poet ascribes to masculinity)? I can’t think of any programming goals that would benefit from an ideological reframing.
the current state of the art in PL research and design prioritizes machines over people, there is a focus on programming languages that are easy to read for the machine, rather than easy for people
I’ve not finished the paper yet but it seems like the authors are using “masculine” as a shorthand for “thing oriented”. Which isn’t a perfect analogy, but in my experience “thing oriented” typically skews male and “person oriented” typically skews female.
Yes, and this is a stereotype that has been semi-deliberately produced and spread. You can read lots of stuff about women in technical fields in the late 19th through mid 20th century, going to universities, applying for jobs, etc and a lot of it involves people explicitly or implicitly saying “women can’t do X because men think logically/rationally/analytically, while women think emotionally/impulsively/socially”. Like most stereotypes there’s a grain of truth in it from the right angle – I recall talking to my sister when she was pregnant with her son and she often said things like “I get so emotional for no reason” and “it’s so draining, I feel like my IQ has dropped 20 points”… now take this phenomenon and project it to a time when families commonly had 8 kids or whatever, your average woman would spend a decade or two of her life either pregnant or recovering from pregnancy.
But the real reason that stereotype exists is because it is used as a proxy for “women shouldn’t be able to do X because I do X and I’m obviously better than a woman”.
Women make up the majority of college graduates. Have you considered the crazy notion that women are capable of making their own decisions as to what fields they wish to study and later work in?
98% of all people choose to not go into computer science. What’s wrong with the world that with males that percentage is only 97%?
As a white boy, no one ever told me that I shouldn’t program because it’s not a white thing. No one ever told me boys can’t code. No one ever told me that computers are not for people like me. No one ever sent a less-competent girl to a maths or programming extracurricular event because programming is a girls thing and so they didn’t consider me. No one ever told me not to bother with programming because it doesn’t lead to appropriate careers.
As Director of Studies for Computer Science at an all-women Cambridge College, every single one of the students that I admitted had stories like that from people in positions of authority. Young women who went on to get first-class honours degrees in computer science at one of the top departments in the world were told girls can’t program by their computing teachers. Girls who scored Gold in the Mathematics Olympiad were encouraged not to enter by their teachers. These aren’t isolated things, these are small social pressures that have been relentlessly applied to them from birth. And the ones I see are the ones who succeeded in spite of that. A much larger number are pushed out along the way.
Now maybe social pressure like this doesn’t apply to you and so this is not relatable for you. There’s an easy test: do you wear a skirt in the summer when it’s too warm for trousers to be comfortable?
Now why do I care? Because I’m selfish. I want to work with the best people. Having more than half of the best people self-deselect out of the profession as a result of unintentional social pressure means I don’t get to work with them.
Thank you for saying what I wanted to far better than I ever could.
I’ll follow up on the research on sex preferences in newborns and other primates, ‘cause it’s really interesting: built-in differences in preferences are totally real, sure! The differences exist. But everyone is an individual. The way I think of it is that looking at whatever human behavior you want to measure, you’ll probably get a big bell curve, and if you measure it by sex you’ll probably get two different bell curves. The interesting question for science is how much those bell curves overlap, and how much the differences are environmental vs. intrinsic. The studies on newborns and such are interesting ‘cause there’s barely any cultural or environmental impact at all, so that’s relatively easy to control for.
But when people do studies on “vaguely what do these bell curves look like”, such as https://doi.org/10.3389/fpsyg.2011.00178, they tend to find that the differences within each bell curve are far larger than the differences between the bell curves. That one is neat ‘cause they look at (fuzzy, self-reported) cultural differences as well and find they have a big effect! They look at generic-white-Canadian men and women and get one pair of bell curves, then look at South/East Asian men and women and get a quite distinct pair. Sometimes the male/female bell curves in each culture overlap more with each other than they do with the bell curves of the other culture! (Figure 4 of that paper.) Sometimes the cultural differences outweigh the intrinsic/genetic ones! Sometimes they don’t! It’s wild, and in that paper deliberately quite fuzzy, but the broad strokes make me really wonder about the details.
I am far, far, far, far, far from an expert on psychology, but the conclusions are pretty compelling to me. If you could magically make a perfect test of “technical ability, whatever that means” that corrected for environment/upbringing/etc and applied it to a big random sample of humans, it seems like you’d expect to get two different bell curves separated by gender, but with far more overlap than difference. So then the question is, why has every computer-tech job, class or social venue I’ve ever been in 80-100% male?
So then the question is, why has every computer-tech job, class or social venue I’ve ever been in 80-100% male?
It’s not just computers. It’s that “thing”-oriented jobs are overwhelmingly male and “people”-oriented jobs are overwhelmingly female. It’s not that women are bad at “things”; it’s that they often have other interests which are more important to them. Or they have other skills, which means they’re better off leveraging those instead of “thing”-oriented skills.
For people who suggest that men and women should have similar skills and interests, I would point to the physical differences between men and women. Evolution has clearly worked on male and female bodies to make them different. But we’re asked to believe that those physical differences have no effect on the brain? And that despite the physical differences, men and women have exactly the same kinds of interests and abilities?
Obviously there is going to be an effect. We’ve measured that effect. We would probably expect something like a 55:45 split, based on biological factors. But instead we see something much closer to a 90:10 split, which is way, way more than we would expect. “Maybe it has something to do with culture and society” is a pretty reasonable hypothesis.
See “non-linear effects”. The average woman is only a bit less strong than the average man. But the world record for the male “clean and jerk” is essentially “pick up the woman doing the women’s record, and add 100 pounds”.
Or look at male high school athletes, who compete pretty evenly against female Olympians.
Are we really going to say that similar effects don’t exist for hobbies / interests / intellectual skills?
And why does the comparison always include male oriented jobs? Why not point out that 90% of nurses and dental technicians are female? Why not address the clear societal expectations / discrimination / etc. against men there?
The arguments are generally only one way. Why is that?
‘Cause the female-dominated jobs are uniformly the less prestigious and less well-paid ones ofc~ But it’s still a fun question to ask ’cause, do females get forced into less-prestigious jobs, or do the jobs that females tend to prefer become marked as less prestigious? Probably lots of both!
Makes you realize how much we’re still on the tail-end of 15th century social structures, where males got to run things entirely because they could usually swing a sword harder, hold a bigger pike, and pull a stronger crossbow. And how much of that is built in, I wonder? Testosterone definitely makes a person more eager to go out and do something stupid and dangerous for a potentially big payoff.
it seems like you’d expect to get two different bell curves separated by gender, but with far more overlap than difference. So then the question is, why is every computer-tech job, class, or social venue I’ve ever been in 80-100% male?
Because professional technical ability is not about averages, but about extreme tail ends. Professional developers/hackers are not the average Joe or Jane, but the extreme 0.1% of the population that has both the ability and the interest to dedicate their life to it. At that point, two otherwise very close bell curves are largely disjoint. Example: past some threshold r there are no more red samples, even though the curves themselves are not that far apart. In 2024 I thought this phenomenon was widely known. Almost everything in society is not about averages but about extremes, for similar reasons.
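To make the tail-end point concrete, here is a minimal sketch (toy numbers of my own choosing, not from any study) of how two heavily overlapping normal distributions come apart at the extremes:

```python
from statistics import NormalDist

# Two hypothetical bell curves: same spread, means shifted by only a
# quarter of a standard deviation, so the bulk overlaps almost entirely.
red = NormalDist(mu=0.0, sigma=1.0)
blue = NormalDist(mu=0.25, sigma=1.0)

for r in (1, 2, 3, 4):
    tail_red = 1 - red.cdf(r)    # fraction of "red" beyond threshold r
    tail_blue = 1 - blue.cdf(r)  # fraction of "blue" beyond threshold r
    print(f"r={r}: blue:red ratio in the tail = {tail_blue / tail_red:.2f}")
```

The further out the cutoff, the more lopsided the tail becomes (roughly 1.4:1 at one sigma, almost 3:1 at four), even though the curves sit nearly on top of each other; it also shows why a tail effect alone doesn’t get you anywhere near 90:10.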
Yeah, but why is there any gendered difference in the bell curve? Natural aptitude seems like a real cop-out unless it’s been established and there are no societal mechanisms that reinforce it.
I started doing computers in the mid 1980s, when it wasn’t socially acceptable. My recollection is getting laughed at and mocked by the girls my age, for being interested in such a stupid thing as “computers”. 25 years later, the same people were sneering at me, telling me that “the only reason you’re a success is because you’re a white male”.
I’d contrast my experience with Bill Gates, for example. He had in-depth experience with computers 10-15 years before me. He had connected parents who helped his business. While it wasn’t inevitable that he was a success, he had enormous amounts of opportunity and support. In contrast, I spent my summers sitting on a tractor. My coding was alone, with no help, with computers I bought with my own money.
I get told I’m similar in “privilege” to Bill Gates, simply because of shared physical attributes. I can’t help but conclude that such conclusions are based on ideology, and are untouched by reality or logic.
I started doing computers in the mid 1980s, when it wasn’t socially acceptable. My recollection is getting laughed at and mocked by the girls my age, for being interested in such a stupid thing as “computers”.
And do you remember how much worse that mocking was for girls who were interested in computers? Do you even remember any girls being willing to express an interest in computers when the consensus of their peer group was that it was a thing for no-life boys?
I get told I’m similar in “privilege” to Bill Gates, simply because of shared physical attributes. I can’t help but conclude that such conclusions are based on ideology, and are untouched by reality or logic.
This feels like a reductio ad absurdum. There is a large spectrum of privilege between Bill Gates and a starving refugee. Just because you have less privilege than Gates doesn’t mean that you don’t have more than other people.
What I find offensive, racist, and sexist is people who lump me into the same “privilege” category as Bill Gates, simply because of shared physical attributes. That’s what I said, and that’s what I meant. There’s no reason to conclude that I disbelieve in a spectrum of privilege. Especially when I explicitly point out that Bill Gates has more privilege than me.
This is one of my main issues with people making those arguments. They are based almost entirely in ideology, in reductio ad absurdum, in racism (“you can’t be racist against white people”), and in sexism (“men can’t be raped”).
Even here, I’m apparently not allowed to point out the hypocrisy of those people, based on my own experiences. Instead, my experiences are invalidated, because women have it worse.
So I’m not arguing here that different people don’t have different opportunities. I’m not disagreeing that bad things happen to women / black people / etc. I’m pointing out that most of the arguments I’ve seen about this subject are blatantly dishonest.
Now maybe social pressure like this doesn’t apply to you and so this is not relatable for you. There’s an easy test: do you wear a skirt in the summer when it’s too warm for trousers to be comfortable?
Without intending to criticize your preceding remarks about social pressure, I find this “easy” test more culturally specific than you present it. “It’s too warm for [full-length] trousers to be comfortable” is itself not relatable for me; the men and women I know for whom it is both wear short trousers in that case, and of course in some cultures both might wear skirts.
Now maybe social pressure like this doesn’t apply to you and so this is not relatable for you. There’s an easy test: do you wear a skirt in the summer when it’s too warm for trousers to be comfortable?
No one ever sent a less-competent girl to a maths or programming extracurricular event because programming is a girls’ thing, and so they didn’t consider me. No one ever told me not to bother with programming because it doesn’t lead to appropriate careers.
I frequently hear women talking about how they were encouraged to pursue STEM fields they weren’t very intrinsically interested in, because of the presence of Women in STEM programs in their educational or social environment and the social expectation that it is laudable for women to participate in things that have relatively low female participation rates on feminist grounds. I’ve even heard women talk about wanting to quit STEM programs and do something else, but feeling social pressure - that they would be a bad feminist - not to do this.
This happens to everyone though, male and female. My uncle became a dentist because his father pressured him into it. He hated it, his whole life. There, now we have two matching anecdotes.
There is of course rather a lot of actual science about this stuff. For example, the lightest, most effortless google offers “Gender stereotypes about interests start early and cause gender disparities in computer science and engineering”, Master, Meltzoff and Cheryan, PNAS 2021, https://www.pnas.org/doi/10.1073/pnas.2100030118.
You can’t take the assumption that “there exist programming tools that work well for me” and replace it with “everyone has tools that make it as easy for them to program as it is for me.” You speak English.
You may not even notice the advantage you have because everyone around you has the exact same advantage! Since a competitive advantage over your peers is not conferred on you it is quite easy to forget that there are some people who are still at a serious disadvantage – they’re just mostly shut out of the system where they can’t be seen or heard from.
The connection to feminism is a bit tangential then I think: it’s a lens through which it is possible to see the distortions the in-group effect has caused, even as the effect is completely invisible to most people who have accepted the status quo.
I was wondering whether tools like this support HTML output. While PDFs are great for typography and printing, in the end a lot of content is consumed via the internet and HTML pages. It feels like a tool like this should have at least some support for that, although the output is usually quite different (i.e. people seem to prefer shorter texts on the web).
There are projects that aim to turn LaTeX code into HTML, for example what arXiv is experimenting with. But most of those feel a bit tacked onto the program rather than baked into the core, and thus also appear less polished.
Oh cool, I’d be interested in taking a look at that, if it’s published somewhere. Although in most cases this dual publishing is so customizable that everybody wants to change some subtle behavior and somehow it ends up with a lot of different bespoke tools.
That’s more or less what I have. The front end builds an AST, then it runs Lua passes over it to do transforms. I’m not sure it’s useful to anyone else.
I started with AsciiDoctor but there were a bunch of things I didn’t like:
Lack of clean separation of presentation and content. I needed to manually copy my style for tables to every table, for example.
Too much syntax. I have to remember to escape C++, for example, because ++ is syntax for something.
Privileged syntax for presentation markup. It shouldn’t be easier for me to make something bold than to make it a keyword, because that encourages me to use presentation markup.
Poor documentation of the extension mechanism and phase ordering for processing.
On the last point, I want to include example code in the book I’m working on. This means I want a plugin that will use libclang to parse a source file, syntax-highlight it, and insert a tree that I can then transform. I then want to process that in three different ways (a toy sketch follows the list):
For HTML / ePub, I want to expand the semantic markup into classes on spans that I can then style in CSS.
For the print edition, I want to replace them with font directives that use bold and italics.
For the non-print PDF, I want to replace them with font directives with colours.
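A minimal sketch of that idea in toy Python (the node kind and output directives are hypothetical names of mine; the real tool does this with Lua passes over the tree):

```python
# One semantic node, three lowerings. "keyword" and the output
# directives below are made up for illustration.
def lower_keyword(text: str, target: str) -> str:
    if target == "html-epub":
        # expand semantic markup into a class on a span, styled in CSS
        return f'<span class="keyword">{text}</span>'
    if target == "print-pdf":
        # print edition: bold/italic font directives only
        return f"\\font[weight=700]{{{text}}}"
    if target == "screen-pdf":
        # non-print PDF: font directives with colour
        return f"\\font[color=#1f6f8b]{{{text}}}"
    raise ValueError(f"unknown target: {target}")

for target in ("html-epub", "print-pdf", "screen-pdf"):
    print(lower_keyword("constexpr", target))
```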
The AsciiDoctor model doesn’t have a clean semantic markup layer and doesn’t really document how to generate things for targets other than HTML.
I looked at djot, but it really likes shorthand for presentation markup. In my previous books, for example, I used a \keyword macro that would italicise a word and also add it to the index. I want that to be the same level of syntax as \textit, so I’m not tempted to just use the latter. I have a bunch of semantic markup like this, and so I want an input format with a consistent syntax for all markup.
Sounds like we’re looking for similar stuff. I found the Scribble format (what Racket is using for their documentation) to be quite nice for authoring, in principle, with regard to how little escaping of special characters it needs. But it is quite opinionated when it comes to its output. So I was looking a bit at the underlying at-exp library, which essentially provides just the escaping mechanism and does not do more than that.
I would love to have a clean separation between the semantics and the output format, but the more I think about this, the harder it seems to achieve. Usually the medium you’re targeting requires or allows for some special features that you can leverage. For instance, print has a fixed width, unlike HTML, so you can create plots with annotations if you have sufficient white space within the plot area. For HTML this becomes much more dynamic, and I haven’t really found a good approach to solving this. On the flip side, HTML output can easily embed an animation, which is not straightforward for a PDF (I have read that this is possible, but have not seen any usage of it in the wild). So I am also not sure if it is worth the effort to try and target multiple output formats with a single source file.
For my blog I wrote a TeX to MathML program myself: MathML.zig. For another older site I’m using AsciiMath (ASCII to TeX) and Temml (TeX to MathML) with a bunch of ad hoc post-processing: math.ts. A while ago I also made a web math demo site where you can compare MathJax, KaTeX, and browser MathML. There are some quirks and differences in MathML rendering between Chrome/Safari/Firefox but I think it’s worth dealing with them to have fully static (and much smaller) pages.
I was going to say that visually, the output on your comparison page looks worse for MathML. Do you know if there is some plan by the browser developers to improve that in the future? And to potentially unify the output, if there are some quirks between the browsers?
Good to see that the development is still ongoing, some years ago I briefly looked into it and the quality seemed worse compared to now, so hopefully things will continue to improve.
I’m not sure about future plans for browsers. But one big thing my demo page is missing right now is custom fonts for MathML. I think if you use your own webfont instead of the default, it should look much more consistent between browsers. Also if the poor quality you’re seeing is weirdly formatted parentheses (that’s what I see), you can avoid it by not using \left and \right unless it actually needs to stretch – this might be the fault of the TeX to MathML conversion rather than the MathML rendering.
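To illustrate the \left/\right point, the difference in the TeX source is just this (a toy example of mine; the exact MathML output depends on the converter):

```latex
% Forced stretchy fences: converters lower these to stretchy fence
% operators in MathML, which some browsers currently size or space oddly.
f\left(x\right) = \left(a + b\right)

% Plain fences render more consistently when nothing needs to stretch:
f(x) = (a + b)

% Keep \left/\right only for the cases that genuinely need stretching:
g(x) = \left(\frac{a}{b}\right)
```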
Cool, looks like I should give MathML another look, that sounds promising.
What I’m mostly seeing as an issue is the spacing. The default math on the page seems to misplace the gamma in the subscript, and the spacing between f and ( looks a bit off (as well as the variable within the parentheses). But those are relatively minor complaints, if I’m being honest.
I only recently came across an interview with Arthur Whitney about the language k and only then the array-based languages really started to draw me in. I now wish I knew more about them and am still looking for a nice intro to APL and the likes.
The learner-friendliest I’ve found is Uiua but it’s a little different than the classic APLs.
Wow that website is actually really neat! Thank you for pointing it out, I just lost a couple of hours digging into that.
This is a nice intro to APL imho. It is short (Mastering Dyalog APL is very good, but it is massive) and introduces a few operators at a time
https://xpqz.github.io/learnapl/intro.html
That said the best way to learn APL is to write it. Dyalog has a conquest that start with 10 short exercises. There is also APL quest, which you can solve en your own and then compare with Adam’s solution
https://m.youtube.com/playlist?list=PLYKQVqyrAEj9wDIUyLDGtDAFTKY38BUMN
I think that kbook is a pretty good intro to k. There are also other resources on the k wiki.
https://xpqz.github.io/learnapl/intro.html and https://tryapl.org/ are both lovely!
However, Intro to College Math in APL is very interesting too!
So apparently a subset of MathML was extracted, named “MathML Core”, and is now generally available for use! This is news to me! I’ve been looking at MathML every couple of years, and walked back as I wasn’t a fan of adding heavy runtime polyfils like mathjax. But it seems now you can just use the thing?
What is currently the recommended shorthand syntax for authoring math and lowering it to MathML Core subset?
Afaik https://katex.org is still the most popular
I’d love to see a https://asciimath.org backend translating directly to mathml tho
For my CMS I use Temml since Katex had a bunch of bugs with MathML only mode.
From the project description of Temml it seems that it’s basically the code for MathML export ripped out and improved upon, so that makes sense that it works a bit better.
Very nicely done, I also really like the overall design of the website.
Shaders in general still seem like a bit of a mystery to me, despite taking a (basic) computer graphics course.
This idea reminds me of my statistical genetics days. There’s an (in)famous piece of software, PLINK, which is just gobsmackingly fast. It achieves this by “compressing” genotypes into two-bits. The high school understanding of genetics as AA, Aa, aA, aa, can roughly be made to work even with modern sequencing datasets. Usually you can’t tell Aa from aA so you really just have AA, aA, aa (hom-ref, het, hom-alt). You can clearly stuff that into two bits: 00, 01, and 10. We can repurpose 11 for “no data” or N/A.
Now that you’ve got 32 genotypes per u64, you can start implementing all sorts of operations very quickly. Mean genotype? You can compute that with a couple masks and pop counts, followed by one division. Count of homozygous alternates? Masks and pop counts. Correlation between two genomic positions? I bet PLINK even has a linear least squares implementation written in terms of bitfiddles.
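As an illustration of the kind of bit-fiddling meant here, a minimal Python sketch (my toy code, not PLINK’s actual implementation) of the mean-genotype trick on one 64-bit word, using the encoding above:

```python
def mean_genotype(word: int) -> float:
    """Mean of the 32 two-bit genotypes packed into one u64.

    Per two-bit field: 00 = hom-ref (0), 01 = het (1),
    10 = hom-alt (2), 11 = missing.
    """
    low = word & 0x5555555555555555          # bit 0 of every genotype
    high = (word >> 1) & 0x5555555555555555  # bit 1 of every genotype
    missing = low & high                     # the 11 pattern marks N/A
    het = low & ~missing                     # 01: contributes 1 each
    hom_alt = high & ~missing                # 10: contributes 2 each
    n = 32 - missing.bit_count()             # genotypes that have data
    total = het.bit_count() + 2 * hom_alt.bit_count()  # Python 3.10+
    return total / n if n else float("nan")

# hom-alt, missing, het, hom-ref in the low byte; the remaining
# 28 fields are 00 (hom-ref), so the mean is (2 + 1 + 0) / 31.
print(mean_genotype(0b10_11_01_00))  # 0.0967...
```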
The trouble is PLINK is a special purpose binary tool. What I actually want is a library of tools for working with compressed arrays so I can use these ideas to build new tools. I hope Vortex becomes such a toolkit.
There is the BitArray in Julia, which is supposed to work directly with the underlying bits as bools, but abstracting that away so you can write code as if it was a normal array. I can’t vouch for how well optimized the code is in practice though.
As a warning, in C++ there’s a specialization for std::vector<bool> that does something similar but breaks many of the assumptions that you expect to be true from a generic std::vector<T>. The major one: It isn’t necessarily stored in a contiguous array, whereas the generic version is. This means that you can’t really take the address of a start and end element to a function that expects them to be contiguous, among other things. It was more than a decade ago that I ran into these kinds of issues (before I’d learned to very carefully read cppreference.com all the time) but it was definitely surprising to learn about, even just to learn that there was a specialization for it.
The addressing problem is not from a non-contiguous array, it’s from being a bitset and the obvious impossibility of addressing individual bits. As a result indexing into a vector<bool> returns a proxy object which breaks all kinds of assumptions about the interaction between vector and its T.
Having a separate dedicated bitset causes none of the issues, because people don’t expect it to behave exactly like an array, and more importantly can not use it in generic contexts.
time for fractional addresses :P
Yeah, I wish they’d made it a separate type in C++ to make it clear that you can’t expect it to behave the same as the generic version. It’s a tricky design issue because lots of stuff does work the same, like iterating over it or accessing elements by index but then other parts… don’t.
Those concerns don’t apply to the Julia implementation. It uses a contiguous array and folks don’t use vector memory addresses directly anyway.
For genetic data, the work for other kinds of packed arrays is already done by the BioJulia team here.
I think their code is flexible enough that you could define a new alphabet for a different domain and still use the optimised implementations for LongSequence.
My only experience with Tcl was with setting up Vivado projects for some hardware programming. This was quite a jarring experience, since the Eclipse-like Vivado IDE would corrupt itself and get into some weird unmanageable state. It was nice that Tcl could be used to properly bootstrap the IDE, so it offered some respite. But I still somehow associate the unpleasant experience of that IDE with Tcl, albeit unfairly. (All of this was in 2016, so I don’t know if Vivado has improved its VHDL programming experience since then.)
Oh wow, that sounds very cool! I have recently fallen back into my parenthesis addiction and looked into Scheme languages again. I was actually wondering if/when there would be a Scheme written in Rust (even though there probably already exist several; I never bothered to actually look).
Is there a reason to prefer R6RS over R7RS? I know that not everyone is happy with the newer standard, but I was wondering if you have an opinion on that matter.
Thanks! There are a few other Scheme implementations written in Rust I think, but they are less complete and not actively worked on I believe (and of course they don’t support interoperability with async rust)
The reason R6RS was chosen is that it’s a much more complete spec; as of right now, R7RS small is the only version of R7RS released, and that spec does not include some desirable things like syntax-case. When R7RS large is fully released, the intention is to support it.
There are at least two (three now) being actively developed that I know about.
Unrelated to Rust scheme implementations, but when you stated that the primary reason was async, I was reminded of lispx, which I think is a pretty cool project. It was also created to solve the ergonomics of async in its host language. Maybe there’s something there that could be of inspiration. I enjoy a scheme as much as the next parenthesis-loving person, but I’d be happy to see more kernel implementations.
Ah I see! I think I was thinking of https://github.com/volution/vonuvoli-scheme which last saw an update two years ago.
By way of brief comparison, besides the async compatibility, Scheme-rs has a much stronger macro system than both of these impls at the current moment, which are attempting to be compatible with R7RS small or R4RS. For example, steel cannot define a macro of the following form:
To be fair to these implementations, they have a lot of features that scheme-rs lacks. But all of the ones present in R6RS I intend to add sooner rather than later.
Disclaimer: steel is my project
This is very exciting to see - native compilation is something I’ve been wanting to work on for steel for a long time, I’m happy to see someone else take a stab at it for their scheme. I’ve been toying around with cranelift for years but haven’t had the dedicated time to push through the work on it
To address two of the points on syntax-case and async:
As of last night, steel can define this:
Just have to work on the datum->syntax and then there is parity. The syntax-case implementation was slightly broken in the mainline for a while (and also, undocumented).
For async, steel does have support for Rust async functions registered against the runtime, and I have a fun way of integrating with an external executor instance via continuations, but it does not have an async public API (yet) like you do. Best of luck with this and I’m excited to see where it goes.
Oh that’s sweet! Very cool to see syntax-case in steel!
I hope I wasn’t misrepresenting your project! Let me be clear here: Steel is faster and more featureful than Scheme-rs by a long shot. I brought up syntax-case because that’s where the initial bulk of my effort went (the remainder went to reimplementing the project as a compiled one).
All good - the devil is in the (lack of) documentation :)
Syntax-case was a thorn in my side for years, I only recently put in the effort to get it functioning, I found it a bit mind bendy to get right
It absolutely is the hardest thing to properly implement in my opinion (and I’m technically not even fully compliant, I don’t add any continuations to the expander (soooo much work)).
I’m thankful I spent so much time considering it while the eval logic was interpreted; it was a weight off my shoulders when moving to compiling. That being said, compiling it also led to a new set of challenges: I had to somehow figure out a way to capture the environment, something that I was just throwing out after compilation.
As an aside, have you noticed how there are so many people who work with Scheme named “Matt”? It’s starting to freak me out a bit
Thanks for the script to disappear the Google sign-in popup! Never got around to fixing it myself.
I love Stylus! It’s much more than just disappearing elements. You can eliminate daily inconveniences by shaping the webpages you use regularly to your liking and making them more accessible.
In that context, I’d like to add another fix - headings on tanstack.com.
My mouse/trackpad is configured to auto-click on rest/hover. Over time, I’ve developed a habit of resting the cursor on any white space of a webpage. Now, on the tanstack docs, each heading is a clickable anchor, but the clickable area also spans the whole width of the content area. When I subconsciously rest my cursor on the white space beside a heading, the heading and contents are suddenly yanked to the top.
Here is the Stylus fix:
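Something along these lines; the selectors below are a guess at the site’s heading markup rather than the exact rule:

```css
/* Shrink each heading's anchor to the text itself instead of letting
   the clickable area span the full content width. Selectors are
   approximate and may need adjusting to the site's actual markup. */
main h1 > a,
main h2 > a,
main h3 > a {
  display: inline-block;
  width: auto;
}
```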
Now, the heading still remains an anchor, but the white space beside the heading text stays out of the clickable area.
uBlock can remove the Google login using the “social” lists. They also remove the social media share buttons that track you.
Generally, any Stylus display: none can be done in uBlock, and there’s likely a list for it. With uBlock you might even be able to block the script that inserts the element, to stop the root cause.
I could never make it work. I have uBlock Origin installed, but it has never blocked the Google popup with default settings.
I remember trying but failing to configure it.
The default settings are pretty conservative. You need to enable extra “filter lists.”
I thought it was blocked by something in the “Social” category, but it seems it’s actually the “Annoyances > Adguard > Popup Overlays” list that does it for me.
Thank you. I’ll give it a shot.
Update: Tanner Linsley, the founder of the tanstack ecosystem, saw this post on Bluesky, and prompted me to submit a PR to fix this problem. The PR is now merged.
What’s the reason for the auto-click behavior? I’ve never heard of this before and would be curious about the intention behind it.
It’s an accessibility feature. The three desktop OSes I’ve used all provide a Hover to Click (Dwell to click on macOS) feature in some capacity.
I resorted to using it during a phase of undiagnosed, severe vitamin B12/D3 deficiency, during which my fingers properly hurt whenever I clicked a mouse button.
Fast forward to today: it’s a must-have for me now. It’s a feature which feels indispensable once you get used to it.
Wow, that’s something I would not have expected! I am glad that you were able to work around it. Cool that this accessibility feature seems to be fairly widely available.
If you don’t mind me asking, wouldn’t it be more painful to type on a keyboard? How did you manage to do that back then?
In any case, thanks for sharing and I hope you’re doing better now!
Yep. I had to resort to Dragon NaturallySpeaking. Thankfully, I was on a break from my programming career during those days.
It was all a self-inflicted idiocy though. I was young, lived in a flat devoid of all sunlight, and self-diagnosed the issue as RSI (eerily similar symptoms). Only after a month of that buffoonery, I got it diagnosed by a physician. After just a simple physical check, she ordered deficiency tests, while politely nodding at my self-diagnosis. The test results showed an extremely severe deficiency (at which stage RSI-like symptoms show up), and levels had to be replenished with a series of injections.
I am alright now.
Anyway, this whole thing taught me the value of accessibility. We are all temporarily fully enabled.
Cool that looks nice! I like the sorting order, it sounds very simple and reasonable.
The demos do look interesting, but the space of libraries that make Python performant feels very saturated to me, with several deep learning libraries and e.g. Mojo tackling this. Then there are also, of course, the previous approaches with Numba and Cython. In the end, all of those add some friction to working with it.
What’s interesting to me is that Python seems to be becoming more and more a way of expressing some computation, but is used less and less for actually carrying out the computation. I wonder if it’s the right language for that, lacking a macro system and all of that. But in such a big system there is of course also a lot of inertia. In some sense, I feel like it’s similar to C: the network effects are just very strong and make moving to another language maybe not worth it.
(That being said, I really liked the improvements over the last few versions, say from 3.11 to 3.13.)
Downloads? To-do lists? Not sold. I prefer to throw things away deliberately, rather than when I just happen to reboot, which is normally quite an infrequent event.
Agree. It’s also not just on reboot; some systems have tools that clean /tmp and other directories (e.g. tmpfiles.d; macOS has similar).
How often do people here reboot? Since moving to the 6.8 Linux kernel, I’ve started encountering regular memory leaks and began to restart every few days. (Battery life is great, so I stay.)
I’m personally a big fan of daily. Switch the work computer on in the morning and off in the evening.
I have also been doing that basically all my life. I have no control over my browser tabs so one of the best things to keep that in check is to close everything at the end of the day (and I deliberately do not want them restored).
Turning off tab restore is a controversial but smart plan. :)
I can only recommend it. Over the years I feel like browser history navigation has degraded due to keyboard hijacking, and more and more links just open in a new tab (or window) anyway, which makes going back unreliable. So I just end up with loads of tabs, and I think it’s easier to close them all at the end of the day.
Part of the problem is that it’s not predictable. It may be months away, it may be tomorrow. I reboot when either of the following happens:
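An update (say, a new kernel) needs a reboot to take effect.
The machine crashes or locks up.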
Of these, only the first is something I can plan for in advance. If I create something in /tmp then it may last for months, or it may go away almost instantly if there’s a crash. For downloads, going away almost instantly is probably fine, but a lot of systems make /tmp a RAM disk (tmpfs literally has it in the name!) and accumulating a few GiBs of rarely-accessed things in a RAM disk is not ideal.
I feel like this is getting more and more popular; I appreciate the blog post. It feels a bit unfortunate because, the way it is implemented, it requires every library to build up its own classes that essentially perform the computation lazily. This is actually something that should be doable more elegantly with macros. I wonder if the approach that Julia takes will eventually catch up in popularity, because it does allow for better expressiveness in this regard.
I just finished a research project that involved running quite a few methods and comparing them to each other. For caching my experiments I used redo (a build system, but it can also be abused for this purpose). The way I structured my project, it meant that the naming of the experiments had to follow a somewhat rigid structure, which worked for my intended purposes, but I would really like to make the entire approach more flexible.
I came across mandala, which seems quite interesting, and I am now looking a bit more into its code. One problem is that the caching seems to be done via SQLite, so that is no good when accessing the same store via an NFS. So I guess something that makes this approach scale across compute nodes would be great. I also haven’t yet kicked the tires properly, so I am not sure how well this all scales.
Another thing I would like to have is something like Scribble, but tuned for blog posts, including cross-referencing and including content from other modules. This is actually something I tried to approach with an LLM (Claude), but it failed miserably. I think in the end it would also be nice to have some literate programming component there as well, which at some point would need some caching too (which would tie it up with the caching approach outlined above).
I’d like to hear from the Lobsters developer community on this topic - did any of the tools you have built provide some surprising benefits long after you’ve started using them?
The thing I’ve most recently written is a compiler for books. It parses a TeX-like markup, builds a tree, and then lets me run Lua passes over it to transform the tree and then output it. My first four books were all written in LaTeX but for the second one the publisher wanted ePubs and outsourced this to a company that completely screwed up the formatting. For the third one I wrote a prior iteration of the tool so that I could write semantic markup and have some LaTeX macros that would expand it, or parse it with another tool and generate HTML.
For the book that I’m just finishing, I wanted to have a single flow for:
I started writing this one in AsciiDoctor and eventually wrote my own tool. Extending AsciiDoctor is supposed to be possible, but the docs are totally lacking. I wanted to write my code examples as separate files that I could compile and test and then have libclang mark them up and add them to the text, so my document source code is a thing saying ‘pull in the things between these markers in this file and give it this caption’.
One of the things that’s really helped is that there’s no privileged syntax for syntactic markup. Defining a new thing that’s lowered to a span (HTML) or some format (SILE for PDF) is trivial, so I don’t fall back to using backticks; I mark things up with the right language. And then, later, because that markup is there, I can go and use TreeSitter to do syntactic labelling for inline snippets.
I remember that you were talking about this topic in another post a while ago. I wanted to follow up on this and ask if you’re planning on making the code available. I would definitely be interested in taking a look (and not just because I want to procrastinate on writing my thesis).
One thing that I am not sure about is how to do cross-referencing properly. I really love how Scribble resolves the functions used in code and is able to link directly to their documentation in code snippets. I would love if something like this would be available in a language-agnostic way.
The code is here: https://github.com/davidchisnall/igk
I haven’t pointed people at it because it needs a lot more documentation before anyone who isn’t me can do useful things with it.
Cool thank you anyways! Hope you get around to tidying up the repo at some point.
If you do so and want some feedback, I’d be happy to take a look and write down my thoughts.
Actually yes, but it’s on purpose :)
Management asked us to keep track of the time we spend on each task “type” everyday (r&d, exploitation, support, …). So I wrote a program to keep track of my time, that I run whenever I switch tasks (it’s bothersome, but wait for it…).
Thanks to it, my time is tracked very accurately, and it could highlight that I spend way too much time doing support, leading to actions being taken by management. On the other hand, my coworkers do it the “old” way, by guessing approximately how much they did for each task during the week. Which is inaccurate and not representative at all, as you might guess.
I have written small Firefox extensions that scratch an itch or two, or a bookmarklet if it’s really small: a simple tab switcher and manager, one to switch focus in tabs where I press Caps Lock + A or D, one to make the docs on a site prettier, and one to open up a folder of bookmarks, say 5 or 10 at a time. All of them I use daily.
Hey, long-time-after is kind of a hard requirement to track (because we don’t track the root cause of an improvement), and one you did not apply to your own examples!
Good point :)
The impression/fear I get is that the more code is generated by an LLM, the harder it gets (for a human) to sift through it all. I wonder if at some point it will become effectively impossible to code without chat assistance. Line count of a project will lose even more meaning (if it ever has any), as “code is cheap”. I do wonder if the problem solving skill will also be neglected, because you can just get an LLM to fix your code.
Sometimes I get the impression that the situation has parallels to the early days of computing, when people were sceptical whether a high-level language such as C would be able to keep up with hand-written assembly code. Of course it cannot, but it frees up the code author to think about different problems, and that usually seems to have a better return on investment. So maybe this will also happen with LLMs: they will simply free up time for developers to devote to other topics.
That being said, I’m still not really using LLMs regularly. My last attempt a few weeks ago was at trying to generate some matplotlib code, and it failed miserably, generating code that accessed properties that did not exist. Maybe it will improve for me as well next time and I’ll have a similar moment as the author of this article.
Aren’t compilers generally far better than hand written assembly today, with esoteric optimizations no one could remember and coordinate? (And the loss coming from the unrelated resource use in modern programs, like graphics, fonts or random tangential operations.)
Not exactly. Compilers still regularly produce suboptimal code, and a human can still make a (sometimes very noticeable) improvement. It’s just that it requires a more specially-trained human than it used to, and it might take the human hours or days to come up with a winner, while the compiler does “good enough” in a millisecond. So cost/benefit is what it’s about.
I recently optimized a voxel raycaster targeting the RP2040, and I had to dive into asm. Not to write better code directly, but to determine why my code was slow: register pressure, suboptimal math, and so on.
Sometimes there were tradeoffs. Slower hash function (phi multiply, shift right) vs. 3 xor shifts. Faster hash vs. quality of spreading. 1-2 fps difference.
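For flavour, a rough sketch (my toy Python, not the commenter’s actual code) of the two hash styles being traded off, using the usual textbook constants:

```python
MASK32 = 0xFFFFFFFF

def hash_phi(x: int) -> int:
    # "phi multiply, shift right": Fibonacci hashing with 2**32/phi,
    # a multiply per call but good spreading of input bits
    return ((x * 2654435769) & MASK32) >> 16

def hash_xorshift(x: int) -> int:
    # "3 xor shifts" (xorshift32): very cheap on a small core,
    # but weaker spreading for clustered inputs
    x ^= (x << 13) & MASK32
    x ^= x >> 17
    x ^= (x << 5) & MASK32
    return x
```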
Sometimes storing multiple intermediate results and keeping the registers was faster than fusing the loop. Sometimes fusing the loop proved faster. 4 fps.
Packing grid coordinates for DDA into a single register helped alleviate register pressure significantly. Aligning packed data on byte boundaries helped a lot. Keeping the most accessed data either in LSB or MSB (thus single instruction to unpack) also helped a lot. About 4 fps.
The compiler can only do so much. And honestly, Sonnet helped with the geometry, but not really with the code.
Yeah, a related issue is that it depends on how much experience you have
LLMs help experienced programmers more than less experienced ones
I guess this will make person-to-person mentoring a lot more valuable? Or at least training and education. I have noticed that there is a real demand for that
Maybe? Except they maybe help zero-experience non-programmers the most
I have not seen how LLMs help non-programmers … If I had to guess, I would imagine they can help them pass exams, but maybe not build large applications.
I guess it depends what you mean by “help”.
These articles have experiences / caveats that are consistent with my experience – they are noting the effect that LLMs are more helpful when you know a domain, and not as helpful if you don’t know it:
https://news.ycombinator.com/item?id=42336553
https://news.ycombinator.com/item?id=42829466
I meet people (or sometimes read articles by them) every month who have built something useful to them using an LLM that they simply could not have built without it, because without it they can’t code at all and never have.
My gf is now learning to code; sometimes while I’m busy she asks the LLM about topics or things she doesn’t understand. She uses it more as a tutor than as a code-generation tool in general, and she reads everything it says. Overall she has progressed quite quickly, since she has a decently quick feedback loop on what she is doing wrong, or to expand on something if she is interested or didn’t understand.
I’m an experienced programmer, but everyone needs to work with languages and frameworks that they’re not familiar with from time to time. I tend to use an LLM as a big book of examples, showing me techniques I didn’t know I could do in whatever new thing I’m learning. It creates many lovely a-ha moments that I can then build upon.
It’s invaluable (to me) for accelerating the rate of learning new and unfamiliar things, (backed by 4 decades of programming experience of course)
Reflecting a bit, maybe it’s more of a “humped distribution”
I’ve found the opposite and it seems to be reflected in studies. Using LLMs seems to bring everyone towards the mean.
If you’re an experienced programmer, I don’t see how LLMs can make you worse :) Presumably you will notice yourself getting worse pretty quickly, and stop using them. I mentioned in my experiences below how I tend to use them.
The OP’s article is very credible to me, and from an experienced programmer
Though I also acknowledge the hazard of people being under pressure at their jobs, and trying to churn out a bunch of code with an LLM … From the articles I linked, it seems like there is a pretty clear understanding of these problems now. Likewise, LLMs are worse at modifying existing code than generating new code
That’s different though. It would only be comparable if 1) all the prompts to create the “code” were persistent and the code could not be manually changed, and 2) the LLM(s) used to create the code were deterministic and not random.
If you use LLMs “manually” to generate code, then this is a totally different thing.
A few years ago, we hit a performance regression in FreeBSD as a result of switching from gcc to clang. GCC and LLVM’s inliners started at opposite ends of the call graph and so made very different decisions. Something on a hot path in the kernel was no longer being inlined and that resulted in a measurable performance drop.
We were able to fix it by slapping a couple of always- and never-inline attributes on functions, but this kind of thing still shows up in other places. Your code performance relies, for example, on specific behaviour in an autovectoriser or loop rerolling that later enables vectorisation (can be a factor of 4 speedup).
Yes, the code still exists, but trying to reason about performance of software from the level of C code is increasingly hard.
That’s different though. You switched from gcc to clang. The equivalent would be switching from one LLM to another.
But for LLMs, randomness and non-persistent prompts come on top. The equivalent would be gcc/clang generating assembly with a certain randomness, with the C code thrown away afterwards and only the assembly persisted.
This looks really interesting, but also a bit confusing. I would love it if something like this took off; structural editing is something that sounds too good to be true.
I intend to kick the tires on this when I have some time off over the holidays, so maybe I can write up an experience report like https://lobste.rs/s/i1xzas/helix_why_how_i_use_it
Among other things, I’m curious about how well it “degrades” when editing broken structure and free-form text.
Hi, the author of Ki here.
I think that “editing broken structure” shouldn’t be too much of a concern, because once you are committed to structural editing, you will rarely escape insert mode when the tree is malformed; you’ll always want to ensure that the tree is well-formed before entering normal mode again.
So, this is more of a disciplinary issue than a feature issue.
To enjoy structural editing, we have to do the parser a favor by always ensuring the validity of the tree before attempting structural operations. It’s similar to how you can’t enjoy the full cycling experience if you have to rely on training wheels: you have to do the bicycle a favor by keeping it balanced all the time while it gives you a ride. True cycling enthusiasts don’t focus on training wheels.
Cool, would love to read that, in case you want a proofreader.
This is probably a good question for @hou32hou
What do you mean by “too good to be true”?
I don’t use AI precisely because I am curious.
I build things and program because I genuinely enjoy taking things apart and understanding, truly understanding them.
AI may be able to solve the task faster, and maybe it can explain the solution, but it won’t help me intuitively understand. Using AI is like copying the teacher’s answer sheets for your homework.
I’ve tried AI tools, and when a new version comes out I try that too, but so far I’ve had no reason to actually use them.
Often enough the only Google search results for what I’m working on are my own posts. In that case ChatGPT isn’t any more helpful either.
I wish I could upvote more than once.
Not only are good programmers curious, but they are also responsible. Say we were lawyers and the entrenched culture was to use AI and that this was not considered malpractice. The same question the author asks, flipped on its head as you have done, would be, “Don’t you want to see for yourself what the actual case law is, you know, like, from the court records—the source?” When some junior dev comes to me with questions about a Git commit with my name on it, I need a better answer than to shrug and say, “Copilot wrote it. LGTM.”
I admit, sometimes I’m fresh out of ideas. I would actually occasionally rather have a dialog with a BS generator than to generate bad ideas on my own. In the words of Don Draper, I need someone to fail faster. But it doesn’t happen often enough that I want that BS generator looking over my shoulder constantly volunteering solutions.
Another aspect of this curiosity argument is internships. The kinds of tasks that two years ago one would assign to interns—people who were paid very little to be very curious—are now often assigned to LLMs. I don’t have any data, but my impression is that the appetite for hiring software engineering interns has rapidly diminished since ChatGPT. I never enjoyed correcting interns, but talking with them about what they learned and watching their careers progress was immensely gratifying.
I’m exactly the same way… and that’s one of the reasons I’m so enthusiastic about LLM-assisted programming. I can do SO MUCH more exploratory programming with Claude by my side.
I’ve been wanting to figure out WebSockets and SSEs and <iframe sandbox> for years. Thanks to Claude I’m finally getting stuck into all three of those, learning at a furious pace where previously the friction involved stopped me from getting started with them.
If you’re insatiably curious, LLMs are a gift that keeps giving.
Funny you mention SSEs — I also learned about those recently, and find them cool!
For my approach, I read some blog posts, looked at some libraries, and then wrote an SSE-based application without using any LLM-based tools. Along the way, I found interesting articles about how Wikimedia uses SSEs, as well as a library to expose Kafka via SSE. Reading through such libraries’ GitHub issues and code was also interesting and useful. Coding was a small part of my journey.
To each their own (I won’t knock anyone who benefits from LLM-based tools!), but I struggle to see where I’d fit it into my workflow. For doing the research? For writing the code? I like both of those things… What do you do?
I actually think I’m more likely to ask ChatGPT about something I’m not curious about and just need a solution for (like copying files out of a Docker container).
yes! additionally, llms allow me to postpone learning about details i don’t care about in that moment, while providing an okay-ish implementation of them. i feel much more in charge when exploring, frictionless is truly the right word.
Agreed. LLMs, in some sense, are finally the [imperfect but] infinitely patient teacher I’ve always wanted to handle my outsized curiosity.
This is exactly my experience as well.
I use them when I don’t know what something is called, because sometimes they can tell me the right keywords. They look like they work when you ask them to do things that have been done a million times before, but if you ask them to do something new, they mostly give you garbage. When I point out their errors, they give me even worse garbage. Better to spend the time learning the problem or the tool than programming the AI assistant.
I noticed something similar as well. It’s incredibly hard to get useful results out of a fringe language or library. I guess as a rule of thumb, if you don’t find promising results when searching for your problem online, then an LLM might not be so useful either.
Then Rust is probably already fringe. Other than for the most basic borrowing problems, it’s hard to get anything really helpful out of the LLM.
A few days ago I used Claude to help me solve the monthly jane street puzzle: https://www.janestreet.com/puzzles/current-puzzle/
First, I had it build me a little interactive visualization of the problem. I could drag points around and see the relevant regions of space. This allowed me to have the breakthrough that reframing the problem a certain way made it much easier to solve.
I needed regular Google, and pen and paper, to find the final equation, but once it came to actually integrating this equation, Claude helped me write the majority of the code and saved me from at least an hour of reading docs.
Sure, using AI here is like “copying the teacher’s answer sheets for your homework” but the thing I’m trying to do here is not “learn frontend” or “memorize the interface to a numeric integration library”, I am completely fine with not truly understanding those components. Claude allowed me to mostly ignore those components in service of actually truly understanding this fun math problem.
I just can’t connect the dots between “masculine” and valuing formalism, formal methods, quantitative over qualitative work, and the examining of technical aspects. Can someone ELI5?
It reminds me a bit of the “punctuality, standardized testing, etc are white supremacy” stuff from the previous decade. Honestly this stuff always seems a lot more like harmful stereotyping and I would appreciate a clear explanation for why it’s not as toxic as it seems.
Except the stereotype is true: women are raised to care more about people than men are, at least in the Western countries I’m aware of. This whole thing about formalism being more masculine than studies does match my intuition about why we find so many more men in “hard” sciences than we do in the humanities. (Note: I hear that Asia has different stereotypes, and many more women in hard sciences. In Japanese animation, for instance, a typical support character is a female scientist or technician — yeah, the main/support gender bias is still there, I think.)
Now the famous “Programs must be written for people to read, and only incidentally for machines to execute.” was indeed coined by men (Harold Abelson and Gerald Jay Sussman), and on that same page I learned that another man, Martin Fowler, said “Any fool can write code that a computer can understand. Good programmers write code that humans can understand.” Clearly human stuff is important to men too. And there are some crazy impressive low-level women programmers out there too.
And yet, I can’t shake the feeling that indeed, we dudes tend to ignore the human element more than women do. One striking example would be the Linux kernel, and the recent Rust debacle. It did reek of not trying particularly hard to take care of humans. Or take my own readability article: the whole point is to cater to humans (increase readability), but then I reduce humans to a single limitation (working memory), and evaluate various advice from that alone. Which, when I think about it, is only possible because I have such a mechanical outlook of the human mind — and I don’t even have any expertise there!
Now, that paragraph in the paper could probably have benefited from a couple of references showing the prevalence, or at least existence, of the stereotypes it uses in its exposition. Without them, it feels like the author has internalised those stereotypes more than she would have liked to admit.
While an inaccurate stereotype may be more harmful than an accurate one, an accurate stereotype is still pretty bad. Even if it happened to be true that men are better drivers (whether by nature or nurture), it seems pretty harmful to refer to good driving as a masculine trait. Also, a thing can be true for a group of people and not be true for all individuals in that group, so applying stereotypes (even accurate stereotypes) to individuals is usually unjust. That’s probably not what the paper is advocating, but it’s an inevitable consequence of trading in stereotypes, and moreover I don’t see how stereotyping improves the paper.
For a specific idea of a harmful true stereotype see also “black people don’t swim”. There’s lots of stuff in society which comes from the environment and not conscious choice. It doesn’t mean we should assume that’s just how things are and expect them to continue. Sometimes things are more universal if we allow it / enable everyone to do better in whatever way they choose to approach things.
There was a fascinating experiment on stereotype threat where they got a bunch of Asian and Black Americans to try golf. They told half of them that golf was very mathematical and the other half that it was very athletic. There was a statistically significant difference: the black folks who were told it was athletic did better and the Asians who were told it was mathematical did better. Both groups did better at the thing that they’d been stereotyped at being better at even when it was the same task. So reason one that even accurate stereotypes are bad is that they impact the people being stereotyped.
Reason two is that they are still about probabilities. If a stereotype is 90% accurate, it will not apply to all of the members of even a group that you can fit in a fairly small room. If 90% of green people are bad at programming, you’ll see fewer green people in software development, but all of the ones you see will be statistical outliers. The stereotype tells you nothing about that subset, but people interpret it as if it does. And even the most accurate stereotypes apply to far fewer than 90%. So that’s reason two that they’re bad.
Reason three is that a lot of accurate stereotyped behaviour is due to how people interact with members of a group at an early age. People subconsciously reward behaviours in children that conform to expectations. Boys and girls are praised for different things by every adult around them. This makes stereotypes self perpetuating.
Found the 1999 study at https://citeseerx.ist.psu.edu/document?repid=rep1&type=pdf&doi=1061f5d1d9d35a13e8deaf2f32f6f386b19c489f
Couldn’t you just as easily argue that formal methods (conscientious, detail-oriented, theoretical) are feminine, whereas user-studies (pragmatic, empirical, utilitarian) are masculine?
In fact the paper does cite that argument:
But the C advocates were arguing against formalism and formal methods, qualities that the quote describes as “masculine”. In fact, I’ve seen people argue that the “C is fine, just write good code” mindset is an example of masculine arrogance (or what have you). So we’re in a situation where emphasis on formal methods is masculine and disdain for them is also masculine.
I’d say the stereotype was true: we’ve observed much change in society over the last ~15 years, particularly in gender roles. I don’t think the stereotype is true anymore. If it is, it’s decreasingly so year after year, and will soon be definitively false.
The ground of the feminist critique is what we could call a critical history. Traditionally, we think of philosophy or science as arising from a neutral form of “reason” – detached from social or historical context. A critical history challenges this claim, and views thinking as necessarily embedded in a place and time. Thought is not considered to be a universal means of finding objective truth, but rather thought arising out of a specific place and time, under regimes of thinking that ground our understanding of what is true or false. This is not subjectivism (“everyone’s perspective is valid”) but rather a sort of uncovering of the origins of ways of thinking which we take for granted.
Given the historical context of Western civilization, and more specifically the 20th-century development of computers, feminists can critique the development of computing through the lens of a specifically feminist history – uncovering the manner in which thinking about computers came out of a specifically patriarchal context and mode of thinking. The point is not that early computing pioneers were men, but rather that they were embedded in a society which prioritized ‘masculinist’ forms of thinking and potentially reproduced those structures: structures that exist independent of any individual’s identity and have a sort of independent existence.
It’s important to understand that it’s not a universalizing critique – i.e. we can’t understand everything about computing through feminist critique – but it is one lens through which we can look at the history of computing.
Put that way it makes more sense, at least to me, in that “masculine” refers to the patriarchy-way-of-thinking. I would have used another term, but that’s nitpicking.
The original text in the paper sounds too much like a stereotype applied to men as a gender – maybe because I’m mixing my own perspective with the past and everything in between: “men molded the field like men do, because they all behave the same way”.
Based on your description, it seems like critical theory and feminist critique are principally about ideological framing? As someone who is interested in programming language design, I probably don’t have a lot of interest in whether we should believe that computing originated out of something called “patriarchy” or out of some neutral reasoning; I just want tools that help me accomplish my goals more easily. These ideological frames feel unhelpful at best and divisive or even toxic at worst (I made another comment about how claims like “formalism is masculine” and “objectivity is white supremacy culture” seem like really harmful stereotypes, and yet they seem to closely orbit critical theory).
This is not a statement free of ideology. First off, you are conveying a form of instrumentalization: i.e., you view programming languages as tools to be utilized. I’m not saying this is incorrect or bad (obviously, there are many cases in which we do use PLs this way), but it is not the only way of conceiving of programming language design. We could think of programming languages as toys to play with, forms of expression, or places to try out new ideas. Consider the whole field of esoteric programming languages – almost all of them totally fail as “tools to accomplish goals more easily”.
The point is, there is no “neutral reasoning”. Everyone is always thinking within a culture that has a certain ideological frame. This is not a moralistic point (although sometimes this sort of analysis does end up distorted into popular culture that way); it just means that you can’t separate computing from its historical, cultural, and ideological history if you want to understand it fully. I mean, consider the basic point (referenced in the paper) that most PLs are monolingual in English – this isn’t because English is the “objectively best” language for programming, it’s necessarily tied to a long history of British and American cultural imperialism. And again, I want to emphasize, the point isn’t that it is “morally wrong” that programming languages are in English. It’s the academic point that this development is inseparable from the history of colonialism. And this critical history can inform experiments, development, and research into non-English-based programming languages.
Your issue appears to be with specific applications of critical theory, yet you expand this to include the entire field itself. One could certainly make a bad feminist critique of PLs, but that doesn’t mean that feminist critique is a useless enterprise, which is what you seem to be claiming.
Nit: the “goal” in those cases is to have something to play with, something to express oneself with, or some place to try new ideas. You aren’t really disagreeing.
I’ve seen a good chunk of listings for game code written in German that would disagree with this premise. I’d imagine that the Soviet code listings in Cyrillic probably also disagree–Drakon, for example.
Re: imperialism, it’s well-known that CS is rife with accomplished Englishmen like Alain Colmerauer and Americans like Edsger Dijkstra. Similarly, we all run on silicon manufactured in the United States by red-blooded American companies like TSMC and using phones made by god-fearing businesses in Western countries like Huawei.
The thing is, that lens means a particular thing in academia, and outside of academia–where we are here–it’s easy to “disprove” using common sense and rudimentary historical knowledge by the casual practitioner. That’s not to say it’s a bad or even incorrect lens, but merely that it requires more nuance in application and context than is usually found here–people spend their entire doctorates haggling over narrow aspects of imperialism and feminism, after all.
EDIT: After talking with a colleague, I’ll concede that if by “forms of expression” you mean a programming language itself as a sort of work of art then my lumping it in as a tool is a bit unfair. Paint vs paintbrush.
I’m not sure if I understand your point here or if we really disagree. I said most programming languages are English monolingual, and that this fact is tied to the history of British and American cultural imperialism. It is not that we can understand technology solely through that lens, just that a purely neutral, technical, ahistorical viewpoint is not the correct one. The example of Huawei is particularly interesting here – it’s been sanctioned by the US since 2020, and for geopolitical, not technical reasons.
But isn’t it the case that certain fields are dominated by a particular language? Musical terms are Italian; cooking terms and techniques are French; large portions of philosophy are French and German, not to mention the endless Latin found in medicine and law. Could it not be that computers are largely English because English-speaking countries dominated the space early on?
Another thing I’ve heard (or read; sorry, no citations available) is that non-English-speaking programmers tend to prefer programming languages not in their native language, for whatever reason. Maybe a simple “translation of keywords into the target language” just doesn’t work linguistically? Or there isn’t enough non-English documentation and learning material?
The ability to share code and libraries with the rest of the world. Imagine if you had to learn French to use sqlalchemy, German for numpy, Japanese for pandas, and Russian for requests.
This is a great case for programming languages in Esperanto!
Maybe Lojban or Interlingua (or toki pona) :) Esperanto is very Eurocentric.
I knew this reply would appear when I posted my comment 😁
:D Thanks for the dance!
In more seriousness, though, Lojban’s “logical” bent does make it appealing for PL experiments, in an anti-Perl kind of way ..
I’ll do a different one for variety: Native English speakers who learn a constructed language instead of a normal one are xenophobic.
consider that constructed languages are (with few exceptions) designed to be much simpler to learn than natural languages. it’s a lot easier to learn 200 sitelen pona than 2000 kanji. (and it’s a lot easier to learn a phonetic spelling than whatever english is doing)
After we’ve finally finished migrating everything to Esperanto, it will be found to be problematic for one reason or another. Then we will need to migrate programming languages to a new constructed human language as their base. I vote for Tolkien’s Quenya.
TBH it’s a great case for programming languages in English, which is the most popular spoken language in the world thanks to its strong popularity as a second language globally. https://www.ethnologue.com/insights/ethnologue200/
I find it funny that, even on the NL wiki, they only have English quotes of Dijkstra, and references to English papers of Dijkstra
https://nl.wikipedia.org/wiki/Edsger_Dijkstra
Does this not itself show how English-centric some of this stuff is?
I think that’s more of a peculiarity of how multilingual Dutch society is, and how good the Dutch are at using English.
He spent a lot of time at the University of Texas in Austin, hence I think it makes sense that the quotes are in English (as that is the actual quote). And yes, I guess most Dutch readers are also familiar enough with English that they do not need a translation.
Yeah, it pretty much is. It is a statement of personal preference, not a set of beliefs.
And it does not claim that it is “the only way of conceiving of programming language design.” It is an expression of a personal preference.
And nobody claims that that is the case, so this is just a straw man.
It has almost nothing whatsoever to do with “colonialism”.
Critical “theory” is not really a “field”, and certainly not a scientific one. It (accurately) self-identifies as ideology, and then goes on to project its own failings onto everything else. With “critical theory” the old saying applies: every accusation is a confession.
Well, no one was claiming that anyone was claiming that, so, ironically, one could say that your strawman claim is itself a strawman.
This saying actually isn’t that old, the earliest example seems to be from around 2020.
what goals do you want to accomplish? why do you want to accomplish them? in what socio-politico-technical context are the tools that are more or less fit for that purpose developed, and how does that context influence them?
What goals or socio-political contexts lend themselves to more or less formal programming methods? Even if I want to build some “smash the patriarchy” app rather than an “I ❤️ patriarchy” app, would I benefit more from languages and tools that are less “formal” (or have the other qualities the paper ascribes to masculinity)? I can’t think of any programming goals that would benefit from an ideological reframing.
I’ve not finished the paper yet but it seems like the authors are using “masculine” as a shorthand for “thing oriented”. Which isn’t a perfect analogy, but in my experience “thing oriented” typically skews male and “person oriented” typically skews female.
Yes, and this is a stereotype that has been semi-deliberately produced and spread. You can read lots of stuff about women in technical fields in the late 19th through mid 20th century, going to universities, applying for jobs, etc and a lot of it involves people explicitly or implicitly saying “women can’t do X because men think logically/rationally/analytically, while women think emotionally/impulsively/socially”. Like most stereotypes there’s a grain of truth in it from the right angle – I recall talking to my sister when she was pregnant with her son and she often said things like “I get so emotional for no reason” and “it’s so draining, I feel like my IQ has dropped 20 points”… now take this phenomenon and project it to a time when families commonly had 8 kids or whatever, your average woman would spend a decade or two of her life either pregnant or recovering from pregnancy.
But the real reason that stereotype exists is because it is used as a proxy for “women shouldn’t be able to do X because I do X and I’m obviously better than a woman”.
As a white boy, no one ever told me that I shouldn’t program because it’s not a white thing. No one ever told me boys can’t code. No one ever told me that computers are not for people like me. No one ever sent a less-competent girl to a maths or programming extracurricular event because programming is a girls thing and so they didn’t consider me. No one ever told me not to bother with programming because it doesn’t lead to appropriate careers.
As Director of Studies for Computer Science at an all-women Cambridge College, every single one of the students that I admitted had stories like that from people in positions of authority. Young women who went on to get first-class honours degrees in computer science at one of the top departments in the world were told girls can’t program by their computing teachers. Girls who scored Gold in the Mathematics Olympiad were encouraged not to enter by their teachers. These aren’t isolated things, these are small social pressures that have been relentlessly applied to them from birth. And the ones I see are the ones who succeeded in spite of that. A much larger number are pushed out along the way.
Now maybe social pressure like this doesn’t apply to you and so this is not relatable for you. There’s an easy test: do you wear a skirt in the summer when it’s too warm for trousers to be comfortable?
Now why do I care? Because I’m selfish. I want to work with the best people. Having more than half of the best people self-deselect out of the profession as a result of unintentional social pressure means I don’t get to work with them.
Thank you for saying what I wanted to say, far better than I ever could.
I’ll follow up on the research on sex preferences in newborns or other primates, ’cause it’s really interesting: built-in differences in preferences are totally real, sure! The differences exist. But everyone is an individual. The way I think of it is that looking at whatever human behavior you want to measure, you’ll probably get a big bell-curve, and if you measure it by sex you’ll probably get two different bell-curves. The interesting question for science is how much those bell curves overlap, and how much the differences are environmental vs. intrinsic. The studies on newborns and such are interesting ’cause there’s barely any cultural or environmental impact at all, so that’s relatively easy to control for.
But when people do studies on “vaguely what do these bell curves look like”, such as https://doi.org/10.3389/fpsyg.2011.00178, they tend to find that the differences within each bell-curve are far larger than the differences between the bell curves. That one is neat ’cause they look at (fuzzy, self-reported) cultural differences as well and find they have a big effect! They look at generic-white-Canadian men and women and get one pair of bell curves, then look at south/east-Asian men and women and get a quite distinct pair. Sometimes the male/female bell-curves in each culture overlap more with each other than they do with the bell-curves of the other culture! (Figure 4 of that paper.) Sometimes the cultural differences outweigh the intrinsic/genetic ones! Sometimes they don’t! It’s wild, and in that paper deliberately quite fuzzy, but the broad strokes make me really wonder about the details.
I am far, far, far, far, far from an expert on psychology, but the conclusions are pretty compelling to me. If you could magically make a perfect test of “technical ability, whatever that means” that corrected for environment/upbringing/etc. and applied it to a big random sample of humans, it seems like you’d expect to get two different bell curves separated by gender, but with far more overlap than difference. So then the question is: why has every computer-tech job, class, or social venue I’ve ever been in been 80–100% male?
It’s not just computers; it’s that “thing”-oriented jobs are overwhelmingly male and “people”-oriented jobs are overwhelmingly female. It’s not that women are bad at “things”, it’s that they often have other interests which are more important to them. Or they have other skills which mean that they’re better off leveraging those skills instead of “thing”-oriented skills.
For people who suggest that men and women should have similar skills and interests, I would point to physical differences between men and women. Evolution has clearly worked on male and female bodies to make them different. But we’re asked to believe that those physical differences have no effect on the brain? And that despite the physical differences, men and women have exactly the same kinds of interests and abilities?
Just… no.
Obviously there is going to be an effect. We’ve measured that effect. We would probably expect something like a 55:45 split, based on biological factors. But instead we see something much closer to a 90:10 split, which is way, way more than we would expect. “Maybe it has something to do with culture and society” is a pretty reasonable hypothesis.
See “non-linear effects”. The average woman is only a bit less strong than the average man. But the world record for the male “clean and jerk” is essentially “pick up the woman doing the women’s record, and add 100 pounds”.
Or look at male high school athletes, who compete pretty evenly against female olympians.
Are we really going to say that similar effects don’t exist for hobbies / interests / intellectual skills?
And why does the comparison always include male oriented jobs? Why not point out that 90% of nurses and dental technicians are female? Why not address the clear societal expectations / discrimination / etc. against men there?
The arguments are generally only one way. Why is that?
‘Cause the female-dominated jobs are uniformly the less prestigious and less well-paid ones ofc~ But it’s still a fun question to ask ’cause, do females get forced into less-prestigious jobs, or do the jobs that females tend to prefer become marked as less prestigious? Probably lots of both!
Makes you realize how much we’re still on the tail-end of 15th century social structures, where males got to run things entirely because they could usually swing a sword harder, hold a bigger pike, and pull a stronger crossbow. And how much of that is built in, I wonder? Testosterone definitely makes a person more eager to go out and do something stupid and dangerous for a potentially big payoff.
Anyway, I’m rambling.
Because professional technical ability is not about averages, but about extreme tail ends. Professional developers/hackers are not the average Joe or Jane, but the extreme 0.1% of the population that has both the ability and the interest to dedicate their life to it. At that point two otherwise very close bell curves are largely disjoint. Example: past some point r in the tail there are simply no more samples from one of the two curves, even though the curves themselves are not that far apart. In 2024 I thought this phenomenon was largely known. Almost everything in society is about extremes rather than averages, for similar reasons.
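To make that concrete, here’s a toy calculation (the 0.3 sd difference in means is an invented number, not a measurement): even when two normal distributions nearly coincide, the ratio of how many samples from each clear a cutoff grows the further out the cutoff moves.

```python
# Toy illustration of the tail-end argument: two normal distributions
# whose means differ by an (invented) 0.3 standard deviations.
from scipy.stats import norm

shift = 0.3  # hypothetical difference in means, in sd units
for cutoff in (1, 2, 3, 4):
    a = norm.sf(cutoff)          # fraction of group A beyond the cutoff
    b = norm.sf(cutoff - shift)  # fraction of group B beyond the cutoff
    print(f"cutoff {cutoff} sd: B/A ratio = {b / a:.2f}")
# prints ratios of roughly 1.5, 2.0, 2.6, 3.4 -- the imbalance grows
# with the selectivity even though the curves mostly overlap
```

Whether real-world selection is anywhere near that extreme, and whether the underlying shift is intrinsic rather than cultural, are of course exactly the contested questions.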
Yeah but why is there any gendered difference in the bell curve? Natural aptitude seems like a real cop out unless it’s been established and there are no societal mechanisms that reinforce it.
The reasons for it have been obvious to humanity for thousands of years.
That’s not a reason, that’s the phenomenon in question.
I started doing computers in the mid 1980s, when it wasn’t socially acceptable. My recollection is getting laughed at and mocked by the girls my age, for being interested in such a stupid thing as “computers”. 25 years later, the same people were sneering at me, telling me that “the only reason you’re a success is because you’re a white male”.
I’d contrast my experience with Bill Gates, for example. He had in-depth experience with computers 10-15 years before me. He had connected parents who helped his business. While it wasn’t inevitable that he was a success, he had enormous amounts of opportunity and support. In contrast, I spent my summers sitting on a tractor. My coding was alone, with no help, with computers I bought with my own money.
I get told I’m similar in “privilege” to Bill Gates, simply because of shared physical attributes. I can’t help but conclude that such conclusions are based on ideology, and are untouched by reality or logic.
And do you remember how much worse that mocking was for girls who were interested in computers? Do you even remember any girls being willing to express an interest in computers when the consensus of their peer group was that it was a thing for no-life boys?
This feels like a reductio ad absurdum. There is a large spectrum of privilege between Bill Gates and a starving refugee. Just because you have less privilege than Gates doesn’t mean that you don’t have more than other people.
I think you’re largely missing my point.
What I find offensive, racist, and sexist is people who lump me into the same “privilege” category as Bill Gates, simply because of shared physical attributes. That’s what I said, and that’s what I meant. There’s no reason to conclude that I disbelieve in a spectrum of privilege, especially when I explicitly point out that Bill Gates has more privilege than me.
This is one of my main issues with people making those arguments. They are based almost entirely in ideology, in reductio ad absurdum, in racism (“you can’t be racist against white people”), and in sexism (“men can’t be raped”).
Even here, I’m apparently not allowed to point out the hypocrisy of those people, based on my own experiences. Instead, my experiences are invalidated, because women have it worse.
So I’m not arguing here that different people have different opportunities. I’m not disagreeing that bad things happen to women / black people / etc. I’m pointing out that most of the arguments I’ve seen about this subject are blatantly dishonest.
Without intending to criticize your preceding remarks about social pressure, I find this “easy” test more culturally specific than you present it. “It’s too warm for [full-length] trousers to be comfortable” itself is not relatable for me; the men and women I know for whom it is relatable all wear short trousers in that case, and of course in some cultures both might wear skirts.
What does this test determine and how?
Whether a man’s experiences and behaviors have been shaped by societal defaults of gender.
I frequently hear women talking about how they were encouraged to pursue STEM fields they weren’t very intrinsically interested in, because of the presence of Women in STEM programs in their educational or social environment and the social expectation that it is laudable for women to participate in things that have relatively low female participation rates on feminist grounds. I’ve even heard women talk about wanting to quit STEM programs and do something else, but feeling social pressure - that they would be a bad feminist - not to do this.
This happens to everyone though, male and female. My uncle became a dentist because his father pressured him into it. He hated it, his whole life. There, now we have two matching anecdotes.
There is of course rather a lot of actual science about this stuff. For example, the lightest, most effortless google offers “Gender stereotypes about interests start early and cause gender disparities in computer science and engineering”, Master, Meltzoff and Cheryan, PNAS 2021, https://www.pnas.org/doi/10.1073/pnas.2100030118.
You can’t take the assumption that “there exist programming tools that work well for me” and replace it with “everyone has tools that make it as easy for them to program as it is for me.” You speak English.
You may not even notice the advantage you have because everyone around you has the exact same advantage! Since a competitive advantage over your peers is not conferred on you it is quite easy to forget that there are some people who are still at a serious disadvantage – they’re just mostly shut out of the system where they can’t be seen or heard from.
The connection to feminism is a bit tangential then I think: it’s a lens through which it is possible to see the distortions the in-group effect has caused, even as the effect is completely invisible to most people who have accepted the status quo.
So how else do you explain this prevalent tendency?
Getting 404 on this link
The link apparently changed to https://devenv.sh/blog/2024/10/22/devenv-is-switching-its-nix-implementation-to-tvix/
Unfortunately it’s not possible to change the link as a user suggestion.
I was wondering whether tools like this support HTML output. While PDFs are great for typography and printing, in the end a lot of content is consumed via the internet and HTML pages. It feels like a tool like this should have at least some support for that, although the output is usually quite different (i.e. people seem to prefer shorter texts on the web).
There are projects that aim to turn LaTeX code into HTML, for example what arXiv is experimenting with. But most of those feel a bit tacked onto the program rather than baked into the core, and thus also appear less polished.
No, sadly. I’ve been working on a tool that lets me write semantic markup and then process it into SILE or HTML.
Oh cool, I’d be interested in taking a look at that if it’s published somewhere. Although in most cases this dual publishing is so customizable that everybody wants to change some subtle behavior, and somehow it ends up with a lot of different bespoke tools.
That’s more or less what I have. The front end builds an AST, then it runs Lua passes over it to do transforms. I’m not sure it’s useful to anyone else.
I started with AsciiDoctor but there were a bunch of things I didn’t like about it.
On the last point, I want to include example code in the book I’m working on. This means I want a plugin that will use libclang to parse a source file, syntax highlight it, and insert a tree that I can generate. I then want to process that tree in three different ways downstream.
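For the parse-and-highlight step, a minimal sketch using libclang’s Python bindings might look like this (the CSS class names and example file name are invented; this prints one span per line, whereas a real pass would emit tree nodes and preserve whitespace):

```python
# Token-based syntax highlighting via libclang (pip install libclang).
import html
import clang.cindex

index = clang.cindex.Index.create()
tu = index.parse("example.c")  # any C source file on disk
for tok in tu.get_tokens(extent=tu.cursor.extent):
    # tok.kind is one of KEYWORD, IDENTIFIER, LITERAL, COMMENT, PUNCTUATION
    css = tok.kind.name.lower()
    print(f'<span class="{css}">{html.escape(tok.spelling)}</span>')
```

Emitting tree nodes instead of HTML strings is what lets the same tokens be lowered differently per output target.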
The AsciiDoctor model doesn’t have a clean semantic markup layer and doesn’t really document how to generate things for targets other than HTML.
I looked at djot, but it really likes shorthand for presentation markup. In my previous books, for example, I used a \keyword macro that would italicise a word and also add it to the index. I want that to be at the same level of syntax as \textit, so I’m not tempted to just use the latter. I have a bunch of semantic markup like this, and so I want an input format with a consistent syntax for all markup.
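For what it’s worth, here’s a hypothetical sketch in Python of the kind of lowering pass this implies (node shapes and the pass API are invented for illustration):

```python
# Hypothetical sketch: a semantic \keyword node lowers to presentation
# markup (italics) *and* an index entry, so the source records meaning
# rather than appearance. Node shapes are invented for illustration.
def lower_keyword(node, index_entries):
    if node["tag"] == "keyword":
        word = node["text"]
        index_entries.append(word)              # feeds the back-of-book index
        return {"tag": "italic", "text": word}  # presentation form
    return node

index_entries = []
ast = [{"tag": "text", "text": "A "}, {"tag": "keyword", "text": "monad"}]
lowered = [lower_keyword(n, index_entries) for n in ast]
print(lowered, index_entries)
```

The point being that \keyword and \textit stay distinct at the input layer even though one of their outputs coincides.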
Sounds like we’re looking for similar stuff. I found the Scribble format (what Racket is using for their documentation) to be quite nice for authoring, in particular with regard to needing to escape special characters. But it is quite opinionated when it comes to its output. So I was looking a bit at the underlying at-exp library, which essentially borrows the escaping mechanism but does not do more than that.
I would love to have a clean separation between the semantics and the output format, but the more I think about this, the harder it seems to achieve. Usually the medium you’re targeting requires or allows for some special features that you can leverage. For instance, print has a fixed width, unlike HTML, so you can create plots with annotations if you have sufficient white space within the plot area. For HTML this becomes much more dynamic, and I haven’t really found a good approach to solving this. On the flip side, HTML output can easily embed an animation, which is not straightforward for a PDF (I have read that this is possible, but have not seen any usage of it in the wild). So I am also not sure if it is worth the effort to try to target multiple output formats with a single source file.
Now that all the major browsers can render MathML I just use that, no JS needed.
What is your workflow for producing MathML? Do you write it directly or maybe generate it from TeX?
For my blog I wrote a TeX to MathML program myself: MathML.zig. For another older site I’m using AsciiMath (ASCII to TeX) and Temml (TeX to MathML) with a bunch of ad hoc post-processing: math.ts. A while ago I also made a web math demo site where you can compare MathJax, KaTeX, and browser MathML. There are some quirks and differences in MathML rendering between Chrome/Safari/Firefox but I think it’s worth dealing with them to have fully static (and much smaller) pages.
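If you’d rather not write your own converter, the same build-time approach works with off-the-shelf tools; a small sketch with the latex2mathml Python package (a different converter from the ones mentioned above):

```python
# Build-time TeX -> MathML conversion (pip install latex2mathml),
# so pages ship a static <math> element with no runtime JS.
import latex2mathml.converter

tex = r"e^{i\pi} + 1 = 0"
print(latex2mathml.converter.convert(tex))
```

Output quality will differ from Temml or a hand-rolled converter, so it’s worth eyeballing the results on your own formulas.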
I was going to say that visually the output on your comparison page looks worse for MathML. Do you know if there is some plan by the browser developers to improve that in the future? And to potentially unify the output, if there are quirks between the browsers?
Good to see that the development is still ongoing, some years ago I briefly looked into it and the quality seemed worse compared to now, so hopefully things will continue to improve.
I’m not sure about future plans for browsers. But one big thing my demo page is missing right now is custom fonts for MathML. I think if you use your own webfont instead of the default, it should look much more consistent between browsers. Also if the poor quality you’re seeing is weirdly formatted parentheses (that’s what I see), you can avoid it by not using \left and \right unless it actually needs to stretch – this might be the fault of the TeX to MathML conversion rather than the MathML rendering.
Cool, looks like I should give MathML another look, that sounds promising.
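For anyone following along, that advice in TeX terms (a small made-up example):

```latex
% Plain parentheses are fixed-size and convert to simple MathML:
f(x) = (a + b)^2
% Reserve \left/\right for content that actually needs to stretch:
g(x) = \left( \frac{a}{b} \right)^2
```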
What I’m mostly seeing as an issue is the spacing. The default math on the page seems to misplace the gamma in the subscript, and the spacing between f and ( looks a bit off (as well as the variable within the parentheses). But those are relatively minor complaints if I’m being honest.
It’s the same for epub and epub reader programs, so I look forward to future math textbooks being written in a reflowable format!