Not to cast too much shade, but one of the biggest things (imho, having written both webshit and CAD software) is that the math and CS work for CAD software is genuinely hard. Like, really hard.
Instead of training our devs to do that sort of really hard, mathy work, we’ve raised a generation of engineers to give conference talks on amateur (not necessarily novice, but amateur) compiler design and PLT wonkery: stuff that doesn’t scale in terms of impact.
Like, it would seem to me that more humans are impacted on the day-to-day by plastic moldings enabled by computational fluid dynamics or CAD-designed tooling than benefit from ReactJS or Babel or some new exploration in type-safety. Maybe we should try to encourage more of the next generation to go after numerical/graphical stuff.
I haven’t done that kind of programming in a long time (it’s gonna be 10 years now…) but my experience kinda matches yours…
For example, the biggest obstacle to solving the first problem in the article (everything is single-threaded) is that the subset of people who can a) do parallel programming (in a maintainable manner), b) make heads or tails of the math behind a CAD program, and c) understand enough about how that program is going to be used to come up with real improvements, not “UX improvements”, is extremely small. In what used to be my field (EM field simulation) I’ve met maybe one or two people who could do it. I definitely wasn’t one of them: I knew enough about electromagnetism to understand the equations and write the programs, but definitely not enough to make any serious original contributions in this regard (in my defence, I wasn’t that interested in that particular part of EE either; it was just the only one I could work in at the time).
This is pretty obvious at every level. E.g. the orange site is full of criticism of CAD/CAE tools from programmers, and while I can’t speak for the mechanical side of things (zero expertise there), I can confidently say that on the EE end, most of it is bonkers. Lots of CAD tools are 1988-era programs with 1998-era UIs not only because they’re sold by huge non-software companies that are both unable to recognise and unwilling to pay for software development expertise, but also because too many people with programming expertise are too busy being right, and too dogmatic (about many things, including how to build graphical software), to make any real improvement.
(You can’t just take a program for a test drive and maybe do a quick PCB design because you’ve taken up electronics as a hobby and, with a grand total of maybe 200 hours of experience spread across a year and a half, hope to figure out what the people who’ve been at it for twenty years at their day job really need. Especially when they’re operating inside an organisation, not in a living room, and with all sorts of reporting, engineering, logistics and business processes in place, some of them formal, some of them informal, all of which you have to support and facilitate. But that’s a whole other story.)
The educational gap is hard to bridge though. Maybe with enough people going into data science these days there’s some hope for it. Back in my second year of uni, when I took the introductory numerical methods course, lots of people – both students and teachers – were kinda sneering at the more programming-heavy problems in that course, after decades of Moore’s law making every program twice as fast if you just wait a few months. (Why bother with C when Matlab does all this in a few lines, etc. etc.)
I agree. I don’t think the OP understands how hard the problem space is, even ignoring half of the issues presented.
I’d argue you have better chances of working in the field if you start from a math curriculum and add the required CS bits (which are also of the non-trivial kind). I say this having also worked in the field of CG (mostly doing topographical DBs, graphs and routing), where even seemingly trivial algorithms that can be described in half a page in “C computer graphics” can require tons of foresight to actually work without producing degenerate cases due to numerical instability. It’s fascinating, but incredibly demanding.
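To make the “degenerate cases due to numerical instability” point concrete, here’s a minimal Python sketch (the `orient2d_*` names are mine, not from any particular library) of the classic 2D orientation predicate: the float version is subject to rounding, while the same determinant over exact rationals is not:

```python
from fractions import Fraction

def orient2d_float(ax, ay, bx, by, cx, cy):
    # Sign of the cross product (b - a) x (c - a) in plain floats:
    # > 0 means c is left of the line a->b, < 0 right, 0 collinear.
    # Rounding in the subtractions and products makes the zero
    # (collinear) case unreliable near degeneracy.
    return (bx - ax) * (cy - ay) - (by - ay) * (cx - ax)

def orient2d_exact(ax, ay, bx, by, cx, cy):
    # Same determinant over exact rationals: no rounding, so the sign
    # (and the exact zero for collinear input) is always correct.
    ax, ay, bx, by, cx, cy = map(Fraction, (ax, ay, bx, by, cx, cy))
    return (bx - ax) * (cy - ay) - (by - ay) * (cx - ax)

# Floats break the basic identity an "== 0" collinearity test relies on:
assert 0.1 + 0.2 != 0.3

# The same decimal coordinates, taken as exact rationals, sit exactly
# on the line y = 2x, and the exact predicate says so:
assert orient2d_exact("0.1", "0.2", "0.2", "0.4", "0.3", "0.6") == 0
```

Production kernels don’t pay the cost of exact arithmetic everywhere; the usual trick is a fast float filter with an exact fallback only when the float result is too close to zero to trust.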
The proof is that, on a global level, we have very few geometry kernels that can work on B-rep models. All of them are decades old and still have plenty of degenerate cases. The good ones are too expensive for hobbyists to use. On top of that, history-based modelers still rely on a ton of heuristics to rebuild models across topological changes.
I am surprised that the author’s bio doesn’t mention that she’s a cofounder of KittyCAD, which seems to be super early and is focused on addressing these problems. At the very least, it means she has a financial stake in this area, and it seems reasonable to disclose that.
Huh, is that a spin-off from Oxide’s experience building server cases/racks/etc? That would be like, business level yak shaving :)
The single-threadedness of OpenCASCADE is a big pain point in FreeCAD. It’s not even off-threaded: long operations run right on the main UI thread. Launched an operation that takes forever? Your only option is to kill and restart.
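Nothing below is FreeCAD’s or OpenCASCADE’s actual API — the function name and the `Event`-based flag are illustrative Python — but it sketches the usual alternative to kill-and-restart: run the long operation on a worker thread and have it poll a cooperative cancel flag, so the UI thread stays responsive and the user can abort cleanly:

```python
import threading
import time

def long_running_op(cancel, steps=100):
    # Stand-in for a slow geometry operation. It checks a cooperative
    # cancel flag between chunks of work, so the caller can abort it
    # cleanly instead of killing the whole application.
    for _ in range(steps):
        if cancel.is_set():
            return None              # aborted cleanly
        time.sleep(0.001)            # placeholder for real work
    return "done"

cancel = threading.Event()
outcome = []
worker = threading.Thread(target=lambda: outcome.append(long_running_op(cancel)))
worker.start()    # the main (UI) thread stays free to handle events
cancel.set()      # user clicks "cancel": no kill-and-restart required
worker.join()
```

The catch, of course, is that this only works if the underlying kernel is safe to call off the main thread and exposes interruption points at all, which is exactly what the complaint above says OpenCASCADE-in-FreeCAD doesn’t give you.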