From the It Will Never Work In Theory mini-conf, the “Refactor commits take longer to approve than feature commits” talk was interesting but not as broad as I’d hoped. IIRC it only covered Java, and only the refactoring experience within a single editor. Very nice to see someone talking about refactoring beyond simple find/replace, though.
This is why https://hedycode.com/ supports many different languages, so that students can spend their time focusing on learning programming ideas and not learning English.
Hedy is meant for all kids that want to learn programming! You do need to be able to read English with ease.
???
I’m wondering if that’s a typo. Felienne spoke at Strange Loop 2022 and demoed Hedy running in many non-English languages, including RTL languages.
Nothing here is wrong, but it does ignore the reasons end user programming has proven undesirable.
It’s largely undesirable from an IT-management and end-user perspective because it allows malicious code to do malicious things, and ill-thought-out code to do bad things that are hard to track down. It also makes it much harder to collaborate on, or train people in, software without also giving them your customization code. Which is great until they need something you hid away.
I would argue that end user programming is already almost everywhere it’s desirable - in various kinds of productivity software.
Probably a better next-generation solution would be something that makes it easy for apps to expose APIs with a standardized security experience, which MSFT has tried many times to create. And maybe some standard way to embed UI extensions, perhaps as basic as the ability to tether one window to another.
This is not to say that we’re at a global optimum, but the transition to a more “bicycle for the mind” type computing experience is going to require totally rethinking integration points, sandboxing, and security. This leaves intermediate states “impossible” and getting over that chasm challenging.
I mostly agree, except I think there’s a lot of space for personal tooling. But I also don’t think that the current EUP approaches are as good for personal tooling as people want, because personal tooling involves splicing together lots of (often non-programmable) consumer apps.
I think there is a case in the right size of company. Where envs are cheap, give everybody their own env! That by the way is what Bank Python does: everybody gets infinite envs and you just control the ones that are represented to other people as correct.
Hence what I say about APIs ;) But even then, there’s a huge level of skill required to make use of any kind of programming, because it involves learning a really extensive set of thinking skills.
When I was a lawyer, I used to impress my colleagues with my ability to use a spreadsheet. That’s the most accessible programming tool in the hands of the profession that engages in the most programming-like activity outside of STEM, and it’s still like magic to them.
Even as we get programming to the same place as calculus, it’s not going to change that much. Most people struggle with trigonometry and calculus even though they were taught them in school. Everyone (to a first approximation) in the UK used to learn French in school, but even being hilariously bad at French is still considered pretty impressive in the UK.
Personal programming is only going to get big if programming becomes something most people have to do on a regular basis so much that it’s a huge part of our culture, for everyone. Rather like speaking other European languages in the Netherlands or Iceland.
Edit: or for an example closer to home, most developers aren’t that great with text processing tools and plenty don’t really use the shell. Most people don’t use vim or emacs. Even within our profession the demand for personal tooling is lower than it would appear to those of us who do actually indulge in it. Or to put it another way, it won’t even see significant uptake from programmers let alone anyone else until it gets vastly better.
Is malicious code such a large problem because in the vast majority of languages you can insert malicious code anywhere, making it impractical to check for maliciousness? Historically there haven’t been great options for sandboxing either, at least to my knowledge. If these issues were greatly improved, would end user programming be more common or practical?
It has nothing to do with languages. If users can add code that messes with their own data (and the network) then end-users will install some random snippet they found online which steals all their stuff.
Seems there is some reasoning in https://github.com/gren-lang/compiler/issues/12 but it doesn’t seem particularly convincing.
IIRC PureScript doesn’t have them either, but uses custom types instead:
type Tuple2 a b = Tuple2 a b
type Tuple3 a b c = Tuple3 a b c
I kind of like this approach because it reduces parens to only be used for grouping, reducing the overall syntax variety.
TypeScript does have tuple types: https://www.typescriptlang.org/docs/handbook/2/objects.html#tuple-types
Although I’m fairly convinced by the rationale for excluding them.
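For reference, a minimal sketch of what those look like in TypeScript:

```typescript
// A tuple type fixes both the length and the type at each position:
type Pair = [string, number];

const entry: Pair = ["Ada", 36];        // ok
// const bad: Pair = ["Ada", 36, true]; // error: too many elements

// Destructuring recovers the positional types:
const [who, age] = entry; // who: string, age: number
```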
This is tangential, but:
In particular, there is almost always a gap between domain experts (the people who have a need which can be met by creating a new, or adapting an existing, program) and programmers (the people who write programs).
Why haven’t we yet made programming approachable enough that the domain experts can be the programmers rather than having to delegate to programmers? The immediate cynical answer that comes to mind is that we programmers like our job security. But I wonder if there are other, better reasons.
I think the more likely answer is that making programming approachable is a lot harder than we think it is.
What do you think about this essay which argues that things like Visual Basic and HyperCard were on the right track, but then the late 90s web boom (and, according to a later essay, open source), halted progress in that area?
I’m not hwayne, but I agree with him—it’s a lot harder than we think it is. Basically, programming requires tracking detail, enough detail that would daunt most people. Witness the number of articles about fallacies that programmers (people trained to track such details) make about human names, addresses, phone numbers or dates, just to name a few areas.
Here’s my question to you—how do you define “computer literacy”?
Poppycock. There are few imaginary products I can think of that would be more valuable to their creator than the “AI that replaces programmers”; it’s just not something we have any idea how to do.
Small parts of programming do get automated over the years, with things like garbage collection and managed runtimes, but so far this has always led to an increase in the kinds of tasks we expect computers to handle, rather than doing the same basic tasks with fewer programmers. This makes sense because it gives the business an advantage over competitors in whatever their core business happens to be. They’d (the companies that survive) rather do more and charge more / get more customers, than do the same for slightly less.
and, according to a later essay, open source
That essay seems to confuse open source with not charging money for things…
First of all, I’ll say that I agree with hwayne and think that’s the primary reason we don’t have many non-programmer friendly coding/automation tools.
The first essay you linked alludes to this, but I think the point should be emphasized: there’s an incentive mismatch between programmers and end-users. Programmers often like to program because they enjoy the act of programming. Look at how many links we get on this forum about programmers waxing poetic about the joys of TUIs, architectural simplicity, or networks run for and by skilled operators. These are all things that are immaterial to, or even detrimental to, the user experience of a non-programming SME. Even in today’s world of skilled programmers running large cloud systems, programmers still complain about how much they need to accommodate the whims of non-technical users.
This isn’t unique to programming. Trades folks in a lot of trades often talk shop about better access platforms/crawl spaces, higher quality parts, more convenient diagnostic tools, and other stuff that non-tradespeople would find spurious expenses/concerns that sometimes may even make the tradesperson’s work less aesthetic (say in a residence.) I think there are many complicated factors that make this incentive mismatch worse in programming than in trades. As long as this incentive mismatch exists, I think you’ll only see limited progress toward non-technical programming accessibility.
Having been in the position of “software engineer for SME’s” a few times… Making really good software that you would actually want to use in production is a craft, a skill of its own, and one that takes a lot of time and work to learn. Most software people are interested in software for its own sake, because the craft is fun. Most SME’s are not, and so they will learn as much as is necessary to bang together a solution to their problem and it doesn’t really matter how nasty it is. They want to be working on their subject matter, not understanding cache lines or higher order functions.
We can rephrase the question: “Why haven’t we yet made woodworking approachable enough that the people who use furniture can be the carpenters rather than having to delegate to carpenters?” Sure, if you are actually interested in the process of building furniture then you can make lots of amazing stuff as a non-professional, and there’s more sources out there than ever before for an interested novice getting started. But for most people, even assembling IKEA furniture is more work and suffering than they really want to expend.
I think the whole idea is to make the “bang together something that solves the problem” option more possible and more common.
So many people spend so much of their lives using computers to manually do trivially automated things, but the things are all too bespoke for a VC funded startup to tackle making a “product”.
This works pretty well as long as the tools those people build are only used by that person. Which is pretty important! The problem appears when someone’s bespoke little tool ends up with its tendrils throughout an organization, and now suddenly even if it isn’t a “product” it is essential infrastructure.
I think that’s actually a good thing / goal, and work on “making programming accessible” should focus on reducing the ways in which that is a problem.
Note that “a dev with a stick up their ass seeing it will say mean things” is not by itself a valid problem for anyone but that dev ;)
I would say it’s for the same reason why programmers can’t be the domain experts; expertise in any field takes time, effort and interest to develop.
For example, a tax application where all the business rules were decided by developers and a tax application developed by accountants would probably both be pretty terrible in their own ways.
A lot of the other responses I almost entirely agree with, but to add my own experience:
I’ve been a part of some implementations of these types of tools, and also read a lot about this subject. Most people building these tools aren’t building “programming that’s easy for non-developers” but “I find ____ easy, so I’m going to remove features so that it’s more approachable”. It also leads a lot to either visual programming languages, which don’t directly solve the complexity issues, or config languages, which lack the necessary surface area to be usable for many tasks.
A prior team of mine tried to go down the config route, building out 2 different config languages that “can be used by managers and PMs to configure our app so that we can focus on features.” Needless to say, that never happened. No one did any research on prior attempts to build these types of languages. No one tested with PMs and managers. It ended up being built by-devs-for-devs.
There’s also this idea that floats around software that somehow simpler languages aren’t “real” languages, so they often get a lot of hate. For many years I’ve heard that Go isn’t for real devs, that it’s only for stupid Google devs who can’t be bothered to learn a real language like Java. JS is still considered by many to be a joke language because it’s for the web, and “real” developers program servers, desktops, and mobile. Way back in the day, Assembly was for the weak; “real” devs wrote out their machine code by hand/punch card. Unless we can overcome that notion of what a “real” programming language is, we’ll likely continue to struggle to find and build accessible languages.
One of the few people I know writing about the approachability of programming, and attempting to actually build it, is Evan C. I won’t claim that Elm is perfect, and I do think we can do better, but Evan has worked very hard to make it approachable. So much so that both its error-message approach and the Elm Architecture have permeated many other languages and frameworks without people realizing it.
When you start learning a programming language, how much time do you spend stuck on syntax errors? [..] how many people do not make it past these syntax errors?
Compilers should be assistants, not adversaries.
Most terminal tools came into existence well before our industry really started focusing on making apps and websites feel great for their users. We all collectively realized that a hard to use app or website is bad for business, but the same lessons have not really percolated down to tools like compilers and build tools yet.
The answer to me comes down to time. I can gather requirements, speak to stakeholders, and organize projects, or I can write code, test code, and deploy code. I do not have the time (or attention span, really) for both.
People have been trying this for a very long time. It results in very bad programs. The idea that programming can be magic’d away and we can fire all the programmers is held only by marketing departments and (for some reason) a few programmers.
For my web app, I looked into replacing a JS-based blurhash implementation with a Rust-based one. I benchmarked it, and found that the JS one was just as fast after the 3rd iteration (presumably because of the JIT kicking in).
As said elsewhere, WASM is not magic performance pixie dust. It makes sense in certain cases, but benchmark before jumping to conclusions.
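To illustrate the shape of that comparison, a rough harness (the decode functions are hypothetical placeholders):

```typescript
// Rough micro-benchmark sketch: warm each implementation up first so the
// JIT has a chance to kick in before you measure anything.
function bench(label: string, fn: () => void, iterations = 1_000): void {
  for (let i = 0; i < 100; i++) fn(); // warm-up runs
  const start = performance.now();
  for (let i = 0; i < iterations; i++) fn();
  console.log(`${label}: ${(performance.now() - start).toFixed(1)} ms`);
}

// bench("js blurhash", () => decodeJs(testHash));
// bench("wasm blurhash", () => decodeWasm(testHash));
```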
I have been playing with WASM and Rust for a few things. The motivation hasn’t been speed, though; it has been the ability to use a far superior type system. After all, both WASM and JavaScript are interpreted. The primary benefit is being free to use something other than only JavaScript for browser-based frontend development.
TypeScript also has a pretty nice type system. I was surprised at how comfortable it felt, coming from a C++ (and some Swift) background. The flow-based typing is especially handy.
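An example of the flow-based narrowing in question:

```typescript
function describe(x: string | number): string {
  if (typeof x === "string") {
    // In this branch the compiler has narrowed x to string:
    return x.toUpperCase();
  }
  // ...and here it knows x can only be number:
  return x.toFixed(2);
}
```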
When compared to JavaScript, TypeScript’s type system is a massive leap forward. I wholeheartedly endorse its usage by anyone.
However, compared to Rust, TypeScript doesn’t come anywhere near giving me the same guarantees. Rust/WASM lets me get an even more expressive type system to model my software’s contracts with. It won’t always be the best choice, but all else being equal, a Rust type system beats TypeScript any day.
I prefer TypeScript’s types to Rust’s a lot of the time, but it often depends on the domain I’m in. For lower-level things around managing resources, Rust’s shines, but for higher-level modelling of app data I prefer TypeScript’s. What’s universal is that when I’m using one, at some point I miss a feature of the other :-)
There’s also PureScript and Elm, as well as some other very young options for browser languages that have nice type systems.
I always miss seeing ExtJS in these lists. It was quite prevalent in that mid-to-late ’00s period and still has a foothold in some companies.
The other thing that always stands out to me with this general view of the web is the assumption that HTML/JS are somehow a good goal. By this I mean that in native apps we don’t talk about primitives (drawing pixels directly), but on the web we’re still obsessed with primitives.
Yeah - the bigger UI toolkits on top of jQuery - jQuery UI, MooTools, etc. - were in the “before times”.
I built a thing with jQuery UI. It is a regret, of sorts, because I wouldn’t be surprised if that thing still exists with more or less exactly the same UI I built only two years into my career, now fully a decade ago. While I know what people mean when they talk about it being a framework, it sat in a really, really different spot than the stuff we describe as “frameworks” these days: a curated bucket of interactions and widgets is the kind of thing we build on top of those frameworks.
In some sense, this is just a nomenclature problem: no one ever agrees on what is a “library” vs. a “framework.” The “I call you vs. you call me” distinction is clever and has something to it… but React, for example, 100% calls the components you provide while also regularly being described as a “view library, not a framework”. In some sense the best we can do is define how we’re using the terms, using them in idiomatic ways where possible but also being clear about where we’re saying something slightly different, and hope for the best.
You know, one could capture these “colors” in a type system. One could see the different contexts in which the functions run as, I dunno, “side effects” separate from the intended action of the function. If you had these “wrapper types,” you’d be unable to inadvertently run a function that interacts with a Server context in a Client context. In cases where either context could provide the same functionality – possibly implemented in different ways – you could implement a kind of “class” for those context types and make your functions generic across their “colors.”
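If you squint, a toy version of that in TypeScript might look something like this (all names made up):

```typescript
// Tag a computation with the context ("color") it needs, and let the
// type checker refuse to run it anywhere else.
type Server = { readonly kind: "server" };
type Client = { readonly kind: "client" };

// An Effect<C, T> can only be run by code holding a C context.
type Effect<C, T> = (ctx: C) => T;

const fetchUser: Effect<Server, string> = () => "a row from the db";
const paintButton: Effect<Client, void> = () => { /* touch the DOM */ };

// Generic over the color: runs in whichever context the caller has.
function logged<C, T>(eff: Effect<C, T>): Effect<C, T> {
  return (ctx) => {
    console.log("running effect");
    return eff(ctx);
  };
}

// paintButton({ kind: "server" }); // type error: wrong "color"
```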
They’d need some kind of catchy, short name, though. Something that would definitely not scare people.
Thank you for emphasizing exactly how much the word “monad” has become a fnord. You are describing codensity monads.
Abilities? :-)
Nice – I hadn’t looked at Unison before. That’s definitely an interesting way to encode and name these mysterious container types! Thanks for the link.
Completely agree. I think you could actually have a wonderful experience with this sort of architecture if you had the type system and ecosystem to back it up.
My favorite highlight from the article is the “Building code doesn’t execute it” section:
It is an explicit security design goal of the Go toolchain that neither fetching nor building code will let that code execute, even if it is untrusted and malicious. This is different from most other ecosystems, many of which have first-class support for running code at package fetch time.
This is something unique among programming languages, something that even Rust (which puts “security” among its core attributes) doesn’t provide.
I can safely build a Go application and then run it in a separate account or under bubblewrap, without the concern that the build process will trash my workbench or account. (On the other extreme end, there was one time when a Ruby dependency decided to overtly sudo without even notifying me or asking for permission; I was saved by the fact that, by default, on all my systems the default sudo user is not root but nobody…) :)
The Ruby situation is especially dire because Gemfiles are themselves Ruby programs, so even resolving the dependencies of a project opens you up to remote code execution!
That said, I think there are reasons why projects may sometimes need build-time logic, and my long-term preference is for this to be available in Rust and other languages, but only in a sandbox with strong limitations, or even with the ability for end-users to place additional sandbox constraints or (more ideally) to relax the by-default-strict sandbox constraints.
I don’t think this will ever happen… I think most Rust developers come from two legacies: one is former C/C++ developers who are used to auto* or CMake or plain make, and thus don’t want to give away those abilities; the other part seems to come from Ruby, Python, and other interpreted languages where security is not a top priority…

I would love it if cargo (the Rust build tool) had a build option that disables the usage of build.rs.
Now, getting back to Go, I think it’s fair to say that this decision (of not running code at build time) is also helped by the fact that a lot of libraries are written in “pure Go” and thus there is no need for any “external build” facilities.
Also, it is worth mentioning that even Go has go generate, but that is usually invoked manually by the developer, and its outputs are usually committed beside the code, so there is no need for it at build time.
I would love it if cargo (the Rust build tool) would have a build option that disables the usage of build.rs.
Note that you’d also need to disable proc macros. And I fear that the number of crates which transitively use neither build.rs nor proc macros is vanishingly small :(
I forgot about proc macros…
However, at least with regard to proc macros, I assume most of them only process the AST given as input, and thus could be limited (either by forbidding the usage of certain APIs, or by something like seccomp). For the rest, perhaps access should be limited to the current workspace (and output) directory, disallowing any other OS interactions (sockets, processes, etc.)
As for those that need to invoke external processes or connect to remote endpoints, perhaps their place is not in the build life-cycle at all, and, just like go generate, they should be extracted into a completely separate step.
However, at least with regard to proc macros, I assume most of them only process the AST given as input, thus could be limited
That’s what Watt does: it compiles proc macros to WebAssembly (which is naturally sandboxed) and then executes them.
On the same subject, I have the feeling that Python fits in the same category with its setup.py.
(Funnily enough, I think that Java, at least through Maven, doesn’t suffer from this…)
On the same subject, I have the feeling that Python fits in the same category with its setup.py.
Python’s “wheel” (.whl) packages do not have, and have never had, the ability to run code during installation; setup.py only runs when building the package for distribution.
And more recently, people have been working on moving to pure declarative package-build configuration anyway.
It’s an unfortunate fact of the world that there are still a lot of sdist-only packages, even ones that are pure python and could easily distribute a universal wheel.
Elm is right up there with Go in not executing code during fetch and build. I’ve even seen experiments with CLIs written in Elm where you can restrict at the type level what the code has access to, so that were you to run a CLI written in Elm, you’d know it’s only touching approved files/directories.
You could maybe include Deno here too, though it’s a runtime and not a language: in order to execute something that wants to do IO or such, you need to explicitly allow it. You can even restrict the directories or URLs it has access to.
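For illustration, permissions are granted per invocation (the file name is made up; --allow-read is a real Deno flag):

```typescript
// read_config.ts: needs read access to a single directory, nothing else.
const text = await Deno.readTextFile("./config/app.json");
console.log(text);
```

Run it with deno run --allow-read=./config read_config.ts, and any attempt to touch the network, or files outside ./config, is refused at runtime.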
Huh, doesn’t Go tend to make heavy use of code generation? I guess if you check in the generated code, you technically don’t have to execute any code at build-time… but avoiding compile-time code execution by shipping build artifacts in the source repo feels like cheating.
Better than literally distributing binaries, mind you, because generated source is theoretically human-readable! But still, it feels like they only manage to build from source with no code execution by taking a bizarre definition of what “source” is.
I guess if you check in the generated code, you technically don’t have to execute any code at build-time… but avoiding compile-time code execution by shipping build artifacts in the source repo feels like cheating.
Actually I prefer having pre-generated stuff in the repository, as opposed to having to install (and fiddle with) various obscure tools for code generation or documentation… This way, if I only need to patch some minor bug, or make some minor customization to the code, I can rebuild everything by just having Go / Rust / GCC installed.
I have the opposite experience with lots of other projects that in order to build them you need a plethora of Python or Ruby tools, or worse other more esoteric ones, most which are not available by default on many distributions…
Just imagine that you want to patch a tool that relies on serving some JS bundle. Do I want to also build an entire NodeJS project for this? Hell no! I’ll just move to another alternative… (In fact this is my preferred way to interact with the NodeJS based ecosystem: as long as it runs only in the browser, and as long as I don’t have to touch the NodeJS tooling, great! Just give me a “magic” blob! Thus I also keep a close eye on Deno…)
This is fair, but in some cases quite a pain, particularly for cross-compilation (or support for other hardware platforms in general). In the Rust crates I maintain, we generate FFI bindings for the most common targets; it would be a complete hassle to (re)generate them for all possible targets, and new ones get added regularly, so we’d have to keep on top of that as well. So we offer a feature to do that at build time, if you want to build for a platform we don’t “support”, or if you have some special sauce in your bindgen or the other tooling around it.
I agree that one can’t possibly generate artifacts for all platforms under the sun. (My observation mainly applies to portable artifacts such as JavaScript bundles, or Java jars, or man-pages, or other such resources.)
However, in your case I think it’s great that you at least generate the artifacts for the most common targets! As long as you’ve made the effort to cover 90+% of the users, I think it’s enough.
My issue is with other projects out there that don’t even make this effort!
Speaking to the game a bit, I find it kinda amusing that it uses regenerating health. That would *never* have featured in 1980s games.
Otherwise, it runs fine on my phone, though the audio doesn’t seem to work? I dunno if that’s related to audio-context interaction requirements or not.
The problem that side effects create is that they reduce observability. However, the overall reduction varies between languages, frameworks, and whatever other tooling you use.
My experience with trying to bring concepts like immutability, pure functions, or even managed effects into a team is that most of the teams I’ve been on don’t believe half of what I’m saying. They understand the existence of immutability and the idea of pure functions, but actually using them and having it be a gain instead of a loss to productivity seems implausible to them.
I can kind of understand why they think this. Having recently started relearning C after nearly 15 years of not writing it, the tooling today feels a lot better than I remember it being. It feels better because it gives me more information about what my program is doing, and it provides it earlier. Similarly, web development has gotten easier over the years because browsers have provided better debugging that allows developers to see and interact with the state of their program, letting them see where side effects are affecting their code.
Whether or not this is a problem depends on where you’re standing. The person who creates a new app at a company and then moves on to the next greenfield project likely never sees the difference. The person who comes in after them and maintains the work only is aware of the difference if they’ve been exposed to other possibilities. If they’re ignorant of other possibilities then they may just accept that this is how things are done.
They understand the existence of immutability and the idea of pure functions, but actually using them and having it be a gain instead of a loss to productivity seems implausible to them.
Admittedly, it takes work to relearn other ways of thinking about programming. And names like “monad” in FP probably don’t help.
That’s an interesting perspective. Maybe even though the language model is lacking, we’ve made up for it with better tooling with insight into observability of the state of our programs. So it may be possible that while there’s a gap between better language model vs better tooling, it may not be as big as to warrant a language switch.
My instinct is that there probably is a limit to the tooling, depending on the problem domain. Distributed systems comes to mind, where reasoning with imperative style and mutations just doesn’t cut it. But then if you’re never doing that sort of stuff, and instead mostly dealing with business logic, I can see that maybe you don’t think you’d need it.
Reading all the comments about wanting to make a dialect or whatnot within a language, and no mention of Forth, where you can define essentially anything and even redefine words within scopes! Just like how real-world languages can have the same words/sounds but different meanings.
they won’t be stuck with what they’ve built for all that long
I agree, but not for the same reason. They won’t be stuck because the devs creating projects typically leave after a year or two, and then someone else comes in and is stuck with it, for a year or two, and then repeat.

Devs aren’t choosing tools for long-term maintenance because either a) they only build greenfield and aren’t stuck with it, or b) they get stuck with the project after it’s built and aren’t given the power to change things.

Devs are choosing tools for quick greenfield spin-up because that’s what we market to each other: the number of taglines on most tools about “only N lines of code” or “deployed in N mins”, with zero regard for what happens a year or two later, because supposedly we’ll just rebuild it with the next hot tool.
There are very limited options for saying “I’ll do this part later”… Your application has to compile.
I think there are many good things in Elm, but I do find this a bit frustrating, given that there are tools available to address this for statically typed languages, like typed holes and deferred type errors, that can reduce the amount of friction during development while not compromising on correctness.
PureScript breaking out to FFI was an easy way to hack around this for now and clean up or upstream later. The best “hacks” in my experience involved things along the lines of using globalThis.Intl, instantiating an Intl formatter in the FFI file with its options, and then just consuming IntlObj.format as a pure, synchronous function to get quick date/number formats.
Usually when I write Elm, I start with the types and let the type errors guide me to what I want to achieve. So I guess the workflow is types => base functions => actual usage in update/view. I have a similar workflow in Derw, but it’s actually possible to work around that currently by calling some global function, since those aren’t type checked. So you could have:
isNumber: string -> boolean
isNumber str =
    globalThis.doNothing
and replace globalThis.doNothing once you’ve figured out what goes there.
It has Debug.todo "some message", which you can use similarly. The difference is you can’t use any Debug functions in prod builds. I use them all the time when I’m blocking out my code, and then remove them one at a time till I get what I need.
Kinda baffled that they’re using esbuild because they don’t understand what the import syntax does. If they’re comfortable with script tags, why not use those with import syntax? It’s further confusing because Vue’s build tool, Vite, uses esbuild.
It’s pretty clear from the article that Julia doesn’t know about <script type="module"> (or didn’t, anyway; I imagine she has been emailed/tweeted at by now). It reminds me of Dan Abramov’s Things I Don’t Know article. I’ve learned a lot from Julia’s zines about SQL, HTTP, Bash, etc., so it’s interesting to read a post where I definitely know (or knew) more than her. :-)
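For anyone else who didn’t know: inside a module script, the browser resolves import statements natively, no bundler required. Something like this (file names hypothetical):

```typescript
// main.js, loaded via <script type="module" src="main.js"></script>:
// the browser fetches and links the dependency itself.
import { setup } from "./app.js";
setup();
```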
I don’t know what the import syntax does, either, but I’ve heard esbuild is fast, so I could easily imagine myself being in the same situation.
As someone who has never and will never write, build and deploy JavaScript — is it really this much of a donkey circus?
Yes. It is the kind of thing that would pass as a hyperbolic joke if it weren’t already reality.
You should check out the left-pad fiasco. TLDR: a developer got salty over losing a bunch of GitHub stars to a 3-letter name collision with a registered brand. Ends up breaking half the web.
Yes, but: I think that can be misleading.
JavaScript’s ecosystem grew up at a time when three things were true: open-source was completely established from the get-go; “everyone” (for loose values) needed to write at least some JavaScript; and the delivery platform (the browser) arrived before the actual development strategy was really understood. So what you got were multiple efforts to figure out how to build things, done in parallel, done differently, without learning from one another, where no one solution was going to be The Solution, and where the people writing the development tools were (in general) not the same people writing the deployment platform. Throw on top of that users who were, to a greater extent than on many previous platforms, going to be a mix of professional developers and developers new to JavaScript, to development in general, or both, and here you are.
I make that qualification for two reasons. First, any time a language is easy enough, and widespread enough, that it attracts tons of amateur developers, it gets a reputation for having a crappy ecosystem. Happened to Windows, happened to PHP, and has now happened to JavaScript. I’m not going to say that Webpack is awesome (it isn’t) or that the mainstream frameworks don’t have issues (they do), but I don’t feel that, on balance, the issues we’re facing are worse than what Python, PHP, Java, or a half dozen other languages went through when they were at the apex of their popularity. Maybe bigger, just because so many more developers are around, but not actually worse.
Second, the sheer size of the ecosystem–the same thing that is attracting so much of a “donkey circus”–also means that, to a greater extent than in any other language I’ve used, there is always a good, high-quality library to do whatever I want to do. There will be 67 awful ones and six that are more popular than they should be without working quite right, but there will be at least one that does exactly what I need, and does it well. So I kind of view the pile of nutsitude as the payment to get into that ecosystem.
without learning from one another,
I think this is very important to highlight. Every couple of years I see a new library or two rise in popularity that repeat the same mistakes other libraries made a few years prior. They almost always have a claim along the lines of “____ done right!” Within a year or two, if they remain popular, they end up having to overhaul the framework to get rid of those flaws.
I.e. there’s a lot of building blindly
A lot of this is due to developers flipping their shit if a library doesn’t meet their aesthetics exactly, or is just a little harder to use or has just a little boilerplate or whatever. Weeks of writing code can save minutes of typing.
I think there are two important points being missed here and in almost all discussions I see around this. First, JS is many, many people’s first language, and so half the libraries (especially UI) are just magic and programming is impossibly difficult; and then suddenly you get it, and you can actually start doing something half decent, and so of course you want to tell the world! So I think we owe these developers a bit more compassion than we currently give them. That’s not to say that it’s great to have 50 new UI libraries every week; but I can certainly empathise with whoever wrote one if it’s their first big thing. Good for them!
Second, updating a user interface in the browser based on partial and updating data is a surprisingly difficult problem to solve, and so a new ui library with a slightly new take isn’t always meant as something anyone should use in production; but rather an experiment, or input to the global discussion on how this should be done best.
Of course not everything falls into these two buckets, but a certain percentage does. At least let’s be kind to the people in the first bucket :-)
I think it’s also important to note that the folks behind the language did it no favors in dragging their heels on adding the standard-library stuff that would’ve prevented some of the crazy. They had their reasons, but the error is manifest.
Yes and no. The fact is that the turnaround is much faster; major libraries and frameworks move at a quick pace. Also, the tooling is getting a lot of improvements. So you have to stay on top of it or fall behind.
But for any serious work, you pick stable frameworks and plan ahead.
What the post is complaining about is a project in Vue 2 vs Vue 3. Almost like the Python situation. If someone told you “I wrote this shit in Java 8, and it doesn’t work on 11”, people would just say “well, yeah”.
But as said, we have a faster turnaround on the front end, plus we upgrade not just the framework but also build systems, libraries, and tools. So while you had some changes between Java 8 and 11 and, say, what some library did in one or the other, at least IntelliJ or Eclipse or whatever it was did not break as well.
So, yes, it’s a fast-paced action game, not an old-school turn-based strategy. But people who learn to play it can get pretty good at it and almost never even feel the RSI in their hands from too much clicking.
If someone told you “I wrote this shit in Java 8, and it doesn’t work on 11”, people would just say “well yeah”.
No, that’s basically the opposite of how Java works.
Maybe so, but that’s not my experience. Granted, I don’t work with anything Java these days, but I have, and I remember it being so silly that you’d need an exact minor point version of some dependencies on an exact minor JRE or else things would not work. So you’re in the same situation as the OP: you can’t upgrade to a new version of X, because Y does not have a working new version.
It’s probably not always so, but it does happen. Maybe not as complicated as in the article, but enough to prevent you from upgrading.
There is a firm-ish boundary if/when a project switches from classpath to modules, and a couple of things from AWT and Applets that no longer work, but other than that I’ve never heard of the JVM breaking anything, let alone requiring a specific point-release.
Yes, but we’re not talking about the JVM or v8 here. We’re talking about libraries and frameworks and other deps.
Take an app written with Spring 2 or something. It all works, right? But now use Spring Boot instead. The majority of your code no longer works, right? That’s the problem the article is describing, and that’s what I meant when I said dep management is bad in JS; it’s not like it’s all peaches in other languages.
I have been on the lookout for an indentation-based language to replace Python for some time now as an introductory language to teach students. Python has too many warts (bad scoping, a bad implementation of default parameters, a not-well-thought-out distinction between statements and expressions, comprehensions being a language within the language that makes students’ lives difficult, and so on). Is Nim the best at this point in this space? Am I missing warts in Nim that make the grass greener on the other side? Anyone who has experience with both Nim and Python, can you tell me what the trade-offs are?
I am uncomfortable with statements like (from this article) “if you know Python, you’re 90% of the way to knowing Nim.” The two languages are not IMO as similar as that. It’s sort of like saying “if you know Java, you’re 90% of the way to knowing C++.” Yes, there is a surface level syntactic similarity, but it’s not nearly as deep as with Crystal and Ruby. Nim is strongly+statically typed, doesn’t have list comprehensions, doesn’t capitalize True, passes by value not reference, has very different OOP, etc.
That said, there’s definitely evidence that Nim has a smooth learning curve for Pythonistas! This isn’t the first article like this I’ve read. Just don’t assume that whatever works in Python will work in Nim — you don’t want to be like one of those American tourists who’s sure the locals will understand him if he just talks louder and slower :)
So yes, Nim is excellent. It’s quite easy to learn, for a high performance compiles-to-machine-code language; definitely easier than C, C++ or Rust. (Comparable to Go, but for various reasons I prefer Nim.) When programming in it I frequently forget I’m not using a scripting language!
passes by value not reference
The terminology here is very muddied by C, so forgive me if this sounds obvious, but do you mean that if you pass a data structure from one function to another in Nim, it will create a copy of that data structure instead of just passing the original? That seems like a really odd default for a modern language to have.
At the language level, it’s passing the value not a reference. Under the hood it’s passing a pointer, so this isn’t expensive, but Nim treats function arguments as immutable, so it’s still by-value semantically: if I pass an array or object to a function, it can’t modify it.
Obviously you don’t always want that. There is a sort-of-kludgey openarray type that exists as a parameter type for passing arrays by reference. For objects, you can declare a type as ref, which makes it a reference to an object; passing such a type is passing the object by reference. This is very common since ref is also how you get dynamic allocation (with GC or, more recently, ref-counting.) It’s just like the distinction in C between Foo and *Foo, only it’s a safe managed pointer.
This works well in practice (modulo some annoyance with openarray, which I probably noticed more than most because I was implementing some low-level functionality in a library) … but this is going to be all new, important info to a Python programmer. I’ve seen this cause frustration when someone approaches Nim as though it were AOT-compiled Python, and then starts either complaining or asking very confused questions on the Nim forum.
I recommend reading the tutorial/intro on the Nim site. It’s well written and by the end you’ll know most of the language. (Even the last part is optional unless you’re curious about fancy stuff like macros.)
(Disclaimer: fate has kept me away from Nim for about 6 months, so I may have made some dumb mistakes in my explanation.)
Gotcha; I see. I wonder if it’d be clearer if they just emphasized the immutability. Framing it in terms of “by value” opens up a big can of worms around inefficient copying. But if it’s just that the other function is prevented from modifying it, then the guarantee of immutability isn’t quite there. I guess none of the widely-understood terminology from other languages covers this particular situation, so some new terminology would be helpful.
Python has too many warts (bad scoping, bad implementation of default parameters
I don’t want to sound like a Python fanboy, but those reasons are very weak. Why do you need to explore the corner cases of scoping? Just stick to a couple of basic styles. Relying on many scoping rules is a bad idea anyway. Why do you need default parameters at all? Many languages have no support for default parameters and do fine. Just don’t use them if you think their implementation is bad.
Less is more. I sometimes flirt with the idea of building a minimal indentation-based language with just a handful of primitives, just as a proof of concept of the practicality of something very simple and minimal.
At least for python and me, it’s less a matter of exploring the corner cases in the scoping rules and more a matter of tripping over them involuntarily.
I only know three languages that don’t do lexical scoping at this point:
Emacs Lisp, which does dynamic scoping by default for backwards compatibility but offers lexical scoping as an option and strongly recommends lexical scoping for new code.
Bash, which does dynamic scoping but kind of doesn’t claim to be a real programming language. (This is wrong but you know what I mean.)
Python, which does neither dynamic nor lexical scoping, very much does claim to be a real programming language, and has advocates defending its weird scoping rules.
I mean, access to variables in the enclosing scope has copy on write semantics. Wtf, python?
(Three guesses who started learning python recently after writing a lexically scoped language for many years. Thank you for indulging me.)
It is weirder than copy-on-write. Not tested because I’m on my iPad, but given this:

x = 1
def f(cond):
    if cond:
        x
    x = 2

f(False) does nothing, but f(True) will throw an UnboundLocalError.
I think you need nonlocal x, but I don’t quite get why this is weird/nonlexical.
It has lexical scoping, but requires you to mark variables you intend to modify locally with ‘nonlocal’ or ‘global’ as a speed bump on the way to accidental aliasing. I don’t think I’d call Python “not lexically scoped”.
Yeah, if doesn’t introduce scope. “Nonlexical scope” doesn’t IMO mean “there exist lexical constructs that don’t introduce scope”; it’s more “there exist scopes that don’t match any lexical constructs”.
I just learned the idea of variable hoisting thanks to this conversation. So the bizarre behavior in carlmjohnson’s example can be understood as the later assignment declaring a new local variable that comes into scope at the start of the function, because Python does block scope instead of expression scope.
I guess I’ve been misusing “lexical scope” to mean expression-level lexical scope.
I still find the idea of block scope deeply unintuitive, but at least I can predict its behavior now. So, thanks!
Yeah I’m not a huge fan either tbh, but I guess I’ve never thought of it as weird cause JavaScript has similar behavior.
It’s multiple weird things. It’s weird that Python has† no explicit local variable declarations, and it’s weird that scoping is per function instead of per block, and it’s weird that assignments are hoisted to the top of a function.
† Had? Not sure how type declarations make this more complicated than when I learned it in Python 2.5. The thing with Python is it only gets more complicated. :-)
Different weird thing: nonlocal won’t work here, because nonlocal only applies to functions within functions, and top-level variables have to be referred to as global.
JavaScript didn’t have it either until the recent introduction of declaration keywords. It only had global and function (not block) scope. It’s much trickier.
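A minimal sketch of the difference those keywords make:

```typescript
function demo(): void {
  if (true) {
    var a = 1; // var: hoisted to the function; visible outside the block
    let b = 2; // let: scoped to this block only
  }
  console.log(a);    // 1
  // console.log(b); // error: b is not in scope here
}
```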
But I am puzzled by why/how people stumble upon scoping problems. It doesn’t ever happen to me. Why do people feel the urge to access a symbol in a block outside the one where it was created? If you just don’t do it, you will never have a problem, in any language.
For me it’s all about closures. I’m used to using first class functions and closures where I suspect an object and instance variables would be more pythonic.
But if you’re used to expression level lexical scope, then it feels very natural to write functions with free variables and expect them to close over the thing with the same name (gestures upward) over there.
I’m curious, do you use any languages with expression-level scope? You’re not the first Python person I’ve met who thinks Python’s scope rules make sense, and it confuses me as much as my confusion seems to confuse you.
I don’t need to remember complicated scoping rules because I don’t ever use a symbol in a block higher up the tree than the one it is defined in. Nor do I understand the need to re-assign variables, let alone re-use their names (talking about Python now). Which languages qualify as having expression-level scope? Is that the same as block scope? So… Java, modern JavaScript, C#, etc.?
I am confused. What problems does python pose when using closures? How is it different than other languages in that respect?
I use closures in Python code all the time; I just tend not to mutate the free variable. If you don’t mutate it, then you don’t need to reference the free variable as global or nonlocal. If I were mutating the state, then I might switch over to an object.
Nim is pretty strongly typed; that is certainly different from Python. I’m currently translating something with Python and TypeScript implementations, and I’m mostly reading the TypeScript because the typing makes it easier to understand. With Nim you might spend time working on typing that you wouldn’t for Python (or not, Nim is not object oriented), but it’s worth it for later readability.
Nim is less OO than Python, but more so than Go or Rust. To me the acid test is “can you inherit both methods and data”, and Nim passes.
Interestingly, you can choose to write in OO or functional style and get the same results, since foo(bar, 3, 4) is equivalent to foo.bar(3, 4).
IIRC, Nim even has multimethods, but I think they’re deprecated.
what?
don’t you mean foo(bar, 3, 4) and bar.foo(3, 4)?
AFAIK the last token before a parenthesis is always invoked as a function.
The latest release of Scala 3 is trying to be more appealing to Python developers with this: https://medium.com/scala-3/scala-3-new-but-optional-syntax-855b48a4ca76

So I guess you could make it an option.
Thanks! This certainly looks interesting. Would it make a good introductory language, though? By which I mean: I want to explain a small subset of the language to the pupil, and that restricted language should be sufficient to achieve fairly reasonable tasks. The student should then be able to pick up the advanced concepts in the language by self-exploration (and those implementations should be wart-free; for example, I do not want to have to explain again why one shouldn’t use an array as a default parameter value, as in Python).
There is no such thing as a programming language that is “wart free”, and while initially you want to present any language as not having difficulties or weirdness, in the long run you do need to introduce this to the student otherwise they will not be prepared for “warts” in other languages.
Depending on what you’re trying to teach, Elm does fit your description of an introductory language for teaching students that uses indentation. I know there’s a school that uses Elm for teaching kids how to make games, so it definitely has precedent for being used in education too. Though if you’re looking to teach things like file IO, HTTP servers, or other back-end-specific things, then it’s probably a poor choice.
Did anyone here use Exercism to learn a new language? Learned about it today and would be interested to read some journeys.
I used it to learn Legacy JavaScript and Ruby. I have used it to practice Rust and Clojure after learning the basic syntax from other sources.
I’ve also gotten the hang of TDD through it. Red, green, refactor.
I only used it briefly to learn, but I’ve been mentoring the Elm track for about a year now. It’s been really great seeing people go through the track and also in other parts of the Elm community. I also frequently see people coming over from Haskell, Python, and lots of other languages.
I did the Elixir path at the beginning of the pandemic last year. It was fun and the mentors were responsive. I would recommend it!
This year I’ve been doing the Haskell path and the mentors are insightful, but very slow to get through the backlog. It would be weeks between reviews. As a result, I haven’t made a ton of progress on the main track. (You can do unguided exercises as well, but there are a fixed number at each level.)
The difference here might be that Elixir advertises Exercism on their website, but Haskell does not. In both cases I think it’s worthwhile, and I really enjoy the site. Just be aware that it’s going to be dependent on some combination of the language community, the mentors, and the number of people who want to learn the language.
Do you have any languages that you are interested in?
The changes in v3 (requesting mentoring is opt-in rather than opt-out and there are learning exercises) will hopefully mean that students who want mentoring get it quicker.
Looks to be missing the criticisms about JSX, which was wildly unpopular when it was first announced.
Custom Elements Everywhere hasn’t been updated in 2+ years, so I’m not sure it’s an accurate source on web component support.