I really wish it were easier to try these bindings outside of specific editors. Maybe I’m in an unusual situation as a Kotlin developer working on a large Android project, but Android Studio has a perfectly good Vim emulator built in, and so does every other IDE. Porting Kakoune’s behavior would be more than just mapping a list of bindings, it looks like?
It wouldn’t be a huge loss to get used to kak as a terminal editor, but then I’d feel sad all day in Android Studio, and it’s already too easy for me to feel sad there.
I think that it is probably easier to emulate kakoune bindings than vim bindings in most modern editors due to kakoune’s minimalist nature. The object-verb approach is also much closer to how a non-modal editor works. I think that it is important that the host editor has multiple cursor support to build from. That makes emacs a more difficult target, for example. You can check out Dance for VS Code: https://github.com/71/dance
As far as I’m aware, vim-style editing also requires more than just mapping bindings, so I don’t see why adding a noun-first editing paradigm would take more work. Maybe plugins for this exist already in VS Code and the JetBrains IDEs?
As someone in the data processing and analytics field, a fan of array languages (including R), and a dataframe geek, I’m very much in agreement that this topic is worth more investment. Also, if you’re like me and want to understand the presenter’s background on the topic, this is from Graydon Hoare, the creator of Rust.
One day I hope they’ll open up to electrostatic capacitive rubber domes as an alternative to mechanical keyswitches. I’d pay a premium for an ergonomic option that actually thocks.
It seems that only a very small number of people have made custom electrocapacitive keyboards, at least as far as I could find. There are a lot of things that make it finicky:
The circuitry is analogue, rather than digital, and keypress detection measures changes of a few picofarads, so the PCB design needs more expertise than usual. On the up side, like hall effect switches, you can adjust the activation point in software or use them as analogue inputs for things like mouse keys.
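To make the adjustable-activation-point idea concrete, here is an illustrative sketch (not real firmware; the thresholds are made up) of the per-key logic that analogue sensing enables:

```python
# Illustrative only: per-key state machine with a configurable actuation
# point, the kind of thing capacitive (or hall effect) sensing enables.
ACTUATE_PF = 3.0  # capacitance change to register a press (made-up value)
RELEASE_PF = 2.0  # a lower release threshold gives hysteresis/debounce

def update_key(pressed: bool, delta_pf: float) -> bool:
    """Return the key's new pressed state given its analogue reading."""
    if not pressed and delta_pf >= ACTUATE_PF:
        return True
    if pressed and delta_pf <= RELEASE_PF:
        return False
    return pressed  # between the thresholds, the state holds
```

Lowering ACTUATE_PF in firmware makes every key lighter without touching hardware, and reading delta_pf directly gives you the analogue axis for things like mouse keys.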
The rubber domes and conical springs are sandwiched between the PCB and switch housings, and loose when the keyboard is disassembled. It is tricky to get them aligned when reassembling. It is easier if the domes are on a sheet matching the size and layout of the PCB, which they won’t be in a custom layout.
After reassembly the firmware needs to be recalibrated to accommodate any changes in alignment of the domes and springs.
The plate and PCB must be firmly fixed to each other, since there is no lower switch housing to take the bottom-out force: it hits the PCB through the dome and spring, and none of it is taken directly by the upper housing or plate.
There are not many sources of switch parts. I get the impression Topre parts are only available through unofficial channels. Niz will sell switch parts to hobbyists in keyboard-sized quantities.
Yup. Which is why I want to buy one—not build one. I built an ErgoDox way back, & I’d rather have a manufacturer do the hard work. Of course back then even Cherry switches weren’t the easiest thing to come by. But the folks that have tried Topre or NiZ’s switches broadly seem to pine for the feeling of these switches despite the limited supply. With the right nudges maybe the same thing that happened for mechanical switches could happen.
NiZ is cheap & still feels great, but firmware released as Windows executables in a Google Drive doesn’t inspire confidence. I’ve emailed them and they don’t seem too interested in the ergo and/or split keyboard crowd.
I have to admit that I was a bit underwhelmed by the features. There was nothing that I found myself thinking “I need this”. I think that emacs is already the best option for a cross platform text-based computing environment. It can do pretty much everything shown and a lot more.
I would be happy to see more projects that were emacs-like in spirit, but were more accessible and had some modern appeal.
I have never used Zig and yet even I know that a lot of people use it for its very convenient cross-compiling facilities.
I admire ambitious people, but at this point in software engineering history I believe it should be obvious to everyone that improving compilation times and quality is one of the hardest tasks ever, and it might easily take the creator and the team their entire careers to even make a dent… and actually succeeding and gaining wide adoption is not at all guaranteed.
I fear this will just doom the project to irrelevance. And it’s not like there are more than two people actually paid to work on Zig.
To me, this is all the more reason to hope that they go forward with it. The team has already proven that they have the expertise and perseverance to solve hard problems. This feels very aligned with their overall goals.
But seriously, why are timezones so difficult? Out of all the things that I would expect to “just work,” this one would be near the top of the list. Python is great, but it’s definitely not batteries-included, and I find myself still frequently navigating an ecosystem of broken, stale, or “incorrect” external libraries. I attribute this to the horrible state of online learning material for Python: probably because it is one of the most popular languages out there, there are so many misleading resources. It was a pleasure learning other languages, like Go or Perl, where I can turn to vetted, single sources of information for the “right way” to do things.
Thanks for posting this, literally just helped solve a ghost bug that we’ve been sporadically encountering for months now.
Alice wants to schedule a meeting with Bob. Alice proposes “1:30 Tuesday”.
Level 1: What actual point in time does that mean? What point in time is “1:30 Tuesday” in Alice’s time zone? What is it in Bob’s time zone? What is it in UTC (which is presumably what the meeting server will store)? How are you collecting the time zone information from both Alice and Bob in order to display the correct point in time to both of them? What about when Bob decides Carol (who is potentially in a third time zone) should also be on the meeting?
Level 2: What actual point in time does that mean? What if Alice is scheduling the meeting from the time zone she normally lives in, but will be traveling and in a different time zone on the date of the meeting?
Level 3: What actual point in time does that mean? What if Bob’s jurisdiction will switch to some sort of “summer” or “winter” time around that date? What if Carol’s jurisdiction is changing their timezone rules this year? Or: the meeting has already happened, but now an auditor is making sure Alice and Bob and Carol all stuck to their legal working hours. What if the time-zone rules for their jurisdictions have changed in the intervening period?
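The Level 1 bookkeeping (with a taste of Level 3) can be sketched with Python’s stdlib zoneinfo; the zones and date here are purely illustrative:

```python
from datetime import datetime
from zoneinfo import ZoneInfo

# "1:30 Tuesday" is only a wall-clock time; it names a point in time
# once it is attached to a zone (Alice's, by convention).
alice_tz = ZoneInfo("America/New_York")
bob_tz = ZoneInfo("Europe/London")

meeting = datetime(2023, 3, 14, 13, 30, tzinfo=alice_tz)

print(meeting.isoformat())                              # 2023-03-14T13:30:00-04:00
print(meeting.astimezone(ZoneInfo("UTC")).isoformat())  # 2023-03-14T17:30:00+00:00
print(meeting.astimezone(bob_tz).isoformat())           # 2023-03-14T17:30:00+00:00
```

Note that this date lands in the window where the US has already switched to DST but the UK has not, so the usual five-hour offset between these two zones is temporarily four: exactly the kind of Level 3 trap described above.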
And these are just the most basic top-of-mind things that can come up with time zones. The full reality is almost unbelievably complex, and it is that way largely because time zones are designed for messy humans who want their local time to vaguely match perceived solar time, rather than for computers which don’t care about that but do want consistent precise rules to follow.
And the difficulty and complexity are reflected in the fact that every programming language I’m aware of has warts and complexity and difficulty and “oh, don’t use that” around dates and times. It’s not that Python is somehow uniquely unable to get it right where everyone else did – it’s that lots of people have gotten it wrong, but Python’s popularity amplifies your awareness of the Python-specific incidence of wrongness.
It’s not that Python is somehow uniquely unable to get it right where everyone else did – it’s that lots of people have gotten it wrong
I have exactly once attempted to tackle time information in detail (not in Python) and this is completely the right take. The number of footguns surrounding our concept of time, irrespective of the language involved, is immense. I will never attempt this again; I merely make it clear that my code does X to attempt to handle time information, and everything else is an assumption waiting to fail. E.g. this note on Apple Cloud Notes Parser’s date feature:
Note: This feature is not intended to be robust. It does not smartly handle differences in timezones, nor convert to UTC.
I guess an alternative (and better?) approach is to create a separate virtual environment for each system package, or even better, have all packages ship their own virtual environment. From my understanding, many Linux distributions maintain a so-called “system Python”, e.g. /usr/bin/python3, and all system packages share that particular Python distribution, which is clearly suboptimal. For example, the package update-manager depends on python3-yaml==5.3.1, but what if another system package depends on python3-yaml==6.0.0? You get a version conflict, and PEP 668 doesn’t help with that.
Essentially, PEP 668 says that “system Python” should not be touched by the user, but I argue that such a globally mutable “system Python” shouldn’t even exist.
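A stdlib-only sketch of the idea (the paths and tool name are illustrative; pipx automates essentially this, with the actual install step left as a comment):

```python
import tempfile
import venv
from pathlib import Path

tool = "icdiff"  # hypothetical tool being packaged
root = Path(tempfile.mkdtemp())  # a real manager would use e.g. ~/.local/venvs
env_dir = root / tool

# One isolated interpreter per tool: no shared, mutable site-packages.
venv.EnvBuilder(with_pip=False).create(env_dir)

# A manager would now install the tool into its own env and expose only
# its entry point on PATH, so update-manager's python3-yaml==5.3.1 and
# another tool's python3-yaml==6.0.0 never collide:
#   env_dir/bin/pip install icdiff
#   ln -s env_dir/bin/icdiff ~/.local/bin/icdiff
print(env_dir)
```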
I think that the idea is that any scripts that rely on the system Python should not use external packages at all. And if they need to use an external package, it should be installed and managed through the system’s package manager and not pip.
Yeah, I proposed this idea because I find myself using pipx much more often than apt when installing Python-based packages like pipenv, icdiff, black, etc.
If you want an absolute gem of a programming language, check out Factor.
The development of it seems to have slowed down a bit when Slava Pestov stopped working on it, but it was in an amazing state at that point already. It’ll rewire your brain because of being a concatenative language and how you can (or have to) do things.
Yes, I was about to recommend Factor as the first choice for Forthlikes. I don’t know gForth, but if it’s like most trad Forths it’s a lot harder to get your head around.
For learning how to build a Forth, good resources are JonesForth (x86 assembly) or Quackery (Python). The former has exhaustively commented source code, the latter has a whole book describing the language and implementation. I recommend the exercise — I learned a LOT from reading the source code of FIG Forth for the 8080 as a teen.
Another great one is RetroForth. Despite its name, it is a thoroughly modern Forth. I feel like it combines some of the nice parts of Joy/Factor, while being a bit simpler. http://www.retroforth.org/
I’ve been using Polars for a project analyzing compensation survey data. I’m absolutely in love with the framework. It’s amazing. It’s expressive. It’s easily testable. Its DSL is entirely understandable at face value and doesn’t require Pythonisms to grok:
# pandas
data = df[df['acolumn'] > 4][['acolumn','bcolumn']]
# polars
data = (df
    .filter(pl.col('acolumn') > 4)
    .select([pl.col('acolumn'), pl.col('bcolumn')]))
It’s more verbose in this example but in my project, we’re finding that we’re creating more reusable components in Polars than in Pandas and our code ends up more brief overall. We’re building a product, not optimizing for code golf!
For another project I did, switching from Pandas to Polars shortened my pipeline from around 30 seconds per report average to less than a second.
I’m also really happy that there are some alternatives to pandas that are trying a different API. I don’t love all of the polars API design, but I do see it as a big improvement. You may already know this, but your polars example could be even more concise. You don’t have to use a list for the select method and you also don’t need to wrap the column names in pl.col unless you’re going to manipulate them in some way.
data = (
    df
    .filter(pl.col("acolumn") > 4)
    .select("acolumn", "bcolumn")
)
Ah, yes, definitely. In my newer project, I’ve got a class defined with all of our expected columns as pl.Expr from pl.col(). It goes through some mapping I have yet to refactor.
question_to_column = {
    "What is your name?": "adventurer_name",
    "What is your quest?": "adventurer_quest",
    "What is the airspeed velocity of an unladen swallow?": "velocity_swallow",
}

class Columns:
    adventurer_name = pl.col("adventurer_name")
    adventurer_quest = pl.col("adventurer_quest")
    velocity_swallow = pl.col("velocity_swallow")

surviving_adventurers = (
    adventurers
    .filter(Columns.velocity_swallow.is_in(["African", "European"]))
    .select(Columns.adventurer_name)
)
Eventually, we’re going to refactor to inline all of the mappings. Something kind of cool about the columns is being able to do Columns.adventurer_name.meta.output_name() to get the column’s name as a string for functions that require one, e.g. groupby() and the parts of Plotly that expect string column names, like the x, y, and color arguments.
I was always interested in Lua because it was nice and small, but I felt the language itself was quirky, with some footguns … Also interested in Clojure, but not the JVM.
In my experience Fennel fixes about 90% of the footguns of Lua. About the only ones left are “1 based array indexing” and “referring to a nonexistent variable/table value returns nil instead of being an error”, which are pretty hard to change without fundamentally changing the runtime.
There’s quite a bit of history to how Fennel came to be what it is today. It is correct that Calvin (creator of Janet) started it, but it would have just been an experiment in their GitHub if it weren’t for technomancy’s interest in reviving/expanding on it. I don’t know if it is written down anywhere, but Phil did a talk at FennelConf 2021 about the history of Fennel, which is the most detailed background for those interested. https://conf.fennel-lang.org/2021
I did a survey a while back about new lisps of the past 2 decades. IIRC the only one to evolve beyond a personal project and have multiple nontrivial contributors but not use Clojure-style brackets is LFE, but LFE was released only a few months after Clojure. It’s safe to say Clojure’s influence has been enormous.
However, Janet seems to take some characteristics of Clojure out of context, where they don’t make sense. For instance, Janet has if-let even though if-let only exists in Clojure because Rich hates pattern matching. Janet also uses Clojure’s style of docstring-before-arglist, even though Clojure’s reason for doing this (functions can have multiple arglists) does not apply in Janet as far as I can tell.
Although there’s also the curse of Lisp where the ecosystem becomes fragmented
The other main influence of Clojure is not syntactic at all but rather the idea that a language specifically designed to be hosted on another runtime can be an enormous strength that neatly sidesteps the fragmentation curse.
Ahh very interesting, what were the others? (out of idle curiosity)
I think I remember Carp uses square brackets too.
There’s also femtolisp, used to bootstrap Julia, but that actually may have existed before Clojure as a personal project. It’s more like a Scheme and uses only parens.
I agree the runtime is usually the thing I care about, and interop within a runtime is crucial.
Here’s the ones I found in my survey; I omitted languages which (at the time) had only single-digit contributors or double-digit commit counts, but all of these were released (but possibly not started) after Clojure:
LFE
Joxa
Wisp
Hy
Pixie
Lux
Ferret
Carp
Fennel
Urn
Janet
Maru
MAL
All of these except Urn and LFE were created by someone who I could find documented evidence of them using Clojure, and all of them except Urn and LFE use square brackets for arglists. LFE is still going as far as I can tell but Urn has been abandoned since I made the list.
I was working on this as a talk proposal in early 2020 before Covid hit and the conference was canceled. I’d like to still give it some day at a different conference: https://p.hagelb.org/new-lisps.html
Implicit quoting is when lisps like CL or Scheme treat certain data-structure literal notation as if it were quoted, despite there being no quote.
For example, in Racket you can have a vector #[(+ 2 3)], without implicit quoting this is a vector containing 5 but with implicit quoting it contains the list (+ 2 3) instead where + is a symbol, not a function. Hash tables also have this problem. It’s very frustrating. Newer lisps all avoid it as far as I know.
Not to take away from Clojure’s influence, just want to mention that Interlisp has square brackets, but with a different meaning. IIRC, a right square bracket in Interlisp closes all open round brackets.
Python has been trying to move toward a re-entrant VM for a long time, with subinterpreters, etc. – I think all the global vars are viewed as a mistake. Aside from just being cleaner, it makes the GIL baked in rather than an application policy, which limits scalability.
This kind of API looks suboptimal to me. It would be nice to take something like a lua_State.
The interpreter is thread local in janet, you can actually swap interpreters on the thread too so it doesn’t stop things like rust async from working if you add extra machinery.
The main reason that I use Lua is Sol3. Lua itself is just Smalltalk with weird syntax, but the integration with C++ that you get from Sol3 is fantastic.
I would also love to see more editors experiment with this approach. In addition to text-centric UIs, I also find grid based (VisiData, Excel, etc) to be very intuitive for certain tasks. Emacs does have some grid based interfaces, but having first class support along with the text and line based interactions would be nice.
There is no static type system, so you don’t need to “emulate the compiler” in your head to reason about compilation errors.
Similar to how dynamic languages don’t require you to “emulate the compiler” in your head, purely functional languages don’t require you to “emulate the state machine”.
This is not how I think about static types. They’re a mechanism for allowing me to think less by making a subset of programs impossible. Instead of needing to think about whether s can be “hello” or 7, I know I only have to worry about s being 7 or 8. The compiler error just meant I accidentally wrote a program where it is harder to think about the possible states of the program. The need to reason about the error means I already made a mistake in reasoning about my program, which is the important thing. Fewer errors before the program is run doesn’t mean the mistakes weren’t made.
I am not a zealot; I use dynamically typed languages. But it is for problems where the degree of dynamism inherent in the problem means introducing the ceremony of program-level runtime typing is extra work, not because reading the compiler errors is extra work.
This is very analogous to the benefits of functional languages you point out. By not having mutable globals the program is easier to think about, if s is 7 it is always 7.
Introducing constraints to the set of possible programs makes it easier to reason about our programs.
I appreciate the sentiment of your reply, and I do understand the value of static typing for certain problem domains.
Regarding this:
“making a subset of programs impossible”
How do you know what subset becomes impossible? My claim is you have to think like the compiler to do that. That’s the problem.
I agree there’s value in using types to add clarity through constraints. But there’s a cost for the programmer to do so. Many people find that cost low and it’s easy. Many others — significantly more people in my opinion — find the cost high and it’s confusing.
I really like your point about having to master several languages. I’m glad to be rid of a preprocessor, and languages like Zig and Nim are making headway on unifying compile-time and runtime programming. I disagree about the type system, though: it does add complexity, but it’s scalable and, I think, very important for larger codebases.
Ideally the “impossible subset” corresponds to what you already know is incorrect application behavior — that happens a lot of the time, for example declaring a “name” parameter as type “string” and “age” as “number”. Passing a number for the name is nonsense, and passing a string for the age probably means you haven’t parsed numeric input yet, which is a correctness and probably security problem.
It does get a lot more complicated than this, of course. Most of the time that seems to occur when building abstractions and utilities, like generic containers or algorithms, things that less experienced programmers don’t do often.
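A minimal sketch of that name/age example in Python’s gradual-typing dialect (mypy is the assumed checker; the function is made up):

```python
# With annotations, a checker such as mypy rejects the nonsense call
# before the program runs; without them, it fails only at runtime,
# or worse, "works" and produces garbage.
def greet(name: str, age: int) -> str:
    return f"{name} is {age}"

print(greet("Alice", 42))   # fine
# greet(42, "Alice")        # flagged statically: arguments swapped
```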
In my experience, dynamically-typed languages make it easier to write code, but harder to test, maintain and especially refactor it. I regularly make changes to C++ and Go code, and rely on the type system to either guide a refactoring tool, or at least to produce errors at all the places where I need to fix something.
How do you know what subset becomes impossible? My claim is you have to think like the compiler to do that. That’s the problem.
You’re right that you have to “think like the compiler” to be able to describe the impossible programs for it to check, but everybody writing a program has an idea of what they want it to do.
If I don’t have static types and I make the same mistake, I will have to reason about the equivalent runtime error at some point.
I suppose my objection is to framing it as “static typing makes it hard to understand the compiler errors.” It is “static typing makes programming harder” (with the debatably-worth-it benefit of making running the program easier). The understandability of the errors is secondary: if there is value, there’s still value even if the error were as shitty as “no.”
But there’s a cost for the programmer to do so. Many people find that cost low and it’s easy. Many others — significantly more people in my opinion — find the cost high and it’s confusing.
I think this is the same for “functionalness”. For example, often I find I’d rather set up a thread local or similar because it is easier to deal with than threading some context argument through everything.
I suppose there is a difference in the sense that functionalness is not as configurable a constraint. It’s more or less on or off.
I agree there’s value in using types to add clarity through constraints. But there’s a cost for the programmer to do so. Many people find that cost low and it’s easy. Many others — significantly more people in my opinion — find the cost high and it’s confusing.
I sometimes divide programmers into two categories: the first acknowledges that programming is a form of applied maths; the second went into programming to run away from maths.
It is very difficult for me to relate to the second category. There’s no escaping the fact that our computers ultimately run formal systems, and most of our job is to formalise unclear requirements into an absolutely precise specification (source code), which is then transformed by a formal system (the compiler) into a stream of instructions (object code) that will then be interpreted by some hardware (the CPU, GPU…) with more or less relevant limits & performance characteristics. (It’s obviously a little different if we instead use an interpreter or a JIT VM).
Dynamic type systems mostly allow scared-of-maths people to ignore the mathematical aspects of their programs for a bit longer, until of course they get some runtime error. Worse, they often mistake their should-have-been-a-type-error mistakes for logic errors, and then claim a type system would not have helped them. Because contrary to popular belief, type errors don’t always manifest as such at runtime. Especially when you take advantage of generics & sum types: they make it much easier to “define errors out of existence”, by making sure huge swaths of your data are correct by construction.
And the worst is, I suspect you’re right: it is quite likely most programmers are scared of maths. But I submit maths aren’t the problem. Being scared is. People need to learn.
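The “correct by construction” point can be sketched even in Python’s type-hint dialect; this Ok/Err pair is a hand-rolled stand-in for a real sum type, and the function is illustrative:

```python
from dataclasses import dataclass
from typing import Union

@dataclass
class Ok:
    value: int

@dataclass
class Err:
    message: str

ParseResult = Union[Ok, Err]  # a result is one or the other, never a mix

def parse_age(text: str) -> ParseResult:
    # Validation happens exactly once, at the boundary; downstream code
    # holding an Ok never needs to re-check the input.
    return Ok(int(text)) if text.isdigit() else Err(f"not a number: {text!r}")
```

With a checker, code that receives an Ok cannot forget the Err case without a warning, which is where the “should-have-been-a-type-error” class of bugs gets caught.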
My claim is you have to think like the compiler to do that.
My claim is that I can just run the compiler and see if it complains. This provides a much tighter feedback loop than having to actually run my code, even if I have a REPL. With a good static type system my compiler is disciplined so I don’t have to be.
Saying that people who like dynamic types are “scared of math” is incredibly condescending and also ignorant. I teach formal verification and am writing a book on formal logic in programming, but I also like dynamic types. Lots of pure mathematics research is done with Mathematica, Python, and Magma.
I’m also disappointed but unsurprised that so many people are arguing with a guy for not making the “right choices” in a language about exploring tradeoffs. The whole point is to explore!
Obviously people aren’t monoliths, and there will be exceptions (or significant minorities) in any classification.
Nevertheless, I have observed that:
Many programmers have explicitly taken programming to avoid doing maths.
Many programmers dispute that programming is applied maths, and some downvote comments saying otherwise.
The first set is almost perfectly included in the second.
As for dynamic typing, almost systematically, arguments in favour seem to be less rigorous than arguments against. Despite SICP. So while the set of dynamic-typing lovers is not nearly as strongly correlated with “maths are scary”, I do suspect a significant overlap.
While I do use Python for various reasons (available libraries, bignum arithmetic, and popularity among cryptographers (SAGE) being the main ones), dynamic typing has systematically hurt me more than it helped me, and I avoid it like the plague as soon as my programs reach non-trivial sizes.
I could just be ignorant, but despite having engaged in static/dynamic debates with articulate peers, I have yet to see any compelling argument in favour. I mean there’s the classic sound/complete dilemma, but non-crappy systems like F* or what we see in ML and Haskell very rarely stopped me from writing a program I really wanted to write. Sure, some useful programs can’t be typed. But for those, most static check systems have escape hatches, and many programs people think can’t be typed actually can. See Rich Hickey’s transducers for instance: throughout his talk he was dismissively daring static programmers to type them, only to have a Haskell programmer actually do it.
There are of course very good arguments favouring some dynamic language at the expense of some static language, but they never survive a narrowing down to static & dynamic typing in general. The dynamic language may have a better standard library, the static language may have a crappy type system with lots of CVE inducing holes… all ancillary details that have little to do with the core debate. I mean it should be obvious to anyone that Python, Mathematica, and Magma have many advantages that have little to do with their typing discipline.
Back to what I was originally trying to respond to, I don’t understand people who feel like static typing has a high cognitive cost. Something in the way their brain works (or their education) is either missing or alien. And I’m highly sceptical of claims that some people are just wired differently. It must be cultural or come from training.
And to be honest I have an increasingly hard time considering the dynamic and static positions equal. While I reckon dynamic type systems are easier to implement and more approachable, beyond that I have no idea how they help anyone write better programs faster, and I increasingly suspect they do not.
Even after trying to justify that you’ve had discussions with “articulate peers” and “could just be ignorant” and this is all your own observations, you immediately double back to declaring that people who prefer dynamic typing are cognitively or culturally defective. That makes it really, really hard to assume you’re having any of these arguments in good faith.
To be honest I only recall one such articulate peer. On Reddit. He was an exception, and you’re the second one that I recall. Most of the time I see poorer arguments strongly suggesting either general or specific ignorance (most of the time they use Java or C++ as the static champion). I’m fully aware of how unsettling and discriminatory the idea is that people who strongly prefer dynamic typing would somehow be less. But from where I stand it doesn’t look that false.
Except for the exceptions. I’m clearly missing something, though I have yet to be told what.
Thing is, I suspect there isn’t enough space in a programming forum to satisfactorily settle that debate. I would love to have strong empirical evidence, but I have reasons to believe this would be very hard: if you use real languages there will be too many confounding variables, and if you use a toy language you’ll naturally ignore many of the things both typing disciplines enable. For now I’d settle for a strong argument (or set thereof). If someone has a link that would be much appreciated.
And no, I don’t have a strong link in favour of static typing either. This is all deeply unsatisfactory.
I know of — oops I do not, I was confusing it with some other study… Thanks a ton for the link, I’ll take a look.
Edit: from the abstract there seems to be some evidence of the absence of a big effect, which would be just as huge as evidence of an effect one way or the other.
Edit 2: just realised this is a list of studies, not just a single study. Even better.
Well, it’s the subset of programs which decidably don’t have the desired type signature! Such programs provably aren’t going to implement the desired function.
Let me flip this all around. Suppose that you’re tasked with encoding some function as a subroutine in your code. How do you translate the function’s type to the subroutine’s parameters? Surely there’s an algorithm for it. Similarly, there are algorithms for implementing the various primitive pieces of functions, and the types of each primitive function are embeddable. So, why should we build subroutines out of anything besides well-typed fragments of code?
Sure, but I think you’re talking past the argument. It’s a tradeoff. Here is another good post that explains the problem and gives it a good name: biformity.
People in the programming language design community strive to make their languages more expressive, with a strong type system, mainly to increase ergonomics by avoiding code duplication in final software; however, the more expressive their languages become, the more abruptly duplication penetrates the language itself.
That’s the issue that explains why separate compile-time languages arise so often in languages like C++ (mentioned in the blog post), Rust (at least 3 different kinds of compile-time metaprogramming), OCaml (many incompatible versions of compile-time metaprogramming), Haskell, etc.
Those languages are not only harder for humans to understand, but tools as well
The Haskell metaprogramming system that jumps immediately to mind is Template Haskell, which makes a virtue of not introducing a distinct metaprogramming language: you use Haskell for that purpose as well as for the main program.
Yeah the linked post mentions Template Haskell and gives it some shine, but also points out other downsides and complexity with Haskell. Again, not saying that types aren’t worth it, just that it’s a tradeoff, and that they’re different when applied to different problem domains.
One common compelling reason is that dynamic languages like Python only require you to learn a single tool in order to use them well. […] Code that runs at compile/import time follows the same rules as code running at execution time. Instead of a separate templating system, the language supports meta-programming using the same constructs as normal execution. Module importing is built-in, so build systems aren’t necessary.
That’s exactly what Zig is doing with its “comptime” feature: using the same language, while keeping a statically typed and compiled approach.
I’m wondering where you feel dynamic functional languages like Clojure and Elixir fall short? I’m particularly optimistic about Elixir as of late since they’re putting a lot of effort in expanding to the data analytics and machine learning space (their NX projects), as well as interactive and literate computing (Livebook and Kino). They are also trying to understand how they could make a gradual type system work. Those all feel like traits that have made Python so successful and I feel like it is a good direction to evolve the Elixir language/ecosystem.
I think there are a lot of excellent ideas in both Clojure and Elixir!
With Clojure, the practical dependence on the JVM is one huge deal breaker for many people because of licensing concerns. BEAM is better in that regard, but shares the trait that VMs require a lot of runtime complexity, which makes them harder to debug and understand (compared to, say, the C ecosystem tools).
For the languages themselves, simple things like explicit returns are missing, which makes the languages feel difficult to wield, especially for beginners. So enumerating that type of friction would be one way to understand where the languages fall short. Try to recoup some of the language’s strangeness budget.
I’m guessing the syntax is a pretty regular Lisp, but with newlines and indents making many of the parentheses unnecessary?
Some things I wish Lisp syntax did better:
More syntactically first-class data types besides lists. Most obviously dictionaries, but classes kind of fit in there too. And lightweight structs (which get kind of modeled as dicts or tuples or objects or whatever in other languages).
If you have structs you need accessors. And maybe that uses the same mechanism as namespaces. Also a Lisp weak point.
Named and default arguments. The Lisp approaches feel like kludges. Smalltalk is kind of an ideal, but secretly just the weirdest naming convention ever. Though maybe it’s not so crazy to imagine Lisp syntax with function names blown out over the call like in Smalltalk.
Great suggestions thank you! The syntax is trying to avoid parentheses like that for sure. If you have more thoughts like this please send them my way!
This might be an IDE / LSP implementation detail, but would it be possible to color-code the indentation levels? Similar to how editors color code matching brackets these days. I always have a period of getting used to Python where the whitespace sensitivity disorients me for a while.
Most editors will show a very lightly shaded vertical line for each indentation level with Python. The same works well for this syntax too. I have seen colored indentation levels (such as https://archive.fosdem.org/2022/schedule/event/lispforeveryone/), but I think it won’t be needed because of the lack of parentheses. It’s the same reason I don’t think it’ll be necessary to use a structural editor like https://calva.io/paredit/
I think that’s a bit of an unfair take. They are talking about making it as easy as possible for newbies to bootstrap a python environment in Windows/Linux/macOS. If your answer to that is Nix, the bootstrapping would become a nightmare.
So how do I set up for this project?
Either you install this whole operating system where everything works very differently from what you’re used to, or you install this Nix CLI tool. Ah, you’re using Windows? Sorry, not unless you start by installing this WSL thing.
Now, imagine you want to add a new dependency. You may need to put it in pyproject and then use this glue called poetry2nix, which sometimes works and sometimes doesn’t. If it doesn’t work, maybe you can add it directly using python3XPackages.package, and if it’s not there, then good luck: you either open a patch to nixpkgs’ poetry2nix adding your package, or learn how to package a Python library and also contribute to nixpkgs. The other option is to create a virtualenv the old-fashioned way and then use Nix as a kind of pyenv.
I don’t think any of that sounds better than what is being proposed in the link.
I don’t mean to dismiss the author’s work, but to point out the continued insular choices of the Python core teams. Instead of installing Nix, the author asks us to install Cargo and then go through a standard Rust-project workflow; they are comparable in complexity and extent.
You may choose to continue using pyproject.toml, Poetry, virtualenv, pyenv, etc. but the direct path is to use nix-shell to configure an entire development environment in an atomic step. The list of packages can be contained to a single line in a single file; here is an example from a homelab application which I updated recently.
Contributing to nixpkgs is not trivial, but it is not difficult either. Here is a recent PR I authored for adding a Python package. It’s shorter than a setup.py, in my experience! Also, you don’t have to contribute new packages to nixpkgs; instead, you can add them to your local Nix expressions on a per-project basis.
Please also keep in mind that all of this discussion is within the context of Python packaging difficulties. Languages without extension modules don’t require all of this effort; all we need instead is to install a runtime directly from an upstream package-builder, whether that’s a distro, vendor, or third-party packager. We should imagine that a language is either designed to have lots of extensions and be an integrator of features, or designed for monolithic applications which reimplement every feature and are suitable for whole-program transformation. Python picked both, and so gets neither.
Hypothetically, just as a thought experiment and nothing else, I think maybe a case could be made that running Nix inside WSL and cross-compiling from that to Windows might be sorta acceptable. I don’t think that’s a realistic thing to propose: it’s a ton of work for starters, and the payoff would be pretty dubious since you’d have this really long painful edit/test cycle.
WSL is like Electron: it makes it easy for a developer to provide something to the user, but the thing provided is much worse than a corresponding native solution. I’d struggle to integrate a WSL app with my native Windows PowerShell scripts.
The longer answer is: python has supported Windows natively for over a decade (how well might be up for debate, but it was supported), it’s not reasonable for them to suddenly say “use Linux inside Windows or get fucked”, and it’s not reasonable to expect them to do so, either.
I don’t have any statistics, but I would bet that the vast majority of Windows users (corporate IT managed machines) can’t enable WSL. Python is actually very easy to install on Windows with the Microsoft Store. Requiring users to enable WSL and understand how to use linux would be a large obstacle.
Or just with the installer from python.org, which will install to AppData by default (I think? At least if you choose the install just for me option), so no admin permissions needed.
I don’t mean to dismiss the author’s work, but to point out the continued insular choices of the Python core teams.
No, you are doing what you always do: pushing your preferred tools as the only acceptable tools, such that all development on all other tools must cease and all people everywhere must adopt only and exclusively your preferred tools. And along the way you throw in the usual (un)healthy dose of bashing anyone who dares to develop other tools, since obviously it’s bad and wrong for them to do so when the One True Thing has already been invented and thus they must be doing that for bad reasons.
Sometimes you do this with PyPy versus CPython. Sometimes with functional programming/category theory versus other paradigms. Sometimes with Nix versus literally everything. But it’s always the same basic dismissive/attacking approach.
I did read your comment, and boy, there’s so much more I disagree with.
but the direct path is to use nix-shell
To whom? There’s a world of people for whom Nix is a non-starter. Everyone using Visual Studio. Or working on computers they don’t fully control (enterprise developers). Or people that like Bluetooth to work, so they can’t use Linux (this is half in jest, half serious). “There’s no silver bullet” applies to your favorite thing too.
all we need instead is to install a runtime directly from an upstream package-builder
What language is like that? Ruby has C extensions, JavaScript has them, Java has JNI. Even Go, which is famous for reinventing the wheel a lot, has cgo. In every single language that isn’t C you will, at some point, have problems trying to install a package that needs to compile something in another language.
The reason it happens so much more in python is actually kind of a feature, not a bug: python was designed to be easily extendable, specifically in C, although that feature was perhaps not as well designed as we would like, in hindsight.
We should imagine that a language is either designed to have lots of extensions and be an integrator of features, or designed for monolithic applications which reimplement every feature and are suitable for whole-program transformation.
Maybe in a perfect world but … I don’t think any languages really fit this binary, well, binarilly (?). At most some are more at one end than the other, but I’m honestly struggling to find utility in the whole classification really.
Everyone using Visual Studio. Or working on computers they don’t fully control (enterprise developers).
I don’t understand, why would Nix be a blocker in those contexts? If you don’t fully control the computer, wouldn’t you have trouble installing all the Rust thingies anyhow?
I don’t think that addresses my question. I genuinely don’t get why Nix would be a blocker to people using Visual Studio (VS Code(?)), are plugins sandboxed, or unable to interact with binaries/run commands in some other way?
You’re applying things I said about one thing to other things that I didn’t intend them to be applied to.
One of my disagreements is with the idea that Nix is some sort of ideal goal that every developer is converging on. This idea breaks down as soon as you realize that people writing C# in Visual Studio (not VS Code) will never adopt something like Nix, unless it’s fully integrated with Windows, like every single other tool they use.
The other disagreement is with the idea that the way the linked project can currently be used is its final interface: it clearly isn’t; they clearly say it will be a single binary in the future.
Only the second one has anything to do with Python tools. The first one is just a criticism of the idea of nix as the best thing ever that everyone should use and can do no wrong.
The Vision
The goal is for posy to act as a kind of high-level frontend to python: you install posy, then run posy [args] some_python.py and it takes care of everything up until entering the python interpreter. That includes:
installing Python (posy is a pure-rust single-file binary; it doesn’t assume you have anything else installed) (…)
I for one would like to strongly encourage anyone who would like to make an attempt at “reinventing Nix”, since a thing that is like Nix but avoids some of its pain points could potentially be delightful.
What great news! Web browsers are critically important and are now becoming like mini-OSes; having modular components that can be composed and specialised will ensure that we have a viable alternative to the current consolidation around WebKit/Blink.
Web browsers are critically important and are now becoming like mini-OSes
IMO the entire point of web browsers has always been that they were mini-OSes, competing with apt. They dominated Windows, because 1) the vast majority of use-cases didn’t actually need special permissions from the OS, and 2) the alternative was “go to this [website/FTP server/etc] and download a .exe, run the .exe, wait a minute or two, launch the app” instead of just “go to this website”.
This is kind of a funny take to me, because I have viewed it the opposite way. Web browsers becoming the application platform of choice for many users is what has made the use of alternative OSes (not Windows) more viable in the modern age. Of course, the move to pocket computers (phones) is really what has made the most impact.
The reason web browsers made alternative OSes more viable is because they’re an open standard VM that’s easily portable to the alternative OS.
Pocket computers displacing Windows demonstrates that trivial installation procedures really are what make or break an OS - the procedure for installing an app is to 1) open up the app store (or click a link), then 2) hit “install”. Just like the web browser, it’s trivial. IMO this is why mobile destroyed desktops for most use-cases.
Ever wanted to write x max= y while searching for some maximum value in some complicated loop? You can do that here. You can do it with literally any function.
Oh my gosh I love this.
We don’t have that problem because we don’t distinguish sets and dictionaries.
Ever wanted to write x max= y while searching for some maximum value in some complicated loop? You can do that here. You can do it with literally any function
I do not understand what x max= y is supposed to do. Care to explain, maybe with an example?
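A guess at what is meant, spelled out in Python terms: presumably `x max= y` is a generalized augmented assignment, desugaring to `x = max(x, y)` the same way `x += y` desugars to `x = x + y`. In Python you write the accumulation out by hand:

```python
# Presumably "x max= y" desugars to "x = max(x, y)", just as
# "x += y" desugars to "x = x + y" -- but generalized to any function.
def running_max(values):
    x = float("-inf")
    for y in values:
        x = max(x, y)  # the step that "x max= y" would abbreviate
    return x
```

The appeal is that inside a complicated loop the one-liner keeps the accumulation pattern visible without a helper function.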
This may have been true in the past, but I don’t think that python does this currently. Python’s dict implementation now guarantees order, but the set definitely does not.
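The current behavior is easy to check: CPython guarantees dict insertion order (a language guarantee since 3.7), while set iteration order remains an implementation detail.

```python
# dicts preserve insertion order -- a language guarantee since Python 3.7 --
# while set iteration order is an implementation detail you must not rely on.
d = {"banana": 2, "apple": 1, "cherry": 3}
insertion_order = list(d)  # keys come back in insertion order, not sorted

s = {"banana", "apple", "cherry"}
# For a set, only membership is meaningful; iter(s) may differ between
# values, builds, and interpreter versions.
has_apple = "apple" in s
```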
The stopwatch app from the Textual tutorial starts running for me in less than a quarter of a second - it’s fast enough that I didn’t even notice the delay until I tried to eyeball-measure it just now.
The whole point of TUI apps is that you’re going to spend some time in them. Does a quarter of a second to show the initial screen really matter to anyone? That’s way faster than loading a web application in a browser tab.
Thinking about it, I don’t think I’ve ever used a TUI written in any language where the startup speed has bothered me.
To whoever is starting to write a reply - no, it cannot be optimized.
Python will always take a bit to start the program.
It depends.

Maybe startup time doesn’t really matter for someone’s particular use case. While there will always be some baseline startup time from Python, there are cases where you can optimize it and possibly bring it down to a level you find acceptable.

At a job, I was tasked with figuring out and speeding up the slow start of a Python program. Nobody knew why the bloody thing was taking so long to start. Part of it was network delays, of course, but part was Python. I did some profiling.

This little Python program was importing a library, and that library imported something called pkg_resources. Turns out that pkg_resources does a bunch of work at import time (nooo!). After some digging, I found that pkg_resources was actually an optional dependency of the library we were using. It did a try … import … except: …, and could work without this dependency. After digging into the code (both ours and the library’s), I found that we didn’t need the facilities of pkg_resources at all.

We didn’t want to uninstall it. Distro packages depended on it, and it was possible that there were other programs on the system that might use it. So I hacked up a module importer for our program that raised ModuleNotFoundError whenever something tried to import pkg_resources.

I cut a nearly one-second start time down to an acceptable 300 milliseconds or so, and IIRC a fair portion of the 300 milliseconds was from SSH.

Know your dependencies (direct and indirect). Know what you’re calling (directly and indirectly) and when you’re calling it. Profile. And if your Python startup times are slow, look for import-time shenanigans.
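The import-blocking hack described above can be reconstructed roughly like this (a sketch, not the author’s actual code):

```python
import sys

# A meta-path finder that pretends certain modules don't exist. A library
# doing "try: import pkg_resources / except ImportError: ..." then takes
# its cheap fallback path instead of paying pkg_resources' import-time cost.
class ImportBlocker:
    def __init__(self, *names):
        self.blocked = set(names)

    def find_spec(self, name, path=None, target=None):
        # Raising here aborts the import; returning None would instead
        # let the remaining finders on sys.meta_path have a go.
        if name.partition(".")[0] in self.blocked:
            raise ModuleNotFoundError(f"{name!r} blocked for startup speed")
        return None

# Install ahead of the standard finders, before anything else is imported.
sys.meta_path.insert(0, ImportBlocker("pkg_resources"))
```

Any later `import pkg_resources` raises ModuleNotFoundError, while unrelated imports go through the normal finders untouched.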
Program startup speed is important for some applications but negligible compared to other aspects like usability, accessibility or ease of development, wouldn’t you agree?
Program startup speed and performance is an important part of usability. It’s bad for software users when the software they use is full of latency, or uses so many system resources it bogs down their entire computer.
Agreed, it’s part of usability. But it depends on the numbers. Saying “stop writing TUIs in Python” because of 200ms (out of which something can be shaved off with optimization) sounds extreme.
I completely agree about the unsuitability of Python for TUI / CLI projects! (Especially if these tools are short-lived in their execution.)
Long ago I wrote (and still use today) a simple console editor of ~3K lines of code (25 Python modules) that imports only 12 core (built-in) Python modules, without any other dependencies, and mainly uses curses.
On any laptop I’ve tried it on (even 10 years old) it starts fast enough. However, recently I bought an Android device and tried it under Termux. It’s slow as hell, taking more than a second to start… (Afterwards it’s OK-ish to use.)
What’s the issue? The Python VM is slow to bootstrap and load the code (in my case it’s already in .pyo format, all in a zip built with zipapp). For example, just calling (on my Lenovo T450) python2 -c True takes ~10ms, while python3.10 -c True takes ~14ms (python3.6 used to take ~20ms). Just adding import json adds another ~10ms, while import curses, argparse, subprocess, json (perhaps the minimum any current-day project requires) yields a ~40ms startup.
With this in mind, the startup latency starts to pile up, and it has no solution in sight (except rewriting it in a compiled language).
Granted, other languages have their issues too. Go, for example, is very eager to initialize any modules you have referenced, even if they will never be used, which easily adds to startup latency.
(I’ll not even touch on the deployment model, where zipapp is almost unused for deployment and https://github.com/indygreg/PyOxidizer is the only project out there trying to really make a difference…)
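The kind of timings quoted above can be reproduced with a small script (numbers will vary by machine, and the module list is just an example):

```python
import subprocess
import sys
import time

def startup_time(code="pass", runs=3):
    """Median wall-clock seconds to launch a fresh interpreter running `code`."""
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        subprocess.run([sys.executable, "-c", code], check=True)
        samples.append(time.perf_counter() - start)
    return sorted(samples)[len(samples) // 2]

bare = startup_time()                                  # interpreter boot only
with_imports = startup_time("import json, argparse, subprocess")
extra = with_imports - bare                            # rough cost of the imports
```

For a per-module breakdown, CPython’s `python -X importtime -c "import json"` flag prints the import-time tree directly.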
Why would I? It only has buttons and checkboxes implemented. And according to comments here, it still takes a quarter of a second to start on a modern CPU.
EDIT: In the demo video, the demo takes 34 frames to boot. At 60fps, that’s more than half a second.
The popularity of a chat app - particularly one that most people use because it’s what their workplace standardizes on - is driven much more by network effects than by the quality of the standard client app. It is bad that the Slack client is slow, and this is made worse by the fact that there aren’t a whole lot of alternative clients for the Slack network that a person who is required to use Slack as part of their job can use instead of the official, slow, one.
I think that the problem with your assessment is the assumption that the users of this framework have the knowledge to use a different language or want to use a different language than python. Nobody is forcing you to use it and if folks are releasing tools using it, nobody is forcing you to use those. For those that want to add a TUI front end to a script they made, this seems like a good option.
I am new to Emacs, but does tree-sitter work as an alternative to eglot? I am using tree-sitter and it seems to be working fine for me. Will using eglot + tree-sitter give me any extra bells, or are the two supposed to be mutually exclusive?
They are generally meant to complement each other. Tree-sitter helps with syntax highlighting (font-locking) and structural navigation/editing. LSP generally knows more about your code, but it is too heavy/slow to be used for the tasks that tree-sitter is good at. It will offer autocompletion of functions, methods, parameters, and arguments, as well as some linting capabilities. It will also allow renaming of symbols and other more complex refactoring.
From what I could tell after briefly trying both, lsp-mode tries to give you a very full-featured IDE or VS Code-like experience. It’s got a lot of power and gives a fairly slick interface to it all with a good bit of bling. It seems to have a lot of code and a lot of dependencies.
Eglot on the other hand tries to be lighter weight. It’s a single .el file of about 3500 lines and it mostly just depends on a handful of built-in libraries. Its philosophy seems to be more about blending in with the traditional Emacs experience, setting some stuff up and then mostly staying out of the way until invoked.
As a long-time Emacs user, I found Eglot much more to my taste and went with it over lsp-mode, but there’s definitely room in the ecosystem for both. I don’t use golang, so I can’t speak to that. It does come with an entry for gopls for go-mode in its list of known servers.
I’m hoping that we’ll see some packages bring some of the lsp-mode UI to eglot for those that prefer that style. I’ll be sticking with the vanilla eglot myself.
I really wish it was easier to try these bindings outside of specific editors. Maybe I’m in a specific situation as a Kotlin developer working on a large Android project, but Android Studio has a perfectly good vim emulator built in, and so does every other IDE. Porting Kakoune’s behavior would be more than just mapping a list of bindings, it looks like?
It wouldn’t be a huge loss to get used to kak as a terminal editor, but then I’d feel sad all day in Android Studio, and it’s already too easy for me to feel sad there.
I think that it is probably easier to emulate kakoune bindings than vim bindings in most modern editors due to kakoune’s minimalist nature. The object-verb approach is also much closer to how a non-modal editor works. I think that it is important that the host editor has multiple cursor support to build from. That makes emacs a more difficult target, for example. You can check out dance for VS Code: https://github.com/71/dance
Kakoune editing in emacs, which incidentally uses multiple-cursors
As far as I’m aware, vim-style editing also requires more than just mapping bindings, so I don’t see why adding a noun-first editing paradigm would take more work. Maybe plugins for this already exist in VS Code and the JetBrains IDEs?
As someone in the data processing and analytics field, a fan of array languages (including R), and a dataframe geek, I’m very much in agreement that this topic is worth more investment. Also, if you’re like me and want to understand the presenter’s background on the topic: this is from Graydon Hoare, the creator of Rust.
One day I hope they’ll open up to electrostatic capacitive rubber domes as an alternative to mechanical keyswitches. I’d pay a premium for an ergonomic option that actually thocks.
It seems that only a very small number of people have made custom electrocapacitive keyboards, at least as far as I could find. There are a lot of things that make it finicky:
The circuitry is analogue, rather than digital, and keypress detection measures changes of a few picofarads, so the PCB design needs more expertise than usual. On the up side, like hall effect switches, you can adjust the activation point in software or use them as analogue inputs for things like mouse keys.
The rubber domes and conical springs are sandwiched between the PCB and switch housings, and loose when the keyboard is disassembled. It is tricky to get them aligned when reassembling. It is easier if the domes are on a sheet matching the size and layout of the PCB, which they won’t be in a custom layout.
After reassembly the firmware needs to be recalibrated to accommodate any changes in alignment of the domes and springs.
The plate and PCB must be firmly fixed to each other, since there is no lower switch housing to take the bottom-out force: it hits the PCB through the dome and spring, and none of it is taken directly by the upper housing or plate.
There are not many sources of switch parts. I get the impression Topre parts are only available through unofficial channels. Niz will sell switch parts to hobbyists in keyboard-sized quantities.
Some links:
https://github.com/tmk/tmk_keyboard/wiki/Capacitive-Sense
https://github.com/tomsmalley/custom-topre-guide
https://www.nizkeyboard.com/products/2019-new-niz-ec-switch
https://hhkb.io/modding/Rubber_Domes/
https://www.reddit.com/r/MechanicalKeyboards/wiki/modifications_topre/
The tako is a recent one to take a look at https://github.com/ssbb/tako.
Yup. Which is why I want to buy one, not build one. I built an ErgoDox way back, & I’d rather have a manufacturer do the hard work. Of course, back then even Cherry switches weren’t the easiest thing to come by. But the folks that have tried Topre or NiZ’s switches broadly seem to pine for the feeling of these switches despite the limited supply. With the right nudges, maybe the same thing that happened for mechanical switches could happen here.
NiZ is cheap & still feels great, but the firmware being released as Windows executables in a Google Drive folder doesn’t inspire confidence. I’ve emailed them and they don’t seem too interested in the ergo and/or split keyboard crowd.
I have to admit that I was a bit underwhelmed by the features. There was nothing that I found myself thinking “I need this”. I think that emacs is already the best option for a cross platform text-based computing environment. It can do pretty much everything shown and a lot more.
I would be happy to see more projects that were emacs-like in spirit, but were more accessible and had some modern appeal.
I have never used Zig and yet even I know that a lot of people use it for its very convenient cross-compiling facilities.
I admire ambitious people, but at this point in software engineering history I believe it should be obvious to everyone that improving compilation times and quality is one of the hardest tasks ever, and it might easily take the creator and the team their entire careers to even make a dent… and actually succeeding and gaining wide adoption is not at all guaranteed.
I fear this will just doom the project to irrelevance. And it’s not like there are more than two people actually paid to work on Zig.
To me, this is all the more reason to hope that they go forward with it. The team has already proven that they have the expertise and perseverance to solve hard problems. This feels very aligned with their overall goals.
starts sweating nervously
But seriously, why are timezones so difficult? Out of all the things that I would expect to “just work,” this one would be near the top of the list. Python is great, but it’s definitely not batteries-included and I find myself still frequently navigating an ecosystem of broken, stale or “incorrect” external libraries. I attribute this to the horrible state that Python online learning is in, probably because it is one of the most popular languages out there, there are so many misleading resources. It was a pleasure learning other languages, like Go or Perl, where I can turn to vetted and single sources of information for the “right way” to do things.
Thanks for posting this, literally just helped solve a ghost bug that we’ve been sporadically encountering for months now.
This article was written in 2018. As of Python 3.9, there is now the “zoneinfo” standard library which I believe should be considered best practice when working with time zones. https://docs.python.org/3/library/zoneinfo.html#module-zoneinfo
Alice wants to schedule a meeting with Bob. Alice proposes “1:30 Tuesday”.
Level 1: What actual point in time does that mean? What point in time is “1:30 Tuesday” in Alice’s time zone? What is it in Bob’s time zone? What is it in UTC (which is presumably what the meeting server will store)? How are you collecting the time zone information from both Alice and Bob in order to display the correct point in time to both of them? What about when Bob decides Carol (who is potentially in a third time zone) should also be on the meeting?
Level 2: What actual point in time does that mean? What if Alice is scheduling the meeting from the time zone she normally lives in, but will be traveling and in a different time zone on the date of the meeting?
Level 3: What actual point in time does that mean? What if Bob’s jurisdiction will switch to some sort of “summer” or “winter” time around that date? What if Carol’s jurisdiction is changing their timezone rules this year? Or: the meeting has already happened, but now an auditor is making sure Alice and Bob and Carol all stuck to their legal working hours. What if the time-zone rules for their jurisdictions have changed in the intervening period?
And these are just the most basic top-of-mind things that can come up with time zones. The full reality is almost unbelievably complex, and it is that way largely because time zones are designed for messy humans who want their local time to vaguely match perceived solar time, rather than for computers which don’t care about that but do want consistent precise rules to follow.
And the difficulty and complexity are reflected in the fact that every programming language I’m aware of has warts and complexity and difficulty and “oh, don’t use that” around dates and times. It’s not that Python is somehow uniquely unable to get it right where everyone else did – it’s that lots of people have gotten it wrong, but Python’s popularity amplifies your awareness of the Python-specific incidence of wrongness.
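The Level 1 ambiguity above is easy to demonstrate with the standard zoneinfo module (the date and zones here are just an illustration):

```python
from datetime import datetime
from zoneinfo import ZoneInfo

# "1:30 Tuesday" is not a point in time until a zone is attached.
wall = datetime(2023, 3, 14, 13, 30)  # naive: just a wall-clock reading

alice = wall.replace(tzinfo=ZoneInfo("America/New_York"))  # Alice's 1:30
bob = wall.replace(tzinfo=ZoneInfo("Europe/London"))       # Bob's 1:30

# Converting Alice's proposal to Bob's clock answers "when is the meeting
# for Bob?". On this date New York is already on DST but London is not yet,
# which is exactly the Level 3 trap.
bobs_view = alice.astimezone(ZoneInfo("Europe/London"))
```

The same wall-clock reading names two different instants (`alice != bob`), and Alice’s 13:30 lands at 17:30 on Bob’s clock.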
I have exactly once attempted to tackle time information in detail (not in Python) and this is completely the right take. The number of footguns surrounding our concept of time, irrespective of the language involved, is immense. I will never attempt this again; I’ll merely make it clear that my code does X to attempt to handle time information, and everything else is an assumption waiting to fail. See, e.g., this note on Apple Cloud Notes Parser’s date feature:
I guess an alternative (and better?) approach is to create a separate virtual environment for each system package, or even better, have all packages ship their own virtual environment. From my understanding, many Linux distributions maintain a so-called “system Python”, e.g. /usr/bin/python3, and all system packages share that particular Python distribution, which is clearly suboptimal. For example, the package update-manager depends on python3-yaml==5.3.1, but what if another system package depends on python3-yaml==6.0.0? You get a version conflict, and PEP 668 doesn’t help with that.

Essentially, PEP 668 says that the “system Python” should not be touched by the user, but I argue that such a globally mutable “system Python” shouldn’t even exist.
I think that the idea is that any scripts that rely on the system Python should not use external packages at all. And if they need to use an external package, it should be installed and managed through the system’s package manager and not pip.
pipx - install and run Python applications in isolated environments
Yeah, I proposed this idea because I find myself using pipx much more often than apt when installing Python-based packages like pipenv, icdiff, black, etc.

Good article.
If you want an absolute gem of a programming language, check out Factor. The development of it seems to have slowed down a bit when Slava Pestov stopped working on it, but it was in an amazing state at that point already. It’ll rewire your brain because of being a concatenative language and how you can (or have to) do things.
Highly recommended.
Yes, I was about to recommend Factor as the first choice for Forthlikes. I don’t know gForth, but if it’s like most trad Forths it’s a lot harder to get your head around.
For learning how to build a Forth, good resources are JonesForth (x86 assembly) or Quackery (Python.) The former has exhaustively commented source code, the latter has a whole book describing the language and implementation. I recommend the exercise — I learned a LOT from reading the source code of FIG Forth for the 8080 as a teen.
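In the spirit of those resources, the core of a Forth is small enough to sketch in a few lines of Python (a toy evaluator, not JonesForth or Quackery; real Forths add a return stack, a compiler, and an input stream):

```python
# A toy Forth-like evaluator: a data stack plus a dictionary of words.
def forth(source):
    """Evaluate a whitespace-separated Forth-ish program; return the stack."""
    stack = []

    def swap():
        stack[-2], stack[-1] = stack[-1], stack[-2]

    words = {
        "+": lambda: stack.append(stack.pop() + stack.pop()),
        "*": lambda: stack.append(stack.pop() * stack.pop()),
        "dup": lambda: stack.append(stack[-1]),
        "swap": swap,
    }
    for token in source.split():
        if token in words:
            words[token]()          # execute a known word
        else:
            stack.append(int(token))  # anything unknown is a number literal
    return stack
```

Even this toy shows the essential Forth shape: tokens are either words looked up in a dictionary or literals pushed on the stack.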
Another great one is retroforth. Despite its name, it is a thoroughly modern forth. I feel like it combines some of the nice parts of Joy/Factor, while being a bit more simple. http://www.retroforth.org/
As for K’s, I think that ngn/k is the best one to try. https://codeberg.org/ngn/k
BQN would also be a great choice for an APL descendant language. https://mlochbaum.github.io/BQN/
This is great. A modern and successful language is so much more than it used to be. Examples of how to create this additional tooling are very helpful.
I’ve been using Polars for a project analyzing compensation survey data. I’m absolutely in love with the framework. It’s amazing. It’s expressive. It’s easily testable. Its DSL is entirely understandable at face review and doesn’t require Pythonisms to grok:
It’s more verbose in this example but in my project, we’re finding that we’re creating more reusable components in Polars than in Pandas and our code ends up more brief overall. We’re building a product, not optimizing for code golf!
For another project I did, switching from Pandas to Polars shortened my pipeline from around 30 seconds per report average to less than a second.
I’m also really happy that there are some alternatives to pandas that are trying a different API. I don’t love all of the polars API design, but I do see it as a big improvement. You may already know this, but your polars example could be even more concise. You don’t have to use a list for the `select` method, and you also don’t need to wrap the column names in `pl.col` unless you’re going to manipulate them in some way.

Ah, yes, definitely. In my newer project, I’ve got a class defined with all of our expected columns as `pl.Expr` from `pl.col()`. It goes through some mapping I have yet to refactor. Eventually, we’re going to refactor to inline all of the mappings. Something ~cool about the columns is being able to do `Columns.adventurer_name.meta.output_name()` to get the column’s name as a string for functions that require a string, e.g. `groupby()` and stuff in Plotly that expects string column names, like the graph `x` and `y` and `color` arguments.

This is very well written and motivated :)
I was always interested in Lua because it was nice and small, but I felt the language itself was a bit quirky, with some footguns … Also interested in Clojure, but not the JVM.
Janet sounds interesting
In my experience Fennel fixes about 90% of the footguns of Lua. About the only ones left are “1 based array indexing” and “referring to a nonexistent variable/table value returns nil instead of being an error”, which are pretty hard to change without fundamentally changing the runtime.
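That second footgun is easier to see by contrast with a language that errors instead. A toy Python sketch (the table contents are made up) of the lookup that Lua would silently answer with nil:

```python
# In Lua, t.helath (a typo) silently evaluates to nil and the mistake
# surfaces far away, if at all. Python raises at the lookup site instead.
t = {"health": 10}
try:
    t["helath"]  # typo for "health"
except KeyError as err:
    print("caught:", err)  # -> caught: 'helath'
```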
Hm I’ve seen both fennel and Janet, but didn’t realize until now that both use the square brackets and braces from Clojure
That’s cool, and a sign Clojure is influential. Although there’s also the curse of Lisp, where the ecosystem becomes fragmented
Both languages were written by the same person if you weren’t aware
Which two?
Fennel and Janet are both from Calvin Rose.
Huh, I actually didn’t know that! @technomancy seems to be the de facto maintainer since 2020 or so.
There’s quite a bit of history to how fennel came to be what it is today. It is correct that Calvin (creator of Janet) started it, but it would have just been an experiment in their github if it weren’t for technomancy’s interest in reviving/expanding on it. I don’t know if it is written down anywhere, but Phil did a talk at FennelConf 2021 about the history of fennel, which is the most detailed background for those interested. https://conf.fennel-lang.org/2021
I did a survey a while back about new lisps of the past 2 decades. IIRC the only one to evolve beyond a personal project and have multiple nontrivial contributors but not use Clojure-style brackets is LFE, but LFE was released only a few months after Clojure. It’s safe to say Clojure’s influence has been enormous.
However, Janet seems to take some characteristics of Clojure out of context where they don’t make sense. For instance, Janet has if-let even tho if-let only exists in Clojure because Rich hates pattern matching. Janet also uses Clojure’s style of docstring before arglist, even tho Clojure’s reason for doing this (functions can have multiple arglists) does not apply in Janet as far as I can tell.
The other main influence of Clojure is not syntactic at all but rather the idea that a language specifically designed to be hosted on another runtime can be an enormous strength that neatly sidesteps the fragmentation curse.
Ahh very interesting, what were the others? (out of idle curiosity)
I think I remember Carp uses square brackets too.
There’s also femtolisp, used to bootstrap Julia, but that actually may have existed before Clojure as a personal project. It’s more like a Scheme and uses only parens.
I agree the runtime is usually the thing I care about, and interop within a runtime is crucial.
Here are the ones I found in my survey; I omitted languages which (at the time) had only single-digit contributors or double-digit commit counts, but all of these were released (though possibly not started) after Clojure:
All of these except Urn and LFE were created by someone who I could find documented evidence of them using Clojure, and all of them except Urn and LFE use square brackets for arglists. LFE is still going as far as I can tell but Urn has been abandoned since I made the list.
I was working on this as a talk proposal in early 2020 before Covid hit and the conference was canceled. I’d like to still give it some day at a different conference: https://p.hagelb.org/new-lisps.html
That link is super cool. What do you mean by “implicit quoting”?
Thanks!
Implicit quoting is when lisps like CL or Scheme treat certain data structure literal notation as if it were quoted, despite there being no quote.
For example, in Racket you can have a vector `#[(+ 2 3)]`; without implicit quoting this is a vector containing 5, but with implicit quoting it instead contains the list `(+ 2 3)`, where `+` is a symbol, not a function. Hash tables also have this problem. It’s very frustrating. Newer lisps all avoid it as far as I know.

Not to take away from Clojure’s influence, just want to mention that Interlisp has square brackets, but with a different meaning. IIRC, a right square bracket in Interlisp closes all open round brackets.
Hm, although now that I look, the VM doesn’t appear to be re-entrant the way Lua’s is
https://janet.guide/embedding-janet/
Python has been trying to move toward a re-entrant VM for a long time, with subinterpreters, etc. — I think all the global vars are viewed as a mistake. Aside from just being cleaner, it makes the GIL baked in rather than an application policy, which limits scalability.
This kind of API looks suboptimal to me. It would be nice to take something like a `lua_State`.

The interpreter is thread-local in Janet; you can actually swap interpreters on the thread too, so it doesn’t stop things like Rust async from working if you add extra machinery.
The main reason that I use Lua is Sol3. Lua itself is just Smalltalk with weird syntax, but the integration with C++ that you get from Sol3 is fantastic.
Looking forward to reading this. I’m a big Janet fan. Good resources are an important part of a language’s approachability and adoption.
I would also love to see more editors experiment with this approach. In addition to text-centric UIs, I also find grid-based ones (VisiData, Excel, etc.) to be very intuitive for certain tasks. Emacs does have some grid-based interfaces, but having first-class support alongside the text- and line-based interactions would be nice.
Do you have any more information on the project? This is a bit light.
I haven’t shared the open source project publicly yet, but I plan to later this year.
This thread has some example code and a link for more info if you’re interested (some details have changed since): https://twitter.com/haxor/status/1618054900612739073
And I wrote a related post about motivations here: https://www.onebigfluke.com/2022/11/the-case-for-dynamic-functional.html
This is not how I think about static types. They’re a mechanism for allowing me to think less by making a subset of programs impossible. Instead of needing to think about whether s can be “hello” or 7, I know I only have to worry about s being 7 or 8. The compiler error just means I accidentally wrote a program where it is harder to think about the possible states of the program. The need to reason about the error means I already made a mistake in reasoning about my program, which is the important thing. Fewer errors before the program is run doesn’t mean the mistakes weren’t made.
I am not a zealot, I use dynamically typed languages. But it is for problems where the degree of dynamism inherent in the problem means introducing the ceremony of a program level runtime typing is extra work, not because reading the compiler errors is extra work.
This is very analogous to the benefits of functional languages you point out. By not having mutable globals the program is easier to think about, if s is 7 it is always 7.
Introducing constraints to the set of possible programs makes it easier to reason about our programs.
I appreciate the sentiment of your reply, and I do understand the value of static typing for certain problem domains.
Regarding this:
How do you know what subset becomes impossible? My claim is you have to think like the compiler to do that. That’s the problem.
I agree there’s value in using types to add clarity through constraints. But there’s a cost for the programmer to do so. Many people find that cost low and it’s easy. Many others — significantly more people in my opinion — find the cost high and it’s confusing.
I really like your point about having to master several languages. I’m glad to be rid of a preprocessor, and languages like Zig and Nim are making headway on unifying compile-time and runtime programming. I disagree about the type system, though: it does add complexity, but it’s scalable and, I think, very important for larger codebases.
Ideally the “impossible subset” corresponds to what you already know is incorrect application behavior — that happens a lot of the time, for example declaring a “name” parameter as type “string” and “age” as “number”. Passing a number for the name is nonsense, and passing a string for the age probably means you haven’t parsed numeric input yet, which is a correctness and probably security problem.
It does get a lot more complicated than this, of course. Most of the time that seems to occur when building abstractions and utilities, like generic containers or algorithms, things that less experienced programmers don’t do often.
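The name/age example can be made concrete with Python type annotations. This is a hypothetical sketch, not code from the thread: a checker like mypy would reject `register(42, "Alice")` before the program runs, and parsing at the boundary keeps `age` numeric everywhere downstream:

```python
def register(name: str, age: int) -> str:
    # A static checker flags register(42, "Alice") as swapping the
    # argument types, before the program ever runs.
    return f"{name} is {age}"

def parse_age(raw: str) -> int:
    # Parse untrusted input once, at the boundary; code past this
    # point can rely on age really being a number.
    return int(raw)

print(register("Alice", parse_age("42")))  # -> Alice is 42
```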
In my experience, dynamically-typed languages make it easier to write code, but harder to test, maintain and especially refactor it. I regularly make changes to C++ and Go code, and rely on the type system to either guide a refactoring tool, or at least to produce errors at all the places where I need to fix something.
You’re right that you have to “think like the compiler” to be able to describe the impossible programs for it to check, but everybody writing a program has an idea of what they want it to do.
If I don’t have static types and I make the same mistake, I will have to reason about the equivalent runtime error at some point.
I suppose my objection is framing it as “static typing makes it hard to understand the compiler errors.” It is “static typing makes programming harder” (with the debatably-worth-it benefit of making running the program easier). The understandability of the errors is secondary; if there is value, there’s still value even if the error was as shitty as “no.”
I think this is the same for “functionalness.” For example, often I find I’d rather set up a thread-local or similar because it is easier to deal with than threading some context argument through everything.
I suppose there is a difference in the sense that functionalness is not as configurable a constraint. It’s more or less on or off.
I sometimes divide programmers into two categories: the first acknowledge that programming is a form of applied maths. The second went into programming to run away from maths.
It is very difficult for me to relate to the second category. There’s no escaping the fact that our computers ultimately run formal systems, and most of our job is to formalise unclear requirements into an absolutely precise specification (source code), which is then transformed by a formal system (the compiler) into a stream of instructions (object code) that will then be interpreted by some hardware (the CPU, GPU…) with more or less relevant limits & performance characteristics. (It’s obviously a little different if we instead use an interpreter or a JIT VM).
Dynamic type systems mostly allow scared-of-maths people to ignore the mathematical aspects of their programs for a bit longer, until of course they get some runtime error. Worse, they often mistake their should-have-been-a-type-error mistakes for logic errors, and then claim a type system would not have helped them. Because contrary to popular beliefs, type errors don’t always manifest as such at runtime. Especially when you take advantage of generics & sum types: they make it much easier to “define errors out of existence”, by making sure huge swaths of your data is correct by construction.
And the worst is, I suspect you’re right: it is quite likely most programmers are scared of maths. But I submit maths aren’t the problem. Being scared is. People need to learn.
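The “correct by construction” idea can be sketched even in Python, using a tagged union (hypothetical `Ok`/`Err` names; in ML or Haskell this would be a real sum type the compiler checks exhaustively):

```python
from dataclasses import dataclass
from typing import Union

@dataclass
class Ok:
    value: int

@dataclass
class Err:
    message: str

ParseResult = Union[Ok, Err]

def parse_int(raw: str) -> ParseResult:
    # Callers are forced to branch on Ok vs Err; there is no state in
    # which an unparsed string silently flows into arithmetic.
    try:
        return Ok(int(raw))
    except ValueError:
        return Err(f"not a number: {raw!r}")
```

The point is that the error case is part of the data, so everything downstream of `parse_int` handles numbers by construction.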
My claim is that I can just run the compiler and see if it complains. This provides a much tighter feedback loop than having to actually run my code, even if I have a REPL. With a good static type system my compiler is disciplined so I don’t have to be.
Saying that people who like dynamic types are “scared of math” is incredibly condescending and also ignorant. I teach formal verification and am writing a book on formal logic in programming, but I also like dynamic types. Lots of pure mathematics research is done with Mathematica, Python, and Magma.
I’m also disappointed but unsurprised that so many people are arguing with a guy for not making the “right choices” in a language about exploring tradeoffs. The whole point is to explore!
Obviously people aren’t monoliths, and there will be exceptions (or significant minorities) in any classification.
Nevertheless, I have observed that:
As for dynamic typing, almost systematically, arguments in favour seem to be less rigorous than arguments against. Despite CISP. So while the set of dynamic typing lovers is not nearly as strongly correlated with “maths are scary”, I do suspect a significant overlap.
While I do use Python for various reasons (available libraries, bignum arithmetic, and popularity among cryptographers (SAGE) being the main ones), dynamic typing has systematically hurt me more than it helped me, and I avoid it like the plague as soon as my programs reach non-trivial sizes.
I could just be ignorant, but despite having engaged in static/dynamic debates with articulate peers, I have yet to see any compelling argument in favour. I mean, there’s the classic sound/complete dilemma, but non-crappy systems like F* or what we see in ML and Haskell very rarely stopped me from writing a program I really wanted to write. Sure, some useful programs can’t be typed. But for those, most static check systems have escape hatches, and many programs people think can’t be typed actually can. See Rich Hickey’s transducers, for instance: throughout his talk he was dismissively daring static programmers to type them, only to have a Haskell programmer actually do it.
There are of course very good arguments favouring some dynamic language at the expense of some static language, but they never survive a narrowing down to static & dynamic typing in general. The dynamic language may have a better standard library, the static language may have a crappy type system with lots of CVE inducing holes… all ancillary details that have little to do with the core debate. I mean it should be obvious to anyone that Python, Mathematica, and Magma have many advantages that have little to do with their typing discipline.
Back to what I was originally trying to respond to, I don’t understand people who feel like static typing has a high cognitive cost. Something in the way their brain works (or their education) is either missing or alien. And I’m highly sceptical of claims that some people are just wired differently. It must be cultural or come from training.
And to be honest I have an increasingly hard time considering the dynamic and static positions equal. While I reckon dynamic type systems are easier to implement and more approachable, beyond that I have no idea how they help anyone write better programs faster, and I increasingly suspect they do not.
Even after trying to justify that you’ve had discussions with “articulate peers” and “could just be ignorant” and this is all your own observations, you immediately double back to declaring that people who prefer dynamic typing are cognitively or culturally defective. That makes it really, really hard to assume you’re having any of these arguments in good faith.
To be honest I only recall one such articulate peer. On Reddit. He was an exception, and you’re the second one that I recall. Most of the time I see poorer arguments strongly suggesting either general or specific ignorance (most of the time they use Java or C++ as the static champion). I’m fully aware how unsettling and discriminatory is the idea that people who strongly prefer dynamic typing would somehow be less. But from where I stand it doesn’t look that false.
Except for the exceptions. I’m clearly missing something, though I have yet to be told what.
Thing is, I suspect there isn’t enough space in a programming forum to satisfactorily settle that debate. I would love to have strong empirical evidence, but I have reasons to believe this would be very hard: if you use real languages there will be too many confounding variables, and if you use a toy language you’ll naturally ignore many of the things both typing disciplines enable. For now I’d settle for a strong argument (or set thereof). If someone has a link that would be much appreciated.
And no, I don’t have a strong link in favour of static typing either. This is all deeply unsatisfactory.
There seems to be no conclusive evidence one way or the other: https://danluu.com/empirical-pl/
Sharing this link is the only correct response to a static/dynamic argument thread.
I know of — oops I do not, I was confusing it with some other study… Thanks a ton for the link, I’ll take a look.
Edit: from the abstract there seem to be some evidence of the absence of a big effect, which would be just as huge as evidence of effect one way or the other.
Edit 2: just realised this is a list of studies, not just a single study. Even better.
Well, it’s the subset of programs which decidably don’t have the desired type signature! Such programs provably aren’t going to implement the desired function.
Let me flip this all around. Suppose that you’re tasked with encoding some function as a subroutine in your code. How do you translate the function’s type to the subroutine’s parameters? Surely there’s an algorithm for it. Similarly, there are algorithms for implementing the various primitive pieces of functions, and the types of each primitive function are embeddable. So, why should we build subroutines out of anything besides well-typed fragments of code?
Sure, but I think you’re talking past the argument. It’s a tradeoff. Here is another good post that explains the problem and gives it a good name: biformity.
https://hirrolot.github.io/posts/why-static-languages-suffer-from-complexity
That’s the issue that explains why separate compile-time languages arise so often in languages like C++ (mentioned in the blog post), Rust (at least 3 different kinds of compile-time metaprogramming), OCaml (many incompatible versions of compile-time metaprogramming), Haskell, etc.
Those languages are not only harder for humans to understand, but tools as well
The Haskell meta programming system that jumps immediately to mind is template Haskell, which makes a virtue of not introducing a distinct meta programming language: you use Haskell for that purpose as well as the main program.
Yeah the linked post mentions Template Haskell and gives it some shine, but also points out other downsides and complexity with Haskell. Again, not saying that types aren’t worth it, just that it’s a tradeoff, and that they’re different when applied to different problem domains.
This is probably a fair characterization.
I am a bit skeptical of this. Certainly C++ is harder for a tool to understand than C, say, but I would be much less certain about, say, Ruby vs Haskell.
Though I suppose it depends on if the tool is operating on the program source or a running instance.
That’s exactly what Zig is doing with its “comptime” feature: using the same language, but while keeping a statically typed and compiled approach.
I’m wondering where you feel dynamic functional languages like Clojure and Elixir fall short? I’m particularly optimistic about Elixir as of late since they’re putting a lot of effort in expanding to the data analytics and machine learning space (their NX projects), as well as interactive and literate computing (Livebook and Kino). They are also trying to understand how they could make a gradual type system work. Those all feel like traits that have made Python so successful and I feel like it is a good direction to evolve the Elixir language/ecosystem.
I think there are a lot of excellent ideas in both Clojure and Elixir!
With Clojure the practical dependence on the JVM is one huge deal breaker for many people because of licensing concerns. BEAM is better in that regard, but shares how VMs require a lot of runtime complexity that make them harder to debug and understand (compared to say, the C ecosystem tools).
For the languages themselves, simple things like explicit returns are missing, which makes the languages feel difficult to wield, especially for beginners. So enumerating that type of friction would be one way to understand where the languages fall short. Try to recoup some of the language’s strangeness budget.
I’m guessing the syntax is a pretty regular Lisp, but with newlines and indents making many of the parenthesis unnecessary?
Some things I wish Lisp syntax did better:
Great suggestions thank you! The syntax is trying to avoid parentheses like that for sure. If you have more thoughts like this please send them my way!
This might be an IDE / LSP implementation detail, but would it be possible to color-code the indentation levels? Similar to how editors color code matching brackets these days. I always have a period of getting used to Python where the whitespace sensitivity disorients me for a while.
Most editors will show a very lightly shaded vertical line for each indentation level with Python. The same works well for this syntax too. I have seen colored indentation levels (such as https://archive.fosdem.org/2022/schedule/event/lispforeveryone/), but I think it won’t be needed because of the lack of parentheses. It’s the same reason I don’t think it’ll be necessary to use a structural editor like https://calva.io/paredit/
So, reinventing Nix?
I think that’s a bit of an unfair take. They are talking about making it as easy as possible for newbies to bootstrap a python environment in Windows/Linux/macOS. If your answer to that is Nix, the bootstrapping would become a nightmare.
“So how do I set up for this project?” Either you install this whole operating system where everything works very differently from what you’re used to, or you install this Nix CLI tool. Ah, you’re using Windows? Sorry, unless you start by installing this WSL thing.
Now, imagine you want to add a new dependency. Well, you may need to either put it in pyproject and then use this glue called poetry2nix, which sometimes works and sometimes doesn’t; if it doesn’t work, maybe you can add it directly using `python3XPackages.package`, and if it’s not there, then good luck: you either open a patch to nixpkgs’ poetry2nix adding your package, or learn how to package a Python library and also contribute to nixpkgs. The other option is to create a virtualenv the old-fashioned way and then use Nix as kind of a pyenv.
I don’t think any of that sounds better than what is being proposed in the link.
I don’t mean to dismiss the author’s work, but to point out the continued insular choices of the Python core teams. Instead of installing Nix, the author asks us to install Cargo and then go through a standard Rust-project workflow; they are comparable in complexity and extent.
You may choose to continue using `pyproject.toml`, Poetry, virtualenv, pyenv, etc., but the direct path is to use nix-shell to configure an entire development environment in an atomic step. The list of packages can be contained to a single line in a single file; here is an example from a homelab application which I updated recently.

Contributing to nixpkgs is not trivial, but it is not difficult either. Here is a recent PR I authored for adding a Python package. It’s shorter than a `setup.py`, in my experience! Also, you don’t have to contribute new packages to nixpkgs; instead, you can add them to your local Nix expressions on a per-project basis.

Please also keep in mind that all of this discussion is within the context of Python packaging difficulties. Languages without extension modules don’t require all of this effort; all we need instead is to install a runtime directly from an upstream package-builder, whether that’s a distro, vendor, or third-party packager. We should imagine that a language is either designed to have lots of extensions and be an integrator of features, or designed for monolithic applications which reimplement every feature and are suitable for whole-program transformation. Python picked both, and so gets neither.
I don’t understand how Nix would even be an alternative option when the goal is to support MacOS, Linux, and Windows.
Hypothetically, just as a thought experiment and nothing else, I think maybe a case could be made that running Nix inside WSL and cross-compiling from that to Windows might be sorta acceptable. I don’t think that’s a realistic thing to propose: it’s a ton of work for starters, and the payoff would be pretty dubious since you’d have this really long painful edit/test cycle.
Doesn’t WSL solve that problem out of the gate?
WSL is like Electron: it makes it easy for a developer to provide something to the user, but the thing provided is much worse than a corresponding native solution. I’d struggle to integrate a WSL app with my native Windows PowerShell scripts.
The short answer is no.
The longer answer is: python has supported Windows natively for over a decade (how well might be up for debate, but it was supported), it’s not reasonable for them to suddenly say “use Linux inside Windows or get fucked”, and it’s not reasonable to expect them to do so, either.
I don’t have any statistics, but I would bet that the vast majority of Windows users (corporate IT managed machines) can’t enable WSL. Python is actually very easy to install on Windows with the Microsoft Store. Requiring users to enable WSL and understand how to use linux would be a large obstacle.
Or just with the installer from python.org, which will install to AppData by default (I think? At least if you choose the “install just for me” option), so no admin permissions needed.
All the cool kids use `winget`.
No, you are doing what you always do: pushing your preferred tools as the only acceptable tools, such that all development on all other tools must cease and all people everywhere must adopt only and exclusively your preferred tools. And along the way you throw in the usual (un)healthy dose of bashing anyone who dares to develop other tools, since obviously it’s bad and wrong for them to do so when the One True Thing has already been invented and thus they must be doing that for bad reasons.
Sometimes you do this with PyPy versus CPython. Sometimes with functional progamming/category theory versus other paradigms. Sometimes with Nix versus literally everything. But it’s always the same basic dismissive/attacking approach.
Maybe don’t do that?
I really appreciate this comment. It helps me understand you.
I didn’t read your comment, and boy, there’s so much more I disagree with.
To whom? There’s a world of people for whom nix is a non starter. Everyone using Visual Studio. Or working on computers they don’t fully control (enterprise developers). Or people that like Bluetooth to work, so they can’t use Linux (this is half in jest, half serious). “There’s no silver bullet” applies to your favorite thing too.
What language is like that? Ruby has C extensions, JavaScript has them, Java has JNI. Even Go, which is famous for reinventing the wheel a lot, has cgo. In every single language that isn’t C you will, at some point, have problems trying to install a package that needs to compile something in another language.
The reason it happens so much more in python is actually kind of a feature, not a bug: python was designed to be easily extendable, specifically in C, although that feature was perhaps not as well designed as we would like, in hindsight.
Maybe in a perfect world but … I don’t think any languages really fit this binary, well, binarilly (?). At most some are more at one end than the other, but I’m honestly struggling to find utility in the whole classification really.
I don’t understand, why would Nix be a blocker in those contexts? If you don’t fully control the computer, wouldn’t you have trouble installing all the Rust thingies anyhow?
https://lobste.rs/s/j9gr9z/pybi_posy#c_gnarky
I don’t think that addresses my question. I genuinely don’t get why Nix would be a blocker to people using Visual Studio (VS Code(?)), are plugins sandboxed, or unable to interact with binaries/run commands in some other way?
You’re applying things I said about one thing to other things that I didn’t intend them to be applied to.
One of my disagreements is with the idea that Nix is some sort of ideal goal that every developer is converging toward. This idea breaks down as soon as you realize that people writing C# in Visual Studio (not VS Code) will never adopt something like Nix unless it’s fully integrated with Windows, like every single other tool they use.
The other disagreement is with the idea that the way the project linked can currently be used is the final interface: it clearly isn’t, they clearly say it will be a single binary in the future.
Only the second one has anything to do with Python tools. The first one is just a criticism of the idea of nix as the best thing ever that everyone should use and can do no wrong.
This is clearly a very early-stage prototype; I didn’t see any claims that this is the final interface that people should use today.
In fact, straight from the readme:
Has nix managed to one-up rust on its evangelism task force?
Not that I know a lot about nix but the only similarity I see between nix and this proposal is that they are both made with code?
Ok, to be fair, they are made with code and related to managing packages. And they mention immutability somewhere in their description.
So, 3 similarities. Maybe I am wrong. Still sounds like a far-fetched comparison.
I for one would like to strongly encourage anyone who would like to make an attempt at “reinventing Nix”, since a thing that is like Nix but avoids some of its pain points could potentially be delightful.
What great news! Web browsers are critically important and are now becoming like mini-OSes; having modular components that can be composed and specialised will ensure that we have a viable alternative to the current consolidation around WebKit/Blink.
IMO the entire point of web browsers has always been that they were mini-OSes, competing with apt. They dominated Windows, because 1) the vast majority of use-cases didn’t actually need special permissions from the OS, and 2) the alternative was “go to this [website/FTP server/etc] and download a .exe, run the .exe, wait a minute or two, launch the app” instead of just “go to this website”.
This is kind of a funny take to me, because I have viewed it the opposite way. Web browsers becoming the application platform of choice for many users is what has made the use of alternative OSes (not Windows) more viable in the modern age. Of course, the move to pocket computers (phones) is really what has made the most impact.
The reason web browsers made alternative OSes more viable is because they’re an open standard VM that’s easily portable to the alternative OS.
Pocket computers displacing Windows demonstrates that trivial installation procedures really are what make or break an OS - the procedure for installing an app is to 1) open up the app store (or click a link), then 2) hit “install”. Just like the web browser, it’s trivial. IMO this is why mobile destroyed desktops for most use-cases.
Oh my gosh I love this.
I don’t love this as much.
I do not understand what x max= y is supposed to do. Care to explain, maybe with an example?
x = max(x, y)
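In other words, a max= operator would desugar the same way += does. A quick Python illustration (Python has no max= operator; the comment shows what the hypothetical spelling would mean):

```python
x, y = 3, 7
x = max(x, y)   # what a hypothetical `x max= y` would desugar to,
                # exactly as `x += y` desugars to `x = x + y`
print(x)        # 7
```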
Thanks for this! I now understand that max= is being understood the same way += is. Neat.
Why not? That’s how at least a few languages implement sets: hashtables/dictionaries with dummy values. Python, in particular, comes to mind.
This may have been true in the past, but I don’t think that python does this currently. Python’s dict implementation now guarantees order, but the set definitely does not.
While they don’t share the same implementation (anymore?), Python sets are absolutely still implemented using a hashtable: https://github.com/python/cpython/blob/main/Objects/setobject.c
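As a sketch of that strategy, here is a toy set type backed by a dict with dummy values. This is illustrative only (the class is made up for this comment), not how CPython’s setobject.c is actually written:

```python
class DictSet:
    """A toy set built on a dict whose values are ignored."""

    def __init__(self, items=()):
        self._d = {item: None for item in items}

    def add(self, item):
        self._d[item] = None  # the value is a dummy; only the key matters

    def __contains__(self, item):
        return item in self._d  # O(1) membership via the hashtable

    def __iter__(self):
        return iter(self._d)

s = DictSet([1, 2, 3])
s.add(4)
print(3 in s, 5 in s)  # True False
```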
What’s the union/intersection/difference of two dictionaries?
A dictionary containing union/intersection/difference of the keys of those two dictionaries?
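For what it’s worth, Python already answers this for the keys: dict key views support the set operators directly, no conversion needed:

```python
d1 = {"a": 1, "b": 2, "c": 3}
d2 = {"b": 9, "c": 8, "d": 7}

# dict key views are set-like, so &, | and - work on them directly
print(sorted(d1.keys() & d2.keys()))  # ['b', 'c']       intersection
print(sorted(d1.keys() | d2.keys()))  # ['a', 'b', 'c', 'd']  union
print(sorted(d1.keys() - d2.keys()))  # ['a']            difference
```

The results are plain sets of keys, so what the values should be is exactly the open question raised above.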
Please stop releasing TUI frameworks for Python. The language is way too slow and makes these tools a pain to use.
To whoever is starting to write a reply - no, it cannot be optimized. Python will always take a bit to start the program.
I find this comment very surprising.
The stopwatch app from the Textual tutorial starts running for me in less than a quarter of a second - it’s fast enough that I didn’t even notice the delay until I tried to eyeball-measure it just now.
The whole point of TUI apps is that you’re going to spend some time in them. Does a quarter of a second to show the initial screen really matter to anyone? That’s way faster than loading a web application in a browser tab.
Thinking about it, I don’t think I’ve ever used a TUI written in any language where the startup speed has bothered me.
Java/scala usually has a 1-3 second startup time, which is too long for me, but I agree – that’s the only one I can think of.
It depends. Maybe startup time doesn’t really matter for someone’s particular use case. While there will always be some baseline startup time from Python, there are cases where you can optimize it and possibly bring it down to a level you find acceptable.
At a job, I was tasked with figuring out and speeding up slow start of a Python program. Nobody knew why the bloody thing was taking so long to start. Part of it was network delays, of course, but part was Python. I did some profiling.
This little Python program was importing a library, and that library imported something called pkg_resources. Turns out that pkg_resources does a bunch of work at import-time (nooo!). After some digging, I found that pkg_resources was actually an optional dependency of the library we were using. It did a try … import … except: …, and could work without this dependency. After digging into the code (both ours and the library’s), I found that we didn’t need the facilities of pkg_resources at all.
We didn’t want to uninstall it. Distro packages depended on it, and it was possible that there were other programs on the system that might use it. So I hacked up a module importer for our program that raised ModuleNotFoundError whenever something tried to import pkg_resources.
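A minimal sketch of that kind of blocker, using the modern meta path finder hook (the class name here is made up; the sys.meta_path mechanism itself is standard):

```python
import sys

class ImportBlocker:
    """Meta path finder that refuses to load the named top-level modules,
    so optional `try: import ...` fallbacks kick in without paying the
    blocked module's import-time cost."""

    def __init__(self, *blocked):
        self.blocked = set(blocked)

    def find_spec(self, fullname, path=None, target=None):
        if fullname.split(".")[0] in self.blocked:
            raise ModuleNotFoundError(f"import of {fullname!r} is blocked")
        return None  # let the normal finders handle everything else

# Install it ahead of the standard finders:
sys.meta_path.insert(0, ImportBlocker("pkg_resources"))

try:
    import pkg_resources  # noqa: F401
except ModuleNotFoundError:
    print("pkg_resources blocked; the library's fallback path runs instead")
```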
I cut a nearly one-second start time down to an acceptable 300 milliseconds or so, and IIRC a fair portion of the 300 milliseconds was from SSH.
Know your dependencies (direct and indirect). Know what you’re calling (directly and indirectly) and when you’re calling it. Profile. And if your Python startup times are slow, look for import-time shenanigans.
Program startup speed is important for some applications but negligible compared to other aspects like usability, accessibility or ease of development, wouldn’t you agree?
Program startup speed and performance is an important part of usability. It’s bad for software users when the software they use is full of latency, or uses so many system resources it bogs down their entire computer.
Agreed, it’s part of usability. But it depends on the numbers. Saying “stop writing TUIs in Python” because of 200ms (out of which something can be shaved off with optimization) sounds extreme.
I completely agree with the unsuitability of Python for TUI / CLI projects! (Especially if these tools are short-lived in their execution.)
Long ago I wrote (and still use today) a simple console editor: ~3K lines of code across 25 Python modules, importing only 12 core (built-in) Python modules, without any other dependencies, and mainly using curses.
On any laptop I’ve tried (even 10 years old) it starts fast enough. However, I recently bought an Android device and tried it under Termux. It’s slow as hell, taking more than a second to start… (Afterwards it’s OK-ish to use.)
What’s the issue? The Python VM is slow to bootstrap and load the code (in my case it’s already in .pyo format, all in a zip made with zipapp). For example, just calling python2 -c True (on my Lenovo T450) takes ~10ms, while python3.10 -c True takes ~14ms (python3.6 used to take ~20ms). Just adding import json adds another ~10ms, and import curses, argparse, subprocess, json (which is perhaps the minimum any current-day project requires) yields a ~40ms startup.
With this in mind, the startup latency starts to pile up, and there is no solution in sight (except rewriting it in a compiled language).
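Numbers like these can be reproduced with a rough harness that times child interpreters (a sketch; absolute figures vary by machine, and python -X importtime gives a proper per-import breakdown):

```python
import subprocess
import sys
import time

def startup_time(code, runs=5):
    """Best-of-N wall-clock time to launch a child interpreter running `code`."""
    best = float("inf")
    for _ in range(runs):
        t0 = time.perf_counter()
        subprocess.run([sys.executable, "-c", code], check=True)
        best = min(best, time.perf_counter() - t0)
    return best

base = startup_time("pass")
with_json = startup_time("import json")
print(f"bare interpreter:  {base * 1000:.1f} ms")
print(f"import json adds:  {(with_json - base) * 1000:.1f} ms")
```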
Granted, even other languages have their issues. Go, for example, is very eager to initialize any modules you have referenced, even ones you will never use, which easily adds to startup latency.
(I’ll not even touch on the deployment model, where zipapp is almost unused and https://github.com/indygreg/PyOxidizer is the only project out there really trying to make a difference…)
Have you tried the framework?
Why would I? It only has buttons and checkboxes implemented, and according to comments in here it still takes a quarter of a second to start on a modern CPU.
EDIT: In the demo video, the demo takes 34 frames to boot. At 60fps, that’s more than half a second.
I guess it will never be successful then, like that slow-as-hell Slack thing /s
The popularity of a chat app - particularly one that most people use because it’s what their workplace standardizes on - is driven much more by network effects than by the quality of the standard client app. It is bad that the Slack client is slow, and this is made worse by the fact that there aren’t a whole lot of alternative clients for the Slack network that a person who is required to use Slack as part of their job can use instead of the official, slow, one.
I think that the problem with your assessment is the assumption that the users of this framework have the knowledge to use a different language or want to use a different language than python. Nobody is forcing you to use it and if folks are releasing tools using it, nobody is forcing you to use those. For those that want to add a TUI front end to a script they made, this seems like a good option.
I think Ink is overall a better TUI framework than this, and let’s face it, Python really is slow, JavaScript is much better.
I am new to Emacs, but does tree-sitter work as an alternative to Eglot? I am using tree-sitter and it seems to be working fine for me. Will using Eglot + tree-sitter give me any extra bells and whistles, or are these two supposed to be mutually exclusive?
They are generally meant to complement each other. Tree-sitter helps with syntax highlighting (font-locking) and structural navigation/editing. LSP generally knows more about your code, but it is too heavy/slow to be used for the tasks that tree-sitter is good at. It will offer autocompletion of functions, methods, parameters, and arguments, as well as some linting capabilities. It will also allow renaming of symbols and other more complex refactoring.
Is Eglot better than lsp-mode? How painful is the switch, for golang?
From what I could tell after briefly trying both, lsp-mode tries to give you a very full-featured, IDE- or VS Code-like experience. It’s got a lot of power and gives a fairly slick interface to it all, with a good bit of bling. It seems to have a lot of code and a lot of dependencies.
Eglot, on the other hand, tries to be lighter weight. It’s a single .el file of about 3500 lines, and it mostly just depends on a handful of built-in libraries. Its philosophy seems to be more about blending in with the traditional Emacs experience: setting some stuff up and then mostly staying out of the way until invoked.
As a long-time Emacs user, I found Eglot much more to my taste and went with it over lsp-mode, but there’s definitely room in the ecosystem for both. I don’t use golang, so I can’t speak to that. It does come with an entry for gopls for go-mode in its list of known servers.
I’m hoping that we’ll see some packages bring some of the lsp-mode UI to eglot for those that prefer that style. I’ll be sticking with the vanilla eglot myself.