Git terminology is a classic example where many (most? definitely not all) terms actually make sense once you understand how Git works, but make almost no sense in concert with other terminology or when you don’t know the implementation details.
The problem I’ve always had with Git is that it’s really a hodgepodge of different tools developed by different people without a unified “vision”, and it feels like it. It desperately needs a UX redesign to fix that, but the ship has probably sailed.
Compare it with something like Mercurial (hg), which feels intentionally designed and consistent, and whose terminology all makes more sense to me. But Git has the market share (well, and historically better performance), so it’s what we use.
The problem I’ve always had with Git is that it’s really a hodgepodge of different tools developed by different people without a unified “vision”, and it feels like it.
It wasn’t even that; many of the most problematic commands were in fact developed by the same people.
The issue is that Git was developed completely bottom-up: the data model came first, followed by operations on that data model, and the high-level operations were basically convenience shortcuts for a bunch of lower-level ones.
git’s high-level CLI was not designed; it was grown out of automation shell scripts, which were glommed together in terms of their low-level concerns rather than their high-level (top-down) sensibility. Which is why, e.g., git checkout does 15 seemingly unrelated things: bottom-up, all those unrelated things have a lot of machinery in common, so they all went into checkout, because most of the machinery was already there for previous tasks, so might as well.
A lot of the terminology is also idiosyncratic because it comes from a pretty closed-off group with its own lingo (the linux kernel developers) and it was built by outright looking down on what came before.
This. I cringed when skimming the list, because several of the worst-offender terms did have better names before. When we moved to git, we taped sheets of paper to the wall with advice like “shelve is called stash now” and “colon means delete”, and – while things have gotten a lot better over the past decade (switch, restore) – there is still a long way to go.
Exactly! Great comment! Git has been tracked by Git nearly from the beginning, so the author (Linus) had to write the low-level parts (and the data model) first, so that he could commit Git’s source to Git. (First commit: https://github.com/git/git/commit/e83c5163316f89bfbde7d9ab23ca2e25604af290 ). Everything else was added later.
Statically-enforced types are annoying when you don’t know what types you need and you want to explore the solutions with code.
A great thing about Ruby is you can use hashes and strings to figure out what you need (same with JS objects) and then create actual types once you’ve got some code working that has allowed you to understand the problem.
With stuff like Java or Go, you are fighting the compiler instead of understanding your domain.
I have the opposite experience: Whenever I’m using Rust, SML or Haskell, I always start with defining my datatypes, and use those to understand my domain. The rest of the code flows naturally from there.
In other words: I explore solutions using datatypes.
Same. And if I don’t know what type I’ll need (like u64 or i32 or whatever), then I can use an alias or wrapper type like in the OP so I can change it for every usage in one location. Starting with types helps me build a model of the problem domain before writing any code.
This idea that types make you “fight the compiler” makes little sense to me. On the contrary, they enable the compiler to help me ensure that what I’m writing is correct from the start, free of needless debugging of runtime problems that types would have prevented.
What I’m referring to here (and should’ve made more clear) is exploring a user experience. With something like Rails, I can create a user experience very quickly, and have it using real data, real third party APIs, etc etc. This requires extremely fast iteration, often making drastic changes to see how they feel or how they work. Static typing would introduce two new steps that aren’t providing value for this particular activity: explicitly defining types, and requiring that the entire app conform to all type checks.
These two steps seem absolutely valuable in production, but for prototyping in an existing app, iterating on user experiences and design, they provide negative value and make the process harder.
Depends on what language you use. Most MLs have type inference, even Rust.
requiring that the entire app conform to all type checks.
When you’re writing something in a dynamically typed language you also need to ensure that your types match up; you just won’t know if they do until you run the code. Trivial example: 3 + x, where x is a variable containing the string “foo”, will cause an exception at runtime in Ruby. I think it’s reasonable to argue that having that check happen at compile time makes prototype development faster.
[…] create a user experience very quickly, […] often making drastic changes to see how they feel […]. […] app […]
I wonder to what extent the difference between the two sides here is a difference in perspectives between (Web or other GUI) app developers and non-app developers.
I often use that as a first step toward introducing stronger types in a code base written in a strongly typed language by developers who haven’t yet taken to actually using the stronger type system.
Sometimes after adjusting the datatypes, I want to test one function. I know I’ll have to update everything eventually, and I appreciate the compiler’s help with this, but I’d rather try running one example end-to-end first. I shouldn’t need to update code that won’t be used in that example.
I think -fdefer-type-errors is supposed to achieve this, and when I used Java and Eclipse it could do this. I could run the project without fixing all the red squiggles; it would just turn them into runtime errors.
Like the article says, it does not pay for itself. Ruby “domain code” in the wild is full of type errors, nil references and preventable bugs. I did an analysis recently and over 70% of all our exceptions that led to 500 errors were things a type system would’ve caught. This wasn’t a surprise; all projects I’ve ever been on had this class of errors.
There’s little point in writing domain code quickly but faulty. We really need to disavow ourselves from this notion that static types are 100x slower to write: they’re not. You might throw something together in a day in Ruby, but it takes 2 days in Go or something else. This 1 day difference will not make your product fail, especially since you have taken the time to prevent some invalid states along the way.
This is coming from someone with 16+ years of Ruby experience; I don’t dislike the language at all. “Fast to write” is just not a good measure of quality.
There’s little point in writing domain code quickly but faulty.
Pity that domain code might be dynamically typed. If we have a DSL for our domain, the DSL compiler knows a lot, and thus is in the best position possible to enforce a crapton of invariants up front. I reckon those static checks aren’t free to implement, but they’re likely worth it. Else why are we paying the cost of a DSL to begin with?
Go, while statically typed, is arguably not strongly typed. It’s firmly in the bottom of the uncanny valley of programming languages, which can neither handle nor prevent errors. Been meaning to blog about that …
Newer versions of Go do have something close to enums. You can define something like this:
type MyEnum int64
const (
	A MyEnum = iota
	B
	C
)
This will disallow implicit casts from other integer types, which prevents other kinds of integers being used by accident. The mildly annoying thing is that the constants are not namespaced in any way within a package, so you can’t have two enumerations with constants of the same name (just like in C, and something C++ fixed long ago).
The standard library contains stuff that doesn’t make sense in terms of types, such as multiplying two Durations to get a Duration.
That’s not really an artefact of the type system, that’s just the standard library authors getting their units wrong.
What about something like TypeScript? You can still create arbitrary objects that let you explore, but you still have static typing. The type inference removes most of the overhead of writing types as well.
I used to think that, but I’m leaning away from it now. I would say that whatever language you use, 75% of your code ends up having very simple types, so there’s no real gain from having dynamic typing. Dynamic typing is bad for exploratory programming with these because if you decide that the arguments should be c, b, a instead of a, b, c, you can’t use an automated tool to fix it. You do save the time of declaring the types up front, but it’s only a net gain if you happen to get it right the first time.
Where dynamic typing is useful is when you have a complicated type relationship that would be a pain to spell out. These don’t happen often, but when they do, then dynamic looks better. In a dynamic language you might say f can take an x, a y, or a z, but in static, you just say f takes an x and write a y to x and z to x converter or add fy and fz methods. And that’s a relatively simple case.
The other thing about static typing is that it sometimes buys you better performance. That doesn’t apply to TypeScript, but it is nice to have when you have it.
Dynamic typing is bad […] because if you decide that the arguments should be c, b, a instead of a, b, c, you can’t use an automated tool to fix it.
This is factually incorrect. Not only do automated refactorings like this exist today, but the earliest examples of automated refactoring that I know of are for a dynamically-typed language: Smalltalk.
Note that this is not directly related to the type system of the language, but to whether one uses runtime reflection vs. static analysis.
When I say “can’t” I don’t mean “can’t with any possible amount of effort.” I mean “I can’t in the dynamic languages I use today with the tools available to me now.” If you have a tool that can detect signature breakages in Python and JavaScript without using static type annotations, I guess I’d like to add it to my tool chest, but for most people, the way they do it is by adding static types and using MyPy or TypeScript to catch the breakages.
When I say “can’t” I don’t mean “can’t with any possible amount of effort.” I mean “I can’t in the dynamic languages I use today with the tools available to me now.”
The discussion is about typed vs untyped, so it seems sensible to me to limit the discussion to things that are inherent differences, not things that are different between ‘popular’/‘mainstream’ typed vs untyped languages.
I don’t understand why you think automating this refactoring is so difficult in dynamically-typed languages. Or why you (implicitly) think it would be easy in statically-typed languages.
Suppose you have a function in a hypothetical statically-typed language, which has the following signature:
If you believe an IDE should be able to perform this refactoring correctly, why do you believe it would not be able to perform the same refactoring correctly in a dynamically-typed language?
The stated problem is a refactoring which changes the order of the arguments from a, b, c to c, b, a. If this is something that tooling for statically-typed languages can handle, then it must also be something that tooling for dynamically-typed languages can handle. Effectively, I’m saying that if the tooling can perform this refactoring for:
A way to illustrate my point: globals()["my_var"] = new_value. How should a refactoring algorithm detect that you’re doing something like this with arbitrary strings, or, what I’ve actually seen, globals()["var" + dynamic] = new_value?
I don’t understand how this is relevant to the actual hypothetical posed.
Also, I don’t understand why you think runtime modification is somehow only a thing in dynamically-typed languages, because it very much isn’t, and nobody suggests that it’s impossible to refactor, say, Java codebases when they rely on tons of runtime configuration, runtime injection, etc. making it impossible to know ahead-of-time which code paths might execute or with which values.
Fabulously written and engaging article. Loved it. The prose was superb. I wish I could write even 1/4 as well as the author.
Prior to reading this, I thought I was firmly in the camp of “LLMs are fake”, but I appreciate the nuance of the argument a bit more.
What I don’t understand (and I don’t pretend to understand LLMs well) is this: when I ask a programmer to write a program, I’m expecting the writer to employ creativity, experience, and perspective. Where does that fit into an LLM’s response? The author mentioned new algorithms, which is pretty much what I’m asking about: if LLMs have a “mental” model of the world, can they also use that model to generate creative and novel solutions? If they have a model of the world, I think they should be able to. I’ve never seen a creative solution by LLMs.
I get what I think are creative solutions from LLMs genuinely several times a day - because I’ve developed intuition as to what kind of prompts will get the most interesting results out of them.
“Give me twenty suggestions for ….” is a good format, because the obvious ideas come first but the interesting stuff will start to emerge once those have been covered.
I think this falls into the “kaleidoscope for text” bucket. There are interesting and novel juxtapositions of existing ideas, but no truly new ideas. Which TBF, humans rarely produce as well. “Genius” is the term for some work that goes outside of the ordinary bounds and then manages to bring a critical consensus behind it to endorse it. Most people have no genius, so it’s not fair to ask an LLM to have it. But genius is crucial to the movement of history. Without it, there’s just stagnation: the rearrangement of existing ideas.
Arguably “Genius” by this definition can still be the product of a trickle of innovations building upon one another over decades or centuries. The example that comes to mind is Television, which arguably was invented entirely independently by several people roughly simultaneously around the world. It could be argued that the invention of television was an inevitability, brought about by the confluence of a multitude of innovations over the previous century by thousands of individual humans.
I agree that it’s not fair to expect LLMs to have “genius”, or even to expect them to innovate on their own (absent human input). But could they still be said to “innovate” by putting together the creations of humans (or past LLMs!) in novel ways, the same way humans often do? I think it’s possible, and quite useful.
Building on @simonw’s excellent reply, which you should respond to first:
The whole point of LLMs is that they produce novel, “creative” solutions. The most amazing thing about them is that despite being trained on a corpus of mostly human-generated text, the best ones are still able to generate surprisingly sophisticated results. Here’s another prompt I generated today, to write Raspberry Pi code for cheese-finding robotic mice. Perhaps not the most complicated example, and I had to hold its hands a bit, but certainly relatively novel.
I’m also not sure at all what you could mean by “LLMs are fake”. Clearly they are a real thing that exist. It’s not a “Mechanical Turk” where a human is actually generating responses for you. They’re just programs that take input and produce output, based on a complex algorithm manipulating a huge corpus of text. In what way are they “fake”?
Mozilla has been claiming that FF is faster than Chrome for literally years, and maybe it is, at least based on whatever benchmark they’ve decided to highlight on any given day. Unfortunately, in my experience, FF just feels slower, and always has. The reason I switched to Chrome back in 2008 or 2009 was that FF felt like a bloated, clunky Microsoft product and Mozilla appeared to have zero interest in improving the situation, preferring instead to diddle around with a mobile OS, app store, and any number of other distractions. FWIW, I now use Safari, largely because, today, Chrome feels bloated and clunky by comparison.
When is the last time you used Firefox? I switched back to Firefox in 2017 when the new CSS engine was integrated, because now it feels faster to me than Chrome. Chrome also had more stability problems at the time. Quantum was a game changer for Firefox’s performance, and it’s just gotten better.
Quantum was mostly a game changer for pre-Quantum Firefox vs. post-Quantum Firefox. It made huge strides forward in Firefox’s own performance.
But even after introducing the new CSS engine, Servo and all that, Chrome still had clear technical advantages (each site gets its own process entirely, while Firefox still shares some things across tabs), and a lot of benchmarks can show that. I use Firefox and Chrome both, approx. 49% of the time each, and as the commenter above said, Firefox often feels slower, and a random site is much more often broken in Firefox than in Chrome. (It’s not that often. It’s just that I almost never run into a site broken in Chrome, while I do have a few that are broken in Firefox.)
I do have to admit that most of the time I do not see much difference. But when I do spot it, it’s usually Chrome that has the edge.
On mobile, Firefox is much more useful to me - mostly because I can run uBlock Origin with it, unlike with mobile Chrome.
One major slowdown factor in Firefox is adblocker addons: they typically load their entire list early and (in Firefox, unlike in Chrome) can and do block network requests until they have created a compiled version of their thousands of regexes.
With process isolation and everything, that slowdown can appear on every new tab.
I feel like it varies a lot by platform, honestly. On Windows, Firefox and Chrome feel about the same; on Linux, Firefox feels faster, but that might be because I have Chrome installed as a Flatpak. On Android, though, Chrome-based browsers feel a lot faster than Firefox-based ones, and, more importantly, use less battery. I put up with the speed and battery differences on Android to have satisfactory ad blocking (Bromite and Mulch have only almost-satisfactory ad blocking).
Any idea what the increase is attributed to? I know some Rust components ended up actually making their way into FF, it would be very cool if this is related to that.
There was a sizeable jump in performance from the accessibility engine rewrite released in May, but other than that I think it’s just been steady improvements by the development team. They’ve been pretty focused on performance to improve the experience of users with screenreader software.
For what it’s worth, Go has also had workspaces since 1.18 as a way to organize groups of modules. This is quite similar to Rust workspaces, conceptually. It was necessary because in practice, many projects consist of multiple modules which also need to be developed concurrently, and this was inconvenient.
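A minimal sketch of what that looks like (module paths hypothetical): a go.work file at the repository root lists the modules developed together, so local changes in one are visible to the others without replace directives:

```
go 1.18

use (
	./api
	./client
)
```

Running `go work init ./api ./client` generates roughly this file.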
This seems like a non-issue – who builds on Windows outside of an environment like Cygwin? EDIT: who builds non-Windows-first applications on Windows using Windows-specific build systems, rather than unix emulation layers? Supporting users of Visual Studio is a project in and of itself, and while there are lots of Windows users around, there are very few who have a compiler installed or are liable to want to build anything from source. It makes more sense to do builds for Windows via a cross-compiler and ask folks who want to build on Windows to use Cygwin – both of which are substantially less effort than getting VS to build an already-working project.
I believe that’s your experience, but you and I have radically different experiences as Windows users.
First, to be very blunt: Cygwin is truly awful. It needs to die. Cygwin is not a port of *nix tools to Windows; that’s MSYS. Cygwin is a really weird hacky port of a *nix to Windows. It’s basically WSL 0.0. It does not play well with native Windows tooling. It honestly doesn’t play well with Windows in general. And it’s comically slow, even compared to vaguely similar solutions such as WSL1. If I see a project that claims Windows support, and see Cygwin involved, I don’t even bother. And while I don’t know if a majority of devs feel similarly, a substantial enough group of Windows devs agree that I know my opinion’s not rare, either.
You’re right that Visual Studio is the go-to IDE on Windows, in the same way that Xcode is on Mac. But just as Mac users don’t necessarily bust out Xcode for everything, Windows devs don’t necessarily bust out Visual Studio. Using nmake from the command line is old as dirt and still common (it’s how we build Factor on Windows, for instance), and I’ve seen mingw-based Windows projects that happily use cross-platform GNU make Makefiles. CMake is also common, and has the benefit that you can generate Visual Studio projects/solutions when you want, and drive everything easily from the command line when you want. These and similar tools designed to be used without Visual Studio are heavily used enough and common enough that Microsoft continues to release the command-line-only Windows SDK for the most recent Windows 10, and they do that because plenty of devs really do only want that, not all of Visual Studio.
For reasons you point out elsewhere, there’s a lot that goes into supporting Windows beyond the Makefile, to the point that concerns about cross-platform make may be moot, but “Windows devs will use Cygwin” seems reductionist.
I don’t think windows devs use cygwin. I think that non-windows devs use cygwin (or mingw or one of the ten other unix-toolchain-for-windows environments) so that they don’t need to go through the hoops to become windows devs.
In other words, I’m not really sure who the audience is for OP’s argument re: building on windows.
If you’re building on windows & you are a windows dev, why care about make at all? If you’re building on windows & you are not a windows dev, why care about the first group at all? In my (dated & limited) experience these two ecosystems hardly interact & the tooling to make such interaction easier is mostly done by unix-first developers who want to check windows builds off the list with a minimum of effort.
I think you need to take into consideration that there are also libraries. Sure, if you have an application developed on non-Windows, the easiest way to port it to Windows is building it in MSYS, with MinGW, or possibly Clang. But if you develop a library that you wish Windows developers be able to use in their projects, you have to support them building it with their tools, which is often MSVC.
who builds on Windows outside of an environment like cygwin?
I don’t understand this question. There are lots of software applications for Windows, each one has to be built, and cygwin is used really rarely. And CMake is precisely for supporting Visual Studio and gcc/clang at the same time, this is one of the purposes of the tool.
In software applications that are only for windows, supporting unix make isn’t generally even on the table. Why would you, when a lot more than the build system would need to change to make a C or C++ program aimed at windows run on anything else?
It only really makes sense to consider make for code on unix-like systems. It’s very easy to cross-compile code intended for unix-like systems to windows without actually buying a copy of windows, and it’s very easy for windows users to compile these things on windows using mechanisms to simulate a unix-like system, such as cygwin.
There are a lot of alternative build systems around, including things like cmake and autotools that ultimately produce makefiles on unix systems. If your project actually needs these tools, there are probably design issues that need to be resolved (like overuse of unreliable third party dependencies). These build systems do a lot of very complicated things that developers ought not to depend upon build systems for, like generating files that play nice with visual studio.
In software applications that are only for windows, supporting unix make isn’t generally even on the table.
Every team I’ve been on which used C++ has used CMake or FASTBuild, so supporting Unix builds at some point isn’t off the table, and it makes builds a lot easier to duplicate and simplifies CI/CD. Every project I’ve seen with build configuration in a checked-in Visual Studio solution makes troubleshooting build issues a complete nightmare since diffs in the configs can be hard to read. CMake’s not great, but it’s one of the more commonly supported tools.
If your project actually needs these tools, there are probably design issues that need to be resolved (like overuse of unreliable third party dependencies).
I’m not sure how this logically follows.
These build systems do a lot of very complicated things that developers ought not to depend upon build systems for, like generating files that play nice with visual studio.
Using CMake (or something else which generates solution files for Visual Studio), provides developers options on how they want to work. If they want to develop on Linux with vim (or emacs), that’s fine. If they want to use CLion (Windows, Mac or Linux), that’s also fine. There really isn’t that much extra to do to support Visual Studio solution generation. Visual Studio has a fine debugger and despite many rough edges is a pretty decent tool.
Who builds cross-platform applications not originally developed on windows outside of an environment like cygwin?
Windows developers don’t, as a rule, care about the portability concerns that windows-first development creates, & happily use tools that make windows development easier even when it makes portability harder. And cross-platform frameworks tend to do at least some work targeting these developers.
But, although no doubt one could, I don’t think (say) GIMP and Audacity builds are done through VS. For something intended to be built with autotools+make, it’s a lot easier to cross-compile with winecc or build on windows with cygwin than to bring up an IDE with its own distinct build system – you can even integrate it with your normal build automation.
I work on software that is compiled on Windows, Mac, and Linux, and is generally developed by people on Windows. We do not use Cygwin, which, as gecko points out above, is truly awful. If I need to use Linux, I use WSL or a VirtualBox VM. And yes, I and my team absolutely care about portability, despite the fact that we develop primarily on Windows.
E.g., about 5% of folks signed this. Many bigger packages like GCC would have more than one maintainer, too.
Additionally, it’s been pointed out on another platform that this whole thing is Guix’s response to disagreeing with Dr RMS on his GNU Kind Communications Guidelines some 11 months ago, because they weren’t punitive enough:
I’d say the whole thing was brewing for quite a while. I’d be surprised for the list of signatories to change in any significant manner. Just looking at these numbers and the dates, I’d be surprised if many more folks hadn’t been afforded the opportunity to join the mob but declined. The fact that they hide all these things reveals their methods of action.
We are not hiding anything. Stallman is not a victim. We are not a mob. We are a collective of GNU maintainers who have had enough, and we’re hardly alone in the world with having had enough with RMS. He’s had good philosophies that persuaded all of us at one point, but his leadership and communication have been sorely lacking.
I actually expect the number of signatories to increase a little. I know of at least a few who wanted to sign but just didn’t get around to it because they were busy. Of those 400 GNU maintainers, most are inactive. GNU is not as cohesive as you might think, which again I think shows lack of good leadership.
Yes, there’s only 20 or so of us, but we represent some of the biggest GNU packages.
We are not hiding anything. Stallman is not a victim. We are not a mob. We are a collective of GNU maintainers who have had enough, and we’re hardly alone in the world with having had enough with RMS. He’s had good philosophies that persuaded all of us at one point, but his leadership and communication have been sorely lacking.
I actually expect the number of signatories to increase a little. I know of at least a few who wanted to sign but just didn’t get around to it because they were busy. Of those 400 GNU maintainers, most are inactive. GNU is not as cohesive as you might think, which again I think shows lack of good leadership.
Yes, there’s only 20 or so of us, but we represent some of the biggest GNU packages.
There’s so much misrepresentation here I don’t even know where to begin.
There’s already at least a couple of people on the list that aren’t even developers.
Not familiar with GNU Octave, I originally got the impression that you were the sole person responsible for the project. In fact, that’s what the word “maintainer” means in most other projects. Which, per further examination, could not be further from the truth: there’s a bunch of commits over at http://hg.savannah.gnu.org/hgweb/octave, and none of them seem to be from you. When searching for your name, http://hg.savannah.gnu.org/hgweb/octave/log?rev=Jordi, we get a whole 10 results, spanning 2014 to 2017. Do you use some other ID within the project? Or is this pretty much representative of your involvement with the project you claim to be an official representative of? Wikipedia has a link to http://hg.savannah.gnu.org/hgweb/octave/file/tip/doc/interpreter/contributors.in, which reveals that there are a whole 445 contributors to GNU Octave, and you’re the only one of these people who is a Guix signatory listing Octave.
Sure, some of the folks on the list are actual maintainers and/or are responsible for significant work. But do you fail to see how simply putting a random list of semi-active, part-time, and drive-by developers as signatories behind cancelling the founder and 80-hours-per-week full-time advocate of Free Software is not exactly representing things as they are? How is that not a mob?
Also, what is your exact intention when presenting yourself and everyone else as a “maintainer”, and with statements like “we represent some of the biggest GNU packages”? Were you officially designated to speak on behalf of any of these projects? Or is the whole intention to confuse others in a way similar to how you had me confused with your hat here on Lobste.rs? I don’t have time to check out every name (and some do check out, some don’t), but it is beyond obvious that you don’t actually represent the views of GNU Octave as you imply, and presenting yourself as an active “maintainer” shows that you have no interest in spreading any truths anywhere, either.
As much as I dislike the backstabbing of this “joint statement” by GNU developers, I have to say that you are grossly misrepresenting JordiGH’s contribution to Octave. He’s easily the main scientific contributor to this project after Eaton himself (which makes me even sadder that he actually signed the backstabbing manifesto).
I’m very sad to hear about that. From the outside it looks like you are part of the petty smearing campaign against free software. I fail to understand how this “joint statement” at this moment helps anybody (besides mattl and the like).
I admire the work of most people who signed this statement, and jwe is one of my heroes and sources of inspiration, as much as RMS. Even if I agree with the principle that the FSF/GNU leadership can change for the good, the second part of the statement that you signed reads as a callous backstabbing. I literally cried when I read the list of signatories. I cannot help but feel a bit guilty today when recommending Octave to my students.
GNU leadership and its structure needs to change. Hell, GNU needs a structure to begin with – we don’t have any sort of organisation yet and thus our ties and cohesion between GNU packages over the years have weakened.
Even if RMS were a perfect saint and the hero many of us made him out to be, nobody should be appointed leader for life. We rotate other leadership positions, and we should do the same with this one.
No plan yet, just a plan to discuss. I am personally in favour of a steering committee. It seems to have mostly worked for gcc. I got to see some gcc people a couple of weeks ago for GNU cauldron, and that was fun. I would like something more like that.
It links using “GNU leadership has also published a statement”, which, together with the surrounding text, kinda implies that GNU leadership is multiple people, but the link target is a mail by Stallman saying that he will talk to the FSF as a single person.
So, if rms resigns from GNU and suffers any negative mental health outcomes, would you believe yourselves to be contributing factors or perhaps even responsible?
I don’t know about abuser playbooks, I’m just thinking about it in terms of common decency for folks that have had internet mobs arrayed against them (correctly or incorrectly).
I certainly think it would be tacky if, say, a bunch of trolls got somebody ousted from their position in an open-source project and then refused to take responsibility if that person was harmed. The only salient difference to me here seems that you think (and correct me if I’m wrong!) of rms as an acceptable target.
RMS getting fired over the Minsky remarks is utter bullshit, and it was a total violation of due process, journalistic integrity, and other niceties of civilization… but that doesn’t mean he should be in a leadership position. I think the whole Epstein business was used as a pretext for people who already wanted him out (for good reasons) to kick him out (based on a bad reason).
RMS getting fired over the Minsky remarks is utter bullshit,
He wasn’t fired. He voluntarily left, of his own accord, because of comments that he made while interjecting into a conversation that he was not originally part of. The comments are in line with culturally taboo statements he has made publicly on his website for over 20 years, which people have willfully ignored for the sole reason of giving him the benefit of the doubt. This time, he crossed a line because a) the statements that he made are incredibly adjacent to, and almost identical to, arguments made by people who abuse young children (regardless of his intent), and b) there were abuse survivors in the conversation that he interjected into, who were likely affected by those statements.
and it was a total violation of due process, journalistic integrity, and other niceties of civilization…
Well, no. Not only is his position as chairman not subject to those concerns, he himself violated said niceties of civilization.
but that doesn’t mean he should be in a leadership position. I think the whole Epstein business was used as a pretext for people who already wanted him out (for good reasons) to kick him out (based on a bad reason).
Indeed. The word is that he has continually scuppered several projects (including GNU’s version of DotNET, which had a presence on the steering committee!), which caused non-GNU alternatives to gain the upper hand, defeating GNU’s objectives of software freedom in the process.
Part of leadership is your subordinates not wanting to be led by you anymore. This doesn’t make him a target.
Harm reduction may be a goal in these situations and, if you have a look at the statement, it gives appropriate credit to RMS, but also makes it clear that his time is over.
Perhaps I’m out of the loop. I’m aware of Stallman’s anti-social behavior in the past, but is there some new reason this is happening now, rather than years ago?
Edit: Oh, I am definitely out of the loop. I just read about Stallman’s Epstein remarks. How vile.
I don’t think that the Epstein remarks, at least what I’ve heard of them, are anything new or surprising if you’ve followed Stallman for a while. It’s not out of character at all.
Well, it may be nice to have a different leadership for the GNU project. Why not discuss it with the man himself? Has anyone tried before going public?
So I guess, that’s a no. “Unkind” is too kind a word.
Edit: to clarify this comment, this all reeks of “the ends justify the means”. While I agree with the ends, the means do not look good, and it changed how I perceive both RMS & the projects under the GNU umbrella.
I hope I did not sound angry. I’m just annoyed at myself (mostly). I wish you luck in this endeavour and other future projects. :)
“Tabs for indent, spaces for alignment” requires a lot of discipline. Even if you’re disciplined, it’s easy to screw up and the mistake isn’t noticeable until you change your tab width display preference. In my experience working on teams, I have found that it’s hard enough to get people to use tabs or spaces without screwing that up. To ask them to use a specific mixture of tabs and spaces is an unwinnable battle.
It’s hard to enforce meaningful rules about line length when you use tabs for indentation. When you use spaces, you can have a simple rule “lines shall not exceed 100 characters except in the case of user visible strings on their own line”. When you use tabs for indentation, you have to change your rule to “lines shall not exceed 100 characters wide based on a tab rendering width of 8 spaces except in the case of user visible strings on their own line”. Many people will choose a tab rendering width narrower than 8 spaces and as a result it will be very easy for them to author lines which are under 100 characters in their environment, but span past 100 when tabs are expanded to 8 space width.
Maybe the first point can be mitigated by having a highlighting rule that paints ^\s*( \t|\t ).* bright red. Or by prettier! Code auto formatters are lovely. ❤
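The same highlighting pattern can also run outside the editor, e.g. in CI. A minimal sketch using GNU grep’s Perl-compatible mode (the `src/*.c` paths are hypothetical):

```shell
# Flag lines whose leading whitespace mixes tabs and spaces,
# using the same pattern as the highlighting rule above.
# Requires GNU grep (-P enables Perl-compatible regexes).
grep -nP '^\s*( \t|\t )' src/*.c && echo "mixed indentation found"
```

The exit status of grep makes this easy to wire into a pre-commit hook or CI check, failing the build when mixed indentation appears.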
Use visible whitespace in your editor and it is easier to be disciplined about. Combined with a .editorconfig file and IDE tools to detect mistakes, and it is not as hard as all that.
I’ve never found line length restrictions to be valuable, so this is not a consideration for me. In a world with word-wrap, where many people working in tech have 27” monitors, and in which a horizontal scrollbar exists, it just seems unimportant. Code in practice in my experience tends to fall within reasonable limitations naturally, anyway. The only times I’ve rejected a code review because a line of code was too long was because the engineer was trying to do too many things on one line anyway (ie, ternary operator magic, or math that could be abstracted into local vars for readability). Character restrictions for line length seem overly pedantic to me unless you have a specific technical reason to have them (ie, this has to display on an old TTY or has to be printed out at a specific font size or something). Why would you need them in the general case?
Word wrap is garbage, especially for diffs / 3 way merges. On my 27” 2560x1440 display at a normal font size, 80 chars wide fits 3 side by side panes of code. Or the diff view in my IDE (2 panes), with space for the sidebar and gutters. Or just cross-referencing 2 files.
Working on code with a hard 80 chars rule has been magnificent.
I like to split my window to 2 or 3 columns, so even though I have a wide screen monitor, I value 80-column source code restrictions. This allows me to simply have 2 or 3 files open at the same time which I can browse independently.
Well. This is being presented as some sort of binary absolute – that is, “spaces = 0% accessible, tabs = 100% accessible” – with no mention of compromises or tradeoffs or deeper investigation. But actual accessibility never works that way. There’s always a spectrum of options, each coming with its own set of tradeoffs, and navigating them well requires context and understanding and weighing those tradeoffs against each other.
So color me skeptical of the rush by many people to declare victory in the tabs-versus-spaces holy war on the grounds that “it’s proven more accessible to use tabs, so tabs win”. Those people didn’t realize until very recently that there even could be accessibility implications to the choice. I wonder what thing they don’t realize today they’ll get to find out about tomorrow.
Fair point. I have been an avid spaces advocate, mostly for the sake of consistency. I have never really cared much about alignment and prefer to align things at the beginnings of lines. But what this pointed out to me was that while my arguments for spaces have mostly been preferences about code structure, the other side has a real, legitimate need that surpasses my preference. Perhaps there is another accessibility case that supports spaces; I just haven’t heard it in the 20 years I have been having these discussions. But to be fair, I hadn’t heard this one until today either; while I have worked with deaf programmers, I have yet to work with anyone visually impaired.
The first issue that comes to mind for me, with this specific situation, is that it’s almost certain to come down to tooling. Accessibility tools tend to kind of suck, but that creates a situation where you need to be very careful to maximize someone’s ability to use accessibility tools while not building permanent assumptions in about the abilities and shortcomings of the specific tools available right now. If we all rush out and convert to tabs-only today without any further thought about it, we run the risk of locking in the shortcomings of today’s tools in ways that may not be good tomorrow.
Which is why I’d really prefer to hear educated opinions from people with expertise in accessibility tooling and practices before coming to any decisions based on this.
The title is a bit hyperbolic but I’ve also noticed the same effect during my experience deploying large-scale H2: you can get hit by hundreds or thousands of requests simultaneously, unlike with HTTP/1, where the browser throttled this. This mostly happens with legacy or badly architected clients/client-side applications (like overgrown WordPress instances, etc).
It’s also a chance to clean up the client, it used to be a more hidden issue that is just now more visible due to H2 removing some bottlenecks.
Strictly speaking, this was a problem before already; if a bad actor used a client that didn’t do any limiting itself, they would be able to break down your application quite easily.
Preventing this sort of thing beforehand can be quite difficult, though.
I’d say it’s mildly clickbaity, but I wouldn’t say it’s hyperbolic. It is generally a mistake to make production configuration changes without an understanding of the impact. So in this case it was a mistake to enable HTTP/2 without a strategy for the change in traffic patterns. HTTP/2 is a net positive, generally, but it’s not a change to make lightly.
I feel like this article is conflating several different notions of “safe”. For instance, the author considers Rust’s goals to be “safe”, however Rust does not prevent memory leaks - which the author regards as “unsafe”. Meanwhile, Java - like Rust - is certainly quite “memory safe” (ie, it prevents accessing invalid memory). If you access a null in Java, it doesn’t result in undefined behavior - it crashes with an exception, which is surely safer than what some other languages might do.
To be useful I think we must distinguish “safety” from things like “sound”, “secure”, “statically verifiable”, etc. They are related but different concerns, and most languages outside of maybe Coq and Spark (even Rust, with its “unsafe” blocks) accept some level of fudge factor with them in the interest of getting work done.
Yeah I found it to be an odd grab bag. I thought there was a formal definition of “safe” that means that the program either halts or the behavior is predictable from the source text. Java/Python’s exception on null is “safe” in that respect, but C’s undefined behavior is not.
You can also divide by zero or write infinite loops in Rust/Swift/etc. You could fail to handle EINTR. Does that make them “unsafe”? The term really does need a definition for this blog post to be meaningful.
Python and Java aren’t “safe” by that definition: Consider non-domain errors like running out of disk-space, or low-memory conditions. There are languages that are safe by that definition (like Zig) but given how few players there are at that table, I never assume that’s what someone means by “safe”.
Knowing it isn’t a protected term, I treat the term “safe” like I treat the term “clean”: I appreciate the author is trying to point to something that makes them feel better, and often that thing is useful or interesting, so I can try to understand why that thing would make my life easier instead of worrying about whether MyFavouriteLanguage™ has this particular feature-label with some definition of this feature-label.
Here, the author has given a definition up-front:
The prime directive in programming is to write correct code. Some programming languages make it easy to achieve this objective. We can qualify these languages as ‘safe’.
Ok, so the author measures “safety” as the function of how easy it is for a programmer to write code correctly in this language. And yes, by that definition Java and Python (and other stackoverflow-oriented programming languages) are unsafe because most programmers are unable to write code correctly in those languages. From there, the author suggests four “language features” that a language that is “more safe than Java” has. Hooray.
I have a very different issue with the article, and that’s that I don’t see the point of talking about these things. I would see the audience going one of four ways:
I’m not a Java Developer, and MyFavouriteLanguage™ doesn’t have these features, and “I don’t have any problems” (Blah blah blah) so maybe they aren’t that important after all…
I’m not a Java Developer, and MyFavouriteLanguage™ does have these features, so I already know all this HUR HUR HUR
I’m not a Java developer, so I can’t use this information to make Java better.
I’m in management/eat paste, so I don’t know what any of this means.
None of them are good, and even if the author “wins” (whatever that means) and convinces someone who thought Java was great that it isn’t, so what? Do they somehow benefit if more people think Java sucks?
And yet the prevailing response to something like this is to ask for a better definition. After all, we want to understand how MyFavouriteLanguage™ stacks up! We want the debate! But to me, asking for rigour here is just putting lipstick on a pig, and may even cause real harm in (us collectively) trying to understand how to make programmers write correct code quickly that runs fast.
I don’t like Java, know about other options, and have to write it since the managers demand that language.
I am a Java developer who is required to write standard Java for maintainability to help future, disposable programmers or come-and-go volunteers. I can’t use better ways to write Java in this company or open project.
I am a Java developer with lots of existing code and use for the ecosystem (esp libraries and IDE). I’m not an expert in compilers or program analysis. I can’t fix these problems while still writing Java even if managers or open collaborators would allow it.
The idea that anyone would consider a language “safe” which has a type system that doesn’t prevent null pointer exceptions is very strange to me. I mean, surely that’s not enough to qualify on its own, but it’s certainly one prerequisite.
The main problem I have with Mastodon is how Twitter-like it is. What I want is something more Google Plus or Facebook-like, where I have posts associated with my identity (or some discussion group) - of some arbitrarily large length, certainly longer than 500 characters - that people can comment on. I hate how Twitter conversations are scattered all over different peoples’ profiles, with a ridiculously difficult to navigate thread view, rather than centralized in relation to the originating post. This probably sounds like “old man shouts at cloud” talk, but I believe it is useful. Moreover, it’s not Twitter that I want to replace, by and large - it’s Facebook and G+.
As for excluding Nazis…as noble of a goal as that is, depending on moderation standards it’s a good way to create an echo chamber. If we come to a point where the “conservative” mastodon-sphere is completely cut off from the “liberal” mastodon-sphere, then Mastodon simply will not be a forum to facilitate dialog that reconciles our differences. Perhaps that’s okay; I have no problem with moderated safe spaces, and there’s no reason Mastodon has to be for that. But it’s worth being mindful of the choices we make that further divide us from one another. Exposure to differing points of view is the only way I’ve ever seen extremism tempered.
As for excluding Nazis…as noble of a goal as that is, depending on moderation standards it’s a good way to create an echo chamber.
Absolutely no one is arguing that differing opinions should be banned, just that if you come in ranting about “racial replacement” or “(((social engineering)))” or “(((Soros))) financed it!” (or whatever bullshit is popular with these idiots this week) you should probably, you know, sod off.
I have no idea how anyone can go from “we should ban literal Nazis” to “we should ban differing opinions”.
This is like saying we should let homeopaths or the anti-vaxx crowd make presentations at medical conferences, otherwise it would be too much of an “echo chamber”.
Mastodon simply will not be a forum to facilitate dialog that reconciles our differences
I don’t want to talk to literal neo-Nazis to “reconcile our differences”. These people are toxic assholes. If you literally believe that white people are superior then no amount of conversation is going to convince us. The only effect these people will have is driving away genuine good-faith contributors – engaging with toxic assholes is not what people want to do – and recruiting new people to their cause.
Exposure to differing points of view is the only way I’ve ever seen extremism tempered.
Have you already forgotten what happened last week in Christchurch?
You’ve misunderstood me. I’m not talking about reconciling differences with “literal neo-Nazis”. Perhaps I’m in danger of making a slippery slope argument, but it doesn’t seem like such a leap to think that federated social networks might realistically sever themselves from one another to create echo chambers, even beyond just ridding ourselves of the most extreme speech (ie, Nazism).
You seem to be reacting awfully aggressively to what I feel was a fairly moderate concern on my part. Please pull it back a bit if you want to continue discussing this topic cordially.
With my gripes about comments, Diaspora(/Friendica/Hubzilla) seems more my speed. It does not seem to have the penetration of Mastodon for whatever reason (or maybe just not the same buzz), but Friendica and Hubzilla are apparently compatible.
I write a post on kineticdial.com—it receives a couple hundred reads.
I write a post on Medium—it receives tens of thousands of reads.
It really depends on what problem I am trying to solve for. Am I trying to get the content I write to be read by the most people or am I trying to develop a personal brand largely for employers considering hiring me?
Genuinely honest question: if it’s a personal blog, why do you care about how many readers you’re getting? I understand that getting absolutely zero views is kinda depressing, but with a few dozen readers, I feel content. My blog is just my personal space for me to ramble on about things I care about. It’s personal.
I guess it’s human nature to always want more, but I dunno, I just don’t feel that with the number of readers reading my blog.
Zero views can be depressing only if you measure it :)
I removed all statistics from my pages a while ago (did not check GA before anyway). While I don’t write as often as I’d like to, when I do, I find obliviousness to my content’s reach liberating.
I’ve found that just getting higher numbers stops mattering pretty quickly for my personal satisfaction. If someone emails me with a genuine question or compliment, it would make my day!
The other day I gave a training talk to a bunch of new employees via video call. One of them recognized me in the hallways and said he thought it was funny, engaging, and interesting. It really did make my day! Much more so than knowing I’m impacting a dozen products by training a dozen engineers. Same goes for code. I know for a fact code that I’ve written touches millions of people every day. But that stat became pretty meaningless quickly. If one of them said they liked a feature I worked on, that would mean a lot more to me.
Fuzzy feelings beat pure numbers for me. I suspect looking at blogging from that perspective will push you toward enabling comments and encouraging tweets and email.
True. I don’t have any kind of analytics on my blog either. But I have a couple of friends who follow my blog, so that’s how I know :) But it wouldn’t matter if they stopped reading (perhaps they have already and they’re too polite to tell me), I’d still write about the same things at the same frequency with the same writing style. That’s the beauty of the internet. I hope we don’t ever lose that.
I’ve had the opposite experience: my website pieces got far more readers than my medium pieces. This could just be because I’ve written a lot more and my topic has changed, but it’s still a data point.
More importantly to me, I’ve gotten more engagement from website pieces. People are more likely to email me about them.
I speculate, but can’t prove, that it helps that the website is a straight-forward, minimalist design that takes people straight to good content without distractions or asking pardon for interruptions. A better user experience.
Medium has tools for discovering interesting content. You can browse blog posts not just by author, but by category and tag, and at the bottom of every post are “related reads” - links to articles by the same author or by other authors that might be relevant to your interests. Combined with the traffic generated by a cohort of popular bloggers, that means impressions on your own writing are much more likely.
Compare that with a personal website, where people will only discover it if they go to your site specifically, happen to find you on google, or have you in their RSS feed.
Not the shiny tools: the problem is putting trust and data in an organization whose incentives are aligned against you, now or potentially in the future. It’s why I strongly push for:
Open formats and protocols with open-source implementations available to dodge vendor lock-in. Enjoy the good, proprietary stuff while their good behavior lasts. Exit easily if it doesn’t.
Public-benefit and/or non-profit organizations chartered to do specific good things and not do specific bad things. Ghost is the best example in that area. Alternatively, self-hosting something that’s easy to install, sync, and move on a VPS at a good company like Prgmr.com, with local copies of everything.
Then, you don’t care if the vendor with shiny tools wants to put up a paywall on their version of the service. It will be their loss (No. 1) or not happen at all (No. 2). We must always consider economic and legal incentives in our investments of time, money, and data. That’s the lesson I took way too long to learn as a technical person.
I write on my own website and post it to websites like this. Someone posted a link to my website on Hacker News, it hit the top, and I got thousands of views without any need for Medium.
I’ve been using kakoune for around a year now for some things, coming from both Vim and Spacemacs. The noun-verb ordering is a revelation. Being able to see what I’m operating on before I hit a button is a big bonus.
Sort of. In a way, it’s as if you’re in visual mode by default. But the fact that the cursor works differently, combined with multiple selections and better integration with search, makes it arguably more ergonomic.
What is the motivation for this editor? It’s not clear why I would choose this over anything else, especially when many programming environments support Emacs/vim style interaction.
You can read the justification here. Largely it is based on the idea of switching commands from vim’s verb-object pattern to an object-verb pattern, which enables the useful behavior of showing the user what they’re modifying before they modify it.
Combined with some other useful features like multiple selections, and a client-server model like neovim, I have to admit it’s pretty appealing to me. I’ve been a vim user for about 20 years, however, and it would likely take quite a lot of retraining to switch now. Edit: Not to mention the fact that no official Windows support is planned; I prefer to use the same editor on all operating systems if possible.
I was a Vim user for 20 years, and after using Kakoune for two or three weeks I started finding Vim frustrating and clumsy. That’s partially because Kakoune’s more orthogonal keymap makes Vim’s x and v redundant, so it replaces them with other commands, but also because of Kakoune’s movement-is-selection model. In Vim, to delete a word, I hit w until I reach the end of it (but not past it!) and then db to delete it, or sometimes bdw if I haven’t had my coffee yet. In Kakoune, I hit w until it’s highlighted, and then d.
This is something I have explored when trying out versor-mode for Emacs. I had no idea Kakoune did the same thing. It’s a very powerful paradigm to start treating editor navigation as a coordinate system for various dimensions of a text file.
In versor-mode, these coordinate axes are an explicit modal choice, but setting it implicitly based on the last navigation command sounds highly useful.
As Vim moves more towards the Emacs model of “do it all inside” (following Neovim’s lead), I became less inclined to buy into this model. So the thing that really made me look at Kakoune isn’t what it did – but what the author insists it shall NOT do. From giving window control over to the likes of i3/tmux/$X to delegating to the shell – I think this approach has value, and I think it will continue to benefit from this core decision.
“Working Effectively with Legacy Code” by Michael Feathers.
Most of my career has involved some amount of legacy code, but my current position features a mountain of 90s “C-with-classes-style” C++ - and my standing orders are to rewrite much of it for the modern era. This book takes a very useful approach to the topic, and I’m enjoying it a lot. It is from 2004, though; if anyone has more recent recommendations, I would welcome them!
I enjoy much of Codeless Code, but very few entries seem to qualify as “koans” in any true sense. Most seem closer to fables. The AI Koans entries from the Jargon file are closer.
Git terminology is a clinical example where many (most? definitely not all) terms actually make sense once you understand how it works, but make almost no sense in concert with other terminology or when you don’t know the implementation details.
Leaky terminology.
The problem I’ve always had with Git is that it’s really a hodgepodge of different tools developed by different people without a unified “vision”, and it feels like it. It desperately needs a UX redesign to fix that, but the ship has probably sailed.
Compare it with something like Mercurial (hg) which by comparison feels intentionally designed, consistent, and the terminology all makes more sense to me. But Git has the market share (well, and historically better performance), so it’s what we use.

It wasn’t even that; many of the most problematic commands were in fact developed by the same people.
The issue is that Git was developed completely bottom-up, the data model came first, followed by operations on this data model, and the high-level operations were basically convenience shortcuts to a bunch of lower level ones.
git’s high-level CLI was not designed, it was grown out of automation shell scripts. Which were glommed together in terms of their low-level concerns rather than their high-level (top-down) sensibility. Which is why e.g. git checkout does 15 seemingly unrelated things: bottom up, all those unrelated things have a lot of machinery in common, so they all went into checkout, because most of the machinery was already there for previous tasks, so might as well.

A lot of the terminology is also idiosyncratic because it comes from a pretty closed-off group with its own lingo (the Linux kernel developers) and it was built by outright looking down on what came before.
This. I cringed when skimming the list, because several of the worst-offender terms did have better names before. When we moved to git, we taped sheets of paper to the wall with advice like “shelve is called stash now” and “colon means delete”, and – while things have gotten a lot better over the past decade (switch, restore) – there is still a long way to go.

Exactly! Great comment! git has been tracked by git nearly from the beginning, so the author (Linus) had to write the low-level parts (and the data model) first, so that he could commit git’s source to git (first commit: https://github.com/git/git/commit/e83c5163316f89bfbde7d9ab23ca2e25604af290). Everything else was added later.
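To make the checkout complaint concrete, here is a sketch of the overloading and of the newer single-purpose commands (branch and file names are hypothetical; git switch and git restore need Git 2.23+):

```shell
# One command, several unrelated jobs:
git checkout main            # switch to an existing branch
git checkout -b feature      # create a new branch and switch to it
git checkout -- README.md    # throw away local changes to a file

# The newer, single-purpose equivalents:
git switch main
git switch -c feature
git restore README.md

# And "colon means delete": pushing an empty source to a remote ref
git push origin :old-branch  # deletes old-branch on the remote
```

The three checkout invocations share machinery (writing the working tree from an object in the repository), which is exactly the bottom-up grouping described above.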
Statically-enforced types are annoying when you don’t know what types you need and you want to explore the solutions with code.
A great thing about Ruby is you can use hashes and strings to figure out what you need (same with JS objects) and then create actual types once you’ve got some code working that has allowed you to understand the problem.
With stuff like Java or Go, you are fighting the compiler instead of understanding your domain.
I have the opposite experience: Whenever I’m using Rust, SML or Haskell, I always start with defining my datatypes, and use those to understand my domain. The rest of the code flows naturally from there.
In other words: I explore solutions using datatypes.
Same. And if I don’t know what type I’ll need (like u64 or i32 or whatever), then I can use an alias or wrapper type like in the OP so I can change it for every usage in one location. Starting with types helps me build a model of the problem domain before writing any code.
This idea that types make you “fight the compiler” makes little sense to me. On the contrary, they enable the compiler to help me ensure that what I’m writing is correct from the start, free of needless debugging of runtime problems that types would have prevented.
What I’m referring to here (and should’ve made more clear) is exploring a user experience. With something like Rails, I can create a user experience very quickly, and have it using real data, real third party APIs, etc etc. This requires extremely fast iteration, often making drastic changes to see how they feel or how they work. Static typing would introduce two new steps that aren’t providing value for this particular activity: explicitly defining types, and requiring that the entire app conform to all type checks.
These two steps seem absolutely valuable in production, but for prototyping in an existing app, iterating on user experiences and design, they provide negative value and make the process harder.
Depends on what language you use. Most MLs have type inference, even Rust.
When you’re writing something in a dynamically typed language you also need to ensure that your types match up; you just won’t know if they do until you run the code. Trivial example: 3 + x, where x is a variable containing the string “foo”, will cause an exception at runtime in Ruby. I think it’s reasonable to argue that having that check happen at compile-time makes prototype development faster.

I wonder to what extent the difference between the two sides here is a difference in perspectives between (Web or other GUI) app developers and non-app developers.
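For reference, a minimal Ruby sketch of that runtime failure, and the kind of error message you only see once the offending line actually executes:

```ruby
x = "foo"

begin
  3 + x  # type mismatch, but Ruby only notices when this line runs
rescue TypeError => e
  puts e.message  # e.g. "String can't be coerced into Integer"
end
```

A static type checker rejects the equivalent program before it runs at all, which is the compile-time check being argued for above.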
I never thought to use type aliases that way. Very clever!
I often use that as a first step toward introducing stronger types in a code base written in a strongly typed language by developers who haven’t yet taken to actually using the stronger type system.
Sometimes after adjusting the datatypes, I want to test one function. I know I’ll have to update everything eventually, and I appreciate the compiler’s help with this, but I’d rather try running one example end-to-end first. I shouldn’t need to update code that won’t be used in that example.
I think `-fdefer-type-errors` is supposed to achieve this, and when I used Java and Eclipse it could do this too: I could run the project without fixing all the red squiggles; it would just turn them into runtime errors.

Like the article says, it does not pay for itself. Ruby “domain code” in the wild is full of type errors, nil references, and preventable bugs. I did an analysis recently and over 70% of all our exceptions that led to 500 errors were things a type system would’ve caught. This wasn’t a surprise; all projects I’ve ever been on had this class of errors.
There’s little point in writing domain code quickly but incorrectly. We really need to disabuse ourselves of the notion that static types are 100x slower to write: they’re not. You might throw something together in a day in Ruby where it takes 2 days in Go or something else. That 1-day difference will not make your product fail, especially since you have taken the time to prevent some invalid states along the way.
This is coming from someone with 16+ years of Ruby experience, I don’t dislike the language at all. “Fast to write” is just not a good measure of quality.
See https://lobste.rs/s/z6lpqg/strong_static_typing_hill_i_m_willing_die#c_gfptir for more elaboration on what I’m talking about. It’s not “go quickly from zero to production code”.
Pity that domain code might be dynamically typed. If we have a DSL for our domain, the DSL compiler knows a lot, and thus is in the best position possible to enforce a crapton of invariants up front. I reckon those static checks aren’t free to implement, but they’re likely worth it. Else why are we paying the cost of a DSL to begin with?
Go, while statically typed, is arguably not strongly typed. It’s firmly in the bottom of the uncanny valley of programming languages, which can neither handle nor prevent errors. Been meaning to blog about that …
I’d be interested in hearing why you’d say it’s not strongly typed.
Because, for example, you can multiply two `Duration`s to get a `Duration`.
Newer versions of Go do have something close to enums. You can define something like this:
This will disallow implicit conversions from other integer types, which prevents other kinds of integers being used by accident. The mildly annoying thing is that the constants are not namespaced in any way within a package, so you can’t have two enumerations with the same member names (just like in C, and something C++ fixed long ago).
That’s not really an artefact of the type system, that’s just the standard library authors getting their units wrong.
What about something like Typescript? You can still create arbitrary objects that lets you explore, but you still have static typing. The type inference removes most of the overhead of writing types as well.
I used to think that, but I’m leaning away from it now. I would say that whatever language you use, 75% of your code ends up having very simple types, so there’s no real gain from having dynamic typing. Dynamic typing is bad for exploratory programming with these because if you decide that the arguments should be c, b, a instead of a, b, c, you can’t use an automated tool to fix it. You do save the time of declaring the types up front, but it’s only a net gain if you happen to get it right the first time.
Where dynamic typing is useful is when you have a complicated type relationship that would be a pain to spell out. These don’t happen often, but when they do, then dynamic looks better. In a dynamic language you might say f can take an x, a y, or a z, but in static, you just say f takes an x and write a y to x and z to x converter or add fy and fz methods. And that’s a relatively simple case.
The other thing about static typing is that it sometimes buys you better performance. That doesn’t apply to TypeScript, but it is nice to have when you have it.
This is factually incorrect. Not only do automated refactorings like this exist today, but the earliest examples of automated refactoring that I know of were for a dynamically-typed language, Smalltalk.

Note that this is not directly related to the type system of the language, but to whether one uses runtime reflection vs static analysis.
When I say “can’t” I don’t mean “can’t with any possible amount of effort.” I mean “I can’t in the dynamic languages I use today with the tools available to me now.” If you have a tool that can detect signature breakages in Python and JavaScript without using static type annotations, I guess I’d like to add it to my tool chest, but for most people, the way they do it is by adding static types and using MyPy or TypeScript to catch the breakages.
The discussion is about typed vs untyped, so it seems sensible to me to limit the discussion to things that are inherent differences, not things that merely differ between ‘popular’/‘mainstream’ typed vs untyped languages.

But even if we move the goalposts to things I can do in Python or JS, that is still factually incorrect. Take PyCharm, for example: https://www.jetbrains.com/help/pycharm/change-signature.html
Changing the order of parameters is a pretty low bar.
Without type annotations PyCharm thinks everything is of type Any and it really quickly loses its ability to do interesting analysis or refactoring.
I don’t understand why you think automating this refactoring is so difficult in dynamically-typed languages. Or why you (implicitly) think it would be easy in statically-typed languages.
Suppose you have a function in a hypothetical statically-typed language whose signature takes three parameters in the order `a, b, c`.

And you want to change the order of the parameters to `c, b, a`, as stated in your comment.
If you believe an IDE should be able to perform this refactoring correctly, why do you believe it would not be able to perform the same refactoring correctly in a dynamically-typed language?
Runtime construction of dynamic variables and factory types.
I still don’t see the issue.
The stated problem is a refactoring which changes the order of the arguments from a, b, c to c, b, a. If this is something that tooling for statically-typed languages can handle, then it must also be something that tooling for dynamically-typed languages can handle. Effectively, I’m saying that if the tooling can perform this refactoring for a function whose parameters all have declared types, then it ought to be equally able to perform it for the same function with every parameter typed as `Any`, since that is, in essence, what people like to claim a dynamically-typed language really is.
A way to illustrate my point: `globals()["my_var"] = new_value`. How should a refactoring algorithm detect that you’re doing something like this with arbitrary strings, or, what I’ve actually seen, `globals()["var" + dynamic] = new_value`?

I don’t understand how this is relevant to the actual hypothetical posed.
Also, I don’t understand why you think runtime modification is somehow only a thing in dynamically-typed languages, because it very much isn’t, and nobody suggests that it’s impossible to refactor, say, Java codebases when they rely on tons of runtime configuration, runtime injection, etc. making it impossible to know ahead-of-time which code paths might execute or with which values.
Just use a union/sum type.
Fabulously written and engaging article. Loved it. The prose was superb. I wish I could write even 1/4 as well as the author.
Prior to reading this, I thought I was firmly in the camp of “LLMs are fake”, but I appreciate the nuance of the argument a bit more.
What I don’t understand (and I don’t pretend to understand LLMs well) is this: when I ask a programmer to write a program, I expect the writer to employ creativity, experience, and perspective. Where does that fit into an LLM’s response? The author mentioned new algorithms, which is pretty much what I’m asking about: if LLMs have a “mental” model of the world, can they also use that model to generate creative and novel solutions? If they have a model of the world, I think they should be able to. I’ve never seen a creative solution from an LLM.
I guess I’m still in the camp of “LLMs are fake”.
What do you mean by “creative solutions” here?
I get what I think are creative solutions from LLMs genuinely several times a day - because I’ve developed intuition as to what kind of prompts will get the most interesting results out of them.
“Give me twenty suggestions for ….” is a good format, because the obvious ideas come first but the interesting stuff will start to emerge once those have been covered.
Just in the last 24 hours: “20 ideas for exciting and slightly unusual twists on backyard grill burgers” https://chat.openai.com/share/cc72358b-8915-4f40-98e7-684b9987ef0d
And I got it to brainstorm Spanish nicknames for my mischievous dog: https://chat.openai.com/share/eb8bec31-76d0-464f-a123-a9d0823ad1f8 - I particularly enjoyed “Desenrolladora de Papel” and “Deslizadora de Alfombras”.
I also got an interesting optimization out of it for loading the graph when I tried the word chain example myself: https://chat.openai.com/share/c2b2538e-4d8b-40e2-a603-b9808b932000
I think this falls into the “kaleidoscope for text” bucket. There are interesting and novel juxtapositions of existing ideas, but no truly new ideas. Which TBF, humans rarely produce as well. “Genius” is the term for some work that goes outside of the ordinary bounds and then manages to bring a critical consensus behind it to endorse it. Most people have no genius, so it’s not fair to ask an LLM to have it. But genius is crucial to the movement of history. Without it, there’s just stagnation: the rearrangement of existing ideas.
Oh I really like “kaleidoscope for text”.
Arguably “Genius” by this definition can still be the product of a trickle of innovations building upon one another over decades or centuries. The example that comes to mind is Television, which arguably was invented entirely independently by several people roughly simultaneously around the world. It could be argued that the invention of television was an inevitability, brought about by the confluence of a multitude of innovations over the previous century by thousands of individual humans.
I agree that it’s not fair to expect LLMs to have “genius”, or even to expect them to innovate on their own (absent human input). But could they still be said to “innovate” by putting together the creations of humans (or past LLMs!) in novel ways, the same way humans often do? I think it’s possible, and quite useful.
Building on @simonw’s excellent reply, which you should respond to first:
The whole point of LLMs is that they produce novel, “creative” solutions. The most amazing thing about them is that despite being trained on a corpus of mostly human-generated text, the best ones are still able to generate surprisingly sophisticated results. Here’s another prompt I generated today, to write Raspberry Pi code for cheese-finding robotic mice. Perhaps not the most complicated example, and I had to hold its hands a bit, but certainly relatively novel.
I’m also not sure at all what you could mean by “LLMs are fake”. Clearly they are a real thing that exist. It’s not a “Mechanical Turk” where a human is actually generating responses for you. They’re just programs that take input and produce output, based on a complex algorithm manipulating a huge corpus of text. In what way are they “fake”?
For fun, I asked ChatGPT what it thought you meant. Let me know if it got it right.
Mozilla has been claiming that FF is faster than Chrome for literally years, and maybe it is, at least based on whatever benchmark they’ve decided to highlight on any given day. Unfortunately, in my experience, FF just feels slower, and always has. The reason I switched to Chrome back in 2008 or 2009 was that FF felt like a bloated, clunky Microsoft product and Mozilla appeared to have zero interest in improving the situation, preferring instead to diddle around with a mobile OS, app store, and any number of other distractions. FWIW, I now use Safari, largely because, today, Chrome feels bloated and clunky by comparison.
When is the last time you used Firefox? I switched back to Firefox in 2017 when the new CSS engine was integrated, because now it feels faster to me than Chrome. Chrome also had more stability problems at the time. Quantum was a game changer for Firefox’s performance, and it’s just gotten better.
Using Firefox on macOS vs Chrome or Safari is a challenge. Video playback being the biggest offender.
Quantum was mostly a game changer in pre-quantum firefox vs post-quantum firefox. It made huge strides forward in firefox performance itself.
But even after introducing the new CSS engine, Servo components and all the rest, Chrome still had clear technical advantages (each site gets its own process entirely, while Firefox still shares some things across tabs), and a lot of benchmarks can show that. I use Firefox and Chrome both, approximately 49% of the time each, and as the commenter above said, Firefox often feels slower, and a random site is much more often broken in Firefox than in Chrome. (It’s not that often; it’s just that I almost never run into a site broken in Chrome, while there are a few that are broken in Firefox.)

I do have to admit that most of the time I do not see much difference. But when I do spot it, it’s usually Chrome that has the edge.
On mobile, Firefox is much more useful to me - mostly because I can Ublock Origin stuff with it, unlike the mobile Chrome.
One major slowdown factor in Firefox is adblocker addons: they typically load their entire list early and (in Firefox, unlike in Chrome) can and do block network requests until they have created a compiled version of their thousands of regexes. With process isolation and everything, that slowdown can appear on every new tab.
I feel like it varies a lot by platform, honestly. On Windows, Firefox and Chrome feel about the same; on Linux, Firefox feels faster, but that might be because I have Chrome installed as a Flatpak. On Android, though, Chrome-based browsers feel a lot faster than Firefox-based ones, and, more importantly, use less battery. I put up with the speed and battery differences on Android to have satisfactory ad blocking (Bromite and Mulch have only almost-satisfactory ad blocking).
Any idea what the increase is attributed to? I know some Rust components ended up actually making their way into FF, it would be very cool if this is related to that.
There was a sizeable jump in performance from the accessibility engine rewrite released in May, but other than that I think it’s just been steady improvements by the development team. They’ve been pretty focused on performance to improve the experience of users with screenreader software.
For what it’s worth, Go has also had workspaces since 1.18 as a way to organize groups of modules. This is quite similar to Rust workspaces, conceptually. It was necessary because in practice many projects consist of multiple modules which need to be developed concurrently, and this was inconvenient before.
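A minimal `go.work` file (the module paths here are made up for illustration) ties sibling modules together:

```
go 1.18

use (
	./client
	./server
)
```

With this in place, builds in the workspace resolve imports between the two modules against the local directories instead of published versions.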
The Android app RIF will also shut down citing the same reasons: https://www.reddit.com/r/redditisfun/comments/144gmfq/rif_will_shut_down_on_june_30_2023_in_response_to/
As will my personal favorite client, Sync: https://old.reddit.com/r/redditsync/comments/144jp3w/sync_will_shut_down_on_june_30_2023/
A sad day indeed.
This seems like a non-issue – who builds on Windows outside of an environment like cygwin?

EDIT: who builds non-windows-first applications on windows using windows-specific build systems, rather than unix emulation layers? Supporting users of visual studio is a project in and of itself, & while there are lots of Windows users around, there are very few who have a compiler installed or are liable to want to build anything from source. It makes more sense to do builds for windows via a cross-compiler & ask folks who want to build on windows to use cygwin – both of which are substantially less effort than getting VS to build an already-working project.

I believe that’s your experience, but you and I have radically different experiences as Windows users.
First, to be very blunt: Cygwin is truly awful. It needs to die. Cygwin is not a port of *nix tools to Windows; that’s MSYS. Cygwin is a really weird hacky port of a *nix to Windows. It’s basically WSL 0.0. It does not play well with native Windows tooling. It honestly doesn’t play well with Windows in general. And it’s comically slow, even compared to vaguely similar solutions such as WSL1. If I see a project that claims Windows support, and see Cygwin involved, I don’t even bother. And while I don’t know if a majority of devs feel similarly, a substantial enough group of Windows devs agree that I know my opinion’s not rare, either.
You’re right that Visual Studio is the go-to IDE on Windows, in the same way that Xcode is on Mac. But just as Mac users don’t necessarily bust out Xcode for everything, Windows devs don’t necessarily bust out Visual Studio. Using nmake from the command line is old as dirt and still common (it’s how we build Factor on Windows, for instance), and I’ve seen mingw-based Windows projects that happily use cross-platform `gnumake` Makefiles. CMake is also common, and has the benefit that you can generate Visual Studio projects/solutions when you want, and drive everything easily from the command line when you want. These and similar tools designed to be used without Visual Studio are heavily used enough and common enough that Microsoft continues to release the command-line-only Windows SDK for the most recent Windows 10 – and they do that because plenty of devs really do only want that, not all of Visual Studio.

For reasons you point out elsewhere, there’s a lot that goes into supporting Windows beyond the `Makefile`, to the point that concerns about cross-platform `make` may be moot, but “Windows devs will use Cygwin” seems reductionist.

I don’t think windows devs use cygwin. I think that non-windows devs use cygwin (or mingw or one of the ten other unix-toolchain-for-windows environments) so that they don’t need to go through the hoops to become windows devs.
In other words, I’m not really sure who the audience is for OP’s argument re: building on windows.
If you’re building on windows & you are a windows dev, why care about make at all? If you’re building on windows & you are not a windows dev, why care about the first group at all? In my (dated & limited) experience these two ecosystems hardly interact & the tooling to make such interaction easier is mostly done by unix-first developers who want to check windows builds off the list with a minimum of effort.
I think you need to take into consideration that there are also libraries. Sure, if you have an application developed on non-Windows, the easiest way to port it to Windows is building it in MSYS, with MinGW, or possibly Clang. But if you develop a library that you wish Windows developers be able to use in their projects, you have to support them building it with their tools, which is often MSVC.
I don’t understand this question. There are lots of software applications for Windows, each one has to be built, and cygwin is used really rarely. And CMake is precisely for supporting Visual Studio and gcc/clang at the same time, this is one of the purposes of the tool.
In software applications that are only for windows, supporting unix make isn’t generally even on the table. Why would you, when a lot more than the build system would need to change to make a C or C++ program aimed at windows run on anything else?
It only really makes sense to consider make for code on unix-like systems. It’s very easy to cross-compile code intended for unix-like systems to windows without actually buying a copy of windows, and it’s very easy for windows users to compile these things on windows using mechanisms to simulate a unix-like system, such as cygwin.
There are a lot of alternative build systems around, including things like cmake and autotools that ultimately produce makefiles on unix systems. If your project actually needs these tools, there are probably design issues that need to be resolved (like overuse of unreliable third party dependencies). These build systems do a lot of very complicated things that developers ought not to depend upon build systems for, like generating files that play nice with visual studio.
Every team I’ve been on which used C++ has used CMake or FASTBuild, so supporting Unix builds at some point isn’t off the table, and it makes builds a lot easier to duplicate and simplifies CI/CD. Every project I’ve seen with build configuration in a checked-in Visual Studio solution makes troubleshooting build issues a complete nightmare since diffs in the configs can be hard to read. CMake’s not great, but it’s one of the more commonly supported tools.
I’m not sure how this logically follows.
Using CMake (or something else which generates solution files for Visual Studio), provides developers options on how they want to work. If they want to develop on Linux with vim (or emacs), that’s fine. If they want to use CLion (Windows, Mac or Linux), that’s also fine. There really isn’t that much extra to do to support Visual Studio solution generation. Visual Studio has a fine debugger and despite many rough edges is a pretty decent tool.
Most Windows developers and cross-platform frameworks that I can tell.
I should rephrase:
Who builds cross-platform applications not originally developed on windows outside of an environment like cygwin?
Windows developers don’t, as a rule, care about the portability concerns that windows-first development creates, & happily use tools that make windows development easier even when it makes portability harder. And cross-platform frameworks tend to do at least some work targeting these developers.
But, although no doubt one could, I don’t think (say) GIMP and Audacity builds are done through VS. For something intended to be built with autotools+make, it’s a lot easier to cross-compile with winecc or build on windows with cygwin than to bring up an IDE with its own distinct build system – you can even integrate it with your normal build automation.
I work on software that is compiled on Windows, Mac, and Linux, and is generally developed by people on Windows. We do not use Cygwin, which as gecko point out above, is truly awful. If I need to use Linux, I use WSL or a VirtualBox VM. And yes, I and my team absolutely care about portability, despite the fact that we develop primarily on Windows.
Am signatory, AMA.
Were there any project leaders that refused to sign?
Let’s provide some context here, shall we?
There’s been 20 signatories, and one of them isn’t even a maintainer of any package (they’re a staff member).
There’s close to 400 GNU packages, plus close to 100 additional discontinued GNU packages:
I.e., about 5% of folks signed this. Many bigger packages like GCC would have more than one maintainer, too.
Additionally, it’s been pointed out on another platform that this whole thing is Guix’s response to disagreeing with Dr RMS on his GNU Kind Communications Guidelines some 11 months ago, because they weren’t punitive enough:
I’d say the whole thing was brewing for quite a while, and I would be surprised if the list of signatories changed in any significant manner. Just looking at these numbers and the dates, I’d guess that many more folks were afforded the opportunity to join the mob, but declined. The fact that they hide all these things reveals their methods of action.
We are not hiding anything. Stallman is not a victim. We are not a mob. We are a collective of GNU maintainers who have had enough, and we’re hardly alone in the world with having had enough with RMS. He’s had good philosophies that persuaded all of us at one point, but his leadership and communication have been sorely lacking.
I actually expect the number of signatories to increase a little. I know of at least a few who wanted to sign but just didn’t get around to it because they were busy. Of those 400 GNU maintainers, most are inactive. GNU is not as cohesive as you might think, which again I think shows lack of good leadership.
Yes, there’s only 20 or so of us, but we represent some of the biggest GNU packages.
There’s so much misrepresentation here I don’t even know where to begin.
There’s already at least a couple of people on the list that aren’t even developers.
You refer to yourself and all other signatories as “GNU maintainers”, including the “GNU Octave maintainer” on your hat, but what does it mean exactly?
Not familiar with GNU Octave, I originally got the impression that you were the sole person responsible for the project. In fact, that’s what the word “maintainer” means in most other projects. Which, on further examination, could not be further from the truth — there’s a bunch of commits over at http://hg.savannah.gnu.org/hgweb/octave, and none of them seem to be from you. When searching for your name, http://hg.savannah.gnu.org/hgweb/octave/log?rev=Jordi, we get a whole 10 results, spanning 2014 to 2017. Do you use some other ID within the project? Or is this pretty much representative of your involvement with the project you claim to be an official representative of? Wikipedia has a link to http://hg.savannah.gnu.org/hgweb/octave/file/tip/doc/interpreter/contributors.in, which reveals that there are 445 contributors to GNU Octave, and you’re the only one of these people who is a Guix signatory listing Octave.
Sure, some of the folks on the list are actual maintainers and/or are responsible for significant work. But do you not see how simply putting up a random list of semi-active, part-time, and drive-by developers as signatories behind cancelling the founder and 80-hours-per-week full-time advocate of Free Software is not exactly representing things as they are? How’s that not a mob?
Also, what is your exact intention when presenting yourself and everyone else as a “maintainer”, and with statements like “we represent some of the biggest GNU packages”? Were you officially designated to speak on behalf of any of these projects? Or is the whole intention to confuse others in a way similar to how you had me confused with your hat here on Lobste.rs? I don’t have time to check out every name (and some do checkout, some don’t), but it is beyond obvious that you don’t actually represent the views of GNU Octave as you imply, and presenting yourself as an active “maintainer” shows that you have no interest in spreading any truths anywhere, either.
As much as I dislike the backstabbing of this “joint statement” by GNU developers, I have to say that you are grossly mis-representing JordiGH contribution to Octave. He’s easily the main scientific contributor to this project after Eaton himself (which makes me even sadder that he’s actually signed the backstabbing manifesto).
He’s been busy, but jwe finally got around to signing it too. 24 signatories now.
I’m very sad to hear about that. From the outside it looks like you are part of the pithy smearing campaign against free software. I fail to understand how this “joint statement” at this moment helps anybody (besides mattl and the like).
I admire the work of most people who signed this statement, and jwe is one of my heros and sources of inspiration–as much as RMS. Even if I agree with the principle that the FSF/GNU leadership can change for the good, the second part of the statement that you signed reads as a callous backstabbing. I literally cried when I read the list of signatories. I cannot help but feel a bit guilty today when recommending octave to my students.
GNU leadership and its structure needs to change. Hell, GNU needs a structure to begin with – we don’t have any sort of organisation yet and thus our ties and cohesion between GNU packages over the years have weakened.
Even if RMS were a perfect saint and the hero many of us made him out to be, nobody should be appointed leader for life. We rotate other leadership positions, and we should do the same with this one.
I agree 100% with what you say here, but not with the public statement that you signed, which alienates me.
Who is the staff member?
I don’t know. I wasn’t the one doing the outreaching.
How was this coordinated?
Private emails. We all were kind of aware of each other and Ludovic started an email thread where we discussed this.
You all planning to replace RMS with a new “Chief GNUisance”, or planning to switch to a steering council like Python did?
If there is no plan, then which one do you prefer?
No plan yet, just a plan to discuss. I am personally in favour of a steering committee. It seems to have mostly worked for gcc. I got to see some gcc people a couple of weeks ago for GNU cauldron, and that was fun. I would like something more like that.
I’m confused by this FSF statement: https://www.fsf.org/news/fsf-and-gnu.
It links using “GNU leadership has also published a statement”, which kinda implies with the surrounding text that GNU leadership is multiple people, but the link target is mail by Stallman saying that he will talk to FSF as a single person.
https://lists.gnu.org/archive/html/info-gnu/2019-10/msg00004.html
Is there anyone else or is this just a language oddity?
Just a language oddity. As of right now, nothing has changed and “GNU leadership” is synonymous with “RMS”.
So, if rms resigns from GNU and suffers any negative mental health outcomes, would you believe yourselves to be contributing factors or perhaps even responsible?
Let’s not play into “if you leave me, I’ll hurt myself and it’ll be your fault” abuser playbook.
RMS should get help if he needs it, but not in the form of coddling him in a position of power he’s unfit for.
I don’t know about abuser playbooks, I’m just thinking about it in terms of common decency for folks that have had internet mobs arrayed against them (correctly or incorrectly).
I certainly think it would be tacky if, say, a bunch of trolls got somebody ousted from their position in an open-source project and then refused to take responsibility if that person was harmed. The only salient difference to me here seems that you think (and correct me if I’m wrong!) of rms as an acceptable target.
RMS getting fired over the Minsky remarks is utter bullshit, and it was a total violation of due process, journalistic integrity, and other niceties of civilization… but that doesn’t mean he should be in a leadership position. I think the whole Epstein business was used as a pretext for people who already wanted him out (for good reasons) to kick him out (based on a bad reason).
Which is to say, it’s not entirely that simple.
He wasn’t fired. He voluntarily left of his own accord, because of comments that he made, while interjecting into a conversation that he was not originally part of. The comments are in line with culturally taboo statements he has made public on his website for over 20 years that people have willfully ignored for the sole reason of giving him the benefit of the doubt. This time, he crossed a line because a) the statements that he made are incredibly adjacent to, and almost identical to, arguments made by people who abuse young children (Regardless of his intent) and b) there were abuse survivors in the conversation that he interjected into, that were likely affected by those statements.
Well, no. Not only is his position as chairman not subject to those concerns, he himself violated said niceties of civilization.
Indeed. The word is that he has continually scuppered several projects (Including GNU’s version of DotNET which had a presence on the steering committee!!!) which caused non-GNU alternatives to have the upper hand, defeating GNU’s objectives of software freedom in the process.
Pretending his exit was voluntary is disingenuous.
One of the niceties of civilization is the rule of law, in particular “just because you broke the rules doesn’t mean I get to”. So that’s irrelevant.
They railroaded a guilty man, in other words?
Not sure I follow the phrasing, but perhaps “a good thing done badly” might describe it, depending on whose stories you give credence to.
Part of leadership is your subordinates not wanting to be led by you anymore. This doesn’t make him a target.
Harm reduction may be a goal in these situations and, if you have a look at the statement, it gives appropriate credit to RMS, but also makes it clear that his time is over.
He’s fine. We’re not responsible for his behaviour or his health. He is, and his own actions over the decades are.
But really, he’ll be fine. He’s not a martyr. We need a change in leadership and he needs time to reflect.
What’s the big deal?
I don’t understand the question. Big deal about what?
Perhaps I’m out of the loop. I’m aware of Stallman’s anti-social behavior in the past, but is there some new reason this is happening now, rather than years ago?
Edit: Oh, I am definitely out of the loop. I just read about Stallman’s Epstein remarks. How vile.
If you ask me (which I think you did), this should have happened years ago, but yes, the recent incidents were the final push we all needed.
I don’t think that the Epstein remarks, at least what I’ve heard of them, are anything new or surprising if you’ve followed Stallman for a while. It’s not out of character at all.
Well, it may be nice to have a different leadership for the GNU project. Why not discuss it with the man himself? Has anyone tried before going public?
We’re trying to discuss different leadership. And they’re trying to not go public. I don’t think I can say much more without being unkind.
So I guess, that’s a no. “Unkind” is too kind a word.
Edit: to clarify this comment, this all reeks of “the ends justify the means”. While I agree with the ends, the means do not look good, and it changed how I perceive both RMS & the projects under the GNU umbrella.
I hope I did not sound angry. I’m just annoyed at myself (mostly). I wish you luck in this endeavour and other future projects. :)
There are two reasons that I don’t like tabs:
Maybe the first point can be mitigated by having a highlighting rule that paints
^\s*( \t|\t ).*
bright red. Or by prettier! Code auto formatters are lovely. ❤
Sometimes, though, you can’t use only tabs to get things to align just right; the highlighting rule won’t help you there.
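The highlighting rule above can be sketched as a quick check in Python (an illustrative snippet, not any editor’s actual config; the pattern is the one quoted above):

```python
import re

# Flags lines whose leading whitespace mixes spaces and tabs:
# a space directly followed by a tab, or a tab directly followed
# by a space, anywhere in the indentation.
MIXED_INDENT = re.compile(r"^\s*( \t|\t ).*")

lines = [
    "\tfoo()",    # pure tab indentation: fine
    "    bar()",  # pure space indentation: fine
    " \tbaz()",   # space then tab: flagged
    "\t qux()",   # tab then space: flagged
]

flagged = [line for line in lines if MIXED_INDENT.match(line)]
```

An editor would paint every line in `flagged` bright red; note that the last case (tab indentation followed by space alignment) is exactly the “smart tabs” style, which this rule would flag too.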
Word wrap is garbage, especially for diffs / 3 way merges. On my 27” 2560x1440 display at a normal font size, 80 chars wide fits 3 side by side panes of code. Or the diff view in my IDE (2 panes), with space for the sidebar and gutters. Or just cross-referencing 2 files.
Working on code with a hard 80 chars rule has been magnificent.
I like to split my window to 2 or 3 columns, so even though I have a wide screen monitor, I value 80-column source code restrictions. This allows me to simply have 2 or 3 files open at the same time which I can browse independently.
Example: https://i.imgur.com/Xvj9R.png (not my screenshot, but it looks pretty similar in my setup)
While I understand the reasoning here, these are preferences, and IMO accessibility > preferences.
Well. This is being presented as some sort of binary absolute – that is, “spaces = 0% accessible, tabs = 100% accessible” – with no mention of compromises or tradeoffs or deeper investigation. But actual accessibility never works that way. There’s always a spectrum of options, each coming with its own set of tradeoffs, and navigating them well requires context and understanding and weighing those tradeoffs against each other.
So color me skeptical of the rush by many people to declare victory in the tabs-versus-spaces holy war on the grounds that “it’s proven more accessible to use tabs, so tabs win”. Those people didn’t realize until very recently that there even could be accessibility implications to the choice. I wonder what thing they don’t realize today they’ll get to find out about tomorrow.
Fair point, I have been an avid spaces advocate mostly for the sake of consistency. I have never really cared much about alignment and prefer to align things at the beginnings of lines. But what this pointed out to me was that while my arguments for spaces have mostly been preferences about code structure, the other side has a real, legitimate need that surpasses my preference. Perhaps there is another accessibility case that supports spaces; I just haven’t heard it in the 20 years I have been having these discussions. But to be fair, I hadn’t heard this one until today either: while I have worked with deaf programmers, I have yet to work with anyone visually impaired.
The first issue that comes to mind for me, with this specific situation, is that it’s almost certain to come down to tooling. Accessibility tools tend to kind of suck, but that creates a situation where you need to be very careful to maximize someone’s ability to use accessibility tools while not building permanent assumptions in about the abilities and shortcomings of the specific tools available right now. If we all rush out and convert to tabs-only today without any further thought about it, we run the risk of locking in the shortcomings of today’s tools in ways that may not be good tomorrow.
Which is why I’d really prefer to hear educated opinions from people with expertise in accessibility tooling and practices before coming to any decisions based on this.
The title is a bit hyperbolic, but I’ve also noticed the same effect during my experience deploying large-scale H2: you can get hit by hundreds or thousands of requests simultaneously over H2, unlike HTTP/1, where the browser itself throttled concurrency. This mostly happens with legacy or badly architected clients/clientside applications (like overgrown Wordpress instances etc).
It’s also a chance to clean up the client: the issue was always there, just hidden, and it’s only now visible because H2 removed some bottlenecks.
Strictly speaking, this was already a problem before: if a bad actor used a client that didn’t do any limiting itself, they would be able to bring down your application quite easily.
Preventing this sort of thing beforehand can be quite difficult, though.
I’d say it’s mildly clickbaity, but I wouldn’t say it’s hyperbolic. It is generally a mistake to make production configuration changes without an understanding of the impact. So in this case it was a mistake to enable HTTP/2 without a strategy for the change in traffic patterns. HTTP/2 is a net positive, generally, but it’s not a change to make lightly.
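The traffic-pattern change discussed above comes down to where the concurrency cap lives: HTTP/1.1 browsers effectively capped parallelism at around six connections per host, while H2 multiplexes everything over one connection, so some layer has to impose an explicit limit. A minimal sketch of the idea using an asyncio semaphore (the fetch is simulated; all names here are illustrative, not a real client API):

```python
import asyncio

async def fetch(url: str, sem: asyncio.Semaphore, results: list) -> None:
    # Simulated request; a real client would await an HTTP call here.
    async with sem:  # at most `limit` requests are in flight at once
        await asyncio.sleep(0)
        results.append(url)

async def crawl(urls, limit: int = 6):
    # HTTP/1.1 browsers enforced a similar per-host cap implicitly via
    # their connection pool; over HTTP/2 the cap must be made explicit,
    # or the server sees every request at once.
    sem = asyncio.Semaphore(limit)
    results: list = []
    await asyncio.gather(*(fetch(u, sem, results) for u in urls))
    return results

done = asyncio.run(crawl([f"req-{i}" for i in range(20)]))
```

On the server side the analogous knob is the H2 `SETTINGS_MAX_CONCURRENT_STREAMS` limit, but as the thread notes, a misbehaving client can still open many connections, so capacity planning is needed either way.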
I feel like this article is conflating several different notions of “safe”. For instance, the author considers Rust’s goals to be “safe”, however Rust does not prevent memory leaks - which the author regards as “unsafe”. Meanwhile, Java - like Rust - is certainly quite “memory safe” (ie, it prevents accessing invalid memory). If you access a null in Java, it doesn’t result in undefined behavior - it crashes with an exception, which is surely safer than what some other languages might do.
To be useful I think we must distinguish “safety” from things like “sound”, “secure”, “statically verifiable”, etc. They are related but different concerns, and most languages outside of maybe Coq and Spark (even Rust, with its “unsafe” blocks) accept some level of fudge factor with them in the interest of getting work done.
Yeah I found it to be an odd grab bag. I thought there was a formal definition of “safe” that means that the program either halts or the behavior is predictable from the source text. Java/Python’s exception on null is “safe” in that respect, but C’s undefined behavior is not.
You can also divide by zero or write infinite loops in Rust/Swift/etc. You could fail to handle EINTR. Does that make them “unsafe”? The term really does need a definition for this blog post to be meaningful.
Python and Java aren’t “safe” by that definition: Consider non-domain errors like running out of disk-space, or low-memory conditions. There are languages that are safe by that definition (like Zig) but given how few players there are at that table, I never assume that’s what someone means by “safe”.
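The formal definition sketched above (the program either halts or behaves predictably from the source text) is easy to demonstrate. A hedged Python sketch: both operations below fail, but they fail with a *defined* exception, which satisfies that notion of safety even though the program still crashes, unlike C, where the equivalent operations are undefined behavior:

```python
# Both of these fail, but predictably: the language specifies an
# exception, rather than leaving the behavior undefined the way C
# does for a null dereference or integer division by zero.

outcomes = []

try:
    len(None)  # "null dereference": raises a defined TypeError
except TypeError:
    outcomes.append("TypeError")

try:
    1 // 0     # division by zero: raises a defined ZeroDivisionError
except ZeroDivisionError:
    outcomes.append("ZeroDivisionError")
```

As the comment above notes, this definition still says nothing about non-domain failures like exhausted disk or memory, which is one reason the word “safe” stays slippery.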
Knowing it isn’t a protected term, I treat the term “safe” like I treat the term “clean”: I appreciate the author is trying to point to something that makes them feel better, and often that thing is useful or interesting, so I can try to understand why that thing would make my life easier instead of worrying about whether MyFavouriteLanguage™ has this particular feature-label with some definition of this feature-label.
Here, the author has given a definition up-front:
Ok, so the author measures “safety” as the function of how easy it is for a programmer to write code correctly in this language. And yes, by that definition Java and Python (and other stackoverflow-oriented programming languages) are unsafe because most programmers are unable to write code correctly in those languages. From there, the author suggests four “language features” that a language that is “more safe than Java” has. Hooray.
I have a very different issue with the article, and that’s that I don’t see the point of talking about these things. I would see the audience going one of four ways:
None of them are good, and even if the author “wins” (whatever that means) and convinces someone who thought Java was great that it actually isn’t, so what? Do they somehow benefit if more people think Java sucks?
And yet the prevailing response to something like this is to ask for a better definition. After all, we want to understand how MyFavouriteLanguage™ stacks up! We want the debate! But to me, asking for rigour here is just putting lipstick on a pig, and may even cause real harm in (us collectively) trying to understand how to make programmers write correct code quickly that runs fast.
I’ll add to your list:
I don’t like Java, know about other options, and have to write it since the managers demand that language.
I am a Java developer who is required to write standard Java for maintainability to help future, disposable programmers or come-and-go volunteers. I can’t use better ways to write Java in this company or open project.
I am a Java developer with lots of existing code and use for the ecosystem (esp libraries and IDE). I’m not an expert in compilers or program analysis. I can’t fix these problems while still writing Java even if managers or open collaborators would allow it.
The idea that anyone would consider a language “safe” which has a type system that doesn’t prevent null pointer exceptions is very strange to me. I mean, surely that’s not enough to qualify on its own, but it’s certainly one prerequisite.
I can’t comment on the tool itself, but I’d like to commend the authors on their copiously documented source code. It is remarkably thorough.
The main problem I have with Mastodon is how Twitter-like it is. What I want is something more Google Plus or Facebook-like, where I have posts associated with my identity (or some discussion group) - of some arbitrarily large length, certainly longer than 500 characters - that people can comment on. I hate how Twitter conversations are scattered all over different peoples’ profiles, with a ridiculously difficult to navigate thread view, rather than centralized in relation to the originating post. This probably sounds like “old man shouts at cloud” talk, but I believe it is useful. Moreover, it’s not Twitter that I want to replace, by and large - it’s Facebook and G+.
As for excluding Nazis…as noble of a goal as that is, depending on moderation standards it’s a good way to create an echo chamber. If we come to a point where the “conservative” mastodon-sphere is completely cut off from the “liberal” mastodon-sphere, then Mastodon simply will not be a forum to facilitate dialog that reconciles our differences. Perhaps that’s okay; I have no problem with moderated safe spaces, and there’s no reason Mastodon has to be for that. But it’s worth being mindful of the choices we make that further divide us from one another. Exposure to differing points of view is the only way I’ve ever seen extremism tempered.
Absolutely no one is arguing that differing opinions should be banned, just that if you come in ranting about “racial replacement” or “(((social engineering)))” or “(((Soros))) financed it!” (or whatever bullshit is popular with these idiots this week) you should probably, you know, sod off.
I have no idea how anyone can go from “we should ban literal Nazis” to “we should ban differing opinions”.
This is like saying we should let homeopaths or the anti-vaxx crowd make presentations at medical conferences, otherwise it would be too much of an “echo chamber”.
I don’t want to talk to literal neo-Nazis to “reconciles our differences”. These people are toxic assholes. If you literally believe that white people are superior then no amount of conversation is going to convince us. The only effects these people will have are driving away genuine good-faith contributors (engaging with toxic assholes is not what people want to do) and recruiting new people to their cause.
Have you already forgotten what happened last week in Christchurch?
You’ve misunderstood me. I’m not talking about reconciling differences with “literal neo-Nazis”. Perhaps I’m in danger of making a slippery slope argument, but it doesn’t seem like such a leap to think that federated social networks might realistically sever themselves from one another to create echo chambers, even beyond just ridding ourselves of the most extreme speech (ie, Nazism).
You seem to be reacting awfully aggressively to what I feel was a fairly moderate concern on my part. Please pull it back a bit if you want to continue discussing this topic cordially.
You might find Pleroma a bit more to your liking. I think the limit is 5000 characters for that.
With my gripes about comments, Diaspora(/Friendica/Hubzilla) seems more my speed. It does not seem to have the penetration of Mastodon for whatever reason (or maybe just not the same buzz), but Friendica and Hubzilla are apparently compatible.
Here’s the issue:
It really depends on what problem I am trying to solve for. Am I trying to get the content I write to be read by the most people or am I trying to develop a personal brand largely for employers considering hiring me?
Or you write it on your site and duplicate to medium. Two birds, two stones
Or write a new post for Medium and link to a ton of old content on your site.
Very true!
Genuinely honest question: if it’s a personal blog, why do you care about how many readers you’re getting? I understand that getting absolutely zero views is kinda depressing, but with a few dozen readers, I feel content. My blog is just my personal space for me to ramble on about things I care about. It’s personal.
I guess it’s human nature to always want more, but I dunno, I just don’t feel that with the number of readers reading my blog.
Zero views can be depressing only if you measure it :)
I removed all statistics from my pages a while ago (did not check GA before anyway). While I don’t write as often as I’d like to, when I do, I find obliviousness to my content’s reach liberating.
I’ve found that just getting higher numbers stops mattering pretty quickly for my personal satisfaction. If someone emails me with a genuine question or compliment, it makes my day!
The other day I gave a training talk to a bunch of new employees via video call. One of them recognized me in the hallways and said he thought it was funny, engaging, and interesting. It really did make my day! Much more so than knowing I’m impacting a dozen products by training a dozen engineers. Same goes for code. I know for a fact code that I’ve written touches millions of people every day. But that stat became pretty meaningless quickly. If one of them said they liked a feature I worked on, that would mean a lot more to me.
Fuzzy feelings beat pure numbers for me. I suspect looking at blogging from that perspective will push you toward enabling comments and encouraging tweets and email.
True. I don’t have any kind of analytics on my blog either. But I have a couple of friends who follow my blog, so that’s how I know :) But it wouldn’t matter if they stopped reading (perhaps they have already and they’re too polite to tell me), I’d still write about the same things at the same frequency with the same writing style. That’s the beauty of the internet. I hope we don’t ever lose that.
I’ve had the opposite experience: my website pieces got far more readers than my medium pieces. This could just be because I’ve written a lot more and my topic has changed, but it’s still a data point.
More importantly to me, I’ve gotten more engagement from website pieces. People are more likely to email me about them.
I speculate, but can’t prove, that it helps that the website is a straightforward, minimalist design that takes people straight to good content without distractions or asking pardon for interruptions. A better user experience.
Interesting; why do you get so many more reads on Medium?
Medium has tools for discovering interesting content. You can browse blog posts not just by author, but by category and tag, and at the bottom of every post are “related reads” - links to articles by the same author or by other authors that might be relevant to your interests. Combined with the traffic generated by a cohort of popular bloggers, that means impressions on your own writing are much more likely.
Compare that with a personal website, where people will only discover it if they go to your site specifically, happen to find you on google, or have you in their RSS feed.
Medium also recently rolled out a feature where you’re not allowed to read more than N articles without paying.
No idea if it was a one-time thing, as my wife and I haven’t seen it since. But we were both wtf’ing about it.
Beware the shiny tools.
Not the shiny tools: putting trust and data in an organization whose incentives are aligned against you now or potentially in the future. It’s why I strongly push for:
Open formats and protocols with open-source implementations available to dodge vendor lock-in. Enjoy the good, proprietary stuff while their good behavior lasts. Exit easily if it doesn’t.
Public-benefit and/or non-profit organizations chartered to do specific good things and not do specific bad things. Ghost is best example in that area. Alternatively, self-hosting something that’s easy to install, sync, and move on a VPS at a good company like Prgmr.com with local copies of everything.
Then, you don’t care if the vendor with shiny tools wants to put up a paywall on their version of the service. It will be their loss (No 1) or not happen at all (No 2.) We must always consider economic and legal incentives in our investments of time, money, and data. That’s the lesson I took way too long to learn as a technical person.
You’re referring to the Medium Partner Program to which the author has to explicitly opt in to. If they do, they get a cut of the payday.
What is a read? I have a feeling that most pageloads on Medium are not actual reads, unless there are active metrics on the client side.
They distinguish reads from views in their stats page, so there’s some sort of client-side logic that tries to determine true reads.
I write on my own website and post it to websites like this. Someone posted a link to my website on hacker news and it hit the top, and I got thousands of views without any need for Medium.
I’ve been using kakoune for around a year now for some things, coming from both Vim and Spacemacs. The noun-verb ordering is a revelation. Being able to see what I’m operating on before I hit a button is a big bonus.
Can’t you do this with Vim’s visual mode? v2wd instead of d2w.
Sort of. In a way, it’s as if you’re in visual mode by default. But the cursor works differently, and that, combined with multiple selections and better integration with search, makes it arguably more ergonomic.
What is the motivation for this editor? It’s not clear why I would choose this over anything else, especially when many programming environments support Emacs/vim style interaction.
You can read the justification here. Largely it is based on the idea of switching commands from vim’s verb-object pattern to an object-verb pattern, which enables the useful behavior of showing the user what they’re modifying before they modify it.
Combined with some other useful features like multiple selections, and a client-server model like neovim, I have to admit it’s pretty appealing to me. I’ve been a vim user for about 20 years, however, and it would likely take quite a lot of retraining to switch now. Edit: Not to mention the fact that no official Windows support is planned; I prefer to use the same editor on all operating systems if possible.
I was a Vim user for 20 years, and after using Kakoune for two or three weeks I started finding Vim frustrating and clumsy. That’s partially because Kakoune’s more orthogonal keymap makes Vim’s x and v redundant, so it replaces them with other commands, but also because of Kakoune’s movement-is-selection model. In Vim, to delete a word, I hit w until I reach the end of it (but not past it!) and then db to delete it, or sometimes bdw if I haven’t had my coffee yet. In Kakoune, I hit w until it’s highlighted, and then d.
Wait, in vim, why don’t you do it the other way around: use w to go to the beginning and then dw? Or daw (delete, around, word) if you’re inside a word?
Oops! It’s been so long since I used Vim that I forgot w works differently there.
This is something I have explored when trying out versor-mode for Emacs. I had no idea Kakoune did the same thing. It’s a very powerful paradigm to start treating editor navigation as a coordinate system for various dimensions of a text file.
In versor-mode, these coordinate axes are an explicit modal choice, but setting it implicitly based on the last navigation command sounds highly useful.
As Vim moves more towards the Emacs model of “Do it all inside” (following Neovim’s lead), I became less inclined to buy into this model. So the thing that really made me look at Kakoune isn’t what it did – but what the author insists it shall NOT do. From giving window control over to the likes of i3/tmux/$X to delegating to the shell – I think this approach has value, and I think it will continue to benefit from this core decision.
“Working Effectively with Legacy Code” by Michael Feathers.
Most of my career has involved some amount of legacy code, but my current position features a mountain of 90s “C-with-classes-style” C++ - and my standing orders are to rewrite much of it for the modern era. This book takes a very useful approach to the topic, and I’m enjoying it a lot. It is from 2004, though; if anyone has more recent recommendations, I would welcome them!
I enjoy much of Codeless Code, but very few entries seem to qualify as “koans” in any true sense. Most seem closer to fables. The AI Koans entries from the Jargon file are closer.
Good point. I edited the title to reflect what’s in the title graphic on the site: “Fables & Koans for the Software Engineer.”
The AI Koans are pretty great, too!