I don’t write Forth (it was my first language, when I was a kid), but I remember someone (Brad Rodriguez?) having an even shorter list of primitives for something not quite Forth that could bring up Forth.
It might be Three Instruction Forth, which contains “@”, “!” and “EXEC”. It’s meant as a target for embedded systems to upload code (using “!” and “EXEC”) and for examining memory (“@”). It’s not really a language per se.
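For what it’s worth, the host/target split can be sketched in a few lines. This is a toy Rust simulation; all names and the “copy pairs” program format are invented for illustration, since a real EXEC jumps to machine code in the target’s RAM:

```rust
// Toy simulation of the Three Instruction Forth idea: the host can only
// peek ("@"), poke ("!"), and jump to an address ("EXEC") on the target.
struct Target {
    mem: Vec<u8>,
}

impl Target {
    fn new(size: usize) -> Self {
        Target { mem: vec![0; size] }
    }

    // "@" - fetch a byte from target memory
    fn fetch(&self, addr: usize) -> u8 {
        self.mem[addr]
    }

    // "!" - store a byte into target memory
    fn store(&mut self, addr: usize, val: u8) {
        self.mem[addr] = val;
    }

    // "EXEC" - run the "code" at addr. In this toy, code is a list of
    // (dst, src) copy pairs terminated by 0xFF, not real machine code.
    fn exec(&mut self, mut addr: usize) {
        while self.mem[addr] != 0xFF {
            let dst = self.mem[addr] as usize;
            let src = self.mem[addr + 1] as usize;
            self.mem[dst] = self.mem[src];
            addr += 2;
        }
    }
}

fn main() {
    let mut t = Target::new(256);
    // The host uploads data and a tiny "program" using only "!" ...
    t.store(10, 42); // data
    t.store(100, 20); // program: copy mem[10] -> mem[20]
    t.store(101, 10);
    t.store(102, 0xFF); // terminator
    // ... runs it with "EXEC" ...
    t.exec(100);
    // ... and inspects the result with "@".
    assert_eq!(t.fetch(20), 42);
    println!("mem[20] = {}", t.fetch(20));
}
```

Everything else (an assembler, a Forth outer interpreter) is bootstrapped on the host side out of those three operations.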
This is the opposite of my experience, but our workflows are different. I’ll just note some points here:
And if you encounter a problem, the most likely solution is “hidden” somewhere on Reddit or in a GitHub issue.
I have learned to love Zig’s source code (I mean the standard library, not the compiler). It’s very easy to read (for me, at least), and I’ve found that because of the explicitness of the language design, I don’t need arcane hidden knowledge to understand what the source is doing. As a result, I’ve stopped searching for answers online and have embraced the code as the source of truth. As a bonus, I’ve gained a better understanding of what’s going on as well.
But my main gripe so far is the file organization. In Go, everything in a directory ends up in the same namespace. Compared to that, Zig feels like a regression: every file needs an explicit import. This incentivizes creating very large source files.
I don’t quite agree with this. The explicit namespacing for files gives you a tool to create logical separation within a directory. I’ve found Go’s implicit namespacing confusing because when I’m reading other people’s code, I never seem to know where to find a symbol (I’m not considering external tools like an LSP or grep, which I need in Go but don’t in Zig). In Zig, I can just see which import the symbol refers to. It makes things nice and clear.
Worse yet, tests in Zig live alongside the source code, making files even harder to read.
Again, maybe it’s my workflow, but I’ve found having the tests right beside where the code lives gives me the best tool to learn what the code does, and it never goes out of date. Taking a look at, e.g., the HTTP server code in the stdlib, you can read the code and then see how it’s used, and that’s all you need to do. No praying, grepping, hoping, aligning the stars, searching online, banging head against the wall, to maybe find how a piece of code is used. In fact, this is why I’ve started reading the code, because it’s so easy and takes so little time to read the code and see how it’s used.
I encourage you to consider this workflow. I don’t use this flow with any other language because it just doesn’t work: too many indirections, implicitness, and fluff get in the way and you need advanced tools. In Zig, it’s all there, right there, in front of you, nicely collected together. No need for tools. This workflow has been so effective thanks to Zig that I haven’t felt the need to set up an LSP yet! I just don’t feel like I need it.
I do something like that, but with LSP, that takes me right to the source/tests. Works offline, no need to even start my browser.
It also leads me to discover better ways to do what I wanted to do, because I spot a function in a module somewhere. E.g. seeing that I can do bufferedReader(reader) instead of BufferedReader(@TypeOf(reader)){ .unbuffered_reader = reader, ... }.
That said, different things work for different people, and I hope the docs will one day be as enjoyable as the language.
Worse yet, tests in Zig live alongside the source code, making files even harder to read.
Again, maybe it’s my workflow, but I’ve found having the tests right beside where the code lives gives me the best tool to learn what the code does, and it never goes out of date.
Having only played with Zig, I agree with this. I prefer tests alongside the code. Drifting the topic a bit, I very much like D’s support for unit tests in the same file as the code and that they can produce documentation.
+1 it’s great in Rust as well. I also like the ability to test module-private functions (I know some people think this is bad). Java is such a pain needing to have a mirrored folder structure for the tests just so things can be in the same package.
I arrived at tests-right-beside-code independently when working on Next Generation Shell. I have two groups of tests. One is basic language functionality, which lives separately and is practically not used anymore. The second is for the libraries, where each test lives just below the function.
I’ve noticed that the tests below the function are almost identical to the examples given in the documentation just above the function. This duplication needs to be addressed.
But the end result is just cover fire. The competition has no choice but to spend all their time porting and keeping up, time that they can’t spend writing new features. Look closely at the software landscape. The companies that do well are the ones who rely least on big companies and don’t have to spend all their cycles catching up and reimplementing and fixing bugs that crop up only on Windows XP. The companies who stumble are the ones who spend too much time reading tea leaves to figure out the future direction of Microsoft.
An old analysis, but substitute the 800 pound gorilla du jour for Microsoft and it holds up well.
Have they been at loggerheads before? From what I’ve gleaned, the projects respect each other but have fundamental disagreements about how to structure a Unix-like system.
Linus in 2008: “I think the OpenBSD crowd is a bunch of masturbating monkeys, in that they make such a big deal about concentrating on security to the point where they pretty much admit that nothing else matters to them.”
I interpreted the reply as mock offense at the insult of being called a monkey with the humorous and unwritten acceptance of the accusation that they were masturbating over security. I don’t know, it seemed funny and answered in kind without taking up an argument.
That’s funny, except I’m afraid it shows rather a lack of research effort :-) — none of the people named was an American at the time (and most of them aren’t now, either).
Git is hard because it’s a terrible version control system. We keep using it because people think it’s reasonable to write blog posts putting the burden of understanding entirely on the end-user. It’s unreasonable to have a bullet list of things you need to understand about it where the first thing is:
A commit is its entire worldline
Seriously just realize you have Stockholm syndrome and stop victim blaming other people who haven’t developed it yet.
I believe that we use git precisely because it is terrible. There is a process (which someone told me a couple of years back had an actual name, but I’ve forgotten it) where systems reinforce the weakest link. If you build a tool that is useful, people will use it. If you build a tool that is almost useful, people will build things around it to strengthen it. Things like GitHub were able to thrive because using git without additional tooling was so painful. There are a lot of git GUIs (I am a particular fan of gitui, since it means I don’t need to leave my terminal) that wouldn’t exist if people didn’t have such a strong desire to avoid the git CLI. Subversion GUIs were always an afterthought and most people just went to the command line. If you said you used a GUI for svn, people would be surprised. If you say you use one for git, people will want to discuss it and see if it has any features that make it better than the one they use.
Alan Kay said that evolution in computing is a process of building a stack of abstractions and then squashing the lower-level ones into something simpler. I believe that the lack of progress in computing is because we are far better at the first step than the second. Something that replaces git could do so by looking at the abstractions that people build to avoid dealing with git and building a tool for that, but it will be competing with dozens of things layered atop git.
Around the time git was started I was still avoiding subversion because it had gone through a lot of churn and instability in its repository formats (including data loss). Svn was just about settling down and becoming something I might trust as a repo admin. I was using it casually for contributing to Apache SpamAssassin and FreeBSD, and the main impression I got was that svn was incredibly slow, much slower than CVS. I think this was because the svn protocol was (is?) ridiculously chatty, so if you are working over a high-latency link then it sucks. And it still lacked realistic support for merges. So I continued to use CVS when I had to host repositories.
Then along comes git, and it’s much faster, it supports merges, it has better tools for wrangling my working copy. I was not an early adopter, but it made sense for me to continue using cvs a few more years and skip svn entirely.
At the time I was also watching bzr and hg, and it wasn’t entirely clear which would be worth adopting. I remember the discussion about bzr’s rich patch representation, versus git’s approach of working out what was a rename (etc) after the fact - I was persuaded by Linus’s arguments. Also bzr’s lack of stable repo format was not great. Mercurial had a well-documented format, but it was file-based (like cvs) and it was unclear to me that it could handle renames well. And hg’s branching model seemed rigid and awkward compared to git.
So from my point of view, git was terrible, but it was much better than the alternatives.
I stayed with svn for a long time because one of my collaborators did some fantastic work in a local branch with svk and then lost his laptop. That put me off the idea of distributed revision control for a long time. I wanted people to push things to public branches as quickly as possible and not lose work because their laptop or VM broke.
FreeBSD was very late to adopt git (though a fairly early adopter of svn). The faster download speed was a big selling point: I could do a git clone of the ports or source repo (with all history) in less time than updating a week-old checkout with subversion. No idea how they managed to make the protocol so slow. I think it needed multiple round trips to request each revision, rather than just telling the server ‘give me everything from revision x’, which is very silly given that there’s basically no client state with subversion.
I don’t think that can be the whole story. It’s much better than what came before (cvs, svn) for most people AND nothing has yet been created that’s clearly better for a great many people.
If you said you used a GUI for svn, people would be surprised. If you say you use one for git, people will want to discuss it and see if it has any features that make it better than the one they use.
I’m surprised to hear this and I wonder how much the state of Git GUIs has changed since circa 2014, when the messaging I would hear about Git GUIs was like “Please, please don’t use them! They’re (even more) confusing, they’ll hinder you in learning Git, and they’re especially bad because, when you need to ask for help with using Git, it will be far more difficult for us, the Internet, to help you than if you used the CLI.”
I was using GitX from about 2008ish, maybe a bit earlier, and the advice then was ‘there are some complex things that you can’t do in the GUI, it shouldn’t be your only tool’, but doing commits of part of a file without a GUI is incredibly painful and that was one of the big selling points of git (if you have some small bug fixes done alongside a new feature you can commit them separately and merge them into the main branch without merging the whole feature).
For the CS-inclined you can just say “path to the commit from the root of the tree” (or equivalently, “the parent matters”) instead which I think captures the idea without talking about alternate worlds or parallel universes or whatever. We’re professionals, it should take skill to use our tools.
We’re professionals, it should take skill to use our tools.
Things should only require skill to use if it isn’t possible to engineer a high skill requirement out of it. If the tooling has a high skill requirement for no reason beyond access gating then it is definitionally a poor tool that should be replaced. It isn’t acceptable for a tool to require significant training simply because the users are expected to be highly skilled in some other domain.
This honestly feels like when people complain about tools being simplified because it means more people who aren’t “skilled” are using the tool. It also implies that people are only skilled if their skill is in specific fields.
People say that, but then they can’t point to a version control system (of the many that exist over the decades they’ve existed) that solves all the problems git does with an elegance that is assumed to be possible. The proof should be easy!
Mercurial, for example, doesn’t require knowledge of worldlines and branch go-karts. It is an extremely usable tool that doesn’t defy most people’s mental models.
Look at Sapling’s features like sl absorb. That would take 1 second to teach someone vs whatever it takes to teach interactive rebase, fixup commits, autosquash, etc.
Better tools exist and have existed for a long time.
Heck, just look at what people have layered on top of git itself to fix its warts, e.g. git-branchless. It exposes extremely important workflows missing in git (like a sensible default log and branch rebasing) in a sane and usable CLI.
If you explore other vcs you’ll find a million examples of concrete ways of doing things better.
We’re professionals, it should take skill to use our tools.
The industrial revolution was successful precisely because it removed skill from the tooling - instead of having a general-purpose hammer that you whacked hot iron with in a carefully-aimed direction, you had a block of iron that you milled by twiddling knobs to the correct number and pulling the lever. Pulling the lever did not require years of training.
“It should take skill to use our tools” is an unjustified assertion that we should put this iron collar around our necks and can be trivially dismissed. The save button should not require skill, that’s insane.
The industrial revolution was successful precisely because it removed skill from the tooling - instead of having a general-purpose hammer that you whacked hot iron with in a carefully-aimed direction, you had a block of iron that you milled by twiddling knobs to the correct number and pulling the lever
I agree with the intention of this statement and how it relates to Git but it also tells me you don’t have much first-hand experience either as a blacksmith or as a machinist.
The save button indeed does not require skill to use. Git isn’t a save button though; it’s a system for handling conflicting saves. Otherwise just paste your code into a Google Doc and let their auto-merge functionality sort it out. A system following your Industrial Revolution example would require knowing what the synthesis of two conflicting programs should be. Maybe AI will be able to do that some day, but good luck otherwise.
No. First of all, if you take the trouble to understand it, it can be a lot easier to use than other systems, and it solves almost everyone’s problems.
In my experience all DVCSes are confusing to some people. Git gets a lot of complaints because it’s what people actually use.
I think we keep using it because git, for all its faults, solves the problem well enough for many people, whether that’s through learning the idiosyncrasies or mitigating its shortcomings with external interfaces like GitHub etc.
For many people, life starts and stops at clone, checkout, push, and pull. If you’ve found a way to make that work for you safely and consistently, I think version control understandably becomes a problem that you might not be so invested in.
I think the current situation speaks to how these external GUI tools and websites have smoothed over the cracks of the git experience.
Sometimes I feel like a good chunk of the problems with yaml end up being a combo of unfamiliarity with the spec as well as out-of-date implementations.
Half of the problems people commonly cite with yaml (Norway problem, sexagesimal numbers, surprise octal, merge operator) were removed in yaml 1.2. However, despite yaml 1.2 being 14 years old, surprisingly few parsers implement 1.2, so people continue running into the same issues that were fixed in the spec over a decade ago. Some implementations (the most popular one for go, for example) chose to implement a mix of 1.2 and 1.1 to support certain 1.1 features that were removed.
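To make the Norway problem concrete, here is a hand-rolled sketch (not a real YAML parser) of how the two schema versions resolve plain scalars; the 1.1 rules are simplified to case-insensitive matching here, which is close but not exact:

```rust
// Why "the Norway problem" went away in YAML 1.2: the 1.1 spec resolves
// many more plain scalars to booleans than the 1.2 core schema does.
#[derive(Debug, PartialEq)]
enum Scalar {
    Bool(bool),
    Str(String),
}

// YAML 1.1 style resolution (simplified): y/yes/on/n/no/off are booleans.
fn resolve_1_1(s: &str) -> Scalar {
    match s.to_ascii_lowercase().as_str() {
        "y" | "yes" | "true" | "on" => Scalar::Bool(true),
        "n" | "no" | "false" | "off" => Scalar::Bool(false),
        _ => Scalar::Str(s.to_string()),
    }
}

// YAML 1.2 core schema: only these exact spellings are booleans.
fn resolve_1_2(s: &str) -> Scalar {
    match s {
        "true" | "True" | "TRUE" => Scalar::Bool(true),
        "false" | "False" | "FALSE" => Scalar::Bool(false),
        _ => Scalar::Str(s.to_string()),
    }
}

fn main() {
    // A country-code list like `countries: [DE, NO, SE]`:
    assert_eq!(resolve_1_1("NO"), Scalar::Bool(false)); // the Norway problem
    assert_eq!(resolve_1_2("NO"), Scalar::Str("NO".into())); // fixed in 1.2
}
```

A parser that implements a 1.1/1.2 mix can land on either behaviour, which is exactly why the same surprises keep resurfacing.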
One of the problems is that the YAML spec is moderately difficult to implement. I learned to use a PEG implementation by writing a CSV parser and then a JSON parser in a weekend. Decoding JSON was ~65 lines. I set aside my YAML 1.2 parser about 250 lines in and it was incomplete. I’ll get back to it one day but it’s a hairy spec.
Having read the spec multiple times and looked at other implementations, I’m not plagued by the problems other people report, but I don’t think that level of effort is reasonable when all you want to do is write a config file.
There are a lot of reasons to not use Rust, but this post does not list them out. Speaking as someone who has used Rust professionally for four years, this is my take on these points:
Rust is (over)hyped
The rationale being that it’s Stack Overflow’s most loved language and 14th most used… It’s a growing language, seeing more and more industry adoption. Currently, due to posts like this, it is a hard sell to management for projects which aren’t low-level, despite developers loving using it (and I’ve been told multiple times that people feel far more comfortable with their Rust code, despite not being experts in the language). Rust might be overhyped, but the data provided to back up this claim is just not correct.
Rust projects decay
I know this person has written a book on Rust, but… I have to question what the hell they’re talking about here. The steady release cycle of the Rust compiler has never once broken my builds, not even slightly. In fact, Rust has an entire Edition system (originally called epochs) which allows the compiler to make backwards-incompatible changes while still being able to compile old code.
I mean, seriously, I genuinely don’t know how the author came to this conclusion based on the release cycle. Even recent releases don’t have many added features. Every project I have ever come across developed in Rust I’ve been able to build with cargo build, and I’ve never once thought about the version it was developed with or what I had in my toolchain. Python 3 has literally had a series of breaking changes fairly recently, and it’s being held up as a language doing it “better” because it has fewer releases.
Rust is still beta (despite the 1.0)
sigh. Because async traits aren’t stabilized? Even though there is a perfectly workable alternative, the async-trait crate, which simply makes some performance trade-offs? I’m excited for async traits being stabilized, and it’s been a bummer that we haven’t had them for so long, but that doesn’t make it beta.
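For context, this is a rough sketch of the pattern the async-trait crate generates: an async method becomes a regular method returning a boxed future. The trait and names here are hypothetical, and the hand-rolled no-op waker is only there so the future can be polled without pulling in a runtime:

```rust
use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// Roughly what #[async_trait] expands an `async fn` in a trait to:
// a method returning a boxed, type-erased future.
trait Fetcher {
    fn fetch(&self, key: u32) -> Pin<Box<dyn Future<Output = String> + '_>>;
}

struct MapFetcher;

impl Fetcher for MapFetcher {
    fn fetch(&self, key: u32) -> Pin<Box<dyn Future<Output = String> + '_>> {
        // The performance trade-off: one heap allocation per call.
        Box::pin(async move { format!("value-{key}") })
    }
}

// Minimal no-op waker so we can poll a ready future without a runtime.
fn noop_waker() -> Waker {
    fn no_op(_: *const ()) {}
    fn clone(_: *const ()) -> RawWaker {
        RawWaker::new(std::ptr::null(), &VTABLE)
    }
    static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, no_op, no_op, no_op);
    unsafe { Waker::from_raw(RawWaker::new(std::ptr::null(), &VTABLE)) }
}

fn main() {
    let waker = noop_waker();
    let mut cx = Context::from_waker(&waker);
    let f = MapFetcher;
    let mut fut = f.fetch(7);
    // This future has no .await points, so a single poll completes it.
    match fut.as_mut().poll(&mut cx) {
        Poll::Ready(v) => assert_eq!(v, "value-7"),
        Poll::Pending => unreachable!(),
    }
}
```

Stabilized async fn in traits removes the boxing, but the boxed version has been usable all along.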
The standard library is anemic
This is just an opinion the author has that I strongly disagree with (and I imagine most Rust developers would). The standard library is small, this was/is a design decision with a number of significant benefits. And they do bring 3rd party libraries into the standard library once they are shown to be stable/widely used.
async is hard
To put this more accurately, Rust forces you to write correct async code, and it turns out correct async code is hard. This is an important distinction, because a language like Go makes it just as easy to write incorrect async code as it does correct async code. Having been bitten by enough data races and other undefined behavior in my lifetime, I love Rust’s stance on async code, which is to make it hard to do incorrectly with minimal runtime overhead.
Frankly, the Rust compiler is some incredible engineering that is also pushing the bounds of what a programming language can do. I mean, seriously, as frustrating as async Rust can be to work with, it is an impressive feat of engineering which is only improving steadily. Async Rust is hard, but that is because async code is hard.
[edit] Discussed below, but technically Rust just prevents data-races in async code, and does not force you to write code which is free from race-conditions or deadlocks (both of which are correctness issues). Additionally, the “async code” I’m talking about above is multi-threaded asynchronous code with memory sharing.
Frankly, the points being made in this post are so shoddy I’m confused why this is so high on lobsters. The anti-Rust force is nearly as strong as the pro-Rust force, and neither really contributes to the dialog we have on programming languages, their feature-set and the future of what programming looks like.
Use Rust, don’t use Rust, like Rust, don’t like Rust, this post is not worth reading.
To put this more accurately, Rust forces you to write correct async code, and it turns out correct async code is hard. This is an important distinction, because a language like Go makes it just as easy to write incorrect async code as it does correct async code.
I have not written any production Rust code (yet) but the “async is hard” resonates with me. I’ve wrestled with it in C, C++, Java, and Go and it’s easy to make a mistake that you don’t discover until it’s really under load.
it’s easy to make a mistake that you don’t discover until it’s really under load.
I think you really hit the nail on the head with this point. The particularly damning thing about data-race bugs is that they are probabilistic. So you can have latent code with a 0.0001% chance of having a data-race, which can go undetected until you reach loads which make it guaranteed to occur… And at that point you just have to hope you can (a) track it down (good luck figuring out how to recreate a 0.0001% chance event) and (b) it doesn’t corrupt customer data.
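For contrast, a sketch of roughly what Rust makes you write instead. The unsynchronized version of this counter does not compile in safe Rust, so that 0.0001% outcome cannot even be expressed:

```rust
use std::sync::{Arc, Mutex};
use std::thread;

// Rust won't compile a plain `&mut u64` shared across threads; you are
// forced to state the synchronization (here, Arc<Mutex<_>>), so the
// probabilistic data race can't be written in safe code at all.
fn main() {
    let counter = Arc::new(Mutex::new(0u64));
    let mut handles = Vec::new();

    for _ in 0..8 {
        let counter = Arc::clone(&counter);
        handles.push(thread::spawn(move || {
            for _ in 0..1000 {
                *counter.lock().unwrap() += 1;
            }
        }));
    }
    for h in handles {
        h.join().unwrap();
    }

    // With the mutex the result is deterministic under any load.
    assert_eq!(*counter.lock().unwrap(), 8000);
    println!("count = {}", counter.lock().unwrap());
}
```

The equivalent C or Go code with the mutex forgotten compiles fine and only misbehaves under load, which is exactly the 0.0001% failure mode described above.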
There is a reason so many Rust users are so passionate, and it’s not because writing Rust is a lovely day in the park every day. It’s because you can finally rest at night.
I’m in the process of switching to Rust professionally after dabbling for years, and the biggest selling point of Rust is its ability to help me write working software with little or no undefined behaviour.
Most languages let you build large applications that are plagued by undefined behaviour.
Async Rust is hard, but that is because async code is hard.
I think this is debatable. Async Rust introduces complexities that don’t exist in other async models which rival it in ease of use and efficiency. It has unintuitive semantics, like async fn f(&self) -> T and fn f(&self) -> impl Future<Output=T> subtly not being the same: the former implicitly bounds the returned Future by the &self lifetime, while the latter does not. It also allows odd edge-cases that could’ve been banned to simplify the model:
Future::poll() can be spuriously called and must store the most recently passed-in Waker. Since a Waker is two words in size, it can’t be swapped atomically, so updating and notification require mutual exclusion. Completion-based async models don’t require this.
Waker is both Clone and Send, meaning it can outlive the task which owns/polls the Future. This results in the task having to be heap-allocated (+ reference-counted to track outstanding Wakers) if it’s spawned/scheduled generically. Contrast this with something like Zig async or rayon’s join!() which allow stack-allocated structured concurrency. Waker could’ve been tied to the lifetime of the Future, forcing leaf futures to implement proper deregistration of them but reducing task constraints.
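A sketch of the mutual-exclusion point above: roughly what a leaf future’s Waker storage looks like when hand-rolled. (futures-rs’s AtomicWaker addresses the same problem with a lock-free state machine, but the obvious version needs a mutex precisely because a Waker is two words.)

```rust
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::{Arc, Mutex};
use std::task::{Wake, Waker};

// A leaf future can't atomically replace the Waker it stored on a
// previous poll, so the straightforward approach locks a mutex.
#[derive(Default)]
struct WakerSlot {
    inner: Mutex<Option<Waker>>,
}

impl WakerSlot {
    // Called from poll(): poll may be invoked spuriously with a different
    // Waker each time, so the slot must always overwrite.
    fn register(&self, waker: &Waker) {
        *self.inner.lock().unwrap() = Some(waker.clone());
    }

    // Called by the event source when the resource becomes ready.
    fn wake(&self) {
        if let Some(w) = self.inner.lock().unwrap().take() {
            w.wake();
        }
    }
}

// Test double: a Waker that just counts how often it fires.
struct CountingWaker(AtomicUsize);

impl Wake for CountingWaker {
    fn wake(self: Arc<Self>) {
        self.0.fetch_add(1, Ordering::SeqCst);
    }
}

fn main() {
    let slot = WakerSlot::default();
    let counter = Arc::new(CountingWaker(AtomicUsize::new(0)));
    let waker = Waker::from(Arc::clone(&counter));

    slot.register(&waker);
    slot.wake();
    assert_eq!(counter.0.load(Ordering::SeqCst), 1);

    // Without re-registration a second wake is a no-op: the Waker was taken.
    slot.wake();
    assert_eq!(counter.0.load(Ordering::SeqCst), 1);
}
```

In a completion-based model the notification carries the result with it, so no such slot (or lock) is needed.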
The cancellation model is also equally simple, useful, and intricately limiting/error-prone:
Drop being cancel allows you to stop select!() between any future, not just ones that take CancellationTokens or similar like in Go/C# (neat). Unfortunately, it means everything is cancellable so a().await; b().await is no longer atomic to the callee (or “halt-safe”) whereas it is in other async models.
It also means you can’t do asynchronous cancellation since Drop is synchronous. Futures which borrow memory and use completion-based APIs underneath (e.g. Overlapped IOCP, io_uring) are now unsound if cancelled unless you 1) move ownership of the memory to the Futures (heap alloc, ref counting, locked memory) or 2) block in Drop until async cancellation occurs (which can deadlock the runtime: waiting to drive IO but holding the IO thread).
Sure, async is hard. But it can be argued that “Rust async” is an additional type of hard.
Async rust introduces complexities that don’t exist in other async models which rival in ease-of-use and efficiency.
I don’t disagree that Rust introduces an additional kind of hard: async Python is much easier to use than async Rust. I wrote in another comment how it all comes down to trade-offs.
I do agree with you, there are more sharp edges in async Rust than normal Rust, but from my understanding of how other languages do it, no language has a solution without trade-offs that are unacceptable for Rust’s design.
Personally, I think async is the wrong paradigm, but also happens to be the best we have right now. Zig is doing interesting things to prevent having the coloring problem, but I don’t think any language is doing it perfectly.
Async Rust is hard, but that is because async code is hard.
Any data to back that exact claim? I love Rust and I’ve been working professionally in it for the last few years, but I think I would still find the Erlang approach to async code easier.
Fair point, and to really dive into that question we have to be more specific about what exactly we’re talking about. The specific thing that is hard is multi-threaded asynchronous code with memory sharing. To give examples of why this is hard, we can just look at the tradeoffs various languages have made:
Python and Node both opted to not have multi-threading at all, and their asynchronous runtimes are single-threaded. There is work to remove the GIL from Python (which I actually haven’t been following very closely), but in general, one option is to avoid the multi-threading part entirely.
Erlang/BEAM (which I do love) makes a different tradeoff, which is removing memory sharing. Instead, Erlang/BEAM processes are all about message-passing. Personally, I agree with you, and I think the majority of asynchronous/distributed systems can work this way effectively. However, that isn’t to say it is without tradeoffs: message passing has overhead.
So essentially you have two options to avoid the dangerous shenanigans of multi-threaded asynchronous code with memory sharing, which is to essentially constrain one of the variables (multi-threading or memory sharing). Both have performance trade-offs associated with them, which may or may not be deal-breaking.
Rust lets you write multi-threaded asynchronous code with memory sharing and write it correctly. In general though I agree with you about the Erlang approach, and there isn’t really anything stopping you from writing code in that way with Rust. I haven’t been following this project too closely, but Lunatic (https://github.com/lunatic-solutions/lunatic) is a BEAM alternative for Rust, and last I checked in with it they were making great progress.
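The Erlang-style approach is easy to mimic in plain Rust with std channels. A toy, thread-based sketch (this is not Lunatic’s API, just an illustration of owned state plus message passing):

```rust
use std::sync::mpsc;
use std::thread;

// Erlang-flavoured message passing in plain Rust: the "actor" owns its
// state; everyone else interacts with it only via messages.
enum Msg {
    Add(u64),
    Get(mpsc::Sender<u64>), // a reply channel stands in for self()
    Stop,
}

fn spawn_counter() -> (mpsc::Sender<Msg>, thread::JoinHandle<()>) {
    let (tx, rx) = mpsc::channel();
    let handle = thread::spawn(move || {
        let mut count: u64 = 0; // state is owned, never shared
        for msg in rx {
            match msg {
                Msg::Add(n) => count += n,
                Msg::Get(reply) => {
                    let _ = reply.send(count);
                }
                Msg::Stop => break,
            }
        }
    });
    (tx, handle)
}

fn main() {
    let (tx, handle) = spawn_counter();
    tx.send(Msg::Add(2)).unwrap();
    tx.send(Msg::Add(40)).unwrap();

    let (reply_tx, reply_rx) = mpsc::channel();
    tx.send(Msg::Get(reply_tx)).unwrap();
    assert_eq!(reply_rx.recv().unwrap(), 42);

    tx.send(Msg::Stop).unwrap();
    handle.join().unwrap();
}
```

Since no memory is shared, the compiler has nothing to complain about, and the data-race question disappears by construction, at the cost of a channel hop per interaction.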
Yes, I can agree that “multi-threaded asynchronous code with memory sharing” is hard to write. That’s a much more reasonable claim.
The only thing I would slightly disagree with is the assertion that Rust solves this problem. That’s not really completely true, since deadlocks are still just as easy to create as in C++. For that, the only sort of mainstream solution I can think of is STM in Clojure (and maybe in Haskell?).
I hadn’t heard of STM, but that is a really cool concept, bringing DB transaction notions to shared memory. Wow, I need to read about this more! Though I don’t think it solves the deadlock problem globally: if we’re considering access to things which are not memory (e.g. the network), and thus not covered by STM, then we can still deadlock.
From my understanding, solving deadlocks in general is akin to solving the halting problem. There simply isn’t a way to avoid them. But you are right, Rust doesn’t solve deadlocks (nor race conditions in general), just data races. I’ll modify my original text to clarify this a bit.
Bear in mind, though, that STM has been through a hype cycle and some people are claiming that, like String Theory, it’s in the “dead walking” phase rather than past the hype. For example, Bryan Cantrill touches on transactional memory in a post from 2008 named Concurrency’s Shysters.
So fine, the problem statement is (deeply) flawed. Does that mean that the solution is invalid? Not necessarily — but experience has taught me to be wary of crooked problem statements. And in this case (perhaps not surprisingly) I take umbrage with the solution as well. Even if one assumes that writing a transaction is conceptually easier than acquiring a lock, and even if one further assumes that transaction-based pathologies like livelock are easier on the brain than lock-based pathologies like deadlock, there remains a fatal flaw with transactional memory: much system software can never be in a transaction because it does not merely operate on memory. That is, system software frequently takes action outside of its own memory, requesting services from software or hardware operating on a disjoint memory (the operating system kernel, an I/O device, a hypervisor, firmware, another process — or any of these on a remote machine). In much system software, the in-memory state that corresponds to these services is protected by a lock — and the manipulation of such state will never be representable in a transaction. So for me at least, transactional memory is an unacceptable solution to a non-problem.
As it turns out, I am not alone in my skepticism. When we on the Editorial Advisory Board of ACM Queue sought to put together an issue on concurrency, the consensus was twofold: to find someone who could provide what we felt was much-needed dissent on TM (and in particular on its most egregious outgrowth, software transactional memory), and to have someone speak from experience on the rise of CMP and what it would mean for practitioners.
I think you’ll find Erlang much harder tbh. Have you used it much? Erlang requires that you do a lot of ‘stitching up’ for async. In Rust you just write .await, in Erlang you need to send a message, provide your actor’s name so that a response can come back, write a timeout handler in case that response never comes back, handle the fact that the response may come back after you’ve timed out, decide how you can recover from that, manage your state through recursion, provide supervisor hierarchies, etc.
Fortunately, almost all of that is abstracted away by gen_server, so in practice you don’t actually do all that boilerplate work yourself; you just take advantage of the solid OTP library that ships with Erlang.
For sure I have way more experience with Rust, but I’m not really sure that all of what you listed is a downside or Erlang-specific. You also need to handle timeouts in Rust (e.g. tokio::time::timeout and something (match?) to handle the result), and you might also need to handle the possibility that the future will be cancelled. Others, like recursion (which enables hot reloads) and supervisors, are not obvious negatives to me.
Handling a timeout in Rust is pretty trivial. You can just say timeout(f, duration) and handle the Result right there. For an actor you have to write a generalized timeout handler and, as mentioned, deal with timeouts firing concurrent to the response firing back.
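For the synchronous case, std alone shows the same one-expression shape as Erlang’s receive ... after, via recv_timeout; tokio::time::timeout looks much the same from the caller’s side:

```rust
use std::sync::mpsc;
use std::thread;
use std::time::Duration;

// A std-only analogue of timeout(f, duration): the timeout and the
// result are handled in one place, with no separate timeout handler.
fn main() {
    let (tx, rx) = mpsc::channel();
    thread::spawn(move || {
        thread::sleep(Duration::from_millis(10));
        let _ = tx.send("done");
    });

    // Compare to Erlang's receive ... after Timeout: one expression,
    // one Result to match on, and no late-reply bookkeeping.
    match rx.recv_timeout(Duration::from_millis(500)) {
        Ok(v) => println!("got {v}"),
        Err(mpsc::RecvTimeoutError::Timeout) => println!("timed out"),
        Err(e) => println!("sender gone: {e}"),
    }
}
```

The late-reply case is also free here: if the sender fires after the timeout, the message just sits in a channel that is about to be dropped, rather than landing in your mailbox.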
I think for the most part handling cancellation isn’t too hard, at least not for most code. Manual implementors of a Future may have to worry about it, but otherwise it’s straightforward - the future won’t be polled, the state is dropped.
In Rust you just write .await, in Erlang you need to send a message
TBH I do not see a difference between these two.
provide your actor’s name so that a response can come back
You can just add self() as a part of message.
write a timeout handler in case that response never comes back,
As simple as adding an after block to the receive block.
handle the fact that the response may come back after you’ve timed out
Solved in OTP 24 with erlang:monitor(process, Callee, [{alias, reply_demonitor}]).
decide how you can recover from that
In most cases you simply do not try to recover from that and instead let the caller do that for you.
Simplest async-like receive looks like, from the docs:
server() ->
    receive
        {request, AliasReqId, Request} ->
            Result = perform_request(Request),
            AliasReqId ! {reply, AliasReqId, Result}
    end,
    server().

client(ServerPid, Request, Timeout) ->
    AliasMonReqId = monitor(process, ServerPid, [{alias, reply_demonitor}]),
    ServerPid ! {request, AliasMonReqId, Request},
    %% Alias as well as monitor will be automatically deactivated if we
    %% receive a reply or a 'DOWN' message since we used 'reply_demonitor'
    %% as unalias option...
    receive
        {reply, AliasMonReqId, Result} ->
            Result;
        {'DOWN', AliasMonReqId, process, ServerPid, ExitReason} ->
            error(ExitReason)
    after
        Timeout ->
            demonitor(AliasMonReqId),
            error(timeout)
    end.
The difference is huge, and it’s kind of the whole selling point of actors. You cannot share memory across actors, meaning you cannot share state across actors. There is no “waiting” for an actor, for example, and there is no way to communicate “inline” with an actor. Instead you must send messages.
You can just add self() as a part of message.
Sure, I wasn’t trying to imply that this is complex. It’s just more. You can’t “just” write .await, it’s “just” add self() and “just” write a response handler and “just” write a timeout handler, etc etc etc. Actors are a very low level concurrency primitive.
As simple as adding after block to the receive block.
There’s a lot of “as simple as” and “just” to using an actor. There’s just .await in async/await. If you want a timeout, you can choose to add one inline (and even that is simpler as well).
The tradeoff is that you share state and couple your execution to the execution of other futures.
Solved in OTP 24 with erlang:monitor(process, Callee, [{alias, reply_demonitor}]).
It’s “solved” in that you have a way to handle it. In async/await it’s solved by not existing as a problem to begin with. And I say “problem” loosely - literally the point of Erlang is to expose all of these things, it’s why it’s so good for writing highly reliable systems, because it exposes the unreliability of a process.
It takes all of this additional work and abstraction layering to give you what async/await has natively. And that’s a good thing - again, Erlang is designed to give you this foundational concurrent abstraction so that you can build up. But it doesn’t change the fact that in Rust it’s “just” .await.
Sure, I wasn’t trying to imply that this is complex. It’s just more….Actors are a very low level concurrency primitive.
Sure, if you pretend you have to raw-dog actors to do concurrency in Erlang, and that OTP doesn’t exist and take care of almost all the boilerplate in gen_server etc. We could also pretend that async/await syntax doesn’t exist in Rust and we need to use callbacks. Wow, complex!
Perhaps the best example is 3.6 introducing async and await as keywords in the language (and thus breaking code which used them for variables). In Rust, this was done via the 2018 Edition with the Epoch system.
The difference is a Python 3.6+ interpreter can’t run code from Python 3.5 using async/await as non-keywords, while Rust can compile a mix of 2015 Edition and 2018 Edition code with some using async/await as non-keywords and others with it.
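The escape hatch that makes this mixing work is raw identifiers: on the 2015 edition `async` was an ordinary name, and code on 2018+ editions can still refer to such names by prefixing `r#`. A minimal sketch:

```rust
fn main() {
    // `async` alone is a keyword from the 2018 edition onward, but the
    // raw-identifier syntax lets newer code keep using the old name
    // (e.g. when calling into a crate compiled on the 2015 edition).
    let r#async = 42;
    assert_eq!(r#async, 42);
    println!("r#async = {}", r#async);
}
```

Python has no equivalent mechanism, which is why 3.5 code using `async` as a variable simply stops parsing on 3.7+.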
For what it’s worth, instead of trying to find a working computer with an IDE and FDD controller, I’ve been playing with a GreaseWeazle and a fluxengine. It’s been interesting (read, “frustrating”) finding the right 3.5” drive model in working order to read old and damaged disks. These being old drives from random sources some are in working order and of those working ones some will read reliably and some won’t. There’s also some oddities around which cable they prefer (straight, crossed over, etc.). I’m having good luck with a Sony MPF920 for PC disks, but it doesn’t like Mac formatted disks (variable speed). I haven’t yet found a working 5.25” drive.
I’ve been working on databases for some time, and you win an upvote.
Sure, I can find some nits and have various opinionated color to things (learned indexes, for instance, are interesting but ultimately useless in practice under write load), but generally speaking you’ve done a great job simply highlighting (and, thankfully, not overexplaining or dismissing) many of the hard problems of databases. Double props for getting down to fsync() which, my god, don’t deal in filesystems, kids, because every part of the stack lies.
Will be linking people to this when they’re learning as a teachable moment :)
Double props for getting down to fsync() which, my god, don’t deal in filesystems, kids, because every part of the stack lies.
I’m laughing with tears in my eyes because I’ve been there… and at the time (circa linux kernel 2.4.x) it seemed that the more complex your storage provisioning (ex. SAN) the less you could trust fsync.
I like all of this effort into type safety in a shell language. Pipelines are a pretty good notion as well that we can build off of (though I wonder if we can do some sort of laziness or query planning so that heavier commands can selectively get the “relevant info” instead of serializing everything)
Somebody on HN mentioned shells also being about job control. Is there some cool new ideas that can be put into something like nushell?
Interactivity is another place as well. Would be nice if I didn’t have to pipe into pv all the time. Similarly, it would be cool if we could output tables and have that display be a bit better than just “spit everything into stdout”.
Basically something like Jupyter (without being Jupyter) would be quite cool IMO
Somebody on HN mentioned shells also being about job control.
For the sake of argument, how useful is job control for most users? For interactive shells tmux/screen eliminated the need for managing foreground and background processes, they’re all foreground. I can’t remember the last time I thought I’d rather background something that didn’t daemonize itself than run it in another pane.
I use a much smaller subset of “job control” all the time, with Ctrl-Z to suspend the active process (aka the editor) and drop back into the shell, then fg to resume. Can’t do that in nu right now. Maybe one day I hack together a tmux setup that replicates it but for now it’s one of my main blockers for using nu
I personally have never gotten the hang of tmux/screen. I know shells like to rely on those as “outs”, but I honestly would not mind a shell that has this stuff built in. And I think it’s pretty common for webdevs to want to basically start up a handful of processes and juggle them around.
The sort of split between “shell” and “terminal emulator” is something that I feel really gets in the way of offering easy answers to things like “make a script that gets me to the right configuration right now”. Every “nice” terminal emulator ships with weird hacks to try and get a shell to communicate information at the top level.
I know part of the answer would be to “just use screen”, but it’s a tool where I don’t really need it that often, so the information doesn’t stick around long enough.
I use daemontools in a bunch of locations. It takes 30 seconds to adapt my generic daemontools-sysvinit script to a new service, and then sysvinit or systemd can start and stop it as well. supervise treats a request to start an already running service as a non-error.
Maybe not adding much but have you looked at http://www.skarnet.org/software/s6/? Inspired by daemontools but maintained, under the ISC license, and more features. I was introduced to it years ago when I was discussing daemontools and still running my own qmail then much later worked with folks who had settled on it as part of their in-house scheduler. It was easy and reliable, never really needed to give it a thought.
I have, and I am also aware of Nosh – https://jdebp.uk/Softwares/nosh/ – which I keep meaning to try out but never actually got around to.
Heh. Yes, I was aware of it. Also never tried it out.
Wasn’t it nice when qmail could unambiguously do everything you needed a mail server to do?
At the time qmail and dovecot were a relief after sendmail, then postfix, and uwimap. Everything just worked for me. Now I don’t want to be bothered and use fastmail.
Or make it a law that how you pay must be absolutely evident and understandable at a glance to 9 out of 10 randomly selected people. Then, if you find yourself in a situation where it’s not evident how to pay, you just turn on your phone’s camera, record a 360° video, and go about your business knowing that you can easily dispute whatever fee they throw at you.
This is probably the best answer. No cost to “plot of land for parking” operators, no cost to people. Just record that you couldn’t clearly tell what’s going on and move on with your day.
Maybe? This entire discussion is severely lacking in generality. People are extrapolating wildly from one ranty post in one US city. I could fake another rant saying that parking is free as long as you scan your eyeballs with Worldcoin and it would add as much data…
Plant asphalt-breaking flora at the edges of the lots. Bermudagrass is a good choice if you can obtain it, but standard mint and thyme will do fine for starters. In some jurisdictions, there may be plants which are legal to possess and propagate, but illegal to remove; these are good choices as well.
We can start by not forcing people to use an app in the first place.
In Chicago, they have a kiosk next to a row of on-street parking. You just put in your license plate number, and pay with a credit card. No app needed. At the O’Hare airport, short term parking gives you a receipt when you enter the lot. Then you use it to pay when you exit. No app needed.
Right. The way it used to be everywhere, until relatively recently.
A root problem is that, for a lot of systems like this, a 95% solution is far more profitable than a 99% solution. So companies will happily choose the former. Mildly annoying when the product is a luxury, but for many people access to parking is very much a necessity.
So there’s one way to change this: companies providing necessities have to be held to stronger standards. (Unfortunately in the current US political climate that kind of thing seems very hard.)
You’re talking about public (on-street) parking. This post is talking about private parking lots, which exist for the sole purpose of profit maximization.
The way I see it, the issue is that every random company has to do a positively amazing job of handling edge cases, or else people’s lives get disrupted. This is because every interaction we have with the world is, increasingly, monetized, tracked, and exploited. Most of these companies provide little or no value over just letting local or state governments handle things and relying primarily on cash with an asynchronous backup option. Especially when it comes to cars, this option is well-tested in the arena of highway tolls.
To put it succinctly: stop letting capital insert itself everywhere in our society, and roll back what has already happened.
This seems like it’s just some random for-profit Seattle parking lot (cheap way to go long on a patch of downtown real estate while paying your taxes) that, consistent with the minimal effort the owner is putting in generally, has let whatever back-alley knife fight parking payments startup set up shop as long as they can fork over the dough. It is essentially a non-problem. Even odds the lot won’t exist in two years. There are many more worthwhile things to care about instead.
I disagree. This is going on outside Tier-1 and Tier-2 cities with high population density. Small cities and large towns are finally coming to terms with (using Shoup’s title) the high cost of free parking and replacing meters with kiosks (usually good but not necessarily near where you need to park) or apps (my experience is they’re uniformly bad for all the reasons in the link) to put a price on public parking.
One nearby municipality has all of:
Missing or incorrect signs.
Unclear hours. Is it free after 6pm? Sunday? Holidays? This zone? Seasonally?
Very few kiosks.
QR codes and stale QR codes.
Apps acquired by other app companies and replaced.
Contracts ended or changed where the QR code or app doesn’t work or worse takes the payment but is invalid (this only happened to me once).
Even if you’re local and know the quirks you’ll have to deal with it.
It’s not just “some random for-profit Seattle parking lot”. I’ve run into frustrating and near-impossible experiences trying to pay for parking in plenty of places. Often compounded by the fact that I refuse to install an app to pay with.
The other day I was so happy when I had to go to the downtown of (city I live in) and park for a few minutes and I found a spot with an old-fashioned meter that accepted coins.
Establish a simple interoperable protocol standard that every parking lot must support by law. Then everyone can use a single app, everywhere, that fits their needs. I mean, this is about paying for parking, how hard can it be?
I mean, this is about paying for parking, how hard can it be?
I think that’s the thing, though. A company comes in to a municipality and says “this is about paying for parking, we make it easy and you no longer have to have 1) A physical presence, 2) Employees on site, or (possibly) 3) Any way to check if people have paid.” They set you up with a few billboards that have the app listed on them, hire some local outfit to drive through parking lots with license plate readers once or twice a day, and you just “keep the profit.” No need to keep cash on hand, make sure large bills get changed into small bills, deal with pounds of change, give A/C to the poor guy sitting in a hut at the entrance, etc.
I write this having recently taken a vacation and run into this exact issue. It appeared the larger community had outsourced all parking to a particular company with a somewhat mainline app on the Android and Apple stores, and hence was able to get rid of the city workers who had been sitting around doing almost nothing all day as the beach parking lots filled up early and stayed full. I am very particular about what I run on my phone, but my options were: leave the parking lot and drive another 30 minutes, kids upset, in hopes that the next beach had a real attendant, or suck it up. I sucked it up and installed the app long enough to pay, and enough other people were doing the same that I don’t see them caring if a few leave on the principle of paying by cash; either way the lot was full.
I say all this to point out that some companies are well on their way to having “the” way to pay for parking already and we might not like the outcome.
I get that digital payment for parking space is less labor intensive (the town could also do that themselves, btw), but we can by law force these companies to provide standardized open APIs over which car drivers can pay for their parking spot, why don’t we do that?
I’m always in favor of citizens promoting laws they feel will improve society, so if you feel that way I’d say go for it! I don’t, personally, think that solves the issue of standardizing on someone needing a smart phone (or other electronic device) with them to pay for parking. That to me is the bigger issue than whose app is required (even if I can write my own, until roughly a year ago I was happily on a flip phone with no data plan). So if this law passes, the company adds the API gateway onto their website and… we’re still headed in a direction for required smart device use.
But, again, I strongly support engaging with your local lawmakers and am plenty happy to have such subjects debated publicly to determine if my view is in the minority and am plenty happy to be outvoted if that is the direction it goes.
This reminds me of a meeting I attended years ago when I worked at a company in the health-care space, and someone had asked why we didn’t have an official mobile app. And a company leader explained, not unkindly, that a lot of the people we were providing services to were not only not well-off, many were in the sort of financial situation where one regularly chooses which utility to pay down a bit and get turned on, and which one(s) to leave shut off for nonpayment, and as such they are not the sort of people who either have smartphones or are accustomed to using smartphones as an interface to the world.
It’s changed a bit according to my wife who works in the healthcare and mental health space in the Northeast US. Nearly everyone has a smartphone including the unhoused. The financial struggles though haven’t changed much.
*nod* When you think about it, having housing makes a mobile phone *less* of a necessity: you can use WiFi, desktop/laptop PCs, and DSL/Cable/Fiber Internet to get better value on your bandwidth, and you might have a job that allows you to work from home. That makes the phone more of a luxury. If you’re unhoused, it’s often your only connection to the world.
I’ve yet to try this one, but it’s been recommended to me a few times. I’m still using go-jira, although it’s very broken with the latest JIRA versions and doesn’t seem to be an active project (my admittedly incomplete PR is languishing like dozens of others).
Extending go-jira is… interesting. You write weird little embedded shell scripts in YAML files that are executed by sub-processes of the main binary in different phases.
I knew a guy at Netflix who turned me on to go-jira but at the time (and again as little as a year ago) it wasn’t working for me with employer’s internally hosted JIRA. jira-cli at least works.
Lots of small language-level improvements, but there don’t seem to be that many fundamental compiler architecture issues mentioned in the changelog. I may be missing those since I haven’t used Nim for a long time, but I assume if the compiler was made 10x less janky they would’ve at least had a footnote about it. (For context, I’ve used Nim as my main language in 2019-2020.)
I don’t like that there still isn’t my #1 most awaited feature - incremental compilation. Nim isn’t a very fast language to compile and the slowness really showed with some of my projects. In particular rapid, my game engine, suffered from pretty terrible compile times, which made building games with it really difficult.
And I wonder how much generics jank is still present in 2.0. And I wonder when they’ll make generics into proper generics rather than C++-like templates. The error messages with those can get pretty abysmal and can appear out of nowhere. Like here. This particular cryptic error has appeared so many times while I was developing my game that I just gave up on the project eventually.
In addition to that, I’m not a fan of how lame the module system is. Having used/seen module systems from other languages, e.g. Rust or Go, Nim’s compiler not having a clue about the concept of packages feels pretty ancient. This particularly shows when you’re developing libraries made out of many small modules; the only way to typecheck such a library in its entirety is to create a library.nim file and import everything there. Text editors didn’t seem to understand that back in the 1.4 days and would regularly miss type errors that occurred when analyzing the library as a whole.
Oh, and the text editor situation… Nim’s compiler does not have incremental recompilation, so the autocomplete gets slower the larger your project is. And it can get really slow for even pretty small projects (~10k SLOC.)
And don’t get me started on the dialects. Endless --define switches and experimental features. And none of it is implemented in a robust way. Anyone can break your library by globally enabling a --define you did not anticipate. And the --defines are not even documented in one place.
So sad to see Nim’s leadership pursuing syntax sugar and small convenience features instead of fixing foundational problems. Really wish they had a more forward-looking vision for the project and its community, rather than focusing on fulfilling the main developer’s wishes and experiments.
The Nim leadership is the main developer, Andreas. He’s not interested in sharing responsibility or broadening the leadership, as he vehemently expressed a month ago:
I lost all interest in setting up a foundation because I lost faith in mankind. Every day I wake up in this clownworld where people replaced the “master” branch with “main”, sending a strong message “fuck you” to everybody who is older than 50 and has a hard time to change old habits.
Here is a hint: If you are obsessed with racism and sexism it’s because you’re a racist and sexist.
That was the point where I gave up on Nim. I don’t know where to start with this — it’s laughable that he pulls out that silly fight about master/main as his breaking point; he whines about society changing and it’s their fault he might have to “change old habits”, and that tired canard about racism/sexism. (He also appears to have deleted the comments objecting to his post, though not the supportive ones. Because of course he’s the forum moderator too.)
But my main takeaway is that the guy’s even more of an asshat than I previously thought, and he’s going to remain the gatekeeper to any change in Nim, and main source of truth on the forum. I just didn’t want to deal with that anymore. I’d rather fight with a borrow-checker.
I’ve seen his comment, yeah. It’s informative and unfortunate.
I’ve honestly been tempted to write a “why not Nim” blog post for a couple years now but never got around to doing so because a) I don’t like spreading negativity, and b) I’d rather not attract publicity to a project whose success I don’t exactly believe in.
Bad opinions aside, I believe Araq’s lack of care for the community is precisely the reason why the project is going in the wrong direction. I’ve heard horror stories from former compiler contributors about how hard to maintain the code is and how much it lacks documentation. No wonder it doesn’t attract very many outside contributions. Had he cared more for having other people work on the language alongside him, maybe things would have turned out different, but alas…
This sort of dictatorship is not the sort of leadership of a project I’d like to invest my time in. I much prefer the Rust RFC process over this.
Woah, I didn’t expect so much negativity in this thread… I was kind of hoping to see some interesting discussions and maybe even some praise for a language that reached its 2.0.0 milestone without the backing of any tech giant.
Sure, the language is probably still not perfect, and at least some of @liquidev’s remarks make sense… but it is a remarkable milestone nonetheless.
I have been using Nim for years mostly on my personal projects (I kinda built my own ecosystem on top of it), and I must say it is fun to use. And it is very well documented. Unfortunately it feels very much a fringe language because it didn’t see massive corporate adoption (yet) but I hope this can change sooner or later.
About Araq: the guy can be rude at times, maybe even borderline unprofessional in some of his replies but he did put a lot of energy into the project over the years, and I am grateful for that. I tend not to get too involved in politics or heated debates… I saw that reply and it struck me as “a bit odd” and definitely not good marketing for the language and the community. I just hope that doesn’t drive too many people away from the language; it would be a pity.
that doesn’t drive too many people away from the language
Well it’s one thing to wish for some features, it’s another to wish for a leadership that doesn’t have a personal mental breakdown in the official forums - attacking a multitude of people - and deletes any critical response. The second one can’t just be ignored.
And if Rust is already struggling with compile times, I wonder how bad this is with something that doesn’t even have incremental compilation. You can’t just ignore a debugging round-trip time of minutes.
You can ask people to be less negative or strict, but first: don’t forget it’s v2.0, and second: the alternative to complaining about real production problems is to say nothing and move on.
I’m sorry if my comment came off as rude or overly negative… I don’t mean to ruin the celebration; as a long time user I’m just trying to voice my concerns about the direction the language is taking, and I think it’s important to talk about these rather than keep quiet about them forever and instead create an atmosphere of toxic positivity. 2.0 is considered by many a huge milestone and seeing important issues which put me off from using the language not be addressed in a major version is pretty disappointing.
Perhaps this speaks of our expectations of major versions; I see them as something that should be pretty big while in real life often it’s just some small but breaking changes. I’m of the firm belief that perhaps Nim went into 1.0 too early for its own good, because inevitably there will be breaking changes (and with how unstable the compiler can be, there can be breakages even across minor versions.)
I’ll be this person and ask you why you don’t come to D? There is a foundation and the tone is very respectful.
It is a main inspiration for Nim; Araq actually spent many years reading and commenting on the D forums. D pioneered many things that went into Nim, but the core language is very stable and there is no compiler-switch explosion. In many ways D is further along than Nim, with its 3 compilers, and it tolerates internal controversy and, I’d say, sane debate inside its community. I do see a bit of FUD about D on the internet, often echoing a single negative opinion across a majority of the content programmers see. Sometimes I think it’s down to syntax (Python-like vs C-like).
Agree. I also use D and have since… looks at personal repos… 2014 or 2015, maybe earlier, and started doing some toys in Nim around 2018. What D lacks is buzz. It’s mature, stable, and performant and, at least for me, doesn’t break between upgrades. Some corners of D like CTFE and nested templates I find hard to debug (and this is true for other languages, but that’s not a free pass), but they work. I keep finding bits of Nim and core/former-core libraries where that’s not the case and they fail in odd ways, and that’s still true in 2.0.
As bizarre as that is, Araq’s use of the phrase “clown world” is more indicative of future behaviour than random Rust community members talking about pronouns. Here’s another strange Araq post - I wouldn’t want to support a project with this kind of world view.
Maybe also because that analogy argument was inside one issue, opened specifically to bikeshed it. The other one felt more like a dismissal of anything that isn’t in his view of the world - in a discussion about giving the community a chance to steer the direction of the language.
I’d happily take that over Araq’s bullshit, like when I pointed out that Nim’s null-checking was leading to bogus errors in a bunch of my code (after hours of debugging and creating a reduced test case) he dismissed it with “that’s because the control flow analysis doesn’t notice ‘return’ statements, and you shouldn’t be using return because it isn’t in Pascal.” Despite him having put both features in the language.
Oh? I recall similar arguments being used against Jews.
It’s a fairly obvious logic fallacy, which anyone smart enough to be a programmer ought to see through pretty easily. (Hint: if you deny a > b, it does not follow you believe b > a.)
He also appears to have deleted the comments objecting to his post
Although I agree with almost all of your points and came to the same conclusion, I think it’s fair to say that not all critical comments were deleted. There are several in the thread that you linked.
The comments do show that at least one comment was removed. I don’t know if there were more now-removed comments because I read the thread only a while after it was closed.
After trying Nim for a little while some time ago, the module system is what soured me on the language. I don’t like that you can’t tell where a symbol comes from by looking at the source of the current file. It’s “clever” in that you automatically get things like the right $ function for whatever types get imported by your imports, but that’s less valuable than explicitness.
On the contrary, I actually don’t hold much grudge against the import system’s use of the global namespace. Static typing and procedure overloading ensures you get to call the procedure you wanted, and I’ve rarely had any ambiguity problems (and then the compiler gives you an error which you can resolve by explicitly qualifying the symbol.) While coding Rust, I almost never look at where a symbol comes from because I have the IDE for browsing code and can Ctrl+Click on the relevant thing to look at its sources.
My main grudge is that the module system has no concept of a package or main file, which would hugely simplify the logic that’s needed to discover a library’s main source file and run nim check on it. Right now text editors need to employ heuristics to discover which file should be nim check’d, which is arguably not ideal in a world where developers typically intend just a single main file.
I agree highlighted code ought to be pre-rendered for static sites. The same goes for math, especially now that MathML is gaining wider support. But I can’t blame people that much – it’s so much easier to drop a CDN script in your page than to configure a static build system, especially when the various parts come from different languages/ecosystems (e.g. your static site generator is in Go but all the math renderers are JS).
For my SICP study website I initially used Pandoc’s built-in syntax highlighting (skylighting) but then decided to roll my own Scheme highlighter in C: schemehl.c. It was fun figuring out how to correctly handle nested quasiquotes (something I doubt any general purpose highlighter would ever bother with).
Doing it client side is fine for half arsing it, but proper syntax highlighting often requires more context than is available in the snippet and so has to be done with something that runs at build time. For my Objective-C book, I wanted to ensure that the code examples were all correct, so each one was extracted from a file that I could build and test. For the ePub version, this meant that I could use libclang to tag every token in the file and then extract the lines and add semantic class descriptions to span tags around each token. In CSS, I could then style identifiers differently depending on whether they were macros, type definitions, local symbols, instance variables, and so on. Generating the highlights from the snippet alone would have restricted me to lexical highlighting: comments, literals, keywords, and identifiers.
It was fun figuring out how to correctly handle nested quasiquotes
I was building a language with nested quasiquotes, and I for the life of me could not figure out how to do it. Finally I looked for existing languages that had them, and I found this comment in the Links source code.
Sometimes when something seems hard, that’s because it’s actually impossible with the current approach! I had no idea that basic lexing algorithms can’t handle nested quasi-quoting, and a stack of lexers is required.
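The structural trick is that quasiquote depth follows the expression tree, not left-to-right character order, so a flat regular lexer can’t track it: you need a counter (or stack of lexers) that moves with the nesting. A toy sketch of the counter idea in Rust, over a pre-parsed s-expression (all names invented; a real highlighter would work on tokens):

```rust
// ` raises the quasiquote level, , lowers it; an atom is "live" code
// only when the level around it is back to 0.
enum Sexp {
    Atom(&'static str),
    Quasi(Box<Sexp>),   // `expr
    Unquote(Box<Sexp>), // ,expr
    List(Vec<Sexp>),
}

// Collect the atoms that end up at level 0, i.e. evaluated code.
fn live_atoms(e: &Sexp, depth: i32, out: &mut Vec<&'static str>) {
    match e {
        Sexp::Atom(a) => {
            if depth == 0 {
                out.push(a);
            }
        }
        Sexp::Quasi(inner) => live_atoms(inner, depth + 1, out),
        Sexp::Unquote(inner) => live_atoms(inner, depth - 1, out),
        Sexp::List(items) => {
            for item in items {
                live_atoms(item, depth, out);
            }
        }
    }
}

fn main() {
    use Sexp::*;
    // Models `(a ,b `(c ,d ,,e)): only b and e escape all the backquotes.
    let expr = Quasi(Box::new(List(vec![
        Atom("a"),
        Unquote(Box::new(Atom("b"))),
        Quasi(Box::new(List(vec![
            Atom("c"),
            Unquote(Box::new(Atom("d"))), // still quoted: level 1
            Unquote(Box::new(Unquote(Box::new(Atom("e"))))),
        ]))),
    ])));
    let mut live = Vec::new();
    live_atoms(&expr, 0, &mut live);
    assert_eq!(live, vec!["b", "e"]);
    println!("live unquotes: {:?}", live);
}
```

Note that `,d` under two backquotes stays quoted; a single regex pass over the characters has no way to know that.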
Just a few years ago I actually switched from MathML to MathJax because I simply could not get stuff to render consistently across websites. I remember Chromium being especially problematic. I’d be happy if it’s gotten better since then, but I have been very careful about using MathML since then.
I’m using a site hosted copy of MathJax for a minor side project and wonder if anyone will really object to client-side rendering if it skips the cdn. At 623kb (svg) it’s small compared to related videos.
The author leads with “People working in other industries should probably not be miserable at work either, but that is not the concern of this article”. About that:
I spent my 20s working almost-min-wage jobs in kitchens and grocery stores, working as many as 3 jobs (opening + prep work in a cafe in the early morning, cook in a restaurant in the afternoon and evening, and bussing dishes on the weekend) and various side hustles to pay for a small room in a crowded house in South Berkeley (roughly 11 other people were living there), with not much hope in sight for anything different.
Sometimes nowadays I find myself getting frustrated with e.g. some of the nasty proprietary or legacy tech I have to work with or interface with. But while this work can sometimes feel like slogging through filth, I’ve worked jobs where I literally had to slog my way through actual filth – and this is very far from that. As a knowledge worker, you generally have autonomy and respect and flexibility that is completely unheard of to even a lot of white collar workers, let alone folks toiling away doing physically demanding work for minimum wage. Not to mention you probably won’t deal with the kinds of psychological abuse and unsafe conditions that are part and parcel of many of those lower-wage jobs.
Which isn’t to say that tech workers shouldn’t aim to improve their conditions further, or try to have more fun at work, or that we should put up with bullshit simply because others have it worse – it’s essential that we protect what we have, improve it, and even enjoy ourselves. But I do think that tech workers often miss how dire things are for other folks, especially folks working low-wage, manual jobs, and it would be nice to see more recognition of the disparity in our circumstances.
I grew up in a restaurant and spent some time working as a bus boy. It really grinds my gears when you go out for a meal with a coworker and they complain about the service. “How hard could it be to get my order right?” Why don’t you work in a restaurant for a couple years and find out! Or when people assume scaling a restaurant is as easy as adding a load balancer and a couple more servers (pun not intended, but appreciated).
Some people have never worked in the service industry and it really shows.
This resonates really hard with me. I’ve come back several times to try to write a reply that isn’t a whole rant about people in tech, but:
I’ve done a bunch of not so sexy jobs (legal and not so much, I’ll leave those out): retail, restaurants in various positions, and I was even one of those street fundraisers for the children (where I was subject to physical violence and the company’s reaction was “it comes with the job”). Now I work tech full time, and I’m a deckhand when I’m not doing that.
My perspective is shaped by a couple things, I think:
Being born to teenage parents who worked similar jobs and struggled for a long time
The fact that they “raised me right” – if I talked to / treated anyone the way I’ve seen some folks I’ve met in this industry do to service workers / people they seem to consider as “below them”, they wouldn’t be having any of it
Actually working the jobs and being subject to the awful treatment by both customers and management
The thing, though, is that I really don’t think you should have to go through any of this in order to just not be a jerk to people… I really don’t know what the disconnect is. The most difficult customers I’ve had (at previous jobs and on the boat) have typically been the ones that seem the most privileged. When it comes to restaurants, the cheapest ones (in terms of tipping) were similarly the people that would come in and order hundreds of dollars worth of food and then leave little to no tip (I’m not here to debate tipping culture; it is what it is here at this point).
I’ve had situations where I take people to a place where I’m cool with the staff and someone picks up the tab (for which I’m appreciative) but then they are rude / pushy / skimp out on the tip, which is really embarrassing to say the least (I don’t hesitate to call people out but I feel like … I shouldn’t have to?)
The boat I work on is in the bay area and so we get a lot of tech people, and a couple of things stand out:
I don’t really know how some of the most intelligent people can be so dumb (literally all you need to do is follow directions)
They talk down to us (the crew trying to put them on fish and, for what it’s worth, keep them alive – won’t get into that here), and when you ask them not to do something for safety or you try to correct something they’re doing wrong, they get an attitude. I want to emphasize, not everyone, but enough to make you stop and ask why.
When they find out that I also work in tech (you talk to people since you’re with them for 8+ hours), the reaction is typically one of “why do you need to be doing THIS?”. Sidenote – the most hilarious thing that I enjoy doing is dropping into a technical conversation (a lot of people come on with their coworkers) and having people be like “wtf how does the deckhand know about any of this?”
They don’t tip … lol … or they stiff us for fish cleaning which we are upfront is a secondary service provided for a fee.
It’s not everyone, but I get a pretty decent sample size given the population here. The plus side of working on the boat (vs a restaurant or retail) is that if someone starts being a major a-hole the captain doesn’t mind if we stand up for ourselves (encourages it, even)
Some people have never worked in the service industry and it really shows.
Yeah, exactly. Or, we have a saying: “you can tell who’s never pushed a broom in their life”.
That was more of a rant than I wanted to get into but since I’m here it was kind of cathartic. I really just wish people would stop and think about the human on the other end. Of course it’s not just tech people that do things like this, but … yeah.
I’ve done a bunch of not so sexy jobs (legal and not so much, I’ll leave those out)
I worked in eDiscovery for a while (~3 years) so I have a sense of legal. It’s very stratified and stressful. I remember going to bed at 2 AM and waking up at 5 AM to make sure that a production was ready for opposing counsel. Not ideal…
My perspective is shaped by a couple things, I think:
Being born to teenage parents who worked similar jobs and struggled for a long time
By contrast, my father was 39 when he had me. However, he had a hard life. He grew up in Francoist Spain. (One of my few memories of my grandfather is him telling me “El hambre es miseria. El hambre es miseria.” (Hunger is misery. Hunger is misery.)) My father was a Spanish refugee in France at age 9. He didn’t complete high school. Instead, he did an apprenticeship in a French restaurant where the chefs beat him. He worked 16-hour days for a long time.
The fact that they “raised me right” – if I talked to / treated anyone the way I’ve seen some folks I’ve met in this
industry do to service workers / people they seem to consider as “below them”, they wouldn’t be having any of it
Absolutely. My father always said that everyone was welcome at his restaurant, regardless of what they were wearing. It’s important to respect everyone.
Inter-generational trauma is a real thing. I’m doing okay, but my brothers didn’t fare so well. (A topic for a more one on one conversation.) I hope you are okay. <3
Edit: this has really thrown me for a loop. I don’t mean to be dramatic and I know this is a public forum, but I’m sure there are more people reading than responding. If it means anything to anyone then it’s more important to say so than to be stoic. I hope you are all doing okay.
I worked in eDiscovery for a while (~3 years) so I have a sense of legal. It’s very stratified and stressful. I remember going to bed at 2 AM and waking up at 5 AM to make sure that a production was ready for opposing counsel. Not ideal…
Heh, sorry I meant legal vs not-so-legal in the legality sense, but in any case wow that sounds dreadful!
I appreciate your kind words and you sharing your story, and I’m glad you’re doing okay. I’m also sorry to hear about your brothers, similar thing is true for some of my siblings…kind of weird how that works.
There is a theory in some circles that having money enables people not to have to care about other human beings. With money, you can feel like you provide for all your needs just by buying goods and services. If you don’t have much money, you need to compensate by trying to build mutual understanding, which leads to being more empathetic. You also need to respond to, or even anticipate, the needs of the people who give you money, which also leads to a kind of asymmetric empathy (similar to impostor syndrome).
Also there may be the fact that some people are attracted to tech because they feel they are more gifted with machines than with people. So maybe there’s some form of selection bias here too.
Well put. I sometimes ask myself, “How many people are living miserable lives so that I can sit in a cushy chair and think about interesting problems?”
Well, I’ve worked in the oil and gas industry, so I helped keep lots of people’s heat working in the winter, including my own. At the cost of making the world incrementally more fucked though, so that one’s a net negative. I’ve done a fair amount of teaching, so I helped share skills that were useful for people. I’ve worked datacenter tech support, so I helped lots of people keep their online businesses working. So there’s that.
If I really wanted to make the world a better place, I would either work for something like Habitat For Humanity and build houses, or I would get a PhD in nuclear physics or material science and try to make better nuclear/solar energy sources. Or become a teacher, natch, but my sister and both parents are teachers so I feel like I have to break the family mold. Could always do what my grandmother did and become a social worker. Or go into politics, because someone with a brain stem has to, but I’ve had enough brushes with administration to know that I will burn myself out and be of no use to anyone.
Right now I work in robotics doing R&D for autonomous drones, so I’ll hopefully make peoples’ lives better somewhere, someday down the line. Nothing as good as what Zipline does, but on a similar axis – one of my most fun projects was basically Zipline except for the US Army, so it was 10x more expensive and 0.25x as good.
…do people not normally think about this sort of thing?
That there are people supporting cole-k’s job (I don’t know who, maybe car mechanics, cafeteria workers, janitors?) whose work is required for cole-k’s job to be possible, but who are necessarily miserable in their jobs.
Yeah, and I’ve done at least a moderate share of supporting them back one way or another, within the bounds of my aptitudes, skills, and how selfish I’m willing to be – and honestly I’m pretty selfish, ngl. Sometimes I’ve done it directly by serving them back, more often indirectly by trying to do things that Benefit Everyone A Bit. All I can do is keep trying. We’re all in this together.
This should not be controversial, and sometimes I wish I had a button to teleport some of my colleagues where I used to work in Africa to recalibrate their sense of what “hard” means.
This is so true. I try to remind myself of this as much as I can, but since I never experienced minimum-wage work myself, it can be hard to stay fully aware of the situation.
Maybe we should try to improve conditions for everybody. I fear that by insisting so much on the good conditions we have in the tech industry, we would only encourage a degradation of those conditions, unfortunately. Tactically, I wonder if we should instead focus on the commonalities of the issues we face across all the different types of jobs.
I also paid for college and university working in a large hotel kitchen and then dining room. At the time in the front of the house I could earn enough in tips over a summer to cover a year of state school tuition plus room and board. I’d go back on holidays and long weekends to make the rest of my expenses. It was hard work, long hours, and disrespected in all kinds of ways. Once in a while there was violence or the threat of violence. But it beat doing landscaping or oil changes and tires. There were a number of us who were using it as a stepping stone, one guy from Colombia worked until he saved up enough to go back home and buy a multi-unit building so he and his parents could live in it and be landlords, get a used BMW for himself, and finish his education. His motivation, and taking extra shifts, made mine look weak and I was highly motivated.
I remind myself of that time when I’m frustrated at my desk.
one guy from Colombia worked until he saved up enough to go back home and buy a multi-unit building so he and his parents could live in it and be landlords
This I think was one of Bryan Caplan’s arguments about open borders.
In addition to the moral issue that no-one has the right to curtail the free movement of others[1], there is solid empirical evidence that not only do immigrants enrich the countries they emigrate to (i.e. contribute more on average than locals), they often also help lift their home countries out of poverty by doing exactly what your Colombian friend did.
[1] One frequently occurring example of hypocrisy on the matter of travel: people who simultaneously rail against any attempt by their own Government to control their movement (passports, papers, ID, etc.), but also complain loudly about people crossing the border into “their” country and demand the Government build a wall, metaphorically or literally.
I’d be interested in the backstory here: was Red Hat ever profitable before these changes? Did something stop them from turning a profit as they did before? Or did someone at IBM just decide they could be squeezed for more profit?
A quick search for their financial reports showed that, as of 2019, they were making quite large piles of money. I was quite surprised at how much revenue they had: multiple billions of dollars in revenue and a healthy profit from this.
It’s not clear to me the extent to which CentOS affected this. My guess is that, by being more actively involved for the past few years, they’ve made it easier to measure how many sales they get as conversions from CentOS users, and found that it’s peanuts compared to the number of people that use CentOS as a way of avoiding paying for an RHEL subscription.
I didn’t see any post-acquisition numbers but I wouldn’t be surprised if they’re seeing some squeezes from the trend towards containerisation. Almost all Linux containers that I’ve seen use Ubuntu if they want something that looks like a full-featured *NIX install or Alpine or something else tiny if they want a minimal system as their base layer. These containers then get run in clouds on Linux VMs that don’t have anything recognisable as a distro: they’re a kernel and a tiny initrd that has just enough to start containerd or equivalent. None of this requires a RedHat license and Docker makes it very easy to use Windows or macOS as client OS for developing them. That’s got to be eroding their revenue (and a shame, because they’re largely responsible for the Podman suite, which is a much nicer replacement for Docker, but probably won’t make them any money).
I’d say they’re not hurting in the containerization space, with OpenShift as the enterprisey Kubernetes distro, quay as the enterprisey container registry, and the fact that they own CoreOS.
If you’re deploying into the cloud, you’re using the cloud provider’s distro for managing containers, not OpenShift. You might use quay, but it incurs external bandwidth charges (and latency) and so will probably be more expensive and slower than the cloud provider’s registry. I don’t think I’ve ever seen a container image using a CoreOS base layer, though it’s possible, but I doubt you’d buy a support contract from Red Hat to do so.
You’re missing that enterprises value the vendor relationship and the support. They can and will do things that don’t seem to make sense externally but that’s because the reasoning is private or isn’t obvious outside that industry.
I’ve never seen a CoreOS-based container but I’ve seen a lot of RHEL-based ones.
You’re missing that enterprises value the vendor relationship and the support.
Possibly. I’ve never had a RHEL subscription but I’ve heard horror stories from people who did (bugs critical to their business ignored for a year and then auto closed because of no activity). Putting something that requires a connection to a license server in a container seems like a recipe for downtime.
I expect that big enterprise customers will not suffer stamping a license on every bare-metal host, virtual machine, and container. My experience is that connectivity outside the organization, even in public cloud, is highly controlled and curtailed. Fetching from a container registry goes through an allow-listed application-level proxy like Artifactory or Nexus, or through peculiarly local means. Hitting a license server on the public internet just isn’t going to happen. Beyond a certain size these organizations negotiate terms, among them all-you-can-eat and local license servers.
All this is easily findable on the Internets, but the tl;dr - yes. Red Hat was profitable. That’s Red Hat’s job, to turn a profit. It’s also Red Hat’s job to remain profitable and try to grow its market share, and to try to avoid being made irrelevant, etc.
Being a public company means that shareholders expect not only profit, but continual growth. Whether that’s a reasonable or healthy expectation is a separate discussion, but that’s the expectation for public companies – particularly those in the tech space. IBM paid $34 billion for Red Hat and is now obliged to ensure that it was worth the money they paid, and then some.
If RHEL clones are eating into sales and subscription renewals, Red Hat & IBM are obliged to fix that. I don’t work at Red Hat anymore, but it’s no secret that Red Hat has a target every quarter for renewals and new subscriptions. You want renewals to happen at a pretty high rate, because it’s expensive to sign new customers, and you want new subscriptions to happen at a rate that not only preserves the current revenue but grows it.
That’s the game, Red Hat didn’t make those rules, they just have to live by them.
Another factor I mean to write about elsewhere soon is the EOL for EL 7 and trying to ensure that customers are moving to RHEL 8/9/10 and not an alternative. When CentOS 7 goes EOL anybody on that release has to figure out what’s next. Red Hat doesn’t have any interest in sustaining or enabling a path to anything other than RHEL. In fact they have a duty to try to herd as many paying customers as possible to RHEL.
So it isn’t about “aren’t they making a profit today?” It’s about “are they growing their business and ensuring future growth sufficiently to satisfy the shareholders/market or not?”
My guess is that revenue was expected to start declining. Deployment density has been rising rapidly since Xen and VServer arrived. Red Hat had to adjust pricing multiple times to cope, but I don’t believe they were able to keep up with the trend.
Nowadays with containers, the density is even higher. We are at PHP shared hosting level density, but for any stack and workload. For simple applications, costs of running them are approaching the cost of the domain name.
Instead of a fleet of 10 servers, each with its own subscription (as you had in 2005-2010 with RHEL 4 & 5), you now have just a 2U cluster with a mix of VMs and containers, and just two licenses.
And sometimes not even that. People just run a lightweight OS with Docker on top pretty frequently.
This is a band-aid on a bleeding wound, I believe.
They should be pursuing some new partnerships. It’s weird that e.g. Steam Deck is not running an OS from Red Hat. Or that you can’t pay a subscription for high quality (updated) containers running FLOSS, giving a portion of the revenue to the projects.
The Steam Deck might be a poor business case for Red Hat or Valve. Since the Steam Deck hardware is very predictable and it has a very specific workload, I don’t know if it would make sense to make a deal with Red Hat to support it. It would be a weird use case for RHEL/Red Hat, too, I think. At least it would’ve been when I was there – I know Red Hat is trying to get into in-vehicle systems, so there might be similarities now.
I am not saying Red Hat should be trying to support RHEL on a portable game console. It should have been able to spin a Fedora clone and help out with the drivers, graphics and emulation, though.
Somebody had to do the work and they made profits for someone else.
Concentrating on Java for banks won’t get them much talent and definitely won’t help get them inside the next generation of smart TVs that respect your privacy. Or something.
It would be a weird use case for RHEL/Red Hat, too. I know Red Hat is trying to get into in-vehicle systems so there might be similarities now.
One business case for Red Hat would be a tremendous install base, which would increase the raw number of people reporting bugs against Fedora or their RHEL spin. And that in turn could lead IVI vendors to have a really battle-tested platform+software combo. Just don’t let them talk directly to the normal support other companies are paying for.
My understanding is that Canonical has benefitted hugely from WSL in this regard. It’s practically the default Linux distro to run on WSL. If you want to run Linux software and you have a Windows machine, any tutorial that you find tells you how to install Ubuntu. That’s a huge number of users who otherwise wouldn’t have bothered. Ubuntu LTS releases also seem to be the default base layers for most dev containers, so if you open a lot of F/OSS repos in VS Code / GitHub Code Spaces, you’ll get an Ubuntu VM to develop in.
was once rejected from a job specifically because I mentioned Erlang and the founder said he thought I was more of a computer scientist than an engineer
That’s interesting, because there’s a fair amount of Erlang used in industry — it was created for telephone switching systems, not as an academic exercise. CouchDB is mostly written in it. Is Kafka in Erlang or am I misremembering?
As to your main point, I’m not a Lisper, and to me the quotes you gave tend to reflect my feelings: stuff that once made Lisp special is widely available in other languages, the cons cell is a pretty crude data structure with terrible performance, and while macros are nice if not overused, they’re not worth the tradeoff of making the language syntax so primitive. But I don’t speak from a position of any great experience, having only toyed with Lisp.
Can you elaborate? Aren’t lists the primary data structure, in addition to the representation of code? And much of the Lisp code I’ve seen makes use of the ability to efficiently replace or reuse the tail portion of a list. That seems to practically mandate the use of linked lists — you can implement lists as vectors but that would make those clever recursive algorithms do an insane amount of copying, right?
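For what it’s worth, the tail reuse in question can be sketched in a few lines (a made-up Cons class, not any particular Lisp’s internals): two lists share a suffix in O(1), which a vector representation would have to copy.

```python
# A minimal cons cell: car holds the element, cdr holds the rest of the list.
class Cons:
    __slots__ = ("car", "cdr")
    def __init__(self, car, cdr):
        self.car, self.cdr = car, cdr

def to_pylist(c):
    # Walk the chain of cdrs and collect the cars into a Python list.
    out = []
    while c is not None:
        out.append(c.car)
        c = c.cdr
    return out

tail = Cons(2, Cons(3, None))   # (2 3)
a = Cons(1, tail)               # (1 2 3)
b = Cons(0, tail)               # (0 2 3); shares the tail of a, no copying
```

Prepending to a shared tail costs one allocation regardless of list length; with a vector-backed list, building `b` from `a` would copy the whole suffix.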
Aren’t lists the primary data structure, in addition to the representation of code? And much of the Lisp code I’ve seen makes use of the ability to efficiently replace or reuse the tail portion of a list
No. Most lisp code uses structures and arrays where appropriate, same as any other language. I’m not sure what lisp code you’ve been looking at, so I can’t attest to that. The primordial LISP had no other data structures, it is true, but that has very little to do with what we would recognise as lisp today.
I think it stems mostly from how Lisp is taught (if it’s taught at all). I recall back in college when taking a class on Lisp it was all about the lists; no other data structure was mentioned at all.
That’s a popular misconception, but in reality Common Lisp, Scheme, and Clojure have arrays/vectors, hashtables, structures, objects/classes, and a whole type system.
I don’t know what Lisp code you’ve looked at, but in real projects, like StumpWM or the Nyxt browser or practically any other project, lists typically don’t play a big role.
Unfortunately, every half-assed toy language using s-expressions gets called “a Lisp”, so there’s a lot of misinformation out there.
Clojure and Fennel and possibly some other things don’t use linked lists as the primary data structure. Both use some kind of array, afaik (I’ve never properly learned Clojure, alas). How this actually works under the hood in terms of homoiconic representation I am not qualified to describe, but in practice you do code generation stuff via macros anyway, which work basically the same as always.
As I said above, this is a divisive issue for some people, but I’d still call them both Lisps.
Depends a bit on the person’s perspective. I’ve seen some people get absolutely vitriolic at Clojure and Fennel for ditching linked lists as the primary structure. I personally agree with you, but apparently it makes enough of a difference for some people that it’s a hill worth dying on.
I don’t think Kafka is, but CouchDB certainly is and, famously, WhatsApp. It’s still not so common, but not unheard of, especially now in the age of Kubernetes, although Elixir seems reasonably popular. Either way, I don’t think most people know much about its history; they just sort of bucketize it as a functional language and apply whatever biases they have about them.
I never actually wrote much Erlang – I only even mentioned it in that interview because the founder mentioned belonging to some Erlang group on his LinkedIn. It turned out to have been something from his first startup, which failed in a bad way, and I think he overcorrected with regard to his attitude toward FP. He was a jerk in any case
This essay is an admirable display of restraint. I would have been far crueler.
In my experience, protocols that claim to be simple(r) as a selling point are either actually really complex and using “simple” as a form of sarcasm (SOAP), or achieve simplicity by ignoring or handwaving away inconvenient details (RSS, and sounds like Gemini too.)
After years of thinking and reading RFCs and various other documents, today I finally understood: “Simple” refers to “Network”, not to “Management Protocol”! So it is a Management Protocol for Simple Networks, not a Simple Protocol for Management of Networks.
Let’s not forget ASN.1, DCE, and CORBA. Okay, let’s forget those. In comparison SOAP did seem easier because most of the time you could half-ass it by templating a blob of XML body, fire it off, and hopefully get a response.
achieve simplicity by ignoring or handwaving away inconvenient details
Exactly, and the next-order effect is often pushing the complexity (which never went away) towards other parts of the whole-system stack. It’s not “simple”, it’s “the complexity is someone else’s problem”.
Pretty sure that’s because RSS2 is not supposed to contain HTML.
But RSS2 is just really garbage even if people bothered following the spec. Atom should have just called itself RSS3 to keep the brand awareness working.
Well, Winer’s way of arguing was never really via the legal system, it was by being a whiny git in long-winded blog posts. Besides, RSS versions <1.0 were the RDF-flavored ones (hence RSS == RDF Site Summary), and no-one wanted that anymore.
<=1.0, and people kept using 1.0 long after 2.0 existed because some people still wanted that :) Though those people were mostly made happy by Atom, and then 1.0 finally died.
O god, don’t get me started. RSS 2 lacked proper versioning, so Dave Fscking Winer would make edits to the spec and change things and it would still be “2.0”. The spec was handwavey and missing a lot of details, so inconsistencies abounded. Dates were underspecified; to write a real-world-usable RSS parser (circa 2005) you basically had to keep a dozen different date format strings and try them all until one worked. IIRC there was also ambiguity about the content of articles, like whether it was to be interpreted as plain text or escaped HTML or literal XHTML. Let alone what text encoding to use.
I could be misremembering details; it’s been nearly 20 years. Meanwhile all discussions about the format, and the development of the actually-sane replacement Atom, were perpetual mud-splattered cat fights due to Winer being such a colossal asshat and several of his opponents being little better. (I’d had my fill of Winer back in the early 90s so I steered clear.)
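The date-format roulette described above can be sketched like this (the format list is illustrative, not exhaustive; real feeds were worse):

```python
from datetime import datetime

# Formats actually seen in feeds varied wildly; these are a representative few.
RSS_DATE_FORMATS = [
    "%a, %d %b %Y %H:%M:%S %z",   # RFC 822 with numeric offset
    "%a, %d %b %Y %H:%M:%S %Z",   # RFC 822 with a zone name like GMT
    "%Y-%m-%dT%H:%M:%S%z",        # ISO 8601, wrong for RSS 2 but common anyway
    "%d %b %Y %H:%M:%S",          # weekday and zone missing entirely
]

def parse_rss_date(s):
    # Try each known format in turn until one parses.
    for fmt in RSS_DATE_FORMATS:
        try:
            return datetime.strptime(s.strip(), fmt)
        except ValueError:
            continue
    raise ValueError("unparseable RSS date: %r" % s)
```

Every feed generator that invented its own date string forced one more entry into that list, which is the whole problem with an underspecified format.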
Does that mean the other core words are implemented using these 14 words? Or are they not implemented at all?
Yes, only 8 primitives are required to bootstrap a whole FORTH: https://groups.google.com/g/comp.lang.forth/c/NS2icrCj1jQ
For fun: in this Strange Loop talk, Concatenative programming and stack-based languages, the speaker eventually boils it down to just two lambda calculus-like words: https://imgur.com/a/E5KWGZJ
Do you have a list of those words? I skimmed through the thread but didn’t find them :facepalm:
I believe the list is:
I don’t write Forth– it was my first language, I was a kid– but I remember someone (Brad Rodriguez?) having an even shorter list of primitives for something not quite forth that could bring up forth.
It might be Three Instruction Forth, which contains “@”, “!” and “EXEC”. It’s meant as a target for embedded systems to upload code (using “!” and “EXEC”) and for examining memory (”@”). It’s not really a language per se.
Yeah, that’s probably as small as it gets. I was aware of that but as you said it’s really not a language, it’s the first step toward one.
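A toy sketch of the three-instruction idea, with Python callables standing in for machine code (a real target peeks and pokes bytes and jumps to addresses; all names here are made up):

```python
# Toy model: "memory" maps addresses to values; uploaded "code" is a
# Python callable standing in for a machine-code routine at that address.
memory = {}

def fetch(addr):           # @    : examine target memory
    return memory.get(addr)

def store(addr, value):    # !    : upload data or code to the target
    memory[addr] = value

def exec_at(addr):         # EXEC : jump to an uploaded routine
    return memory[addr]()
```

With only these three operations, a host can interactively inspect the target, upload routines piece by piece, and run them, which is how the full Forth gets bootstrapped on top.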
This is the opposite of my experience, but our workflows are different. I’ll just note some points here:
I have learned to love Zig’s source code (I mean the standard library, not the compiler). It’s very easy to read (for me, at least), and I’ve found that, because of the explicitness of the language design, I don’t need arcane hidden knowledge to understand what the source is doing. As a result, I’ve stopped searching for answers online and have embraced the code as the source of truth. As a bonus, I’ve gained a better understanding of what’s going on.
I don’t quite agree with this. The explicit namespacing for files gives you a tool to make logical separations in your directory. I’ve found Go’s implicit namespacing confusing because when I’m reading other people’s code, I never seem to know where to find a symbol (I’m not considering external tools like LSP or grep, which I need in Go but don’t in Zig). In Zig, I can just see which import the symbol refers to. It makes things nice and clear.
Again, maybe it’s my workflow, but I’ve found having the tests right beside where the code lives gives me the best tool to learn what the code does, and it never goes out of date. Taking a look at e.g. http server code in stdlib, you can read the code and then how it’s used, and that’s all you need to do. No praying, grepping, hoping, aligning the stars, searching online, banging head against the wall, to maybe find how a piece of code is used. In fact, this is why I’ve started reading the code, because it’s so easy and takes so little time to read the code and see how it’s used.
I encourage you to consider this workflow. I don’t use this flow with any other language because it just doesn’t work: too many redirections, implicitness, and fluff get in the way, and you need advanced tools. In Zig, it’s all there, right there, in front of you, nicely collected together. No need for tools. This workflow has been so effective thanks to Zig that I haven’t felt the need to set up LSP yet! I just don’t feel like I need it.
I do something like that, but with an LSP that takes me right to the source/tests. Works offline, no need to even start my browser.
It also leads me to discover better ways to do what I wanted to do, because I spot a function in a module somewhere. E.g. seeing I can do `bufferedReader(reader)` instead of `BufferedReader(@TypeOf(reader)){ .unbuffered_reader = reader, ... }`.

That said, different things work for different people, and I hope the docs will one day be as enjoyable as the language.
Having only played with Zig, I agree with this. I prefer tests alongside the code. Drifting the topic a bit, I very much like D’s support for unit tests in the same file as the code and that they can produce documentation.
+1 it’s great in Rust as well. I also like the ability to test module-private functions (I know some people think this is bad). Java is such a pain needing to have a mirrored folder structure for the tests just so things can be in the same package.
I arrived at test-right-beside-code independently when working on Next Generation Shell. I have two groups of tests. One is basic language functionality; it goes separately and is practically not used anymore. The second is for the libraries, where each test lives just below the function.
I’ve noticed that the tests below the function are almost identical to examples that are given in documentation which is just above the function. This needs to be addressed.
Linus’s reply to the mailing list might have been a better link than phoronix: https://lore.kernel.org/lkml/CAHk-=whFZoap+DBTYvJx6ohqPwn11Puzh7q4huFWDX9vBwXHgg@mail.gmail.com/
Damn, I was hoping for a flamewar between Linus and Theo de Raadt, but then Theo says “I agree completely”…
His reply, at length, is a good read: https://lore.kernel.org/lkml/55960.1697566804@cvs.openbsd.org/
I remember when he introduced `mimmutable(2)`. It really is amazing how much BS Chrome puts everyone through. Fire and motion, fire and motion.
https://www.joelonsoftware.com/2002/01/06/fire-and-motion/
An old analysis, but substitute the 800 pound gorilla du jour for Microsoft and it holds up well.
What a week: Microsoft explains how to download and install Linux, and Theo and Linus are getting along on a mailing list.
Must be a bit chilly in hell
Have they been at loggerheads before? From what I’ve gleaned, the projects respect each other but have fundamental disagreements about how to structure a Unix-like.
Linus in 2008: “I think the OpenBSD crowd is a bunch of masturbating monkeys, in that they make such a big deal about concentrating on security to the point where they pretty much admit that nothing else matters to them.”
I think it was Marc Espie who replied “Who are you calling a monkey?” which was a perfect response.
I’ll take your word for it. I’m not that well-versed in US primary school insults.
I interpreted the reply as mock offense at the insult of being called a monkey with the humorous and unwritten acceptance of the accusation that they were masturbating over security. I don’t know, it seemed funny and answered in kind without taking up an argument.
That’s funny, except I’m afraid it shows rather a lack of research effort :-) — none of the people named was an American at the time (and most of them aren’t now, either).
Just shows nerds need some tutoring in how to beef. This stuff is pathetic.
Git is hard because it’s a terrible version control system. We keep using it because people think it’s reasonable to write blog posts putting the burden of understanding entirely on the end-user. It’s unreasonable to have a bullet list of things you need to understand about it where the first thing is:
Seriously just realize you have Stockholm syndrome and stop victim blaming other people who haven’t developed it yet.
Worldline. Jesus christ.
I believe that we use git precisely because it is terrible. There is a process (which someone told me a couple of years back had an actual name, but I’ve forgotten it) where systems reinforce the weakest link. If you build a tool that is useful, people will use it. If you build a tool that is almost useful, people will build things around it to strengthen it. Things like GitHub were able to thrive because using git without additional tooling was so painful. There are a lot of git GUIs (I am a particular fan of gitui, since it means I don’t need to leave my terminal) that wouldn’t exist if people didn’t have such a strong desire to avoid the git CLI. Subversion GUIs were always an afterthought and most people just went to the command line. If you said you used a GUI for svn, people would be surprised. If you say you use one for git, people will want to discuss it and see if it has any features that make it better than the one they use.
Alan Kay said that evolution in computing is a process of building a stack of abstractions and then squashing the lower-level ones into something simpler. I believe that the lack of progress in computing is because we are far better at the first step than the second. Something that replaces git could do so by looking at the abstractions that people build to avoid dealing with git and building a tool for that, but it will be competing with dozens of things layered atop git.
Around the time git was started I was still avoiding subversion because it had gone through a lot of churn and instability in its repository formats (including data loss). Svn was just about settling down and becoming something I might trust as a repo admin. I was using it casually for contributing to Apache SpamAssassin and FreeBSD, and the main impression I got was that svn was incredibly slow, much slower than CVS. I think this was because the svn protocol was (is?) ridiculously chatty, so if you are working over a long-latency link then it sucks. And it still lacked realistic support for merges. So I continued to use CVS when I had to host repositories.
Then along comes git, and it’s much faster, it supports merges, it has better tools for wrangling my working copy. I was not an early adopter, but it made sense for me to continue using cvs a few more years and skip svn entirely.
At the time I was also watching bzr and hg, and it wasn’t entirely clear which would be worth adopting. I remember the discussion about bzr’s rich patch representation, versus git’s approach of working out what was a rename (etc) after the fact - I was persuaded by Linus’s arguments. Also bzr’s lack of stable repo format was not great. Mercurial had a well-documented format, but it was file-based (like cvs) and it was unclear to me that it could handle renames well. And hg’s branching model seemed rigid and awkward compared to git.
So from my point of view, git was terrible, but it was much better than the alternatives.
I stayed with svn for a long time because one of my collaborators did some fantastic work in a local branch with svk and then lost his laptop. That put me off the idea of distributed revision control for a long time. I wanted people to push things to public branches as quickly as possible and not lose work because their laptop or VM broke.
FreeBSD was very late to adopt git (though a fairly early adopter of svn). The faster download speed was a big selling point. I could do a git clone of the ports or source repo (with all history) in less time than updating a week-old checkout with subversion. No idea how they managed to make the protocol so slow. I think it needed multiple round trips to request each revision, rather than just telling the server ‘give me everything from revision x’, which is very silly given that there’s basically no client state with subversion.
I don’t think that can be the whole story. It’s much better than what came before (cvs, svn) for most people AND nothing has yet been created that’s clearly better for a great many people.
I’m surprised to hear this and I wonder how much the state of Git GUIs has changed since circa 2014, when the messaging I would hear about Git GUIs was like “Please, please don’t use them! They’re (even more) confusing, they’ll hinder you in learning Git, and they’re especially bad because, when you need to ask for help with using Git, it will be far more difficult for us, the Internet, to help you than if you used the CLI.”
I was using GitX from about 2008ish, maybe a bit earlier, and the advice then was ‘there are some complex things that you can’t do in the GUI, it shouldn’t be your only tool’, but doing commits of part of a file without a GUI is incredibly painful and that was one of the big selling points of git (if you have some small bug fixes done alongside a new feature you can commit them separately and merge them into the main branch without merging the whole feature).
For the CS-inclined you can just say “path to the commit from the root of the tree” (or equivalently, “the parent matters”) instead which I think captures the idea without talking about alternate worlds or parallel universes or whatever. We’re professionals, it should take skill to use our tools.
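The “path to the commit from the root of the tree” framing can be sketched with a toy model (this is not git’s real object format, just an illustration of the principle that a commit id covers its parent):

```python
import hashlib

# Toy model: hash the tree, the parent id, and the message together.
# Because the parent id is part of the input, the same change on a
# different parent yields a different commit id -- "the parent matters".
def commit_id(tree_hash: str, parent_id: str, message: str) -> str:
    data = f"tree {tree_hash}\nparent {parent_id}\n\n{message}".encode()
    return hashlib.sha1(data).hexdigest()

a = commit_id("tree1", "parentA", "fix bug")
b = commit_id("tree1", "parentB", "fix bug")  # identical change, new parent
print(a != b)  # True: rebasing a commit necessarily produces a new id
```

This is also why a rebased branch has entirely new commit ids even when the diffs are untouched.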
Things should only require skill to use if it isn’t possible to engineer a high skill requirement out of it. If the tooling has a high skill requirement for no reason beyond access gating then it is definitionally a poor tool that should be replaced. It isn’t acceptable for a tool to require significant training simply because the users are expected to be highly skilled in some other domain.
This honestly feels like when people complain about tools being simplified because it means more people who aren’t “skilled” are using the tool. It also implies that people are only skilled if their skill is in specific fields.
People say that, but then they can’t point to a version control system (of the many that exist over the decades they’ve existed) that solves all the problems git does with an elegance that is assumed to be possible. The proof should be easy!
But the proof is easy.
Mercurial, for example, doesn’t require knowledge of worldlines and branch go-karts. It is an extremely usable tool that doesn’t defy most people’s mental models.
Look at Sapling’s features like `sl absorb`. That would take 1 second to teach someone vs. whatever it takes to teach interactive rebase, fixup commits, autosquash, etc.

Better tools exist and have existed for a long time.
Heck, just look at what people have layered on top of git itself to fix its warts, eg git-branchless. It exposes extremely important workflows missing in git (like a sensible default log and branch rebasing) in a sane and usable cli.
If you explore other vcs you’ll find a million examples of concrete ways of doing things better.
The industrial revolution was successful precisely because it removed skill from the tooling - instead of having a general-purpose hammer that you whacked hot iron with in a carefully-aimed direction, you had a block of iron that you milled by twiddling knobs to the correct number and pulling the lever. Pulling the lever did not require years of training.
“It should take skill to use our tools” is an unjustified assertion that we should put this iron collar around our necks and can be trivially dismissed. The save button should not require skill, that’s insane.
I agree with the intention of this statement and how it relates to Git but it also tells me you don’t have much first-hand experience either as a blacksmith or as a machinist.
The save button indeed does not require skill to use. Git isn’t a save button though, it’s a system for handling conflicting saves. Otherwise just paste your code into a google doc and let their auto-merge functionality sort it out. A system following your Industrial Revolution example would require knowing what the synthesis of two conflicting programs should be. Maybe AI will be able to do that some day but good luck otherwise.
Other than being hard to learn for many people, what makes it bad in your opinion?
Is that not enough?
No. First of all, if you take the trouble to understand it, it can be a lot easier to use than other systems, and it solves almost everyone’s problems.
In my experience all dvcses are confusing to some people. Git gets a lot of complaints because it’s what people actually use.
I think we keep using it because git, for all its faults, solves the problem well enough for many people, whether that’s through learning the idiosyncrasies or mitigating its shortcomings with external interfaces like GitHub etc.
For many people, life starts and stops at clone, checkout, push, and pull. If you’ve found a way to make that work for you safely and consistently, I think version control understandably becomes a problem that you might not be so invested in.
I think the current situation speaks to how these external GUI tools and websites have smoothed over the cracks of the git experience.
Sometimes I feel like a good chunk of the problems with YAML end up being a combo of unfamiliarity with the spec and out-of-date implementations.
Half of the problems people commonly cite with YAML (the Norway problem, sexagesimal numbers, surprise octal, the merge operator) were removed in YAML 1.2. However, despite YAML 1.2 being 14 years old, surprisingly few parsers implement 1.2, so people continue running into the same issues that were fixed in the spec over a decade ago. Some implementations (the most popular one for Go, for example) chose to implement a mix of 1.2 and 1.1 to support certain 1.1 features that were removed.
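The Norway problem mentioned above comes down to how each spec resolves bare scalars; a simplified sketch of the two boolean tables (real resolvers do more than this, and the sets below are abbreviated from the specs):

```python
# YAML 1.1 resolves many bare scalars to booleans; the 1.2 core schema
# only recognizes the true/false spellings. (Simplified illustration.)
YAML_11_BOOLS = {
    "y", "Y", "yes", "Yes", "YES", "n", "N", "no", "No", "NO",
    "true", "True", "TRUE", "false", "False", "FALSE",
    "on", "On", "ON", "off", "Off", "OFF",
}
YAML_12_BOOLS = {"true", "True", "TRUE", "false", "False", "FALSE"}

def resolves_to_bool(scalar: str, spec: str = "1.1") -> bool:
    table = YAML_11_BOOLS if spec == "1.1" else YAML_12_BOOLS
    return scalar in table

print(resolves_to_bool("NO", "1.1"))  # True: the country code becomes false
print(resolves_to_bool("NO", "1.2"))  # False: it stays the string "NO"
```

So a 1.1-era parser silently turns `country: NO` into `country: false`, while a 1.2 parser keeps the string, which is exactly the class of fix that never reached most implementations.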
One of the problems is that the YAML spec is moderately difficult to implement. I learned to use a PEG implementation by writing a CSV parser and then a JSON parser in a weekend. Decoding JSON was ~65 lines. I set aside my YAML 1.2 parser about 250 lines in and it was incomplete. I’ll get back to it one day but it’s a hairy spec.
Having read the spec multiple times and looked at other implementations, I’m not plagued by the problems other people report, but needing that depth isn’t reasonable when all you want to do is write a config file.
There are a lot of reasons to not use Rust, but this post does not list them out. Speaking as someone who has used Rust professionally for four years, this is my take on these points:
The rationale being that it’s Stack Overflow’s most loved language and 14th most used… It’s a growing language, seeing more and more industry adoption. Currently, due to posts like this, it is a hard sell to management for projects which aren’t low-level, despite developers loving using it (and I’ve been told multiple times that people feel far more comfortable with their Rust code, despite not being experts in the language). Rust might be overhyped, but the data provided to back up this claim is just not correct.
I know this person has written a book on Rust, but… I have to question what the hell they’re talking about here. The steady release cycle of the Rust compiler has never once broken my builds, not even slightly. In fact, Rust has an entire Epoch system which allows the compiler to make backwards-incompatible changes while still being able to compile old code.
I mean, seriously, I genuinely don’t know how the author came to this conclusion based on the release cycle. Even recent releases don’t have many added features. Every project I have ever come across developed in Rust I’ve been able to build with `cargo build`, and I’ve never once thought about the version it was developed with or what I had in my toolchain. Python 3 has literally had a series of breaking changes fairly recently, and it’s being held up as a language doing it “better” because it has fewer releases.

Sigh. Because async traits aren’t stabilized? Even though there is a perfectly workable alternative, the `async-trait` crate, which simply makes some performance trade-offs? I’m excited for async traits being stabilized, and it’s been a bummer that we haven’t had them for so long, but that doesn’t make the language “beta.”

This is just an opinion the author has that I strongly disagree with (and I imagine most Rust developers would). The standard library is small; this was/is a design decision with a number of significant benefits. And they do bring third-party libraries into the standard library once they are shown to be stable/widely used.
To put this more accurately, Rust forces you to write correct async code, and it turns out correct async code is hard. This is an important distinction, because a language like Go makes it just as easy to write incorrect async code as it does correct async code. Having been bitten by enough data races and other undefined behavior in my lifetime, I love Rust’s stance on async code, which is to make it hard to do incorrectly with minimal runtime overhead.
Frankly, the Rust compiler is some incredible engineering that is also pushing the bounds of what a programming language can do. I mean, seriously, as frustrating as async Rust can be to work with, it is an impressive feat of engineering which is only improving steadily. Async Rust is hard, but that is because async code is hard.
[edit] Discussed below, but technically Rust just prevents data-races in async code, and does not force you to write code which is free from race-conditions or deadlocks (both of which are correctness issues). Additionally, the “async code” I’m talking about above is multi-threaded asynchronous code with memory sharing.
Frankly, the points being made in this post are so shoddy I’m confused why this is so high on lobsters. The anti-Rust force is nearly as strong as the pro-Rust force, and neither really contributes to the dialog we have on programming languages, their feature-set and the future of what programming looks like.
Use Rust, don’t use Rust, like Rust, don’t like Rust, this post is not worth reading.
I have not written any production Rust code (yet) but the “async is hard” resonates with me. I’ve wrestled with it in C, C++, Java, and Go and it’s easy to make a mistake that you don’t discover until it’s really under load.
I think you really hit the nail on the head with this point. The particularly damning thing about data-race bugs is that they are probabilistic. So you can have latent code with a 0.0001% chance of having a data-race, which can go undetected until you reach loads which make it guaranteed to occur… And at that point you just have to hope you can (a) track it down (good luck figuring out how to recreate a 0.0001% chance event) and (b) it doesn’t corrupt customer data.
There is a reason so many Rust users are so passionate, and it’s not because writing Rust is a lovely day in the park every day. It’s because you can finally rest at night.
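The probabilistic lost-update bug described above can be sketched in a few lines; this toy (not taken from any real program) artificially widens the read-modify-write window with `sleep(0)` so the race fires reliably, whereas at realistic widths it fires with a tiny, load-dependent probability, which is exactly why it hides during testing:

```python
import threading
import time

# Toy lost-update race: read, yield (standing in for an unlucky
# preemption), then write back a stale value, clobbering other updates.
counter = 0

def unsafe_increment(n: int) -> None:
    global counter
    for _ in range(n):
        tmp = counter       # read
        time.sleep(0)       # widen the race window deliberately
        counter = tmp + 1   # write back, possibly losing concurrent updates

threads = [threading.Thread(target=unsafe_increment, args=(500,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter <= 2000)  # True; the total is usually well below 2000
```

Shrink the window and the same bug survives code review and CI, then surfaces only under production load.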
I’m in the process of switching to Rust professionally. I dabbled for years, and the biggest selling point of Rust is its ability to help me write working software with little or no undefined behaviour.
Most languages let you build large applications that are plagued by undefined behaviour.
I think this is debatable. Async Rust introduces complexities that don’t exist in other async models which rival it in ease of use and efficiency. It has unintuitive semantics, like `async fn f(&self) -> T` and `fn f(&self) -> impl Future<Output=T>` subtly not being the same (the `async fn` form additionally captures the `self` lifetime in the returned Future). It also allows odd edge cases that could’ve been banned to simplify the model: `Future::poll()` can be called spuriously and must keep the most recently passed-in Waker stored. Being two words in size, you can’t atomically swap Wakers out, which ends up requiring mutual exclusion for updating and notification; completion-based async models don’t require this. On the other hand, this model does enable things like `join!()`, which allows stack-allocated structured concurrency. The Waker could’ve been tied to the lifetime of the Future, forcing leaf futures to implement proper deregistration of Wakers but reducing task constraints.

The cancellation model is also equally simple, useful, and intricately limiting/error-prone: you can `select!()` between any futures, not just ones that take CancellationTokens or similar like in Go/C# (neat). Unfortunately, it means everything is cancellable, so `a().await; b().await` is no longer atomic to the callee (or “halt-safe”), whereas it is in other async models.

Sure, async is hard. But it can be argued that “Rust async” is an additional type of hard.
I don’t disagree Rust introduces an additional kind of hard: async Python is much easier to use than async Rust. I wrote in another comment how it all comes down to trade offs.
I do agree with you, there are more sharp edges in async Rust than normal Rust, but from my understanding of how other languages do it no language has a solution without trade offs that are unacceptable for Rust’s design.
Personally, I think async is the wrong paradigm, but also happens to be the best we have right now. Zig is doing interesting things to prevent having the coloring problem, but I don’t think any language is doing it perfectly.
Any data to back that exact claim? I love Rust and I’ve been working professionally in it for the last few years, but I think I would still find the Erlang approach to async code easier.
Fair point, and to really dive into that question we have to be more specific about what exactly we’re talking about. The specific thing that is hard is multi-threaded asynchronous code with memory sharing. To give examples of why this is hard, we can just look at the tradeoffs various languages have made:
Python and Node both opted to not have multi-threading at all, and their asynchronous runtimes are single-threaded. There is work to remove the GIL from Python (which I actually haven’t been following very closely), but in general, one option is to avoid the multi-threading part entirely.
Erlang/BEAM (which I do love) makes a different tradeoff, which is removing memory sharing. Instead, Erlang/BEAM processes are all about message-passing. Personally, I agree with you, and I think the majority of asynchronous/distributed systems can work this way effectively. However, that isn’t to say it is without tradeoffs, message passing is overhead.
So essentially you have two options to avoid the dangerous shenanigans of multi-threaded asynchronous code with memory sharing, which is to essentially constrain one of the variables (multi-threading or memory sharing). Both have performance trade-offs associated with them, which may or may not be deal-breaking.
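The Erlang/BEAM-style constraint described above (drop memory sharing, keep multi-threading) can be sketched with a toy actor in Python; the names are invented, and a real actor system adds supervision, timeouts, and selective receive on top of this:

```python
import queue
import threading

# Toy actor: the thread owns `count` outright. Other threads can only
# send messages; even a read pays the message-passing overhead, which is
# exactly the trade-off being discussed.
def counter_actor(inbox: queue.Queue) -> None:
    count = 0
    while True:
        msg = inbox.get()
        if msg[0] == "incr":
            count += msg[1]
        elif msg[0] == "get":
            msg[1].put(count)  # reply on a channel, never via shared memory
        elif msg[0] == "stop":
            return

inbox = queue.Queue()
actor = threading.Thread(target=counter_actor, args=(inbox,))
actor.start()

for _ in range(100):
    inbox.put(("incr", 1))
reply = queue.Queue()
inbox.put(("get", reply))  # FIFO ordering: the increments land first
total = reply.get()
inbox.put(("stop",))
actor.join()
print(total)  # 100
```

No locks appear anywhere because no state is shared; the cost is that every interaction is a round trip through a queue.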
Rust lets you write multi-threaded asynchronous code with memory sharing and write it correctly. In general though I agree with you about the Erlang approach, and there isn’t really anything stopping you from writing code in that way with Rust. I haven’t been following this project too closely, but Lunatic (https://github.com/lunatic-solutions/lunatic) is a BEAM alternative for Rust, and last I checked in with it they were making great progress.
Yes, I can agree that “multi-threaded asynchronous code with memory sharing” is hard to write. That’s a much more reasonable claim.
The only thing I would disagree with slightly is the assertion that Rust solves this problem. That’s not really completely true, since deadlocks are still just as easy to create as in C++. For that, the only sort of mainstream solution I can think of is STM in Clojure (and maybe in Haskell?).
Fair enough, its just a bit of a mouthful :)
I hadn’t heard of STM, but that is a really cool concept, bringing DB transaction notions to shared memory. Wow, I need to read about this more! Though I don’t think that solves the deadlock problem globally: if we’re considering access which is not memory (e.g. network), and thus not covered by STM, then we can still deadlock.
From my understanding, solving deadlocks is akin to solving the halting problem. There just simply isn’t a way to avoid them. But you are right, Rust doesn’t solve deadlocks (nor race conditions in general), just data-races. I’ll modify my original text to clarify this a bit.
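While deadlocks can’t be eliminated in general, one standard mitigation for the classic AB/BA case is a fixed global lock order; a small sketch (account names and amounts invented for illustration):

```python
import threading

# Every transfer acquires both locks in the same (sorted-by-name) order,
# so two opposite transfers can never form an AB/BA wait cycle.
accounts = {"a": 100, "b": 100}
locks = {name: threading.Lock() for name in accounts}

def transfer(src: str, dst: str, amount: int) -> None:
    first, second = sorted((src, dst))  # fixed global lock order
    with locks[first]:
        with locks[second]:
            accounts[src] -= amount
            accounts[dst] += amount

threads = [threading.Thread(target=transfer, args=("a", "b", 1)) for _ in range(50)]
threads += [threading.Thread(target=transfer, args=("b", "a", 1)) for _ in range(50)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(accounts)  # balances are intact and no thread ever deadlocked
```

Without the `sorted()` line, transfers in opposite directions would acquire the two locks in opposite orders, which is the textbook deadlock.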
Bear in mind, though, that STM has been through a hype cycle and some people are claiming that, like String Theory, it’s in the “dead walking” phase rather than past the hype. For example, Bryan Cantrill touches on transactional memory in a post from 2008 named Concurrency’s Shysters.
I think you’ll find Erlang much harder tbh. Have you used it much? Erlang requires that you do a lot of ‘stitching up’ for async. In Rust you just write `.await`; in Erlang you need to send a message, provide your actor’s name so that a response can come back, write a timeout handler in case that response never comes back, handle the fact that the response may come back after you’ve timed out, decide how you can recover from that, manage your state through recursion, provide supervisor hierarchies, etc.

Fortunately, almost all of that is abstracted away by gen_server, so in practice you don’t actually do all that boilerplate work yourself; you just take advantage of the solid OTP library that ships with Erlang.
For sure I have way more experience with Rust, but I’m not really sure that all of what you listed is a downside or Erlang-specific. You also need to handle timeouts in Rust (e.g. `tokio::time::timeout` and something (`match`?) to handle the result), and you might also need to handle the possibility that a future will be cancelled. Others, like recursion (which enables hot reloads) and supervisors, are not obvious negatives to me.
Handling a timeout in Rust is pretty trivial. You can just say `timeout(f, duration)` and handle the `Result` right there. For an actor you have to write a generalized `timeout` handler and, as mentioned, deal with timeouts firing concurrently with the response firing back.

I think for the most part handling cancellation isn’t too hard, at least not for most code. Manual implementors of a Future may have to worry about it, but otherwise it’s straightforward: the future won’t be polled, the state is dropped.
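The inline-timeout style being described (wrap the operation, wait a bounded time, handle the failure right at the call site) looks roughly like this in Python; `slow_op` and the durations are invented for the sketch:

```python
import time
from concurrent.futures import ThreadPoolExecutor, TimeoutError as FuturesTimeout

def slow_op() -> str:
    time.sleep(1.0)  # stands in for a slow network call
    return "done"

# The timeout and its handling live right at the call site, analogous to
# wrapping a future in timeout(f, duration) and matching on the Result.
with ThreadPoolExecutor(max_workers=1) as pool:
    fut = pool.submit(slow_op)
    try:
        outcome = fut.result(timeout=0.05)
    except FuturesTimeout:
        outcome = "timed out"
print(outcome)  # timed out
```

Contrast this with the actor version, where the timeout lives in a separate handler and can race the late-arriving reply.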
TBH I do not see the difference between these two.

You can just add `self()` as part of the message.

As simple as adding an `after` block to the `receive` block.

Solved in OTP 24 with `erlang:monitor(process, Callee, [{alias, reply_demonitor}])`.

In most cases you simply do not try to recover from that and instead let the caller do that for you.

The simplest async-like receive looks like this, from the docs:

And that is all.
The difference is huge and kind of the whole selling point of actors. You can not share memory across actors, meaning you can not share state across actors. There is no “waiting” for an actor, for example, and there is no way to communicate “inline” with an actor. Instead you must send messages.
Sure, I wasn’t trying to imply that this is complex. It’s just more. You can’t “just” write `.await`; it’s “just” add `self()`, and “just” write a response handler, and “just” write a timeout handler, etc. etc. Actors are a very low-level concurrency primitive.

There’s a lot of “as simple as” and “just” to using an actor. There’s just `.await` in async/await. If you add a timer, you can choose to do so and use that (and even that is simpler as well).

The tradeoff is that you share state and couple your execution to the execution of other futures.
It’s “solved” in that you have a way to handle it. In async/await it’s solved by not existing as a problem to begin with. And I say “problem” loosely - literally the point of Erlang is to expose all of these things, it’s why it’s so good for writing highly reliable systems, because it exposes the unreliability of a process.
It takes all of this additional work and abstraction layering to give you what async/await has natively. And that’s a good thing: again, Erlang is designed to give you this foundational concurrent abstraction so that you can build up. But it doesn’t change the fact that in Rust it’s “just” `.await`.

Sure, if you pretend you have to raw-dog actors to do concurrency in Erlang, and that OTP doesn’t exist to take care of almost all the boilerplate in gen_server etc. We could also pretend that async/await syntax doesn’t exist in Rust and we need to use callbacks. Wow, complex!
I am curious what the recent Python 3 breaking changes are.
Perhaps the best example is Python 3.7 making async and await reserved keywords (and thus breaking code which used them as variable names). In Rust, the equivalent change was done via the 2018 Edition with the Epoch system.
The difference is that a Python 3.7+ interpreter can’t run older code using async/await as non-keywords, while Rust can compile a mix of 2015 Edition and 2018 Edition code, some using async/await as identifiers and some as keywords.
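The keyword change is easy to demonstrate from inside the interpreter itself; on any modern Python, the old spelling is a hard parse error:

```python
import keyword

# 'async' was an ordinary identifier before Python 3.7 and is a reserved
# keyword since, so pre-3.7 code like `async = 1` no longer parses.
print(keyword.iskeyword("async"))  # True on Python 3.7+

try:
    compile("async = 1", "<example>", "exec")
    parses = True
except SyntaxError:
    parses = False
print(parses)  # False: the interpreter rejects it before running anything
```

Rust avoids exactly this by keying the lexer off the crate’s declared edition, so `async` can stay an identifier in 2015-edition code.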
It’s possible GP has reached the same age as me, where you mentally think something happened last year when it was like 3 years ago.
For what it’s worth, instead of trying to find a working computer with an IDE and FDD controller, I’ve been playing with a GreaseWeazle and a fluxengine. It’s been interesting (read, “frustrating”) finding the right 3.5” drive model in working order to read old and damaged disks. These being old drives from random sources some are in working order and of those working ones some will read reliably and some won’t. There’s also some oddities around which cable they prefer (straight, crossed over, etc.). I’m having good luck with a Sony MPF920 for PC disks, but it doesn’t like Mac formatted disks (variable speed). I haven’t yet found a working 5.25” drive.
I’ve been working on databases for some time, and you win an upvote.
Sure, I can find some nits and have various opinionated color to add (learned indexes, for instance, are interesting but ultimately useless in practice under write load), but generally speaking you’ve done a great job simply highlighting (and, thankfully, not overexplaining or dismissing) many of the hard problems of databases. Double props for getting down to `fsync()`, which, my god: don’t deal in filesystems, kids, because every part of the stack lies.

Will be linking people to this when they’re learning, as a teachable moment :)
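For reference, the durable-write dance that makes `fsync()` so treacherous looks roughly like this (a hedged sketch of the common POSIX idiom, not a guarantee: as the comments note, the OS, filesystem, and drive cache can each still lie about completion):

```python
import os
import tempfile

# Durable write sketch: fsync the file, then fsync its directory so the
# directory entry itself survives a crash. Skipping either step leaves a
# window where the data or the filename can vanish after power loss.
fd, path = tempfile.mkstemp()
with os.fdopen(fd, "wb") as f:
    f.write(b"payload")
    f.flush()              # push Python's userspace buffer to the OS
    os.fsync(f.fileno())   # ask the OS to push data + metadata to the device

dir_fd = os.open(os.path.dirname(path), os.O_RDONLY)
os.fsync(dir_fd)           # persist the directory entry as well
os.close(dir_fd)

print(os.path.getsize(path))  # 7
os.remove(path)
```

Even this careful sequence is only as honest as the layers beneath it, which is the point of the `fsync()` war stories above.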
I’m laughing with tears in my eyes because I’ve been there… and at the time (circa linux kernel 2.4.x) it seemed that the more complex your storage provisioning (ex. SAN) the less you could trust fsync.
I think that’s still essentially valid, and storage didn’t get simpler over time either, as far as I’m aware.
I like all of this effort into type safety in a shell language. Pipelines are a pretty good notion as well that we can build off of (though I wonder if we can do some sort of laziness or query planning so that heavier commands can selectively get the “relevant info” instead of serializing everything)
Somebody on HN mentioned shells also being about job control. Is there some cool new ideas that can be put into something like nushell?
Interactivity is another place as well. Would be nice if I didn’t have to pipe into `pv` all the time. Similarly, it would be cool if we could output tables and have that display be a bit better than just “spit everything into stdout”.

Basically something like Jupyter (without being Jupyter) would be quite cool IMO.
For the sake of argument, how useful is job control for most users? For interactive shells tmux/screen eliminated the need for managing foreground and background processes, they’re all foreground. I can’t remember the last time I thought I’d rather background something that didn’t daemonize itself than run it in another pane.
I don’t use job control myself – I use tmux – but I’ve heard from surprisingly many people that use it, including at least 2 Oils contributors.
Luckily, one of them (Melvin Walls) implemented job control. It was pretty tricky. AFAIK we’re the only “modern” shell with job control!
(“modern” you could define as – codebase started in the last 10 years :) )
I use a much smaller subset of “job control” all the time, with Ctrl-Z to suspend the active process (aka the editor) and drop back into the shell, then `fg` to resume. Can’t do that in nu right now. Maybe one day I’ll hack together a tmux setup that replicates it, but for now it’s one of my main blockers for using nu.
I personally have never gotten a hang of tmux/screen. I know shells like to rely on those as “outs”, but I honestly would not mind a shell that has this stuff built-in. And I think it’s pretty common for webdevs to want to basically start up a handful of processes and juggle them around.
The sort of split between “shell” and “terminal emulator” is something that I feel really gets in the way of offering easy answers to things like “make a script that gets me to the right configuration right now”. Every “nice” terminal emulator ships with weird hacks to try and get a shell to communicate information at the top level.
I know part of the answer would be to “just use screen”, but it’s a tool where I don’t really need it that often, so the information doesn’t stick around long enough.
I don’t know what you’ve tried but wezterm and alacritty work out of the box. On MacOS, iTerm was excellent and needed no tweaking.
I use daemontools in a bunch of locations. It takes 30 seconds to adapt my generic daemontools-sysvinit script to a new service, and then sysvinit or systemd can start and stop it as well. supervise treats a request to start an already running service as a non-error.
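For anyone who hasn’t seen the daemontools pattern: a service is just a directory containing an executable `run` script that the supervisor starts and restarts. A minimal sketch (the service binary and its flag are hypothetical placeholders):

```shell
#!/bin/sh
# ./run for a daemontools/s6-style service directory
# (myapp and --foreground are hypothetical placeholders)
exec 2>&1                # fold stderr into the supervised log stream
exec myapp --foreground  # exec, don't fork: the supervisor must own the PID
```

The key convention is that the program stays in the foreground; `svc -u`/`svc -d` (or `s6-svc`) then map cleanly onto start/stop, which is what makes wrapping it for sysvinit or systemd so quick.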
Maybe not adding much but have you looked at http://www.skarnet.org/software/s6/? Inspired by daemontools but maintained, under the ISC license, and more features. I was introduced to it years ago when I was discussing daemontools and still running my own qmail then much later worked with folks who had settled on it as part of their in-house scheduler. It was easy and reliable, never really needed to give it a thought.
I have, and I am also aware of Nosh – https://jdebp.uk/Softwares/nosh/ – which I keep meaning to try out but never actually got around to.
Wasn’t it nice when qmail could unambiguously do everything you needed a mail server to do?
Heh. Yes, I was aware of it. Also never tried it out.
At the time qmail and dovecot were a relief after sendmail, then postfix, and uwimap. Everything just worked for me. Now I don’t want to be bothered and use fastmail.
How do we change it?
Make it a law that paid parking lots have to accept payment by cash?
“To pay with cash please buy a single-use code in one of the authorized points” (nearest one 2 districts away, opening tomorrow morning).
I agree with the spirit of what you said though.
You are experienced with the dark patterns, sir
Or make it a law that it should be absolutely evident and understandable at a glance how you can pay to 9 out of 10 randomly selected people so if you find yourself in a situation where it’s not evident how you pay, you just turn on your phone’s camera, record a 360 video and go about your business knowing that you can easily dispute whatever fee they throw at you.
This is probably the best answer. No cost to “plot of land for parking” operators, no cost to people. Just record that you couldn’t clearly tell what’s going on and move on with your day.
Ah yes, big cash boxes under unmotivated observation, sitting out in public. That won’t raise the cost of parking.
Has parking become cheaper when those boxes were replaced with apps?
Maybe? This entire discussion is severely lacking in generality. People are extrapolating wildly from one ranty post in one US city. I could fake another rant saying that parking is free as long as you scan your eyeballs with Worldcoin and it would add as much data…
Plant asphalt-breaking flora at the edges of the lots. Bermudagrass is a good choice if you can obtain it, but standard mint and thyme will do fine for starters. In some jurisdictions, there may be plants which are legal to possess and propagate, but illegal to remove; these are good choices as well.
We can start by not forcing people to use an app to begin with.
In Chicago, they have a kiosk next to a row of on-street parking. You just put in your license plate number, and pay with a credit card. No app needed. At the O’Hare airport, short term parking gives you a receipt when you enter the lot. Then you use it to pay when you exit. No app needed.
Right. The way it used to be everywhere, until relatively recently.
A root problem is that, for a lot of systems like this, a 95% solution is far more profitable than a 99% solution. So companies will happily choose the former. Mildly annoying when the product is a luxury, but for many people access to parking is very much a necessity.
So there’s one way to change this: companies providing necessities have to be held to stronger standards. (Unfortunately in the current US political climate that kind of thing seems very hard.)
You’re talking about public (on-street) parking. This post is talking about private parking lots, which exist for the sole purpose of profit maximization.
The cities could pass laws to regulate the payment methods. Parking lots that don’t conform can be shut down.
Depending on the city, getting such regulations passed may be difficult though.
The way I see it, the issue is that every random company has to do a positively amazing job of handling edge cases, or else people’s lives get disrupted. This is because every interaction we have with the world is, increasingly, monetized, tracked, and exploited. Most of these companies provide little or no value over just letting local or state governments handle things and relying primarily on cash with an asynchronous backup option. Especially when it comes to cars, this option is well-tested in the arena of highway tolls.
To put it succinctly: stop letting capital insert itself everywhere in our society, and roll back what has already happened.
First do no harm. Don’t build stuff like this.
Learn and follow best practices for device independence and accessibility. Contrast. Alt text. No here links. No text rendered with images.
Those are things we can and should do.
But likely things like this won’t change until there are law suits and such. Sigh.
This seems like it’s just some random for-profit Seattle parking lot (cheap way to go long on a patch of downtown real estate while paying your taxes) that, consistent with the minimal effort the owner is putting in generally, has let whatever back-alley knife fight parking payments startup set up shop as long as they can fork over the dough. It is essentially a non-problem. Even odds the lot won’t exist in two years. There are many more worthwhile things to care about instead.
I disagree. This is going on outside Tier-1 and Tier-2 cities with high population density. Small cities and large towns are finally coming to terms with (using Shoup’s title) the high cost of free parking and replacing meters with kiosks (usually good but not necessarily near where you need to park) or apps (my experience is they’re uniformly bad for all the reasons in the link) to put a price on public parking.
One nearby municipality has all of:
Even if you’re local and know the quirks you’ll have to deal with it.
It’s not just “some random for-profit Seattle parking lot”. I’ve run into frustrating and near-impossible experiences trying to pay for parking in plenty of places. Often compounded by the fact that I refuse to install an app to pay with.
The other day I was so happy when I had to go to the downtown of (city I live in) and park for a few minutes and I found a spot with an old-fashioned meter that accepted coins.
History does not bear you out.
What?
Establish a simple interoperable protocol standard, that every parking lot must support by law. Then everyone can use a single app everywhere which fits their needs. I mean, this is about paying for parking, how hard can it be?
I think that’s the thing, though. A company comes in to a municipality and says “this is about paying for parking, we make it easy and you no longer have to have 1) A physical presence, 2) Employees on site, or (possibly) 3) Any way to check if people have paid.” They set you up with a few billboards that have the app listed on them, hire some local outfit to drive through parking lots with license plate readers once or twice a day, and you just “keep the profit.” No need to keep cash on hand, make sure large bills get changed into small bills, deal with pounds of change, give A/C to the poor guy sitting in a hut at the entrance, etc.
I write this having recently taken a vacation and run into this exact issue. It appeared the larger community had outsourced all parking to a particular company with a fairly mainstream app on the Android and Apple stores, and hence was able to get rid of the city workers who had been sitting around doing almost nothing all day as the beach parking lots filled up early and stayed full. I am very particular about what I run on my phone, but my options were to leave the parking lot and drive another 30 minutes, kids upset, in hopes that the next beach had a real attendant, or suck it up. I sucked it up and installed the app long enough to pay, and enough other people were doing the same that I don’t see them caring if a few leave on principle of paying by cash; either way the lot was full.
I say all this to point out that some companies are well on their way to having “the” way to pay for parking already and we might not like the outcome.
I get that digital payment for parking space is less labor intensive (the town could also do that themselves, btw), but we can by law force these companies to provide standardized open APIs over which car drivers can pay for their parking spot, why don’t we do that?
I’m always in favor of citizens promoting laws they feel will improve society, so if you feel that way I’d say go for it! I don’t, personally, think that solves the issue of standardizing on someone needing a smart phone (or other electronic device) with them to pay for parking. That to me is the bigger issue than whose app is required (even if I can write my own, until roughly a year ago I was happily on a flip phone with no data plan). So if this law passes, the company adds the API gateway onto their website and… we’re still headed in a direction for required smart device use.
But, again, I strongly support engaging with your local lawmakers and am plenty happy to have such subjects debated publicly to determine if my view is in the minority and am plenty happy to be outvoted if that is the direction it goes.
This reminds me of a meeting I attended years ago when I worked at a company in the health-care space, and someone had asked why we didn’t have an official mobile app. And a company leader explained, not unkindly, that a lot of the people we were providing services to were not only not well-off, many were in the sort of financial situation where one regularly chooses which utility to pay down a bit and get turned on, and which one(s) to leave shut off for nonpayment, and as such they are not the sort of people who either have smartphones or are accustomed to using smartphones as an interface to the world.
What a job we’ve done forcing people to prioritize having a smartphone over heat or electricity or a home.
It’s changed a bit according to my wife who works in the healthcare and mental health space in the Northeast US. Nearly everyone has a smartphone including the unhoused. The financial struggles though haven’t changed much.
*nod* When you think about it, being unhoused makes having a mobile phone *less* of a privilege because, if you have housing, you can use WiFi and desktop/laptop PCs and DSL/Cable/Fiber Internet to get better value on your bandwidth, and you might have a job that allows you to work from home, making mobile phones more of a luxury and less of a necessity.
I’ve yet to try this one but it’s been recommended to me a few times. I’m still using go-jira, although it’s very broken with the latest JIRA versions, and doesn’t seem to be an active project (my admittedly incomplete PR is languishing like dozens of others).
Extending go-jira is… interesting. You write weird little embedded shell scripts in YAML files that are executed by sub-processes of the main binary in different phases.
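For the curious, the shape (as I remember it from the go-jira README; treat field names and the template syntax as approximate) is a `custom-commands` list in `.jira.d/config.yml`, where each entry’s `script` is a shell snippet run by the main binary:

```yaml
# .jira.d/config.yml -- hypothetical example, verify against the README
custom-commands:
  - name: mine
    help: list unresolved issues assigned to me
    script: |-
      {{jira}} list --query "resolution = unresolved and assignee = currentUser()"
```

It works, but debugging a shell script embedded in YAML and executed by a sub-process is exactly as fun as it sounds.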
The go-jira name is simply amazing. Doesn’t seem like they lean into the pun though?
I know! A wasted opportunity.
I knew a guy at Netflix who turned me on to go-jira but at the time (and again as little as a year ago) it wasn’t working for me with employer’s internally hosted JIRA. jira-cli at least works.
Lots of small language-level improvements, but there don’t seem to be that many fundamental compiler architecture issues mentioned in the changelog. I may be missing those since I haven’t used Nim for a long time, but I assume if the compiler was made 10x less janky they would’ve at least had a footnote about it. (For context, I’ve used Nim as my main language in 2019-2020.)
I don’t like that there still isn’t my #1 most awaited feature - incremental compilation. Nim isn’t a very fast language to compile and the slowness really showed with some of my projects. In particular rapid, my game engine, suffered from pretty terrible compile times, which made building games with it really difficult.
And I wonder how much generics jank is still present in 2.0. And I wonder when they’ll make generics into proper generics rather than C++-like templates. The error messages with those can get pretty abysmal and can appear out of nowhere. Like here. This particular cryptic error has appeared so many times while I was developing my game that I just gave up on the project eventually.
In addition to that I’m not a fan of how lame the module system is. Having used/seen module systems from other languages, e.g. Rust or Go, Nim’s compiler not having a clue about the concept of packages feels pretty ancient. This particularly shows when you’re developing libraries made out of many small modules; the only way to typecheck such a library in its entirety is to create a `library.nim` file and import everything there. Text editors didn’t seem to understand that back in the 1.4 days and would regularly miss type errors that occurred when analyzing the library as a whole.
Oh, and the text editor situation… Nim’s compiler does not have incremental recompilation, so autocomplete gets slower the larger your project is. And it can get really slow for even pretty small projects (~10k SLOC).
And don’t get me started on the dialects. Endless `--define` switches and `experimental` features. And none of it is implemented in a robust way. Anyone can break your library by globally enabling a `--define` you did not anticipate. And the `--define`s are not even documented in one place.
So sad to see Nim’s leadership pursuing syntax sugar and small convenience features instead of fixing foundational problems. Really wish they had a more forward-looking vision for the project and its community, rather than focusing on fulfilling the main developer’s wishes and experiments.
The Nim leadership is the main developer, Andreas. He’s not interested in sharing responsibility or broadening the leadership, as he vehemently expressed a month ago:
That was the point where I gave up on Nim. I don’t know where to start with this — it’s laughable that he pulls out that silly fight about master/main as his breaking point; he whines about society changing and it’s their fault he might have to “change old habits”, and that tired canard about racism/sexism. (He also appears to have deleted the comments objecting to his post, though not the supportive ones. Because of course he’s the forum moderator too.)
But my main takeaway is that the guy’s even more of an asshat than I previously thought, and he’s going to remain the gatekeeper to any change in Nim, and main source of truth on the forum. I just didn’t want to deal with that anymore. I’d rather fight with a borrow-checker.
I’ve seen his comment, yeah. It’s informative and unfortunate.
I’ve honestly been tempted to write a “why not Nim” blog post for a couple years now but never got around to doing so because a) I don’t like spreading negativity, and b) I’d rather not attract publicity to a project whose success I don’t exactly believe in.
Bad opinions aside, I believe Araq’s lack of care for the community is precisely the reason why the project is going in the wrong direction. I’ve heard horror stories from former compiler contributors about how hard to maintain the code is and how much it lacks documentation. No wonder it doesn’t attract very many outside contributions. Had he cared more for having other people work on the language alongside him, maybe things would have turned out different, but alas…
This sort of dictatorship is not the sort of leadership of a project I’d like to invest my time in. I much prefer the Rust RFC process over this.
Woah, I didn’t expect so much negativity in this thread… I was kind of hoping to see some interesting discussions and maybe even some praise for a language that reached its 2.0.0 milestone without the backing of any tech giant.
Sure, the language is probably still not perfect, and at least some of @liquidev’s remarks make sense… but it is a remarkable milestone nonetheless.
I have been using Nim for years mostly on my personal projects (I kinda built my own ecosystem on top of it), and I must say it is fun to use. And it is very well documented. Unfortunately it feels very much a fringe language because it didn’t see massive corporate adoption (yet) but I hope this can change sooner or later.
About Araq: the guy can be rude at times, maybe even borderline unprofessional in some of his replies but he did put a lot of energy into the project over the years, and I am grateful for that. I tend not to get too involved in politics or heated debates… I saw that reply and it struck me as “a bit odd” and definitely not good marketing for the language and the community. I just hope that doesn’t drive too many people away from the language; it would be a pity.
Well it’s one thing to wish for some features, it’s another to wish for a leadership that doesn’t have a personal mental breakdown in the official forums - attacking a multitude of people - and deletes any critical response. The second one can’t just be ignored.
And if Rust is already struggling with compile times, I wonder how bad this is with something that doesn’t even know about incremental compilation. You can’t just ignore a debugging round-trip time of minutes.
You can ask people to be less negative or strict, but first, don’t forget it’s v2.0; and second, the alternative to complaining about real production problems is to say nothing and move on.
I’m sorry if my comment came off as rude or overly negative… I don’t mean to ruin the celebration; as a long time user I’m just trying to voice my concerns about the direction the language is taking, and I think it’s important to talk about these rather than keep quiet about them forever and instead create an atmosphere of toxic positivity. 2.0 is considered by many a huge milestone and seeing important issues which put me off from using the language not be addressed in a major version is pretty disappointing.
Perhaps this speaks of our expectations of major versions; I see them as something that should be pretty big while in real life often it’s just some small but breaking changes. I’m of the firm belief that perhaps Nim went into 1.0 too early for its own good, because inevitably there will be breaking changes (and with how unstable the compiler can be, there can be breakages even across minor versions.)
I’ll be that person and ask why you don’t come to D? There is a foundation and the tone is very respectful. It is a main inspiration for Nim; Araq actually spent many years reading and commenting on the D forums. D pioneered many things that went into Nim, but the core language is very stable and there is no compiler-switch explosion. In many ways D is further along than Nim, with its 3 compilers, and it can sustain internal controversy and, I’d say, sane debate inside its community. I do see a bit of FUD about D on the internet, often echoing a single negative opinion to a majority of otherwise content programmers. Sometimes I think it’s down to syntax (Python-like vs C-like).
Agree. I also use D and have since… *looks at personal repos* …2014 or 2015, but maybe earlier, and started doing some toys in Nim around 2018. What D lacks is buzz. It’s mature, stable, and performant and, at least for me, doesn’t break between upgrades. Some corners of D like CTFE and nested templates I find hard to debug (and this is true for other languages, but that’s not a free pass), but they work. I keep finding bits of Nim and core/former-core libraries where that’s not the case and they fail in odd ways, and that’s still true in 2.0.
I actually have a book on D that I got years ago. I’d forgotten about it.
Is the compiler still proprietary?
The DMD backend was the only proprietary bit, and that hasn’t been the case for years.
After seeing the Rust community extensively argue about the gender of philosophers in a silly analogy, I’m glad that Nim has a leader who is explicitly against such bullshit.
As bizarre as that is, Araq’s use of the phrase “clown world” is more indicative of future behaviour than random Rust community members talking about pronouns. Here’s another strange Araq post - I wouldn’t want to support a project with this kind of world view.
Look carefully at the date of the post you linked…
April Fool’s was an opportunity to make a joke, but the content of the so-called joke is all Araq.
Maybe also because that analogy argument was inside one issue, opened specifically to bikeshed it. The other one felt more like a dismissal of anything that isn’t in his view of the world - in a discussion about giving the community a chance to steer the direction of the language.
I’d happily take that over Araq’s bullshit, like when I pointed out that Nim’s null-checking was leading to bogus errors in a bunch of my code (after hours of debugging and creating a reduced test case) he dismissed it with “that’s because the control flow analysis doesn’t notice ‘return’ statements, and you shouldn’t be using return because it isn’t in Pascal.” Despite him having put both features in the language.
All else aside, I think there’s truth in this statement.
With enough sophistry any statement can be considered true.
Oh? I recall similar arguments being used against Jews.
It’s a fairly obvious logic fallacy, which anyone smart enough to be a programmer ought to see through pretty easily. (Hint: if you deny `a > b`, it does not follow you believe `b > a`.)
Although I agree with almost all of your points and came to the same conclusion, I think it’s fair to say that not all critical comments were deleted. There are several in the thread that you linked.
The comments do show that at least one comment was removed. I don’t know if there were more now-removed comments because I read the thread only a while after it was closed.
After trying Nim for a little while some time ago, the module system is what soured me on the language. I don’t like that you can’t tell where a symbol comes from by looking at the source of the current file. It’s “clever” in that you automatically get things like the right `$` function for whatever types get imported by your imports, but that’s less valuable than explicitness.
On the contrary, I actually don’t hold much grudge against the import system’s use of the global namespace. Static typing and procedure overloading ensure you get to call the procedure you wanted, and I’ve rarely had any ambiguity problems (and then the compiler gives you an error which you can resolve by explicitly qualifying the symbol.) While coding Rust, I almost never look at where a symbol comes from because I have the IDE for browsing code and can Ctrl+Click on the relevant thing to look at its sources.
My main grudge is that the module system has no concept of a package or main file, which would hugely simplify the logic that’s needed to discover a library’s main source file and run `nim check` on it. Right now text editors need to employ heuristics to discover which file should be `nim check`’d, which is arguably not ideal in a world where developers typically intend just a single main file.
I agree highlighted code ought to be pre-rendered for static sites. The same goes for math, especially now that MathML is gaining wider support. But I can’t blame people that much – it’s so much easier to drop a CDN script in your page than to configure a static build system, especially when the various parts come from different languages/ecosystems (e.g. your static site generator is in Go but all the math renderers are JS).
For my SICP study website I initially used Pandoc’s built-in syntax highlighting (skylighting) but then decided to roll my own Scheme highlighter in C: schemehl.c. It was fun figuring out how to correctly handle nested quasiquotes (something I doubt any general purpose highlighter would ever bother with).
Doing it client side is fine for half arsing it, but proper syntax highlighting often requires more context than is available in the snippet and so has to be done with something that runs at build time. For my Objective-C book, I wanted to ensure that the code examples were all correct, so each one was extracted from a file that I could build and test. For the ePub version, this meant that I could use libclang to tag every token in the file and then extract the lines and add semantic class descriptions to span tags around each token. In CSS, I could then style identifiers differently depending on whether they were macros, type definitions, local symbols, instance variables, and so on. Generating the highlights from the snippet would have restricted me to lexical highlighting: comments, literals, keywords, and identifiers.
I was building a language with nested quasiquotes, and I for the life of me could not figure out how to do it. Finally I looked for existing languages that had them, and I found this comment in the Links source code.
Sometimes when something seems hard, that’s because it’s actually impossible with the current approach! I had no idea that basic lexing algorithms can’t handle nested quasi-quoting, and a stack of lexers is required.
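To make the “stack of lexers” point concrete: the lexer state you need is the quasiquote depth, which changes at `` ` `` and `,` but must be restored when the enclosing form closes. A toy sketch (a simplified Scheme where `` ` `` and `,` only apply to the next parenthesized form; a real reader also handles atoms, `,@`, strings, and so on):

```python
def quasiquote_depths(src):
    """Quasiquote nesting depth at each character of a simplified
    Scheme string (toy model: ` and , apply to the next paren form)."""
    depths = []
    depth = 0
    restore = []   # stack: depth to restore when each '(' closes
    pending = 0    # adjustment that ` / , queue up for the next '('
    for ch in src:
        if ch == "`":
            pending += 1
        elif ch == ",":
            pending -= 1
        elif ch == "(":
            restore.append(depth)
            depth += pending
            pending = 0
        depths.append(depth)
        if ch == ")" and restore:
            depth = restore.pop()
    return depths
```

On `` `(a ,(b `(c))) ``, `a` sits at depth 1, `b` back at depth 0 (inside the unquote), and `c` at depth 1 again; the stack is what lets the depth pop back out correctly, which a single flat lexer cannot do.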
Just a few years ago I actually switched from MathML to MathJax because I simply could not get stuff to render consistently across browsers. I remember Chromium being especially problematic. I’d be happy if it’s gotten better since then, but I have been very careful about using MathML since.
I’m using a site hosted copy of MathJax for a minor side project and wonder if anyone will really object to client-side rendering if it skips the cdn. At 623kb (svg) it’s small compared to related videos.
The author leads with “People working in other industries should probably not be miserable at work either, but that is not the concern of this article”. About that:
I spent my 20s working almost-min-wage jobs in kitchens and grocery stores, working as many as 3 jobs (opening + prep work in a cafe in the early morning, cook in a restaurant in the afternoon and evening, and bus dishes on the weekend) and various side hustles to pay for a small room in a crowded house in South Berkeley (~approx 11 other people were living there), with not much hope in sight for anything different.
Sometimes nowadays I find myself getting frustrated with e.g. some of the nasty proprietary or legacy tech I have to work or interface with. But while this work can sometimes feel like slogging through filth, I’ve worked jobs where I literally had to slog my way through actual filth – and this is very far from that. As a knowledge worker, you generally have autonomy and respect and flexibility that is completely unheard of to even a lot of white collar workers, let alone folks toiling away doing physically demanding work for minimum wage. Not to mention you probably won’t deal with the kinds of psychological abuse and unsafe conditions that are part and parcel of many of those lower-wage jobs.
Which isn’t to say that tech workers shouldn’t aim to improve their conditions further or try to have more fun at work, or that we should put up with bullshit simply because others have it worse – it’s essential that we protect what we have, improve it, and even enjoy ourselves. But I do think that tech workers often miss how dire things are for other folks, especially folks working low-wage, manual jobs, and it would be nice to see more recognition of the disparity in our circumstances.
I grew up in a restaurant and spent some time working as a bus boy. It really grinds my gears when you go out for a meal with a coworker and they complain about the service. “How hard could it be to get my order right?” Why don’t you work in a restaurant for a couple years and find out! Or when people assume scaling a restaurant is as easy as adding a load balancer and a couple more servers (pun not intended, but appreciated).
Some people have never worked in the service industry and it really shows.
I resonate really hard with this, I’ve come back several times to try to write a reply that isn’t a whole rant about people in tech but:
I’ve done a bunch of not so sexy jobs (legal and not so much, I’ll leave those out): retail, restaurants in various positions, and I was even one of those street fundraisers for the children (where I was subject to physical violence and the company’s reaction was “it comes with the job”). Now I work tech full time, and I’m a deckhand when I’m not doing that.
My perspective is shaped by a couple things, I think:
Being born to teenage parents who worked similar jobs and struggled for a long time
The fact that they “raised me right” – if I talked to / treated anyone the way I’ve seen some folks I’ve met in this industry do to service workers / people they seem to consider as “below them”, they wouldn’t be having any of it
Actually working the jobs and being subject to the awful treatment by both customers and management
The thing is, though: I really don’t think you should have to go through any of this in order to just not be a jerk to people… I really don’t know what the disconnect is. The most difficult customers I’ve had (at previous jobs and on the boat) have typically been the ones that seem the most privileged. When it comes to restaurants, the cheapest ones (in terms of tipping) were similarly the people that would come in and order hundreds of dollars worth of food and then leave little to no tip (I’m not here to debate tipping culture, it is what it is here at this point).
I’ve had situations where I take people to a place where I’m cool with the staff and someone picks up the tab (for which I’m appreciative) but then they are rude / pushy / skimp out on the tip, which is really embarrassing to say the least (I don’t hesitate to call people out but I feel like … I shouldn’t have to?)
The boat I work on is in the bay area and so we get a lot of tech people, and a couple of things stand out:
I don’t really know how some of the most intelligent people can be so dumb (literally all you need to do is follow directions)
They talk down to us (the crew trying to put them on fish and, for what it’s worth, keep them alive – won’t get into that here), and when you ask them not to do something for safety or you try to correct something they’re doing wrong, they get an attitude. I want to emphasize, not everyone, but enough to make you stop and ask why.
When they find out that I also work in tech (you talk to people since you’re with them for 8+ hours), the reaction is typically one of “why do you need to be doing THIS?”. Sidenote – the most hilarious thing that I enjoy doing is dropping into a technical conversation (a lot of people come on with their coworkers) and having people be like “wtf how does the deckhand know about any of this?”
They don’t tip … lol … or they stiff us for fish cleaning which we are upfront is a secondary service provided for a fee.
It’s not everyone, but I get a pretty decent sample size given the population here. The plus side of working on the boat (vs a restaurant or retail) is that if someone starts being a major a-hole the captain doesn’t mind if we stand up for ourselves (encourages it, even)
It’s not everyone of course, but it’s enough to make you wonder.
Yeah, exactly. Or, we have a saying: “you can tell who’s never pushed a broom in their life”.
That was more of a rant than I wanted to get into but since I’m here it was kind of cathartic. I really just wish people would stop and think about the human on the other end. Of course it’s not just tech people that do things like this, but … yeah.
I worked in eDiscovery for a while (~3 years) so I have a sense of legal. It’s very stratified and stressful. I remember going to bed at 2 AM and waking up at 5 AM to make sure that a production was ready for opposing counsel. Not ideal…
By contrast, my father was 39 when he had me. However, he had a hard life. He grew up in Francoist Spain. (One of my few memories of my grandfather is of him telling me “El hambre es miseria. El hambre es miseria.” (Hunger is misery. Hunger is misery.)) My father was a Spanish refugee in France at age 9. He didn’t complete high school. Instead, he did an apprenticeship in a French restaurant where the chefs beat him. He worked 16-hour days for a long time.
Absolutely. My father always said that everyone was welcome at his restaurant, regardless of what they were wearing. It’s important to respect everyone.
Inter-generational trauma is a real thing. I’m doing okay, but my brothers didn’t fare so well. (A topic for a more one on one conversation.) I hope you are okay. <3
Edit: this has really thrown me for a loop. I don’t mean to be dramatic and I know this is a public forum, but I’m sure there are more people posting than responding. If it means anything to anyone then it’s more important to say so than to be stoic. I hope you are all doing okay.
Heh, sorry I meant legal vs not-so-legal in the legality sense, but in any case wow that sounds dreadful!
I appreciate your kind words and you sharing your story, and I’m glad you’re doing okay. I’m also sorry to hear about your brothers, similar thing is true for some of my siblings…kind of weird how that works.
Thank you for sharing this.
I meant to write “more people reading than responding” above, but I’m out of the edit window.
There is a theory in some circles that having money enables people not to have to care about other human beings. With money, you can feel like you provide for all your needs just by buying goods and services. If you don’t have much money, you have to compensate by trying to build mutual understanding, which leads to being more empathetic. You also need to respond to, or even anticipate, the needs of the people who give you money, which leads to a kind of asymmetric empathy (similar to impostor syndrome). There may also be the fact that some people are attracted to tech because they feel more gifted with machines than with people. So maybe some form of selection bias here too.
I like to remind my team something that I was once told: “Remember, this work lets us have soft hands.”
Always reminds me of that scene in Trading Places (the “soft hands” part is cut off at the beginning).
Well put. I sometimes ask myself, “How many people are living miserable lives so that I can sit in a cushy chair and think about interesting problems?”
How many people’s misery could you alleviate by switching to a different job and how would that happen?
Well, I’ve worked in the oil and gas industry, so I helped keep lots of people’s heat working in the winter, including my own. At the cost of making the world incrementally more fucked though, so that one’s a net negative. I’ve done a fair amount of teaching, so I helped share skills that were useful for people. I’ve worked datacenter tech support, so I helped lots of people keep their online businesses working. So there’s that.
If I really wanted to make the world a better place, I would either work for something like Habitat For Humanity and build houses, or I would get a PhD in nuclear physics or material science and try to make better nuclear/solar energy sources. Or become a teacher, natch, but my sister and both parents are teachers so I feel like I have to break the family mold. Could always do what my grandmother did and become a social worker. Or go into politics, because someone with a brain stem has to, but I’ve had enough brushes with administration to know that I will burn myself out and be of no use to anyone.
Right now I work in robotics doing R&D for autonomous drones, so I’ll hopefully make peoples’ lives better somewhere, someday down the line. Nothing as good as what Zipline does, but on a similar axis – one of my most fun projects was basically Zipline except for the US Army, so it was 10x more expensive and 0.25x as good.
…do people not normally think about this sort of thing?
Interesting, that’s not the way I interpreted cole-k’s comment!
…how did you interpret it?
That there are people supporting cole-k’s job (I don’t know who, maybe car mechanics, cafeteria workers, janitors?) whose work is required for cole-k’s job to be possible, but who are necessarily miserable in their jobs.
Yeah, and I’ve done at least a moderate share of supporting them back one way or another, within the bounds of my aptitudes, skills, and how selfish I’m willing to be – and honestly I’m pretty selfish, ngl. Sometimes I’ve done it directly by serving them back, more often indirectly by trying to do things that Benefit Everyone A Bit. All I can do is keep trying. We’re all in this together.
This should not be controversial, and sometimes I wish I had a button to teleport some of my colleagues where I used to work in Africa to recalibrate their sense of what “hard” means.
This is so true. I try to remind myself of this as much as I can, but since I never did minimum-wage work myself, it can be hard to be fully aware of the situation. Maybe we should push for everybody’s conditions to improve. I fear that insisting too much on the good conditions we have in the tech industry would only invite a degradation of those conditions, unfortunately. Tactically, I wonder if we shouldn’t focus on the commonalities of the issues we face across all the different types of jobs.
I also paid for college and university working in a large hotel kitchen and then dining room. At the time in the front of the house I could earn enough in tips over a summer to cover a year of state school tuition plus room and board. I’d go back on holidays and long weekends to make the rest of my expenses. It was hard work, long hours, and disrespected in all kinds of ways. Once in a while there was violence or the threat of violence. But it beat doing landscaping or oil changes and tires. There were a number of us who were using it as a stepping stone, one guy from Colombia worked until he saved up enough to go back home and buy a multi-unit building so he and his parents could live in it and be landlords, get a used BMW for himself, and finish his education. His motivation, and taking extra shifts, made mine look weak and I was highly motivated.
I remind myself of that time when I’m frustrated at my desk.
Colombia, or do you mean he was studying at Columbia?
Typo. Fixing.
This I think was one of Bryan Caplan’s arguments about open borders.
In addition to the moral issue that no-one has the right to curtail the free movement of others[1], there is solid empirical evidence that not only do immigrants enrich the countries they emigrate to (i.e. contribute more on average than locals), they often also help lift their home countries out of poverty by doing exactly what your Colombian friend did.
Edited to add: here’s his address on the topic of poverty: https://www.youtube.com/watch?v=K77cGFU36rM
[1] One frequently occurring example of hypocrisy on the matter of travel: people who simultaneously rail against any attempt by their own Government to control their movement (passports, papers, ID, etc.), but also complain loudly about people crossing the border into “their” country and demand the Government build a wall, metaphorically or literally.
Ublock Origin with NoScript on Firefox for desktop and android.
I’d be interested in the backstory here: was Red Hat ever profitable before these changes? Did something stop them from turning a profit the way they did before? Or did someone at IBM just decide they could be squeezed for more profit?
A quick search for their financial reports showed that, as of 2019, they were making quite large piles of money. I was surprised at how much revenue they had: multiple billions of dollars, with a healthy profit on top.
It’s not clear to me the extent to which CentOS affected this. My guess is that, by being more actively involved for the past few years, they’ve made it easier to measure how many sales they get as a conversion from CentOS users and found that it’s peanuts compared to the number of people that use CentOS as a way of avoiding paying for an RHEL subscription.
I didn’t see any post-acquisition numbers but I wouldn’t be surprised if they’re seeing some squeezes from the trend towards containerisation. Almost all Linux containers that I’ve seen use Ubuntu if they want something that looks like a full-featured *NIX install or Alpine or something else tiny if they want a minimal system as their base layer. These containers then get run in clouds on Linux VMs that don’t have anything recognisable as a distro: they’re a kernel and a tiny initrd that has just enough to start containerd or equivalent. None of this requires a RedHat license and Docker makes it very easy to use Windows or macOS as client OS for developing them. That’s got to be eroding their revenue (and a shame, because they’re largely responsible for the Podman suite, which is a much nicer replacement for Docker, but probably won’t make them any money).
I’d say they’re not hurting in the containerization space, with OpenShift as the enterprisey Kubernetes distro, quay as the enterprisey container registry, and the fact that they own CoreOS.
If you’re deploying into the cloud, you’re using the cloud provider’s distro for managing containers, not OpenShift. You might use quay, but it incurs external bandwidth charges (and latency) and so will probably be more expensive and slower than the cloud provider’s registry. I don’t think I’ve ever seen a container image using a CoreOS base layer, though it’s possible, but I doubt you’d buy a support contract from Red Hat to do so.
You’re missing that enterprises value the vendor relationship and the support. They can and will do things that don’t seem to make sense externally but that’s because the reasoning is private or isn’t obvious outside that industry.
I’ve never seen a CoreOS-based container but I’ve seen a lot of RHEL-based ones.
Possibly. I’ve never had a RHEL subscription but I’ve heard horror stories from people who did (bugs critical to their business ignored for a year and then auto closed because of no activity). Putting something that requires a connection to a license server in a container seems like a recipe for downtime.
I expect that big enterprise customers will not suffer stamping a license on every bare-metal host, virtual machine, and container. My experience is that connectivity outside the organization, even in public cloud, is highly controlled and curtailed. Fetching from a container registry goes through an allow-listed application-level proxy like Artifactory or Nexus, or through peculiarly local means. Hitting a license server on the public internet just isn’t going to happen. Beyond a certain size these organizations negotiate terms, among them all-you-can-eat and local license servers.
It’s going to be interesting.
Right over here https://www.redhat.com/en/technologies/management/satellite
All this is easily findable on the Internets, but the tl;dr - yes. Red Hat was profitable. That’s Red Hat’s job, to turn a profit. It’s also Red Hat’s job to remain profitable and try to grow its market share, and to try to avoid being made irrelevant, etc.
Being a public company means that shareholders expect not only profit, but continual growth. Whether that’s a reasonable expectation or healthy is a separate discussion, but that’s the expectation for public companies – particularly those in the tech space. IBM paid $34 billion for Red Hat and is now obliged to ensure that it was worth the money they paid, and then some.
If RHEL clones are eating into sales and subscription renewals, Red Hat & IBM are obliged to fix that. I don’t work at Red Hat anymore, but it’s no secret that Red Hat has a target every quarter for renewals and new subscriptions. You want renewals to happen at a pretty high rate, because it’s expensive to sign new customers, and you want new subscriptions to happen at a rate that not only preserves the current revenue but grows it.
That’s the game, Red Hat didn’t make those rules, they just have to live by them.
Another factor I mean to write about elsewhere soon is the EOL for EL 7 and trying to ensure that customers are moving to RHEL 8/9/10 and not an alternative. When CentOS 7 goes EOL anybody on that release has to figure out what’s next. Red Hat doesn’t have any interest in sustaining or enabling a path to anything other than RHEL. In fact they have a duty to try to herd as many paying customers as possible to RHEL.
So it isn’t about “aren’t they making a profit today?” It’s about “are they growing their business and ensuring future growth sufficiently to satisfy the shareholders/market or not?”
My guess is that revenue was expected to start declining. Density of deployments has been rising rapidly since Xen and VServer came along. Red Hat had to adjust pricing to cope multiple times, but I don’t believe they were able to keep up with the trend.
Nowadays with containers, the density is even higher. We are at PHP shared hosting level density, but for any stack and workload. For simple applications, costs of running them are approaching the cost of the domain name.
Instead of a fleet of 10 servers, each with its own subscription (as you had in 2005-2010 with RHEL 4 & 5), you now have just a 2U cluster with a mix of VMs and containers, and just two licenses.
And sometimes not even that. People just run a lightweight OS with Docker on top pretty frequently.
This is a band-aid on a bleeding wound, I believe.
They should be pursuing some new partnerships. It’s weird that e.g. Steam Deck is not running an OS from Red Hat. Or that you can’t pay a subscription for high quality (updated) containers running FLOSS, giving a portion of the revenue to the projects.
The Steam Deck might be a poor business case for Red Hat or Valve. Since the Steam Deck hardware is very predictable and it has a very specific workload, I don’t know if it would make sense to make a deal with Red Hat to support it. It would be a weird use case for RHEL/Red Hat, too, I think. At least it would’ve when I was there - I know Red Hat is trying to get into in-vehicle systems so there might be similarities now.
I am not saying Red Hat should be trying to support RHEL on a portable game console. It should have been able to spin a Fedora clone and help out with the drivers, graphics and emulation, though.
Somebody had to do the work and they made profits for someone else.
Concentrating on Java for banks won’t get them much talent and definitely won’t help get them inside the next generation of smart TVs that respect your privacy. Or something.
One business case for Red Hat would be a tremendous install base, which would increase the raw number of people reporting bugs to Fedora or their RHEL spin. And that in turn could lead IVI vendors to have a really battle-tested platform+software combo. Just don’t let them talk directly to the normal support other companies are paying for.
My understanding is that Canonical has benefitted hugely from WSL in this regard. It’s practically the default Linux distro to run on WSL. If you want to run Linux software and you have a Windows machine, any tutorial that you find tells you how to install Ubuntu. That’s a huge number of users who otherwise wouldn’t have bothered. Ubuntu LTS releases also seem to be the default base layers for most dev containers, so if you open a lot of F/OSS repos in VS Code / GitHub Codespaces, you’ll get an Ubuntu VM to develop in.
That’s interesting, because there’s a fair amount of Erlang used in industry — it was created for telephone switching systems, not as an academic exercise. CouchDB is mostly written in it. Is Kafka in Erlang or am I misremembering?
As to your main point, I’m not a Lisper, and to me the quotes you gave tend to reflect my feelings: stuff that once made Lisp special is widely available in other languages, the cons cell is a pretty crude data structure with terrible performance, and while macros are nice if not overused, they’re not worth the tradeoff of making the language syntax so primitive. But I don’t speak from a position of any great experience, having only toyed with Lisp.
Lisp has very little to do with cons cells.
Can you elaborate? Aren’t lists the primary data structure, in addition to the representation of code? And much of the Lisp code I’ve seen makes use of the ability to efficiently replace or reuse the tail portion of a list. That seems to practically mandate the use of linked lists — you can implement lists as vectors but that would make those clever recursive algorithms do an insane amount of copying, right?
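The tail-reuse point above can be sketched in Python (a hypothetical illustration of cons cells and structural sharing, not how any particular Lisp is implemented):

```python
# Minimal cons-cell sketch: two lists can share a common tail, so
# "replacing the head" is O(1) and requires no copying.
class Cons:
    def __init__(self, head, tail=None):
        self.head = head
        self.tail = tail

shared = Cons(2, Cons(3))   # the list (2 3)
a = Cons(1, shared)         # (1 2 3)
b = Cons(0, shared)         # (0 2 3), reusing the tail of `a`

# Both lists point at the very same tail object: no copy was made.
assert a.tail is b.tail
```

With a vector-backed list, building `b` from `a` would instead mean copying the shared elements, which is the copying cost alluded to above.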
No. Most lisp code uses structures and arrays where appropriate, same as any other language. I’m not sure what lisp code you’ve been looking at, so I can’t attest to that. The primordial LISP had no other data structures, it is true, but that has very little to do with what we would recognise as lisp today.
I think it stems mostly from how Lisp is taught (if it’s taught at all). I recall back in college when taking a class on Lisp it was all about the lists; no other data structure was mentioned at all.
That’s a popular misconception, but in reality Common Lisp, Scheme, and Clojure have arrays/vectors, hashtables, structures, objects/classes, and a whole type system.
I don’t know what Lisp code you’ve looked at, but in real projects, like StumpWM or the Nyxt browser or practically any other project, lists typically don’t play a big role.
Unfortunately, every half-assed toy language using s-expressions gets called “a Lisp”, so there’s a lot of misinformation out there.
Clojure and Fennel and possibly some other things don’t use linked lists as the primary data structure. Both use some kind of array, afaik (I’ve never properly learned Clojure, alas). How this actually works under the hood in terms of homoiconic representation I am not qualified to describe, but in practice you do code generation stuff via macros anyway, which work basically the same as always.
As I said above, this is a divisive issue for some people, but I’d still call them both Lisps.
Depends a bit on the person’s perspective. I’ve seen some people get absolutely vitriolic at Clojure and Fennel for ditching linked lists as the primary structure. I personally agree with you, but apparently it makes enough of a difference for some people that it’s a hill worth dying on.
You might be thinking of RabbitMQ. Kafka is on the JVM.
I don’t think Kafka is, but CouchDB certainly is and, famously, WhatsApp. It’s still not so common, but not unheard of, especially now in the age of Kubernetes, although Elixir seems reasonably popular. Either way, I don’t think most people know much about its history; they just sort of bucketize it as a functional language with whatever biases they have about those.
I never actually wrote much Erlang – I only even mentioned it in that interview because the founder mentioned belonging to some Erlang group on his LinkedIn. It turned out to have been something from his first startup, which failed in a bad way, and I think he overcorrected with regard to his attitude toward FP. He was a jerk in any case
edit: looks like you might be thinking of RabbitMQ? https://en.wikipedia.org/wiki/RabbitMQ
Kafka is a JVM project. It’s written in Java and Scala.
It’s quite possible you’re thinking of Riak, which was implemented in Erlang, though the two are very different beasts.
This essay is an admirable display of restraint. I would have been far crueler.
In my experience, protocols that claim to be simple(r) as a selling point are either actually really complex and using “simple” as a form of sarcasm (SOAP), or achieve simplicity by ignoring or handwaving away inconvenient details (RSS, and sounds like Gemini too.)
The “S” in “SNMP” is a vile lie.
via
It’s simple compared to CIMOM, in the same way LDAP is lightweight compared to DAP.
Let’s not forget ASN.1, DCE, and CORBA. Okay, let’s forget those. In comparison SOAP did seem easier because most of the time you could half-ass it by templating a blob of XML body, fire it off, and hopefully get a response.
Exactly, and the next-order effect is often pushing the complexity (which never went away) towards other parts of the whole-system stack. It’s not “simple”, it’s “the complexity is someone else’s problem”.
what’s inconvenient about RSS?
Some of my personal grievances with RSS 2.0 are:
Obviously, neither are too important – RSS works just fine in practice. Still, Atom is way better.
RSS never specified how HTML content should be escaped, for example.
The Atom protocol resolved that however.
Pretty sure that’s because RSS2 is not supposed to contain HTML.
But RSS2 is just really garbage even if people bothered following the spec. Atom should have just called itself RSS3 to keep the brand awareness working.
The RSS trademark (such as it was) was claimed by Dave Winer, who opposed Atom.
But I don’t think he ever enforced it against the RSS1 people, whom he also opposed.
Well, Winer’s way of arguing was never really via the legal system, it was by being a whiny git in long-winded blog posts. Besides, RSS versions <1.0 were the RDF-flavored ones (hence RSS == RDF Site Summary), and no-one wanted that anymore.
<=1.0
and people kept using 1.0 long after 2.0 existed because some people still wanted that :) Though those people were mostly made happy by Atom, and then 1.0 finally died.
Incorrect, RSS2 was 2002, Atom was 2005.
Teaches me to not read wikipedia correctly.
O god, don’t get me started. RSS 2 lacked proper versioning, so Dave Fscking Winer would make edits to the spec and change things and it would still be “2.0”. The spec was handwavey and missing a lot of details, so inconsistencies abounded. Dates were underspecified; to write a real-world-usable RSS parser (circa 2005) you had to basically go through a dozen different date format strings until one worked. IIRC there was also ambiguity about the content of articles, like whether it was to be interpreted as plain text or escaped HTML or literal XHTML. Let alone what text encoding to use.
I could be misremembering details; it’s been nearly 20 years. Meanwhile all discussions about the format, and the development of the actually-sane replacement Atom, were perpetual mud-splattered cat fights due to Winer being such a colossal asshat and several of his opponents being little better. (I’d had my fill of Winer back in the early 90s so I steered clear.)
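The try-every-format approach described above can be sketched in Python. This is a hypothetical illustration, not code from any real feed parser; the function name and the particular fallback formats are invented for the example:

```python
from datetime import datetime
from email.utils import parsedate_to_datetime

# Real-world RSS 2.0 feeds emitted dates in many shapes, so parsers
# tried the nominal RFC 822 format first and then fell back through
# a list of guesses until one matched.
FALLBACK_FORMATS = [
    "%Y-%m-%dT%H:%M:%S",   # ISO-ish dates, seen in the wild
    "%Y-%m-%d %H:%M:%S",
    "%d %b %Y %H:%M:%S",
]

def parse_rss_date(text):
    try:
        # RFC 822/2822, the format RSS 2.0 nominally specifies
        return parsedate_to_datetime(text)
    except (TypeError, ValueError):
        pass
    for fmt in FALLBACK_FORMATS:
        try:
            return datetime.strptime(text, fmt)
        except ValueError:
            continue
    return None  # give up, as many parsers ultimately had to
```

So `parse_rss_date("Mon, 05 Sep 2005 13:00:00 GMT")` parses via the RFC 822 path, while `parse_rss_date("2005-09-05T13:00:00")` only succeeds because of the fallback list, which is exactly the kind of accumulated guesswork being complained about.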
Which version of RSS? :)
I see what you did there.