Someone looking for a similar profiling tool for Zsh should know that the shell comes with one built in. I describe a similar process I went through in a blog post I wrote a couple of years back.
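For anyone who hasn’t seen it, the built-in profiler is zprof; the setup is roughly this in ~/.zshrc:

    # at the very top of ~/.zshrc
    zmodload zsh/zprof

    # ... the rest of the startup config ...

    # at the very end: print a per-function breakdown of startup time
    zprof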
Thanks for the zprof tip, very useful.
I originally was a bit puzzled about the article. I grabbed hyperfine and benchmarked bash -i and zsh -l, and both were <2ms on my system.
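For reference, the measurement was something along the lines of:

    # compare interactive bash startup with login zsh startup, 3 warm-up runs each
    hyperfine --warmup 3 'bash -i -c exit' 'zsh -l -c exit'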
The “not using dummy values” comment aside, I found this article delightfully genuine and fun to read. Do a thing with your code and then lose hours or days looking for problems, when the problem is a simple typo or a forgotten // TODO: implement this :)
Refreshing to see it happening to others :)
If your language generates warnings when you call deprecated code, then you can make a “deprecated” function that crashes the program when called.
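In Go, for example, the compiler itself doesn’t warn, but linters such as staticcheck flag calls (from other packages) to anything whose doc comment starts with “Deprecated:”, so a made-up stub like this gets noticed at review time and still fails loudly if it is ever called:

    package config

    // Deprecated: stub only -- parsing is not implemented yet.
    // Linters that understand this convention flag callers, and the
    // panic guarantees a stray call can't silently return junk.
    func ParseLegacy(path string) (map[string]string, error) {
        panic("config.ParseLegacy: not implemented")
    }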
I wanted to use an editor that didn’t have features that were constantly distracting me from what’s important: writing code.
This is exactly why I stopped using Vim too. In my case the “features” often had to do with color schemes and other stuff that my perfectionist brain couldn’t put down despite knowing they weren’t a value-add.
One of my favorite revelations/changes in my time using vim was picking a color scheme that colors almost no syntax. The one I have now only changes the colors of string literals and comments. Getting rid of the unicorn vomit has resulted in vim being a lot less distracting for me.
Yes! I turned syntax highlighting off completely before I stopped using Vim and my current, preferred editor doesn’t even support it.
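For anyone who wants the zero-effort version of this, it’s a one-liner:

    " in ~/.vimrc: disable syntax highlighting entirely
    syntax off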
I made plaintext, a “colourless” scheme for vim.
DISCLAIMER: I’ve done very little testing on it outside of my own system, so it may require some tweaking to get it working. I had to play around with the background color so it worked in my xterm with and without reverseVideo on (I like to toggle between them). I don’t think it works in gVim or MacVim. Hell, it may only work in neovim on OpenBSD for all I know.
If you want something more reliable, then I would highly recommend acme-colors, which I based my scheme on.
This is truly amazing. It’s really pretty to look at and feels nice to read. Thank you for making this.
I use nofrils for the reason mentioned above.
I use this one: https://github.com/andreypopp/vim-colors-plain
It looks like the author has never maintained large projects. Semver is great, but sometimes a patch will unexpectedly break someone’s workflow. Just incrementing the major version all the time is not a solution. Designing a robust API can help with part of the problem, but there’s no silver bullet in semver.
I’m not sure what you’re taking issue with. I’ve couched the whole purpose of Semantic Versioning as supporting the design of a robust API. I also made no claim that it’s a silver bullet, but it’s valuable enough to me that if you’re going to ignore it, then I’m not going to use your libraries.
It’s true that I don’t maintain any large open source projects. Professionally, does >2M LOC count?
edited to remove ad hominem
I’m ambivalent on this post. On the one hand, I think it’s completely fair to say the Go team could/should do more to communicate how modules are supposed to work, especially if there’s widespread confusion. I don’t know if that confusion is widespread, but let’s be generous and assume it is. Adding warnings to Go’s tooling seems like a good suggestion, although I have some questions.
On the other hand, the rest of this boils down to not liking the solution because it doesn’t work the way the author wants it to. It’s widely known that the Go team is very opinionated, and I’m honestly a bit tired of reading this sort of sniping.
The closing paragraph also seems very poorly thought out. How does the optional versioning approach described work in practice? How do I as a module author indicate that I’m using (or not using) the standard versioning approach? How does that choice impact module consumers? The “best” option leaves so many unanswered questions that it’s not clear to me it’s a realistic option at all.
So yeah, a frustratingly mixed read.
It’s widely known that the Go team is very opinionated, and I’m honestly a bit tired of reading this sort of sniping.
The problem seems to be that, increasingly, the Go team’s opinions aren’t working for people who actually use Go outside of Google. It’s not “sniping” to articulate why those opinions aren’t working and suggest alternatives.
It’s sniping because, as I say above, beyond mere suggestion the author doesn’t wrestle at all with how his suggestions would actually work in practice. It’s sniping because it’s easy.
So you’re upset they took the easy route of dismissing an opinionated stance that didn’t work for them.
But to do so, you’re… taking the easy route of dismissing an opinionated stance that wouldn’t work for you.
Again, it’s OK for people not to like something the Go team did. It’s not automatically “sniping” or whatever other derogatory term.
Cute wordplay aside, if you’re going to do something unidiomatic, then the burden of proof is on you to show that it’s an improvement. Otherwise I’m in my right not to take this seriously.
I think the author’s point is that most people are not aware that what they are doing is considered “unidiomatic” by the team, and also that the “idiomatic” approach is not something they would want to adopt.
I wish the author had separated the criticism of SIV from the old, tired vgo/dep drama. I find that stuff tiring as well. I’ve been enthusiastic about modules from day 1 (after much suffering from godep, glide, and dep); however, over time I started to think that SIV probably wasn’t worth it compared to just telling people to rename on breaking changes. I’m sure that had they gone without SIV, that choice would have garnered lots of cheap criticism too.
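For anyone who hasn’t bumped a major version under modules, the difference is roughly this (module name made up): SIV keeps the module’s name but moves breaking changes to a new import path, so consumers can keep v1 and v2 side by side.

    // go.mod for the v2 line of a library under Semantic Import Versioning:
    // the /v2 suffix is required alongside the v2.0.0 tag, and consumers
    // import example.com/mylib/v2/... explicitly.
    module example.com/mylib/v2

    go 1.14

The rename-on-breaking-change alternative would instead just publish something like example.com/mylib2, with no suffix rules at all.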
So either you’ve made breaking changes to your API, and now you’re violating Semantic Versioning, or you didn’t and you never should have gone to v2.x anyway.
I broke semver. https://xkcd.com/1172/
Isn’t this problem solved in OpenBSD by rc.conf.local?
But rc.conf.local is the only configuration file an administrator touches, and it never conflicts with changes to the rc scripts under rc.d nor the default configuration found in rc.conf.
I’ve occasionally written my own rc.d scripts, but have never needed to modify an existing one provided by the base system or packages (any configuration I would want to do would be in the daemon flags in rc.conf.local).
I’m not sure where we disagree. You’ve just described the situation that led the author to suggest rc.d doesn’t belong in etc. The best case I can muster against that view is as you describe, that a local administrator might want to write a script and drop it in rc.d, but most will just use rc.local for that.
rc.conf.local exists because it’s generally not wise for local administrators to modify rc.d.
The crux of the issue, as I understood it, is that (on NetBSD) rc.conf is both edited for local configuration and touched by system upgrades, and can therefore create merge conflicts. With OpenBSD splitting that local configuration out into a separate file touched only by the user or by rcctl, there are never merge conflicts there, nor in any of the rc infrastructure (unless you change a system/package rc.d script, I suppose).
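To make that concrete, a typical OpenBSD rc.conf.local stays tiny; something like this (daemon choices are just examples), usually managed with rcctl(8) rather than edited by hand:

    # /etc/rc.conf.local -- local overrides only. /etc/rc.conf and the
    # scripts in /etc/rc.d belong to the system and are replaced on
    # upgrade, so they never conflict with this file.
    httpd_flags=""              # enable httpd with default flags
    pkg_scripts="messagebus"    # package daemons to start at boot

(rcctl enable httpd takes care of that first line for you.)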
I don’t share the opinion expressed in the article and think that /etc is a good place for init scripts. We don’t need even more folders, we actually need fewer, and most init scripts are configuration expressed in code (nobody complains that, e.g., the Lua configuration files for awesome, which are executable programs, are also stored in /etc). We should start discussing the removal of the many folders that are just there for historical reasons or have questionable roles (e.g. /usr, /usr/local, /sbin, etc.) or have a very slim difference in meaning (/var vs. /tmp, not even getting into /var/tmp, etc.).
The approach taken with sta.li is a very good one (/usr is symlinked to /, /sbin is symlinked to /bin, /tmp is gone, etc.) and it makes a lot of sense.
Using /usr/bin to store user-space tools and /bin only for system tools sounds like a good reason for the split, but it actually only happened because hard drives back then couldn’t store everything in one filesystem (which is why /usr was mounted from a separate disk). The other “motivation”, allowing users (on single-user and multi-user systems alike) to have their own binaries somewhere, also doesn’t hold up, given that you usually can’t change anything outside of /home without root anyway. And being able to see sbin binaries in your $PATH as a normal user is also no big deal, as they usually are in $PATH already anyway.
All in all, I see these issues as much more important than the very nitpicky differentiation between “real” configuration files and executables acting as configuration.
Given that etc and libexec both exist already (as they do at least on OpenBSD and Slackware), I don’t see this as an argument for more directories. The argument is that etc is for local configuration while rc.d isn’t intended for that at all.
I think there’s merit to your less-not-more position; I just think it’s beside the point here.
We don’t need even more folders, we actually need fewer
I agree, that’s why I love NixOS. It still has /etc, but with a lot less stuff in it than base Linux. Why get rid of /tmp, though? What is $TMPDIR set to? What happens to scripts that expect /tmp? It seems a weird piece of Unix hierarchy to remove.
IIRC, pfSense is based on FreeBSD, just utilizing OpenBSD’s pf(4), which has been supported on FreeBSD for quite some time. That is to say, I think there is still a fair bit of work to do before you’ll see WireGuard in pfSense.
I really like your writing style. But show; don’t tell. I had hoped to see some example scripts, or configuration, or something.
I’d love to try WireGuard instead of OpenVPN at work, but we have a compliance requirement for two-factor auth which as far as I can tell (after a very brief skim of their web site) is impossible in WireGuard. Am I wrong about that?
You’re right, although I’d also argue that WireGuard doesn’t authenticate users at all. It authenticates machines to one another.
This is correct and has also held us up from using it at work. I’m not sure if this is something that he considers completely outside of scope and conflicting with the design, or if it’s just something that hasn’t been done yet.
Yeah, you could have the WireGuard keys decrypted through a 2FA system, I guess, but there’s no tooling for that.
I’ve got a Soekris net6501 box I’m using as my gateway device. It currently runs Arch Linux 32, but I’m working on flashing it with a custom-built Alpine ISO with WireGuard support.
Ooo, you reminded me that I also have a Soekris box collecting dust somewhere. Might use it as a DNS server (Pi-hole).
I don’t know about Plan 9, so I can’t say for sure, but there is an overwhelming likelihood that you will be welcome.
Guido’s retrospective mirrors many of the points in your article. While this won’t fix past mistakes, we can at least be reasonably confident that the same mistakes won’t be repeated with “Python 4”.
Regarding the Unicode changes: as a user I’ve seen many UnicodeDecodeError: \xef out of range (or whatever it was) errors, and as a developer I fixed many of them in my own programs as well. Python 3 really does make things easier here. I appreciate it’s not useful for Mercurial specifically, and that the current stdlib usage may introduce some problems, but it also solved a lot of them. And it seems to me that the stdlib problems are fixable(?) Or are the Python maintainers unwilling to do so?
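A made-up but representative example of the difference:

    # Python 2 happily mixed byte strings and unicode until something like
    #   'caf\xc3\xa9'.decode('ascii')
    # blew up deep inside a library with a UnicodeDecodeError.
    # Python 3 keeps bytes and str separate, so the decode is explicit and
    # any failure happens right where the bytes enter the program:
    data = b'caf\xc3\xa9'          # raw bytes, e.g. read from a file or socket
    text = data.decode('utf-8')    # explicit bytes -> str
    print(text)                    # café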
Personally, I rather like Go, and use it for most places where I previously used Python. It has somewhat similar (though not identical) design ethics: most of The Zen of Python applies equally well to Go, perhaps sometimes even more so than Python. Rust, on the other hand, seems more similar to Ruby’s design ethics, which is not necessarily a bad thing; I worked with Ruby for several years and liked it. It’s just different.
I’ve thought for a while that Go is to Python as Rust is to Ruby. It’s nice to see someone else say it.
I just wish that Python had as good a binary packaging story as Go has. If I could build a system-specific Python binary the way I can with Go, I don’t think I would really be tempted to switch. But that packaging story makes me write a lot of the infrastructure tools I would have traditionally written in Python in Go now, because carrying Python around is such a chore.
Could you use http://www.pyinstaller.org/ ? I haven’t used it myself, but I did use py2exe back in the day to ship a bunch of internal python utilities.
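From what I’ve read, the basic invocation is just (script name made up):

    # bundles the interpreter and dependencies into one executable under dist/
    pyinstaller --onefile mytool.py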
The article addresses this. Mercurial will be adopting PyOxidizer for distribution, and you can too.
Personally, I rather like Go, and use it for most places where I previously used Python. It has somewhat similar (though not identical) design ethics: most of The Zen of Python applies equally well to Go, perhaps sometimes even more so than Python. Rust, on the other hand, seems more similar to Ruby’s design ethics, which is not necessarily a bad thing; I worked with Ruby for several years and liked it. It’s just different.
I see this a lot, and we’re into the realm of inherently subjective personal preference here, but Go feels so different to me that I find it hard to grasp the comparison.
Go’s level of abstraction is much closer to what C feels like to me. I’m back to worrying about making errors in code that I have to write myself, code that would be handled by Python’s batteries-included philosophy.
I’m glad Go makes you happy, and I hope one day to feel the love, but I’m not nearly there yet and still find Python to be far and away my language of choice for day to day work.
I’ve been pretty happy with Nikola, but like a lot of other commenters I would have to say it’s far from perfect. My reasons are similar too: The default configuration contains a lot of stuff I don’t use, and so the whole thing feels bloated.
Given the fact that (I assume) a lot of us are using static generators for our personal sites, and because we want our personal sites to be … er … personal, I would imagine the “right” architecture for a generator is going to be minimal and highly pluggable.
Have you ever looked into Lektor? It’s more like a static website generator generator with a static CMS plugged in. I used it to compose my (at the moment still tiny) personal website from scratch and let it grow with my needs (sources are here: https://github.com/obestwalter/obestwalter.github.io). I just wrote an article about how I use it: https://oliver.bestwalter.de/articles/website-meta/
Not yet, but I might add one when it turns out to make sense. There’s a plugin for that: https://github.com/t73fde/lektor-feed
I’m not sure this is the best example to support the point the author is trying to make. Yes, there were service disruptions, but it sounds like everything was 100% in the end and they were running the newer service/code base/whatever. They obviously got lucky though.
Here’s what could have gone wrong: Hours later, version B is discovered to have a huge, directly customer-impacting bug, and rollback to A is now impossible without a break-before-make outage.
Sure, but it seems like it didn’t. My point was that the story the author chose was not a great example of why you shouldn’t do something. A better story would have been the situation you described.
Hope is not a strategy. They saw a problem during the rollout, hoped that it was the only problem, and rushed to complete the rollout. It might not have caused an outage, but I’m sure it caused a lot of stress for the people involved. Our jobs and lives don’t need to be this way.
I think you’re talking past each other a bit. Hope is a very poor strategy indeed, but it’s also true that, as a rhetorical device, using an example where everything turned out okay despite doing the wrong thing makes for a weak story.
At the very least the author could have taken some time to describe a worst-case scenario. Otherwise readers who don’t already know how bad this is are left asking themselves, “What’s wrong with this picture? Why was this so bad if everything turned out okay?”
Hindsight is 20/20; you can’t live on that.
It doesn’t matter that this particular occasion didn’t end too badly; it’s still bad engineering. And these articles are great cautionary tales for all of us, so I don’t see your comment being of any help at all. Nobody is disagreeing that they worked it out somehow.
It wasn’t neglected at all. Insults are the definition of useless bloat.
This is what is known as “dry humor”.
Yeah, I thought that was obvious when I was writing it, but apparently not. Clarified my own opinion on it in another comment.
Touché
Author here. Agreed, it’s entirely useless. I threw this post together for fun one day a few years ago, and it’s meant entirely tongue in cheek. I don’t use it myself because it’s pointless, and it would be one extra thing I’d need to set up on a new system for no gain.