Very cool, reminds me of smitty, which I miss from AIX, https://youtu.be/MFnbAKYkisc?t=743
It’s awesome that the binary can run on a whole bunch of platforms without changes but I’m still confused why it has .com
in its name. If anything, it seems to be the opposite of the .com format.
Author here. It’s because, in order to run on Windows, the file needs to end with either .exe or .com, and I chose the latter because I didn’t want UNIX users to think it’s a Windows-only program. Com binaries are basically flat executables from the DOS days, and rusage.com holds true to that tradition, because it’ll boot from BIOS, and furthermore if you load it into memory and jump the instruction pointer to the first byte of the executable (the MZ) then it’ll still run fine on many operating systems.
That is a much more interesting explanation than what I foolishly thought was some sort of internet domain reference! 🤦
I had not heard of the proposal yet; it seems the issue was opened on Oct 20, 2022. Russ Cox has indicated that a decision will be made soon as to whether or not to accept the proposal and pull it into the stdlib. There continue to be interesting discussions on the issue about the design of the API, which prompted me to post it to lobste.rs, as I was curious about folks' experience with structured logging APIs from other languages and libraries.
I would be curious to hear whether folks have experience with good structured logging APIs in other languages. The full design doc is linked in the top post: https://go.googlesource.com/proposal/+/master/design/56345-structured-logging.md
Pandoc is one of the handful of open-source projects I love.
Conversely, installing this in Arch requires like 100 other Haskell packages (I know it only shows 75 but I believe some dependencies have other Haskell dependencies).
That is unfortunate; pandoc releases static binaries, which work great in my experience: https://github.com/jgm/pandoc/releases/download/3.0/pandoc-3.0-linux-amd64.tar.gz
I don’t see the problem. pandoc is very clearly an integration project, integrating many document formats, providing a unified AST, etc.
It is a very reasonable path to pick high-quality and accepted dependencies in the ecosystem over implementing your own in this case. With that eye, I’m actually surprised that it’s just 100.
You cannot blame Pandoc: someone decided that each Haskell library had to be its own separate Archlinux package, and there is not much you can do about it.
This is unfortunate and only works because there are very few Haskell packages, so the chances of version conflicts are low. But it is still annoying to run pacman and have hundreds of packages to update.
It’s been a while since I touched it but my recollection is that pacman is still really fast with lots of little files and packages, so no biggie. Is that still correct, please?
Performance is fine (but it might be that my NVMe disk is doing all the work). It is mostly annoying when reviewing package lists every time I run an update and Pandoc and its dependencies are in it.
This kind of issue is precisely why I’d like to investigate something such as Guix, but I haven’t found the time yet.
With the new split of pandoc-cli, pandoc-lua, pandoc-server, etc. this might be better. For example, these are HTTP server libraries:
haskell-wai
haskell-wai-extra
haskell-warp
And pandoc-cli now has an option to build without the lua and server dependencies. I don’t really use either of those functionalities myself.
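If you build pandoc-cli from Hackage you can turn those off at build time. A rough sketch (I'm going from memory on the flag names, so check pandoc-cli.cabal before relying on this):

$ cabal install pandoc-cli --flags="-lua -server"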
Agree with this, and I will add that I really appreciate pandoc’s compatibility. I use pandoc and a makefile to build a static site; it started as maybe 5 lines of make, and as I wanted to add additional things it was easy to add pre- and post-processing steps, do templating, write filters, etc. While filters do require learning a bit about pandoc internals, filters are just programs that read from stdin and write to stdout. It generally works great out of the box and can be integrated into unix-y pipelines really well, without having to own a build process end-to-end like traditional static site generators or other document building toolchains.
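To give a flavour of the pipeline bit, here is a minimal sketch; the pre/post-processing scripts, template, and filter names are made up for illustration:

# hypothetical pipeline: preprocess, convert with pandoc (template + filter), postprocess
cat post.md \
  | ./preprocess.sh \
  | pandoc --from markdown --to html5 --standalone --template template.html --filter ./my-filter \
  | ./postprocess.sh > public/post.html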
I haven’t thought about AIX for over … um … two decades now? The last time I really used it was in the late 90s and my only good memories of it are of SMIT, The System Management Interface Tool. Yes, it was a menu driven program to admin an AIX system, but at any point, you could have it show you all the commands it will run. I found it very instructive, and wished more “control panel” type interfaces would do the same.
I also loved the little running person animation. You could watch your SP/2 fall over, in real time.
And another thing I just remember—it was available via both the command line and the GUI. Again, I’ve never seen anything quite like it since.
SuSE had YaST.
To me SMIT was barely usable. As soon as you ventured into anything more complex, it would spew an unintelligible mess of shell functions that were not that explanatory. I had a better experience just reading the manual pages and learning to use the couple of commands I needed directly.
s/had/has/
Someone here periodically posts about AIX because it does shared libraries in a different way to other ELF platforms, which apparently causes some excitement, so I’ve thought about it a few times over the past few years.
Solaris also has a similar thing for admin tasks, though I can’t remember its name (I do remember that it was written in Java and my UltraSPARC IIi with 512 MiB of RAM could run either that tool or a web browser to read the docs, but not both at once). A lot of cloud admin things do this, since they talk to a back end via JSON and can provide you the JSON to save / edit as an intermediate thing.
I’m curious what IBM’s plans are for existing AIX customers. I remember Novell spending a lot of time making NetWare run on Xen so that you could incrementally move to SuSE Linux (owned by Novell at the time) and keep buying things from them. Xinuos (the eventual owners of the SCO IP after SCO imploded after trying to sue IBM) did some integration work to make it easy to deploy OpenServer in bhyve on their FreeBSD-based platform. I talked to them about adding an OpenServer compat layer along the lines of the Linux one but their customers wanted 100% compatibility and if they needed to validate their code on an OpenServer ABI layer on FreeBSD they’d just as easily port it to FreeBSD in most cases. Now that IBM owns RedHat, I presume the goal is for everyone now running AIX to run RHEL, but I wonder how they will get there.
I use a method mentioned by @jmk, https://lobste.rs/s/ocmbaq/how_store_your_dotfiles_using_git#c_zo9hhc, namely a git repo in your home directory which by default ignores all files.
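For anyone who wants the gist without clicking through, the setup is roughly this (a sketch; pick your own files to track):

cd ~
git init
echo '*' > .gitignore                  # ignore everything by default
git add -f .gitignore .bashrc .vimrc   # force-add just the files you want tracked
git commit -m 'initial dotfiles'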
Totally agreed about kebab case. It’s an unusually major quality-of-life improvement.
I’d also add being allowed to use ? in an identifier. user-record-valid? is pretty clear, whether as a function or as a variable.
The argument I hear against kebab case is that it makes it impossible to write subtraction as foo-bar
but like … that’s … good actually? Why are we designing our syntax specifically in order to accommodate bad readability patterns? Just put a space in there and be done with it. Same logic applies to question marks in identifiers. If there’s no space around it, it’s part of the identifier.
Agreed! (hi phil 👋)
This is mentioned in the article too in a way. In addition to the readability point you make, the author makes the argument that most of us use multi-word identifiers far, far more often than we do subtractions.
I dunno, I think there are a lot of pesky questions here. Are all mathematical operators whitespace-sensitive, or just -? Is kebab-case really worth the errors when someone doesn’t type things correctly?
I format my mathematical operators with whitespace, but I also shotgun down code and might leave out the spaces, then rely on my formatter to correct it.
Basically, I think kebab-case is nice, but properly reserved for lisps.
Are all mathematical operators whitespace sensitive?
Yes, of course! There’s no reason to disallow tla+ as an identifier either, or km/h for a variable that holds a speed, other than “that’s the way it’s been done for decades”.
I also shotgun down code and might leave out the spaces, then rely on my formatter to correct it.
The compiler should catch it immediately since it’d be considered an unrecognized identifier.
I’m not sure if this is an argument for or against what you’re saying here, but this discussion reminded me of the old story about how Fortran 77 and earlier just ignored all spaces in code:
There is a useful lesson to be learned from the failure of one of the earliest planetary probes launched by NASA. The cause of the failure was eventually traced to a statement in its control software similar to this:
DO 15 I = 1.100
when what should have been written was:
DO 15 I = 1,100
but somehow a dot had replaced the comma. Because Fortran ignores spaces, this was seen by the compiler as:
DO15I = 1.100
which is a perfectly valid assignment to a variable called
DO15I
and not at all what was intended.
If I see x-y
, I always parse it visually as a single term, not x minus y. I think that’s a completely fair assumption to make.
I have always found kebab-case easier on the eyes than snake_case; I wish the former were more prevalent in languages.
Raku (previously known as Perl 6) does exactly this: dashes are allowed in variables names, and require spaces to be parsed as the minus operator.
Crazy idea: reverse _
and -
in your keyboard map :)
Probably would work out well for programmers: all your variables are easier to type, and when you need a minus, which is not as often, you press Shift.
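On X11 you can try it without touching your layout files; this is just a sketch, and Wayland/macOS/Windows need their own equivalents:

xmodmap -e 'keysym minus = underscore minus'   # plain key gives _, Shift gives -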
More crazy ideas.
Use ASCII hyphen (-) in identifiers, and use the Unicode minus sign (−) for subtraction.
#include <vader.gif>
Nooooooo!!!!!!!!!
I really don’t like this idea. I’m all for native support for Unicode strings and identifiers. And if you want to create locale-specific keywords, that is also fine. I might even be OK with expanding the set of common operators to specific Unicode symbols, provided there is a decent way to input them. [1]
But we should never, ever use two visually similar symbols for different things. Yes, I know, the compiler will immediately warn you if you mixed them up, but I would like to strongly discourage ever even starting down that path.
[1] Something like :interpunct:
for the “·” for example. Or otherwise let’s have the entire world adopt new standard keyboards that have all the useful mathematical symbols. At any rate, I’d want to think about more symbols a lot more before incorporating it into a programming language.
The hyphen and minus sign differ greatly in length, and are easily distinguished, when the correct character codes and a properly designed proportional font are used. According to The TeXbook (Donald Knuth, page 4), a minus sign is about 3 times as long as a hyphen. Knuth designed the standards we still use for mathematical typesetting.
When I type these characters into Lobsters and view in Firefox, Unicode minus sign (−) U+2212 is about twice the width of Unicode hyphen (‐) U+2010. I’m not sure if everybody is seeing the same font I am, but the l and I are also indistinguishable, which is also bad for programming.
A programming language that is designed to be edited and viewed using traditional mathematical typesetting conventions would need to use a font designed for the purpose. Programming fonts that clearly distinguish all characters (1 and l and I, 0 and O), are not a new idea.
Sun Labs’ Fortress project (an HPC language from ~15 years ago, a one-time friendly competitor to Chapel, mentioned in the article) had some similar ideas to this, where Unicode chars were allowed in programs, and there were specific rules for how to render Fortress programs when they were printed or even edited. For example:
(a) If the identifier consists of two ASCII capital letters that are the same, possibly followed by digits, then a single capital letter is rendered double-struck, followed by full-sized (not subscripted) digits in roman font.
RR64
is rendered as ℝ64
It supported identifier naming conventions for superscripts and subscripts, overbars and arrows, etc. I used to have a bookmark from that project that read “Run your whiteboard!”
The language spec is pretty interesting to read and has a lot of examples of these. I found one copy at https://homes.luddy.indiana.edu/samth/fortress-spec.pdf
Thanks, this is cool!
I feel that the programming community is mostly stuck in a bubble where the only acceptable way to communicate complex ideas is a grid of fixed-width ASCII characters. Need to put a diagram into a comment? ASCII graphics! Meanwhile, outside the bubble we have Unicode, and Wikipedia and technical journals are full of images, diagrams, and mathematical notation with sophisticated typography. And text messages are full of emojis.
It would be nice to write code using richer visual notations.
Use dieresis to indicate token break, as in some style guides for coöperate:
kebab-case
infix⸚s̈ubtract
(Unserious!)
Nice. All the cool people (from the 1800’s) spell this word diaëresis, which I think improves the vibe.
Ah yes, but if you want to get really cool (read: archaic), methinks you’d be even better served by diæresis, its ligature also being (to my mind at least) significantly less offensive than the Neëuw Yorker style guide’s abominable diære…sizing(?) ;-)
Thank you for pointing this out. I think that diæresis is more steampunk, but diaëresis is self-referential, which is a different kind of cool.
I’ve tried that before, and it turns out the dash is more common than the underscore even in programming. For example, terminal stuff is riddled with dashes.
For me, this is not at all about typing comfort, it’s all about reading. Dashes, underscores and camel case all sound different in my head when reading them, the underscore being the least comfortable.
For me, this is not at all about typing comfort, it’s all about reading. Dashes, underscores and camel case all sound different in my head when reading them
I am the same way, except they all sound different from my screenreader, not just in my head. I prefer dashes. It’s also a traditional way to separate a compound word.
Interesting, you must have some synesthesia :-)
As far as I can tell, different variable styles don’t sound like anything in my head. They make it harder for me to read when it’s inconsistent, and I have to adjust to different styles, but an all_underscore codebase is just as good to me as an all camelCase.
I use Ctrl-N in vim so typing underscore names doesn’t seem that bad. Usually the variable is already there somewhere. I also try to read and test “what I need” and then think about the code away from the computer, without referring to specific names
I like ? being an operator you can apply to identifiers, like how it’s used with nullables in C#, or, as I recall, some kind of test in Ruby.
In Ruby, ? is part of the ternary operator and a legal method suffix so method names like dst?
are idiomatic.
In Zig, maybe.? resolves maybe to not be null, and errors if it is null. maybe? is different, in my mind.
In Ruby it’s just convention to name your function valid?
instead of the is_valid
or isValid
you have in most languages. The ? is just part of the function name.
Looks pretty, but I would really love an additional ability to pause the demo so the user could select text to copy. This way you could have an interactive screen cast tutorial.
Does Feature Request: Pause animation for couple of seconds help any?
thanks @l0b0, but I don’t think so. My ideal implementation would be like a typical video player, with pause, fast forward, etc. The primary difference from a video player would be that you could select text and copy it elsewhere.
The .NET team had a really neat project that I think was released. They created a declarative command-line argument parser which, in addition to generating the parser, embedded the grammar in a special section of the binary. PowerShell could then read that section and provide rich completions (including help text and so on) that were always in sync with the binary.
I’d love to see *NIX platforms adopt something like this: a special ELF section that embedded a grammar for the command line, and tooling to generate it from getopt_long arguments and richer interfaces. Shells could then parse it on the first invocation of a command and cache it until the binary changed.
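You can prototype the storage half of that today with plain binutils; the section name and grammar format here are entirely made up:

# embed a (hypothetical) JSON grammar into its own non-loadable ELF section
objcopy --add-section .cli_grammar=grammar.json \
        --set-section-flags .cli_grammar=noload,readonly mytool mytool.annotated

# a shell could extract and cache it the first time the command is run
objcopy -O binary --only-section=.cli_grammar mytool.annotated cached-grammar.json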
That sounds like a really cool idea. Any idea what the declarative grammar looked like? Was it something akin to what you see in a manpage?
Indeed, this was the first post where I learned about the system. Super interesting; I’m going to see if I can get a copy running in qemu!
There was a README back in the day on Slackware (I think?) that recommended MGR if your box had less than 8MB of RAM. Which mine did.
I love the concept. It seems fit for resurrection, at least in its architecture if not in the exact implementation.
@classichasclass I am trying to build your tarball on an emulated sun4m using qemu and a stock copy of SunOS 4.1.4. Any chance you could share a bit more about your successful build setup? It looks like the Configfile is set to use gcc; did you use gcc? If so, what version?
No need: gopher://gopher.floodgap.com/9/archive/sunos-4-solbourne-os-mp/gcc-2.95.3-sunos413.tar.gz
I find editing in vim really pleasant, especially coupled with an automatic formatter (à la gofmt). I use pandoc as a fixer via https://github.com/dense-analysis/ale.
Everyone uses an editor? No no no no… 1000 times no. I hate WYSIWYG editors and what they represent. Putting formatting ahead of content was a horrible idea that tends to survive in the heads of many, even though it has already been proven counterproductive anyway.
Markdown is human-writable, and could be adopted by the masses, for example on messaging apps, social media, etc., if people were introduced to it or required to use it at work or school.
BBCode was very popular in the 2000s, and web forums broke through in popularity well beyond techies.
What if students had to write their school assignments in Markdown? Is it such a complicated thing to ask of them? In which way is MS Word any simpler? It’s not!
If you need to include tabular data, markdown is hard, IMO. The original markdown required you to just write HTML for them, which was no picnic. None of the dialects that have evolved since then are anywhere near as easy as editing a table in Word. And I say this as someone who intensely dislikes Word.
I like writing in markdown, using a plain old text editor. But when I need to insert a table, I use visidata to edit and export github-flavored markdown. I don’t mind it, because I appreciate the other benefits of markdown. I could not claim, with a straight face, that it’s as easy as a WYSIWYG editor would be for creating the document.
(Also, FWIW, markdown has been adopted on discord, and I think most matrix clients do the right thing with it too.)
Another nice option is pandoc:
$ pandoc -f csv -t gfm <<-EOF
foo,bar,baz
1,2,3
4,5,6
EOF
| foo | bar | baz |
|-----|-----|-----|
| 1 | 2 | 3 |
| 4 | 5 | 6 |
FWIW, Emacs’s markdown-mode
has a few functions that make writing tables easy.
There’s markdown-insert-table
which prompts for the size and alignment and inserts a pre-built table, and even allows tabbing between cells.
And then there are a number of markdown-table-* functions for editing them - moving rows, adding columns, etc.
I wrote my own Markdown/Org-mode-style markup language for my blog. The one thing I do not do is store the posts in my markup language; instead I store the final HTML render—that way, I’m not stuck with whatever syntax I use forever (and I’ve changed some of the syntax since I initially developed it). Also, for tables, I use simple tab-separated values:
#+table Some data goes here
*foo bar baz
**foo bar baz
3 14 15
92 62 82
8 -1 4
#-table
The whitespace is tabs; the line starting with the asterisk is a header line, and the double asterisk is the footer. This handles 95% of the tables I generate, and because I store posts in HTML format, it doesn’t matter much that it looks a bit messy here.
I think most people don’t get what John Gruber was trying to do—make it easier to write blog posts.
“Putting formating ahead of content was an horrible idea that tends to survive in the heads of many”
I use Emacs and Org-mode but I have never understood the insistence that those who use anything from LaTeX to Docbook to Markdown are separating content and structure.
Oh, how I tried to learn LaTeX, until it smacked me in the forehead that I had to compile a document!
Anyone who types #header ##subheading * bullet while typing (or using autocomplete) is thinking about format and structure while producing content.
I loathe word processors, but creating a template makes it just as easy to separate content and structure. Even back in the 90s, on USENET and other pure plaintext forums, or RFCs for that matter, it was commonplace to insert ASCII tables and /emphasis/, like I am now with * and /s.
Nothing has ever stopped anyone from treating a screen like a typewriter or pad of paper, just writing and writing and writing, and coming back later to add structure and formatting.
Writing is writing. Editing is editing. Typesetting is typesetting. The only difference now is we all have to do all three, but nothing but our minds prevents us from doing them separately.
Agreed. The only WYSIWYG editor I’ve ever enjoyed using is TeXmacs, despite its strange window/buffer approach and bugs. I wish every WYSIWYG editor learned from it. The vast majority of them are a complete nightmare. I want to throw my computer every time Slack’s new WYSIWYG message box screws up my formatting.
Nine, twelve, and thirteen are the ones I don’t disagree with. 13 is using long options since they’re easier to read, but they can make one-liners very tangled so you’ve got to case-by-case it.
I’d actually wanna argue for sh (for example dash) over bash or zsh. I use zsh as my interactive shell and so I write zsh functions so I can just source them in. If I write a stand alone shell script, that’s when I’d go for sh first.
I prefer using short options when composing commands interactively, but I prefer long options in long-lived scripts, as the latter make the intent of the arguments more readable. For example, grep’s -l vs --files-with-matches.
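For instance, the same search both ways (these long option names are GNU grep's):

grep -rl TODO src/                                # fine at the prompt
grep --recursive --files-with-matches TODO src/   # clearer in a long-lived script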
Yes, that’s the general rule, with the exception being things where the short version is just so well known that the long version is almost obfuscating.
Altogether a pretty sensible list. I would add shfmt to keep your editing sane. For number (7), another option would be to add a -x <FILE> CLI arg in combination with BASH_XTRACEFD and allow a person to write trace output to a file (sketched below, after the example). For number (8), I would perhaps provide some examples of how [[ is more ergonomic and powerful than [, for example:
# with [[ ... ]] (bash): && / || and grouping parens work directly
if [[ -f $f1 && (-f $f2 || -f $f3) ]]; then
    echo 'found files'
fi

# with plain [ ... ]: you need -a / -o and escaped parens
if [ -f $f1 -a \( -f $f2 -o -f $f3 \) ]; then
    echo 'found files'
fi
Also, with [[ you get regex and glob matching.
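Here’s a rough sketch of the BASH_XTRACEFD idea from (7); it needs bash 4.1+, and the -x flag name is just an example:

while getopts 'x:' opt; do
  case $opt in
    x)
      exec {trace_fd}>"$OPTARG"   # open the trace file on a fresh file descriptor
      BASH_XTRACEFD=$trace_fd     # route xtrace output there instead of stderr
      set -x
      ;;
  esac
done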
I thought this was a super interesting post, with great historical references. I would love to see more innovative convergence between the 99% of end users and the 1% of programmers.
Agreed, but the historical references didn’t really do justice to the experience home computer users had in the 80s and early 90s microcomputer era: being dropped at a BASIC prompt. That’s the ultimate “here’s a programming language, have at it”, even more so than the UNIX CLI, which would only be available to certain professionals and maybe some students.
After BASIC, computers shipped with DOS, which was not nearly as powerful as UNIX or BASIC. It makes sense that people exposed only to DOS would find GUIs better in almost all ways.
That is an interesting thought exercise, though I don’t really see BASIC as a language that helped glue disparate applications together. I think the UNIX shell environment comes the closest to an operating system that allows a user to gradually use more programming as their tasks become more complex or require more automation.
I fully agree regarding the “glue” thing. I just remembered AppleScript. Perhaps that comes closest to something like the shell for the lay user - it allows you to “hook into” the UI components that already exist and “tell” the application to “select this item from the menu”, or “take something from the clipboard and put it at the cursor”, etc.
I never really see articles about it on the web (though I also don’t seek them out), and even though I was never a long-time Mac user, I have the feeling it’s a bit obscure and most people won’t be familiar with it. You could compare it to VBA, except that VBA is confined to only the applications that support it. AppleScript is intended to bridge between applications, and it can talk to programs without their cooperation (although I suppose if the program does cooperate and provides hooks it can be made much more useful). I guess it’s a lot like the UNIX shell for a GUI in that sense.
Unfortunately, I never used it in anger, just played with it and did some hello world examples.
AppleScript is indeed very interesting. I have not used it myself, but from skimming the Wikipedia article it seems to take a very high-level approach, where you essentially automate the user interactions a person would perform with the GUI. Though that has proven successful for AppleScript, I wonder if something lower level would be useful, where any application exposes its capabilities in a manner that is divorced from its GUI. However, such an abstraction is probably difficult to create.
I wonder if something lower level would be useful, where any application exposes its capabilities in a manner that are divorced from its GUI.
This was basically the Windows COM object abstraction. Applications and services would have COM APIs that any Win32 app could call. In practice most apps don’t bother to expose COM APIs, so it’s fairly limited, though it’s still OK for scripting against Windows internals.
I’m really hoping this project will succeed, I would love to be able to use an untainted Linux kernel on my laptop.
I think that’s right, depending on your definition of “untainted”. My understanding is that no binary blob code will be executed in kernel space, but we will still need to load a signed binary firmware blob (provided by NVIDIA) into the GPU. The main job of the new driver will be to communicate with the firmware blob.
Still needing a mystery signed firmware blob is certainly not ideal, but having a mainline in kernel driver does seem like a significant improvement.
I don’t think NVK will help you there. Their Mesa driver runs in userland, not in the kernel. The tainting happens in the kernel.
“we need a new kernel uAPI and the nouveau kernel needs quite a bit of work” - I thought this implied that the NVK driver will use a reworked in-tree nouveau driver to talk to NVIDIA hardware; is this not the case?
Thanks for sharing; though this post is from 2009, its content is still very relevant. All three of Tim O’Reilly’s principles resonated with me.
My favorite part of this post was the notion of using demos to motivate yourself. I have always thought of demos as something I build to show others, but I love the idea of creating small demos for yourself.
I have 10+ year old screenshots of early web projects of mine and it’s quite nice to see those “demos” again :)