If you are using Python, use https://github.com/platformdirs/platformdirs for this exact purpose. It will dynamically construct the directory you need to put your things in. It takes into consideration:
the OS
whether it’s for the user or the whole system
your app and author name
whether you store cache, log, temp files, config, or data
There are official conventions for all of that, so it’s best to respect them.
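To make that concrete, here is a minimal Python sketch using platformdirs ("MyApp" and "MyCompany" are placeholder names, and the printed paths will differ per OS):
import platformdirs
# One PlatformDirs object per application; it picks the right base paths for the current OS.
dirs = platformdirs.PlatformDirs(appname="MyApp", appauthor="MyCompany")
print(dirs.user_config_dir)   # per-user config directory
print(dirs.user_cache_dir)    # per-user cache directory
print(dirs.user_log_dir)      # per-user log directory
print(dirs.site_config_dir)   # system-wide config directory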
Now, I would not store the dev config for your project in there, only the user config for the end product. It makes no sense to have pyproject.toml or package.json anywhere else than in the code repo.
It makes no sense to have pyproject.toml or package.json anywhere else than in the code repo.
That’s not what was suggested here. The suggestion was to have a .config directory inside the code repos instead of having the configuration files for various tools at the repo root.
It’s already built into the standard library: https://docs.python.org/3/library/importlib.html?highlight=importlib#module-importlib
Emacs, specifically Emacs 29 with a few extra patches (most notably native comp and xwidgets).
I do a lot of things in Emacs, not least because it gives me a consistent way to work between macOS, Linux, and WSL if I have to work on Windows:
https://chaos.social/@citizen428/109703887983215601
I’m also partial to Acme but am too deeply invested in Emacs at this point.
I switch between Neovim and VSCode. VSCode because it’s better for LaTeX, Autohotkey, and Copilot, plus it’s easier to pair with other people on it. Neovim for everything else. I’ve extended it with a lot of plugins and custom Lua functions, like this one to tie tasks to specific buffers:
-- Store a command string on the current buffer.
function LoadLocal(local_cmd)
  vim.b.local_cmd = local_cmd
end
-- Run the command previously stored for this buffer.
function RunLocal()
  vim.cmd(vim.b.local_cmd)
end
-- :LoadLocal <cmd> stores the command; gxl runs it in normal mode.
vim.cmd [[command! -nargs=1 LoadLocal call v:lua.LoadLocal(<f-args>)]]
vim.keymap.set('n', 'gxl', RunLocal, {silent = true})
The custom functions are what keep me away from alternatives like Kakoune and Helix. The cold start time is really bad and I should probably lazy load the plugins. Time to dig into @eBPF’s config.
What would make this text editor easier to pair on?
The sheer number of mappings I use makes my neovim unusable by anyone else.
Mostly this: https://code.visualstudio.com/learn/collaboration/live-share
I don’t usually use VSCode but have occasionally used it for remote mentoring sessions because of Live Share.
I’ve always found upterm / tmate to be ideal in these situations, as the only requirement is a terminal emulator, which makes it great on bandwidth; the flexibility multiplexers add is nice; and it’s not limited to a single piece of software. It doesn’t support users keeping their settings as mentioned, but I’d be damned if someone made me use proprietary software just to pair.
I’ve not used it, but there is a Copilot plugin for Neovim.
There are a host of FOSS options out there too if you don’t want to accidentally feed your or your employer’s code to the Microsoft GitHub servers or have it packaged into their proprietary offerings.
I haven’t come across a FOSS Copilot equivalent, care to share some links?
The cold start time is really bad and I should probably lazy load the plugins.
Not to shill too much, but that’s one of the things I like about Kakoune: even though my config is pretty beefy (20 custom plugins, including the substantial lsp one, plus all the bundled ones), the startup time is still 60ms (vs 2ms for no plugins whatsoever).
Some of the comments here reminded me of a popular Stack Exchange post: https://serverfault.com/questions/293217/our-security-auditor-is-an-idiot-how-do-i-give-him-the-information-he-wants
Strong cryptography only means the passwords must be encrypted while the user is inputting them but then they should be moved to a recoverable format for later use.
Thanks for the laugh!
That was a wild ride!
Every time someone brings up a language, including talking about how some old language now has features the ‘cool’ language had last decade, I just want the summary:
What are the coolest things about this language?
For what type of programs is this language best?
In old Perl, the coolest feature would be the speed at which an experienced author could slam out code. Readability, maintainability, and consistency took second place to this goal. The old joke was that you would never find the last bug in a Perl program. It got your prototype to market really, really fast. There were other coolness items that have since become common, such as repositories and database cursors as file handles.
Perl was for prototyping complex web services in a tearing hurry.
Is there something else?
PCRE, which pretty much all mainstream languages incorporated in some form or other.
I’d assume that, other than shells, Perl is among the most portable scripting languages, and that it’s old enough to have an extremely robust ecosystem and libraries. That’s true for all old languages too, I guess. Oh, on portability, what was special with Perl is that they have (had?) something where you could just randomly run the test suites of libraries on a machine, which caused libraries to be tested on super obscure systems. In other words, it wasn’t just the language that was very cross-platform, but also the libraries you’d find on CPAN. Nowadays it’s a bit annoying to often find software that only works on Ubuntu, maybe only one version of it. Docker certainly hasn’t made that better.
It’s really good for golf and really nice for running Perl scripts as one-liners on the command line, with the -M and -e arguments if I remember correctly.
I kind of expect that it’s still the fastest language to build a prototype of pretty much anything in, if you know it well.
But that’s just from someone who hasn’t touched Perl in decades, so I might be misremembering some parts.
I always had a bit of a soft spot for OCaml but the MOOC really made me enjoy the language.
There’s a small collection of single file Ruby starters here: https://starters.wolfgangrittner.dev/scripts
This implementation is not nearly pure GNU Make enough for me. :)
Here’s one that is pure Make. Some of it is cribbed from The GNU Make Book and not all the definitions are necessary. Numbers are represented by a list of xs. So 5 would be the string x x x x x.
It counts down from 100 instead of up. I don’t really care.
This implementation is not nearly pure GNU Make enough for me. :)
That’s the spirit :-) I saw a blog post about doing this kind of arithmetic in make but didn’t want to go this far down the rabbit hole. I’m glad you did though.
Comments and articles like these are why I love lobste.rs :D
Wow. Some things people were not meant to know.
This note implies GNU make is Turing complete.
https://okmij.org/ftp/Computation/#Makefile-functional
It relies on a specific and nonportable shell builtin.
make’s default shell is /bin/sh. It’s true that seq is not part of POSIX, but I tried this Makefile on macOS, NixOS, Arch Linux, and OpenBSD (you obviously need to pkg_add gmake) and it worked on all of them. It also works when explicitly setting the shell to Bash, Zsh, or OpenBSD’s ksh. It did NOT work with Fish, though not because of seq (which is supported) but because the POSIX-style arithmetic is not supported. That’s portable enough for me but YMMV.
Nice write up.
Thanks :-) I hadn’t done this in a while, so I kept notes for myself. Then I figured I might as well throw them in a blog post.
What about using Redbean?
https://redbean.dev/#:~:text=redbean%20is%20an%20open%20source,html%20and%20.
Given that it’s not part of the base system I’d assume it’s not on topic.
Exactly. I find Redbean quite interesting in principle and starred it on GitHub a while ago but it never quite seems like what I need.
Wow, that’s one of the worst gatekeeping articles I’ve seen in a while. I’ve been a vim user for over 20 years and made the switch to Neovim at 0.4. It’s by far my favorite vi-incarnation for daily coding. The article also doesn’t seem terribly informed, e.g. it laments the lack of vimdiff while ignoring nvim -d (same flag as vim).
First-class packages are the most underrated feature of Lisp. AFAIK only Perl offers it fully, but it uses very bad syntax (globs). Most macros merely suppress evaluation and this can be done using first-class functions. Here is my question for lispers: if you can use lex/yacc and can write a full-fledged interpreter, do you really need macros?
Most macros merely suppress evaluation and this can be done using first-class functions.
I strongly disagree with this. Macros are not there to “merely suppress evaluation.” As you point out, they’re not needed for that, and in my opinion they’re often not even the best tool for that job.
“Good” macros extend the language in unusual or innovative ways that would be very clunky, ugly, and/or impractical to do in other ways. It’s in the same vein as asking if people really need all these control flow statements when there’s ‘if’ and ‘goto’.
To give some idea, cl-autowrap uses macros to generate Common Lisp bindings to C and C++ libraries using (cl-autowrap:c-include "some-header.h"). Other libraries, like “iterate” add entirely new constructs or idioms to the language that behave as if they’re built-in.
Here is my question for lispers: if you can use lex/yacc and can write a full-fledged interpreter, do you really need macros?
Lex/Yacc and CL macros do very different things. Lex/Yacc generate parsers for new languages that parse their input at runtime. CL macros emit CL code at compile time which in turn gets compiled into your program.
In some sense your question is getting DSLs backwards. The idea isn’t to create a new language for a special domain, but to extend the existing language with new capabilities and operations for the new domain.
The Babel compiler uses parsing to add features on top of older JavaScript, like async/await.
I am guessing all these use lex/yacc internally. Rails uses scaffolding and provides helpers to generate JS code at compile time. Something like Parenscript.
The basic property of a macro is to generate code at compile time. Granted, most of these are not built into the compiler, but nothing is stopping you from adding a new pre-compile step with the help of a Makefile.
Code walking is difficult in lisp as well. How would I know if an expression is a function or a macro? If I wanted to write a code highlighter in vim that highlights all macros differently, I would have a difficult time doing this by parsing alone, even though lisp is an easy language to parse.
Code walking is difficult in lisp as well. How would I know if an expression is a function or a macro?
CL-USER> (describe #'plus-macro)
#<CLOSURE (:MACRO PLUS-MACRO) {1002F8AB1B}>
[compiled closure]
Lambda-list: (&REST SB-IMPL::ARGS)
Derived type: (FUNCTION (&REST T) NIL)
Documentation:
T
Source file: SYS:SRC;CODE;SIMPLE-FUN.LISP
; No value
CL-USER> (describe #'plus-fn)
#<FUNCTION PLUS-FN>
[compiled function]
Lambda-list: (A B)
Derived type: (FUNCTION (T T) (VALUES NUMBER &OPTIONAL))
Source form:
(LAMBDA (A B) (BLOCK PLUS-FN (+ A B)))
; No value
You underestimate the power of the dark side Common Lisp ;)
In other words … macros aren’t an isolated textual tool like they are in other, less powerful, languages. They’re a part of the entire dynamic, reflective, homoiconic programming environment.
I know that, but without using the Lisp runtime, with parsing alone, can you do the same?
I’m not sure where you’re going with this.
In the Lisp case, a tool (like an editor) only has to ask the Lisp environment about a bit of syntax to check if it’s a macro, function, variable, or whatever.
In the non-Lisp case, there’s no single source of information, and every tool has to know about every new language extension and parser that anybody may write.
I believe their claim is that code walkers can provide programmers with more power than Lisp macros. That’s some claim, but the possibility of it being true definitely makes reading the article they linked ( https://mkgnu.net/code-walkers ) worthwhile.
Yes. You’d start by building a Lisp interpreter.
… a common lisp interpreter, which you are better off writing in lex/yacc. Even if you do that, each macro defines new ways of parsing code, so you can’t write a generic highlighter for loop-like macros. If you are going to write a language interpreter and parser, why not go the most generic route of lex/yacc and support any conceivable syntax?
I really don’t understand your point, here.
Writing a CL implementation in lex/yacc … I can’t begin to imagine that. I’m not an expert in either, but it seems like it’d be a lot of very hard work for nothing, even if it were possible, and I’m not sure it would be.
So, assuming it were possible … why would you? Why not just use the existing tooling as it is intended to be used???
That’s too small of a problem to demonstrate why code walking is difficult. How about these, then:
Count the number of s-expressions used in the program
Show the number of macros used
Show number of lines generated by each macro and measure line savings
Write a linter which enforces stylistic choices
Suggest places where macros could be used for minimising code
Measure code complexity, coupling analysis
Write a lisp minifier, obfuscator
Find all places where garbage collection can be improved and memory leaks can be detected
Insert automatic profiling code for every s-expression and list out where the bottlenecks are
Write code refactoring tools.
List most used functions in runtime to suggest which of them can be optimised for speed
Ironically, the above is much easier to do with assembly.
My point is simply this: lisp is only easy to parse superficially. Writing the above will still be challenging. Writing lexers and parsers is better for code generation, and hence for macros in the most general sense. If you are looking for power, then code walking beats macros, and that’s also doable in C.
While intriguing, it would be nice if the article spelled out the changes made with code walkers. Hearing that a program ballooned 9x isn’t impressive by itself. Without knowing about the nature of the change it just sounds bloated. (Which isn’t to say that it wasn’t valid, it’s just hard to judge without more information.)
Regarding your original point, unless I’m misunderstanding the scope of code walkers, I don’t see why it needs to be an either/or situation. Macros are a language-supported feature that do localized code changes. It seems like code walkers are not language supported in most cases (all?), but they can do stateful transformations globally across the program. It sounds like they both have their use cases. Like lispers talk about using macros only if functions won’t cut it, maybe you only use code walkers if macros won’t cut it.
BTW, it looks like there is some prior art on code walkers in Common Lisp!
Okay, I understand your argument now.
I’ll read that article soon.
“That’s two open problems: code walkers are hard to program and compilers to reprogram.”
The linked article also ends with something like that. It supports your argument, given that macros are both already there in some languages and much easier to use. That there are lots of working macros out there in many languages supports it empirically.
There’s also nothing stopping experts from adding code walkers on top of that. Use the easy route when it works. Take the hard route when it works better.
Welcome back Nick, haven’t seen you here in a while.
Thank you! I missed you all!
I’m still busy (see profile). That will probably increase. I figure I can squeeze a little time in here and there to show some love for folks and share some stuff on my favorite tech site. :)
That kind of is the point. Lisp demonstrates that there is no real boundary between the language as given and the “language” its user creates, by extending and creating new functions and macros. That being said, good lisp usually follows conventions so that you may recognize if something is a macro (e.g. with-*) or not.
Here are examples of using lex/yacc to extend a language
Those are making new languages, as they use new tooling, which doesn’t come with existing tooling for the language. If someone writes Babel code, it’s not JavaScript code anymore - it can’t be parsed by a normal JavaScript compiler.
Meanwhile, Common Lisp macros extend the language itself - if I write a Common Lisp macro, anyone with a vanilla, unmodified Common Lisp implementation can use them, without any additional tooling.
Granted, most of these are not built into the compiler, but nothing is stopping you from adding a new pre-compile step with the help of a Makefile.
…at which point you have to modify the build processes of everybody that wants to use this new language, as well as breaking a lot of tooling - for instance, if you don’t modify your debugger, then it no longer shows an accurate translation from your source file to the code under debugging.
If I wanted to write a code highlighter in vim that highlights all macros differently I would have a difficult time doing this by parsing alone even though lisp is an easy language to parse.
Similarly, if you wanted to write a code highlighter that highlights defined functions differently without querying a compiler/implementation, you couldn’t do it for any language that allows a function to be bound at runtime, like Python. This isn’t a special property of Common Lisp, it’s just a natural implication of the fact that CL allows you to create macros at runtime.
Meanwhile, you could capture 99.9%+ of macro definitions in CL (and function definitions in Python) using static analysis - parse code files into s-expression trees, look for defmacro followed by a name, add that to the list of macro names (modulo packages/namespacing).
tl;dr “I can’t determine 100% of source code properties using static analysis without querying a compiler/implementation” is not an interesting property, as all commonly used programming languages have it to some extent.
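To make that static-analysis idea concrete, here is a rough Python sketch of such a check; it is a regex heuristic rather than a real s-expression walk, and it ignores packages, reader macros, and macros defined at runtime, which is exactly the caveat above:
import re
import sys
# Approximate the set of macro names defined in a Lisp source file by scanning
# for (defmacro <name> ...) forms. This is a static heuristic only; it cannot
# see macros that are created at runtime.
DEFMACRO = re.compile(r"\(\s*defmacro\s+([^\s()]+)", re.IGNORECASE)
def macro_names(source):
    return set(DEFMACRO.findall(source))
if __name__ == "__main__":
    for path in sys.argv[1:]:
        with open(path, encoding="utf-8") as f:
            print(path, sorted(macro_names(f.read())))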
If you can use lex/yacc and can write a full-fledged interpreter, do you really need macros?
I don’t know why you’d think they are comparable. The amount of effort to write a macro is way less than the amount of effort required to write a lexer + parser. The fact that macros are written in lisp itself also reduces the effort needed. But most importantly, one is an in-process mechanism for code generation and the other one involves writing the generated code to a file. The first mechanism makes it easy to iterate on and modify the generated code. Given that most of the time you are maintaining, and hence modifying, code, I’d say that is a pretty big difference.
The Babel compiler uses parsing to add features on top of older JavaScript, like async/await.
Babel is an example of how awful things can be when macros happen out of process. The core of Babel is a macro system + pluggable reader.
I am guessing all these use lex/yacc internally.
Babel certainly doesn’t. When it started it used estools which used acorn iirc. I think nowadays it uses its own parser.
Rails uses scaffolding and provides helpers to generate JS code at compile time. Something like Parenscript.
I have no idea why you think scaffolding is like Parenscript. The common use case for Parenscript is to do the expansion on the fly, not to generate the initial boilerplate.
Code walking is difficult in lisp as well.
And impossible to write in portable code, which is why most (all?) implementations come with a code-walker you can use.
If syntax is irrelevant, why even bother with Lisp? If I just stick to using arrays in the native language, I can also define functions like this and extend the array language to support new control flow structures.
Well, if your question is “Would you prefer a consistent, built-in way of extending the language, or a hacked together kludge of pre-processors?” then I’ll take the macros… ;-)
Code walking is difficult in lisp as well. How would I know if an expression is a function or a macro? If I wanted to write a code highlighter in vim that highlights all macros differently I would have a difficult time doing pure code walking alone even though lisp is an easy language to parse.
My first question would be whether or not it makes sense to highlight macros differently. The whole idea is that they extend the language transparently, and a lot of “built-in” constructs defined in the CL standard are macros.
Assuming you really wanted to do this, though, I’d suggest looking at Emacs’ Slime mode. It basically lets the CL compiler do the work. It may not be ideal, but it works, and it’s better than what you’d get using Ragel, Swig, or Babel.
FWIW, Emacs, as far as I know (and as I have it configured), only highlights symbols defined by the CL standard and keywords (i.e. :foo, :bar), and adjusts indentation based on cues like “&body” arguments.
BTW, there is already a syntax highlighter that uses a code walker and treats macros differently. The code walker may not be easy to write, but it can hardly be said that it is hard to use: https://github.com/scymtym/sbcl/blob/wip-walk-forms-new-marco-stuff/examples/code-walking-example-syntax-highlighting.lisp
Yes, you absolutely want macros even if you have Lex/Yacc and interpreters.
Lex/Yacc (and parsers more generally), interpreters (and “full language compilers”), and macros all have different jobs at different stages of a language pipeline. They are complementary, orthogonal systems.
Lex/Yacc are for building parsers (and aren’t necessarily the best tools for that job), which turn the textual representation of a program into a data structure (a tree). Every Lisp has a parser, for historical reasons usually called a “reader”. Lisps always have s-expression parsers, of course, but often they are extensible so you can make new concrete textual notations and specify how they are turned into a tree. This is the kind of job Lex and Yacc do, though extended s-expression parsers and lex/yacc parsers generally have some different capabilities in terms of what notations they can parse, how easy it is to build the parser, and how easy it is to extend or compose any parsers you create.
Macros are tree transformers. Well, M4 and C-preprocessor are textual macro systems that transform text before parsing, but that’s not what we’re talking about. Lisp macros transform the tree data structure you get from parsing. While parsing is all about syntax, macros can be a lot more about semantics. This depends a lot on the macro system – some macro systems don’t allow much more introspection on the tree than just what symbols there are and the structure, while other macro systems (like Racket’s) provide rich introspection capabilities to compare binding information, allow macros to communicate by annotating parts of the tree with extra properties, or by accessing other compile-time data from bindings (see Racket’s syntax-local-value for more details), etc. Racket has the most advanced macro system, and it can be used for things like building custom DSL type systems, creating extensible pattern matching systems, etc. But importantly, macros can be written one at a time as composable micro-compilers. Rather than writing up-front an entire compiler or interpreter for a DSL, with all its complexity, you can get most of it “for free” and just write a minor extension to your general-purpose language to help with some small (maybe domain-specific) pain point. And let me reiterate – macros compose! You can write several extensions that are each oblivious to each other, but use them together! You can’t do that with stand-alone languages built with lex/yacc and stand-alone interpreters. Let me emphatically express my disagreement that “most macros merely suppress evaluation”!
Interpreters or “full” compilers then work after any macro expansion has happened, and again do a different, complementary job. (And this post is already so verbose that I’ll skip further discussion of it…)
If you want to build languages with Lex/Yacc and interpreters, you clearly care about how languages allow programmers to express their programs. Macros provide a lot of power for custom languages and language extensions to be written more easily, more completely, and more compositionally than they otherwise can be.
Macros are an awesome tool that programmers absolutely need!
Without using macros, you have to put all kinds of complex stuff into your language compiler/interpreter or do without it.
Eg. how will your language deal with name binding and scoping, how will your language order evaluation, how do errors and error handling work, what data structures does it have, how can it manipulate them, etc. Every new little language interpreter needs to make these decisions! Often a DSL author cares about only some of those decisions, and ends up making poor decisions or half-baked features for the other parts.
Additionally, stand-alone interpreters don’t compose, and don’t allow their languages to compose.
Eg. if you want to use 2+ independent languages together, you need to shuttle bits of code around as strings, convert data between different formats at every boundary, maybe serialize it between OS processes, etc.
With DSL compilers that compile down to another language for the purpose of embedding (eg. Lex/Yacc are DSLs that output C code to integrate into a larger program), you don’t have the data shuffling problems.
But you still have issues if you want to eg. write a function that mixes multiple such DSLs.
In other words, stand-alone compilers that inject code into your main language are only suitable for problems that are sufficiently large and separated from other problems you might build a DSL for.
With macro-based embedded languages, you can sidestep all of those problems.
Macro-based embedded languages can simply use the features of the host language, maybe substituting one feature that it wants to change.
You mention delaying code – IE changing the host language’s evaluation order.
This is only one aspect of the host language out of many you might change with macros.
Macro extensions can be easily embedded within each other and used together.
The only data wrangling at boundaries you need to do is if your embedded language uses different, custom data structures. But this is just the difference between two libraries in the same language, not like the low-level serialization data wrangling you need to do if you have separate interpreters.
And macros can tackle problems as large as “I need a DSL for parsing” like Yacc to “I want a convenience form so I don’t have to write this repeating pattern inside my parser”.
And you can use one macro inside another with no problem.
(That last sentence has a bit of ambiguity – I mean that users can nest arbitrary macro calls in their program. But also you can use one macro in the implementation of another, so… multiple interpretations of that sentence are correct.)
To end, I want to comment that macro systems vary a lot in expressive power and complexity – different macro systems provide different capabilities. The OP is discussing Common Lisp, which inhabits a very different place in the “expressive power vs complexity” space than the macro system I use most (Racket’s). Not to disparage the Common Lisp macro system (they both have their place!), but I would encourage anyone not to come to conclusions about what macros can be useful for or whether they are worthwhile without serious investigation of Racket’s macro system. It is more complicated, to be certain, but it provides so much expressive power.
I mean, strictly, no - but that’s like saying “if you can write machine code, do you really need Java?”
(Edited to add: see also Greenspun’s tenth rule … if you were to build a macro system out of such tooling, I’d bet at least a few pints of beer that you’d basically wind up back at Common Lisp again).
OCaml has first-class modules: https://ocaml.org/releases/4.11/htmlman/firstclassmodules.html
I’m a lot more familiar with them than I am with CL packages though, so they may not be 100% equivalent.
I’m not claiming to speak for all lispers, but the question
Here is my question for lispers: if you can use lex/yacc and can write a full-fledged interpreter, do you really need macros?
might be misleading. Obviously you don’t need macros, and everything could be done some other way, but macros are easy to use, while also powerful, and can be dynamically created or restricted to a lexical scope. I’ve never bothered to learn lex/yacc, so I might be missing something.
Recently stumbled upon this, also seems like a good alternative for infrequently run scripts/aliases: https://github.com/KnorrFG/dotree
I want to read and understand; is there a primer? How do I get started?
Not a primer, but solutions: https://alexaltea.github.io/blog/posts/2016-10-12-xchg-rax-rax-solutions/
Unfortunately, 0x16 is missing here. (And I also don’t know the solution to this one.)
…that features a keyword “probably” which will execute the provided code with a 90% probability.
INTERCAL has this! You can add a %x where 0 < x < 100 to a line to indicate it should execute with a certain probability.
That was indeed my inspiration :-)
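As a playful illustration of the idea (not how INTERCAL or the linked language implement it), a minimal Python sketch of a “probably” construct might look like this; the decorator name and the 90% default are made up for illustration:
import random
def probably(p=0.9):
    # Run the decorated function with probability p; silently skip it otherwise.
    def decorate(fn):
        def wrapper(*args, **kwargs):
            if random.random() < p:
                return fn(*args, **kwargs)
        return wrapper
    return decorate
@probably(0.9)
def greet():
    print("hello, probably")
for _ in range(10):
    greet()  # prints roughly 9 times out of 10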
For my blog I use Hugo on Netlify. For some other static site projects I’m still on Middleman.
I just came back from a 5-day work trip to Jakarta and my wife’s visiting her parents, so I’ve got the following planned (no particular order):