Historical analysis (fun, not accurate)
UNIX: program crashed, better recompile with print statements as we have no debugger.
LISP: program crashed, here’s a debugger. Restart when you fixed your mess.
So in an alternative timeline we all have debuggers always available all the time and printf is lost to time. It’s not that I look down on printf debugging, it’s that I long for an alternate history where we are more parenthetically inclined.
Print debugging is so popular in Lisp circles that one of the popular modern tools for developing Common Lisp programs, Sly, literally has it integrated: you can select any expression to have its value captured instead of having to manually insert a print.
Calling Sly’s stickers feature “print debugging” is absurd. It’s like saying that using GDB is print debugging because you use the print command to inspect values.

Unless you set them to break, all they do is capture values and then let you go through them in order. Seems functionally identical to print debugging to me.
By a similar logic, any value inspection in a debugger is print debugging. Which it isn’t: printing involves many things; it produces output (and a system call or two, generally), is irreversible, and so on.
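For what it’s worth, the distinction is easy to sketch outside any editor. A toy capture helper in Python (the name and shape are mine, not Sly’s API) records values for later inspection instead of producing output:

    captured = []

    def capture(label, value):
        # No output, no syscalls: just record the value and pass it through,
        # so any expression can be wrapped transparently.
        captured.append((label, value))
        return value

    def area(w, h):
        return capture("w*h", capture("w", w) * capture("h", h))

    area(3, 4)
    area(5, 6)

    # Later, walk the recordings in order, sticker-style:
    for label, value in captured:
        print(label, "=>", value)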
I wish Sly had proper step debugging like edebug, though. Many imperative IDEs let you step statement by statement and see how variables behave; sexp step debugging is something I wish were more widespread. To date I’ve only seen edebug (for Emacs Lisp) and cider-debug (for Clojure).
Amusingly, the BASIC on the Atari 2600 gaming system can trace substeps in an expression! That’s about the only good thing about programming on a system with 128 bytes (yes, BYTES) of RAM.
Hot code reload is cool, it allows you to retry with prints inserted quickly and without disrupting slow-to-set-up semi-persistent state! (My Common Lisp code was multithreaded and subject to external timeouts, so I was just writing macros to do print debugging more comfortably rather than figuring out how to use the debugger in such a context; and trace was often a worse fit than slapping print-every-expression on a crucial let*.)

True. It’s rare to see a tool that discloses the environment variables it uses in a help command, and it would be extremely helpful.

That said, I would still prefer the typical -h/--help argument to display it, instead of a HELP environment variable. Reasoning being:

1. Familiarity.
2. A tool can warn me when it doesn’t recognize an argument. If it’s expecting -help instead of --help, it can warn me with “I don’t recognize --help”, so I can try other things like -help. On the other hand, if I run HELP= tool and nothing happens, I need to go to the documentation, because the tool doesn’t give any feedback, and I wouldn’t expect feedback from a tool when it doesn’t recognize an environment variable.

There’s a curious case where important environment variables may come from libraries or the language runtime itself. Like the somewhat infamous OCAMLRUNPARAM.
I’d prefer to see such info in --help as well.

Thanks for the feedback! I will try and see if I can take over the help message, because this is a very good idea.
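For what it’s worth, a sketch of what that could look like with Python’s argparse (the tool and variable names are invented): argparse already complains about unknown flags, and a help epilog can disclose the recognized environment variables.

    import argparse
    import os

    # Hypothetical tool: the environment variables documented here are invented.
    parser = argparse.ArgumentParser(
        prog="tool",
        description="Demo of disclosing environment variables in --help.",
        epilog="environment variables:\n"
               "  TOOL_CACHE_DIR  where to store cached results\n"
               "  TOOL_NO_COLOR   disable colored output when set",
        formatter_class=argparse.RawDescriptionHelpFormatter,
    )
    parser.add_argument("-v", "--verbose", action="store_true")

    # Unknown flags already produce feedback: argparse exits with
    # "unrecognized arguments: ...", which an ignored environment
    # variable never does.
    args = parser.parse_args()

    cache_dir = os.environ.get("TOOL_CACHE_DIR", "~/.cache/tool")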
Always get your automated tools to try and prove false from your assumptions!
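Concretely, that is just a satisfiability check. A sketch with the Z3 SMT solver’s Python bindings (the two assumptions here are invented for illustration): assert what you believe and check; unsat means False follows.

    # Sketch with Z3's Python bindings (pip install z3-solver);
    # the two assumptions are invented for illustration.
    from z3 import Int, Solver, unsat

    x = Int("x")
    s = Solver()
    s.add(x > 10)  # assumption 1
    s.add(x < 5)   # assumption 2

    # unsat means the assumptions contradict each other:
    # False is provable from them, so the model needs fixing.
    if s.check() == unsat:
        print("your assumptions prove False")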
I am not sure if this ignores Cython and the buffer protocol. Slapping some more syntax and types on top of Python source files and compiling them with Cython makes performance available. The buffer protocol enabled numpy and the Python-for-data-science revolution, all because you could pack things tightly and pass pointers around C and FORTRAN libraries. Maybe I’m picking too much at the Python metaphor, but it’s also comparing apples to oranges: a high-level dynamic language like Python can be made fast; can a high-level fast language like Rust be made more dynamic? Please ignore if it makes no sense as I am quite sleepy.
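To make the buffer-protocol point concrete, a small sketch (assuming numpy is installed): a stdlib array and a numpy array can view the very same packed bytes, so nothing is copied when data crosses the boundary.

    # Buffer protocol in action: numpy views the same memory as the
    # stdlib array, so no bytes are copied at the boundary.
    import array
    import numpy as np

    raw = array.array("d", [1.0, 2.0, 3.0])      # packed C doubles
    view = np.frombuffer(raw, dtype=np.float64)  # zero-copy view

    raw[0] = 42.0
    print(view[0])  # 42.0 -- both names point at the same buffer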
I’d love for Zotero to treat self-hosting the data server component as something that is actually supported. It’s the one feature that is making me just … not want to use Zotero.
And, I’m studying law and engineering, so it’s not like I don’t need quality reference management software…
I have solved most of my issues with the data share through WebDAV; the metadata, though, is not covered by that and requires a server. I may give https://github.com/linuxserver/docker-zotero a shot, though.
But… this looks exactly like what I have in flatpak Zotero…? Huh?? Down to the sidebar and everything, even.
It’s been available as a beta version for a while now… Was the Flatpak distributing the beta perhaps…?
Same here. I switched to Flatpak today and got a strange look, so here’s the release message from the 9th of August.
Saw the re-licensing and immediately got curious about what would happen with the control plane in Oxide’s server. In a previous discussion, Cockroach’s approach to the BUSL was praised for its very simple additional terms, i.e. as long as a third party cannot create a schema, you are fine. Maybe an epic rant is coming on the podcast?
Question for racketeers:
I was looking at define-cstruct in the FFI:
Objects of the new type are actually C pointers, with a type tag that is the symbol form of id or a list that contains the symbol form of id.
Does this mean I can make perfectly dense, C compatible structs, cast them to a void pointer, then pass them off to a C function that expects some binary data that’s memcpy-able… all from within racket?
Or am I getting too excited?
Maybe not perfectly compatible, but yes. Take a look at the Racket C API. Also, the malloc modes for define-cstruct have implications on how things are traced by the GC.

I would expect this to be possible, as I know it can be done in Guile with bytevectors, but it’s definitely not pretty.
The Racket Discourse has Q&A at https://racket.discourse.group/c/questions/6
You can join here https://racket.discourse.group/invites/VxkBcXY7yL
An alternative approach to reading the documentation would be to use Bustle or other tools for intercepting D-Bus! Don’t spend time reading; take a look at what’s going over the wire.
Summer school ended, so I’ll take a breather and explore before I go back
I recently learnt about the “three wise men puzzle” (the same riddle but with n=3). I was reading this paper about non-classical logics, and they go through a nice explanation if you are into that. The nicest part of the paper is that it solves the puzzle by proof search (sledgehammer!) once you encode everything in quantified multi-modal logic. It’s a highly readable paper, so give it a go!
Went down the rabbit-hole of planning a home network without ISP-rented equipment. Learned what the heck SFP and PON are, and what differentiates the various “ONT” equipment types (SFP-transceiver-only vs Bridge/Media Adapter vs All-in-One). For someone used only to Ethernet ports, SFP felt… wild-but-obvious? An abstract network port.
Please do write it up!
There you go. https://ibookstein.github.io/posts/2024-04-05-home-network/
SFP is the modern iteration of AUI/MAU from the coax Ethernet years. Even twisted-pair sockets that appear to be tightly integrated are specified with a medium-independent interface between the MAC and PHY implementations.
Eval away!
The documentation has a thorough introduction. The BSL license mentioned in the project is the Boost Software License and not the Business Source License.
Always looking for projects like this - but just focused on music and streaming.
From Kyoo’s README:
Maybe navidrome is the complement you are looking for. Saw it the other day in a homelab description.
Fixed link: https://www.navidrome.org/
Nice to see a new tool like this around, but if the pythonic syntax is all it offers, I’m saddened. Though if they wanted to infect the industry, maybe they should have chosen JavaScript?
How can this be so fast? Does this apply to every change? Sometimes in LaTeX, adding text will move images around, which could potentially take longer.
The main trick is to keep a fork of TeX close to where the changes are made. I described the heuristic with some more details here: https://www.reddit.com/r/LaTeX/comments/1bmjc0r/comment/kwe8u4p/.
This applies to almost every change: there is a small buffer to accumulate changes if many come in a short amount of time (which can happen often with the editing tricks permitted by Vim/Emacs). It is a throughput/latency trade-off; the goal is to remain fairly interactive (<100ms reaction time, 20-30ms ideally).
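A minimal reconstruction of such a buffer in Python (my sketch, not the tool’s actual code): flush a batch when input goes quiet for ~30ms, or when the ~100ms ceiling is reached, whichever comes first.

    import queue
    import time

    QUIET_S = 0.03    # flush once no new edit arrives for ~30 ms
    CEILING_S = 0.10  # but never delay a flush beyond ~100 ms

    def batches(edits: queue.Queue):
        # Accumulate edits into one batch per render, trading a little
        # latency for throughput when keystrokes arrive in bursts.
        while True:
            batch = [edits.get()]              # block until the first edit
            deadline = time.monotonic() + CEILING_S
            while time.monotonic() < deadline:
                try:
                    batch.append(edits.get(timeout=QUIET_S))
                except queue.Empty:            # went quiet: flush early
                    break
            yield batch                        # re-render once per batch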
Images that move are not too much of a problem, but it is indeed possible to run arbitrarily long computations in TeX, and that will of course limit the responsiveness. In my experience, it works quite well in most situations, even with TikZ graphs (a limitation is that there is no Lua scripting and shell-execution is disabled, which rules out some advanced use cases).
The overall architecture: a driver program that talks to the editor to be notified of changes to the LaTeX document, maintains an incremental view of the document and the rendering process (supporting incrementality, rollback, error recovery, etc.), talks to the LaTeX engine to re-render the modified portions of the document, and synchronizes with the viewer.
This is really clever!
Automatic scraping of GitHub as I visit, to search later. Might have to add some more websites now, though.
Hmm, given that I mostly read on the web in the «save first (for regularly visited sites — automatically and in the background), convert to text next, read in a text editor» mode, and GitHub has recently become way more JS-dependent, any details on the setup that you are willing to share?
There are two parts. A browser extension spawns a process that receives JSON blobs and stores them in a SQLite database; the same extension injects scripts into interesting pages. The metadata is scraped from the page by the injected script, but it could also be sourced using GitHub’s API. On the SQLite side I just make a view onto the JSON blobs and then throw everything into FTS “regularly”. The same process that receives the JSON blobs also serves an HTTP search page. The messaging is taken care of by native messaging.
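Under those assumptions, the SQLite side could look roughly like this (schema and field names are mine, guessed from the description): raw JSON blobs in one table, a view over json_extract, and an FTS5 index rebuilt from the view.

    import sqlite3

    db = sqlite3.connect("pages.db")
    db.executescript("""
        CREATE TABLE IF NOT EXISTS blobs (id INTEGER PRIMARY KEY, doc TEXT);

        -- A view that pulls the interesting fields out of the raw JSON.
        CREATE VIEW IF NOT EXISTS repos AS
            SELECT id,
                   json_extract(doc, '$.name')        AS name,
                   json_extract(doc, '$.description') AS description
            FROM blobs;

        CREATE VIRTUAL TABLE IF NOT EXISTS repos_fts USING fts5(name, description);
    """)

    # "Regularly": rebuild the full-text index from the view.
    db.execute("DELETE FROM repos_fts")
    db.execute("INSERT INTO repos_fts SELECT name, description FROM repos")
    db.commit()

    hits = db.execute(
        "SELECT name FROM repos_fts WHERE repos_fts MATCH ?", ("parser",)
    ).fetchall()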
For anybody interested in deductive verifiers for Rust an alternative is Prusti.