When I use my editor KeenWrite to edit R Markdown documents, I keep the data in a machine-readable CSV file and import it using an R function called csv2md:
https://youtu.be/XSbTF3E5p7Q?list=PLB-WIt1cZYLm1MMx2FBG9KWzPIoWZMKu_&t=184
This allows for dynamic documents that pull data in from a single source of truth. As a side benefit, I don’t have to concern myself with formatting Markdown tables.
I’m glad you’ve found a workflow that really works for you, but I’m confused: it seems like you’re citing the advantages of switching to dwm as much as the advantages of giving up your external monitor.
Also, your point about being a designer and trying to mimic the setup most of your users have is very valid, but I’m also concerned that you’re voluntarily opting in to a kind of tunnel vision with regard to configuration that could hurt the accessibility of your designs.
I’m blind in one eye, low vision in the other with fine & gross motor impairment and difficulty crossing the midline. I have to work REALLY hard to find an environment where I can get work done on a laptop. A nice large screen is much easier for me.
Will your designs function properly on my large screen with my cartoonishly huge fonts? :)
I have a 43” monitor for work. This is mostly because I have a tendency to hunch over my desk. With a display that big, I set the text size to something nice and large and have to sit back to have everything in my field of view.
I went with the ASUS ROG Strix 43” VA panel (https://www.amazon.com/dp/B099PJ8SPT) because all the other 43” IPS panels I tried had ghosting when displaying light window panels on a dark background (including Dell P4317Q and LG 43UN700-B 43).
What make/model do you use?
I actually have two for different offices, both Dell but different models (bought three years apart). They have some ghosting around the edges but they’re so big I don’t usually put anything I care about right on the edge of the screen. I’d probably care more if I were the one paying for them, but they both render text in terminals well and that’s 90% of what I care about in a work monitor.
Great to hear alternate perspectives and about the difficulties users face that are hard to “reproduce”. I find in my work that the larger displays/resolutions tend to be the most “optimized” since I work within a design team (who all mostly have cutting-edge displays). Most times I feel as though low-res, smaller devices are an afterthought. Throughout my career I’ve heard a lot of “just squish and stack everything down on smaller screens…”
Supposing that a language model ever becomes smart enough to be genuinely terrifying, one imagines it must surely also become smart enough to prove deep theorems that we can’t. Maybe it proves P≠NP and the Riemann Hypothesis as easily as ChatGPT generates poems about Bubblesort. Or it outputs the true quantum theory of gravity, explains what preceded the Big Bang and how to build closed timelike curves. Or illuminates the mysteries of consciousness and quantum measurement and why there’s anything at all. Be honest, wouldn’t you like to find out?
This is the type of thing that GPT cannot do. For an AI to do these tasks it would need to be much more than just a language model.
I see GPT as an incredibly powerful search engine that is capable of composing a response rather than just giving you a list of links.
When it comes to AI safety, my belief is not so much that an AI will go foom, break out of the box, or create a grey goo nanomachine scenario. My worry is that it will influence large numbers of people to cause harm. This is a much lower-hanging ‘fruit’ in terms of AI disasters because the requirements on the intelligence of the AI are much lower.
The premise of the Terminator franchise is a lot more plausible without the SkyNet-became-self-aware step. An LLM wandering into a part of its prediction space where it launches all of the nuclear weapons, because the input data happened to correlate with something unexpected, seems entirely plausible.
AI safety to me is “what do we do when an AI-led competitor to Qanon eventually shows up?” and nobody seems to have an answer for this. “Turn it off”, lol.
I disagree: I think this is the type of thing that GPT can do. The vast difference in difficulty simply makes it seem like a higher class of task, but I think that’s mistaken. Theorem proving in particular “just” consists of repeated application of formal rules. The difference is in choosing which rules to apply, which seems to require both strategy and a particular sense of elegance. I see no reason that a language model could not acquire this sense.
I don’t think this is an accurate representation of what proving is like. While you can reduce it to “just” repeated application of formal rules, even the simplest possible systems, which can prove almost nothing, like Presburger Arithmetic, are 2-EXPTIME. PA is just quantified boolean logic plus addition (but not multiplication) of natural numbers. You can automate the proof or disproof of any statement of length N representable in PA… but the worst such statements will (provably!) take on the order of 2^(2^N) steps. And that’s an incredibly simple system!
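For reference, the doubly exponential bound alluded to here is presumably the Fischer–Rabin lower bound for Presburger arithmetic; stated loosely, there is a constant c > 0 such that every decision procedure A needs
\mathrm{time}_A(N) \ge 2^{2^{cN}}
steps on infinitely many statements of length N.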
Mathematicians primarily work by stepping outside the “formal rules”: finding new grammar or language to express complex ideas more compactly, but also consistently. In other words, there’s an infinite number of possible rules, and it’s a matter of finding the right rules and showing they’re right. That’s an intensely creative and messy process.
For a lot of proofs, the key thing that a human is doing is providing a hint about the path through the search space. For a long time, theorem provers couldn’t go past an EFQ step because there are an infinite number of possible steps immediately after it and only a handful lead to a proof. Then they gained heuristics that let them make this jump in a few cases.
Anything where the current state of the art is ‘we have some heuristics but they’re not great’ is likely to see big wins from ML.
Theorem proving in particular “just” consists of repeated application of formal rules.
Right in a computability sense, but it is a bit more complex complexity-wise. This is like saying “all NP-complete problems are just computable”. To prove things in a reasonable time, you need to be able to define things (concept formation). It is known that avoiding a definition can increase the proof length super-exponentially.
To solve difficult problems you need to “think” before you speak.
GPT’s ability to think can be augmented by asking it to explain its steps. This is a clever hack that makes use of the way a language model uses prior context to complete further lines. My belief is that this is not going to scale up to solve difficult computational problems that involve modelling, planning, and calculational brute force via tree search. We would need to integrate things like AlphaZero, but also other things that haven’t been invented yet.
A multimodal model should be able to “think” in more modes. Imagine a ChatGPT version that can interrupt its Chain of Thought to insert a sketch generated from its previous prompt, then condition on that sketch when continuing.
(I was worried about posting this, then I remembered that if I thought of this within five minutes, OpenAI/Deepmind have definitely thought of it.)
Sure, but such capability has not been demonstrated yet, at least not convincingly.
FeepingCreature (and many others) seems to view proof search as similar to game tree search, such that AlphaZero-like methods can work. In fact, AlphaZero-like methods do work, for short proofs. But proofs, when definitions are substituted, can be arbitrarily long, and long proofs in practice are very unlike searching for a sequence of moves. It is more like searching for a program that, when executed, generates the sequence of moves (and a short program can generate a very long sequence).
This is best answered with reference to the transformer architecture[1].
The input is tokenized and the tokens are put through an ‘encoder’ block, which has a self-attention layer and a feed-forward neural network, before being given to the decoder path.
This encoder network is very large, which enables the system to produce such high-quality output. But there is fundamentally no way for this to perform tree search. It isn’t a capability that it can learn.
I don’t think tree search is absolutely necessary. Remember that AlphaGo Zero could beat most amateurs without any tree search (simply playing a move with the highest win probability).
Rather, I consider the characterization of proof as “repeated application of formal rules” to be incorrect. Yes, proof is that, but interesting proofs written that way are 2^2^n tokens or more long. In practice, proofs can’t be generated in expanded form and need to be generated compressed, and compression is where the difficulty is, not the choice of which rules of inference to apply.
It’s true. People are already writing content to draw other people into their own epistemically closed communities (somewhat like a cult), and those communities seem to be closely affiliated with mass shootings in the US and related phenomena like Islamic State.
ChatGPT vastly improves the productivity of producing such propaganda content if you don’t care about its accuracy or verifiability.
not so much that an AI will … create a grey goo nanomachine scenario
Kurzweil also supports the idea of grey goo being a non-issue.
https://www.kurzweilai.net/the-gray-goo-problem
The smallest plausible biovorous nanoreplicator has a molecular weight of ~1 gigadalton and a minimum replication time of perhaps ~100 seconds, in theory permitting global ecophagy to be completed in as few as ~10^4 seconds. However, such rapid replication creates an immediately detectable thermal signature enabling effective defensive policing instrumentalities to be promptly deployed before significant damage to the ecology can occur.
However, such rapid replication creates an immediately detectable thermal signature enabling effective defensive policing instrumentalities to be promptly deployed before significant damage to the ecology can occur.
around 2h45m
As an estimate of how long it would take the world to respond effectively to a novel threat, that seems extremely optimistic.
I wouldn’t expect humanity to be much better at responding to “nanoreplicators” than to global warming, with at least one major reason likely being the same — the growth of the existence and severity of the threat outpacing the growth of consensus among decision-makers about its existence and severity — and maybe other shared reasons, such as the problem technology’s becoming thoroughly embedded into national economies and individuals’ standards of living.
I just wanted to clarify the obvious mispaste of the time.
I find Kurzweil’s argument specious. He seems to argue for both the inevitability of self-replicating “biovorous replicators” and a global surveillance network that can scan for their “breakout” and respond to them. I mean, let’s say they “only” manage to consume the entire Amazon basin’s ecology before we manage to stop them. Problem solved, right?
I just wanted to clarify the obvious mispaste of the time.
Yes, my criticism was of the quoted passage from Kurzweil, not of your clarification; I apologize for not making that clearer.
Thanks! Sorry for my snippy tone. This entire sideshow (OMG! ChatGPT is soon sentient) is grinding my gears at the moment. But lobste.rs is not the place to vent about that.
https://cs.nyu.edu/~davise/papers/WinogradSchemas/WS.html
Can a machine create a Winograd Schema yet?
When run against ChatGPT, it answered with, “A dog chased a cat up a tree and a cat came down.”
I just tried a few from here and it got them all correct, including adding correct logical explanations. Are you saying it’s memorized these as part of the training data?
EDIT: I tried a handful more that I invented, and it got those correct too.
Are you saying it’s memorized these as part of the training data?
That sounds perfectly reasonable to me.
If you feed ChatGPT novel Winograd statements, it only gets them right by chance.
Does it not get them right from statistical inference?
If the word “shatter” is associated with “glass” more often than “iron” (throughout its training corpus), then doesn’t it follow that it would correctly “guess” that when a glass ball falls on an iron table then the glass ball shatters, and not the iron table?
I like asciidoc a lot, but the PDF rendering makes me long for LaTeX. Actually, LaTeX is the only FOSS software I’ve found that creates good-looking PDFs. But it can’t really do HTML (I know you can make it work), and it’s super verbose. I can’t help but feel that both
<ul>
<li>Foo</li>
<li>Bar</li>
</ul>
and
\begin{itemize}
\item Foo
\item Bar
\end{itemize}
were the things that markdown was created in reaction to.
SILE produces beautiful PDF output (better than TeX) and has pluggable front ends. If an AsciiDoc tool chain can generate XML, then it may already be able to produce something that SILE can consume.
AsciiDoc is meant as a simplified representation of the DocBook format, so any AsciiDoc document should have a canonical XML representation.
If I remember correctly, SILE can consume DocBook XML, so that seems like a nice path to generating pretty PDFs. If anyone makes that work, please post a link on lobste.rs: I’d probably end up using that flow for a bunch of things.
It appears that’s a pretty easy output format. I can’t say I’ve used it, but I’ve always found asciidoc a joy to write in, and its PDF output has generally been good enough for me. https://docs.asciidoctor.org/asciidoctor/latest/docbook-backend/#convert-docbook-to-pdf
<ul>
<li>Foo</li>
<li>Bar</li>
</ul>
But what if you did it this way?
<ul><li>Foo
</li><li>Bar
</li></ul>
Can’t un-see 😉
I’ll take that as a compliment 🤣
I actually use this ^ form because it’s much easier for cut & paste, re-ordering, and so on.
This is why I favour Python enforced indentation.
Because there is always one person, with one weird style, and they’ll have a defence for it…
Congrats. Today, it’s you.
I don’t format HTML files (etc.) like that.
But when I’m dealing with an HTML list in (e.g.) a JavaDoc comment, then sometimes I will use an approach like this, because it dramatically helps the readability of the (e.g.) Java file. This is important because that file’s text is going to be read and used by someone who is not currently in “HTML mode”. Having to parse HTML in one’s head – just to read a comment – is a significant attack on the reader’s senses. It’s also one of the reasons why we made the early decision to use markdown in Ecstasy comments, instead of using raw HTML.
At any rate: Context matters.
–
There’s always one person wanting to make themselves feel better by lobbing insults, instead of just taking the time to think through ideas that are foreign to them and from which they could (occasionally) gain the advantage of a new point of view.
Congrats. Today, it’s you.
(Don’t feel too badly; it’s usually me doing the exact same thing 😢 despite being committed to stopping this habit.)
It really wasn’t meant to be an insult!
You gave a good, legit reason for that style. I accept it. It’s a valid point. However, it is, shall we say, quite unlike more conventional layout styles. I’m sure you’d agree, or else why post it?
And that’s the thing. Everyone’s style has good reasons for it for them. But people want different things.
At my last job, we had a fixed style for XML and a tool to reformat code like that, but my preferred editor occasionally reformatted it all itself, on a whim. In general it was a massive PITA.
So I think the only solution is to either enforce it by making it significant, so everyone must do it the same or break it, or have the tools do it automatically for you. Or both, of course.
Thanks for the thoughtful response. I’m a stickler for keeping things clean, because I know how expensive it is to have random (even competing) standards being arbitrarily enforced by different members of a team.
I get it and I agree.
I suppose one could also make an argument in favour of cleaner, simpler formats for that reason.
When oXygen reformatted my XML and I didn’t notice and pushed it, the diffs were unreadable. It’s hard to back that out, partly because I’m not a programmer, I’m a writer, and to me, Git is basically black magic.
Xkcd applies: https://xkcd.com/1597/
Because, in part, XML is complex, and thus indenting it is complex, so you need small diffs for them to be readable at all.
Whereas in ADoc, indentation is only significant in a few places, like code blocks, so you avoid it elsewhere; the result is much less noisy, which makes it much more readable.
I am actually curious, what are my choices of layout engines if I want to produce PDF documents?
That is, my understanding is that PDF as a format comes with layout precomputed (input has elements with absolute positions). What are my choices for something which I can feed into a layout engine and get that to compute absolute positions. I know of the two:
Are there other notable things in this space?
https://www.princexml.com/ is another interesting option in this space.
Note that the only non-hack HTML rendering is mandoc, and that it only consumes -man or -mdoc input. The traditional -mm (“memorandum”), -ms, and -me packages will look nice with groff (the Linux default), but groff can’t really do HTML.
groff can do HTML OK (grohtml(1)). Also don’t forget that man and mdoc are for documentation, and that all roff implementations are good at producing PDFs (the question).
There are a few options, none of them cheap, because it’s a problem a lot of businesses have, and requires a fair bit of work many programmers aren’t really qualified to do.
There is of course Prawn, which is what asciidoctor-pdf uses under the hood. But it is only OK for layout and has mediocre (worse than browsers) typography. On the other hand, you can use it with asciidoc…
There are also some docbook (and hence asciidoc) stylesheets to convert to PDF, but I was never impressed with the quality.
On the other hand, you can use it with asciidoc…
In practice, what I’ve found works best for me is to make AsciiDoctor output html, which I then style and print to pdf via browser.
If you’re using TeX to produce PDFs for anything except academia (and even then), I’d go with ConTeXt (here are some example documents). It’s about 10 years younger than LaTeX and feels, oh, about half a century more modern, probably because it was and still is developed by someone who is an actual publisher. Some simple things it can do that are hard in LaTeX:
\section[color=red]{My important section}
\definehead[rubric][section][color=red] and write \rubric{On the naming of cats}
It can also handle more advanced stuff like fonts and font features; typesetting on a grid; custom backgrounds or overlays for pages, paragraphs, or words; shaped paragraphs; and writing some or all of your command logic or document with Lua.
My text editor, KeenWrite, transforms Markdown to XHTML then passes that XML document to ConTeXt for typesetting against various themes. I made a few tutorials that demonstrate many of KeenWrite’s features:
https://www.youtube.com/watch?v=8dCui_hHK6U&list=PLB-WIt1cZYLm1MMx2FBG9KWzPIoWZMKu_
The text editor I’ve been working on for several years is able to convert Markdown to PDF. The engine transforms Markdown into XHTML then uses ConTeXt to typeset the XHTML document against a “theme”. One reason I wrote KeenWrite was to give me the ability to use variables while writing:
https://github.com/DaveJarvis/keenwrite/blob/main/docs/screenshots.md
I have a series of videos coming out soon that demonstrates how it works.
KeenWrite, the FOSS Markdown text editor I’ve been working on, includes the ability to render plain text diagrams via Kroki†. See the screenshots for examples. Here’s a sample Markdown document that was typeset using ConTeXt (and an early version of the Solare theme).
One reason I developed KeenWrite was to use variables inside of plain text diagrams. In the genealogy diagram, when any character name (that’s within the diagram) is updated, the diagram regenerates automatically. (The variables are defined in an external YAML file, allowing for integration with build pipelines.)
Version 3.x containerizes the typesetting system, which greatly simplifies the installation steps needed to typeset Markdown into PDF files. It also opens the door to moving Kroki into the container so that diagram descriptions aren’t pushed over the Internet to be rendered.
†Kroki, ergo KeenWrite, supports BlockDiag (BlockDiag, SeqDiag, ActDiag, NwDiag, PacketDiag, RackDiag), BPMN, Bytefield, C4 (with PlantUML), Ditaa, Erd, Excalidraw, GraphViz, Nomnoml, Pikchr, PlantUML, Structurizr, SvgBob, UMLet, Vega, Vega-Lite, and WaveDrom.
Note that Mermaid diagrams generate non-conforming SVG, so they don’t render outside of web browsers. There is work being done to address this problem.
Teaching people how to use software. It’s similar to:
I developed it to produce tutorials for KeenWrite, my open-source, cross-platform desktop Markdown editor.
I think that integrating everything into the editor to make it modern is debatable. Terminal users already have a terminal emulator and several file pickers available (like mc or ranger). Working remotely, e.g. by using ssh and mosh, is available too. Utilities like lazygit are also available.
An editor should not need to re-implement file picking, a window system, a way to work remotely and a terminal emulator to be modern.
tl;dr UNIX already is an IDE.
UNIX ain’t modern, it’s seventies tech. Besides, the modern thing is to ignore UNIX and put everything in one application. Zawinski’s law and all…
I mean, I live inside Emacs most of the time too when I’m working, and it does offer some conveniences like a sibling comment points out. But I’m not sure I’d call Emacs exactly “modern”, either…
UNIX is both old and modern at the same time. The UNIX philosophy is old, but still relevant.
Linux distros like Arch Linux have a wide selection of up-to-date and well-maintained packages. The Linux kernel might be based on old ideas, but I would argue that it is no longer seventies tech. For distros like Arch or Fedora, users can choose to install only modern packages, if they want.
macOS, a proper UNIX, is a modern OS with modern software available.
UNIX is modern.
+1. See also https://blog.sanctum.geek.nz/unix-as-ide-introduction/ and the subsequent parts in the series for a good read about this
I’m not saying putting everything into the editor makes it modern, just that it makes it easier to integrate different things. I was a long-time neovim user and didn’t even bother touching the integrated terminal or git plugins like fugitive, but rather used tmux to combine everything. That said, if you have your git thingy in your editor, the flow of jumping to the file you are browsing in the git thingy, enabling blame, and going to the function under the cursor is easy. And not just navigating: you can control the window positioning. You can technically do this with tmux + lazygit/tig + neovim, but it will be much harder to make them work together.
Nebulous arguments about what is and isn’t “modern” aside, this seems to me to be the difference between a well-integrated workshop and a pile of tools. I like neat, isolated tools as much as the next digital aesthetician (daily tmux user) but there’s a reason that taking on the mental load of a greater pile of tools is unpopular and people want their development environment to just do useful things for them without reading a few dozen man pages and a handful of stack overflow posts. I like the way that Fish shell goes with this for example. It does a decent job of not taking on far too many features where it affects my own experience and I don’t have to worry about mile long dotfiles to get all the useful features. Would be nice to see more tools like that which aren’t stuck in the old ways.
I think that integrating everything into the editor to make it modern is debatable.
I don’t want to get into what’s modern and what’s not but one thing that’s useful to remember about “integrating everything into the editor” is that it’s cultural baggage. (I say that with all the sympathy in the world, I use Emacs).
In Emacs, which is what the post seems to lean towards, it’s because of its ties with Lisp, the MIT environment, and, in the case of GNU Emacs, because of its drive towards portability. Emacs is very much a Lisp environment that runs an editor as its persistent interface. This is foreign to us but it was a popular way to make interfaces back then: dired, for example, which we now know as a major mode in Emacs, was once a standalone program with a deliberately editor-like interface (you can read some things about it here: https://www.saildart.org/DIRED.SGK%5BUP,DOC%5D1 ). Many of Emacs’ popular packages exist because people wanted to integrate it better with the underlying system, or they missed things from other systems – that’s how we wound up with dired or eshell, for example.
In many editors that emerged from the DOS days, lots of functions, including window management, are integrated because the underlying system had poor multi-tasking and program integration capabilities. It wasn’t uncommon for DOS editors to integrate things like calculators and whatnot because, even with TSRs, it was rather unwieldy. System constraints (including the quality of window management and multitasking) in early Windows and OS/2 versions were among the factors that contributed to the development of MDIs.
More recently, the fact that lots of editors based on web stacks wind up integrating their own window management system is an artefact of the technological environment they’re in. Limited integration with the underlying system, or unreliable integration across the systems they want to support, means they have to redo some things themselves, higher up the stack.
I can’t point at specific examples (editor internals isn’t something I’ve been terribly preoccupied with in 10+ years) but I wouldn’t be surprised if integrating some functions were simply a way to avoid dependency churn, too. If a component is internal, you have control over visual and/or API changes, and over the schedule of integration efforts. I am personally guilty of having reinvented wheels for this exact reason over the years.
I think it’s important for developers and designers not to get high on our own supply. Lots of things we make are made the way they are not because of some stroke of brilliant genius but as a consequence of external design constraints, or as workarounds for shortcomings further down the tech stack, or because it’s just the only way they can be made at the time, not necessarily the best. That’s how things like this goddamn terminal nonsense have been perpetuated, constraining shell interfaces to 1970s capabilities, even in editors, which likely don’t even need the full backwards compatibility baggage.
An editor should not need to re-implement file picking
A fun aside: my text editor, KeenWrite, had a pure JavaFX file picker, which is a re-implementation of the native file picker. The reason? On Windows, there was a strange interaction between loss of focus and returning to the main application window. Fortunately, that bug was squashed in JavaFX 19, meaning the native file picker is back.
On a related note, native file pickers can’t (easily) be integrated as tabbed panels in a JavaFX desktop application because they are dialogs. This leads to either re-implementing the native file picker, or integrating two different file pickers: native and JavaFX-based. Here’s a screenshot of the dockable file picker integrated into KeenWrite, showing the feature that needs a “re-implemented file picker”:
Here’s my obligatory mention of the acme editor, which acknowledged this. It’s been called more of an integrating development environment, instead of an integrated one.
For example, to sort the output of ls: put |sort in your bar, select the text you want to sort, and middle click |sort. Your text is now sorted. I think it lacks some things that are table stakes for a quality development experience nowadays – but it exemplified the unix philosophy better than any other unix editor.
I rolled my own SSG for my blog because I couldn’t find any that minified fonts (by removing unused glyphs). The generator is a bash script that uses pandoc to convert Markdown into HTML, although it could use my KeenWrite software instead. There are two scripts:
Logging is a form of code duplication. Code duplication is a well-known code smell. Tightly coupling systems to particular logging implementations by invoking the logger liberally throughout an application makes replacing the logger difficult in practice. Ideally, switching logger implementations would take no more than revising a single source file and creating a new configuration for the replacement library.
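To make the “single source file” claim concrete, here is a minimal sketch (not from the article) of such a facade, assuming SLF4J happens to be the underlying library today; the class and method names are illustrative:

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// The rest of the code base calls only Log.info(...) and friends, so swapping
// the underlying logging library means editing this one file (plus configuration).
public final class Log {
    private static final Logger DELEGATE = LoggerFactory.getLogger( "application" );

    private Log() { }

    public static void info( final String message ) {
        DELEGATE.info( message );
    }

    public static void error( final String message, final Throwable cause ) {
        DELEGATE.error( message, cause );
    }
}

Call sites read Log.info( "Application starting" ); and never name the concrete library, so replacing it touches only this file and its configuration.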
See also:
Why is it desirable to switch logging libraries easily?
Perhaps the deeper question is: Why is loose coupling desirable? Tight coupling reduces flexibility and re-usability of code, making changes more difficult. Changing one object in a tightly-coupled application often requires changes to other objects, which increases maintenance costs.
To your question, what if there was a remotely exploitable bug that was discovered in the logger, and the product was about to ship? It’s easier to change 10 lines of code than 1,000.
Let’s step back. What is logging? According to Wikipedia, logging assists users and developers by recording events. What I advocate for is making those events … events!
Consider the following example:
final Logger logger = new Logger();
logger.log( "Application starting" );
// ... later ...
logger.log( "Application terminating" );
Instead, for virtually the same amount of effort, we could write:
ApplicationStartedEvent.publish();
// ... later ...
ApplicationStoppedEvent.publish();
An event-based model offers the following benefits:
Further, events can occur on a separate thread and it’s trivial to change the log message to include a timestamp for when the event was triggered (as opposed to when the event was logged).
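To make the event idea concrete, here is one minimal sketch of what such an event class might look like. This is not code from the article; it assumes a simple in-process listener registry, and all names are invented:

import java.time.Instant;
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.function.Consumer;

public final class ApplicationStartedEvent {
    // Listeners register once, typically at startup; one of them may write a log entry.
    private static final List<Consumer<ApplicationStartedEvent>> LISTENERS =
        new CopyOnWriteArrayList<>();

    // Captured when the event is triggered, not when a listener eventually logs it.
    private final Instant occurredAt = Instant.now();

    public static void subscribe( final Consumer<ApplicationStartedEvent> listener ) {
        LISTENERS.add( listener );
    }

    public static void publish() {
        final ApplicationStartedEvent event = new ApplicationStartedEvent();
        LISTENERS.forEach( listener -> listener.accept( event ) );
    }

    public Instant occurredAt() {
        return occurredAt;
    }
}

A logging listener is then just one subscriber among many, and the timestamp reflects when the event was triggered rather than when it was written to a log.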
Why is it not desirable to isolate logging to a single location in the code base?
Perhaps the deeper question is: Why is loose coupling desirable?
Spending time and cognitive energy to decouple things for no reason is a bad trade. Loose coupling makes things harder to understand. It’s still useful in many situations, but you shouldn’t treat it as an unqualified good, because it’s not.
Why is it not desirable to isolate logging to a single location in the code base?
Spending effort/code on changes that you will never make in reality is generally not desirable. It’s like abstracting away your database. Why? Are you really going to ever switch databases?
To your question, what if there was a remotely exploitable bug that was discovered in the logger, and the product was about to ship? It’s easier to change 10 lines of code than 1,000.
No one resolved this by switching logging libraries. They patched log4j and upgraded.
Since the last post four months ago, KeenWrite has had the following changes: a plot() function (see screenshot).
added a freeaddrinfo() function to go with the new getaddrinfo(). This is the only particularly good solution
The open/close metaphor is easy to use and remember; we’re practically trained from birth to close what we’ve opened. From https://github.com/DaveJarvis/mandelbrot/blob/master/main.c :
It’s easy to make sure that the opens and closes are both balanced and inverted.
I’ve written a bit about the history behind 80-column text, which can be traced back to 1725:
https://dave.autonoma.ca/blog/2019/06/06/web-of-knowledge/
At some point I may write another post on the topic of 80-column text. In a nutshell, here are some additional considerations, a few have been mentioned by others:
While few studies focus on line length with respect to source code comprehension, numerous studies and typographers discuss line length in printed text. Like printed text, line length on digital displays affects reading speed and comprehension. Long lines slow the return sweep to the start of the next line, whereas short lines require more vertical scrolling or paging. Research suggests using shorter lines for accuracy.
There’s a logical fallacy and a bit of privilege behind thinking that larger displays mean longer lines can be the norm. Once 86” displays (or projections) proliferate, 720-character lines probably qualify as reductio ad absurdum. Even though wealthy developers in the United States can afford bigger screens, less wealthy developers are left with less screen real estate.
This is good timing - I’m working on my thesis right now. It’s ridiculous how little of Vimtex I’ve been using! It turns out almost all of the manual LaTeX work I’ve been performing can be done via the plugin; I got forward and backward search practically for free. The only thing is, the “real time” claim is a bit of an overstatement - for documents like mine, there’s a 2-3 second delay between saving and re-rendering. I think that’s a limitation of LaTeX though, and one I’ll have to live with.
I think that’s a limitation of LaTeX though, and one I’ll have to live with.
KeenWrite, my plain text editor, can preview 1,000 simple TeX formulas in about 500 milliseconds on modern hardware. The Knuth-Plass Line Breaking Algorithm, which drives most TeX implementations, slows down rendering. KeenWrite uses an HTML previewer and inlines the math as SVG so that the math pops up instantly:
https://github.com/DaveJarvis/keenwrite/blob/master/docs/screenshots.md#equations
KeenWrite typesets to PDF using ConTeXt, rather than LaTeX, because ConTeXt makes separating content from presentation much easier. The idea is to focus on the content, then muck with the formatting and layout when the content is finished.
https://wiki.contextgarden.net/Main_Page
Here’s an example of a Markdown document I wrote along with its bibliography. Both were typeset using ConTeXt:
KeenWrite is my text editor that takes a slightly different approach than MDX. Rather than include variable definitions within documents, variables are defined in an external file. I find that when variables are embedded into documents, those variables often include controls for presentation logic. To me, any presentation logic meant to affect a plain text document’s appearance does not belong in the document itself. Part 8 of my Typesetting Markdown series shows the power of separating content from presentation by leveraging pandoc’s annotation syntax.
Annotated Markdown is sufficiently powerful to produce a wide variety of different styles. Here are a few such Markdown documents typeset using ConTeXt:
What’s bothersome is how some companies are setting de facto Markdown standards without considering the greater ecosystem. GitHub has done this by introducing the “``` mermaid” syntax, which creates some problems.
The PersonnelRecord isn’t OOP and exhibits a widespread misunderstanding:
class PersonnelRecord {
public:
char* employeeName() const;
int employeeSocialSecurityNumber() const;
char* employeeDepartment() const;
protected:
char name[100];
int socialSecurityNumber;
char department[10];
float salary;
};
As written, the PersonnelRecord class will inevitably lead to code duplication, tightly coupled classes, and other maintainability issues. An improvement that’s still not OOP, but exposes a more flexible contract, resembles:
class Employee {
public:
Name name() const;
SocialSecurityNumber socialSecurityNumber() const;
Department department() const;
private:
Name name;
SocialSecurityNumber socialSecurityNumber;
Department department;
Salary salary;
};
OOP is more about the actionable messages that objects understand to carry out tasks on behalf of other objects. Wrapping immutable data exposed via accessors reaps few benefits. Rather, OOP strives to model behaviours that relate to the problem domain:
class Employee {
public:
void hire();
void fire();
void kill();
void raise( float percentage );
void promote( Position position );
void transfer( Department department );
private:
Name name;
SocialSecurityNumber socialSecurityNumber;
Department department;
Salary salary;
};
This allows for writing the following code:
employee.transfer( department );
I don’t know how to “transfer” an employee given the code from the article, but it would not be nearly as elegant.
Re: https://github.com/github/markup/issues/533
I’m the main author of KeenWrite (see screenshots), a type of desktop Markdown editor that supports diagrams. It’s encouraging to see that Mermaid diagrams are being supported in GitHub. There are a few drawbacks on the syntax and implications of using MermaidJS.
First, only browser-based SVG renderers can correctly parse Mermaid diagrams. I’ve tested Apache Batik, svgSalamander, resvg, rsvg-convert, svglib, CairoSVG, ConTeXt, and QtSVG. See issue 2485. This implies that typesetting Mermaid diagrams is not currently possible. In effect, by including Mermaid diagrams, many documents will be restricted to web-based output, excluding the possibility of producing PDF documents based on GitHub markdown documents (for the foreseeable future).
Second, there are numerous text-to-diagram facilities available beyond Mermaid. The server at https://kroki.io/ supports Mermaid, PlantUML, Graphviz, byte fields, and many more. While including MermaidJS is a great step forward, supporting Kroki diagrams would allow a much greater variety. (Most diagrams produced in MermaidJS can also be crafted in Graphviz, albeit with less terse syntax.)
Third, see the CommonMark discussion thread referring to a syntax for diagrams. It’s unfortunate that a standard “namespace” concept was not proposed.
Fourth, KeenWrite integrates Kroki. To do so, it uses a variation on the syntax:
``` diagram-mermaid
```
``` diagram-graphviz
```
``` diagram-plantuml
```
The diagram- prefix tells KeenWrite that the content is a diagram. The prefix is necessary to allow using any diagram supported by a Kroki server without having to hard-code the supported diagram types within KeenWrite. Otherwise, there is no simple way to allow a user to mark up a code block with their own text style that may coincide with an existing diagram type name. (A rough sketch of how such a prefix could be dispatched to a Kroki server appears at the end of this comment.)
Fifth, if ever someone wants to invent a programming language named Mermaid (see MeLa), then it precludes the possibility of using the following de facto syntax highlighting:
``` mermaid
```
My feature request is to add support for Kroki and the diagram- prefix syntax. That is:
``` diagram-mermaid
```
And deprecate the following syntax:
``` mermaid
```
And, later, introduce the language- prefix for defining code blocks that highlight syntax. That is, further deprecate:
``` java
```
With the following:
``` language-java
```
That would provide a “namespace” of sorts to avoid naming conflicts in the future.
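As mentioned above, here is a rough sketch of how a diagram- prefix could be dispatched to a Kroki server. This is not KeenWrite’s actual code; it assumes Kroki’s plain-text POST endpoint that returns SVG, and all names are illustrative:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public final class DiagramBlocks {
    private static final String PREFIX = "diagram-";

    // Given a fenced block's info string (e.g., "diagram-mermaid") and its body,
    // return rendered SVG from a Kroki server, or null if it isn't a diagram block.
    static String renderIfDiagram( final String language, final String body )
        throws Exception {
        if( !language.startsWith( PREFIX ) ) {
            return null;
        }

        final String type = language.substring( PREFIX.length() );
        final HttpRequest request = HttpRequest
            .newBuilder( URI.create( "https://kroki.io/" + type + "/svg" ) )
            .POST( HttpRequest.BodyPublishers.ofString( body ) )
            .build();

        return HttpClient.newHttpClient()
            .send( request, HttpResponse.BodyHandlers.ofString() )
            .body();
    }

    public static void main( final String[] args ) throws Exception {
        System.out.println( renderIfDiagram( "diagram-graphviz", "digraph { a -> b }" ) );
    }
}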
I don’t think moving the existing stuff to language- is necessary; however, I agree that diagram-mermaid is a better option – especially if one wants syntax highlighting for the syntax of the Mermaid diagramming language, to describe how to write such diagrams.
First, only browser-based SVG renderers can correctly parse Mermaid diagrams. I’ve tested Apache Batik, svgSalamander, resvg, rsvg-convert, svglib, CairoSVG, ConTeXt, and QtSVG. See issue 2485
Do you mean the output of mermaid.js? Besides that these SVG parsers should be fixed if they are broken and maybe mermaid.js could get a workaround, surely a typesetting system could read the mermaid syntax directly and not the output of a for-web implementation of it?
If you look at the issue, there’s a fairly extensive list of renderers affected. This suggests that the core problem is that mermaid uses some feature(s) which are not widely supported.
Besides that these SVG parsers should be fixed if they are broken
Not sure if they are broken per se. The EchoSVG project aims to support custom properties, which would give it the ability to render Mermaid diagrams. From that thread, you can see supporting SVG diagrams that use custom properties is no small effort. Multiply that effort by all the renderers listed and we’re probably looking at around ten years’ worth of developer hours.
surely a typesetting system could read the mermaid syntax directly and not the output of a for-web implementation of it
Yes. It’s not for free, though. Graphviz generates graphs on par with the complexity of Mermaid graphs and its output can be rendered with all software libraries I tried. IMO, changing Mermaid to avoid custom properties would take far less effort than developing custom property renderers at least eight times over.
IMO, changing Mermaid to avoid custom properties would take far less effort than developing custom property renderers at least eight times over.
Sure, but as I said the ideal would be neither of those, but to just typeset the mermaid syntax directly and not rely on the JS or the SVG at all.
I agree with part of the premise, but this is way, way overengineered. My main problem with logs is that they can clutter the code, making the code itself harder to understand at a glance. For that, having a simple function that abstracts the log into a nice compact name, something like LogExternalDependencyResponse(), solves all of my problems.
Why would I introduce all this machinery for logging? To decouple logging code from application logic? Logging code is one of the most clear examples of application logic.
Isn’t it a Java principle that any problem can be solved by adding more interfaces to the program, and requiring more rituals around everyday tasks?
The machinery needed to do this can be scaled up and down to your needs. For example, all events can be written to a file as JSON, where another process can subscribe to the events. The key difference between events and logging is that each kind of event has a name and possibly some structure, whereas a log is typically unstructured text.
Another use for this is to do any sort of async processing based on events. For example, if you have an InviteCreatedEvent, then you can have something else listening for that event that sends out the invite email. The point is that you can get more out of your logs by pulling them into the application layer a tiny bit.
Unit testing. Verifying that a particular log message has been written to the system log is cumbersome (in Java). Using events makes validating the application behaviour trivial: execute the production code under test and block until the event is received. Presuming that the event machinery is solid, there’s no need to add any extra code to scan the system log file for updates.
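As an illustration of the “block until the event is received” step, here is a sketch of such a test, reusing the hypothetical ApplicationStartedEvent from the earlier sketch and assuming JUnit 5:

import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertTrue;

class ApplicationStartedEventTest {
    @Test
    void startupPublishesEvent() throws InterruptedException {
        final CountDownLatch received = new CountDownLatch( 1 );
        ApplicationStartedEvent.subscribe( event -> received.countDown() );

        // Stand-in for running the production code that announces startup.
        ApplicationStartedEvent.publish();

        // Block until the event arrives, failing the test after a timeout.
        assertTrue( received.await( 1, TimeUnit.SECONDS ) );
    }
}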
Not at all cumbersome if you use something like SLF4J-test as noted in my article https://www.jvt.me/posts/2019/09/22/testing-slf4j-logs/
That’s neat, thank you for sharing. An advantage of the event method remains: you check types and avoid checking log message text, except when validating the actual text of the message, which would be a test of the string representation or a specific message owned by the event. Looking at the log output is one way to do it, but that’s always felt brittle to me, especially in a high-change-velocity codebase with multiple contributors having their own opinions about how things should be worded. At least the tests would catch that someone hasn’t deleted a log emission! slf4j-test seems like a much cleaner way to look for log output than capturing stdout yourself, though!
Maybe I’m just overly opinionated but shipping (or using, for that matter) a flat slab keyboard in $currentyear is essentially just negligence. Be nice to your wrists and get a split/tilt model. Goldtouch keyboards are great (if you don’t care about clacky keyswitches) and offer these features for less than $50 used.
While I’m on this soapbox ordinary mice should be straight up banned. Get a trackball and stop minutely flicking your wrist around all day.
I just got a trackball! Hated it for about 10 minutes, now I don’t even think about it. Can’t believe how quickly I adapted to it.
How do you use a trackball without minutely flipping your wrist? I got one and stopped using it after a week because my wrist was on fire…
I enjoy using a vertical thumb trackball like this one: https://www.kensington.com/p/products/ergonomic-desk-accessories/ergonomic-input-devices/pro-fit-ergo-vertical-wired-trackball2-1/
There is also a wireless one. Fair warning, the build quality isn’t the greatest and I had to disassemble it to try to fix the scroll wheel within 1.5 years of purchase. You’ll also have to pop out the trackball itself (there is a button to do this) and un-gunk it about once a month.
For a couple of decades, I’ve been using basically the Logitech equivalent of this. It’s made a tremendous difference for my wrists compared to a mouse. Avoiding the scroll wheel is unfortunately kind of important, too, for me.
anyone tried a CST trackball? they appear to be the only ones on the market with good build quality.
A bit late to the party, but I’m a user. I have two of those and two Kinesis Advantages.
Didn’t get any gizmos for it. Back/forward buttons would be great and I wouldn’t mind something for horizontal scroll. Actually an extra wheel would be more useful than the buttons, I think.
If I had infinite time, money and patience, I might look for something better. My hands aren’t mini-sized, but I often feel like I have to reach over the ball a bit too much, especially for the middle button, so something flatter with a smaller ball might be appropriate.
If I really regretted the purchase, I would actually get something else. Now I’m at a passively curious stage if something better floats along.
My darkest fantasy is that at least one of the Kineses would break and I’d have an excuse to upgrade it. I have some pretty neat ErgoDox hacks I’d like to incorporate but probably can’t with the older model.
Shameless plug for the ErgoDox layout https://gitlab.com/mjtorn/ergodox-ez-advantage-nordic-enhanced
Any chance one of them is a beige L-trac? Would have bought one years ago if I knew they would become unavailable.
I got them black. I don’t know which colors are available, but I believe X-Keys at least got some funky ones. And something with LED lighting iirc.
All I’d ask for is a lower form factor. The resolution is probably something like thousands of dpi (if I looked it up), which makes me think a smaller ball would still allow you to scroll a 4k screen from side to side in one sweep with some accuracy.
Or if it could be dropped lower for a slimmer case.
Looks like they went out of business a few years ago but another company took over manufacturing their products: https://xkeys.com/xkeys/trackballs.html (lots of cool gizmos on that website btw)
They look nice and solid but lie flat and I am a big fan of that Kensington vertical thumb trackball other than its crap-tier build quality.
OK, I used this one https://www.kensington.com/p/products/control/trackballs/orbit-trackball-with-scroll-ring/?r=1 and as far as I can tell, the only way to use it is by rocking the wrists back and forth, basically the worst possible ergonomics.
I have the Kensington K64325 (love it) and use only my right fingers (ball) and right thumb (button) so it’s at least possible, but I couldn’t say if it’s a difference in device model or hand position.
https://www.ergocanada.com/ec_home/products/trackballs_2.html#Product8
Wrist is kept straight. Palm rests on the mouse body. Index finger controls movement. Wrist barely moves. Can be used left- or right-handed. The downside is that Logitech is no longer producing them.
Or turn up the sensitivity on your mouse as high as you can go while still controlling it. My wrist feels fine.
What?! No ortho option? It’s worse than mercury sandwich! /s
Flat keyboards are not as bad as you make them sound for everyone. Not everyone types 50k words every day, 50 weeks a year. It’s fine for most people. Local selection probably should be a little bit more conscious of ergonomic implications, but even here most people will be just fine with any laptop keyboard.
Agreed. I had wrist issues a few years ago. I used a trackball for a while, but I love touchpads, so my goal was to get back to using the laptop touchpad. For me, regular exercise and stretching made the problem go away entirely!
I also use https://www.amazon.com/Staples-Beaded-Keyboard-Wrist-Rest/dp/B00RC8COF2/ as a wrist rest so my wrists are better positioned before they get to my keyboard. This too helps.
On mice:
I picked up https://www.logitech.com/en-us/products/mice/lift-vertical-ergonomic-mouse.910-006466.html and I have much less wrist/finger pain. I’ve tried trackballs in the past, I just don’t like them, my wife uses one though.
I’d be interested in seeing a comparison between vertical mice and trackballs for ergonomics. I’m using a vertical mouse myself but I’m curious as to the differences.
I personally used that exact mouse, but my wrist issues persisted until I switched to a trackball.