I think John Warnock got the programming language (PostScript) right compared to TeX, and I wish something like LaTeX had been built on top of PostScript, with that as the default language for research publications rather than LaTeX as it is now.
The article was pretty bad (and, I guess, was a reprint of something from the ‘90s, given its use of the present tense when talking about printers with m68k processors). PostScript didn’t solve the problem of the printer not being able to print arbitrary things, precisely because it was a Turing-complete language. It was trivial to write PostScript programs that would exhaust memory or loop forever. I had a nice short program to draw fractal trees that (with the recursion depth set sensibly) would take 5 minutes to print a single page on my laser. It was trivial to DoS any PostScript printer, often even unintentionally. I used to have a laser with a 50 MHz MIPS processor. Printing my PhD thesis, it could do about two pages a minute if I sent the PostScript to the printer, 20 pages a minute if I converted to PCL on my computer and sent the resulting PCL to the printer. The output quality was the same.
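To make the DoS point concrete, a minimal sketch (any PostScript interpreter will spin on this forever; don’t send it to a shared printer):

    %!PS
    { } loop    % run the empty procedure repeatedly; no exit condition, never terminates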
This is a big part of the reason that early laser printers were so expensive. The first Apple LaserWriter had 1.5 MiB of RAM and a 12 MHz 68000, in 1985. The computer that it was connected to had either 128 or 512 KiB of RAM and a 6 MHz 68000: the printer was a more powerful computer than the computer (and there were a load of hacks over the next few years to use the printer as a coprocessor because it was so much faster and had so much more memory).
The big improvement of PDF over PostScript was removing flow control instructions. The rendering complexity of a PDF document is bounded by its size.
Putting rasterisation on the printer was really a workaround for the slow interconnect speed. An A4 page, at 1200 dpi in three colours, is around 50 MiB. At the top speed of the kind of serial connection that the early Macs had, it would take about an hour to transfer that to the printer (about 4 minutes at 300 dpi). Parallel ports improved that a lot and could send a 300 dpi page in 21 seconds for colour, 7 seconds for mono (faster than most inkjets could print), though a 1200 dpi page was still 6 minutes for colour, 2 minutes for mono, so required some compression (often simple run-length encoding worked well, because 95% of a typical page is whitespace). With a 100 Mbit network connection you can transfer an uncompressed, fully-rasterised, 1200 dpi, CMY, A4 page in around 4 seconds.
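The 50 MiB figure is easy to sanity-check if you assume 1 bit per pixel per colour plane; you can even do the arithmetic in PostScript itself (run it through e.g. Ghostscript):

    /w 8.27 1200 mul def    % A4 width in pixels at 1200 dpi
    /h 11.69 1200 mul def   % A4 height in pixels at 1200 dpi
    w h mul 3 mul           % total bits: 1 bit per pixel per CMY plane
    8 div 1048576 div       % bits -> bytes -> MiB
    ==                      % prints ~49.8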
The problem is made worse by the fact that not all of the input to a page is in the form of vectors and so PostScript / PDF / PCL also need to provide a mechanism for embedding lossless raster images. At this point, you may as well do all of the rasterisation on the host and use whatever lossless compression format works best for the current output to transfer it to the printer. This is what a lot of consumer printers from the ’90s onwards did.
The real value of something like PostScript or PDF is not for communicating with a printer, it’s for communicating with a publisher. Being able to serialise your printable output in a (relatively) small file that does not include any printer-specific details (e.g. exactly what the right dithering patterns should be to avoid smudging on this technology, what the exact mix of ink colours is), is a huge win. You wouldn’t want to send every page as a fully rasterised image, because it would be huge and because rasterisation bakes in printer-specific details.
HP’s big innovation in this space was to realise that these were separate languages. PCL was a far better language for computer-printer communication than PostScript and let HP ship printers with far slower CPUs and less RAM than competitors that spoke PostScript natively. At the same time, nothing stopped your print server (a dedicated machine or a process on the host) from accepting PostScript and converting it to PCL. This had two huge advantages:
I agree with what you say; I am not trying to defend the use of PostScript as a format for communication between computers. As you noted, there are better languages for computer-printer communication. Rather, what I want to point out is that PostScript can be a really good language for writing complex documents that are meant to be edited by humans, compared to TeX.
Apples vs oranges. Have you ever programmed in PostScript? It’s a much lower-level language than TeX, and not at all suited to writing documents.
For one thing, it’s inside-out from TeX: in PostScript, everything is code by default and the text to be printed has to be enclosed in parentheses. Worse, a string renders only with the font’s default spacing, so any time text needs to be kerned it has to be either broken up into multiple strings with “move” commands between them, or you have to call a library routine that takes the entire string plus an array of kerning offsets.
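Level 2 PostScript does ship a built-in form of that second option, xshow, which takes a string plus an array of per-glyph advances. A sketch (the kerning numbers here are invented):

    %!PS
    /Helvetica findfont 12 scalefont setfont
    72 720 moveto (AVATAR) show                                % default advance widths from the font
    72 700 moveto (AVATAR) [6.2 6.2 7.1 6.2 6.6 7.1] xshow    % one hand-tuned advance per glyph
    showpage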
I used to write in TeX in college and then render it on an Apple LaserWriter. Sometimes I got a glimpse of what the PostScript output of the TeX renderer looked like, and it was basically unreadable. Not something I would ever want to edit, let alone write.
Actually I have. It is a fun little language in the Forth family that is extremely suitable for abstraction, and it recalls the Lisp family in that it is homoiconic: you can write a debugger, an editor, etc. entirely in PostScript. You program in PostScript in a paradigm called concatenative programming, similar to tacit programming or point-free style.
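A tiny sketch of what concatenative style looks like: words are composed by juxtaposition, with no named variables.

    /square { dup mul } def                   % x -> x*x
    /sumsq  { square exch square add } def    % a b -> a*a + b*b
    3 4 sumsq ==                              % prints 25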
The language was used (think of it as a precursor to JavaScript) for client-side programming and rendering: in Display PostScript, used by NeXT and Adobe for their windowing systems, and in the NeWS windowing system from Sun Microsystems.
There have been a number of higher-level document formatting libraries in PostScript. The best known (by me) is TinyDict, and another is here. (The same person wrote his CV in PostScript, which is a great example of the versatility of the language. Start from line 60. This is what the rendered PDF looks like.)
I used to write in TeX in college and then render it on an Apple LaserWriter. Sometimes I got a glimpse of what the PostScript output of the TeX renderer looked like, and it was basically unreadable.
Have you seen what generated C code looks like when it is used as a backend by other compilers? Do not judge a language by what generated code looks like.
I’m surprised you did not mention Don Lancaster’s many PS macros for publishing - https://www.tinaja.com/pssamp1.shtml <- that’s one of the coolest hobbyist uses of PS in my experience.
Looking at the TinyDict docs, I don’t think I’d want to work in a markup language that looks like
palegreen FB 3 a 12 IN 2.5 paleyellow FB R 4 b SB
L H 24 rom gs 13 T ( CAPPELLA ARCHIVE ) dup 0.5 setgray s gr 1 a 11 T red CS L
13 bol ( P R A C T I C A L
L H 1 red LB
That’s much less clear than TeX. If you’re a fluent PS programmer this might be appealing, but not for anyone else…
Can you expand on this? What makes PostScript preferable to the rather straightforward markup of (La)TeX?
See this reply from me. As what you want to accomplish becomes more complex, you really need a well-designed programming language, and PostScript IMO is really well designed, though perhaps not as familiar to people coming from traditional programming languages.
Looking at your examples, I’m not convinced.
I’m a firm believer in separating authoring from layout, something that LaTeX (and HTML) enforce quite well. The canard about amateur desktop publishing was the enthusiastic tyro who mixed different typefaces in a document just because they could. Having to specify typefaces and sizes in the document being authored is a throwback. While fighting with underfull hboxes in bigger LaTeX docs is a thing, the finished product is of high typographic quality.
I don’t want to dump on the person who wrote their CV in PS, but it doesn’t look that good, typographically. Back when I maintained a CV in LaTeX I used a package for that purpose, and it was easy to keep “chunks” of it separate so I could easily generate slightly different versions depending on the position I was applying for.
Having to manually end each line with an explicit line break is another thing that feels very primitive.
Regarding the link to TinyDict, the hosting website seems offline, so it does not seem to be under active development.
It doesn’t look as if PS has Unicode support, either: https://web.archive.org/web/20120322112530/http://en.wikibooks.org/wiki/PostScript_FAQ#Does_PostScript_support_unicode_for_CJK_fonts.3F
Sorry if I come off as negative, but computer/online authoring is a subject close to my heart, and as time has gone by I’ve come to the conclusion that it’s better to let the author not have to bother with stuff the computer does better.
I agree that PostScript does not have the same higher-level capabilities for separating content from layout that TeX already provides. As it is, the basic primitives are at a lower level than TeX’s. However, my point is that the human interface, the PostScript language itself, is much more amenable to building higher-level packages than what TeX provides as a language.
I don’t want to dump on the person who wrote their CV in PS, but it doesn’t look that good, typographically.
Surely, these are not related to the language itself?
Back when I maintained a CV in LaTeX I used a package for that purpose, and it was easy to keep “chunks” of it separate so I could easily generate slightly different versions depending on the position I was applying for.
This is doable in PostScript. You have a full programming language at your disposal, and the language is very amenable to creating DSLs for particular domains.
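As a sketch of what I mean (the words and metrics here are hypothetical, just to show the shape a CV DSL could take):

    %!PS
    /y 760 def
    /line    { /y y 14 sub def 72 y moveto show } def    % place a string, then step down a line
    /heading { /Helvetica-Bold findfont 14 scalefont setfont line } def
    /entry   { /Helvetica findfont 11 scalefont setfont line } def

    (EXPERIENCE) heading
    (2015 - now   Somewhere Inc., doing things) entry
    showpage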
PostScript is at this point an old language that did not receive the same level of attention that TeX and LaTeX did. My point is not that everyone should use PostScript from now on. What I expressed was a fond wish that something like LaTeX had been built on top of PostScript, so that I could use the PostScript language rather than what TeX and LaTeX provide.
At this point, I have used LaTeX for 12 years for academic work, and even after all these years, I am nowhere close to being even middling proficient in LaTeX. With PostScript, I was able to pick up the basics of the language fast, and I can at least make intelligent guesses as to what a new routine does.
I wrote up something very similar to this back in May 2020.
This is great.
As others have pointed out, an export button would be great. Support for images would also cover a lot of use cases.
But let’s not miss the beautiful simplicity of this self-contained solution. Even installing or configuring nginx, simple as it might be, is not free.
This is great for intranets, hackathons, or even small temporary websites exposed to the web.
There is a “Download Data” button that points at https://rwtxt.com/DOMAIN/export, but it exports Markdown files, not HTML as some commenters wished for.
CTWM is an X11 Window Manager. It was created by Claude Lecommandeur Claude.Lecommandeur@Epfl.Ch in 1992 as a fork of the TWM window manager.
Thanks for sharing. I knew ctwm existed, but didn’t know that it was forked in 1992 :)
Besides all the available “tiling” options that we have around when using any open-source OS, ctwm seems to be a safe default for those who need a minimal desktop with auto-generated menu entries to easily access basic software after an install.
Your comment is a good summary of “why” anyone should be interested in a WM originally written in ’92. It is an interesting piece of software to try out.
I usually just use twm if I’m just installing a quick X window system. It’s still totally fine for managing terminal windows, which is basically all I use X for, anyway.
I installed and played with it. I got a blast of nostalgia, seeing similar desktops in the engineering design department. And it feels really fast!
90s aesthetic is a thing. Can’t wait to try out SerenityOS one of these days.
It seems that there are some newish Lisps that are heeding the lessons of the latest crop of successful programming languages: they’re focusing on dev experience, tooling, and documentation to a greater degree than many of the earlier generations. I wonder if that seems to be the case to others.
For someone trying to learn LISPs right now, are there any others from this crop you could share or recommend? I’m enjoying the family of languages, but am feeling almost universally underwhelmed by the development experiences.
Racket has an excellent development experience. I’ve tried and enjoyed Clojure and Chicken Scheme in the past, but these days Racket is my Lisp of choice.
For the next release of Fennel we’ve been focusing on friendliness: a setup guide, compiler error suggestions, readline completion and bracket matching, simplified installation, and the ability to debug macros by expanding them in the repl.
Can you share what your development experiences with Lisps have been so far?
There are multiple commercial Lisps ($$$) with excellent IDE support, e.g. LispWorks, Allegro, etc.
Most open-source Lisps (and Schemes) have integration with Emacs (of course). Some people also use vim for their Lisps, though that’s still considered somewhat counter-cultural.
Clojure has CIDER for Emacs, Cursive (an IntelliJ plugin), Light Table (no longer developed, AFAIK), and even a VS Code plugin.
Gerbil has something similar to CIDER called treadmill.
Sure! I’ve been mostly working with SBCL Common Lisp, Guile Scheme, and Chicken Scheme. I’m very familiar with Emacs, so getting SLIME set up was no problem.
What I’ve struggled with is the path to A) managing dependencies well, i.e., locking (I get the sense most things just always pull the latest version?), and B) figuring out a standard path to deployment.
The issue might just be documentation or a steep learning curve, but compare coming from, say, a language like Rust, where the first steps are:
Trying to figure out the analogous set of steps in Lisps just seems to require learning many tools and making many decisions: e.g., quicklisp vs ASDF vs buildapp.
I’ve had good preliminary experiences with Janet. But I haven’t put it through its paces yet; I want to build a multi-module CLI app before I claim victory.
I looked at Gerbil a while ago, and didn’t get super excited for some reason. Looking at it again, briefly, all I can say is “I wish I had more time to fully digest and play with this! It looks very much like what I’ve been looking for!”
Yes, the new website and documentation are of much higher quality than its original avatar. It actually has standard libraries, a package manager, a build tool, etc. that one can use to do “useful things”, not just play with the language.
From 3 months ago:
and technical blogs: https://lobste.rs/s/l7b3iy/
This one’s from five years ago though. It’s nice to ask again and see people who started blogging in the last five years or who weren’t on lobsters five years ago.
Nice, but there’s not much to it. I guess that’s the point. I like minimalism. And it’s obviously lightweight.
This reminds me a bit of these websites that were going around a while back:
The creators of those websites were going for “you can put up a half-decent looking webpage without making it heavy with fonts, CSS frameworks, and JS”. You can of course borrow their CSS files to get minimalistic styling for an HTML page.
The two “frameworks” I linked, much like the item under discussion, were designed to be used with semantic HTML, without the need for special classes or div layouts (like you would use with, say, Bootstrap). They also provide styling for most commonly used elements like forms, etc., which the MFW sites lack.
The good thing about classless/semantic CSS styles (framework is probably too heavy a word for a CSS file) is that you can either hand-write HTML or use a generator like pandoc and apply these styles to get a nice layout, without worrying about figuring out how to use Bootstrap, Tailwind, etc.
E.g., I use water.css on my website, where 90% of the styling comes from water.css.
Mirage Project’s Irmin. “Irmin - A distributed database built on the same principles as Git”
Makes me think of how Elm is able to recompile so quickly that you can see your changes “instantly”. Especially if you’re working in something like glitch.com, I can see my changes within a second of when I stop typing.
This made me think of Pharo/Smalltalk, actually. You can query, modify, and interact with the entire system (and not just the parts of the code you are writing) in “real time”.
Yes. This is what Alan Kay was evangelizing, even as far back as Smalltalk-76. He credits Simula 67 and Sketchpad. Squeak, the portable Smalltalk-80 implementation originally developed at Apple, is the direct ancestor of Pharo. The Morphic UI (replacing the old MVC in Squeak) is strongly influenced by the same ideas.
“Rebuild quickly” and “live systems” both make implementing some of these ideas easier but I think the real issue is scaling this up and across the whole system. Not only should this be the default mode of all programming, it should also work for getting a sense of large scale changes that span multiple processes and machines.
I was thinking the core model of working would change from “rewrite, rebuild/rerun, and review output” to “project and manipulate”, where “project” involves creating a projection/view of your system that shows the behavior you want affected.
Time to upgrade this benchmark of scheme implementations?
Thanks for this. Ever since I saw this [1] I have been looking into Lisp. I started looking and realized it’s a pretty big community with many dialects and implementations.
To quote you 7 months ago: “unreadable langauges with too many parenthesis”, let me know how your search goes.
Honestly, I stand by that. Every example of Lisp I have ever seen looks like this:
(sunday 10
  (monday 11
    (tuesday 12)))
Where the linked syntax looks like this:
(sunday
  10 (monday
    11 (tuesday 12)
  )
)
Style matters. It’s a small difference, but to me it’s markedly more readable.
By the look of the raw numbers, Guile would need much more than an average 2x speedup to be anywhere near Chez.
“Compared to 2.2, microbenchmark performance is around twice as good on the whole, though some individual benchmarks are up to 32 times as fast.”
So depending on how “around” rounds off, it may be. :-)
https://www.btbytes.com - a personal site that’s been online since 2003 and has gone through multiple backends. Now it’s generated using a bespoke Python script. I’m not a regular blogger. The most “interesting” section would probably be the notebooks, one of which is the interesting programming languages page, which saw a few comments on this site. Since I’ve been off Twitter since the beginning of 2020, I’m using the log section as a place to jot down things I might have posted on Twitter instead.
I extracted the domains listed here and put them on a site, “lobstersweb”, using a shell script, pup, pandoc, a Makefile, and zeit.co.
In summary, you can do something like this:

    jbang helloworld.java

jbang is a play on shebang #!, and looks like this at the top of the file:

    //usr/bin/env jbang "$0" "$@" ; exit $?

Dependencies are declared in comments like:

    //DEPS log4j:log4j:1.2.17
I went from vim to Sublime and back to vim around 6/7 years ago. This was back at Sublime v2, I think, when it was a crapshoot what you were going to get when you installed a plugin, if you managed to filter through the 30-odd candidates for what you wanted. I still have v3 installed but only use it sporadically to edit plain text or copy text. I used IntelliJ for a while, but once my main project got bigger, the typing lag started to drive me completely insane and I went back to vim. I tried VS Code, but it just felt like baby IntelliJ and I don’t want to go down that road again. IntelliJ starts to force you into a situation where your project could easily be a bad time for another developer joining who doesn’t have IntelliJ, and VS Code felt like that in its infancy.
I do try these things. Sometimes for extended periods of time, but I always end up back at vim just because it’s ubiquitous for me and the typing lag though not nonexistent, is still pretty minimal at the worst of times. I can work around everything else I need like debuggers and build systems, but if the editor typing lags it’s like fingernails on a chalkboard for me.
I’m a fairly decent vim user. My problem with becoming a “power” vim user has always been the explosion of options once you step out of the “a few configs in .vimrc” range.
Yeah, I completely understand where you’re coming from. It’s gotten better over the years, though; it used to be that in order to support different languages, plugins and stuff, you had to put a metric ton of effort into your .vimrc file. Nowadays I take a “less is more” approach to things. I think my .vimrc file is only about 200 lines and I use ~10 plugins through Vundle, with minimal configuration changes. I’m pretty brutal about cutting out a plugin if it’s problematic, doesn’t do what it says on the tin, or is something I can just do on the command line or in a Makefile/sh file.
I use iterm2 instead of tmux as I can’t seem to keep all the keymappings in my head so I can use the mouse to split views, resize them, etc…
To be honest, IntelliJ and Sublime both have vim emulation layers you can install/enable. But at the end of the day (for me) it literally boils down to the typing lag. I’ll go with whatever lets me write my code the fastest without frustrating me, and at the moment that’s nvim, which has it in spades. I may look at Sublime again and try the vim emulation layer, but I’ve never seen one with an ergonomic keymapping to go from the editor to the file tree and navigate it with vim keystrokes.
Is Smalltalk the new Clojure, lacking only real world applicability?
I should explain. Two marvelous ideas entered the marketplace last century. One fell apart because it was seen as the domain of AI and academics, with implementations being too like Forth (you learn one Forth, and you’ve learned one Forth) for people’s likings. The other was embroiled in legal battles and refused to acknowledge that it wasn’t the computer. This was quickly deemed, though not obvious at the time, unacceptable. Even C doesn’t care what the kernel is, so long as it has a standard library, and polyglot systems are just More Useful.
I speak, of course, about LISP and Smalltalk.
Clojure is LISP made Really Useful. It can be used to glue Java together with almost no code, runs on possibly the most popular and recognizable virtual machine in the world, and the functional twist took a lot of the dark corners out of the language.
Clojure made LISP usable.
As for Smalltalk, the Pharo VM does a great job of being modern, using the Cog JIT engine, and even (in an upcoming release?) being able to respect the world outside the VM. I love the language; one person describes it as an unhealthy fascination, and I would recommend people learn the lessons of Smalltalk.
As for me, I learned a lesson from the failures of Smalltalk, and it isn’t one people think about. I want to codify my ideas as a language, as soon as I have the time to do so. I want to make Smalltalk applicable to server operators and packagers. Blog posts on the topic to follow.
Smalltalk had a resurgence some 10 years ago with interest in the Seaside web app framework and a few hyped Smalltalk web apps. I recall DabbleDB as one that seemed impressive at the time.
If you haven’t yet, I’d suggest looking at Strongtalk, Self, and Newspeak. And what the space between them and Objective-C looks like.
Good luck!
To quite some extent I think the space between ObjC and Smalltalk-likes is occupied by Ruby. More Smalltalk-like than ObjC, easier C (or really, host platform code in general) FFI than a Smalltalk.
It really is, but Ruby has a bad rap these days, no advantage Python doesn’t have in the realm of writing usable code, and an unfortunate load average and dependency story.
Clojure made LISP usable.
I have to disagree with that. Lisp is very usable (and quite nice) on its own. Perhaps Clojure made it usable for gluing Java together, but huge swathes of the software world don’t involve Java, and are still well suited to Lisp.
I love Scheme, but I’ve tried to learn Common Lisp three times, and each time I got frustrated. Whether because of the lack of Vim support (I can’t be bothered to learn Emacs, and it just confuses me), or because quicklisp doesn’t do what I expect, I can never quite understand what’s going on under the hood. It’s a shame too, because I’m fascinated by some of the cool CL projects like CEPL.
However, if we’re including Scheme, I also disagree with the statement that Clojure is the first usable language in the Lisp family. Guile and Chicken Scheme are both very easy to get started with out of the box, and Racket has a full-on IDE geared towards education, superb technical documentation, and tutorials that make it easy to get started.
You know there are perfectly good IDEs for Lisp that are not Emacs? Clozure CL (CCL) [open source], LispWorks Personal Edition [free].
And there is nothing stopping you from using Atom, Visual Studio Code or Sublime text with Lisp plugins.
I’m afraid I also disagree. Lisp users before Clojure are of a different breed entirely, a more skilled and arcane breed. Clojure is accessible to and used by the masses to write useful code and libraries.
That’s flattering (I guess), but I’m not sure it’s true. By some measures Common Lisp is still more popular than Clojure.
TIOBE’s language ranking isn’t perfect, but it’s better than nothing, and it claims Clojure and Common Lisp are both down in the 50-100 group, behind generic “Lisp” at 32 and Scheme at 34.
However, “Lisp” is often taken to mean “Common Lisp” nowadays (for example the #lisp IRC channel is dedicated specifically to Common Lisp), so those rankings may be interpreted as meaning CL is more popular.
Also, I would claim CL is at least as accessible as Clojure. It’s not as trendy, but it has more implementations, supports more platforms, isn’t tied to the JVM, and has a bunch of tutorials and books covering it.
And of course there are plenty of useful libraries and applications being written in CL today.
I originally learned about Lisp from AI books. That would make me agree with you. But later books like Practical Common Lisp and Land of Lisp are way more approachable, with better things for a mainstream audience to build. It’s important that they get an IDE that allows incremental, per-function compilation; that by itself might win them over. Alternatively, they can start with Scheme using How to Design Programs, trying Common Lisp after they get through HtDP.
I have a feeling you’ll see a lot more people in it if the educational resources, community, and tooling facilitate the learning process. In that order for Lisp.
I think TCL is underrated. A syntax similar to shells is great. Well, actually I think TCL is too complex. So maybe a smaller descendant like TH3.
I find languages like Lua or Javascript are too powerful. They get used to write entire applications and that is too much. Embedded scripts should be small and easy to understand. Providing powerful features misleads people into writing larger, more complex scripts. If you want to go that way, then I would suggest extending Python instead.
Another interesting language I stumbled upon recently is SparForte. It allows you to write small scripts (think bash) and when they (inevitably) grow, you can transform them piecewise into Ada. The approach is similar to starting with Python and extending it with Rust where necessary. The advantage is the language is explicitly designed to support that use case.
I find languages like Lua or Javascript are too powerful. They get used to write entire applications and that is too much. Embedded scripts should be small and easy to understand.
It’s pretty easy (like on the order of five lines of code) to take a chunk of Lua and sandbox it so it doesn’t have access to any libraries; even stuff in the standard library. The core of Lua is trivial to learn for anyone who has programmed before; you can pick it up in an afternoon.
Brr. I actually think it is massively overrated.
Scheme, and its “designed to be embedded” flavour Guile, is much nicer.
I’d argue this is software that is more adjustable (<100 lines of code that are really an opinionated “configuration” of libraries) than a more “configurable” behemoth with 200 files.
bflat is to C# what betterC is to D?