I think that John Warnock got the programming language (PostScript) right when compared to TeX, and I wish something like LaTeX had been built on top of PostScript, and that it were the default language for research publications rather than LaTeX as it is now.
The article was pretty bad (and, I guess, was a reprint of something from the ‘90s, given its use of the present tense when talking about printers with m68k processors). PostScript didn’t solve the problem of the printer not being able to print arbitrary things, precisely because it was a Turing-complete language. It was trivial to write PostScript programs that would exhaust memory or infinite loop. I had a nice short program to draw fractal trees that (with the recursion depth set sensibly) would take 5 minutes to print a single page on my laser. It was trivial to DoS any PostScript printer. Often even unintentionally: I used to have a laser with a 50 MHz MIPS processor. Printing my PhD thesis, it could do about two pages a minute if I sent the PostScript to the printer, 20 pages a minute if I converted to PCL on my computer and sent the resulting PCL to the printer. The output quality was the same.
This is a big part of the reason that early laser printers were so expensive. The first Apple LaserWriter had 1.5 MiB of RAM and a 12 MHz 68000, in 1985. The computer that it was connected to had either 128 or 512 KiB of RAM and a 6 MHz 68000: the printer was a more powerful computer than the computer (and there were a load of hacks over the next few years to use the printer as a coprocessor because it was so much faster and had so much more memory).
The big improvement of PDF over PostScript was removing flow control instructions. The rendering complexity of a PDF document is bounded by its size.
Putting rasterisation on the printer was really a workaround for the slow interconnect speed. An A4 page at 1200 dpi in three colours is around 50 MiB. At the top speed of the kind of serial connection that the early Macs had, it would take about an hour to transfer that to the printer (about 4 minutes at 300 dpi). Parallel ports improved that a lot and could send a 300 dpi page in 21 seconds for colour, 7 seconds for mono (faster than most inkjets could print), though a 1200 dpi page was still 6 minutes for colour, 2 minutes for mono, so required some compression (often simple run-length encoding worked well, because 95% of a typical page is whitespace). With a 100 Mbit network connection you can transfer an uncompressed, fully-rasterised, 1200 dpi, CMY, A4 page in around 4 seconds.
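The arithmetic behind those figures can be checked with a short script. The page dimensions and link speeds here are my assumptions (A4 as 8.27 × 11.69 inches, one bit per pixel per colour plane, serial at roughly 115 kbit/s, Ethernet at 100 Mbit/s), not anything stated by the printers themselves:

```python
# Back-of-envelope check: uncompressed raster size of an A4 page and
# transfer times over a few link speeds. All rates are assumptions.

A4_INCHES = (8.27, 11.69)

def raster_bytes(dpi, planes=3, bits_per_pixel=1):
    """Size of an uncompressed A4 raster: one bit per pixel per colour plane."""
    w, h = A4_INCHES
    pixels = (w * dpi) * (h * dpi)
    return pixels * planes * bits_per_pixel / 8

page_1200 = raster_bytes(1200)   # ~52 MB, i.e. roughly 50 MiB
page_300 = raster_bytes(300)     # 16x smaller (4x fewer pixels per axis)

print(f"1200 dpi CMY page:     {page_1200 / 2**20:.1f} MiB")
print(f"serial (115.2 kbit/s): {page_1200 * 8 / 115_200 / 60:.0f} min")
print(f"100 Mbit Ethernet:     {page_1200 * 8 / 100e6:.1f} s")
```

The 16x ratio between 1200 dpi and 300 dpi is why the serial-link times above scale from about an hour down to about 4 minutes.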
The problem is made worse by the fact that not all of the input to a page is in the form of vectors and so PostScript / PDF / PCL also need to provide a mechanism for embedding lossless raster images. At this point, you may as well do all of the rasterisation on the host and use whatever lossless compression format works best for the current output to transfer it to the printer. This is what a lot of consumer printers from the ’90s onwards did.
The real value of something like PostScript or PDF is not for communicating with a printer, it’s for communicating with a publisher. Being able to serialise your printable output in a (relatively) small file that does not include any printer-specific details (e.g. exactly which dithering patterns avoid smudging on a given technology, or what the exact mix of ink colours is) is a huge win. You wouldn’t want to send every page as a fully rasterised image, because it would be huge and because rasterisation bakes in printer-specific details.
HP’s big innovation in this space was to realise that these were separate languages. PCL was a far better language for computer-printer communication than PostScript and let HP ship printers with far slower CPUs and less RAM than competitors that spoke PostScript natively. At the same time, nothing stopped your print server (a dedicated machine or a process on the host) from accepting PostScript and converting it to PCL. This had two huge advantages:
You could upgrade the print server when CPUs became cheaper. You’d often keep a printer for 5-10 years. You could upgrade the print server to one twice as fast a few times in that time.
You could easily add support for newer features of the computer-computer communication language (e.g. the alpha channels in later PDF revisions, with blending between overlaid raster images).
I agree with what you say; I am not trying to defend the use of PostScript as a format for communication between computers. As you noticed, there are better languages for computer-printer communication. Rather, what I want to point out is that PostScript can be a really good language for writing complex documents that are meant to be edited by humans, compared to TeX.
Apples vs oranges. Have you ever programmed in PostScript? It’s a much lower-level language than TeX, and not at all suited to writing documents.
For one thing, it’s inside-out from TeX: in PostScript, everything is code by default and the text to be printed has to be enclosed in parentheses. Worse, a string renders only with the font’s default spacing, so any time text needs to be kerned it has to be either broken up into multiple strings with “move” commands between them, or you have to call a library routine that takes the entire string plus an array of kerning offsets.
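To make that "inside-out" point concrete, here is a sketch (in Python, to keep it runnable) of what a kerning helper has to emit: the string gets broken into separate `show` calls with explicit `rmoveto` nudges between the pieces. `show` and `rmoveto` are real PostScript operators; the `kerned_show` helper itself is hypothetical, purely to illustrate the shape of the output a kerning routine must generate:

```python
def kerned_show(text, offsets):
    """Emit PostScript that renders `text` one glyph at a time, inserting a
    horizontal `rmoveto` of offsets[i] points after glyph i. This helper is
    only an illustration; `show` and `rmoveto` are the standard operators."""
    parts = []
    for ch, dx in zip(text, offsets + [0.0]):
        # Parentheses delimit strings in PostScript, so they must be escaped.
        escaped = ch.replace("\\", r"\\").replace("(", r"\(").replace(")", r"\)")
        parts.append(f"({escaped}) show")
        if dx:
            parts.append(f"{dx} 0 rmoveto")
    return "\n".join(parts)

print(kerned_show("AV", [-1.5]))
# (A) show
# -1.5 0 rmoveto
# (V) show
```

Compare that with TeX, where the text is the default and kerning is applied automatically from the font metrics.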
I used to write in TeX in college and then render it on an Apple LaserWriter. Sometimes I got a glimpse of what the PostScript output of the TeX renderer looked like, and it was basically unreadable. Not something I would ever want to edit, let alone write.
Actually I have. It is a fun little language in the Forth family that is extremely suitable for abstraction, and it recalls the Lisp family in that it is homoiconic: you can write a debugger, an editor, etc. entirely in PostScript. You can program in PostScript in a paradigm called concatenative programming, similar to tacit programming or point-free style.
The language was used (think of it as a precursor to JavaScript) for client-side programming and rendering: in Display PostScript, used by NeXT and Adobe for their windowing systems, and in the NeWS windowing system by Sun Microsystems.
There have been a number of higher-level document formatting libraries in PostScript. The best known (by me) is TinyDict and another is here. (The same person wrote his CV in PostScript, which is a great example of the versatility of PostScript. Start from line 60. This is what the rendered PDF looks like.)
I used to write in TeX in college and then render it on an Apple LaserWriter. Sometimes I got a glimpse of what the PostScript output of the TeX renderer looked like, and it was basically unreadable.
Have you seen what generated C code looks like when it is used as a backend by other compilers? Do not judge a language by what generated code looks like.
I’m surprised you did not mention Don Lancaster’s many PS macros for publishing - https://www.tinaja.com/pssamp1.shtml <- that’s one of the coolest hobbyist uses of PS in my experience.
Indeed! Thank you for the link.
Looking at the TinyDict docs, I don’t think I’d want to work in a markup language that looks like
palegreen FB 3 a 12 IN 2.5 paleyellow FB R 4 b SB
L H 24 rom gs 13 T ( CAPPELLA ARCHIVE ) dup 0.5 setgray s gr 1 a 11 T red CS L
13 bol ( P R A C T I C A L
L H 1 red LB
That’s much less clear than TeX. If you’re a fluent PS programmer this might be appealing, but not for anyone else…
Can you expand on this? What makes PostScript preferable to the rather straightforward markup of (La)TeX?
See this reply from me. As what you want to accomplish becomes more complex, you really need a well-designed programming language, and PostScript IMO is really well designed, though perhaps not as familiar to people coming from traditional programming languages.
Looking at your examples, I’m not convinced.
I’m a firm believer in separating authoring from layout, something that LaTeX (and HTML) enforce quite well. The canard about amateur desktop publishing was the enthusiastic tyro who mixed different typefaces in a document just because they could. Having to specify typefaces and sizes in the document being authored is a throwback. While fighting with underfull hboxes in bigger LaTeX docs is a thing, the finished product is of high typographic quality.
I don’t want to dump on the person who wrote their CV in PS, but it doesn’t look that good, typographically. Back when I maintained a CV in LaTeX I used a package for that purpose, and it was easy to keep “chunks” of it separate so I could easily generate slightly different versions depending on the position I was applying for.
Having to manually end each line with an explicit line break is another thing that feels very primitive.
Regarding the link to TinyDict, the hosting website seems offline, so it does not seem to be under active development.
It doesn’t look as if PS has Unicode support, either: https://web.archive.org/web/20120322112530/http://en.wikibooks.org/wiki/PostScript_FAQ#Does_PostScript_support_unicode_for_CJK_fonts.3F
Sorry if I come off as negative, but computer/online authoring is a subject close to my heart, and as time has gone by I’ve come to the conclusion that it’s better not to make the author bother with stuff the computer does better.
I agree that PostScript does not have higher-level capabilities for separating content from layout similar to those already available in TeX. As it is, the basic primitives provided are at a lower level than TeX’s. However, my point is that the human interface – the PostScript language – is much more amenable to building higher-level packages than what TeX provides as a language.
I don’t want to dump on the person who wrote their CV in PS, but it doesn’t look that good, typographically.
Surely, these are not related to the language itself?
Back when I maintained a CV in LaTeX I used a package for that purpose, and it was easy to keep “chunks” of it separate so I could easily generate slightly different versions depending on the position I was applying for.
This is doable in PostScript. You have a full programming language at your disposal, and the language is very amenable to creating DSLs for particular domains.
PostScript is at this point an old language that did not receive the same level of attention that TeX and LaTeX did. My point is not that everyone should use PostScript from now on. What I expressed was a fond wish that something like LaTeX had been built on top of PostScript, so that I could use the PostScript language rather than what TeX and LaTeX provide.
At this point, I have used LaTeX for 12 years for academic work, and even after all these years, I am nowhere close to being even middling proficient in LaTeX. With PostScript, I was able to pick up the basics of the language fast, and I can at least make intelligent guesses as to what a new routine does.