CGI is IMO still unbeaten if people want to operate their own site themselves. The technical overhead is minimal and the tech is battle-tested. E.g. minisleep[1] comes to mind.
[1] http://halestrom.net/darksleep/software/minisleep/
One can write CGI in any language that supports environment variables and text I/O over the standard streams. On *nix, that’s basically any language with even the most rudimentary interface to the OS.
I switched my web servers back from nginx to lighttpd last year, because lighttpd has first-class support for CGI, like a web server should!
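For anyone who hasn’t written one: a CGI program really is just “read a few environment variables, print headers, a blank line, then the body”. A rough sketch in Go (the text/plain response and field names are just for illustration):

    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        // The web server hands request metadata to the child process
        // via environment variables defined by the CGI spec.
        method := os.Getenv("REQUEST_METHOD")
        query := os.Getenv("QUERY_STRING")

        // A CGI response is headers, a blank line, then the body,
        // all written to stdout.
        fmt.Print("Content-Type: text/plain\r\n\r\n")
        fmt.Printf("method=%s\nquery=%s\n", method, query)
    }

Compile it, drop the binary wherever your server’s CGI handler points, and the server forks and runs it once per request.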
With only a little bit of extra code you could use fastcgi and still use nginx.
Going from CGI -> FastCGI may only be a “bit of extra code”, but the operational complexity is substantial. Instead of forking a child process, you now have another daemon and a client/server connection (httpd<->fcgid) to manage, and you can no longer invoke your script via a normal shell. Plus you lose the single most brilliant part of CGI: namely, the fact that a single UNIX process starts, runs, and stops, after which the host OS can automatically clean up all of the runtime resources (memory, file handles, client sockets, etc.) the script used.
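To make the contrast concrete, here is a rough FastCGI responder sketch in Go using the standard net/http/fcgi package (the socket path is made up and has to match whatever the web server’s fastcgi config points at). The process listens on a socket and stays alive between requests, which is exactly the extra daemon you now have to supervise:

    package main

    import (
        "fmt"
        "net"
        "net/http"
        "net/http/fcgi"
    )

    func main() {
        // Long-lived listener that the web server (e.g. nginx via
        // fastcgi_pass) connects to; the path is only an example.
        l, err := net.Listen("unix", "/run/myapp.sock")
        if err != nil {
            panic(err)
        }

        handler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            fmt.Fprintf(w, "method=%s path=%s\n", r.Method, r.URL.Path)
        })

        // Serve blocks and handles requests in this same process, so
        // leaked memory or file handles accumulate until you restart it,
        // instead of vanishing when a one-shot CGI process exits.
        if err := fcgi.Serve(l, handler); err != nil {
            panic(err)
        }
    }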
To be fair, that “one-shot” behavior is also the thing that made most people run away from it once they wanted to use languages like Python and Ruby that were very slow to start + load/parse all their dependencies. An extra ~10ms per request might be fine for occasionally-used endpoints on your lightly-loaded website, but most folks understandably balk at O(seconds) of extra startup time tacked on to every request.
Now that the circle is coming back around to single-binary distributions via Go, Rust, and Zig, I would be entirely unsurprised to see CGI have a little renaissance too. It was, after all, the original “cross-language Serverless” platform a lot of us old-timers got to know way back when. :)
Sounds like redbean.dev with Lua is the modern CGI thing (although I am not familiar enough with CGI to claim this with full certainty).
Indeed, yes, the act of deploying said ‘extra code’ may have nothing in common with CGI.
Large Perl scripts back then stored parts of the source in strings, to be evaluated only when needed, in order to shorten the parse time.
My blog is still CGI based (but written in C).
I created my version of a wiki back in 1995 using Perl with CGI. No real libraries or frameworks to learn, everything was just environment variables and strings. Localization was not a concern either. It wasn’t terribly fast, but sufficient on the hardware of that era. Security was… a .htaccess file. Yes, I know. But it was an internal tool, not exposed to the Internet.
In fairness, I think in ’95 it was still reasonable not to be cognizant of the potential threats.
The web server also wasn’t running https, which wasn’t very widely deployed at the time.
When I was at uni, the “server side” portion of web dev was CGI in C (in fairness, it was my uni’s only course that taught “this is C” prior to requiring it for 90% of assessment). There was a group assignment to make a message board (still a thing back then).
Part of the grade was how robust it was, and defending against malicious HTML in the messages was something we spent a lot of time on - this was in the pre-Markdown era and we were also still very much noobs :) - but then the lecturer doing the grading tested robustness by overwriting the database file with random data and seeing if we crashed. That wasn’t part of our threat model, so crash we did, and no points for robustness. Still bitter :)