1. 28

    Possibly unpopular opinions (and a large block of text) incoming:

    C++, Go, Swift, D and Rust all fail to adequately replace C for me. When given the choice, I would likely choose to just stick with C (for the moment; I’ll talk about what I’m considering at the end).

    C++ has so much historical baggage and so many issues that it’s already an immediate turn-off. More than that, it’s explicitly understood that you should carve out a subset of C++ to use as your language, because trying to use the whole thing is a mess of sadness (given that it’s massive at this point). I appreciate the goal of zero-cost abstraction and having pleasant defaults in the standard library, but there are just too many problems for me to take it as a serious choice. Plus, I still have to deal with much of the unfortunate UB from C (not all of it, and honestly, I don’t mind UB in some cases; but a lot of the cases of UB in C that just make no reasonable sense carry over to C++). It should be noted that I do still consider C++ occasionally in a C-with-templates style, but it’s still definitely not my preference.

    Despite how often people place it here, I do not believe Go belongs in this group of candidates. The garbage collection alone makes it unfit for systems programming. I see Go as a very reasonable choice to replace Java (but I don’t use Java whenever I have the choice, so I might not be the best person to ask). There are many other parts of this language that rub me the wrong way, but mostly, I just think it’s not a good systems language (but is instead a great intro language for higher-level stuff).

    Swift is really easy to rule out: It’s not cross-platform. Even if it were, it has all sorts of terrible issues (have they fixed the massive compile times yet?) that make it a no-go.

    D, as far as I can tell, manages to be C++ without the warts in a really lovely way. Having said that, it seems like we’re talking about good replacements for C, not C++, and D just doesn’t cut it for me. GC by-default (being able to turn that off is good, but I’ll still have to do it every time), keeping name mangling by-default, etc. -betterC helps with some of this, but at that point, there’s just not enough reason for me to switch (especially with all the weirdness of there being two de facto standard libraries from different organizations, one of which I think is still closed-source? sounds like I might need to take another look at D; though, again, its emulation of C++ still suggests to me that it won’t quite cut it).

    Rust is the only language on this list that I think is actually a reasonable contender. Sadly, it still suffers from a lot of these issues: names are still mangled by default, the generated binaries are huge (I’m still a little bugged that even C’s hello-world, dynamically linked against glibc, comes out to 6KB), etc.

    But more than all of these things I’ve listed, the problem I have with these languages is that they all have a pretty big agenda (to borrow a term from Jon Blow). They all (aside from C++, which has wandered in the desert for many years) have pretty specific design goals: they try to carve out their ideal of what programming should be instead of providing tools that allow people to build what they need.

    So, as for languages that I think might (someday, not soon really) actually replace C (for me):

    Zig strikes a balance between C (plus compile-time execution to replace the preprocessor) and LLVM IR internals which allow for incredibly fine-grained control (Hello arbitrarily-sized, fixed-width integers, how are you doing?). It also manages to have the best C interop story I have ever seen in a language so far (you can import from C header files no problem, and zig libraries can have their code called from C also no problem; astounding).

    Myr is still really new (as is Zig, really), and has a lot left to figure out (e.g., its C interop story is not quite so nice yet). However, it manages to be incredibly simple and terse for a braced language. My guess is that, in the long run, Myr will actually replace shell languages for me, but not C.

    Jai looks incredibly cool and embraces a lot of what I’ve mentioned above in that it is not a big agenda language, but provides a lot of really useful tools so that you can use what you make of it. However, it’s been in development for four years and there is no publicly available implementation (and I am worried that it may end up being closed-source when it is released, if ever). I’m hoping for the best here, but am expecting dark storms ahead.

    Okay, sorry for the massive post; let me just wrap up a few things. I do not mean to imply with this post that any of the languages above are inherently bad or wrong, only that they do not meet my expectations and needs in replacing C. For a brief sampling of languages that I love which suffer from all the problems I mentioned above and more, see here:

    • Haskell and Idris, but really Agda
    • Ada
    • Lua
    • APL (specifically, the unicode variants)
    • Pony

    They are all great and have brilliant ideas; they’re just not good for replacing C. :)

    Now then, I’ll leave you all to it. :)

    1. 15

      (especially with all the weirdness of there being two de facto standard libraries from different organizations, one of which I think is still closed-source?)

      That was resolved a few years ago. D just has one stdlib, it’s fairly comprehensive, and keeps getting better with each release.

      1. 12

        a few years ago

        a bit of an understatement! The competing library was dropped in 2007.

      2. 10

        Despite how often people place it here, I do not believe Go belongs in this group of candidates. The garbage collection alone makes it unfit for systems programming. I see Go as a very reasonable choice to replace Java (but I don’t use Java whenever I have the choice, so I might not be the best person to ask). There are many other parts of this language that rub me the wrong way, but mostly, I just think it’s not a good systems language (but is instead a great intro language for higher-level stuff).

        Agreed with this. Go is my go-to for high-level glue code between various systems (e.g. AWS APIs, etc.) whenever a Python script would otherwise need dependencies (and thus make me fuck with pip --user or virtualenv or blah blah blah).

        I think there’s a reason Go is dominating the operations/devops tooling world - benefits of static compilation, high level, easy to write.

        Look at the number of hacks Docker needs (reexec, etc.) to work properly in Go; things that would be trivial to do in C.

        1. 3

          Note that Zig is x86-only at the moment. Check “Support Table” on Zig’s README.

          For that matter, Rust is x86-only too, if you want Tier-1 support.

          1. 3

            I’m a big D fan, but I agree that it’s the wrong thing to replace C. BetterC doesn’t really help in this respect, because it doesn’t address the root reason why D is the wrong thing to replace C (which is that the language itself is big, not that the runtime or standard library are). Personally, I think Zig is the future, but Rust has a better shot at ‘making it’, and the most likely outcome is that C stays exactly where it currently is (which I’m okay with). I haven’t looked at Myr (yet), and afaik isn’t Jai targeted at game development? It might be used for systems programming, but I think it might not necessarily do well there.

            It also manages to have the best C interop story I have ever seen in a language so far (you can import from C header files no problem, and zig libraries can have their code called from C also no problem; astounding)

            I think Nim does this, and D for sure does, with d++ (I think this may also help with the C++ emulation? Also, I’m not sure why you’re knocking D’s C++ emulation for its quality when it’s afaik the only language that does even a mediocre job at C++).

            1. 1

              I agree!

              As for knocking D for emulating C++, I did not mean to suggest that doing so is a count against D as a language, but rather a count against it as a replacement for C. I already ruled out C++; if a given language is pretty close to C++, it’s probably also going to be ruled out.

              It’s been a long time since I looked at nim, but generating C code leaves a really poor taste in my mouth, especially because of some decisions made in the language design early on (again, I haven’t looked in a while, perhaps some of those are better now).

              As for Jai, yes it’s definitely targeted at game development; however, the more I look at it, the more it looks like it might be reasonable for a lot of other programming tasks. Again though, at the moment, it’s just vaporware, so I offer no guarantee on that point. :)

            2. 2

              However, it’s been in development for four years and there is no publicly available implementation

              I’m amazed that Jonathan Blow is behind the project; he’s a legend for programming original video games.

            1. 6

              I fully agree with this mostly due to accessibility. I find that more and more websites are harder to read and navigate.

              However, for some problems I don’t have solutions either, without bringing in some JavaScript or changing browser internals:

              • the whole endless-scroll concept works well for things like real-time chat interfaces, but I don’t think you can express it with HTML alone
              • a lot of things like deferred loading of images, which is done in javascript to speed up loading could probably be done by the browser
              • if the browser would let me do with frames what my window manager does with windows …

              Also I suspect people stress over custom design so much because the default stylesheet for the browser actually looks like crap.

              1. 3

                I agree with this. A default stylesheet with better typography would do a lot to reduce the appeal of css frameworks.

                A lot of the reasonable and valid use-cases for javascript probably ought to be moved into html attributes and the browser – things like “on click make a POST request and replace this element with the response body if it succeeds”. Kind of a “pave the cowpaths” approach that would allow dynamic front-ends without running arbitrary untrusted code on the client.

                1. 1

                  “on click make a POST request and replace this element with the response body if it succeeds”.

                  You can actually do that with a target attribute on the form going toward an iframe. It isn’t exactly the same but you can make it work.

              1. 6

                Great article! I wonder whether the future of HTML escaping libraries will lie with something like ammonia, which actually parses the HTML before emitting a sanitized version, instead of simple text replacement - at a certain point, I guess it becomes a better idea to just do what a browser would do in order to ensure that your sanitization worked…

                1. 5

                  Yeah, I prefer using DOM functions for everything, including templating. With the DOM, everything gets escaped in proper context and you can do other sanity checks, like always outputting strictly well-formed stuff. An HTML document isn’t really a string and I prefer to avoid pretending it is.

                  1. 2

                    Do you have a link or example for this method?

                  2. 3

                    Related: DOMPurify uses DOM APIs exposed to JavaScript to ensure that the browser and the sanitizer see the same parsing behavior.

                  1. 3

                    I’m a bit concerned about the XSS security implications as this allows even more scripting from even more places. I guess it is another reason to be very strict on the whitelists for any user content.

                    1. 4

                      Realistically, CSS already has more security implications than anybody really wants to think about. Everyone should already be treating it as attack surface. This API does make things worse; it was previously difficult but possible to reason about the security implications of an isolated CSS snippet, and now it’s not possible without considering the entire page and all its scripts as well. Maybe that change will get more people to use the caution they already should have been using…

                    1. 14

                      I don’t buy it because the real protocol is what you read and write from the file, not that you can read and write files. And if the “file” is a directory, what do the filenames you read and write from/to it mean?

                      So is there really any difference between read(open("/net/clone")) and net_clone();? The author seems to say the former is more loosely coupled than the latter because the only methods are open and read on the noun that is the file…. but really, you are stating exactly the same thing as the “verb” approach (if anything, I’d argue it is more loosely typed than loosely coupled). If a new version wants to add a new operation, what’s the difference between making it a new file that returns some random data you must write code to interpret, and a new method that returns some data you must write code to use?

                      1. 24

                        So is there really any difference between read(open("/net/clone")) and net_clone();?

                        Yes: The fact that you can write tools that know nothing about the /net protocol, and still do useful things. And the fact that these files live in a uniform, customizable namespace. You can use “/net/tcp/clone”, but you can also use “/net.home/tcp/clone”, which may very well be a completely different machine’s network stack. You can bind your own virtual network stack over /net, and have your tests run against it without sending any real network traffic. Or you can write your own network stack that handles roaming and reconnecting transparently, mount it over /net, and leave your programs none the wiser. This can be done without any special support in the kernel, because it’s all just files behind a file server.

                        The difference is that there are a huge number of tools you can write that do useful things with /net/clone that know nothing about what gets written to the /net/tcp/* files. And tools that weren’t intended to manipulate /net can still be used with it.

                        The way that rcpu (essentially, the Plan 9 equivalent of VNC/remote desktop/ssh) works is built around this. It is implemented as a 90-line shell script. It exports devices from your local machine, mounts them remotely, juggles around the namespace a bit, and suddenly, all the programs that do speak the devdraw protocol are drawing to your local screen instead of the remote machine’s devices.

                        1. 5

                          You argue better than I can, but I’ll add that the shell is a human-interactive environment, while C APIs are not. Having a layer that is human-interactive is neat for debugging and system inspection. Though this is a somewhat weaker argument once you get Python bindings or some equivalent.

                          1. 1

                            I was reminded of this equivalent.

                          2. 1

                            But in OOP you can provide a “FileReader” or “DataProvider”, or just a FilePath, abstracting either where the file is or what you are reading from. The simplest version would be the net_clone function above just taking a char* file_path, but in an OOP language the char*, or how we read from whatever the char* points at, can be abstracted too.

                            1. 2

                              Yes, but how do you swap it out from outside your code? The file system interface allows you to effectively do (to use some OOP jargon) dependency injection from outside of your program, without teaching any of your tools about what you’re injecting or how you need to wire it up. It’s all just names in a namespace.

                              1. 0

                                without teaching any of your tools about what you’re injecting or how you need to wire it up

                                LD_PRELOAD, JVM ClassPath…

                          3. 6

                            So is there really any difference between read(open("/net/clone")) and net_clone();?

                            Yes, there is. ”/net/clone” is data, while net_clone() is code.

                            1. 4

                              I don’t buy it because the real protocol is what you read and write from the file, not that you can read and write files

                              Yes - but the read()/write() layer allows you to do useful things without understanding that higher-level protocol.

                              It’s a similar situation to text-versus-binary file formats. Take some golang code for example. A file ‘foo.go’ has meaning at different levels of abstraction:

                              1. golang code requiring 1.10 compiler or higher (uses shifted index expression https://golang.org/doc/go1.10#language)
                              2. golang code
                              3. utf-8 encoded file
                              4. file

                              You can interact with ‘foo.go’ at any of these levels of abstraction. To compile it, you need to understand (1). To syntax-highlight it you only need (2). To do unicode-aware search and replace, you need only (3). To count the bytes, or move/delete/rename the file you only need (4).

                              The simpler interfaces don’t allow you to do all the things that the richer interfaces do, but having them there is really useful. A user doesn’t need to learn a new tool to rename the file, for example.

                              If you compare that to an IDE, it could perhaps store all the code in a database and expose operations on the code as high-level operations in the UI. This would allow various clever optimisations (e.g. all caller/callee relationships could be maintained and refactoring could be enhanced).

                              However, if the IDE developer failed to support regular expressions in the search and replace, you’re sunk. And if the IDE developer didn’t like command line tools, you’re sunk.

                              (Edit: this isn’t just one example. Similar affordances exist elsewhere. Text-based internet protocols can be debugged with ‘nc’ or ‘telnet’ in a pinch. HTTP proxies can assume that GET is idempotent and various caching headers have their standard meanings, without understanding your JSON or XML payload at all.)
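                              To make the abstraction-levels point concrete, here’s a small Python sketch that touches a made-up foo.go only at levels 4 and 3 (the file contents are invented for illustration; no Go tooling involved):

                              ```python
                              import tempfile
                              from pathlib import Path

                              def demo(workdir: str) -> tuple[int, str]:
                                  """Work with a hypothetical foo.go at levels 4 and 3 only."""
                                  src = Path(workdir) / "foo.go"
                                  src.write_bytes("package main // naïve demo\n".encode("utf-8"))

                                  # Level 4: just a file -- count its bytes, rename it.
                                  # No knowledge of Go (or even of text) required.
                                  n_bytes = src.stat().st_size
                                  dst = src.with_name("bar.go")
                                  src.rename(dst)

                                  # Level 3: a utf-8 text file -- unicode-aware search and
                                  # replace, still without understanding it as Go code.
                                  text = dst.read_text(encoding="utf-8")
                                  dst.write_text(text.replace("naïve", "simple"), encoding="utf-8")
                                  return n_bytes, dst.read_text(encoding="utf-8")

                              with tempfile.TemporaryDirectory() as d:
                                  size, result = demo(d)
                                  print(size, result.strip())
                              ```

                              Generic tools (mv, grep, wc) get you a long way before you ever need a parser for level 2 or a compiler for level 1.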

                            1. 2

                              This is pretty close to the article that I’ve been wanting to write. I feel that software interfaces have been careening towards form-over-function for quite some time. It seems like it’s far more important these days that an application or desktop environment look new and fresh than it actually works well. This is reflected in the mass shift from “user interface” (how well people interact with the technology) to “user experience” (how people feel about interacting with the technology).

                              I agree with the author that it’s sad that we’re erasing decades of progress on desktop interface design. I’ve been using computers for three decades and Linux for two decades. I installed Ubuntu 16.04 and really tried hard to give it a fair chance. But after a few weeks, I just could not be productive in it due to silly design decisions and missing features.

                              The one thing I disagree with in the article is that global menu bars are some kind of panacea. I suspect people who champion them “grew up on them” so to speak and are simply too used to them to give them up. I understand and respect that. However, most of the arguments for them fall down when you introduce multiple displays. When there is more than one display, which display gets the global menu? What if your global menu is on screen A and your application on screen C? Do you expect users to mouse all the way over to the left to get to the menu? Should the global menu be moved to a different screen depending on which application has focus? Or should it move depending on which display has the mouse cursor? Or do you just put the global menu on all screens? Everyone will have a different (valid) opinion for each one of these questions.

                              I myself don’t care for global menus not only because the Right Behavior is not clear when there are multiple displays (I use a minimum of 2-3 every single day) but also because although I have a lot of screen real estate, I am typically focused only a very small part (one window) at a given time and it’s more efficient to have everything that concerns that application all in the same area or window.

                              /rant

                              1. 2

                                I can see the point of Fitts’s law, but otherwise I also hate the global menu bar. Any time I use a Mac, it is just weird how it constantly changes when clicking around and is visually disconnected from what it is supposed to manage. Moreover, it doesn’t work at all with sloppy mouse focus anyway (which I was skeptical of at first and now find vastly superior).

                                And btw menus and submenus require quite a bit of precision anyway, hitting the right option can be a pain. There’s more to it than just opening the menu.

                                So what I’ve been considering today is actually making the mouse lock to the active window with ease, so then the window borders become the infinitely spaced target area. Then the menus are still tied to the window, and have infinite mouse target height. But then, how to make the mouse escape the window? It also kinda ruins sloppy focus. I’m thinking maybe put it on a hotkey to toggle window lock, or maybe just make the cursor a little “sticky”, so it requires a little more distance to escape the window border when it has a clickable thing on that side.

                                idk, I am thinking about it, but at the same time, personally, I basically like things the way they are on my computer - per-window menus ftw.

                              1. 0

                                A list of beliefs about programming that I maintain are misconceptions.

                                1. 3

                                    Small suggestion: use a darker, bigger font. There are likely guidelines somewhere, but I don’t think you can go wrong using #000 for text people are supposed to read for longer than a couple of seconds.

                                  1. 3

                                    Current web design seems allergic to any sort of contrast. Even hyper-minimalist web design calls for less contrast for reasons I can’t figure out. Admittedly, I’m a sucker for contrast; I find most programming colorschemes hugely distasteful for the lack of contrast.

                                    1. 6

                                      I think a lot of people find the maximum contrast ratios their screens can produce physically unpleasant to look at when reading text.

                                      I believe that people with dyslexia in particular find reading easier with contrast ratios lower than that of #000-on-#fff. Research on this is a bit of a mixed bag, but offhand I think a whole bunch of people report that contrast ratios around 10:1 are more comfortable for them to read.

                                      As well as personal preference, I think it’s also quite situational? IME, bright screens in dark rooms make black-on-white headache inducing but charcoal-on-silver or grey-on-black really nice to look at.

                                      WCAG AAA asks for a contrast ratio of 7:1 or higher in body text which does leave a nice amount of leeway for producing something that doesn’t look like looking into a laser pointer in the dark every time you hit the edge of a glyph. :)

                                      As for the people putting, like, #777-on-#999 on the web, I assume they’re just assholes or something, I dunno.

                                      Lobsters is #333-on-#fefefe which is a 12.5:1 contrast ratio and IMHO quite nice with these fairly narrow glyphs.

                                      (FWIW, I configure most of my software for contrast ratios around 8:1.)
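                                      For anyone who wants to check those numbers, the WCAG 2.x relative-luminance and contrast-ratio formulas are easy to compute yourself; a minimal Python sketch (hex colors without the ‘#’ prefix):

                                      ```python
                                      # Contrast ratio between two colors per the WCAG 2.x definition.

                                      def linearize(c8: int) -> float:
                                          """Convert one 8-bit sRGB channel to linear light."""
                                          c = c8 / 255
                                          return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

                                      def luminance(hex6: str) -> float:
                                          """Relative luminance of a 6-digit hex color."""
                                          r, g, b = (linearize(int(hex6[i:i + 2], 16)) for i in (0, 2, 4))
                                          return 0.2126 * r + 0.7152 * g + 0.0722 * b

                                      def contrast(fg: str, bg: str) -> float:
                                          """Contrast ratio, from 1:1 (identical) to 21:1 (#000 on #fff)."""
                                          hi, lo = sorted((luminance(fg), luminance(bg)), reverse=True)
                                          return (hi + 0.05) / (lo + 0.05)

                                      print(round(contrast("000000", "ffffff"), 1))  # → 21.0
                                      print(round(contrast("333333", "fefefe"), 1))  # → 12.5, as claimed above
                                      ```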

                                      1. 2

                                        Very informative, thank you!

                                  2. 3

                                    I think the byte-order argument doesn’t hold, given that you mentioned ntohs and htons, which are exactly where byte-order needs to be accounted for…

                                    1. 2

                                      If you read the byte stream as a byte stream and shift them into position, there’s no need to check endianness of your machine (just need to know endianness of the stream) - the shifts will always do the right thing. That’s the point he was trying to make there.

                                      1. 2

                                        ntohs and htons do that exact thing and you don’t need to check the endianness of your machine, so the comment about not understanding why they exist makes me feel like the author is not quite grokking it. Those functions/macros can be implemented to do the exact thing linked to in the blog post.
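                                        To make the shift-based reading both comments describe concrete, here’s a tiny sketch (Python rather than C, purely for illustration; the logic is identical):

                                        ```python
                                        # Reading a 16-bit big-endian value from a byte stream by shifting.
                                        # data[0] is the high byte by definition of the wire format, so this
                                        # works the same on little- and big-endian hosts -- no byte-swapping
                                        # needed if you never type-pun raw bytes into a host-order integer.

                                        def be16(data: bytes) -> int:
                                            """Parse an unsigned 16-bit big-endian integer via shifts."""
                                            return (data[0] << 8) | data[1]

                                        wire = bytes([0x1F, 0x90])  # e.g. TCP port 8080 on the wire
                                        assert be16(wire) == 8080
                                        assert be16(wire) == int.from_bytes(wire, "big")  # same result
                                        ```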

                                  1. 7

                                    Do not top-post.

                                    Keep quoted text small and relevant

                                    Sadly, this advice is just not followed most of the time on the mailing lists I follow. As much as top-posting bothers me, I have to resign myself to the fact that railing against it basically makes me the old man who yells at clouds. All attempts by others to stop it have not worked. I don’t think it’s going away.

                                    1. 1

                                      List managers could stop it by bouncing top-posts.

                                      1. 1

                                        I still reply to e-mails with top posts (although I’m on very few mailing lists these days and contribute to nearly zero).

                                        At least with personal e-mails, it makes more sense for your reply to be at the top and not have to scroll all the way to the bottom.

                                        But whatevs. It’s tabs vs spaces at this point.

                                        1. 3

                                          You shouldn’t have to scroll far - you only quote enough to make it clear what you are talking about, then you talk about it. If you want to see the whole message, you can just go back to the other email.

                                          1. 2

                                            An apt comparison: just like so-called “tabs vs. spaces”, the claim that there are two equivalent options misses the point entirely.

                                            The point is not to uselessly quote everything and put your post at the bottom instead of uselessly quoting everything and putting your post at the top. The point is to stop uselessly quoting everything.

                                          2. 1

                                            This particular battle is lost. I doubt there is any mail client with more than 3% marketshare that does not use top-posting as standard.

                                            In the business world, where clients like Outlook hold sway, someone who doesn’t top post and doesn’t drag along the entire previous conversation (including disclaimer signatures, cutesy “consider the environment before printing this email” PNGs, and potentially embarrassing discussions of someone who was just added as a CC by a 3rd party) is seen as a weirdo.

                                          1. 3

                                            Does anyone know if there’s a simpler alternative to Google Analytics which only shows hit counts? For my site, all I’d love to know is which pages have been viewed how many times. I really don’t care about anything else.

                                            I wish Netlify would provide some sort of basic log analysis of static sites, telling me the view count of each page.

                                            1. 5

                                              If you have access to your web-server logs, GoAccess may be a good candidate. It’s quite easy to use and not really intrusive.

                                              1. 1

                                                I actually don’t since I’m on Netlify. Otherwise this would be an ideal solution.

                                                Most static websites are hosted on either GitHub Pages or Netlify, and (as far as I know) neither of those allows you to see the access logs.

                                                1. 4

                                                  You can host a 1x1 pixel on Amazon S3 and enable logging for the associated bucket. Add a query string to identify the current page. A simple transformation on the logs (to remove original URI, keeping only the one in query string) and you should be able to use GoAccess.
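                                                  A rough sketch of that log transformation in Python (the log lines, field layout, and the `page` query parameter are all made up for illustration; real S3 access logs have more fields, but the quoted request field is the part that matters):

                                                  ```python
                                                  import re
                                                  from collections import Counter
                                                  from urllib.parse import parse_qs, urlsplit

                                                  # Hypothetical access-log lines: the quoted request field holds the
                                                  # pixel URL plus the page identifier we appended as a query string.
                                                  LOG = [
                                                      '... "GET /pixel.gif?page=/about HTTP/1.1" 200 ...',
                                                      '... "GET /pixel.gif?page=/about HTTP/1.1" 200 ...',
                                                      '... "GET /pixel.gif?page=/posts/hello HTTP/1.1" 200 ...',
                                                  ]

                                                  REQUEST = re.compile(r'"(?:GET|HEAD) (\S+) HTTP/[\d.]+"')

                                                  def page_counts(lines):
                                                      """Count hits per page, keyed by the 'page' query parameter."""
                                                      hits = Counter()
                                                      for line in lines:
                                                          m = REQUEST.search(line)
                                                          if not m:
                                                              continue  # skip lines that aren't pixel requests
                                                          qs = parse_qs(urlsplit(m.group(1)).query)
                                                          for page in qs.get("page", []):
                                                              hits[page] += 1
                                                      return hits

                                                  print(dict(page_counts(LOG)))  # → {'/about': 2, '/posts/hello': 1}
                                                  ```

                                                  From there the counts can be dumped to CSV or fed into GoAccess-style reporting.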

                                              2. 1

                                                Does anyone know if there’s a simpler alternative to Google Analytics which only shows hit counts?

                                                I think what you’re looking for is a web counter from the 90’s :)

                                                1. 1

                                                  I don’t! But this sounds like a good service for someone to provide. Something SUPER lightweight. Could even eventually show it on https://barnacl.es

                                                  1. 1

                                                    Back in the day, https://www.awstats.org/ was a thing.

                                                    1. 1

                                                      It still is. I know quite a few customers who still use awstats.

                                                  1. 16

                                                    Reminded me I’ve had Google Analytics code up on my blog since forever for no benefit for me whatsoever. Off it goes!

                                                    1. 2

                                                      Kudos for removing it but I am curious how Google Analytics ends up running on so many sites to begin with?

                                                      1. 11

                                                        It’s free, it’s very easy to setup and understand, and there is a lot of documentation out there on how to integrate it into different popular systems like Wordpress. It’s definitely invasive, but it’s hard to deny that it’s easy to integrate.

                                                        1. 1

                                                          not as easy as doing nothing though… it’s free and easy to crawl around on all fours… that can be invasive too if you crawl under someone’s desk… but this still leaves the question why.

                                                          1. 5

                                                            Because a lot of the time when you’ve just made a site you want to see if anyone’s looking at it, or maybe what kind of browsers are hitting it, or how many bots, or whatever, so you set up analytics. Then time passes, you find out what you wanted to find out, and you stop caring if people are looking at the site, but the tracking code is still there.

                                                            1. 2

                                                              I’d compare it to CCTV cameras in shops. You visit the shop (the website) voluntarily, so the owner can and will track you. We can agree that this is a bad thing under certain conditions, but as long as it’s technically trivial it will be done. No use arguing with what is; you’d need a face mask or Tor to avoid it.

                                                              That said, I’d also prefer it if most pages ran not Google Analytics but something that keeps the data strictly in the owner’s hands. I can wish for the data to be deleted after a while all I want, but my expectation is that all the laws in the world won’t guarantee that with 100% certainty.

                                                          2. 8

                                                            End-user-facing SaaS products are one thing. On a site I run on infrastructure that I run myself I can just look at the httpd logs¹ and doing so is way faster than looking at GA², but if I also bought a dozen other random SaaS products then the companies that run those won’t ship me httpd logs, but they will almost always give me a place to copy-paste in a GA tracking <script>. If I have to track usage on microsites and my main website, it’s nice if the same tracking works for all of them.
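                                                            As a concrete illustration of the “just look at the httpd logs” point: hit counts and unique visitors fall straight out of an access log with a few lines of code. A minimal sketch in Go, with made-up log lines standing in for a real Apache/nginx “combined”-format log:

                                                            ```go
                                                            package main

                                                            import (
                                                            	"fmt"
                                                            	"strings"
                                                            )

                                                            func main() {
                                                            	// Made-up stand-ins for lines from a "combined"-format access log.
                                                            	logLines := []string{
                                                            		`1.2.3.4 - - [10/Oct/2020:13:55:36 +0000] "GET / HTTP/1.1" 200 512`,
                                                            		`5.6.7.8 - - [10/Oct/2020:13:56:01 +0000] "GET /post HTTP/1.1" 200 1024`,
                                                            		`1.2.3.4 - - [10/Oct/2020:13:57:12 +0000] "GET /feed HTTP/1.1" 200 256`,
                                                            	}
                                                            	unique := map[string]bool{}
                                                            	for _, line := range logLines {
                                                            		unique[strings.Fields(line)[0]] = true // first field is the client IP
                                                            	}
                                                            	fmt.Printf("hits: %d, unique visitors: %d\n", len(logLines), len(unique))
                                                            }
                                                            ```

                                                            On a real log file the same numbers come out of `wc -l` and `awk '{print $1}' access.log | sort -u | wc -l`; the point is only that nothing heavier than the log itself is needed.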

                                                            It has some useful features. I believe, offhand, that if you wire up code to tell it what counts as a “conversion event”, GA can tell you out of the box things like “which pages tended to correlate positively and negatively with people subsequently pushing the shiny green BUY NOW button?”

                                                            There’s a large pool of people familiar with it. If you hire a head of marketing³, pretty much every single person in your hiring pool has used GA before, but almost none of them have scraped httpd logs with grep or used Piwik. (Though I would be surprised if they didn’t immediately find Piwik easy and pleasant to use.) So when that person says they require quantitative analysis of visitor patterns in order to do their job⁴, they’re likely to phrase it as “put Google Analytics on the website, please.”

                                                            (¹ GA writes down a bunch of stuff that Apache won’t out of the box. GA won’t immediately write down everything you care about, because you have to tell it what counts as a conversion if you want conversion-funnel statistics.)

                                                            (² I have seriously no idea whatsoever how anybody manages to cope with using GA’s query interface on a day to day basis. It’s the most frustratingly laggy UI that I’ve ever used, and I’m including “running a shell and text editor inside ssh to a server on literally the opposite side of the planet” in this comparison. I think people who use GA regularly must have their expectations for software UI adjusted downward immensely.)

                                                            (³ or whatever job title you give to the person whose pay is predicated on making the chart titled “Purchases via our website” go up and to the right.)

                                                            (⁴ and they do! If you think they don’t, take it up with Ogilvy. He wrote a whole book and everything, you should read it.)

                                                            1. 1

                                                              what’s that book?

                                                              1. 3

                                                                The book is “Ogilvy on Advertising”. It’s not long, the prose is not boring and there are some nice pictures in it.

                                                                The main thing it’s about is how an iterative approach to advertising can sell a boatload of product. That is, running several different adverts, measuring how well each advert worked, then trying another set of variations based on what worked the first time. For measurement he writes about doing things like putting different adverts for the same product up, each with a different discount code printed on it, and then counting how many customers show up using the discount code that was in each of those adverts. These days you’ll see websites doing things like using tracking cookies to work out what the conversion rate was from each advert they ran.

                                                                Obviously the specific mechanisms they used for measurement back then are mostly obsolete now, but the underlying principle of evolving ad campaigns by putting out variations, measuring, then doubling down on the things you’ve demonstrated to work is timeless.

                                                                Ogilvy also writes a little bit about specific practical things that he’s found worked when he put them in adverts in the past, such as putting large amounts of copy on the advert rather than small amounts, font choice, attention-grabbing wording, how to write a CTA, black text on white backgrounds or vice versa, what kinds of photos to run and so on. Many are probably still accurate because human beings don’t change much.

                                                                Many are plausibly wrong now because the practicalities of staring at a glowing screen aren’t identical to those of staring at a piece of paper. If you’re following the advice in the first bit of the book about actually measuring things, then it won’t matter much to you how much is wrong or right, because you’ll rapidly find out for yourself empirically anyway. :)

                                                                Hypothetically, let’s say you’ve done a lot of little-a agile software development: you might feel that the evolutionary approach to advertising is really, really obvious. Well, congratulations, but not all advertising is done that way, and quite a lot of work is sold on the basis of how fashionable and sophisticated it makes the buyer of the advertising job feel. Ogilvy conveys, in much less harsh words, that the correct response to this is to burn those scrubs to the fucking ground by outselling them a hundred to one.

                                                            2. 6

                                                              For me it was probably ego-stroking to find out how much traffic I was getting. I’ve been blogging for more than a decade and not always from hosts where logs were easily accessible.

                                                              1. 4

                                                                What gets me is why people care about how many hits their blog gets anyway. If I write a blog, the main target is actually myself (and maybe, MAYBE, one or two other people I’ll email individually too), and I put it on the internet just because it is really easy to. Same thing with my open source libraries: I offer them for download with the hopes that they may be useful… but it really means nothing to me if you use it or not, since the reason I wrote it in the first place is for myself (or again, somebody who emailed me or pinged me on irc and I had some time to kill by helping them out).

                                                                As such, I have no interest in analytics. It… really doesn’t matter if one or ten thousand people view the page, since it works for me and the individuals I converse with on email, and that’s my only goal.

                                                                So I think that yes, Google Analytics is easy and that’s why they got the marketshare, but before that, people had to believe analytics mattered and I’m not sure how exactly that happened. Maybe it is every random blogger buying into the “data-driven” hype thinking they’re going to be the next John Rockefeller in the marketplace of ideas… instead of the reality where most blogs are lucky to have two readers. (BTW I think these thoughts also apply to the otherwise baffling popularity of Medium.com.)

                                                                1. 1

                                                                  Also, it’s invasive, sure, but it’s also fairly high-value even at the free tier.

                                                                  You get a LOT of data about your users from inserting that tracking info into your site.

                                                                  Which leads me into my next question - what does all this pro-privacy stuff do to such a blog’s SEO?

                                                                  (I know, I know, we’re not supposed to care about SEO - we’re Maverick developers expressing our cultural otherness and doing Maverick-y things…)

                                                                  1. 2

                                                                    Oh, it totally tanks SEO.

                                                                    Alternately, the SEO consultants that get hired by the business request to have GA added anyway, and they force you to bring it in. :(

                                                                    1. 1

                                                                      Google will derank pages that don’t have Google Analytics?

                                                              1. 11

                                                                CGI is beautiful and perfect and it is sad how neglected it is nowadays.

                                                                1. 2

                                                                  CGI is pretty awesome, and I’ve always wanted to find a nice way of using Python with it.

                                                                  Unfortunately there’s a bit of awkwardness when trying to pair CGI with something like Django: having to set up the environment over and over. Also, there are the basic issues of URL routing being coupled to files…

                                                                  I’m sure there’s some nice style of dealing with this, but I haven’t found it yet.

                                                                  1. 1

                                                                    Yeah, I actually rather wonder if many of the advantages that caused folks to go in-process with mod_perl, PHP, etc. have been rendered moot by modern processors and I/O.

                                                                    1. 5

                                                                      I think a lot of those weren’t even CGI per se, but just that perl and php (and other interpreted languages) have nasty startup costs. I often do traditional CGI programs in the natively-compiled D language and while it doesn’t win any benchmarks, it is also good enough for a LOT of things - and very reliable, thanks to process isolation.

                                                                      1. 2

                                                                        Why would you start a CGI script on request when you can just launch a whole container now? /s :P

                                                                        1. 1

                                                                          I’ve also been playing around with Go’s net/http/cgi package, which is quite nice, since it uses the same backend as the regular http package, as well as the fcgi package. One can even bundle them together into one binary that chooses what to do depending on the file name (example: a mini lobste.rs clone I wrote a while ago).

                                                                          And in my experience, which wasn’t the worst, it was quite enough. I sometimes have the feeling that modern web frameworks have created a fear of these simpler and often sufficient solutions (for small to mid-size use cases) in order to promote their own projects - unjustifiably.
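                                                                          The “one binary, mode chosen by file name” idea can be sketched roughly like this (the helper names and the `.cgi` naming convention are my invention for illustration, not the actual clone’s code):

                                                                          ```go
                                                                          package main

                                                                          import (
                                                                          	"fmt"
                                                                          	"net/http"
                                                                          	"net/http/cgi"
                                                                          	"path/filepath"
                                                                          )

                                                                          // modeFor picks the serving mode from the executable's file name:
                                                                          // install the binary as "app.cgi" and it behaves as a CGI program,
                                                                          // as plain "app" and it runs a standalone HTTP server.
                                                                          func modeFor(argv0 string) string {
                                                                          	if filepath.Ext(argv0) == ".cgi" {
                                                                          		return "cgi"
                                                                          	}
                                                                          	return "http"
                                                                          }

                                                                          // serve dispatches one handler to whichever transport modeFor chose.
                                                                          func serve(argv0 string, h http.Handler) error {
                                                                          	switch modeFor(argv0) {
                                                                          	case "cgi":
                                                                          		return cgi.Serve(h) // speaks CGI over stdin/stdout
                                                                          	default:
                                                                          		return http.ListenAndServe(":8080", h)
                                                                          	}
                                                                          }

                                                                          func main() {
                                                                          	// Only demonstrate the dispatch decision here; actually
                                                                          	// calling serve() would block waiting for requests.
                                                                          	fmt.Println(modeFor("app.cgi"), modeFor("app"))
                                                                          }
                                                                          ```

                                                                          The same http.Handler serves both transports, so switching deployment styles is just a rename (or, with the fcgi package, one more case in the switch).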

                                                                          1. 1

                                                                            Yeah, the library I wrote for D also uses the same interface for cgi, fcgi, scgi, and http implementations (the last both with pre-forked processes, to provide some segfault isolation as well as simple multiplexing, and with one-thread-per-connection, primarily for compatibility on non-Linux systems where fork doesn’t work the same way), so swapping them out is often as simple as a recompile.

                                                                            I don’t even think CGI is really dead, but rather just slightly extended or wrapped by everyone individually. Python’s WSGI and Ruby’s Rack show their CGI heritage and compatibility, etc.