I’ve been around the internet long enough to consider analysing access logs a form of analytics. Just because it isn’t JS-based and loaded in the client’s browser doesn’t make it any less analytical. I was quite confused reading the start of the post as it flipped from “no analytics!” to “analyse the access logs!” when the two are one and the same in my head.
Same here. I’m still trying to understand why people favour client-side JavaScript for analytics.
Lots of people (myself included) host their blogs via GitHub Pages and don’t get access to access logs.
I use Netlify which will sell me analytics from scraping their logs but not the logs themselves.
So instead of just letting Google track your users via JS scripts, which is already bad, you’re using JS analytics AND giving Microsoft the logs too?
Please don’t shame people for their choice of publishing platform. This kind of gatekeeping is what gives the FLOSS community a bad name.
We also don’t need to be implicitly endorsing a proprietary, megacorporation-owned platform when there are dozens of gratis, less-corporate-y alternatives. Additionally, if it was worded “static cloud hosted”, then the same point/concern would have been conveyed while encompassing all the options instead of name-dropping/advertising for a singular one—one that deserves additional scrutiny.
I’m not endorsing anything, just admitting the reality. We should meet people where they are – if some proprietary solution is “not good”, then let’s tell people what to use instead. Just saying “what you’re doing is bad and you should feel bad” is not going to move the needle.
If you had no knowledge of the topic, you might do a web search for something like “free static site hosting” or “how to host a $TOOL blog”. A lot of those queries come back with masses of results recommending the Microsoft option, especially once you get to ‘influencers’ and video content. You might shrug and say “I guess this is what everyone is doing” without questioning it, going on the content maker’s anecdote. Pushing against that narrative in places like forums is where other folks see it and consider questioning their decisions, which is something I think they should do when it comes to this topic.
I think people sometimes forget that access logs are there, and have been around a lot longer than client-side analytics scripts added to your site’s source. I used Sawmill at fairly industrial strength about 15 years ago; glad to see that it’s still around.
I went looking a couple of years ago and was surprised how little development there’d been in access log analysis scripts. I ended up using some 2005-era thing myself, which worked but was not great. There are also limits to what you can learn from an access log these days. The loss of Referer data is keenly felt.
What has seen development recently is self-hosted analytics scripts. They’re still JavaScript (or tracking pixels), but at least the data only goes to you. Two I’ve squirreled away: Shynet and Plausible.
The other thing you can do is sign up for Google Search Console (or the Bing equivalent). This gives you statistics on what search terms people use to find your site, how many clickthroughs you get, etc. It doesn’t involve any tracking on your side; it just gives you access to a small slice of the data Google is collecting.
I did something similar with my blog. Those vanity metrics hurt more than they help. Also, my blog is now lighter for people to access.
I still have plans to make it 100% JS free. CSS :has() is here and will help me.
Why 100% JS free? I do progressive enhancement. :)
Because I was only using it for things that CSS can do now. So why not? Fewer things to worry about. I’m not against JS per se, but if I have no use for it…
That makes sense!
Underrated, actually. Taken to the extreme, you end up with checkbox hacks with poor accessibility. Aiming for minimal JS is probably what they meant, unless they are lucky enough to really go JS free.
Why “lucky”? You could develop whole web sites JS free if you prefer. I believe, though, that adding a sprinkle of JavaScript for some visual effects is OK.
But that’s the point… there is usually at least something best served by a sprinkle of JavaScript for the best user experience. If you take no-JS to the extreme, some part of the UX often suffers. But we seem to agree: at least for informational websites, keeping JS to a minimum is ideal.
It will be good when support for the popover API is widespread, because that will let you do an accessible toggle of a hamburger menu without JS.
I’ve been using accessibility as a reason to remove hamburger menus from designs in recent years, given that the number of items usually seems to be <5 😃
I like to have analytics. My site enjoys around 30k readers per month and seeing the growth has motivated me a lot to read and write more.
It’s built with Hugo and deployed via GitHub Pages so that I don’t have to manage anything. However, the side effect is that I don’t have access to the server and the logs.
I don’t enjoy managing infra, and Google Analytics works fine in this case. The trick is not to obsess over it. I rarely check mine, but I love to see the pretty graphs going crazy when one of my write-ups hits the front page of Hacker News.
I use awk scripts (https://adi.onl/cl.html, https://adi.onl/cbl.html, https://adi.onl/fl.html), as seen here: https://s.mkws.sh/!
I guess this will be a new data point in your measurements of “high traffic load”.
I used to have analytics on my blog, and I used to obsess over the insights I gained from it. I carefully updated my site in the hope that it would provide visitors a more pleasant stay. It took me years to realize that all of this was for naught: people came to the site to read a post (linked from somewhere else, or from its RSS feed), and then left, no matter how I tweaked its looks, no matter how I tried to provide better “content”.
Upon realizing this (a good couple of years ago), I spent a weekend improving my RSS feed (it now included full posts, and wasn’t limited to the last 10 posts, but had every post on my site). Over the next month or so, I observed that my visits dropped considerably: people were reading my site through RSS (RSS traffic slightly increased due to more subscribers, and the hits on images referenced from posts were roughly the same as the RSS traffic), and it was good enough that they did not need to visit the site directly.
Mission accomplished! I turned off analytics, and I also turned off access logs, because I was never looking at those anyway.
I briefly turned access logs on again a few months ago (for a single month) when I entertained the idea of opting out of search engines, just to see how much traffic I get from those. Turned out it was negligible, so I opted out of them, and turned access logs off again.
It’s a big weight off my shoulders not to need to care about any of this. Can recommend!
It’s funny how many different motivations we have for blogging… part of the value for me is getting things I want to remember into search engines. Sometimes, the lack of (or poor quality of) search results for a topic I’m trying to learn about is what makes me want to post in the first place. And frequently, when I want to remember the details of something I’m sure I wrote about, I find them by hitting a search engine and tacking on site:myblog.example.com. I know I could just grep my tree of markdown files, but I like the convenience of not leaving the search box I was already using :)
For a while now, I have found search results almost completely useless. All the good information I find, I find via blogs, via the people I follow on the Fediverse, or via a carefully curated set of information sites. I make an offline archive of anything I find interesting or useful, so I can easily search it later. As an added bonus, I can search offline! It’s faster and produces more relevant output than any search engine.
If I want to find something I wrote, I git grep. If I want to find something on the blogs I follow, or in my bookmarked Fedi posts, or on any of the sites I decided to use for this purpose, I have an Emacs function for that, so I never have to leave the comfort of my editor/operating system!
I admire your discipline in that practice of archiving anything interesting or useful. I wish I were as disciplined about it. I’ve long harbored ambitions of setting up a personal search engine that I could access from various machines where I wouldn’t check out my blog’s git repo (phone, Pinebook, etc.), but I haven’t acted on it.
What is the value for you in opting out of search engines?
The good feeling of not feeding the (bad) search engines with useful data. Let them fade into oblivion as they index each other’s bullshit generators.
(I have not opted out of all search engines, just those that provide “AI enhanced” results.)
What is funny is that this is basically “how I’m using analytics without JS”, not “no analytics”.
Whereas, 10 years ago, I started arguing for no analytics at all. Not even the logs: https://ploum.net/how-i-learned-to-stop-worrying-and-love-the-web/index.html
And, as my blog attracted some media attention a few times, it has been very funny to see the faces of journalists when they ask how many readers I have and I tell them I have no analytics at all. Nothing. No metrics.
Something similar happens in the Gemini world, where there are two kinds of people: those who feel relieved, because they feel they can write how they please since there are “no analytics”. And those having anxiety attacks, because they feel like they write in a vacuum.
Analytics are the new smoking:
I like the mindset you describe in that blog post; the less you know about who’s reading you, the freer you are to write. I also liked how you managed to make the site bilingual without losing accessibility.
The pervasiveness of all sorts of spyware scripts, even on personal websites, is why I default to disabling JavaScript and whitelisting websites if they deserve it. Personal blogs used to be made by people who wanted to share something interesting, like their hobby; now they’re almost universally attempts at building a “personal brand” by self-styled gurus looking for the mythical passive income. With some exceptions.
By the way, IPs are personally identifiable information, and storing them, even in server logs, without obtaining consent is a violation of GDPR.
As far as I can tell, a personal blog would be exempt:
https://gdpr.eu/companies-outside-of-europe/#:~:text=The%20GDPR%20only%20applies%20to,GDPR%20may%20apply%20to%20you.
The relevant passage from that page, under “Exceptions to the rule”:
There are two important exceptions we should note here. First, the GDPR does not apply to “purely personal or household activity.” So if you’ve collected email addresses to organize a picnic with friends from work, rest assured you will not have to encrypt their contact info to comply with the GDPR (though you might want to anyway!). The GDPR only applies to organizations engaged in “professional or commercial activity.” So, if you’re collecting email addresses from friends to fundraise a side business project, then the GDPR may apply to you.
The second exception is for organizations with fewer than 250 employees. Small- and medium-sized enterprises (SMEs) are not totally exempt from the GDPR, but the regulation does free them from record-keeping obligations in most cases (see Article 30.5).
Yes, this kind of activity is wholly exempt (and even if it wasn’t: most web servers rotate their logs at or within a 30-day interval, making them compliant by default with the GDPR’s response-window requirement).
The idea that any kind of storage of IPs is an immediate GDPR violation seems to be a pervasive myth.
An off-topic question, if I may? I noticed the “#:~:text” fragment in the URL and it got me curious whether it was a gdpr.eu-specific thing. I’ve deduced it’s Chrome/Chromium’s “copy link to highlight” feature. Does anyone know whether it’s their own invention or whether, perhaps, it’s backed by any standard?
It’s their own invention (a WICG proposal), documented here: https://wicg.github.io/scroll-to-text-fragment/
The readme of the associated repository has some further discussions of other approaches: https://github.com/WICG/scroll-to-text-fragment (and people did point at some others in various issues)
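For the curious: the fragment syntax in that draft is #:~:text=textStart[,textEnd], with each piece percent-encoded. A small sketch of building such a link by hand (the helper name is made up):

```python
from urllib.parse import quote

def text_fragment_link(url, start, end=None):
    """Build a "scroll to text" link as described by the WICG Text Fragments draft.

    The fragment looks like #:~:text=textStart[,textEnd]; both pieces are
    percent-encoded so spaces, commas, etc. don't break the syntax.
    """
    fragment = quote(start, safe="")
    if end:
        fragment += "," + quote(end, safe="")
    return f"{url}#:~:text={fragment}"

# Roughly reproduces the gdpr.eu link quoted above:
print(text_fragment_link(
    "https://gdpr.eu/companies-outside-of-europe/",
    "The GDPR only applies to",
    "GDPR may apply to you",
))
```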
Do you have a source for this?
Ah, found one, with a link into GDPR: https://gdpr.eu/eu-gdpr-personal-data/
EDIT: …But see sibling comments; just collecting an IP by itself is not necessarily a GDPR violation.
You can store them for technical reasons, e.g. fraud or attack detection (see fail2ban). And if you keep them for fewer than 30 days, you’re totally fine. So no, storing IPs is not in itself a violation of the GDPR.
I have never heard this mentioned in regards to the GDPR. Do you have a source?
Don’t waste your time on a random comment. First of all, it links PII (a concept from US law) to the GDPR, then it assumes that consent is the only legal ground for data storage. Two obvious bits of nonsense in a single sentence.
In an ideal world, software developers would make an effort to become familiar with a law that deeply affects their day-to-day activities, instead of creating a straw man to either mock as ridiculous, or to threaten people with.
I wouldn’t be surprised if most devs in the US didn’t generally interact with the EU (or the UK, which passed an identical law after Brexit). The mega companies definitely have a presence in the EU, but I don’t think most software jobs are at the mega companies. This would explain the misinformation as well as its propagation: the devs hear misinformation about what they have to do, and never get a reality check because they aren’t really involved in that area (and if no legal action happens, it must be correct, right?).
I would only expect this to actually impact people who work at institutions that do business in the EU. I know offering a free service might technically fall under the GDPR, but I doubt the EU’s ability to enforce it (especially in the US) against a company that does not have a presence in, or accept payments from, the EU.
See sibling comment. It surprised me as well
One can use ipscrub to anonymize IP addresses while still keeping the ability to do some analytics.
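If pulling in a dedicated module isn’t an option, the same general idea is easy to approximate in whatever does your logging: truncate the address, or hash it with a salt you rotate and throw away, so you can still count roughly-unique visitors without keeping raw IPs. A minimal sketch of that idea (not ipscrub itself; the function names and prefix lengths are arbitrary choices):

```python
import hashlib
import hmac
import ipaddress
import secrets

# Rotate this salt (e.g. daily) and never persist it, so yesterday's
# pseudonyms can no longer be linked back to real addresses.
SALT = secrets.token_bytes(16)

def truncate_ip(addr: str) -> str:
    """Coarse anonymisation: zero out the host part of the address."""
    ip = ipaddress.ip_address(addr)
    prefix = 24 if ip.version == 4 else 48
    return str(ipaddress.ip_network(f"{addr}/{prefix}", strict=False).network_address)

def pseudonymise_ip(addr: str) -> str:
    """Stable-per-salt pseudonym: lets you count unique visitors without storing IPs."""
    return hmac.new(SALT, addr.encode(), hashlib.sha256).hexdigest()[:16]

print(truncate_ip("203.0.113.42"))      # -> 203.0.113.0
print(pseudonymise_ip("203.0.113.42"))  # -> an opaque 16-character token
```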
This resonates with me. I’ve had an open ticket (https://github.com/xonixx/xonixx.github.io/issues/14) for quite some time to add analytics to my blog. But then I got to thinking: OK, but what for? It turns out I’m not really interested in greatly expanding my blog’s audience; what I am interested in is attracting (and connecting with) people who share my ideas and can really appreciate my (admittedly, somewhat narrowly specific) content.
This is the same reason I don’t have commenting on my blog, but instead ask readers, at the end of each post, to send me an email with feedback.
The best feedback I get from blogging is comments, and it’s why I try hard to keep the commenting user experience as simple as possible.
I agree ideologically and don’t use trackers. The only thing I kinda wish I were better set up to visualize is the ebb and flow of post views and referrers from humans (minus automated traffic).
I vaguely intend to set something up locally for this. I did look at a Go tool that was nice (the name escapes me at the moment), but in my initial exploration at least it seemed a bit too focused on the top N over a given timeframe, whereas I’m a bit more interested in being able to tell when a post was or is being discussed somewhere, so I can go look and understand the context.
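A rough sketch of that sort of check, assuming the stock nginx/Apache “combined” log format; the regex, the bot keyword list, and the example.com hostname are all placeholders to tune:

```python
#!/usr/bin/env python3
"""Rough sketch: daily hits and top external referrers from a "combined" access log."""
import re
import sys
from collections import Counter, defaultdict
from urllib.parse import urlsplit

# Matches the stock combined log format; adjust if your server logs extra fields.
LINE = re.compile(
    r'(?P<ip>\S+) \S+ \S+ \[(?P<day>[^:]+):[^\]]+\] '
    r'"(?P<method>\S+) (?P<path>\S+)[^"]*" (?P<status>\d{3}) \S+ '
    r'"(?P<referrer>[^"]*)" "(?P<agent>[^"]*)"'
)
BOT_HINTS = ("bot", "crawler", "spider", "curl", "wget", "feed", "monitor")
MY_HOST = "example.com"  # your own domain, so internal navigation is ignored

daily_hits = Counter()
referrers_by_day = defaultdict(Counter)

for line in sys.stdin:
    m = LINE.match(line)
    if not m:
        continue
    if any(hint in m["agent"].lower() for hint in BOT_HINTS):
        continue  # drop obviously automated traffic
    day = m["day"]  # e.g. "10/Feb/2024"
    daily_hits[day] += 1
    ref_host = urlsplit(m["referrer"]).netloc
    if ref_host and MY_HOST not in ref_host:
        referrers_by_day[day][ref_host] += 1

# A new referrer domain showing up with a burst of hits usually means
# the post is being discussed somewhere worth going to look at.
for day, hits in daily_hits.items():
    top = ", ".join(f"{host} ({n})" for host, n in referrers_by_day[day].most_common(3))
    print(f"{day}  {hits:6d} hits   {top}")
```

Feed it the log on stdin and skim for referrer domains you haven’t seen before.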
“I’m a bit more interested in being able to tell when a post was or is being discussed somewhere, so I can go look and understand the context”
That is the only reason I’d like to have anything vaguely resembling analytics for my blog. I usually write either for myself or because I want to post something as fodder for a discussion somewhere, like a chat channel or a forum, where I need the extra flexibility I get from a blog post.
But occasionally something will get noticed and discussed somewhere, and out of personal curiosity I’d like to see that.
That said, my blog is completely hosted on GitLab, published by CI jobs whenever I commit markdown/image files. So I don’t get server-side stuff, and I haven’t seen fit to put any client-side stuff on there. I probably won’t, either, because I really don’t feel like making any big changes to what’s presently a low-impedance setup for writing. If I were starting over today, I’d use Zola instead of Hugo and SourceHut instead of GitLab, but despite wishing I knew where it got discussed, I don’t care enough to do anything that might interfere with my writing or others’ reading just to further that end.
I agree: for a personal blog, analytics isn’t needed. Unless you’re interested in SEO and want to experiment with stuff :)))
This brings back memories. Back in the days when it was still cool to have an 88x31 graphical visitor counter, I bought my first private web hosting for a personal website and blog. It had Apache with SSI, CGI, and Perl. The hosting came with an instance of Webalizer, a C program that builds reports with pie charts and histograms out of access.logs. I was so mesmerised that I checked it all the time.
I don’t run a website now, just a placeholder page on my domain. About two years ago I wondered whether any bots visited the page, so I checked my access.log. I found many probes for stuff like “/wp-admin”. I wanted better insight and, just like the OP, I didn’t want any complex or client-side analytics.
So I looked into whether any “static” access.log analysers were still a thing. Apparently, they are. I tried GoAccess, a nice “top”-like terminal tool, and it gave me the answers I wanted. I guess if I were to use it seriously on a real website, I’d miss some semantic tagging of requests, i.e. help separating regular visitors from bots, etc.
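The tagging doesn’t have to be fancy, either. A crude sketch of the kind of rule-based separation meant here; the keyword and path lists are invented and would need tuning against real traffic:

```python
# Crude request tagger: not a real bot database, just illustrative heuristics.
PROBE_PATHS = ("/wp-admin", "/wp-login.php", "/xmlrpc.php", "/.env", "/phpmyadmin")
BOT_AGENT_HINTS = ("bot", "crawler", "spider", "scan", "python-requests", "curl")
FEED_SUFFIXES = (".rss", ".atom", "/feed/", "/index.xml")

def tag_request(path: str, user_agent: str) -> str:
    """Label a request so reports can separate people from noise."""
    ua = user_agent.lower()
    if any(path.startswith(p) for p in PROBE_PATHS):
        return "probe"        # vulnerability scanning, like the /wp-admin hits above
    if any(hint in ua for hint in BOT_AGENT_HINTS):
        return "bot"          # declared crawlers and scripted clients
    if path.endswith(FEED_SUFFIXES):
        return "feed-reader"  # RSS polling rather than a page view
    return "visitor"

print(tag_request("/wp-admin/setup-config.php", "Mozilla/5.0"))  # probe
print(tag_request("/blog/post/", "SomethingBot/1.0"))            # bot
print(tag_request("/blog/post/", "Mozilla/5.0 (X11; Linux)"))    # visitor
```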
My blog doesn’t have analytics that I know of, and I don’t even look at the logs. To me, it just feels wrong: either I’d be taking a personal interest in who reads my stuff, which I find pretty creepy, or I’d be getting a kick out of the number of people who read it, and that’s just vain.
Don’t even need ’em on your money-making site, either.
I run a relatively small but pays-all-the-bills e-commerce site: no analytics, and only one page has JS, for checkout (Stripe/Shopify). I’m missing all the data to optimize marketing, targeting, blah blah, but I guess I’m okay with that for now – unless it could double my sales? 5x or 10x them?