Reminded me I’ve had Google Analytics code up on my blog since forever for no benefit for me whatsoever. Off it goes!
Kudos for removing it, but I’m curious: how does Google Analytics end up running on so many sites to begin with?
It’s free, it’s very easy to set up and understand, and there is a lot of documentation out there on how to integrate it into different popular systems like WordPress. It’s definitely invasive, but it’s hard to deny that it’s easy to integrate.
Not as easy as doing nothing, though… it’s also free and easy to crawl around on all fours… that can be invasive too, if you crawl under someone’s desk… but this still leaves the question of why.
Because a lot of the time when you’ve just made a site you want to see if anyone’s looking at it, or maybe what kind of browsers are hitting it, or how many bots, or whatever, so you set up analytics. Then time passes, you find out what you wanted to find out, and you stop caring if people are looking at the site, but the tracking code is still there.
I’d compare it to CCTV cameras in shops. You visit the shop (the website) voluntarily, so the owner can and will track you. We can agree that this is a bad thing under certain conditions, but as long as it’s technically trivial, it will be done. No use arguing with what is; you’d need a face mask or Tor to avoid it.
That said, I’d also prefer if it wasn’t Google Analytics on most pages but something that keeps the data strictly in the owner’s hands. I can wish for it to be deleted after a while all I want but my expectation is that all the laws in the world won’t change that to a 100% certainty.
End-user-facing SaaS products are one thing. On a site I run on infrastructure that I run myself I can just look at the httpd logs¹ and doing so is way faster than looking at GA², but if I also bought a dozen other random SaaS products then the companies that run those won’t ship me httpd logs, but they will almost always give me a place to copy-paste in a GA tracking <script>. If I have to track usage on microsites and my main website, it’s nice if the same tracking works for all of them.
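To make the httpd-log point concrete, here’s the kind of quick-and-dirty counting I mean, sketched in Python with a few made-up log lines (a real run would read from wherever your httpd writes its combined-format access log):

```python
from collections import Counter

# A few lines in Apache "combined" log format; normally you'd read these
# from /var/log/apache2/access.log or wherever your httpd writes them.
log_lines = [
    '1.2.3.4 - - [10/Oct/2018:13:55:36 +0000] "GET /index.html HTTP/1.1" 200 2326 "-" "Mozilla/5.0"',
    '1.2.3.4 - - [10/Oct/2018:13:55:40 +0000] "GET /about.html HTTP/1.1" 200 1024 "-" "Mozilla/5.0"',
    '5.6.7.8 - - [10/Oct/2018:13:56:01 +0000] "GET /index.html HTTP/1.1" 200 2326 "-" "Googlebot/2.1"',
]

# Hits per path, skipping obvious bots: the request path is the 7th
# whitespace-separated field in this format.
hits = Counter(
    line.split()[6]
    for line in log_lines
    if not any(s in line.lower() for s in ("bot", "crawler", "spider"))
)
for path, count in hits.most_common():
    print(count, path)
```

It won’t give you conversion funnels, but for “is anyone looking at this page” it’s hard to beat.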
It has some useful features. I believe, offhand, that if you wire up code to tell it what counts as a “conversion event”, GA can tell you out of the box things like “which pages tended to correlate positively or negatively with people subsequently pushing the shiny green BUY NOW button?”
There’s a population of people familiar with it. If you hire a head of marketing³, pretty much every single person in your hiring pool has used GA before, but almost none of them have scraped httpd logs with grep or used Piwik. (Though I would be surprised if they didn’t immediately find Piwik easy and pleasant to use.) So when that person says that they require quantitative analysis of visitor patterns in order to do their job⁴, they’re likely to phrase it as “put Google Analytics on the website, please.”
(¹ GA writes down a bunch of stuff that Apache won’t out of the box. GA won’t immediately write down everything you care about either, because you have to tell it what counts as a conversion if you want conversion funnel statistics.)
(² I have seriously no idea whatsoever how anybody manages to cope with using GA’s query interface on a day to day basis. It’s the most frustratingly laggy UI that I’ve ever used, and I’m including “running a shell and text editor inside ssh to a server on literally the opposite side of the planet” in this comparison. I think people who use GA regularly must have their expectations for software UI adjusted downward immensely.)
(³ or whatever job title you give to the person whose pay is predicated on making the chart titled “Purchases via our website” go up and to the right.)
(⁴ and they do! If you think they don’t, take it up with Ogilvy. He wrote a whole book and everything, you should read it.)
The book is “Ogilvy on Advertising”. It’s not long, the prose is not boring and there are some nice pictures in it.
The main thing it’s about is how an iterative approach to advertising can sell a boatload of product. That is, running several different adverts, measuring how well each advert worked, then trying another set of variations based on what worked the first time. For measurement he writes about doing things like putting up different adverts for the same product, each with a different discount code printed on it, and then counting how many customers show up using the discount code from each advert. These days you’ll see websites doing things like using tracking cookies to work out what the conversion rate was from each advert they ran.
Obviously the specific mechanisms they used for measurement back then are mostly obsolete now, but the underlying principle of evolving ad campaigns by putting out variations, measuring, then doubling down on the things you’ve demonstrated to work is timeless.
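The discount-code measurement loop is simple enough to sketch. Here’s a toy version in Python with entirely made-up codes and circulation numbers, just to show the shape of the calculation:

```python
from collections import Counter

# Hypothetical redemption records: which discount code each customer used.
# Each code was printed on exactly one ad variant.
redemptions = ["SAVE10A", "SAVE10B", "SAVE10A", "SAVE10C", "SAVE10A", "SAVE10B"]

# How many copies of each ad variant were circulated (made-up numbers).
circulation = {"SAVE10A": 1000, "SAVE10B": 1000, "SAVE10C": 1000}

counts = Counter(redemptions)
rates = {code: counts[code] / shown for code, shown in circulation.items()}

# Double down on the best-performing variant next round.
best = max(rates, key=rates.get)
print(best, rates[best])  # SAVE10A 0.003
```

The tracking-cookie version is the same arithmetic with impressions and conversions measured automatically instead of counted at the till.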
Ogilvy also writes a little bit about specific practical things that he’s found worked when he put them in adverts in the past, such as putting large amounts of copy on the advert rather than small amounts, font choice, attention-grabbing wording, how to write a CTA, black text on white backgrounds or vice versa, what kinds of photos to run and so on. Many are probably still accurate because human beings don’t change much.
Many are plausibly wrong now because the practicalities of staring at a glowing screen aren’t identical to those of staring at a piece of paper. If you’re following the advice in the first bit of the book about actually measuring things, then it won’t matter much to you how much is wrong or right, because you’ll rapidly find out for yourself empirically anyway. :)
Hypothetically, let’s say you’ve done a lot of little-a agile software development: you might feel that the evolutionary approach to advertising is really, really obvious. Well, congratulations, but not all advertising is done that way, and quite a lot of work is sold on the basis of how fashionable and sophisticated it makes the buyer of the advertising job feel. Ogilvy conveys, in much less harsh words, that the correct response to this is to burn those scrubs to the fucking ground by outselling them a hundred to one.
For me it was probably ego-stroking to find out how much traffic I was getting. I’ve been blogging for more than a decade and not always from hosts where logs were easily accessible.
What gets me is why people care about how many hits their blog gets anyway. If I write a blog, the main target is actually myself (and maybe, MAYBE, one or two other people I’ll email individually too), and I put it on the internet just because it is really easy to. Same thing with my open source libraries: I offer them for download in the hope that they may be useful… but it really means nothing to me whether you use them or not, since the reason I wrote them in the first place is for myself (or, again, somebody who emailed me or pinged me on IRC when I had some time to kill by helping them out).
As such, I have no interest in analytics. It… really doesn’t matter if one or ten thousand people view the page, since it works for me and the individuals I converse with on email, and that’s my only goal.
So I think that yes, Google Analytics is easy and that’s why they got the marketshare, but before that, people had to believe analytics mattered and I’m not sure how exactly that happened. Maybe it is every random blogger buying into the “data-driven” hype thinking they’re going to be the next John Rockefeller in the marketplace of ideas… instead of the reality where most blogs are lucky to have two readers. (BTW I think these thoughts also apply to the otherwise baffling popularity of Medium.com.)
Also, it’s invasive, sure, but it’s also fairly high value even at the free level.
You get a LOT of data about your users from inserting that tracking info into your site.
Which leads me into my next question - what does all this pro-privacy stuff do to such a blog’s SEO?
(I know, I know, we’re not supposed to care about SEO - we’re Maverick developers expressing our cultural otherness and doing Maverick-y things…)
Oh, it totally tanks SEO.
Alternately, the SEO consultants that get hired by biz request to have GA added anyways and they force you to bring it in. :(
It was quite predictable. Their incentives as a VC-backed, for-profit company aiming for a massive IPO are to lock in as many people as possible. Interoperability works against profitable lock-in. This is why rich software companies either fight, subvert, or cripple it where possible. So, Slack eventually would ditch that. I doubt they put a lot of effort into maintaining its quality either if it was a marketing gimmick. I don’t use Slack, though, so I can’t say.
Interop feels a lot like what some leaders said about democracy:
It’s like a train. You get off when you’ve reached your destination.
Honestly, Slack to me has become a lot more than just chat, and I can see how they can’t squeeze their model of chat into IRC anymore. Threads are used very extensively by my team, and I can see how that’s hard to fit into IRC. Rich content messages from apps, images, and posts are basically impossible to fit into IRC. I agree that all those things don’t fit some people’s idea of an ideal workflow, but they’ve become crucial for a lot of people on Slack, and kind of break in IRC.
I think that the features you mention could be mapped to IRC, with some loss of course, but IRC users are (maybe?) used to a simpler experience.
IIRC, less choice is often touted as a good design practice. But Slack is removing the simple thing in favour of the bells and whistles. It’s not a surprise, but it’s sad.
hard to fit into IRC
Could you be more specific? This is a Slack-IRC gateway using the recent IRCv3 drafts for threads, reactions and rich content messages: https://twitter.com/irccloud/status/971416931373854721
As far as I can see, IRC can handle all these just fine.
It’s in the ‘wrong’ place in my stack, but the wee-slack plugin mentioned by @oz claims to have thread support. As a WeeChat plugin has access to windows and buffers I can imagine that being a smoother experience that a plugin in the otherwise ‘correct’ place: the bouncer.
Messages from apps are or could be notices in IRC, and images appear as links that I can click through to see using a web browser. It is certainly true that the more a tool tries to structure a conversation the more difficult it becomes to map that to the IRC protocol. That said, I’m absolutely open to retaining the ability to chat from an IRC client by fixing problems anywhere and everywhere they need to be fixed. There is no fundamental reason a thread feature can’t work outside of the official client.
It’s in the ‘wrong’ place in my stack, but the wee-slack plugin mentioned by @oz claims to have thread support. As a WeeChat plugin has access to windows and buffers I can imagine that being a smoother experience that a plugin in the otherwise ‘correct’ place: the bouncer.
Yeah I can see where you’re coming from. I love wee-slack, and would use it if it had Enterprise Grid support. I just think that Slack is making more and more design decisions that make it hard to shoe-horn back into IRC.
I just think that Slack is making more and more design decisions that make it hard to shoe-horn back into IRC.
If not IRC, then an open, extended version of it, or a new protocol with a reference client. Worst case, important stuff like messages stays in the open system while the extra bells and whistles end up in the proprietary one. That means less transition cost later if people want to ditch Slack for something better. An open reference implementation in use in a lot of environments would also give them more testing of their protocols. They definitely have the money for it at their revenue levels.
They’re locking it up instead, since it’s more profitable in the long run for the founders and investors. The good news is they might have at least inspired some revamps of IRC or chat that will be done better for us, without their problems. I think I’ve already seen some like that, but we’ve got to wait and see who gets a sound business model going.
I’ve used the Jabber gateway to connect to HipChat and the IRC gateway to connect to Slack. Hands down, the Slack gateway was the superior experience. You could, to be sure, tell you were not connecting to a real IRC server. The experience was remarkably good anyway. By comparison, my messages into HipChat would sometimes take hours (actual multiple hours) to be received, completely crippling my ability to participate.
What about Stack Exchange/Discourse’s solution to limit the depth of replies in a discussion to one level? See Jeff Atwood’s article Web Discussions: Flat by Design for details. I like this solution but I am unsure how others think about it.
For me, it would be a regression. I find indented threads are the only design I’ve seen so far that makes this kind of long discussion followable.
The two times I’ve designed commenting, it’s been like that. Top level comment, then linear chain of replies. It supports the typical conversation quite well. Somebody posts a link, I ask what’s a monad, somebody answers. It has its own pathological cases, with a dozen people arguing back and forth in a big jumble, but it’s not necessarily worse than the tree model in that case. The tree model appeals to people because it’s “technically correct” but sometimes worse is better.
The only reason I like the tree model above one-level conversations is that a tree that becomes “toxic” can be hidden entirely from the discussion. It helps when a “troll” comment gets posted: all the conversation related to that comment, as well as the comment itself, can just be collapsed out of the discussion instead of polluting the whole conversation flow. Kind of like the “comment score below threshold” on Reddit. Even if lobste.rs didn’t want to auto-hide these threads, I think it’s useful to be able to do that myself.
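The collapse behaviour falls out of the tree model almost for free: hiding a node hides everything beneath it. A toy sketch (the data and the threshold are invented for illustration):

```python
# A toy comment tree: each node is (text, score, children). Collapsing a
# "toxic" subtree is just skipping a node and everything under it.
def render(comment, threshold=-3, depth=0):
    text, score, children = comment
    if score < threshold:
        # The whole subtree under this comment disappears with it.
        return [f"{'  ' * depth}[comment below threshold, collapsed]"]
    lines = [f"{'  ' * depth}{text}"]
    for child in children:
        lines.extend(render(child, threshold, depth + 1))
    return lines

thread = ("Nice article!", 5, [
    ("You're all idiots", -7, [
        ("Please be civil", 2, []),
    ]),
    ("Agreed, good read", 3, []),
])

print("\n".join(render(thread)))
```

Note that the civil reply under the troll vanishes along with it, which is exactly the “collapse the whole flow” behaviour described above. A flat model has no subtree to hide, so each reply has to be moderated individually.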
Personally, I quite like threaded discussion, as it helps to keep track of who’s replying to whom, and to separate different topics as comments diverge.
I’m working on getting a private beta of deps.co out to some early users. One of the big tasks is getting the servers set up with Terraform. I’d already done a ton of work setting up the supporting infrastructure, but this was my first time really using systemd in anger, and it took quite a lot of time to figure out some dependency cycles for running my main app. I’m also digging into Varnish for the first time, although I’m sort of familiar with it from setting up Fastly for Clojars.
What’s your differentiation from something like Artifactory? Not trying to say it’s better, honestly curious.
Sure, great question. Hosted Artifactory is a great option if you have a large organisation, or you want to store packages of many different kinds. The downside is that it is fairly expensive to run and somewhat complicated to manage and configure. This is in part because each customer runs on their own VM.
Deps is designed for smaller JVM based teams that want simpler management and browsing. Crucially it runs as a multi-tenant service, so we don’t need to run a VM per customer. In exchange you get a simpler interface, higher availability, and cheaper pricing. It will be better for some people, but not for everyone, particularly if you need to handle multiple kinds of packages in one system.
Just make sure you use this as a template and don’t actually run it as-is - it installs a user and copies SSH keys that aren’t yours.
The article it’s responding to on Lobsters.
Hey - I think you linked the same article this submission is about, and not the original? Did you mean to link the Lobste.rs article or the original?
If you’re not even squeezing real fruit, then what is the point of this “juicer”? Why would I buy Juicero packs, which require a $400 can-opener, when I can just get a 12-pack of Naked fruit juice?
The Naked juice doesn’t have a QR code on it to prevent you from drinking it a day after expiration.
There is unfortunately a well-known exploit in that expiry mechanism, which can lead to careless drink-after-expiry vulnerabilities!
The bags last for 5-7 days after which the machine supposedly refuses to process them.
What advantage does this machine deliver which bottled, cold-pressed juice does not?
There’s an alternative on kickstarter which at least allows you to fill your own bags of fruit.
I don’t like the tagline for this product. It says “Juicing without the cleaning”, but:
Chop fruit and vegetables into pieces roughly the size of a dollar coin for maximum yield
… which means you have to clean the knife and cutting board. And if you don’t want to use the single-use bags, you have to clean the bag between each use, AND put in a new “cotton filter”, which is USD $0.20.
What they mean is you don’t have to clean the machine. A bit misleading…
[Comment removed by author]
It’s “Engine-X” - https://www.nginx.com/resources/wiki/community/faq/
The problem is not that LetsEncrypt issued those certificates. It’s that we taught people that they should look for the green lock to tell whether a website is legitimate.
Well, it is.
If the green lock is right next to https://we-steal-from-your-paypal-account.mysite.com, then the site really is we-steal-from-your-paypal-account.mysite.com.
I think he means that people think that if the green lock is there, that the website is somehow more “trustworthy”. When in fact, it’s just a measure of connection security. So someone sees the green lock on https://we-steal-from-your-paypal-account.mysite.com and thinks that means it is “trustworthy”. Most people I know have no idea what the green lock means other than “it’s good to look for when I online bank”.
Most modern browsers do attempt to also convey information about the owner of the site in the URL bar, where available, and distinguish that from the connection security status. A green lock on its own means just that the connection is secure (but the site could be anything), while a green lock with text next to it, like “JPMorgan Chase and Co. (US)”, which is what shows for me in Firefox and Chrome when I visit Chase, conveys that the connection is secure and the site has also been authenticated by the CA as owned by “JPMorgan Chase and Co. (US)”. I think many users are likely unaware of how to interpret these distinctions, though.
This is not browser specific. The ownership information is shown for EV certificates (“extended validation”). Let’s Encrypt offers DV certs only, which means all they verify is that the person requesting the cert really owns the domain.
I agree. If anything, I’d say it makes more sense for browsers to take on this role rather than the CAs. Perhaps browsers could warn users if they’re about to send secrets to a site with a domain that contains or is a misspelling of one of Alexa’s top 500 domains.
This certainly isn’t a perfect solution. It might not even be a good one. But I don’t think a CA filtering which domains are allowed is a good solution either.
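A minimal sketch of that warning idea, with a stand-in for the popular-domains list and an arbitrary distance threshold (both are my assumptions, not anything a browser actually ships):

```python
def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

# A stand-in for "Alexa's top 500": the real list would be much longer.
POPULAR = ["paypal.com", "google.com", "chase.com"]

def looks_like_typosquat(domain: str, max_distance: int = 2) -> bool:
    """Warn if the domain is a near-miss of a popular one (but not exact)."""
    return any(0 < edit_distance(domain, d) <= max_distance for d in POPULAR)

print(looks_like_typosquat("paypa1.com"))  # True
print(looks_like_typosquat("paypal.com"))  # False: exact match, not a typo
```

The obvious weakness is the same one mentioned above: it only catches near-misses of a fixed list, and the list has to live somewhere.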
I never taught anyone that. I taught them it means that 3rd parties can’t eavesdrop on their conversation with that server.
I got a YubiKey over the weekend. It’s a really cool device and, sadly, not as many places can use it as I hoped. So far I’ve only been able to tie it to my Gmail and my GitHub accounts. I have a feeling I’m not even close to making full use of it though. If anyone has any helpful advice on how to make better use of it, I’m all ears.
I LOVE my YubiKey (got a bunch at BlackHat USA 2015). I have three - one 4 Nano in my desktop at home, one on my keychain, and one in a fireproof safe. I use LastPass for all my emails and it’s behind my YubiKey, but I still use Google Authenticator for a bunch of two-factor sites (except GitHub, Fastmail, and Google). I do wish more people implemented it as a second factor, but having it on my LastPass at least makes me feel safer.
I’ve had a YubiKey for a while now but not really used it enough - I’m pretty much only using it for GitHub right now. I’ve been meaning to do all sorts of things (GPG key storage, system logins) but just not had the tuits. Would be curious to hear what others are doing.
BTW, make sure you get a second one, set it up as a backup and store it somewhere safe - losing your key and getting locked out of services would be a Bad Thing.
Curious to see how this turns out, can this work without corporate backing? Uber has been paying a lot of legal fees from what I’ve understood.
Probably the biggest problem is adoption. Assuming that’s overcome, I don’t see what could make this illegal. It’s essentially a dispatch service, which is already how most cab companies operate. The biggest difference from Lyft is that this isn’t trying to hire and pay drivers, but instead connects drivers with passengers paying in cash (bitcoin is on the roadmap).
Note: IANAL
I’m just going to assume that in some places, if you accept payment for driving someone around, you are considered a “taxi / limo service” and would have to follow ordinances and whatnot. Isn’t that what Uber has run into?
Yeah, for sure. Uber / Lyft have problems because there are taxi and limo commissions in various cities. I wasn’t considering this as something everyday people would just do, but rather an enhancement to existing taxis. My mistake!
Trying to figure out why Sauce Labs is so freaking hard to integrate end-to-end tests with on one of my pet projects. Almost every googleable page for “protractor and saucelabs” is a 404 on Sauce Labs’ own site :/
This blog post is off the mark. Matt’s post is pretty clear: https://ma.tt/2016/10/wix-and-the-gpl/
Wix appear to simply be violating WordPress' copyright.
I actually feel like Matt’s edit came across very well. Addressed every point without actually sounding condescending to me, which I feel like is hard on the internet.
Are his assertions correct that if a tool uses a GPL library then the whole tool must be GPL? I thought there are ways to distribute where the GPL portions are isolated.
Yes, using a GPL library requires the entire tool to be GPL’d. If this isn’t the desired behavior, you can use the LGPL.
Some libraries are licensed with the full GPL on purpose, such as GNU Readline. It’s a very useful library, and many software authors have chosen to GPL their entire app rather than find a replacement readline implementation.
Yep, or you can stick the GPL on a program that gets talked to through pipes or network sockets and only open-source that part of your overall product–unless the code is AGPL'ed, which is a whole different kettle of fish as I understand it.
It bugs the hell out of me that people don’t honor these licenses, especially new hackerpreneurs that somehow think that the open source/free software stuff is just whimsy. There’s a reason these things have been done.
Yep, or you can stick the GPL on a program that gets talked to through pipes or network sockets
Bingo! Used that trick for a long time. Didn’t work for high-performance or embedded code so well since computers were too slow. They’re not any more. I’m surprised I didn’t see even more use of this technique out there by proprietary vendors. So far, they’re preferring simply not re-distributing the code while running computations locally in their clouds for extra lockin. That makes sense [for them] too.
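The mechanics of the pipe arrangement are easy to sketch. Here the “GPL’d helper” is simulated inline with a tiny Python program so the example is self-contained; in real life it would be a separate GPL’d executable, and whether this arrangement actually satisfies the GPL in a given case is a legal question this sketch doesn’t settle:

```python
import subprocess
import sys

# Hypothetical arrangement: the GPL'd part is a standalone program that
# reads requests on stdin and writes results on stdout; the proprietary
# part only ever talks to it over that pipe, never links against it.
helper_source = "import sys\nfor line in sys.stdin:\n    print(line.strip().upper())\n"

proc = subprocess.run(
    [sys.executable, "-c", helper_source],  # stands in for the GPL'd binary
    input="hello from the proprietary side\n",
    capture_output=True,
    text=True,
)
print(proc.stdout.strip())  # HELLO FROM THE PROPRIETARY SIDE
```

As noted above, the per-request process/pipe overhead is what used to make this impractical for high-performance or embedded code.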
I find 20px font sizes obnoxious on my Retina display. I almost always have to zoom out, I don’t know if I agree with this at all, at least for me.
I use the Reader View in iOS Safari a fair amount. There’s other readability things for android I’m sure.
The fact that doing so will prevent you from running VirtualBox or the like is something I can see being a big problem for developers. I can’t think of any developer I know who doesn’t have some kind of virtualization running on their workstation. I can see the “don’t put this evil on me, please, IT” conversations.
Does anybody know if with Hyper-V you can run any kind of generic virtual machines? Noooo idea how it works.
Hyper-V works like Xen more than anything else. The host OS is actually a privileged VM.
The big problem with client Hyper-V, quite frankly, is that it sucks: poor OS compatibility, and you need to RDP into a VM to get basic stuff like sound, or even decent GUI performance.
But, I doubt they’d cripple Edge like that, right? So, presumably they have some new tricks for Hyper-V that will make the experience better?
In this case, you’re not dealing with any of Hyper-V’s suck - that is, the whole experience with the paravirtual devices on the console. VT-d being a requirement also probably involves some voodoo to make it blend in.
It’s worth noting LSASS got moved into a separate VM when possible on Windows 10 as well.
Yeah, the article mentions that Windows 10 is Qubes like in some ways. But, I wonder how they do the rendering for these apps, and whether or not it’s exposed in a way that you could start a Linux VM, and show X11 in a window. If so, maybe that’d solve the poor GUI performance that you speak of?
Also, this is kind of a perfect example of why X11 is great. Being able to send graphics/interactions over a socket to be displayed seems like a much bigger win than effectively taking screenshots of the rendered screen like RDP / VNC do. On a Linux machine with Docker, I could effectively do the same thing being done with Edge by just exposing my X11 unix socket to each container that wants to display something. More info here.
Windows has RemoteApp and RemoteFX, but I wonder with VT-d being involved, it might be using something different instead, with lower overhead.
The big problem with X11 forwarding is that it won’t forward 3D acceleration, sound, or printing. RDP (with RemoteFX) can do this.
Hypervision (is that a word?) in general can be nested, but “Type-1” or “bare-metal” hypervisors typically don’t like to coexist with other hypervisors.
You can run VirtualBox and VMware Workstation next to each other (both Type-2, the app kind), but you cannot run VirtualBox under Xen.
Inability to nest just means the virtualization is incomplete. VMware can virtualize/emulate VT-x, running “bare metal” hypervisors inside. Hyper-V cheats in some way to allow itself to nest, but doesn’t provide access to VT to the VM. I might classify that as a bug, or at least a missing feature.
VMware can virtualize/emulate VT-x, running “bare metal” hypervisors inside.
Case in point: This is how you’re supposed to test ESXi - you’re running VMware inside VMware, quite possibly to further nest turtles.
To be fair, Gitlab certainly wasn’t the first to have project boards (assuming that’s what you mean by the quotes around pioneered). But I actually like thinking that Github is responding to Gitlab. I think that they’ve been too comfortable at the “top” of the pack, and the competition is obviously spurring on new features and growth. I’m all for the two competing, I use both for various projects.
I agree. I switched to GitLab for a few projects and, after an initial phase of confusion, I like it better than GitHub now. It really is a nice project.
You cannot imagine the number of times I have successfully signed up for a website with an email address that looks like me@example.email. Yes, .email is a completely valid top-level domain, and yes, I own a domain under it (don’t ask why). I use the site just fine. Then the site decides to reset my password due to a data breach (I don’t fault them for that), but my email is marked as “invalid” by the forgotten-password form.
When I have a valid account on their site with that email.
If you can’t tell, I’m bitter.
I have had this happen with plus addressing (user+sitename@example.com) as well. Account creation works, but then some other component (login, password reset, etc) just refuses to accept it.
That always makes me super happy /s.
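For what it’s worth, it’s easy to see how sites end up here: an over-strict pattern passes the obvious test cases and silently rejects the rest. A sketch of the failure mode (the NAIVE pattern is my invention, but it’s representative of the genre):

```python
import re

# An over-strict pattern like the ones these sites seem to use: it only
# accepts 2-4 letter TLDs and bans '+' in the local part outright.
NAIVE = re.compile(r"^[A-Za-z0-9._-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,4}$")

# A more permissive sanity check: one '@', something on either side,
# a dot in the domain, no spaces. (Real validation means sending a mail.)
def plausible(addr: str) -> bool:
    local, sep, domain = addr.partition("@")
    return bool(local) and sep == "@" and "." in domain and " " not in addr

for addr in ["me@example.email", "user+sitename@example.com"]:
    print(addr, bool(NAIVE.match(addr)), plausible(addr))
```

Both perfectly valid addresses fail the naive pattern, which is exactly the signup-works-but-password-reset-fails situation: different components of the same site shipping different patterns.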
Would have been funny to run a slightly different wallpaper as a canary trap; I thought that’s where this was going.
Side note - I don’t get the chart referenced in this article. What is it based on, the number of Stack Overflow questions about different frameworks? Kubernetes runs Docker… they aren’t exactly competing (although I guess you could consider Kubernetes and Swarm competitors?)
I too had issues around syncing my own files using pass in the past and eventually settled on LastPass. In my opinion, it’s worth paying $24 a year for premium support which gives me 1GB of encrypted storage and more two-factor authentication options. Plus having the app seamlessly sync to my phone is great as I can just copy passwords to the clipboard for other apps on my phone. It’s been easy to use and I enjoy not having to worry about passwords anymore.
Edit: Sorry that this was seen as spam. I realized after the fact that it was a little zealous. I am by no means connected to or trying to endorse LastPass.
I’ve actually been really impressed with their Android auto-fill as well; it works pretty well.
Does that work in Firefox as well? That’s one pretty big pain point with 1Password for me.
Talking about Firefox on Android? I don’t think so; at least it doesn’t for me. I don’t think that’s LastPass vs 1Password though; I think each app has to implement it. Looks like Android P will bring it to browsers by default.
I’ve been sticking with LastPass for a while as well. They have good automatic sync and user experience on all platforms, and from what I understand, the data architecture is good - the master password never touches their servers, and is always handled by JavaScript in the browser or in the mobile app. I do try to remember to back up the password list periodically as well.