When ad-blocking was obscure, we could free-load off of the majority who fund services by viewing ads… now Apple is taking my free lunch! :/
I clicked on the article. It came up and I started reading it. I didn’t get very far when the window turned black, and said I had to rotate the screen to view it “properly” on my phone. First, I’m not on a phone, thank you very much. Second, I’m on an iPad, using it in landscape mode because I’m using it as a laptop [1].
Fine, I turn the iPad to portrait mode. Page loads with this #@%@#$@$ vertical ad, covering the article, with no way to dismiss it. Thank you so very much. Thank you so very much that I’m not going to read your sob story about how blocking ads will destroy the Internet.
[1] No power. Using iPhone as hot spot. Still waiting for power company to restore power after Hurricane Irma.
Upvoted for your honesty. That’s exactly what ad-blocking is. The malware reduction argument some respond with is bogus. If they were about paying for what they consume and didn’t like malware, they’d just not use the ad-supported services. Free shit rocks, though, right? ;)
I worked at a streaming media company. A lot of our ads were supplied by brokers like Google. They were mostly harmless. Frequently, however, we’d get custom ads for special events (launch events for movies, TV shows, and games).
The code in the special-event ads was a disaster. When I could, I’d clean it up so that it still worked, which mostly mitigated the problem.
However, in many of the embed snippets we’d receive, the code was a script that would pull the real ad from the advertising company’s servers. Complete crap. Almost all of them would engage in some kind of DOM manipulation; if you didn’t isolate the ads, they would break the layout.
The ad code would often try to include its own trackers for unique-visit tracking. Flash ads were very popular. So the companies would try page-takeover techniques to block everything and force you to view 15 seconds of crap. (And let’s not forget pop-over and pop-under ads.)
Very few companies were content with a simple image and an anchor tag to let the user follow-up for further information.
And that’s the chief problem with online ads. They try to be way too smart. Many want to interact with the user, or worse, “demand” you pay attention. Advertisers frequently have an attitude of “I paid for this, you’re going to give me some time.” They’ll say they just want to inform the public. But no. They want ROI.
And these are the “legit” advertisers. After that there are the skeezy “b” players (remember “X10”) who aren’t trying to rob you but are more like the used car salesman of the internet. Then there are the porn advertisers and lastly the purveyors of drive-by malware. This last group doesn’t even pay for ad space. They steal it.
And don’t forget the ad networks and information aggregators who want to build detailed dossiers about everyone (Google and Facebook are the most public of these). Who do you think invented persistent cookies?
No. Being suspicious of online advertising isn’t a sign of paranoia. It’s sensible.
Why aren’t ads just regular websites served in an iframe? That way, their shitty code couldn’t break anything about your website. Each site could have its own ID, sent as a query parameter in the iframe URL, to track which websites provide impressions. The ad could still be as flashy and interactive as it wants, and its code could be as shitty as it wants, without any negative impact on users.
That would make sense, but many ad networks ban displaying ads in iframes because they can’t check the contextuality of the ad against the page the user sees. The ban also helps mitigate fraud: if the ad could only “see” the iframe around it, it would be easy to fake ad loads, using anything from something as simple as curl to more sophisticated chains of JavaScript XHR requests.
Google still ban it today (AdSense Policy FAQ). Common phrasing for this is “posting on a non-content page”.
The online advertising industry created the cesspool and now they’re whining that Apple, Google, Mozilla, and dozens of ad-blocking companies are trying to force them to clean-up.
On a related note, it might seem weird that Google would try to force better practices with Chrome when they make their money on advertising. But for the most part, Google run a pretty tight ship and force advertisers to adhere to some reasonable standards.
Weeding out the worst players keeps the ecosystem sustainable. The last thing Google want to see is an end to online advertising. And it doesn’t hurt their chances of winning more advertising dollars from the gap left by their departure.
because they can’t check the contextuality of the ad to the page the user sees.
Well, they can: iframe “busters” have been available for a long time, and since the ad network is usually more trustworthy than the publisher (to the advertiser, anyway), they could provide an interface to look up the page the user is on, well before location.ancestorOrigins existed (and generate errors if parent != top).
Indeed, most of the display networks used to do this – all of them except Google, and now AdSense has edged out everyone who wants to do impressions.
On a related note, it might seem weird that Google would try to force better practices with Chrome when they make their money on advertising. But for the most part, Google run a pretty tight ship and force advertisers to adhere to some reasonable standards.
Google is probably the worst thing to come to advertising and is responsible for more ad fraud and the rise of blocking crap JavaScript than any other single force.
Google will let you serve whatever you want as long as their offshore “ad quality team” sees an ad. Everyone just rotates it out after 100 impressions and Google doesn’t care because they like money.
Google still lets you serve a page as an iframe – even if it has ten ads on it. Buy one ad, sell ten. Easy arbitrage. Even better if you can get video to load (or at least the tracking to fire). This has been trivial to stop for a long time, but hey, Google likes money.
Google’s advertising tools are amongst the worst in the world (slow, buggy, etc.) and make it difficult to block robots, datacentres, businesses, etc. using basic functionality that other tools support.
What’s amazing is Google’s PR. So many people love Android, good search, that quirky movie about an intern, the promise of self-driving cars, and so on, that they never educate themselves about how Google actually makes its money: fleecing advertisers and pinching publishers.
Iframe busting is a technique for content in the iframe to “bust out” and replace the page with itself. It’s primarily used for ad-takeover and to prevent clickjacking. It’s not a technique for accessing the DOM of the parent. Browser bugs aside, accessing the DOM of the parent requires the child have the same origin as the parent (or other assistance).
location.ancestorOrigins might not give the ad network or advertiser the contextual information they want if the page the user is viewing varies by status (guest, authenticated user, basic membership, premium membership).
It’s easier (and better for data gathering) for ad networks to demand they’re on the same page the user is viewing. Whether that’s a good thing for the end user probably doesn’t matter to many content providers as long as the ad network isn’t serving up malware (or causing other issues that might hurt the provider/user relationship).
In short: if you want to monetize your site, you either find a way to convince users to pay, or you run advertising, which means you play by the ad networks’ rules.
Google definitely has issues, but they’ve made it easy enough and, compared to their competitors, less problematic such that many content providers accept it.
Iframe busting is a technique for content in the iframe to “bust out” and replace the page with itself. It’s primarily used for ad-takeover and to prevent clickjacking. It’s not a technique for accessing the DOM of the parent.
The same API that ad servers provide to iframes for doing these rich-media operations also carries other capabilities, e.g. EyeBlaster’s _defaultDisplayPageLocation.
Since (hypothetically) the ad network is more trustworthy than the publisher, this could have been used to trivially unmask naughty publishers.
The only reason I can come up with for the sell-side platforms not doing this is that they like money.
Google definitely has issues, but they’ve made it easy enough and, compared to their competitors, less problematic such that many content providers accept it.
They don’t really have any display/impression competitors for small sites anymore… although I’ve been thinking about making one.
Well, I respect you for trying to avoid freeloading. I should also add that I think it’s ethical for people who otherwise avoid ad-supported sites to use ad blockers for security; they’re just trying to stop any sneaky stuff.
I disagree with that viewpoint. It’s right up there with, “Our service would be secure if people would just stop requesting these specific URLs.”
I just don’t see ad-blocking as freeloading. It doesn’t make any sense to pay for something when there’s an equally good free alternative.
I’m a happy paying customer of GitHub, Fastmail, SmugMug, Amazon Prime, Flickr, Netflix, and probably some services I’m forgetting. At the same time, I’m not stupid, and I’m not going to be annoyed and look at ads.
“Our service would be secure if people would just stop requesting these specific URLs.”
It’s certainly not. Managing the risk your product or service poses to consumers is totally different from taking a good you know is ad-supported, with ads built in by default, and stripping out the benefit to the other party while enjoying the content. They’ve put work into something you enjoyed, and into a way to be compensated for it. You only put work into removing the compensation.
“It doesn’t make any sense to pay for something when there’s an equally good free alternative.”
I agree. I then make the distinction of whether I’m doing it in a way that benefits the author (ads, patreonage, even a positive comment or thanks) or just me at their expense since they didn’t legally stop me. I’m usually a pirate like most of the Internet in that I surf the web with an ad blocker. I’m against ad markets and I.P. law, too broke to donate regularly, and favor paid/privacy-preserving alternatives where possible (i.e. my Swiss email). When I get past financial issues, I’ll be using donations for stuff where possible. I still do that occasionally. Meanwhile, you won’t catch me pretending like I’m not freeloading off the surveillance profiles of others on top of whatever they have on me.
These anti-adblock sentiments seem to always assume the content creator will get paid if I don’t block the ads. But that assumes either that (1) they get paid by impression, which is vanishingly rare, or that (2) I would click on ads, which I won’t, blocked or not.
Mostly it doesn’t, which is why most of the time I don’t bother to look for ways to pay for it. But setting aside the vast majority of websites, which I might visit only once or twice: why should I go out of my way to avoid sites that don’t offer any (to me) reasonable way of paying for them?
From a practical point of view, using an ad blocker means I don’t even know most websites’ approach to monetisation, if there is one. I do bail on those that notify me about my ad-blocking, which I guess is ethical in your book?
For what it’s worth, I do pay for a bunch of online services, support a few people on Patreon, and sponsor/subscribe to a couple of news media organisations.
why should I go out of my way to avoid sites that don’t offer any (to me) reasonable way of paying for them?
A good point. The authors concerned with money should at least have something set up to receive easy payments with a credit card or something. If they make it hard to pay them, the fault is partly on them when they don’t get paid.
While I agree content needs to be paid for in some manner, network ads use a not-insignificant amount of bandwidth, which I pay for on my mobile data allowance and at home through my ISP. The infrastructure costs of advertising and spam email are not all borne by the producers of that content. From my perspective, the advertisers are not funding the content that I want…
Well, that’s interesting. I can relate on trying to keep the mobile bill down. It still counts as freeloading, in that you don’t offer back what they expect in return for their content. Yet it’s a valid gripe, which might justify advertisers choosing between getting their ads blocked or doing something like progressive enhancement for ads: they offer text, a pic, and/or video, with what people see determined by whether a browser setting indicates slow or expensive Internet. That way they always serve something, but less bandwidth is used when less is available.
Indeed. And the author’s experience is clearly the opposite of my own at a medium sized company where “MVP”s are usually not at all minimal.
It’s hard to get an idea of what “not minimal” is for a medium-sized company. My observation is that you can get into problems once you start making micro-adjustments based on A/B tests, or have to cater for negative impacts on other parts of your product. This happens in larger organizations, but I can see an organization of any size getting caught up in its own metrics and approach. Features get cut into smaller and smaller pieces and/or don’t get fully fleshed out later, once the original A/B test completes. The feature is less valuable to the user, but the team feels like they are doing the right thing for the product, and no one may be minding the overall product.
I feel like this is an impressive and useful visualization for the old “rent vs. buy” argument for real-estate.
I’ve never really understood this push over the last several years toward renting instead of buying.
A bubble has to burst eventually?
Simply replacing “buy a house when you can” with “you should probably rent instead” is crappy advice, because it ignores all the variables in individuals’ lives.
OK, but if you happen to be e.g. in SF, New York, or say, Vancouver, there is actually a housing bubble all around you, and it’s not dependent on any of the variables in your life.
The advice to “rent instead” is sound as long as you’re considering buying a house in a housing bubble. Whether you could still buy now and sell to a greater fool after two more years of an even crazier bubble is irrelevant.
At work you may only get to write, say, Java server code, but at home you can write Haskell, C++, machine learning, games, Pi projects, a custom Yocto OS, etc. Even within the same language: you might only get to write C++98 at work, but at home you can rule the roost with C++17.
Working on projects at home isn’t a necessity, but not working at home can be an anti-pattern if you are also not doing anything exciting at work, not learning anything new, or stuck in the past.
Thanks, this is a great deep dive. I’m considering adopting your simple style rule (“Do not use anything but the two or three argument forms of [.”)
In fact, dash, mksh, and zsh all agree that the result of [ -a -a -a -a ] is 1 when the file -a doesn’t exist, not a syntax error! Bash is the odd man out.
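In case it’s useful to anyone following along, here’s a quick sketch of what that rule buys you in portable sh (the file paths are just examples):

```shell
#!/bin/sh
# One argument: [ tests whether the string is non-empty -- even a
# string that looks like an operator, such as "-a".
[ -a ] && echo "one arg: non-empty string, so true"

# Two arguments: unary operator plus operand.
[ -e /etc/passwd ] && echo "two args: file exists"

# Three arguments: operand, binary operator, operand.
[ "abc" = "abc" ] && echo "three args: strings equal"

# With four or more arguments, behavior is unspecified and shells
# disagree (as with [ -a -a -a -a ] above); chain separate tests with
# && and || instead of test's -a and -o:
[ -e /etc/passwd ] && [ -r /etc/passwd ] && echo "exists and readable"
```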
I’ve discovered too many bizarre Bashisms to count, and hope you steer oilshell away from Bash behavior emulation and towards the POSIX shell spec. Even Bash’s supposed POSIX compatibility mode has Bashisms poking out. There’s one non-POSIX thing I dearly miss in /bin/sh: set -o pipefail. It’s difficult to write safe shell scripts without it, so much so that it should probably be in the spec.
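To make the pipefail point concrete, here’s a minimal sketch (invoking bash explicitly for the non-POSIX option; any shell that supports set -o pipefail behaves the same way):

```shell
#!/bin/sh
# Without pipefail, a pipeline's exit status is that of its LAST
# command, so an early failure is silently swallowed:
false | cat
echo "plain pipeline exit status: $?"    # prints 0 -- the failure is hidden

# With pipefail (bash, zsh, mksh -- but not POSIX sh), the pipeline
# fails if any component fails, so error handling can actually fire:
bash -c 'set -o pipefail; false | cat'
echo "pipefail exit status: $?"          # prints 1
```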
Thanks. The next post will be about Oil’s equivalents, so I’ll be interested to hear your feedback.
You can also just use [[, although it’s less portable. The one thing I don’t like about [[ (besides aesthetics, I prefer test) is that == does globbing, as I mention in the appendix. That should have been a different operator, as =~ is for regexes.
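For anyone who hasn’t hit it, the globbing behavior looks like this (bash; the filename is just an example):

```shell
#!/bin/bash
name="file.txt"

# Inside [[ ]], an unquoted right-hand side of == is a glob pattern:
[[ $name == *.txt ]] && echo "pattern match succeeds"

# Quoting the right-hand side makes it a literal string comparison:
[[ $name == "*.txt" ]] || echo "literal comparison fails"

# The classic test builtin never globs; = is always a literal compare:
[ "$name" = "*.txt" ] || echo "test compares literally too"
```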
Bash and all the shells I’ve tested are more POSIX compatible than I would have thought. (Bash does have a tendency to hide unrelated bug fixes behind set -o posix though.)
The bigger issue is that POSIX is not good enough anymore. POSIX doesn’t have set -o pipefail like you say, and it also doesn’t even have local. For example, there are some Debian guidelines floating around that say to use POSIX but add local and a few other things. Human-written scripts can’t get by with strict POSIX. Even echo -- is a problem.
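Concretely, the two gaps look like this (a sketch; local happens to work in dash, bash, mksh, and zsh even though the POSIX spec doesn’t require it):

```shell
#!/bin/sh
# `local` is not in POSIX, but human-written scripts lean on it heavily:
f() {
  local x="inside"     # accepted by dash/bash/mksh/zsh; the spec is silent
  echo "$x"
}
x="outside"
f                      # prints "inside"
echo "$x"              # prints "outside" -- without local, f would clobber x

# POSIX echo has no -- end-of-options marker and its option handling is
# implementation-defined, so echoing arbitrary data is unsafe.
# printf is the portable answer:
var="-n this starts with something echo may eat"
printf '%s\n' "$var"   # always prints the value literally
```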
This is the motivation behind the “spec tests” in OSH – to discover a more complete spec. Looking at what existing shells do is exactly how POSIX was made, although the process probably wasn’t automated and it was done many years ago.
I’m basically implementing what shells agree on. But I do have a bias toward bash behavior when it’s not ridiculous, because bash is widely deployed. When all shells disagree, you have to pick something, and picking dash or mksh makes no sense. POSIX typically doesn’t say anything at all in these cases, so it’s not much help.
This is the motivation behind the “spec tests” in OSH – to discover a more complete spec…I’m basically implementing what shells agree on.
While we’re on the subject, here’s a point of disagreement between the shells that you might find interesting: assignment from a heredoc. If the POSIX shell spec has anything to say on this one, I couldn’t find it. Defining these kind of behaviors in a more complete shell spec does seem to me like a very valuable endeavor on its own.
But I do have a bias toward bash behavior when it’s not ridiculous, because bash is widely deployed. When all shells disagree, you have to pick something, and picking dash or mksh makes no sense. POSIX typically doesn’t say anything at all in these cases, so it’s not much help.
I’d probably fall back on Bourne shell behavior as found in present day BSDs, or the Korn shell, or heck even dash, before going in for an obvious Bashism. Both Bourne and Korn exhibit careful, minimal design. Bash on the other hand was “anything goes” for a while there, with predictable implications for quality and security (“Wouldn’t it be cool if you could export functions to children via the env?!” => shellshock).
But, if it’s a question of how facilities common to all shells should behave, then choosing the Bash behavior isn’t necessarily bad.
Yes that’s the kind of thing that I’ve been testing. I copied it into my test framework:
https://github.com/oilshell/oil/commit/a79ebc8437781b8edb8fd8ad03276fc6255af1f3
Here are the results:
http://www.oilshell.org/git-branch/dev/oil4/5ca7bacb/andy-home/spec/blog-other1.html
I put his example as case 0, and his fix as case 1. Interestingly the “before” one works in mksh and zsh, but the “after fix” fails in those two shells.
dash accepts all of them and bash fails at all of them.
Case 2 is my rewrite of this, which works in OSH.
But going even further, I think this construct can always be expressed more cleanly as a separate assignment and then here doc. I did think about this issue, because OSH prints a warning that it’s probably not what you want:
osh warning: WARNING: Got redirects in assignment
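For instance, one unambiguous way to write it (my sketch, not necessarily the exact form OSH suggests) is to keep the here doc attached to a plain command inside the command sub, rather than dangling after the assignment:

```shell
#!/bin/sh
# Ambiguous form the shells disagree on:
#   foo=`cat`<<EOM ... EOM
# Here the here doc is attached directly to cat inside the command sub,
# so every shell agrees on which command reads it:
foo=$(cat <<EOM
hello world
EOM
)
echo "$foo"    # prints "hello world" in dash, bash, mksh, and zsh
```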
Though I think this example is conflating two issues: the command sub + here doc, and the fact that sed has two standard inputs – the pipe and tr. I didn’t tease those things apart and I suspect that would help reason about this.
It’s definitely interesting but I’m going to leave it for now because it’s not from a “real” script… I still have a lot of work to do on those! But it’s in the repo in case anyone ever hits it.
That’s a neat cross-shell test framework.
It’s definitely interesting but I’m going to leave it for now because it’s not from a “real” script…
This was from a real script (that post is from my blog) but after a bit of git grepping I still can’t find it, so how real can it be, eh? I agree your rewrite to $(some complex multiline stuff) is cleaner than the same in backticks, but the questions around when a here-document should be interpreted remain.
Though I think this example is conflating two issues: the command sub + here doc, and the fact that sed has two standard inputs – the pipe and tr.
It doesn’t depend on sed, actually. Simpler test case:
foo=`cat`<<EOM
hello world
EOM
echo "$foo"
/bin/dash prints “hello world”, while Bash hangs waiting for input.
Oh sorry it wasn’t clear from the blog post where the example came from!
I just tested it out, and the simpler example works on dash, mksh, and zsh, but fails on bash. That is interesting and something I hadn’t considered. Honestly it breaks my model of how here docs are parsed. I wrote about that here [1].
And while you can express this in OSH, it looks like OSH is a little stricter than bash even. So I’ll have to think about this.
Right now I think there are some lower-hanging fruit like echo -e and trap and so forth, but these cases are in the repo and won’t be lost.
The article doesn’t answer the question, so it’s kind of clickbait-y, but the discussion is reasonable, although not breaking any new ground.
The best it offers is a heuristic.
I offer the heuristic that the correct way to comment is to avoid them as much as humanly possible.
Meh. Maybe. It’s not compelling, but neither is it repulsive.
Unfortunately, it seems that knowing when to add comments can only be determined after leaving the code for a few weeks. If you come back and don’t understand it, you probably need some comments.
If you come back and don’t understand it, you probably need some comments.
Unfortunately, by then you probably need to have already written the comments a few weeks earlier. And this is the key problem with the “if it’s easy to understand you don’t need to comment it” argument: the person who’s deciding whether it’s easy to understand has just dug into the problem enough to write a solution to it, therefore finding it easier to understand than anybody who hasn’t done that.
Funny, but off-topic IMHO, as it doesn’t really go deep into the topic; it’s just a series of images and snark.
“Seeing X made me realize Y” is something we hear sometimes. “Made me” is not necessarily about a person forcing a person to do something.
But maybe the headline would have been less ambiguous (and less sensationalist) had it said “A note from rms made me …” or even more removed, “Reading a note from …”, or just “rms’s views made me …”.
Important context: their target demographic and playtesters are five years old. Some of the kids they’re teaching to program are still wetting the bed!
when the trouble starts it’s easier to blame the entire history of the software industry rather than fix your assumptions.
Couple thoughts on this:
I used to think like this, but now I’m not so sure. To be clear, I’m a terrible salesperson and probably not a good manager. I’ve started to suspect that those skills aren’t as magical as they’re made to seem (so as to maintain status). Say you create a product that you’re stoked about, and you have some potential customers you can talk about it with. That’s doing sales! You might not want to do it all the time, but you might be able to do it enough to get going, and hopefully fill the position with someone with more of an aptitude and talent than yourself.
Again, I’m not advising anyone to do this, but rather coming around to the belief that I might be able to temporarily do these jobs (poorly) enough to create something basic enough to build on.
I think that’s probably true, but also quite possibly true of programming. Everyone thinks they don’t need the other roles, or can pick up enough of them on the side to get by. :-) In mobile apps especially I’ve noticed this in practice, where someone who is good at design and business and 5 years ago would’ve had to hire a programmer to build the prototype, can now pick up enough dev skills via online tutorials to put together at least the initial prototype themselves (often using some kind of RAD tool, but still). Meanwhile the person who 5 years ago would’ve been the contract programmer might think they can pick up enough design & business skills to build & market their own apps without needing that side of the team. Unclear to me what the limiting skills are as this kind of thing becomes more common in multiple directions.
My boss at my previous company was dogmatic to the point of perversity. During code reviews, strict adherence to coding style guidelines (PEP-8+100 for Python, his own for C that included shudder Hungarian notation) was more important than the actual meaning of the code.
I get that consistent style is important, I do, but we spent more time in code reviews discussing how to format our commit messages than we did talking about the algorithms being developed.
Sometimes in C I really need try/finally, and the easiest, clearest, most maintainable way to do that in C is with goto statements to a cleanup label at the end of the function. I cringe when I want to do that because I know the battles that will come, due to another “considered harmful” article.
“If you want to go somewhere, goto is the best way to get there.” – Ken Thompson
Yay another Emacs user!
Although I really like his empathy, I – personally – do not think that org-mode is that great in the long run. It led me to Emacs itself, but I was never really able to set up a system that I would stick with. There is a lot of documentation for it out there (printing this page will result in a 96-page document), but configuring it to your every need is a big task. Maybe it’s the fault of the editor, because it allows you to turn pretty much every knob there is, and org-mode offers just too many.
Still, org-mode does attract quite a lot of users which will then stick with the editor so that’s a point in favor of it.
Like shanemhansen, I too used Org mode just as a “better markdown” for a while, but started getting into it a bit more by watching Rainer’s YouTube Org-mode tutorials. I am now at a level where I have a capture template for “weekly reviews”, and even a separate capture template for new invoices… that I process into PDF via LaTeX. I still refer to the manual quite often, it has to be said. I used Org Babel to write executable runbooks, and I maintain my blog as an Org publishing project. I… may need an intervention.
Actually, sections of my team’s playbooks were executable org-mode things that I exported to Confluence markdown. Design docs are usually in org mode (with inline graphviz/dot file images). It’s really awesome.
I started blogging using org mode. I was exporting org to markdown for hugo, but then the author of this article (Chase Adams) added native org support to hugo. Markdown is just a tad too simple for me, but org mode is perfect for lightweight structured docs with some code samples.
I’ve started doing presentations in org mode using a reveal.js plugin.
and I haven’t even gotten into capture templates or time tracking.
There are things I haven’t done, such as Jira and Confluence integration
I’ve come to love the shell a hell of a lot. I implemented a Jira CLI thing for my own use, which has EDITOR support, and if I were to have an editor-compatible “confluence” thing, it’d be an EDITOR-compatible thing. What I mean to say is, I miss Acme. There’s a few things that really annoy me about it, but its integration with the system is just fantastic. I think one of the key parts is the plumber; you can do really awesome stuff with that thing. Thing on your screen looks like a Jira ticket number? Right-click, and you got a Jira ticket details. And then I had special formatting for my Jira thing that would output shell commands that I could just execute from Acme by highlighting them. Glorious.
I disagree. The fact that org mode is configurable doesn’t mean you need to configure it. I haven’t configured org mode at all.
Maybe this is a bad analogy but to me it’s a bit like C++. You can use it as a “better markdown” and you can keep using more features until you’re using it to produce reproducible scientific papers or do devops.
This is what most of my org mode files look like
* Title
** Subtitle
**** TODO task
- some
- stuff
I’ve never felt the need to configure it.
I’ve been trying org-mode on and off for the three years I’ve been using Emacs (switched from vim). I haven’t managed to stick with it for longer than a couple of months; it truly is too powerful, and it’s overwhelming for me.
I read several articles about how great Emacs is (which is true, of course!) and the comments always have mentions of Org, but I haven’t really seen many good long-form articles about people’s org workflows. If anyone in this thread would share theirs, I’d be super grateful ;)
I’d suggest voting based on whether the comment itself deserves upvotes or downvotes–sympathetic voting somewhat distorts the point of having a karma system.
I’d argue that downvoting because of disagreement (and not because of any of the reasons mentioned in the downvote list) distorts the karma system more and sympathetic upvoting is a corrective.
I agree. I personally think of downvoting as corrective action. The community’s way of saying “Hey don’t do that” - and when I see someone abusing the downvote there are times I want to be able to try to balance the scales.
And if i see you doing that, i will try to rerebalance the scales with a corrective downvote. GOTO 10
What exactly is the purpose of that? Do you ever see a comment and go “hmm, I wouldn’t usually downvote this comment, but it has a score of 1 and I don’t really think it deserves upvotes, so I’ll downvote it”?
Your argument doesn’t make sense to me, but that might just be my lack of imagination.
What do you select as a reason for downvoting? There is no option for “someone else unfairly upvoted this comment”.
I would agree with you about downvoting-as-disagreement. That said, sympathetic upvoting is the same pathology.
As others alluded, does this always work? If somebody posts an incorrect comment, should all 100 people who see it downvote it? At a certain point it gets auto collapsed, but there’s still people who will expand it. No mercy, wrong is wrong?
I’ve been less attentive recently, for reasons I won’t rehash right now, but in general as a moderator I have not noticed patterns of downvoting somebody simply for being wrong - it’s always been clear to me that posts with large numbers of downvotes were about controversial topics. It’s the topic and an often-imagined classification of participants onto “sides” that leads to massive downvotes, not anything specific about what points are made or their veracity.
Sometimes, downvotes are used for clearly wrong factual information. Either by misunderstanding, by bad research or by missing the point of the submitted post. But that’s fine, “incorrect” is one of our downvote reasons and that’s okay.
It has personally happened to me more than once, and I think it’s good. It’s not so much about “deserving” downvotes; “incorrect” downvotes are a good way to make sure wrong information moves out of the scope of the discussion.
Absolutely. When that’s the reason, I haven’t witnessed it become a pile-on that goes to -10 or -20, but I’m sure it’s happened at least a few times that I haven’t seen.
Indeed. I think that’s the result of some self moderation (sympathy). If somebody posts “XML is the best” they’ll take a few hits, but then it levels off. I’m asking if angersock thinks the ideal situation would be an unending stream of downvotes.
To expand on my theory, I imagine most voters read a comment and think “this is a -5 comment” or “this is a +10 comment” and then vote as needed to make it so. But this is nowhere close to deciding on an action in isolation based solely on comment content.
I’ve been wondering for a long time about positive and negative feedback. Usually when someone’s upvoted, it’s pretty obvious why (a thorough explanation, a critical answer, …), but someone being downvoted might be left just asking why.
Maybe the easiest way to deal with that is to make downvotes available only if you explain why. Then the upvotes on each explanation would tell anyone reading which objections carried weight.
Downvoting here and there without explaining to people why they might be wrong won’t help anyone grow anything but frustration.
Even though it seems like an insult to the intelligence, you’d be shocked at how many complete frauds have glow-in-the-dark CVs, and at the horrifying fact that FizzBuzz-level tests are actually a useful tool, and quite a quick one.
Next time your company has a hiring round, I urge you to sit in. You’ll see why the poor person doing the hiring does this.
(I’m a sysadmin, not a coder, but we see the same thing in sysadmin hiring: twenty-year CVs from people who clearly don’t know the basics.)
But are they, though?
As a personal anecdote, I have to be in the right “mindset” for writing code, which is a very different mindset from “interviewing”, so much so that they collide. When I’m in “social mode”, talking about what I do, following social cues, etc., I absolutely cannot write code to save my life. When I’m in “coding mode” I can write code for just about anything you can imagine, but my social skills amount to grunts and forced smiles.
Having been on both sides of the fence, I think there’s a perception that there are frauds behind every CV and that it’s up to YOU, the hiring manager, to root them out. That sets up an adversarial position before the interaction even begins. I’ve found much more success in assuming people are capable and letting them rise or fall from that starting point.
Further, this modality doesn’t account for the fact that “interviewing” is a different skill from “coding”. Solving large, complex problems in relatively unbounded time (weeks, months) and solving on-the-spot 45-minute brain teasers/puzzles are two entirely different skill sets, and the latter signals nothing about the prospective person other than that they’ve practiced your interview style.
I wonder if this is analogous to doing arithmetic in one’s head versus doing mathematics.
It’s pretty easy to practice arithmetic and get quick & good at it, just by mechanical drilling. That’s one way you get to be known as the guy who’s “good at maths” at school. And then you get lazy about it because you have a calculator, and there’s something more interesting to do than summing & multiplying numbers.
Once on a coffee break, a coworker asked me something silly like how much is 1.5 * 0.5. It was sudden, unexpected, and shocking, and I just sort of froze. It probably took me at least five minutes to come up with the answer.
Were that to be an interview question, I suppose one could extrapolate and conclude that I am hopelessly bad at mathematics. (To be honest I’m not a maths geek but I’m not that bad either, ha!)
I’m talking specifically about 5-minute ones on the level of FizzBuzz, not 45-minute things. My cynicism comes from sad experience. Perhaps it’s the same 199 people, per Joel: https://www.joelonsoftware.com/2005/01/27/news-58/
FizzBuzz isn’t a coding test, it’s a quick bozo filter.
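For anyone who hasn’t seen it, the filter really is this small. A minimal JavaScript version (the function name and shape are mine; any variant that produces the right sequence passes):

```javascript
// Classic FizzBuzz: for 1..n, emit "Fizz" for multiples of 3,
// "Buzz" for multiples of 5, "FizzBuzz" for multiples of both,
// and the number itself otherwise.
function fizzbuzz(n) {
  const out = [];
  for (let i = 1; i <= n; i++) {
    if (i % 15 === 0) out.push("FizzBuzz");
    else if (i % 3 === 0) out.push("Fizz");
    else if (i % 5 === 0) out.push("Buzz");
    else out.push(String(i));
  }
  return out;
}

console.log(fizzbuzz(15).join("\n"));
```

The point of the filter is exactly that there is nothing clever here: it’s one loop and three conditions, in any order that checks 15 before 3 and 5.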
You know, I’ve been in a bunch of interviews at several companies, and I’ve never seen one of these frauds who are supposedly so ubiquitous. Maybe it’s because I haven’t worked at big or “name” companies. But I just haven’t seen people who couldn’t write FizzBuzz applying for developer jobs.
Keep looking, and you’ll discover that all front end development is a hack.
Contrary to popular belief, HTML is not a language for describing the user interface of an application. HTML is a serialization format for a data structure called the DOM, the Document Object Model. The DOM isn’t itself a presentational structure either; it’s a semantic structure for organizing data elements into a hierarchy. As the name implies, it’s a Model object. It is data. Would you say “tabbed navigation” is a data structure? Of course not! It’s a UI pattern.
Modern web development ends up in a weird place: the DOM is simultaneously an extremely complex and abstract data structure, yet too low-level to provide many useful UI features. It doesn’t even have a native concept of a “view”; instead, people generate snippets of DOM data and pretend these are distinct views.
It is a bit ahistoric, but perhaps not as much as you think. HTML has always struggled with the conflict between being a semantic data structure and a presentational layer, so much so that deciding whether or not to include certain elements was a huge debate at the time, because they were arguably too presentational and too use-case specific.
One thing that’s important to note is that DOM Level 0 was not an API created for JavaScript to consume; it was the browser’s own internal model, exposed so it could be automated via JavaScript. Now, in the bad old days there was no standard controlling this, so yes, it would be very misleading to say that the DOM and HTML had a predictable relationship: each browser would “deserialize” it very differently.
But nowadays, the DOM is the canonical state of a page, from the browser’s perspective, and it is what the browser renders. There is a clear and documented relationship between HTML and the DOM tree that should be constructed as a result.
In the end, the statement “HTML is a serialization format for DOM” is a very opinionated way of expressing their relationship. Browsers don’t render HTML, they render DOM. They use HTML to construct that DOM.
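To make the “HTML is a serialization format for the DOM” framing concrete, here is a toy sketch (the names `el`, `text`, and `toHTML` are mine, not any browser API): the document lives as a tree of plain objects, and HTML appears only as the serialized form of that tree.

```javascript
// A toy "DOM": element nodes with a tag and children, plus text nodes.
// This mimics the idea, not any real browser internals.
function el(tag, children = []) {
  return { tag, children };
}

function text(value) {
  return { text: value };
}

// Serialize the tree to HTML. A real browser does the reverse when it
// parses HTML: it builds a tree like this one, then renders the tree.
function toHTML(node) {
  if ("text" in node) return node.text;
  const inner = node.children.map(toHTML).join("");
  return `<${node.tag}>${inner}</${node.tag}>`;
}

const doc = el("ul", [el("li", [text("one")]), el("li", [text("two")])]);
console.log(toHTML(doc)); // → <ul><li>one</li><li>two</li></ul>
```

The asymmetry is the point: the tree is the primary artifact, and the angle-bracket string is just one way to write it down.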
Another thing I dislike about GitHub is that it is turning into a “social coding” platform where people “share much more than code”. It feels like a social network, and I do not want a Git + Facebook mix.
Wow, I’ve not seen anything like it becoming Facebook-like. Do you have some examples of this?
It seems it only lacks private messaging. There are already blogs run inside GitHub issues.
I actually like the stars, which I use for bookmarking. I regularly get the latest releases of starred repositories with a script.
Not a fan of emojis though.
That darn subset of Unicode!
Thank you, this is useful. :)
I consciously don’t reply to all, as much of it - IMHO - is very much up to taste.
I know quite a few people using that for discovery.
I like them, because they don’t drop down my inbox. I use both them and the (really well implemented) emails.
They are literally a wanted feature. Before they existed, issues were full of people posting “+1”, “no”, etc., which trashed everyone’s email inbox.
Fun fact: they used to have that. It was killed off in… 2010something?
Yes. :-)
Most of these features happen to be convenient.
Maybe blurring the line with social networks is a side effect of trying to make collaboration better…
IIRC, they added emoji reactions to messages to prevent people from posting “+1”-only messages to say they are really eager to see a new feature implemented.
GitHub is a good answer to what people ask for, and I am OK with that. I do not ask for the same thing (just a Git server plus gitweb or the like), but I still have an account, as I find everything I need on GitHub, and an account is required to comment on issues.
Pretty close: April 2012. No one wanted another inbox to check.
Ah, the fork queue… good old times.
You are right, we need these features. This makes the platform evolve as they are added, and makes using GitHub as a social network possible.
As long as it is possible to use GitHub without the “social” features in the way, then I have no problem with it. :)
Those things don’t annoy me; I actually like to know if someone follows me, it’s good for my ego. I don’t have any crazy big projects though; maybe after a certain point the notifications become too frequent? I’m sure you can turn them off or ignore them, though.
Yes, exactly. So this is no big trouble, fortunately. I can mostly ignore the platform and work as if I got commits through e-mail on a mailing list.
I can’t help but feel their resignation is a mistake.
From my layperson’s perspective, their resignation is a public declaration that the EFF no longer has confidence in the W3C’s ability to operate within the goals and bounds of its original mission.
If that is the case, I agree with the sentiment and with the EFF’s protest.
Given how the disagreement has developed so far, it was the only thing left for them to do. Quite tragic, in the most literal, classical sense, but I agree.
What do they achieve by resigning? What will they miss out on? I honestly don’t know enough to judge or feel much about this, but either way it looks like a bridge burnt in a demonstration of principles. Perhaps they garner upvotes and support, in principle. Does that give them more than they gained, or could have gained, from future W3C membership?
Quite right. As an illustration: part of the reason this succeeded was that Mozilla caved in an effort to preserve market share in the face of less ideologically pure browsers. Their tacit approval of DRM emboldened the others to push it through, and now it can be pointed at in defense of the odious thing.
Well, they only joined W3C to veto EME. The veto didn’t work, so what’s left for them to do?
https://www.eff.org/deeplinks/2013/05/eff-joins-w3c-fight-drm
If you had joined a chess club, and then one way or another it turned out to be a rape club travelling the world raping random people, should you resign? Or should you stay to “correct the course”?
I mean, you do gain a lot from the membership of said rape club, namely the opportunity to rape random people.
What would you achieve from resigning?
Could you make your point without the references to sexual violence next time? It’s neither necessary to making your point nor kind to those in your audience who might have been on the receiving end of similar, which is two out of the three strikes.
Why?