1. 65

    This blogpost is a good example of fragmented, hobbyist security maximalism (sprinkled with some personal grudges based on the tone).

    Expecting Signal to protect anyone specifically targeted by a nation-state is a huge misunderstanding of the threat models involved.

    Talking about threat models, it’s important to start from them, and not doing so explains most of the misconceptions in the post.

    • Usable security for the most people possible. The vast majority of people on the planet use iOS and Android phones, so while it is theoretically true that Google or Apple could be forced to subvert their OSs, that’s outside the threat model, and something like that would be highly visible, a nuclear option so to speak.
    • Alternative distribution mechanisms are not used by 99%+ of the existing phone userbases, so providing an APK is indeed correctly viewed as harm reduction.
    • Centralization is a feature. Moxie created a protocol and a service, used by billions and millions of people respectively, that provide real, measurable security for a lot of people. The fact is that doing all this in a decentralized way is something we don’t yet know how to do, or doing so invites tradeoffs that we shouldn’t make. Federation at the moment either leads to insecurity or to the ossification of the ecosystem, which in turn leads to a useless system for real users. We’ve had IRC since the 1990s; ever wonder why Slack became a thing? Ossification of a decentralized protocol. Ever wonder why OpenPGP isn’t more widespread? No one cares about security in a system where usability is low and design is fragile. Ever tried to do key rotation in GPG? Even cryptographers gave up on that. Signal has that built into the protocol.

    Were tradeoffs made? Yes. Have they been carefully considered? Yes. Signal isn’t perfect, but it’s usable, high-level security for a lot of people. I don’t say I fully trust Signal, but I trust everything else less. Turns out things are complicated when it’s about real systems and not fantasy escapism and wishes.

    1. 34

      Expecting Signal to protect anyone specifically targeted by a nation-state is a huge misunderstanding of the threat models involved.

      In this article, resistance to governments constantly comes up as a theme of his work. He also pushed for his tech to be used to help resist police states, as with the Arab Spring example. Although he mainly raised the baseline, the tool has been promoted for resisting governments, and articles like that could increase the perception that it was secure against governments.

      This nation-state angle didn’t come out of thin air from paranoid security people: it’s the kind of thing Moxie talks about. In one talk, he even started with a picture of two activist friends jailed in Iran, in part to show the evils that motivate him. Stuff like that only made the things Drew complains about on centralization, control, and dependence on cooperating with surveillance organizations stand out even more due to the inconsistency. I’d have thought he’d make signed packages for things like F-Droid sooner if he were so worried about that stuff.

      1. 5

        A problem with the “nation-state” rhetoric that might be useful to dispel is the idea that it is somehow a God-tier where suddenly all other rules become defunct. The Five Eyes are indeed “nation states” and have capabilities that are profound, like the DJB talk speculating about how many RSA-1024 keys they’d likely be able to factor in a year given such and such developments, and what you can do with that capability. That’s scary stuff. On the other hand, this is not the “nation state” that is Iceland or Syria. Just looking at the leaks from the “Hacking Team” thing, there are a lot of “nation states” forced to rely on some really low-quality stuff.

        I think Greg Conti in his “On Cyber” setup depicts it rather well (sorry, I don’t have a copy of the section in question): a more reasonable threat model of capable actors you do need to care about is that of Organized Crime Syndicates, which seems more approachable. Nation State is something you are afraid of if you are a political actor or in conflict with your government, where “we can also waterboard you to compliance” factors into your threat model; Organized Crime hits much more broadly. That’s Ivan with his botnet of internet-facing XBMC^H Kodi installations.

        I’d say the “Hobbyist, Fragmented Maximalist” line is pretty spot on - with a dash of “Confused”. Compare the ‘threats’ of the Google Play Store (test it: write some malware and see how long it survives - they are doing things there …) with any other app store - F-Droid, the ones from Samsung, HTC, Sony et al. - and the odds of the latter being completely owned by much less capable actors are way, way higher. Signal (perhaps a Signal-To-Threat ratio?) performs a good enough job of making reasonable threat actors much less potent. Perhaps not worthy of “trust”, but worthy of day-to-day business.

      2. 18

        Expecting Signal to protect anyone specifically targeted by a nation-state is a huge misunderstanding of the threat models involved.

        And yet, Signal advertises with the faces of Snowden and Laura Poitras, and quotes from them recommending it.

        What kind of impression of the threat models involved do you think this creates?

        1. 5

          Who should be the faces recommending Signal that people will recognize and listen to?

          1. 7

            Whichever ones are normally in the media for information security saying the least amount of bullshit. We can start with Schneier, given he already does a lot of interviews and writes books laypeople buy.

            1. 3

              What does Schneier say about Signal?

              1. 10

                He encourages use of stuff like that to increase the baseline, but not for stopping nation states. He also constantly blogged about the attacks and legal methods they used to bypass technical measures. So, his reporting was mostly accurate.

                We counterpoint him here or there, but his incentives and rep are tied to delivering accurate info. Moxie’s incentives would, if he’s selfish, lead to lock-in to questionable platforms.

        2. 18

          We’ve had IRC from the 1990s, ever wonder why Slack ever became a thing? Ossification of a decentralized protocol.

          I’m sorry, but this is plain incorrect. There are many expansions on IRC that have happened, including the most recent effort, IRCv3: a collection of extensions to IRC to add notifications, etc. Not to mention the killer point: “All of the IRCv3 extensions are backwards-compatible with older IRC clients, and older IRC servers.”

          If you actually look at the protocols? Slack is a clear case of Not Invented Here syndrome. Slack’s interface is not only slower, but does some downright crazy things (such as transliterating a subset of emojis to plain text, which results in batshit-crazy edge cases).

          If you have a free month, try writing a slack client. Enlightenment will follow :P

          1. 9

            I’m sorry, but this is plain incorrect. There are many expansions on IRC that have happened, including the most recent effort, IRCv3: a collection of extensions to IRC to add notifications, etc. Not to mention the killer point: “All of the IRCv3 extensions are backwards-compatible with older IRC clients, and older IRC servers.”

            Per IRCv3 people I’ve talked to, IRCv3 blew up massively on the runway, and will never take off due to infighting.

            1. 12

              And yet everyone is using Slack.

              1. 14

                There are swathes of people still using Windows XP.

                The primary complaint of people who use Electron-based programs is that they take up half a gigabyte of RAM to idle, and yet they are in common usage.

                The fact that people are using something tells you nothing about how Good that thing is.

                At the end of the day, if you slap a pretty interface on something, of course it’s going to sell. Then you add in that sweet, sweet Enterprise Support, and the Hip and Cool factors of using Something New, and most people will be fooled into using it.

                At the end of the day, Slack works just well enough Not To Suck, is Hip and Cool, and has persistent history (Something that the IRCv3 group are working on: https://ircv3.net/specs/extensions/batch/chathistory-3.3.html)

                1. 9

                  At the end of the day, Slack works just well enough Not To Suck, is Hip and Cool, and has persistent history (Something that the IRCv3 group are working on […])

                  The time for the IRC group to be working on a solution to persistent history was a decade ago. It strikes me as willful ignorance to disregard the success of Slack et al over open alternatives as mere fashion in the face of many meaningful functionality differences. For business use-cases, Slack is a better product than IRC full-stop. That’s not to say it’s perfect or that I think it’s better than IRC on all axes.

                  To the extent that Slack did succeed because it was hip and cool, why is that a negative? Why can’t IRC be hip and cool? But imagine being a UX designer and wanting to help make some native open-source IRC client fun and easy to use for a novice. “Sisyphean” is the word that comes to mind.

                  If we want open solutions to succeed we have to start thinking of them as products for non-savvy end users and start being honest about the cases where closed products have superior usability.

                  1. 5

                    IRC isn’t hip and cool because people can’t make money off of it. Technologies don’t get investment because they are good, they get good because of investment. The reason that Slack is hip/cool and popular and IRC is not is that the investment class decided that.

                    It also shows that our industry is just a pop culture and couldn’t give a shit about good tech.

                    1. 4

                      There were companies making money off chat and IRC. They just didn’t create something like Slack. We can’t just blame the investors when they were backing companies making chat solutions whose management stayed on a course that didn’t work long-term or for a huge audience.

                      1. 1

                        IRC happened before the privatization of the internet, so the standard didn’t lend itself well to companies making good money off of it. Things like Slack are designed for investor optimization, vs. things like IRC being designed for use and openness.

                        1. 2

                          My point was there were companies selling chat software, including IRC clients. None pulled off what Slack did. Even those doing IRC with money or making money off it didn’t accomplish what Slack did for some reason. It would help to understand why that happened. Then, the IRC-based alternative can try to address that from features to business model. I don’t see anything like that when most people that like FOSS talk Slack alternatives. Then, they’re not Slack alternatives if lacking what Slack customers demand.

                          1. 1

                            Thanks for clarifying. My point can be restated as: there is no business model for federated and decentralized software (until recently, see cryptocurrencies). Note that most open and decentralized tech of the past was government funded and therefore didn’t face business pressures. This freed designers to optimise for other concerns instead of business ones the way Slack does.

                    2. 4

                      To the extent that Slack did succeed because it was hip and cool, why is that a negative? Why can’t IRC be hip and cool?

                      The argument being made is that the vast majority of Slack’s appeal is the “hip-and-cool” factor, not any meaningful additions to functionality.

                      1. 6

                        Right, as I said I think it’s important for proponents of open tech to look at successful products like Slack and try to understand why they succeeded. If you really think there is no meaningful difference then I think you’re totally disconnected from the needs/context of the average organization or computer user.

                        1. 3

                          That’s all well and good, I just don’t see why we can’t build those systems on top of existing open protocols like IRC. I mean: of course I understand, it’s about the money. My opinion is that it doesn’t make much sense to insist that opaque, closed ecosystems are the way to go. We can have the “hip-and-cool” factor, and all the amenities provided by services like Slack, without abandoning the important precedent we’ve set for ourselves with protocols like IRC and XMPP. I’m just disappointed that everyone’s seeing this as an “either-or” situation.

                          1. 2

                            I definitely don’t see it as an either-or situation, I just think that the open source community typically has the wrong mindset for competing with closed products and that most projects are unapproachable by UX or design-minded people.

                    3. 3

                      Open, standard chat tech has had persistent history and much more for decades in the form of XMPP. Comparing to the older IRC on features isn’t really fair.

                      1. 2

                        The fact that people are using something tells you nothing about how Good that thing is.

                        I have to disagree here. It shows that it is good enough to solve a problem for them.

                        1. 1

                          I don’t see how Good and “good enough to solve a problem” are related here. The first is a metric of quality, the second is the literal bare minimum of that metric.

                  2. 1

                    Alternative distribution mechanisms are not used by 99%+ of the existing phone userbases, providing an APK is indeed correctly viewed as harm reduction.

                    I’d dispute that. People who become interested in Signal seem much more prone to be using F-Droid than, say, WhatsApp users. Signal tries to be an app accessible to the common person, but few people really use it or see the need… and often they are free software enthusiasts or people who are fed up with Google and surveillance.

                    1. 1

                      More likely, sure, but that doesn’t mean that many of them actually reach that threshold of effort.

                    2. 0

                      Ossification of a decentralized protocol.

                      IRC isn’t decentralised… it’s not even federated

                      1. 3

                        Sure it is, it’s just that there are multiple federations.

                    1. 2

                        If it had been Google, would people react the same way? I feel there’s constant anger around Microsoft that is generally localized around its OS, but the Azure teams and this acquisition are showing great advances for Microsoft, and hopefully a better future for everyone!

                      1. 2

                        I think people would be much more upset if it had been Google, but really both companies have historically short attention spans.

                        The problem is that products which would be great successes if run as independent entities are frequently seen as distractions or failures inside large corporations like Google or Microsoft. And if they do decide to maintain an acquired product, the amount of value they need to juice from it is dramatically higher than would be needed by an independent org.

                      1. 2

                        But there’s also TypeScript which has bivariant inputs and outputs which will never warn you.

                        As of TypeScript 2.6, inputs are checked contravariantly in strict mode: https://www.typescriptlang.org/docs/handbook/release-notes/typescript-2-6.html

                        1. 1

                            Scary how often viruses like this are showing up in Linux! I think this is the beginning of a new time… and we’re going to have to change the way we do things to stay safe.

                          1. 7

                            These incidents (at least in the NPM context) are good endorsements for the goals of Ryan Dahl’s deno, which runs code sandboxed by default.

                            1. 4

                                Sandboxing is good, but I don’t want to run malicious code at all, even if it’s properly contained! We need better review too.

                              1. 2

                                I’d go a little bit further than that. We need to extend the security architecture of our package managers. For example, architectures like TUF, notary/Docker Content Trust, or PEP-458 are great starting points.

                              2. 5

                                This is more of an NPM virus, not a Linux-specific one.

                              1. 4

                                  It is really surprising he didn’t find jbuilder, which is the build system that seems to be displacing all the other build systems in OCaml and has a passable learning curve.

                                Maybe because he uses Reason syntax, which is probably not supported very well, if at all.

                                1. 3

                                  It’s called Dune now: https://github.com/ocaml/dune

                                  1. 1

                                    Dune has no releases yet, so for now it still is jbuilder.

                                  2. 1

                                    Dune supports Reason syntax out of the box and will pick up all *.re files automatically.

                                  1. 6

                                    The internet search experience suffered a setback when the major browsers abandoned the separate search box for the combined address/search box. Only Firefox retains this feature, where your default search engine is the first choice in a list.

                                    In the days before Alta Vista became better than Yahoo, and then Google crushed all other search options, there were meta-search engines that combined, filtered, and formatted results from several search engines of your choice. IIRC Magellan was one of these. I’ve toyed with the idea of reviving this idea for my own use. Google and Bing are pretty similar, but not perfectly similar, and provide different results depending on whether you are signed-in or anonymous. DDG usually provides different enough results to be important. There’s a lot of room for innovation in meta-search.

                                    Finally there are still all sorts of specialized search options. In this category I would start with Amazon and Wikipedia. There are also sites like noodle.com, specializing in education related searches.

                                    1. 5

                                      DuckduckGo is my go to search.

                                      It is simple and doesn’t have the Google bloat to it, and those smart searches, like being able to generate an MD5 hash in a search query or do number system conversions, are pretty cool.

                                      1. 2

                                        DuckDuckGo owns; it’s my configured default search on all devices. When I need something specific from Google, I use the bang feature for Google, !g.

                                        1. 2

                                          I never knew that bang was available, my word. Is there a !b for Bing too? (Update: there is, wow)

                                        2. 0

                                          So essentially DDG has a great interface and is actually way more useful.

                                          1. 4

                                            Let’s be honest, though: the results are not as good as Google for many/most queries.

                                            1. 3

                                              I don’t know. I switched to DDG at home and I’ve always been able to find what I’m looking for. I still use Google at work so I’m able to compare and contrast. About the only place where Google is better (in my opinion) is in image search, and that may be due to how Google displays them vs. DDG.

                                              1. 4

                                                Here’s a concrete example. Let’s say I’m trying to remember the name of the project that integrates Rust with Elixir NIFs.

                                                First result for me for the query “elixir rust” on Google is the project in question: https://github.com/hansihe/rustler

                                                After scrolling through three pages of DDG results, that project doesn’t seem to be listed or referenced at all, and there are several Japanese and Chinese-language results despite the fact that I have my location set to “United States”. I will forgive all the results about guitar strings since DDG doesn’t have tracking data to determine that I’m probably not interested in those (although the usage of the word “rust” in those results is in the term “anti-rust” which seems like a bad result for my query).

                                                That query is admittedly obtuse, but that’s what I’ve become accustomed to using with Google. These results feel generally characteristic of my experience using DDG. I end up using the !g command a lot rather than trying to figure out how to reframe my query in a way that DDG will understand.

                                                1. 2

                                                  I think you did that wrong. You were specifically interested in NIF but left that key word off. Even Lobsters search engine, which is often really off for me, gets to Rustler in the first search when I use these: elixir rust nif. Typing it into DDG like this gives me Rustler at Page 1, Result 2.

                                                  Just remember these high-volume, low-cost engines are pretty dumb when not backed by a company the size of Google or Microsoft. You gotta tell them the words most likely to appear together. “NIF” was critical in that search. Also, remember that you can use quotes around a word if you know for sure it will appear, and a minus in front of one to eliminate bogus results. Put “site:” in front if you’re pretty sure which place or places you might have seen it. Another trick is thinking of other ways to say something that authors might use. These tricks from 1990s-to-early-2000s searching get me the easy finds I submit here.

                                                  1. 0

                                                    I disagree that “NIF” was essential to that query. There are a fair number of articles and forum posts on Google about the Rustler library. It’s one of the primary contexts in which those two languages would be discussed together. DDG has only one of those results as far as I can see. Why? Even if I wasn’t looking for Rustler specifically, I should see discussions of how those two languages can be integrated if I search for them together.

                                                    1. 2

                                                      There are a fair number of pages where Elixir and Rust will show up without Rustler, too, especially all the posts about new languages. NIF is definitely a keyword because you’re wanting a NIF library specifically instead of a page about Rust and Elixir without NIF. It’s a credit to Google’s algorithms that it can make the extra connection to Rustler, pushing it to the top.

                                                      That doesn’t mean I expect it or any other search engine to be that smart. So, I still put every key word in to get consistently accurate results. Out of curiosity, I ran your keywords to see what it produces. The results on the top suck. DuckDuckGo is usually way better than that in my daily use. However, instead of three pages in, DuckDuckGo has Rustler on page 1, result 6. Takes about 1 second after hitting enter to get to it. Maybe your search was bad luck or something.

                                                  2. 1

                                                    I did exactly that search and found it at the 5th position.

                                                    While “elixir rust github” put it at 1st position. Maybe you have some filters? I have it set to “All Regions”.

                                                2. 2

                                                  Google has so many repeated results for me that I feel they have worse quality for most of my queries than ddg or startpage. Maybe I’ve done something wrong and gotten myself into a weird bubble, but these days I find myself using Google less and less.

                                                  1. 1

                                                    Guess so. I have been using it at uni for a long time though, and gotten at least what I needed.

                                                    But I admit that googs has more in their indexes.

                                              2. 5

                                                Searx is a fairly nice meta search engine.

                                                1. 4

                                                  Finally there are still all sorts of specialized search options. In this category I would start with Amazon and Wikipedia.

                                                DuckDuckGo has a feature called “bangs” that lets you access them. Overview here. Even if you’re not using DDG, their list might be a nice reference for what to include in a new search engine.

                                                  1. 1

                                                    the URL bar itself now performs a search when you put something that’s not a URL in it

                                                    1. 1

                                                    I thought that was clear. What I like about the old-style dedicated search box is that it is so easy to switch between search engines.

                                                      1. 3

                                                        I believe that you can use multiple search engines in an omnibar by assigning each search engine a keyword, and typing that keyword (and then space) before your search.

                                                        1. 1

                                                          Or if you use DuckDuckGo, you can use !bangs to pivot to another search engine or something else.

                                                        2. 2

                                                          With keyword searching (a feature I first used in Opera, and which is definitely present in Firefox; I can’t speak to any other browsers), it’s “so easy” to switch between search engines—in fact, far easier than with a separate search box. I type “g nephropidae” to search Google, or “w nephropidae” for Wikipedia, “i nephropidae” for image search, or even “deb nephropidae” for Debian package search (there’s no results for that one).

                                                          1. 2

                                                            This is not completely obvious from the user experience. Without visual cues, much available functionality is effectively hidden. You must have either taken the initiative to research this, someone told you, or you stumbled upon it some other way. This also effectively requires you to have CLI-like commands memorized, the exact opposite of what GUIs purport to do. And adding new search engines? That’s non-obvious.

                                                            1. 1

                                                            I use YubNub to get a large library of such keywords that is the same on every device.

                                                      1. 25

                                                        I did a PhD in maths where I had to do a lot of algebraic geometry, so I’m comfortable with category theory and its concepts and applications. I’ve never seen those ideas being used in nontrivial or useful ways in programming, and by now think that either me or a lot of other people are missing some point. I’m not sure which.

                                                        Category theory became popular in mathematics, and especially algebraic geometry, because it provided a “one higher level” from which to look upon the fields and see that a lot of the ideas we were working with were actually a shadow of a single more abstract idea. For example, the direct products of groups, rings, fields, vector spaces and so on were understood as different incarnations of the category-theoretic product. This helped to standardize a lot of arguments, and give names to some concepts differing groups had been grappling with in isolation before. Grothendieck was able to wield these abstract notions so deftly that he could use them to “take compass bearings” and figure out in what directions he should go. That is unbelievably powerful.

                                                        In programming, I can see how one would model the state of a program as a monad. A monad is basically defined around the idea that we can’t always undo whatever we just did, so that makes sense. I’ve also read a fair number of Haskell programmers and their explanations of how category theory fits into programming. None of it seems to even have the promise of the same levels of usefulness as we’ve seen in mathematics, and a lot of it seems to be actively harmful by raising jargon barriers around trivial ideas.
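
                                                        To make that concrete, here is a minimal, hedged sketch (not from the original comment) of what modeling program state as a monad can look like in Haskell, using the standard State monad from the mtl package; the counter example is invented purely for illustration:

                                                          import Control.Monad.State (State, evalState, get, put)

                                                          -- A tiny stateful computation: return the current counter, then bump it.
                                                          tick :: State Int Int
                                                          tick = do
                                                            n <- get       -- read the current state
                                                            put (n + 1)    -- write the new state
                                                            pure n

                                                          threeTicks :: State Int [Int]
                                                          threeTicks = do
                                                            a <- tick
                                                            b <- tick
                                                            c <- tick
                                                            pure [a, b, c]

                                                          main :: IO ()
                                                          main = print (evalState threeTicks 0)   -- prints [0,1,2]

                                                        evalState threads the counter through the three calls in order and then discards the final state, so the sequencing is explicit in the types rather than hidden in mutation.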

                                                        1. 8

                                                          That is a great story, I would definitely read more of your writing about math if you shared it somewhere.

                                                          1. 4

                                                            I too have encountered category theory during my maths degree (never managed to get the PhD, though), and I also agree that category theory in programming seems very out of place. The most interesting application I’ve seen for it is in homological algebra, but I’m pretty sure no programmer has any interest in abelian categories. The most prototypical functor for me is the Galois functor, which programmers have no need for.

                                                            The result is that when I see computer people talk about category theory, it’s all utterly foreign to me. They tell me I should like it because I like mathematics, but I do not. I’ve made some effort to understand why they like it and have never been very convinced by it, as unconvinced as you seem yourself.

                                                            1. 4

                                                              I’ve also read a fair number of Haskell programmers and their explanations of how category theory fits into programming.

                                                              I’d be interested in your take on this discussion, in particular the first comment by Gershom Bazerman, as well as his post here. It seems like he has a good perspective on it, but I don’t have the mathematical knowledge to really confirm that one way or the other. Or maybe you’ve already read this particular stuff and dismissed it; in either case it’d be handy to get a sense of what you think to place it in context, if you’re willing.

                                                              Here is another post which your comment reminded me of, which I wish I had the mathematical ability to fully understand; I’d also love to hear what you think about that as well.

                                                              I’m really not trying to challenge anything you said about the misapplication of CT in Haskell/programming in general (if I haven’t emphasized this enough at this point, I don’t think I’m qualified to do so), I’m just always interested in adding more data to my collection, hoping that at some point I’ll have built up the mathematical maturity to understand the different positions better.

                                                              None of it seems to even have the promise of the same levels of usefulness as we’ve seen in mathematics, and a lot of it seems to be actively harmful by raising jargon barriers around trivial ideas.

                                                              As a programmer who barely understands category theory, all I can say is that I’ve personally found the small number of concepts I’ve encountered useful, and, most importantly, more useful than anything else out there (which I’ll generalize as “design patterns and other vague, poorly specified stuff”) for providing a basis for designing modular structures to base programs on. I find that the most basic concepts presented in category theory map well to the kind of abstraction present in programming, and I’d love to get a better sense of where you find the jargon barriers to be and how we could eliminate those (and fwiw I think this is a general problem in programming, not limited to Haskell nerds dropping category theory terms into their discussions). In particular I’ve found concepts like Monoid, Monad, and Functor to be useful–especially in understanding how they interrelate and can be used together. They’ve enhanced my ability to think conceptually and logically about the kinds of structures I deal with in programming all the time, even where I may not be applying these structures directly in whatever program I’m considering. I may be doing it wrong, but insofar as I’ve developed the correct intuition around these things, they seem useful to me.

                                                              So I can readily accept that we have not been able (and maybe never will be able!) to harness category theory at the level Grothendieck did, but it seems like right now it’s yielding results, and part of the value is simply in the exploration and application of a different rigor to programming as a practice. Maybe in ten or twenty years we’ll look back at the folly of applying category theory to programming, but I rather think it’s more likely that we’ll see it as a step on the path toward discovering something deeper, more rigorous and powerful, and more beautiful than what we can imagine for designing programs right now.

                                                              Or maybe we’ll go back to being obsessed with design patterns and UML. If that’s the case I hope I’ll have quit and gone into being an organic farmer in Vermont or something.
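
                                                              As a small, hedged illustration (invented for this note, not taken from the comments above) of the Functor/Monoid/Monad interplay mentioned a few paragraphs up, each of these is one tiny interface reused across very different structures:

                                                                import Data.Monoid (Sum(..))

                                                                -- Functor: map a function inside a structure without knowing its shape.
                                                                doubled :: [Int]
                                                                doubled = fmap (* 2) [1, 2, 3]            -- [2,4,6]

                                                                -- Monoid: foldMap sends each element into a monoid and combines the results.
                                                                total :: Int
                                                                total = getSum (foldMap Sum [1, 2, 3])    -- 6

                                                                -- Monad: chain computations that may fail, short-circuiting on Nothing.
                                                                safeDiv :: Int -> Int -> Maybe Int
                                                                safeDiv _ 0 = Nothing
                                                                safeDiv x y = Just (x `div` y)

                                                                chained :: Maybe Int
                                                                chained = do
                                                                  a <- safeDiv 100 5   -- Just 20
                                                                  b <- safeDiv a 2     -- Just 10
                                                                  safeDiv b 0          -- division by zero, so the whole chain is Nothing

                                                                main :: IO ()
                                                                main = print (doubled, total, chained)

                                                              None of this requires category theory to use; the vocabulary just names the shared structure, and opinions clearly differ on how much that naming buys you.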

                                                              1. 1

                                                                I’m interested in hearing more about this as well. It’s been a long-standing question for me whether continuing to investigate category theory would help me write better programs. I have no background in higher math, but my understanding/assumption has been that category theory is relevant to programming insofar as it facilitates composition.

                                                                As I see it, the fundamental problem of software design is economically accommodating change. We try to facilitate this by selectively introducing boundaries into our systems in the hopes that the resulting structures can be understood, modified, and rearranged as atomic units. I don’t think it’s controversial to say that our overall success at this is mixed at best.

                                                                The promise of category theory seems to be that, rather than relying on hearsay (design patterns) or our own limited experience, we can inform our choices of where to introduce boundaries from a more fundamental, abstract space where the compositional properties of various structures are rigorously understood.

                                                                But like I said, this is very much an open question for me. I would love to be convinced that, although there is clearly some overlap, the fields of category theory and software design are generally independent and irrelevant to each other.

                                                              1. 5

                                                                Going to try to pick up Elm again, for the purpose of writing a game. Tried last year, let’s see if it goes better this time…

                                                                1. 3

                                                                  Just out of curiosity, any arguments to favor Elm over Reason?

                                                                  1. 2

                                                                    I like Reason, but it doesn’t enforce purity. IMO that’s the big reason to use Elm instead.

                                                                    1. 1

                                                                      Ecosystem mainly - though Reason certainly seems very interesting as well!

                                                                    2. 2

                                                                      Elm is great! I really liked the new error messages in the latest 0.18. They’re a bit overdue for an updated version though.

                                                                    1. 11

                                                                      Meta discussion: what’s up with all these people advocating against Net Neutrality? Maybe I didn’t listen to too many different opinions a few years ago, but I’m fairly certain that if anyone wanted to try to oppose NN, it seemed obvious that they were an ISP shill. A few months ago I saw some looney Anarcho-Capitalist (i.e. a radical right-wing (market) libertarian) advocate for it, which I believed to be a new low for their group, but since then I’ve been seeing more and more people popping out of seemingly nowhere, trying to convince people that ISPs would still provide equal service to everyone (or “better”), even if the darn government wouldn’t make them do so (even if it weren’t profitable for them - but since when do private businesses care about that?).

                                                                      Is my perspective limited? Has this been a longer trend? If not, what is the cause for this recent shift?

                                                                      1. 7

                                                                        Some people distrust the government so much, they argue against their own interests just to “keep the government out”

                                                                        1. 4

                                                                          Your perspective is limited. An argument against “Net Neutrality” has existed for quite some time.

                                                                          [EDIT]: Source (note the blurb at the top though): https://www.eff.org/deeplinks/2009/09/net-neutrality-fcc-perils-and-promise — As far as I can tell, it’s basically the same partisan argument (on both sides) that’s being repeated today. If I were a betting man, I’d say arguments against it go back even further, but I’ve spent enough time on it already.

                                                                          1. 1

                                                                            I’m not doubting that there was an argument; the very concept of people supporting Net Neutrality without even an argument would be ludicrous. All I’m asking is why, lately, they have been, or at least appear to have been, more prominent.

                                                                            1. 0

                                                                              I’d say the EFF is pretty prominent.

                                                                              Anyway, it doesn’t seem more prominent to me than any other time this issue has come up (and it has, several times).

                                                                              1. [Comment removed by author]

                                                                                1. 1

                                                                                  That’s why I said

                                                                                  (note the blurb at the top though)

                                                                                  The EFF is a prominent organization. In 2009, they voiced a stance against “Net Neutrality.” You asked if your perspective was off. This is evidence, IMO, that it is. Feel free to dismiss it, but this niggling argument is just pointless.

                                                                          2. 2

                                                                            I don’t argue against it, but I do think many of the hypothetical scenarios people come up with are far fetched and not representative of why ISPs are opposing net neutrality (and those falsely constructed hypothetical doomsday scenarios are why so many people care in the first place).

                                                                            NN is definitely better for the consumer, but if we don’t have it, we won’t lose our first amendment rights or have to pay extra for full speed access to lobste.rs. Realistically, the change will not be very drastic at all.

                                                                            1. 2

                                                                              Realistically, the change will not be very drastic at all.

                                                                              This is a strangely confident assertion to make in the current political climate.

                                                                              1. 2

                                                                                I’m very confident about the goals behind corporate lobbying: create a friendly regulatory environment, then push profits right up to the line, but not so far as to create a public/regulatory backlash (because that ruins profits, temporarily).

                                                                                It’s a game, and as long as they’re playing it, they’re not going to piss you off squabbling over kilo/mega/giga-bytes when the money is in video streaming (exa/zetta-bytes). Case in point: you really can’t hit Comcast’s data cap without streaming HD video – that’s on purpose.

                                                                                1. 2

                                                                                  They’ve made crazy-high profits for decades using monopolistic tactics with poor service, high costs, and so on. Any public anger at their tactics had to be balanced against drawbacks of not having telecom service at all. So, they tolerated it for lack of other options. The telecoms reinforced that with consolidation that kept things bad until government action forced competition and/or speed increases.

                                                                                  With all evidence to the contrary, I don’t know how you can talk like they’ll stop pushing profits at the point where it creates backlash. The backlash alone won’t do anything given that the public doesn’t choose the FCC heads: politicians paid off by telecoms do. So, they keep trying to cause more profitable problems for consumers because executive incentives, barriers for competitors, lobbying, and weak regulations all let them do it.

                                                                                  And yes, they did piss me off with the caps that my non-HD, 2-person household ran through in a month, on top of probably-intentionally shitty meters that said I was using gigabytes of data when stuff was powered off. A strong backlash combined with a consumer-friendly regulator made them back off… not with admissions of wrongdoing… but by simply raising the cap. The cap that they invented out of thin air to begin with. If the new regulator changes things, they might try that stuff again, or something worse like their plan to sell our info.

                                                                          1. 4

                                                                              It’s confusing to me that the Haskell community would be resistant to the pipe operator, since IMO it’s been pretty successful in Elixir, Elm, F#, and OCaml. Maybe it’s symptomatic of something unhealthy about the Haskell community in general.

                                                                            1. 1

                                                                                Yeah, it’s great, and Haskell often forgets that people like to program in order to do things.

                                                                              1. 2

                                                                                Avoid success at all cost taken a lil’ too far.

                                                                              2. 1

                                                                                 The Haskell community is prone to err on the side of centralization to avoid fragmentation (unlike Lisps, which are scattered into incompatible ecosystems, making writing anything practical much more of a chore than it should be) and has a deep and ideological aversion to duct tape instead of a proper fix. In this case: Flow functions are slightly more intuitive and IDE-friendly, but they do not address Haskell’s legacy usability problems in depth (a library is not the proper place for fixing the language), thus the community is very reluctant to use this.

                                                                                I don’t see anything unhealthy about this, it is a conscious and rational choice (that has its downsides, yes).
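
                                                                                For context, a minimal sketch of what pipe-style code already looks like in standard Haskell via (&) from Data.Function (reverse application), next to ordinary composition; the toy pipeline is invented for illustration:

                                                                                  import Data.Char (toUpper)
                                                                                  import Data.Function ((&))

                                                                                  -- Conventional right-to-left composition:
                                                                                  shoutReversed :: String -> String
                                                                                  shoutReversed = reverse . map toUpper

                                                                                  -- The same pipeline written left-to-right with (&), i.e. reverse application:
                                                                                  shoutReversed' :: String -> String
                                                                                  shoutReversed' s = s & map toUpper & reverse

                                                                                  main :: IO ()
                                                                                  main = do
                                                                                    print (shoutReversed "hello")    -- "OLLEH"
                                                                                    print (shoutReversed' "hello")   -- "OLLEH"

                                                                                As far as I understand, libraries like Flow mostly repackage operators of this kind under different names, which is roughly why the comment above treats them as duct tape rather than a fix for the language itself.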

                                                                              1. [Comment removed by author]

                                                                                1. 10

                                                                                  You’re saying that ST was great 4-5 years ago, but apart from the langserver, which one of your points didn’t apply back then as much as it does now? You say that “today there are better editors”, but surely vim is much older than 4-5 years and basically didn’t change.

                                                                                  1. [Comment removed by author]

                                                                                    1. 8

                                                                                      The primary reason I stick with Sublime Text is that Atom and VSCode have unacceptably worse performance for very mundane editing tasks.

                                                                                      I’ve tried to switch to both vim and Spacemacs (I’d love to use an open source editor), but it’s non-trivial to configure them to replicate functionality that I’ve become attached to in Sublime.

                                                                                      1. 1

                                                                                        I thought VSCode was supposed to be very quick. Haven’t experimented with it much myself, what mundane editing tasks make it grind to a halt? I am well aware Atom has performance issues.

                                                                                        1. 1

                                                                                            Neither Atom nor VSCode grinds to a halt for me, but I can just tell the difference in how quickly text renders and how quickly input is handled.

                                                                                          I’m not usually one of those people who obsesses about app performance, but editors are an exception because I spend large chunks of my life using them.

                                                                                        2. 1

                                                                                          I’ve tried to switch to both vim and Spacemacs (I’d love to use an open source editor), but it’s non-trivial to configure them to replicate functionality that I’ve become attached to in Sublime

                                                                                            This is the reason why I stay with vim: I’m unable to replicate vim functionality in other editors.

                                                                                          1. 1

                                                                                            Yeah, fortunately NeoVintageous for Sublime does everything I need for vim-style movement and editing.

                                                                                    2. 3

                                                                                      I think the really ground-breaking feature that ST introduced was multi-cursor editing. Now most editors have some version of that. Once you get used to it, it’s very convenient, and the cognitive overhead is low.

                                                                                       As for the mini-map, I suppose it’s a matter of taste, but I found it very helpful for scanning quickly through big files looking for structure. Visual pattern recognition is something human brains are ‘effortlessly’ good at, so why not put it to use? Of course, I was using bright syntax highlighting, which makes code patterns much more visible in miniature. Less benefit for the highlight-averse.

                                                                                      I’ve been using ST3 beta for a few years as my primary editor. I tried using Atom and (more recently) VS Code, but didn’t like them as much: the performance gap was quite noticeable at start-up and for oversized data files. The plug-in ecosystems might make the difference for some folks, but all I really used was git-gutter and some pretty standard linters. For spare-time fun projects I still enjoy Light Table, but it’s more of a novelty. I’m gradually moving away from the Mac and want a light-weight open-source editor that will run on any OS.

                                                                                      So now, as part of my effort to simplify and get better at unix tools, I’m using vis. I’m enjoying the climb up the learning curve, but I think that if I stick with it long enough, I’ll probably end up writing a mouse-mode plugin. And maybe git-gutter. Interactive structural regexps and multi-cursor editing seem like a winning combination, though.

                                                                                      1. 3

                                                                                        You might enjoy exploring kakoune as well. http://kakoune.org | https://github.com/mawww/kakoune

                                                                                        1. 2

                                                                                           I’m an Emacs guy myself and I honestly think that multi-cursor editing is just eye candy for good ol’ editor macros, and both vim and Emacs have included them since… forever?

                                                                                          1. 3

                                                                                            I’ve never used Sublime Text, but I’ve used multiple-cursors in vis and Kakoune, and it beats the heck out of Vim’s macro feature, just because of the interactivity.

                                                                                            With Vim, I’d record a macro and bang on the “replay” button a bunch of times only to find that in three of seventeen cases it did the wrong thing and made a mess, so I’d have to undo and (blindly) try again, or go back and fix those three cases manually.

                                                                                            With multiple cursors, I can do the first few setup steps, then bang on the “cycle through cursors” button to check everything’s in sync. If there’s any outliers, I can find them before I make changes and keep them in mind as I edit, instead of having my compiler (or whatever) spit out syntax errors afterward.

                                                                                            Also, multiple cursors are the most natural user interface for [url=http://doc.cat-v.org/bell_labs/structural_regexps/]structural regular expressions[/url], and being able to slice-and-dice a CSV (or any non-recursive syntax) by defining regexes for fields and delimiters is incredibly powerful.

                                                                                            1. 0

                                                                                              [url=http://doc.cat-v.org/bell_labs/structural_regexps/]structural regular expressions[/url]

                                                                                              This might be the first attempt at BBCode I’ve seen on Lobsters. Thanks for reminding me how much I hate it.

                                                                                              1. 1

                                                                                                Dangit, you can tell I wrote that reply at like 11PM, can’t you. :(

                                                                                            2. 1

                                                                                              I agree with you. I use Vim, and was thinking about switching until I realized that a search and repeat (or a macro when it’s more complex) works just as well. Multiple cursors is a cute trick, but never seemed as useful as it first appeared.

                                                                                            3. 2

                                                                                              I thought multiple cursors were awesome. Then I switched to using Emacs, thanks to Spacemacs, which introduced me to iedit [0]. I think this is superior to multiple cursors. I am slowly learning Emacs through Spacemacs; I’m still far away from being any type of guru.

                                                                                              [0] https://github.com/syl20bnr/spacemacs/blob/master/doc/DOCUMENTATION.org#replacing-text-with-iedit

                                                                                            4. 2

                                                                                              I’ve started using vim for work, and although I’ve become quite fast, I find myself missing ST’s multiple cursors.

                                                                                              I might try switching to a hackable editor like Yi. I’ve really enjoyed using xmonad recently for that reason.

                                                                                              1. 1

                                                                                                Not the threat model this functionality is addressing.

                                                                                                1. 2

                                                                                                  What does it address though? I mean seriously, do you really think you can resist in really dangerous situations?

                                                                                                  1. 1

                                                                                                    It’s protecting you against the police, who operate under a legal framework which prevents them from beating you with a rubber hose but not from obtaining your fingerprints.

                                                                                                    1. 2

                                                                                                      I am too cynical to comment on that. I just wish the world were this easy.

                                                                                                      1. 1

                                                                                                        I left the caveat out for the sake of brevity, but like I said the threat model this functionality is addressing is not one where the attacker can utilize any means necessary. Is there any practical system which can address that scenario?

                                                                                                  2. 1

                                                                                                    This brings up a good point though, is it true that, let’s say TSA agents, can force you to unlock your phone with your fingerprint but not with a passcode? Honest question.

                                                                                                    1. 3

                                                                                                      I don’t know about TSA, but it’s true that cops can.

                                                                                                      As far as I know, fingerprints aren’t protected under the fifth amendment but passwords are: http://mashable.com/2014/10/30/cops-can-force-you-to-unlock-phone-with-fingerprint-ruling/#g3MF5oyDTOqN

                                                                                                1. 11

                                                                                                  It was inevitable.

                                                                                                  If only it made him complete a quest with a random character in adventure mode before continuing to update his system. :D

                                                                                                  This is one good reason why I always use full, explicit paths in my scripts.

                                                                                                  1. 12

                                                                                                    This is one good reason why I always use full, explicit paths in my scripts.

                                                                                                    but then they are not portable

                                                                                                    1. 8
                                                                                                      qbit@slip[0]:~λ which bash
                                                                                                      /usr/local/bin/bash
                                                                                                      qbit@slip[0]:~λ
                                                                                                      
                                                                                                      1. -2

                                                                                                        Just always use /bin/bash and don’t care about distros/BSDs that don’t care enough about their users to place bash there. Problem solved for 99% of users. ;)

                                                                                                        1. 10

                                                                                                          Or, you know, ignore developers who don’t care enough about their downstream packagers and users to learn about /usr/bin/env? Problem solved for 99% of users who care about cross-platform software.
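
                                                                                                          A minimal sketch of that convention (the script body is just a placeholder):

                                                                                                            #!/usr/bin/env bash
                                                                                                            # Look bash up on PATH instead of hard-coding its location, so this also works
                                                                                                            # where bash lives in /usr/local/bin (as in the `which bash` output above).
                                                                                                            echo "hello from bash $BASH_VERSION"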

                                                                                                          1. 3

                                                                                                            Not all distros may have env in /usr/bin, so not necessarily an improvement over the extremely common /bin/bash. Then there’s the problem of what /usr/bin/env df might return…

                                                                                                            1. 12

                                                                                                              On NixOS, env is the only thing in /usr/bin, so that’s at least one distro that developers can avoid breaking by using it.

                                                                                                              1. 7

                                                                                                                IME, globally /usr/bin/env is more likely to exist than /bin/bash. The person with this Dwarf Fortress issue seems to have done foolish things to get df to be Dwarf Fortress, so I don’t think this situation is a valid argument against something that is closer to being a standard (/usr/bin/env) in favor of something that’s not (/bin/bash).

                                                                                                                1. 1

                                                                                                                  As long as neither /bin/bash nor /usr/bin/env is a standard, there can be issues. In addition, there is no agreed-upon registry for reserving executable names.

                                                                                                        2. 1

                                                                                                          Keep in mind, for this to happen, the user probably changed the system default PATH to put Dwarf Fortress first. sudo usually scrubs the environment to default settings unless you’ve taken steps.

                                                                                                          1. 10

                                                                                                            Read the comments on the answer. He dropped a symlink into /usr/local/bin to make the command available to him. /usr/local/bin/df ?

                                                                                                            1. 1

                                                                                                              I don’t get this. Did he override the linux df in /usr/local/bin?

                                                                                                              1. 1

                                                                                                                The original df is in /bin. He placed another df in /usr/local/bin. The default PATH on Ubuntu has /usr/local/bin before /bin, so his df gets executed instead of the system one.
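
                                                                                                                A small illustrative transcript of that lookup (the PATH value here is abbreviated, not copied from the report):

                                                                                                                  $ echo "$PATH"
                                                                                                                  /usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
                                                                                                                  $ type -a df        # bash lists matches in PATH order
                                                                                                                  df is /usr/local/bin/df
                                                                                                                  df is /bin/df

                                                                                                                The first hit wins, so the Dwarf Fortress symlink shadows the real df.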

                                                                                                              2. 1

                                                                                                                Why would they use df? Did they not know about the other df? Or did they just not care? Even if someone else set the PATH variable and it isn’t your fault, at best it’s confusing; at worst someone messes up an install/copy/backup script, with the potential to hose their system.

                                                                                                                1. 3

                                                                                                                  Not all the world is Unix. I can’t confirm with cursory searches, but given the character set choice (CP437) I strongly suspect that Windows was the original platform.

                                                                                                                  1. 1

                                                                                                                    It was

                                                                                                          1. 3

                                                                                                            To a great extent this does exist. Sandboxing has helped prevent these types of attacks for many years now. That’s how iOS works on Apple products. A rogue ransomware app couldn’t encrypt the whole phone because it can’t reach the whole phone. The better question is: why haven’t desktop operating systems, specifically Windows, caught up yet?

                                                                                                            1. 2

                                                                                                              Windows Store apps are already sandboxed and have been from the start, but that store has not become a broadly appealing distribution platform for lots of different reasons.

                                                                                                              It’s the same situation with macOS and the Mac App Store, but Apple has done a somewhat better job at getting people on board with their store.

                                                                                                              This is one of the many reasons more people are using tablets and phones as their primary computing devices. The compatibility and UX legacy on the desktop is a goddamn mess.

                                                                                                            1. 17

                                                                                                              This fucks bisect, defeating one of the biggest reasons version control provides value.

                                                                                                              Furthermore, there are tools to easily take both approaches simultaneously. Just git merge --squash before you push, and all your work in progress diffs get smushed together into one final diff. And, for example, Phabricator even pulls down the revision (pull request equivalent) description, list of reviewers, tasks, etc., and uses that to create a squash commit of your current branch when you run arc land.
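
                                                                                                              A minimal sketch of that squash flow (the branch name and commit message are placeholders):

                                                                                                                git checkout master
                                                                                                                git merge --squash my-feature    # stage the branch's combined diff without committing
                                                                                                                git commit -m "Add my feature"   # lands as a single atomic commit on master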

                                                                                                              1. 7

                                                                                                                I’m surprised to hear so many people mention bisect. I’ve tried on a number of occasions to use git bisect and svn bisect before that, and I don’t think it actually helped me even once. Usually I run into the following problems:

                                                                                                                • there is state that is essential to exercising the test case I’m interested in which isn’t in source control (e.g. configuration files, databases, external services) and the shape of the data in these places needs to change to exercise different versions of the code
                                                                                                                • the test case passes/fails at different points in the git history for reasons unrelated to the problem that I’m investigating

                                                                                                                I love the idea of git bisect but in practice it’s never been worth it for me.

                                                                                                                1. 14

                                                                                                                  Your second bullet point suggests to me bisect isn’t useful to you in part because you’re not taking good enough care of your history and have broken points in it.

                                                                                                                  I bisect things several times a month, and it routinely saves me hours when I do. By not keeping history clean as others have talked about, you ensure bisect is useless even for those developers who do find it useful. :(

                                                                                                                  1. 6

                                                                                                                    Right: meaningful commit messages are important but a passing build for each commit is essential. A VCS has pretty limited value without that practice.

                                                                                                                    1. 1

                                                                                                                      It does help for your commits to be at clean points, but it isn’t strictly necessary: you don’t need to run your entire test suite. I’ll usually either bisect with a single spec or isolate the issue into a script that I can run under bisect. And, as mentioned elsewhere, you can just bisect manually.
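
                                                                                                                      A minimal sketch of driving bisect with such a script (the tag and script name are hypothetical):

                                                                                                                        git bisect start
                                                                                                                        git bisect bad HEAD
                                                                                                                        git bisect good v1.2.0       # some known-good point
                                                                                                                        git bisect run ./repro.sh    # exit 0 = good, 125 = skip this commit, other non-zero = bad
                                                                                                                        git bisect reset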

                                                                                                                  2. 6

                                                                                                                    You can run bisect in an entirely manual mode, where git checks out each revision for you to tinker with before you mark the commit as good or bad.
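
                                                                                                                    In manual mode the loop looks roughly like this sketch, with you doing the marking:

                                                                                                                      # After `git bisect start` and marking one bad and one good commit,
                                                                                                                      # git checks out a midpoint for you to tinker with; then:
                                                                                                                      git bisect good    # or: git bisect bad
                                                                                                                      # Repeat until git reports the first bad commit, then clean up:
                                                                                                                      git bisect reset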

                                                                                                                    1. 3

                                                                                                                      There are places where it’s not so great, and there are places where it’s a life-saving tool. I work (okay, peripherally… mostly I watch people work) on the Perl 5 core. Language runtime, right? And compatibility is taken pretty seriously. We try not to break anyone’s running code unless we have a compelling reason for it and preferably they’ve been given two years' warning. Even if that code was written in 1994. And broken stuff is supposed to stay on branches, not go into master (which is actually named “blead”, but that’s another story. I think we might have been the ones who convinced github to allow a different default branch because having it fail to find “master” was kind of embarrassing).

                                                                                                                      So we have a pretty ideal situation, and it’s not surprising that there’s a good amount of tooling built up around it. If you see that some third-party module has started failing its test suite with the latest release, there’s a script that will build perl, install a given module and all of its dependencies, run all of their tests along the way, find a stable release where all of that did work, then bisect between there and HEAD to determine exactly what merge made it start failing. If you have a snippet of code and you want to see where it changed behavior, use bisect.pl -e. If you have a testcase that causes weird memory corruption, use bisect.pl --valgrind and it will tell you the first commit where perl, run with your sample code, causes valgrind to complain bitterly. I won’t say it works every time, but… maybe ¾ of the time? Enough to be very worth it.

                                                                                                                    2. 0

                                                                                                                      This fucks bisect, defeating one of the biggest reasons version control provides value.

                                                                                                                      No it doesn’t. Bisect doesn’t care what the commit message is. It does care that your commit works, but I don’t think the article is actually advocating checking in broken code (despite the title) - rather it’s advocating committing without regard to commit messages.

                                                                                                                      Just git merge --squash before you push, and all your work in progress diffs get smushed together into one final diff.

                                                                                                                      This, on the other hand, fucks bisect.

                                                                                                                      1. 3

                                                                                                                        Do you know how bisect works? You are binary searching through your commit history, usually to find the exact commit that introduced a bug. The article advocates using a bunch of work in progress commits—very few of which will actually work because they’re work in progress—and then landing them all on the master branch. How exactly are you supposed to binary search through a ton of broken WIP commits to find a bug? 90% of your commits “have bugs” because they never worked to begin with, otherwise they wouldn’t be work in progress!

                                                                                                                        Squashing WIP commits when you land makes sure every commit on master is an atomic operation changing the code from one working state to another. Then when you bisect, you can actually find a test failure or other issue. Without squashing you’ll end up with a compilation failure or something from some jack off’s WIP commit. At least if you follow the author’s advice, that commit will say “fuck” or something equally useless, and whoever is bisecting can know to fire you and hire someone who knows what version control does.

                                                                                                                        1. 1

                                                                                                                          Do you know how bisect works?

                                                                                                                          Does condescension help you feel better about yourself?

                                                                                                                          The article advocates using a bunch of work in progress commits—very few of which will actually work because they’re work in progress—and then landing them all on the master branch. How exactly are you supposed to binary search through a ton of broken WIP commits to find a bug? 90% of your commits “have bugs” because they never worked to begin with, otherwise they wouldn’t be work in progress!

                                                                                                                          I don’t read it that way. The article mainly advocates not worrying about commit messages, and also being willing to commit “experiments” that don’t pan out, particularly in the context of frontend design changes. That’s not the same as “not working” in the sense of e.g. not compiling.

                                                                                                                          It’s important that most commits be “working enough” that they won’t interfere with tracking down an orthogonal issue (which is what bisect is mostly for). In a compiled language that probably means they need to compile to a certain extent (perhaps with some workflow adjustments, e.g. building with -fdefer-type-errors in your bisect script), but it doesn’t mean every test has to pass (you’ll presumably have a specific test in your bisect script; there’s no value in running all the tests every time).
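
                                                                                                                          For example, a bisect script along these lines could work for a Haskell project (the build flags are real GHC/cabal options, but the test runner script is a hypothetical placeholder):

                                                                                                                            #!/bin/sh
                                                                                                                            # Build permissively so unrelated type errors in WIP commits don't stop the search;
                                                                                                                            # exit 125 tells `git bisect run` to skip commits that don't build at all.
                                                                                                                            cabal build --ghc-options=-fdefer-type-errors || exit 125
                                                                                                                            # Run only the one test you care about; its exit status marks the commit good or bad.
                                                                                                                            ./run-the-specific-failing-test.sh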

                                                                                                                          Squashing WIP commits when you land makes sure every commit on master is an atomic operation changing the code from one working state to another.

                                                                                                                          Sure, but it also makes those changes much bigger. If your bisect ends up pointing to a 100-line diff then that’s not very helpful because you’ve still got to manually hunt through those changes to find the one that made the actual difference - at that point you’re not getting much benefit from having version control at all.

                                                                                                                    1. 1

                                                                                                                      Not a fantastic interview, but Herzog is someone worth listening to.

                                                                                                                      1. 2

                                                                                                                        Such a comment is even better if it includes links proving he’s worth listening to. Got any for readers here?

                                                                                                                        1. 6

                                                                                                                          As far as proving he’s worth listening to, I would start with his body of work before reading an interview.

                                                                                                                          If you have a way of watching 3D movies, I’d recommend Cave of Forgotten Dreams. Watching that film is the most moving experience I’ve had in VR by far. Otherwise, some of the films he’s known for are Grizzly Man, Encounters at the End of the World, and Aguirre, the Wrath of God.

                                                                                                                          1. 2

                                                                                                                            I would say listen to the Science Friday interview from last year. He has some very good insights into humanity.

                                                                                                                            http://www.sciencefriday.com/segments/seeking-humanity-in-volcanoes-with-werner-herzog/

                                                                                                                            I would have to back up xtian here and say his body of work stands on its own. It’s hard to prove anyone is “worth listening to”. All I can say is he has some very valid and interesting viewpoints that most technological people might not want to confront. Think Black Mirror perhaps, only not as dystopian.

                                                                                                                            Perhaps this interview will suffice as proof: https://www.wired.com/2016/07/warner-herzog-lo-and-behold/

                                                                                                                            1. 1

                                                                                                                              Although not entirely serious, I love his appearance in Rick and Morty https://www.youtube.com/watch?v=Rw1cdRew-Zg

                                                                                                                          1. 13

                                                                                                                            I rail against this frequently.

                                                                                                                            In the interest of fostering discussion^W^Wcomplaining with an audience (but maybe some discussion will result!), here are a few trends, not mentioned in the article, that are profoundly user-unfriendly and which I would very much like to see die:


                                                                                                                            Interface mutability. My partner uses an iPhone. She was not happy to start using her iPhone, because she had to take time to learn how to use it that she could have spent doing literally anything else, most of which she would have found more productive. (The jump from a landline touch-tone phone to a clamshell cell phone is like climbing a curb compared to the Everest of figuring out a smartphone interface.) But okay, now she’s figured out how to use it, all is well, right? Well, obviously not, because I’m here complaining about it. Some years later, Apple pushed iOS 7 to her phone, and it rearranged, redesigned and shuffled everything. Now she had to relearn how to use her phone to no discernible benefit, because Apple decided the previous interface was insufficiently shiny and/or confusing. What the fuck. How many millions or billions of dollars of damage did Apple do to the world’s economy with that change? Because I got to experience secondhand at least a few hours of wasted time and frustration due to it.

                                                                                                                            And of course, it doesn’t end there: she recently had to update her laptop (from OS X Lion to Sierra) because, for a medical professional, operating systems without security support obviously won’t fly. And so now she has to relearn how to use her computer. While the learning curve for new Mac OS versions is shallower than the iOS <7→7 curve, Sierra performs terribly on her (nominally supported) laptop. I’m hoping an upgrade to an SSD will resolve that for at least a few more years, but if not (or eventually regardless), she’ll have to buy a new laptop, not because she needs new features or the old one is wearing out, but essentially because Apple mandated it. Great.

                                                                                                                            While my computer interface (bash, wmii/i3, vim) has been essentially stable for the better part of a decade, the barriers to entry to such an interface are formidable indeed, and the capabilities aren’t sufficient for everyone; my partner, for instance, needs to run proprietary medical record programs, which provide only Windows and Mac OS versions.

                                                                                                                            (I don’t mean to single Apple out here, by the way; it’s just the example I’ve most recently had significant exposure to. Nearly every interface vendor is guilty. I go to some lengths to insulate myself from popular computing for precisely these sorts of reasons.)


                                                                                                                            Inconsistency. The article touches on this in the realm of the web (“is this a button? A link? A static label?”), but it’s a cancer that has spread to native interfaces, as well. Is a given element a button? A link? A static label? Who the hell knows? I can’t figure out without clicking on it, and who knows what happens when I do that. The webapp-ification of native interfaces is partly to blame here (way back when the web was actually a web of static pages linked by hypertext, there was a good reason to present web content differently from application interface; now, unfortunately, those conventions have leaked between environments) but I seem to recall once upon a time major interface vendors published HIGs that were either enforced or at least broadly adhered to (and offenders like Winamp were rare and the butt of frequent jokes). That seems to have fallen by the wayside. Google and Apple seem to be trying to bring it back in the mobile interfaces, but they’re doing a bad job of enforcement (even though they’ve given themselves the technical ability to do so!) and their mobile HIGs are bad anyway.


                                                                                                                            System fragility. You know how many people are terrified of changing their system’s settings? We taught them to feel this way by, in the 90s, presenting them with a multitude of knobs that could destroy their system, requiring them to shell out money to a probably-insufferable technician who would almost certainly make fun of them behind their backs to unfuck things and then quite possibly shell out more money or time to recreate work they lost. Well now we’re well into the 2010s, we’ve learned our lesson, and systems are resilient, present informative warnings at an appropriate frequency and generally enable fearless user operation! Lol, no, of course not. Systems are less likely to fuck themselves now, but regular users are still justifiably afraid of them because they’re still unjustifiably likely to present dangerous options with only jargon to warn you off.

                                                                                                                            Now, back in the 90s, some of that fragility was just because consumer-level computers weren’t a mature product yet. They had to expose some of the rougher edges of the underlying hardware interfaces, because there wasn’t enough headroom to paper over them effectively. (Not all of them, of course. In no universe should it take me two clicks to erase a disk.) But there’s really no remaining excuse now.

                                                                                                                            1. 8

                                                                                                                              On HIGs: While Macs have had good consistency even from third parties for a long time, on Windows it’s been a total mess, with nothing looking or feeling consistent. The last push for HIG consistency was with Windows 95; UWP might improve this though. At least the X11 desktops are consistent with themselves. (I try to make apps that are good citizens on Windows.)

                                                                                                                              On browsers: Please take me back to the days of static pages, when browsers were document viewers, not app runtimes.

                                                                                                                              1. 3

                                                                                                                                Adherence to the macOS HIG has eroded noticeably in recent years, even in first-party apps. I agree it’s nowhere near as much of a mess as Windows, but I don’t use it as a point of comparison anymore.

                                                                                                                                1. 3

                                                                                                                                  Apple seems to be getting less interested in pushing (or even enabling) third parties to conform to any kind of consistent HIG as well. One of the traditional strengths of the Mac platform for developers was its thorough and consistent documentation, which explained what everything did, why it did it, how pieces fit together, and generally what the Right Way To Do Things was. Now the documentation is all over the map and my recent experience with it has not been very good. Large parts look like basically auto-generated Doxygen style stuff giving you bare-bones class documentation and not much else.

                                                                                                                              2. 4

                                                                                                                                Interface mutability.

                                                                                                                                Without any change, there is no progress. I am also amazed at some people I have encountered who actually seemed to simply refuse to learn anything new.

                                                                                                                                That said, change for change’s sake (novelty chasing) is indeed a serious problem in the industry. I wholeheartedly agree.

                                                                                                                                1. 4

                                                                                                                                  Maybe a different type of progress?

                                                                                                                                  I think it was The Ultimate C64 Talk where the speaker made the point that hardware moves so fast nowadays, people don’t explore its limits (paraphrased from memory).

                                                                                                                                  There’s often a kind of CADT-style impatience among people, which leads to exasperated comments from my fiancé like “They made Spotify shit again”. I don’t use Spotify, so I’m not sure, but I’ve understood that after the shock of change (“now it’s shit!”) there’s often a meh (“it didn’t get better or worse, just different.”).

                                                                                                                                  So how can end users know whether a change was in any way objectively better when things move (and break) faster than we can explore their limits?

                                                                                                                                  1. 3

                                                                                                                                    When I have a useful thing, I don’t want it to progress; I want it to keep being the same useful thing.

                                                                                                                                    1. 0

                                                                                                                                      So you still have a flip phone and a horse?

                                                                                                                                      1. 8

                                                                                                                                        I find comments like this one frustrating. Some people bloody well do still have flip phones and horses, because they want to and that’s actually just fine. Those things don’t work less well than they used to, and if the owner is happy with it and it isn’t dangerous, I can’t imagine why they should change to something else.

                                                                                                                                        The difference with software updates in newer products like iPhones is that you often need to take the update, or your necessarily connected device will be rife with security holes. But the major updates often screw around with where the buttons are, or how things work. There isn’t really a (safe) choice to just keep the horse or the old flip phone, because in a very real sense they don’t build things that way anymore.

                                                                                                                                        1. 5

                                                                                                                                          Until a few months ago I had a near-invincible candybar phone. Why not a smart phone?

                                                                                                                                          • I wanted good battery life. Months later, forgotten in a drawer, the little beasty still runs its daily alarm clock.
                                                                                                                                          • I wanted durability. Even with a nice Otterbox on my new phone, that little candybar survived thirty-foot bouncedrops from my cycling commute on hard roads.
                                                                                                                                          • I wanted to text without looking. The little keypad gave great tactile feedback, so I could text without breaking eye contact or appearing to do anything other than have my hands in my pockets.
                                                                                                                                          • I wanted good call quality. Most voice calls were comfortable and good.

                                                                                                                                          Sometimes old tech that correctly solves the problem domain simply and reliably is preferable to some damn fool new fancy solution. That’s why the AK family and the Mauser action have been around as long as they have, why Usenet and IRC are still in widespread use after more than 20 years, and so forth.

                                                                                                                                          1. 4

                                                                                                                                            Pull someone from 1967 to today (that’s a leap of 50 years). They know how to drive, and they can still drive a 2017 car without much problem, since that interface hasn’t changed much. The car radio, however? That will take some time (along with the climate controls).

                                                                                                                                            Another thing: from time to time I’ll find some neat feature on Google Maps. At one time, you could select multiple towns and it would highlight each town in light red. I used that feature. Then they removed it. Then they added it back, but you can only do one town at a time. It’s gotten to the point where I don’t want to learn new features for fear that they’ll be arbitrarily removed because their constant A/B testing showed that not many people used it, or were confused by it, or they just felt like not supporting it any more.

                                                                                                                                    1. 6

                                                                                                                                      I’ve never used Erlang. How long does it take to recover from a crash, and e.g. start a new thread? I’m guessing this is cheap?

                                                                                                                                      I ask because Node’s adopted the official “let it crash” line, but restarting a process took up to a minute when I was running it in production.

                                                                                                                                      1. 11

                                                                                                                                        It is much cheaper than spawning an OS thread but more expensive than just calling a function.

                                                                                                                                        1. 4

                                                                                                                                          Erlang process spawning takes, IIRC, less than a thousand machine cycles on modern hardware.

                                                                                                                                          1. 1

                                                                                                                                            That’s a shame. It used to be a goal of the Node project to hold startup time to 30ms.

                                                                                                                                          1. 7

                                                                                                                                            I think we’ll see more aggressive ads. In fact I think this has already happened. I remember ads getting noticeably worse (more Flash animation in particular) at the point when Mozilla started shipping a pop-up-blocker by default, and I don’t think it’s coincidence.

                                                                                                                                            1. 4

                                                                                                                                              I don’t think they are going to get more aggressive, but the opposite. Ads are going to get sneakier and show up in the middle of content as though they were content. It’s happening already in the form of paid content, endorsed content, or whatever else they want to call it. So instead of aggressive lightboxes and pop-up windows with three or four timed ads that you can’t skip, we’re going to be looking at content that is actually just one long advertisement. The line between content and ads will almost disappear entirely.

                                                                                                                                              1. 11

                                                                                                                                                The question of more subtle vs more aggressive is a false dichotomy. The ad industry is already pursuing both strategies. If you disable your ad blocker I think it’s clear that ads have gotten more aggressive and overall inventory has increased. At the same time I think the usefulness of Google for finding information is at an all-time low. Some topics are alright, but an increasing number of queries just return fluff, advertorials, and sponsored content.

                                                                                                                                                I don’t see any possibility for these drain-circling processes to be interrupted. I think both strategies will become increasingly pernicious along with more comprehensive tracking/profiling and a stronger push towards mobile apps.

                                                                                                                                                My hope is that we’ll see a commensurate increase in willingness to pay for content and services that aren’t covered in garbage. I think we see that to some extent already, the question is how broadly it will spread.

                                                                                                                                                1. 2

                                                                                                                                                  My guess is that there are limits on how far advertising can push the paid-content model. Beyond everything else, web pages (and sites) need to attract attention. A lot of paid content that serves advertisers is not likely to be all that compelling, so it simply won’t draw all that many pageviews compared to more interesting content.

                                                                                                                                                  Sites can mix compelling organic content with paid advertising content to some degree, but I think that most sites will wind up with HTML (and Javascript) where the adblockers can strip the paid advertising content out. They won’t be able to on big sites that do custom integration of advertising and content inside their CMS backend, but a lot of sites aren’t going to be able to go that far.

                                                                                                                                                  (My understanding is that modern ad networks work in large part by integrating the core page content and the added advertising in the browser itself, through mechanisms like Javascript, embedded iframes, and images loaded from outside domains. Doing the integration in the browser requires markers in the HTML and so on, which adblockers can see and act on. If you integrate ads into the HTML in your backend you don’t need these giveaway markers, but you have to have a backend that can talk to ad networks, pull in the ads, stuff them into your HTML, etc etc. Big sites can afford to put together such backends and have enough pageviews to get ad networks to talk to them this way; smaller sites are probably likely to lack both the resources and the influence, so will be left with the ‘add this to your HTML’ approach to serving ads.)

                                                                                                                                                  1. 1

                                                                                                                                                    I think you’re right about the need to attract and hold attention. It seems like a site needs to maintain a very high volume of real content in order to make the paid stuff tolerable. I disagree about the technical limitation, though. You could probably cover most of those smaller sites with a WordPress plugin (WordPress supposedly powers 25% of the Internet).

                                                                                                                                                    The main limitation is economic. A paid post is much more expensive to produce than a banner ad and whereas a banner ad can be targeted at broad categories of sites and users, a paid post has to more or less fit within the narrow topic of the site that hosts it.

                                                                                                                                                    1. 1

                                                                                                                                                      When you say “25% of the Internet” you need to be more precise because that is too vague a phrase to be meaningful. It could be 25% of: 1) bandwidth 2) unique domains 3) page views 4) time spent …

                                                                                                                                                      1. 1

                                                                                                                                                        https://w3techs.com/blog/entry/wordpress-powers-25-percent-of-all-websites

                                                                                                                                                        We do count both the self-hosted, open source version of WordPress which can be downloaded at WordPress.org, and we also count WordPress sites hosted at WordPress.com or elsewhere. However, we count the hosted sites only if they are reachable via their own domain (not only as subdomain of wordpress.com), and they must qualify like all other sites in our surveys by getting enough visitors on that separate domain to make it into the top 10 million Alexa sites. As a result, the vast majority of the millions of blogs at WordPress.com are not counted. Only 1.25% of the WordPress sites in our surveys are hosted by Automattic at WordPress.com.

                                                                                                                                                        1. 1

                                                                                                                                                          Looks like they are counting unique domains, which means 25% isn’t as impressive as it sounds because the overwhelming majority of those get very little traffic.

                                                                                                                                                          1. 2

                                                                                                                                                            Right, but I was specifically commenting on the technical viability of back-end integration with a large number of small sites.

                                                                                                                                              1. 3

                                                                                                                                                This ID is then used to deliver targeted ads and track users across the web.

                                                                                                                                                This is wrong.

                                                                                                                                                Most advertisers don’t target ads on desktop using anything other than plain old HTTP cookies. There are lots of reasons for this, but they largely boil down to (a) they don’t have to, and (b) they don’t want to invade your privacy either.

                                                                                                                                                The reason an advertiser wants to collect this is so that they can pay more money for advertising, which allows publishers to make money with quality content that keeps users coming back.

                                                                                                                                                Meanwhile, a publisher whose traffic is overwhelmingly (say, 99%) Windows 7 machines that all have Arial Nova is probably running fraud.

                                                                                                                                                Ad fraud pushes the price of advertising down, which doesn’t hurt ad networks like Google, or even the biggest online advertisers, but it does hurt publishers. It means that a website owner either has to change their content to appeal to a wider audience (more traffic), or they need more ads.

                                                                                                                                                I wonder how many people advocate this kind of privacy-focused browsing in exchange for more fake news and lower quality content with their eyes wide open?

                                                                                                                                                1. 5

                                                                                                                                                  Most advertisers don’t target ads on desktop using anything other than plain old HTTP cookies.

                                                                                                                                                  I don’t doubt this but do you know where one could find statistics about this? I wonder what portion of advertisers is this true about. To what degree is this true for mobile as well? Is fingerprinting a growing trend? Also, personally I’m happy to see advertisers that use fingerprinting thwarted, even if they are a minority.

                                                                                                                                                  In any case there is virtually no way for consumers to audit their data footprint apart from preventing the collection of the data in the first place. Additionally, if data is collected, but not used for targeting today, it still has the potential for being used for targeting tomorrow, or else being resold and shared by a company who does targeting. I’m not sure how it makes sense for someone concerned about their data to give faceless companies the benefit of the doubt, advertising companies in particular.

                                                                                                                                                  The promise of getting better content by indirectly paying publishers with my publishing data is unconvincing to me personally. In general, I find that the degree to which a site derives its revenue from advertising is inversely proportional to its quality.

                                                                                                                                                  1. 3

                                                                                                                                                    I don’t doubt this but do you know where one could find statistics about this? I wonder what portion of advertisers is this true about.

                                                                                                                                                    It’s almost 100%. It’s certainly 100% of any big (national) advertiser. Every ad exchange I’m aware of requires the advertiser fill out a questionnaire that says they won’t use things like E-Tag, or evercookies and flash cookies, and so on. Google use this language: Flash cookies and other locally shared object (LSO) technologies are not allowed on Ad Exchange.

                                                                                                                                                    To what degree is this true for mobile as well?

                                                                                                                                                    It’s almost zero. Mobile RTB uses IP address and user agent – which works because mobile apps set the user agent to include the app name. Google and Facebook will sync this data on their platforms to enable cross-device targeting, but it’s still not very sophisticated.

                                                                                                                                                    Is fingerprinting a growing trend?

                                                                                                                                                    Not for ad targeting. Yes for ad fraud.

                                                                                                                                                    Big advertisers can measure the effectiveness of a sophisticated marketing campaign over the course of 6-12 months, and ad fraud is one of the biggest predictors of voidage, so it follows that fingerprinting is valuable insofar as it can detect ad fraud.

                                                                                                                                                    Also, personally I’m happy to see advertisers that use fingerprinting thwarted, even if they are a minority.

                                                                                                                                                    Even though it means more fake news and lower quality content?

Set aside for a moment whether you think that’s likely: if you’re willing to make that trade, then it’s irrelevant; if not, then see below.

                                                                                                                                                    In any case there is virtually no way for consumers to audit their data footprint apart from preventing the collection of the data in the first place.

                                                                                                                                                    That’s not true. Every data provider makes it possible to get this information, sometimes in a very friendly format. For example, here’s BlueKai’s information about you.

                                                                                                                                                    I think advertisers would be happy to put this information wherever you want, but it’s proven very difficult to have a productive conversation with privacy advocates. They seem more interested in short-term gains rather than discussing the long-term effects of their positions.

The promise of getting better content by indirectly paying publishers with my browsing data is unconvincing to me personally. In general, I find that the degree to which a site derives its revenue from advertising is inversely proportional to its quality.

Right now, ESPN can get $5-9 per thousand users per ad if they sell demographic data, or $1-2 without. These numbers are typical, but a small site can’t command these prices directly since the sale of this traffic is logistically difficult. If we make it easier, it should be evident that smaller sites will be able to 5-10x their revenue, but the problem is: how do we make it easier for them, without making it easier for ad fraud?
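
To put illustrative numbers on that, here is the arithmetic for a hypothetical small site, using the CPM ranges above; none of these figures are real measurements.

```typescript
// Back-of-the-envelope revenue for a hypothetical small publisher, using the
// CPM ranges quoted above. Every number here is assumed, not measured.
const monthlyPageviews = 500_000;
const adsPerPage = 2;
const impressions = monthlyPageviews * adsPerPage; // 1,000,000

const cpmWithoutData = 1.5; // $ per 1,000 impressions, undifferentiated traffic
const cpmWithData = 8.0;    // $ per 1,000 impressions, sold with demographic data

const revenueWithout = (impressions / 1000) * cpmWithoutData; // $1,500
const revenueWith = (impressions / 1000) * cpmWithData;       // $8,000

console.log(`without audience data: $${revenueWithout.toFixed(0)}/month`);
console.log(`with audience data:    $${revenueWith.toFixed(0)}/month`);
console.log(`multiple: ${(revenueWith / revenueWithout).toFixed(1)}x`); // 5.3x
```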

                                                                                                                                                    Fingerprinting can help tremendously, because it gives us a way to ask what is (technologically) normal. If we defeat it, how exactly are we supposed to valuate this traffic?

                                                                                                                                                    1. 6

                                                                                                                                                      Thanks for responding.

                                                                                                                                                      Google use this language: Flash cookies and other locally shared object (LSO) technologies are not allowed on Ad Exchange.

                                                                                                                                                      Wait, I was asking about fingerprinting. I see that Flash cookies are not allowed. But fingerprinting refers to determination of identity by aggregating multiple sources of data, with or without Flash supercookies - user agent, canvas, font (per the article), battery API, WebRTC, etc. I might be missing something but I don’t see those practices forbidden in the page you linked.
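
To be concrete about what I mean, here is a browser-side sketch that aggregates a few of those signals into one identifier. It is purely illustrative (my own sketch, not any ad network’s code) and covers only a subset of the signals listed above.

```typescript
// Runs in a browser (secure context needed for crypto.subtle). No single
// signal identifies a user; the combination frequently narrows to one device.
async function fingerprint(): Promise<string> {
  // Canvas rendering differs across GPUs, fonts, and anti-aliasing settings.
  const canvas = document.createElement("canvas");
  const ctx = canvas.getContext("2d");
  let canvasData = "";
  if (ctx) {
    ctx.font = "16px Arial";
    ctx.fillText("fingerprint-test", 2, 20);
    canvasData = canvas.toDataURL();
  }

  const signals = [
    navigator.userAgent,
    navigator.language,
    `${screen.width}x${screen.height}x${screen.colorDepth}`,
    String(new Date().getTimezoneOffset()),
    String(navigator.hardwareConcurrency),
    canvasData,
  ].join("|");

  // Hash the concatenation so only the derived identifier needs to be stored.
  const digest = await crypto.subtle.digest("SHA-256", new TextEncoder().encode(signals));
  return Array.from(new Uint8Array(digest))
    .map((b) => b.toString(16).padStart(2, "0"))
    .join("");
}

fingerprint().then((id) => console.log("fingerprint:", id));
```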

                                                                                                                                                      Mobile RTB uses IP address and user agent

                                                                                                                                                      AT&T, last I looked, requires you to opt-out of using your data for targeted ads. Presumably because of this they are able to tie all your internet traffic to your profile. I have no idea what happens to this data or how to audit it. Also historically ISPs have tested placing supercookies in http request headers. Based on both of these examples I am under the impression that mobile tracking is more sophisticated than just dumb aggregation by IP address, user agent, and mobile app name.

                                                                                                                                                      Even though it means more fake news and lower quality content?

                                                                                                                                                      I’m quite skeptical of the argument that advertising revenue drives quality content. Rather it seems like the opposite. “Fake news” and clickbait are both examples of this. It’s the plenitude of advertising dollars that drives these practices, not the opposite.

                                                                                                                                                      For example, here’s BlueKai’s information about you.

                                                                                                                                                      A customized hosts file and ad-blocker makes it very difficult for me to open that page :) When I was finally able to do so, it didn’t show me any data, so I visited the sports site they linked to, and came back, and it said that I am located in the country I am indeed located in :)

                                                                                                                                                      But browser fingerprinting is based on collection of low-level data such as I mentioned. I didn’t see in that page any of the data points used in fingerprinting - I didn’t even see my IP address, which presumably they were using for geolocation. My comment was “there is virtually no way for consumers to audit their data footprint” and I still would say this is true.

Additionally, browser fingerprinting is a powerful technology that can be used to construct user profiles after the fact. Let’s say I collect a database of data points from user browsing sessions. Based on these data points I can run analysis and, within a certain degree of likelihood, link data points from heterogeneous sessions together and identify them as coming from the same user. Even if this is beyond the technical sophistication of current ad networks, that doesn’t mitigate the risk that these sorts of queries and analyses will become commonplace in a few years’ time, at which point they could be run retroactively on historical data. Once this data is collected and stored, my profile exists “virtually” even if it doesn’t exist “actually.” As a consumer visiting BlueKai’s website, I’m not able to see what data they have about me - only some gross aggregation that could retroactively change with the use of more sophisticated methods.
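
As a toy illustration of the kind of retroactive linking I mean (the attributes, the sample data, and the threshold are all made up, not anything a real network necessarily uses):

```typescript
// Toy record linkage: sessions with no shared cookie are linked when their
// stored fingerprint attributes are similar enough. Illustrative only.
interface Session {
  id: string;
  attrs: Record<string, string>; // e.g. userAgent, timezone, screen, fontsHash
}

// Fraction of attribute keys on which two sessions agree.
function similarity(a: Session, b: Session): number {
  const keys = Object.keys(a.attrs);
  const matches = keys.filter((k) => a.attrs[k] === b.attrs[k]).length;
  return keys.length === 0 ? 0 : matches / keys.length;
}

// Link every pair of stored sessions whose similarity clears a threshold.
function linkSessions(sessions: Session[], threshold = 0.75): [string, string][] {
  const links: [string, string][] = [];
  for (let i = 0; i < sessions.length; i++) {
    for (let j = i + 1; j < sessions.length; j++) {
      if (similarity(sessions[i], sessions[j]) >= threshold) {
        links.push([sessions[i].id, sessions[j].id]);
      }
    }
  }
  return links;
}

const stored: Session[] = [
  { id: "s1", attrs: { userAgent: "UA-X", timezone: "-300", screen: "1440x900", fontsHash: "abc" } },
  { id: "s2", attrs: { userAgent: "UA-X", timezone: "-300", screen: "1440x900", fontsHash: "abc" } },
  { id: "s3", attrs: { userAgent: "UA-Y", timezone: "60", screen: "1920x1080", fontsHash: "def" } },
];

console.log(linkSessions(stored)); // [["s1","s2"]] - probably the same user
```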

                                                                                                                                                      I think advertisers would be happy to put this information wherever you want, but it’s proven very difficult to have a productive conversation with privacy advocates.

                                                                                                                                                      You make it sound as if advertisers are working in good faith. Yet requiring opt-out rather than opt-in for tracking is a dark pattern. To me this seems like glaring evidence of something other than good faith. Additionally, as far as I can tell, privacy laws are a major impetus here for much of the transparency we do see in the advertising industry.

                                                                                                                                                      [T]he sale of this traffic is logistically difficult. If we make it easier, it should be evident that smaller sites will be able to 5-10x their revenue, but the problem is: how do we make it easier for them, without making it easier for ad fraud?

                                                                                                                                                      Personally I’m not concerned about the revenue of large or small sites from advertising dollars. I view advertising as a parasitic industry that drives content quality down while building up opaque databases about people - a net negative. Something like, say, basic income seems like a much better solution for small, independent content creators than advertising with its numerous downsides.

                                                                                                                                                      1. 3

                                                                                                                                                        I might be missing something but I don’t see those practices forbidden in the page you linked.

You’re not going to find it on that page, but it is linked to in industry guidance, such as youronlinechoices.com.

                                                                                                                                                        Here’s another page. It specifically mentions more “fingerprinting” techniques and keywords that I think you’re scanning for.

                                                                                                                                                        You might try learning more about this.

                                                                                                                                                        AT&T, last I looked, requires you to opt-out of using your data for targeted ads. Presumably because of this they are able to tie all your internet traffic to your profile. I have no idea what happens to this data or how to audit it. Also historically ISPs have tested placing supercookies in http request headers. Based on both of these examples I am under the impression that mobile tracking is more sophisticated than just dumb aggregation by IP address, user agent, and mobile app name.

You might look again. It’s not much more sophisticated than that, because none of those things worked very well: Verizon, AT&T, and anyone else can include whatever they want in the header; however, participants in this space can’t easily exchange that information with their partners, so it isn’t as useful as they might hope.
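
For what it’s worth, the header mechanism was as blunt as it sounds. Here is a sketch of what the receiving side could do with such a header; the header names are made up (X-UIDH was the widely reported Verizon example), and this is not any real participant’s code.

```typescript
// Sketch of what a downstream server could do with a carrier-injected header.
// Header names here are illustrative, not a list of real deployments.
import { createServer } from "node:http";

const CARRIER_HEADERS = ["x-uidh", "x-carrier-uid"]; // hypothetical list

const server = createServer((req, res) => {
  // Node lowercases incoming header names.
  const injected = CARRIER_HEADERS.map((h) => req.headers[h]).find(Boolean);

  if (injected) {
    // The value is opaque to everyone except the carrier and its partners,
    // which is exactly why it was hard to exchange with the rest of the market.
    console.log(`carrier id seen: ${injected}`);
  }

  res.writeHead(200, { "content-type": "text/plain" });
  res.end(injected ? "tracked\n" : "no carrier header\n");
});

server.listen(8080, () => console.log("listening on :8080"));
```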

                                                                                                                                                        Here’s the documentation for BlueKai’s Mobile ID which is indeed, just user agent and IP address.

Based on these data points I can run analysis and, within a certain degree of likelihood, link data points from heterogeneous sessions together and identify them as coming from the same user.

                                                                                                                                                        However you haven’t explained why you think this is bad. That’s my question.

Knowing that a user is, within a certain degree of likelihood, interested in some topic X allows advertising to be sold to a marketer who wants to reach people interested in X. Instead of estimating that 50% of people are interested in the topic and buying double the audience (at half the budget per person), they can be more efficient. It also means that publishers can specialise their content and tailor it to specific (small) interests.

Do you think that the Internet should only have content that is (a) paid for directly (by credit card), or (b) of interest to at least 10% of Americans? Targeting makes it possible for sites that interest as little as 0.3% of Americans to still earn their operators a NYC-liveable wage!
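
Purely as illustration of both points, with every number assumed rather than measured:

```typescript
// Illustrative arithmetic for the two claims above. Every figure is assumed.
const audienceWanted = 1_000_000; // people the marketer wants to reach
const interestRateBroad = 0.5;    // without targeting, only half the buy is relevant

// Untargeted: buy enough impressions that the expected relevant reach is met.
const untargetedBuy = audienceWanted / interestRateBroad; // 2,000,000 impressions
const targetedBuy = audienceWanted;                       // 1,000,000 impressions
console.log(`impressions bought without targeting: ${untargetedBuy}`);
console.log(`impressions bought with targeting:    ${targetedBuy}`);

// A niche site: 0.3% of roughly 250M US adults, a handful of ad impressions
// per visitor per month, sold at a data-backed CPM (see the $5-9 range above).
const usAdults = 250_000_000;
const nicheAudience = usAdults * 0.003;  // 750,000 people
const impressionsPerVisitor = 10;        // assumed
const cpm = 7;                           // $ per 1,000 impressions, assumed
const monthlyRevenue = ((nicheAudience * impressionsPerVisitor) / 1000) * cpm;
console.log(`niche site monthly ad revenue: $${monthlyRevenue.toFixed(0)}`); // $52,500
```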

                                                                                                                                                        You make it sound as if advertisers are working in good faith. Yet requiring opt-out rather than opt-in for tracking is a dark pattern. To me this seems like glaring evidence of something other than good faith. Additionally, as far as I can tell, privacy laws are a major impetus here for much of the transparency we do see in the advertising industry.

                                                                                                                                                        Yes, because I think most of them are. Especially the big ones. And I think this is a bit hyperbolic.

                                                                                                                                                        What terrible thing are you trying to prohibit? I don’t even know what you mean by “dark pattern”.

                                                                                                                                                        It isn’t what people expect? People are watching a television program that is sponsored by marketers who sell products that may be of interest to people who are interested in that television program, and I can’t imagine what other transaction people expect could be going on. Or should be.

                                                                                                                                                        Personally I’m not concerned about the revenue of large or small sites from advertising dollars. I view advertising as a parasitic industry that drives content quality down while building up opaque databases about people - a net negative. Something like, say, basic income seems like a much better solution for small, independent content creators than advertising with its numerous downsides.

                                                                                                                                                        I don’t think wishing for things is very productive. Do you think by blocking enough ads that Verizon Wireless will send their lobbyists into congress to push for a universal basic income? What exactly are you proposing we do?

                                                                                                                                                      2. 7

                                                                                                                                                        The idea that fake news arises when people take measures to protect their privacy only makes sense if you ignore the entire history of media and advertising.

                                                                                                                                                    2. 8

                                                                                                                                                      I wonder how many people advocate this kind of privacy-focused browsing in exchange for more fake news and lower quality content with their eyes wide open?

                                                                                                                                                      Your argument here seems to be that when ad rates fall too low publishers will simply go out of business unless they publish fake news. But that would be fine with me because then I could accurately ascertain the legitimacy of a site simply by checking to see if it has ads. If it does, it’s fake news, if it doesn’t, then I might pay attention because it’s run either by a company with a real business model or a non-profit.

                                                                                                                                                      I simply don’t accept the premise that advertising is necessary for good content. In fact, I think advertising actively discourages good content. So I’m happy to see it die.

                                                                                                                                                      1. 1

                                                                                                                                                        Your argument here seems to be that when ad rates fall too low publishers will simply go out of business unless they publish fake news. But that would be fine with me.

                                                                                                                                                        Wow.

                                                                                                                                                        That honestly surprises me.

                                                                                                                                                        Thanks for your opinion though.

                                                                                                                                                        1. 4

                                                                                                                                                          Well, it might be less surprising if you hadn’t clipped my quote where you did. Basically, if a publisher can’t convince people to pay for its content (in either a for-profit or non-profit manner) then the content is probably not all that compelling or the publisher is structured in an inefficient manner. I’m 100% positive that there are exceptions, though I can’t personally think of any, and I’d have to consider those separately.

                                                                                                                                                          By the way, I pay for a number of subscriptions to both physical magazines (none of which publish ads) and web sites (which also don’t publish ads). Most are non-profits. I find that I have no trouble finding interesting, thought-provoking things to read. In fact, there’s more content than I can even consume reasonably.

                                                                                                                                                          1. 2

                                                                                                                                                            I can’t personally think of any [exceptions], and I’d have to consider those separately.

                                                                                                                                                            You have a twitter account.

                                                                                                                                                            By the way, I pay for a number of subscriptions to both physical magazines (none of which publish ads) and web sites (which also don’t publish ads). Most are non-profits. I find that I have no trouble finding interesting, thought-provoking things to read. In fact, there’s more content than I can even consume reasonably.

                                                                                                                                                            And of course, you have a search engine you pay-per-search, the google fonts you pay for on your website, the creative-commons advertisement at the bottom of your website, the fact you use Ubuntu which is advertising supported, and so on.

                                                                                                                                                            Sponsored content is pervasive, and it’s a huge part of what (mentally) ratchets our prices and costs so low.

                                                                                                                                                            Well, it might be less surprising if you hadn’t clipped my quote where you did.

                                                                                                                                                            You either answered the question I asked (which ended there), or you’re answering a different question that I didn’t ask, and that I’m not really interested in talking about (how you as a reader detect a fake site).

                                                                                                                                                            Do you think it matters? You don’t seem to think that targeted ads for less fake news and crap content is a fair trade, because, in your words: There is “more content than I can even consume reasonably”. Maybe you just weren’t aware that twitter or Google et al are only able to produce the content and services you consume because of advertising.

                                                                                                                                                            Or maybe you don’t care – if Google give services away for ads that you can block, then they’re suckers, and you’re patting yourself on the back for being so smart, but you’re also facilitating the fake news and crap content that is bombarding human beings that aren’t so smart as you.

                                                                                                                                                            Where exactly do you think this kind of conversation can go?

                                                                                                                                                            1. 2

                                                                                                                                                              I’m enjoying your posts on this topic, thanks :).

                                                                                                                                                              Could you elaborate on how using google services with an adblocker is facilitating fake news and crap content?

                                                                                                                                                              1. 1

                                                                                                                                                                Could you elaborate on how using google services with an adblocker is facilitating fake news and crap content?

                                                                                                                                                                The biggest brands have advertising budgets that are relatively fixed. Their goal is to get a certain [reach] from a given medium, and the planner/buyer is going to do this with the fewest number of transactions possible. Exactly how they go about doing this varies, but because the budgets are fixed, if you block an ad, then Google simply doesn’t sell you, so the publisher misses out, but Google receives x% of that entire budget, so they are (relatively) unaffected.

Meanwhile, a lot of these fake news/crap content sites will purchase the traffic – “legitimately” by purchasing search, or less legitimately in the form of injection/toolbar users or even pops. That less-legitimate traffic is particularly important, and it generally has some technological sameness (like having the same fonts, or the same plugin installed across all of it), but you’re not going to these sites anyway, so if 20% of users have an ad blocker, that’s only 20% of real traffic, not this traffic. That means that when the advertiser spends their money, more of it moves to these vehicles instead of sponsoring real sites.

                                                                                                                                                                The advertiser will (eventually) check the effectiveness of their campaign, and see that this media buy didn’t get them very/any benefit, so the ROI on their spend will be poor. They will want to count the users – if some of the traffic is invalid and can be excluded, then the ROI on the real traffic might be good (or at least close to their predictions), but if they cannot exclude any traffic, for example because we have stopped tracking/analysis of this traffic, then the advertiser simply pushes the price of the audience down.
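
Rough numbers, purely to show the mechanism (every figure below is made up):

```typescript
// Toy model of the mechanism above: a fixed budget, some real users blocking
// ads, and fraud traffic that never blocks and never converts. All numbers
// are invented to show the shape of the effect, not to measure it.
const budget = 100_000;             // advertiser's fixed spend, $
const realImpressions = 40_000_000;
const fraudImpressions = 10_000_000;
const adBlockRate = 0.2;            // 20% of real users block ads; bots block nothing
const conversionRate = 0.001;       // share of real impressions that lead to a sale

// Impressions actually available for the budget to buy.
const sellableReal = realImpressions * (1 - adBlockRate); // 32,000,000
const sellable = sellableReal + fraudImpressions;         // 42,000,000

// The fixed budget still gets spent; it just spreads over what is sellable,
// so blocking real users raises the share landing on fraud inventory
// (20.0% with no blocking, 23.8% here).
const fraudShare = fraudImpressions / sellable;
console.log(`share of spend captured by fraud: ${(fraudShare * 100).toFixed(1)}%`);

// Only real impressions convert. If fraud can be excluded from measurement,
// cost per conversion on the real traffic still looks reasonable; if it
// cannot, the whole audience looks expensive and the price gets pushed down.
const conversions = sellableReal * conversionRate;                  // 32,000
const cpaFraudIncluded = budget / conversions;                      // $3.13
const cpaFraudExcluded = (budget * (1 - fraudShare)) / conversions; // $2.38
console.log(`cost per conversion, fraud not excluded: $${cpaFraudIncluded.toFixed(2)}`);
console.log(`cost per conversion, fraud excluded:     $${cpaFraudExcluded.toFixed(2)}`);
```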

                                                                                                                                                                Does that make sense?

                                                                                                                                                              2. 2

Creative Commons is ad-supported? Really? Seems unlikely. Twitter I would happily do without if I was asked to pay anything more than nothing. Google Web Fonts, again, I would never pay more than nothing for anyway. Ubuntu is not ad-supported in any meaningful way; they barf out Amazon links, but the feature is turned off by default these days AFAIK.

                                                                                                                                                                A lot of this is basically an economic question. I consume these things because they are free. If they cost more than zero I wouldn’t consume most of them because they are worth practically nothing to me. In the case of Twitter, I consume it because other people I find interesting consume it. But most of those people wouldn’t pay for it (because they are like me), so if Twitter charged those people would leave, and I would leave.

                                                                                                                                                                If Google offered me a way to pay not to see any ads and to not be tracked I would seriously consider it. As it is, they only offer a way to pay to see fewer ads. Additionally, so long as ad networks are regularly tricked into serving malware, I view ad-blockers as critical security software. Fix your industry and you might find people like me more sympathetic.

                                                                                                                                                                Or maybe you don’t care – if Google give services away for ads that you can block, then they’re suckers, and you’re patting yourself on the back for being so smart, but you’re also facilitating the fake news and crap content that is bombarding human beings that aren’t so smart as you.

                                                                                                                                                                Your argument about “fake news” just isn’t believable, especially when so much “fake” content is published by “legitimate” outlets. Fake news is really more a result of competition (capitalism) in the information industry than anything else. When companies compete for eyeballs (whether they’re paid eyeballs or ad impressions), and there are no rules about telling the truth, there will be a tendency to report whatever “sells”. Your argument is essentially the equivalent of those commercials that told kids that smoking pot supported terrorists.

                                                                                                                                                                1. 1

                                                                                                                                                                  Additionally, so long as ad networks are regularly tricked into serving malware, I view ad-blockers as critical security software.

                                                                                                                                                                  I’m not arguing that you should disable your ad blocker. I simply think you’re facilitating what will be seen as a huge mistake by confusing this issue with the tracking and privacy issues.

                                                                                                                                                                  I consume these things because they are free.

                                                                                                                                                                  You’re wrong. Google is able to permit redistribution of these fonts and make self-driving cars because of advertising.

What you’re proposing is like not vaccinating your kids. It has real effects that don’t hit you immediately, but it’s going to affect everyone else in your community.

                                                                                                                                                                  Creative Commons is ad-supported? Really? Seems unlikely.

                                                                                                                                                                  Straw man? Really? Creative Commons receive donations and spend the money on advertising. You’re the advertisement.

                                                                                                                                                                  Your argument is essentially the equivalent of those commercials that told kids that smoking pot supported terrorists.

                                                                                                                                                                  Sigh. Smoking pot does support terrorists. So what? It also supports roads and skiing, health programs, and safety programs. Just because marijuana legalisation is probably a net positive doesn’t mean that there aren’t gross negatives. This is just your myopia, and while you complain about “dark patterns” you’re going into an arms race where the collateral damage is malware and representative democracy.

                                                                                                                                                                  But maybe there’s another way: The bulk of the advertising spend isn’t interested in serving malware either. Learn about the market and find a way for everyone close to the money to get a balloon, while still getting what you want, and maybe you’ll actually get what you want in the long run as well.

                                                                                                                                                                  1. 3

                                                                                                                                                                    You’re wrong. Google is able to permit redistribution of these fonts and make self-driving cars because of advertising.

                                                                                                                                                                    No, I’m not wrong. You seem to think I’m an idiot and don’t realize that someone has to pay for everything. They are free to me. If Google started charging more than I was willing to pay for fonts I’d just stop using them. That’s how economic decision making works. Google Web Fonts is worth maybe $1/year to me. Right now, the price to me is zero. I use them because my surplus there is still positive. But I’m not going to make it easy for Google to track me just because they’ve got a tricky business model. If they feel it’s not worth their while to host the fonts, fine, let them stop. Or let them charge and people will decide accordingly.

                                                                                                                                                                    Also, why do we need Google to provide fonts and self-driving cars in the first place? If you’re into markets, and you seem like the type who is, you can correct me if I’m wrong, shouldn’t the market provide those things if they provide value to society (and therefore people are willing to pay)? Now you sound like a late-90s record executive complaining about MP3s. No one owes anyone a business model. If the only way you can make money is by doing something that I find creepy and potentially dangerous then I have no sympathy.

                                                                                                                                                                    Straw man? Really? Creative Commons receive donations and spend the money on advertising. You’re the advertisement.

                                                                                                                                                                    You brought up Creative Commons. Not me. I have no idea what you were trying to demonstrate by pointing out that I link to the CC web site from my web site. Last I checked that was basically the killer feature of the web.

                                                                                                                                                                    This is just your myopia, and while you complain about “dark patterns” you’re going into an arms race where the collateral damage is malware and representative democracy.

                                                                                                                                                                    There are a ton of things I believe are far, far more detrimental to representative democracy. Real existing capitalism itself, for one.

                                                                                                                                                                    But maybe there’s another way: The bulk of the advertising spend isn’t interested in serving malware either. Learn about the market and find a way for everyone close to the money to get a balloon, while still getting what you want, and maybe you’ll actually get what you want in the long run as well.

                                                                                                                                                                    Yep, sounds like something the ad industry should focus on. Until I feel I can trust them, however, I’m blocking everything I can reasonably block.

                                                                                                                                                                    1. 1

                                                                                                                                                                      No, I’m not wrong.

                                                                                                                                                                      Yes. You are wrong. Full stop.

                                                                                                                                                                      You seem to think I’m an idiot and don’t realize that someone has to pay for everything. They are free to me.

                                                                                                                                                                      Well, I don’t think you’re an idiot for that: You clearly realise someone has to pay for it. You just don’t think it’s you. You think you’re somehow hurting google or “the advertising industry” by using an ad blocker, and you’re completely wrong about that.

                                                                                                                                                                      It’s unclear if you know who it’s actually hurting and who it’s actually helping. I think you probably do by now, but you’re so angry you’re growing belligerent; Nobody called you an idiot, so calm down.

                                                                                                                                                                      No one owes anyone a business model. If the only way you can make money is by doing something that I find creepy and potentially dangerous then I have no sympathy.

                                                                                                                                                                      That’s true, but you’re creating more opportunity for more creeping and greater danger, rather than actually reducing the amount of creepy and dangerous things.

                                                                                                                                                                      I don’t know if you’re doing this because you’re ignorant (which I hope), or you simply lack empathy for the people who are being creeped on, and actually being put at risk of that greater danger.

                                                                                                                                                                      I link to the CC web site from my web site. Last I checked that was basically the killer feature of the web.

                                                                                                                                                                      That’s exactly what an advertisement is: The killer feature of the web. You don’t even realize your website has advertisements on it, because when I say “CC advertises” you somehow thought that I meant that “Creative Commons is ad supported”. Or you were trolling. Again, I’m not quite sure.

Anyway, that’s how big this marketplace is. You can’t block advertising; you can only block a certain kind of technology that makes advertising more efficient for publishers. You know that because you’re not an idiot – the question isn’t even whether that efficiency is worth the other risks.

                                                                                                                                                                      The billion dollar question is whether there’s a way to get publishers what they want (the efficiency) without the risks.