1. 3

    Interesting perspective.

    Sounds a little like FUD when it talks about press and blogs, since press freedom and free speech are part of the Constitutions of many European states. And you know, Constitutions win over ordinary laws.

    The point of the GDPR is to protect people, not to punish companies for their misbehavior: their users should punish them if they want to.

    Indeed every company, small and large, is welcome in Europe, as long as they obey the law and properly pay taxes.

    It’s nice to read that Google and Facebook are going to comply, and it’s sad to read that other U.S. startups might have problems with the rights of their European users.

    But all in all, I think the GDPR could be a good starting point for any state that cares about the privacy of its people more than the private profits of its companies.

    1. 4

      Constitutions do not necessarily win over ordinary laws. In e.g. the Netherlands the constitution (Grondwet) does not in fact have force of law, but every new law passed is supposed to be checked against the constitution by the Eerste Kamer. This leads to the interesting situation that several constitutional rights can only be defended by appealing to EU laws that do have force of law.

      So, yeah, this is slightly FUDdy, but not entirely, and the concerns are valid, especially in certain Eastern European states; just look at Freedom House’s reports on Hungary and Poland.

      1. 1

        I said “many European states” exactly because I know that exceptions exist but I’m not an expert… so thanks for pointing them out.

        Still, the point of GDPR is to protect people.

        Can it be improved? Surely!
        How? You could, for example, impose full data tracking: if someone sends you an email or calls your phone for marketing, they must be able to tell you exactly how they got your address or phone number, exposing the full path of your data from the original consent to the call or mail.
        AFAIK, this is not yet part of the GDPR, and it’s a pity.
        This way you can write to everybody in the chain and ask them to remove your data and stop sharing it.
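
        Just to make that idea concrete, here is a rough, hypothetical sketch of what such a provenance record could look like. Nothing here is prescribed by the GDPR; all the names, fields, and values below are invented for illustration.

        ```typescript
        // Hypothetical sketch only: one possible shape for a "full data tracking"
        // record. Nothing here is mandated by the GDPR; all names are invented.

        interface ProvenanceStep {
          controller: string;    // who held the data at this step
          obtainedFrom: string;  // previous holder, or "data subject" for the original consent
          legalBasis: "consent" | "contract" | "legitimate interest";
          timestamp: string;     // ISO 8601
        }

        interface MarketingContactDisclosure {
          dataSubject: string;      // e.g. the phone number or email address that was contacted
          chain: ProvenanceStep[];  // full path from the original consent to this call/mail
        }

        const disclosure: MarketingContactDisclosure = {
          dataSubject: "+31 6 12345678",
          chain: [
            { controller: "webshop.example", obtainedFrom: "data subject", legalBasis: "consent", timestamp: "2017-03-01T10:00:00Z" },
            { controller: "list-broker.example", obtainedFrom: "webshop.example", legalBasis: "consent", timestamp: "2017-09-14T08:30:00Z" },
            { controller: "call-center.example", obtainedFrom: "list-broker.example", legalBasis: "consent", timestamp: "2018-01-20T12:00:00Z" },
          ],
        };

        // With the full chain in hand, the data subject can write to every step
        // and ask each holder to erase the data and stop sharing it.
        for (const step of disclosure.chain) {
          console.log(`Erasure request to ${step.controller} (received data from ${step.obtainedFrom})`);
        }
        ```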

        Please consider adding this if you are going to improve on it in the U.S.A.!

    1. 5

      tldr: book promotion

      1. 5

        I suppose, but the books being “promoted” are each over 30 years old. The Psychology of Computer Programming was originally published in 1971. Becoming a Technical Leader was published in 1986.

        1. 2

          What’s more, the books have very much held up. ‘The Psychology of Computer Programming’ is still considered an insightful book on the subject, and is still in print.

        2. 2

          That’s right, but I thought that short text was worth a read and a moment of reflection.

        1. 2

          A teeny thing of note that might give you pause before dismissing the author’s ideas out of hand: he has been thinking and publishing about these kinds of things for quite a while already. See for instance his extensive list of publications.

          Yes, there are obvious flaws to the argument as given. That does not mean the idea as presented is without merit. And I would take the blogpost as stating an idea, and a lead to investigate. He even (helpfully) handwaves over the problems to be solved, stating that they exist.

          1. 2

            There have been more useful proposals to fix TLS CAs, like Convergence, which had code written, would have been more transparent to end users, and still went nowhere.

          1. 12

            This post betrays a fairly simplistic understanding of how automation actually happens.

            In general it’s not the case that a device or program can simply be slotted in to replace a worker, taking over all of that worker’s tasks. The goal of automation (for the employing organization) is to reduce costs, and to do that, there’s no need for the device or program to be able to take over 100% of someone’s work; in general, it’s accomplished in one of two ways:

            • by requiring less time/fewer workers (what used to be called “speedup”)
            • by requiring less skill (the deskilling that Harry Braverman described in Labor and Monopoly Capital)

            An AI will likely not be able to replace us one-for-one, but it doesn’t need to. All our employers want is to be able to hire fewer programmers, cheaper (and more interchangeable) programmers, or both. Arguably, we’ve already made some strides in making it easier to do more with fewer programmers as our tools get better and as we make it possible for our users to do things for themselves that would have once required professional programming (see, for example, Excel). I suspect the main reason this hasn’t yet resulted in a wage crash is that the market for software is still growing faster than our ability to automate our work, but there’s no reason to think that will continue forever. We are generally pretty expensive employees and our employers have every incentive to make us more disposable. To believe that we are immune from this process seems like nothing more than “programmer exceptionalism.”

            1. 8

              Wow, I think this might be the first time I’ve seen Harry Braverman referenced on a mainly tech-oriented forum! I agree this is the right way to look at automation, as a process driven by social and economic pressures that accommodates both technological and non-technological changes (businesses are simultaneously looking for tech to automate tasks, and looking at whether they can change the tasks).

              It’s frustrating, as an AI researcher, that a lot of the debate internal to the AI community (and tech community in general) is so unrooted in any of the existing research on automation. I mean, you don’t have to like Braverman specifically, but most people seem to not have read anything on the subject, from any school of thought or researcher. A lot is just totally off-the-cuff speculation by people who know something about AI and have some kind of impression of how society works (from living in it) and then speculate on how those relate, which is not so satisfying. The fact that even famous people do this (ahem, Elon Musk) probably helps make it socially acceptable. I attended a panel at AAAI-16 that was like that too, billed as a panel on how AI will impact jobs in the next few decades, and the star panelist was… Nick Bostrom. Who is fine if what you want is a philosopher to speculate about the singularity, but not if you want a rigorous discussion about how AI impacts the job market.

              1. 1

                “is so unrooted in any of the existing research on automation.”

                I only see pieces of it here and there. I’ve purposely ignored a lot of such research in economics because it seems they spend more time speculating and modeling from scratch than studying real-world interactions. The field just has too much bullshit like the methodology- and process-oriented side of software research. The kind done by people who don’t write software. ;)

                Do you have links to any online summaries of such research that you believe accurately portrays effects of automation?

                1. 3

                  It’s not really economics per se that I have in mind, more history of technology, history of labor, sociology of work, political economy, STS, etc., fields that study concrete things that happened in the real world and attempt to figure out what happened and why. Braverman’s book mentioned above is one classic example; he comes from a Marxist perspective, of the kind that focuses on analyzing concrete material factors, i.e. how physical machines interact with specific types of workplaces and corporate forms to change production processes and the social/economic relationships in them. Imo, even if you aren’t interested in Marxism as a political project, this approach often ends up less abstract than the kind of mathematical modeling you get in neoclassical economics, but there are plenty of other approaches as well (mostly various kinds of historical or sociological methodologies). I’m not sure there’s a great online summary, which maybe is something to fix. Wikipedia has an article on the technological unemployment debate, but it’s skewed towards books by economists rather than historians/sociologists/STS people.

                  One interesting historical episode is that there was a huge debate on automation’s impact on employment, both quantity and quality of employment, in the late ‘50s and early '60s (e.g., 1, 2, 3, 4, 5). One can argue either way about its relevance, “this is mostly just a rehash of that debate”, “this time is different”, etc., but I’d personally like to read an informed take on why in either case, which I don’t often find in the AI-and-jobs discussions. Not that I’m an expert either, that’s why I’d like to read from someone who is!

                  1. 1

                    Appreciate the reply. It will give me something to think about. :)

              2. 4

                I definitely think the idea is being motivated in certain groups (VCs, silicon valley executives) by “let’s cut labor costs, these programmers are too expensive.” And automation in general is a political economy problem of distribution, sure, and current generation of management has embraced a delusional economic theory that means they don’t understand that lower pay means lack of demand.

                But I also think a lot of “AI will replace jobs” is fantasy. E.g. there was a cycle of going back from robots to humans in the 1990s (can’t find references, alas), and it’s happening again now: https://www.theguardian.com/technology/2016/feb/26/mercedes-benz-robots-people-assembly-lines

                1. 3

                  current generation of management has embraced a delusional economic theory that means they don’t understand that lower pay means lack of demand

                  To a large degree, you can blame the consumer credit bubble (and the ongoing mortgage bubble) for this. The Fordist coupling between wages and demand for products broke apart because people can now spend money they don’t have.

                  We now have a society where people can buy things with money they don’t have, and while prices for typical consumer goods are fairly stable, prices for housing, healthcare and education have gone out of control while wages have been stagnant.

                  The other change is that no employer has the effect on the market that Ford had in 1914. That’s arguably both good and bad. On one hand, decentralization is generally a good thing. On the other, the Fordist argument simply doesn’t apply to a small company where (a) the buyers are usually not the same people as the workers, and (b) a broad-based effect on wage levels (note that Ford’s wage increases lifted the whole market, not just one company) will not occur.

                2. 3

                  All our employers want is to be able to hire fewer programmers, cheaper (and more interchangeable) programmers, or both. […] I suspect the main reason this hasn’t yet resulted in a wage crash is that the market for software is still growing faster than our ability to automate our work, but there’s no reason to think that will continue forever.

                  We have a wage crash already. Wages for high-skill programmers, inflation-adjusted, are nowhere near where they were in the 1990s. Sure, there are a lot of commodity programmers making $120k, and that certainly didn’t exist (even adjusting for inflation) back then, but the top has been absolutely hammered over the past 20 years.

                  Ageism is another form of wage crash. It’s easier to squeeze young people for long hours, and they’re less likely to notice that management is investing nothing in their career growth. Moreover, culling all the old people except for the most successful ones creates an impression, for the young, that the career is more lucrative and rewarding than it actually is.

                  Note of course that most economic “crashes” are actually slow and take place over decades. The slow crash isn’t usually newsworthy but it’s a lot more common. We haven’t had the fast, theatrical type of crash since 2002, but the decline of working conditions (open-plan offices) and infantilization of the craft (Agile Scrum) is a slow wage crash because it floods the market with incompetents, enables age discrimination, and eradicates technical excellence in favor of fungible, marginally qualified workers.

                  I can’t predict whether there will be a fast, theatrical wage crash like what happened in 2002 or in finance in 2009, but I think it’s more likely than not that there has been and will continue to be long-term decline in pay, reputability, and working conditions in corporate programming. I’m also starting to realize that there’s very little that can be done about it. If companies can operate just well enough while running on sub-mediocre, fungible, cheap talent… who would expect anything else?

                  1. 1

                    Your comment leaves off H1-B, offshoring, and the wage-fixing scandal. These have significant effects on pushing down IT wages.

                    1. 1

                      Those are all factors, but I think that the dumbing down of programming is a lot more dangerous than the abuse of the H1-B program or the wage-fixing scandal. It’s not that those aren’t bad, but the commoditization of programming work and the flood of low-talent Scrum programmers are a permanent and ubiquitous threat that we’d still have to deal with even if the H1-B program were fixed.

                      1. 2

                        True on that. It’s been sold as a mechanical process anyone can do rather than a mix of creativity, engineering, and mechanical stuff.

                        1. 2

                          “BASIC is easy to learn, and the language of the future! Millions already use it!”

                          I call this the “idol of accessibility” (I get the feeling that label is taboo): the ease of use by newcomers is valued above all else, especially above engineering reasons. I despise how it justifies worse-is-better through network effects. It is consumeristic in nature and encourages a herd mentality. It leads to a string of tedious posts of the “X is bad because I didn’t learn it in an evening” variety (ahem).

                          I think it is a consideration, but I don’t agree it should be the defining factor in perceived goodness of a tool.

                          However, I risk being seen as a bad person for arguing this, because so many people are trying to make a better life for themselves by switching to tech, and I shouldn’t make it harder.

                  2. 2

                    Especially in software development, much of that automation is happening constantly, and one could even argue that this has resulted in a larger market for developers, precisely because less skill is necessary these days to achieve things quickly. This all comes at a significant cost in hardware requirements, but it has not, in the past half century, led to the demise of programming as a job. It does depress wages, however, as @michaelochurch correctly finds.

                    1. 4

                      There’s a shift of perspective this talk assumes that I think deserves more attention. The questions the author poses at the end all assume the perspective of “you’re a craftsman and your primary concern should be the effect your craft has on the world.” This isn’t what the hacker ethos is. The hacker ethos is about curiosity, and it’s about shaping the computer into what you want it to be. Making something widely-used or popular or all-inclusive is important if you’re trying to sell or distribute your work, but it’s absolutely not intrinsic to what “hacker” means. When I write code outside of company dime, my concern isn’t about what the rest of the world will think about my design decisions, it’s about what I want to accomplish. This is the same POV that led MIT hackers to use Lisp: other, less flexible languages would have seen wider adoption of their work, but that wasn’t the point. This is the author’s fundamental misunderstanding: hacking is an individual pursuit, so of course looking at it through the lens of community and social causes leaves something to be desired. That isn’t what it’s for.

                      Relatedly, I cringed when she said hacker culture “sort of evolved into the tech industry.” I’m sure much of the tech industry would like to think so, but I see tech company attitudes as a perversion of hacker culture rather than an extension of it: the Silicon Valley ethos is just capitalism relieved of the burden of having to work to expand your business (as in: a company like Slack can handle millions of customers with essentially zero capital investment, which is what allows tech companies to expand and pop so easily). This is why ’70s hacker culture flourished primarily at universities, not companies.

                      Unrelatedly, the author blithely writing off human systems as “unknowable” rubbed me the wrong way. A lot of things have been popularly considered unknowable in the past (nature of the stars, genetics, etc), and the track record of these predictions isn’t great. That sort of defeatism just discourages people from actually trying to solve problems.

                      1. 1

                        There is indeed a shift of perspective in the talk that I think needs to be highlighted. However, I think it is not from the perspective of “you’re a craftsman and your primary concern should be the effect your craft has on the world” but more from the perspective of “you are a social being and your actions will have an effect on the world, so it would be good if your ethics reflect that”. There is no person alive who lives in a vacuum, and respecting that fact in choosing one’s actions so as to result in a net-positive impact on your surroundings is valuable, but very hard indeed. The questions the author posed seem to me to be intended as reformulations of the tenets of the hacker ethic, designed to clarify whether and how to apply the ethic to your actions. Unfortunately for us, the questions only point to there being a way of reconsidering the ethic, thereby possibly improving it, but do nothing to actually guide one to such a possibly better ethic, which is what the talk is ostensibly about. Furthermore, the talk does not make a truly convincing case that the ethic itself is flawed.

                        Ultimately, the talk is valuable in my opinion because it invites one to reconsider one’s biases, and to me at least it shed new light on a familiar set of topics.

                      1. 5

                        One way of dealing with this that I’ve learned and find effective is to let the person asking for help do the driving, with you directing. Doing so slows down the debugging itself but greatly expands the learning of both parties. You need to be confident enough to say ‘sorry, I’m interested in what you have tried, but I have no way of judging what you say in terms of the current code you’re working on, so please show me what’s going wrong and how the code looks’.

                        Often, I’ve found, just slowly walking through the code together will highlight the problem by itself, without needing the extra expertise the helper may bring. Plus, having the respect to work at each other’s speed definitely breeds trust and confidence, and helps the two of you work together as a team.

                        1. 2

                          Interesting. I’ve been trained to type diaereses in Dutch, because they’re a basic feature of the language’s spelling (‘ideeën’ is the correct spelling, as is ‘officiëel’, and yes, I’ve encoded the diaereses explicitly in the characters). This, in turn, means that it doesn’t look weird to me at all.

                          Unfortunately, the latest spelling reform has ordained the hyphen as the indicator to be used in compound words, so that e.g. ‘zeeëgel’ is no longer correct, but should be written as zee-egel. Yes, this means that semantics play a part in when to use the diaeresis and when to use the hyphen.

                          1. 1

                            This totally kills my Turing Machine emulator in SED.

                            For those not all that well versed in computability: Tetris happens to be complex enough to allow encoding of a Turing machine, provided one allows an infinitely-high field.

                            1. 1

                              I can’t tell if this is an elaborate troll or a very uninformed but serious article. The fact that this article makes no reference to XML and the technology that exists in that world is baffling.

                              What is the strategy here for parsing incoming documents? Why are HTML-based microformats better than XML with a DTD? This doesn’t make sense to me, but I realize that this idea is “fresh”, and I am being dismissive, so…

                              1. 1

                                The strategy here is that formatting your data as HTML may well be enough when you control the client-side Javascript, and it provides a benefit to the user when you don’t. This is an improvement on using JSON, since with JSON you need to control the client-side Javascript but have no ‘fallback’ for the user. The tricky bit is that you need to parse the data you want out of the HTML provided, but that can be done by leveraging the HTML parser that is in the browser and navigating the (probably disconnected) DOM nodes you get from it (see the sketch below).

                                Combining this with the Accept: header and rendering HTML, JSON and e.g. RDF for objects that your app exposes forces you to think harder about how to expose your objects, and allows you to take advantage of all the work done on e.g. Linked Data (SPARQL comes to mind).
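
                                As a minimal sketch of what that client-side parsing could look like, assuming an invented /products endpoint and invented markup (neither is from the article):

                                ```typescript
                                // Minimal sketch under assumptions: fetch an HTML representation that the server
                                // could also show directly to users, and extract data from it by reusing the
                                // browser's own HTML parser. The URL and the markup shape are invented.

                                interface Product {
                                  name: string;
                                  price: string;
                                }

                                async function fetchProducts(): Promise<Product[]> {
                                  // Ask explicitly for HTML; the same URL could serve JSON or RDF
                                  // to other clients via the Accept header.
                                  const response = await fetch("/products", { headers: { Accept: "text/html" } });
                                  const html = await response.text();

                                  // DOMParser returns a disconnected document we can walk with normal DOM APIs.
                                  const doc = new DOMParser().parseFromString(html, "text/html");

                                  // Assumed markup: <li class="product"><span class="name">…</span><span class="price">…</span></li>
                                  return Array.from(doc.querySelectorAll("li.product")).map((node) => ({
                                    name: node.querySelector(".name")?.textContent?.trim() ?? "",
                                    price: node.querySelector(".price")?.textContent?.trim() ?? "",
                                  }));
                                }

                                // Usage: fetchProducts().then((products) => console.log(products));
                                ```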

                                1. 1

                                  The only benefit I see here is potentially during development, when you’re trying to discover how to utilize an API. And, while that might be OK, you’d almost certainly be better served by building an API on top of existing tooling (say, e.g. Swagger) that gives you benefits like standardized documentation and an API console.

                                  But, I don’t build front end services, so maybe I’m missing something key.

                              1. 2

                                Reading up on capabilities, currently ‘A Password-Capability System’, and what else I can find on Norman Hardy’s site. Also trying to finally get through Jonathan Rees' thesis ‘A Security Kernel Based on the Lambda-Calculus’.

                                1. 5

                                  I don’t have anything of value to contribute, but literally laughed out loud over this story. How did we get here? And, how can we turn back to a not so laughable age?

                                  1. 2

                                    It’s happening all the time. Companies want to judge candidates quickly, even better if an HR employee with no experience in the field can use an arbitrary index value that is perceived as a good measure of the candidate’s skill. People will look at your Stack Overflow karma, your GitHub profile, sites like HackerRank, etc. I think it’s actually getting worse, and both sides are gaming the system.

                                    What can we do? Refuse to participate in the recruiting processes of companies that go over the line. If you are the recruiter, ask the person for a portfolio of his previous work, then hire him on a short-term trial contract before offering a long-term one. That’s a win-win for both parties in most scenarios.

                                    The recruiting process should be closely related to the work that will be done. Not a race to be the hottest in whiteboard algorithmic interviews, stalking candidates on social sites etc.

                                    1. 2

                                      For every refusal on my part, there might be 10 other bodies, or employers who won’t refuse. Both sides are incentivized to do this, or it wouldn’t be a thing.

                                    2. 2

                                      At least there are no free official-looking certificates complete with seals being mailed anymore. That was fun, while it lasted. Had a stack of several of those, each gained in under an hour. Unfortunately, the provider quickly caught on that mailing actual physical certificates internationally did come at a cost.

                                    1. 3

                                        Well, there’s Hacker News of course; for .NET I follow Chris Alcock’s The Morning Brew and Alvin Ashcraft’s The Morning Dew. Slashdot used to be quite good, but it’s gotten worse, and while I still read it, there are maybe one or two interesting items a week. For slightly more general IT news there’s The Register.

                                      Other than that I try to regularly listen to podcasts, for a slightly more in-depth view on topics that I’d not necessarily dive into on my own, and general local news, which in my case is mostly nu.nl, geenstijl and daskapital.

                                      1. 1

                                          Speaking of local news, other than the ones you mentioned, there’s De Speld!