1. 9
    What is mathwashing? ai culture math mathwashing.com

  2. 23

    Is there a

    version that doesn’t

    require me to scroll

    a page for

    every three words

    of content?

    1. 4

      The heavy page is discrimination against poor people with crappy phones/plans and folks on dialup or pseudo broadband. They might leave the site due to its user experience. Then, mathwashing will remain a non-issue to them. Was this…

      Accidental: When good intentions are combined with a lack of knowledge and naive expectations about people’s level of Internet access or economic class.

      On Purpose: Because people don’t question site owners’ decisions about bloated web sites, this faith that the education attempt was sincere can be abused.

      Two things the author should realize:

      1. Technologists designed ways to deliver slim web pages to a wide audience.

      2. Site owners should use them as a matter of course.

      1. 2

        naive expectations about people’s level of Internet access or economic class

        I suspect this is the case. People forget that WWW means world wide web.

        To answer @hwayne’s question, the most readable version I saw is the one displayed by links.
        The whole content fills three screens on my monitor.

    2. 11

      The handling of the Damore memo and the related science should tell us everything we need to know about the degree to which we can trust both data and the people who criticize it.

      The problem with claiming “mathwashing” is that it’s dangerously close to creating a culture that ignores studies if they don’t feel right. This is not scientific governance.

      1. 6

        You mean the method of citing a number of irrelevant and/or dubious scientific studies in an ideological rant based on logical fallacies, and then claiming these citations bolster the credibility of the rant and indicate that anyone who objects is anti-science? Yep!

        1. 0

          The handling of Galileo’s studies should tell us everything we need to know about how science always triumphs over obtuseness.

          Now, do you know what’s funny?

          We call “scientific researchers” the incompetent people who argue that their neural networks’ models are too deep for humans to understand.
          I mean, these people not only rationalize their failures, they sell them as features!

          This is not scientific governance.

          1. 7

            Galileo’s heliocentric theories had reasonable scientific counterobjections based on the observational evidence available at the time, and other contemporary figures with heliocentric models of the universe (such as Copernicus and Kepler) had no particular trouble with the authorities. Galileo’s persecution by the Church was mostly about political and personal conflict between him and the pope. That conflict has been ahistorically re-contextualized as a story about the Catholic church (or religion in general) persecuting inconvenient scientific truths, by certain modern scientists who generally study different things than Galileo did and offend different authorities than the Catholic church.

            1. 1

              I don’t understand what you’re saying here, could you please rephrase it?

              1. 4

                Let’s try (but I’m not sure what is not clear… my English simply sucks, sorry…)

                I understand the concerns of @friendlysock, but the fact that we now teach a heliocentric model in elementary schools shows that good science always wins against censorship.
                We won’t ignore disturbing studies that “don’t feel right”.
                On the contrary, we will verify them carefully, as we should do with anything that is qualified as “Science”. (And we should not qualify any unverified claim as “Science”: it’s just a hypothesis until several independent experiments confirm it!)

                However, today in IT there is another issue that is much more dangerous.
                Several powerful companies are lobbying to spread the myth of machine intelligence. Not just to collect money or data, but to delegate to machines the responsibility for their errors.

                Now, if you tell me that software you wrote cannot be debugged, I think that you are not competent to develop any software at all. But if you boldly state that your software is not broken, just too smart for me (and even for you) to understand its internal workings, I would remove you from any position of responsibility in IT.

                For some strange reason, this is not what happens in AI.

                Developers happily admit that they cannot explain their own neural network’s computation.
                But they rationalize such a failure as if it were not their fault: it’s the neural network that is “too smart” (they usually mumble that it takes into account too many variables, finds unintuitive correlations, and so on).
                So they are not just incompetent developers: they are rationalizing their failures.

                And they sell such opacity not just as an inherent aspect of neural networks, but as an advantage!

                They do not say “this software is a shitty mess”, they say “this software is too smart for humans!”.

                Is this a scientific approach?
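
                To make this concrete, here is a minimal sketch (toy weights and a made-up input, nobody’s production model) of one of the simplest inspection techniques, occlusion probing: remove each input feature and watch how the output moves. It is crude, but it shows that a network’s computation is an inspectable sequence of arithmetic, not magic.

                ```python
                # A toy two-layer network with random weights. All numbers are
                # invented; the point is only that every intermediate value can
                # be computed, printed, and probed.
                import numpy as np

                rng = np.random.default_rng(0)
                W1, b1 = rng.normal(size=(3, 4)), rng.normal(size=3)  # 4 inputs -> 3 hidden
                W2, b2 = rng.normal(size=(1, 3)), rng.normal(size=1)  # 3 hidden -> 1 output

                def forward(x):
                    h = np.tanh(W1 @ x + b1)     # hidden activations: plain numbers we can print
                    return (W2 @ h + b2).item()  # scalar output

                x = np.array([0.9, -0.2, 0.4, 0.7])  # one hypothetical input
                baseline = forward(x)

                # Occlusion probe: zero out each feature and measure the output shift.
                for i in range(len(x)):
                    occluded = x.copy()
                    occluded[i] = 0.0
                    print(f"feature {i}: output shifts by {forward(occluded) - baseline:+.3f}")
                ```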

          2. 7

            This article cites as fact that Facebook suppressed conservative news sources, without even bothering to read Snopes. https://www.snopes.com/fact-check/is-facebook-censoring-conservative-news/

            1. 5

              Also known as McNamara’s fallacy.

              https://en.wikipedia.org/wiki/McNamara_fallacy

              TBH mathwashing doesn’t seem like a very good name.

              1. 3

                The advice I’ve seen the LessWrong people give is to take the time and do all the math you reasonably can, but if, despite all the calculations, you still feel like it’s telling you to do the wrong thing, just do the right thing anyway. Doing the math is important for influencing that gut feeling, but you shouldn’t ignore the feeling. I hadn’t known there was a name for doing the opposite, though.

                1. 1

                  Well, I’d say it’s more descriptive than “artificial intelligence”, given that we usually speak of cybernetics instead.

                  Do you have an alternative term to propose?

                2. 4

                  This post from the EFF explains the problem a lot better than this website/presentation; it went unnoticed but was very interesting.

                  1. 1

                    Great read, thanks!

                  2. 2

                    In one study, Harvard professor Latanya Sweeney looked at the Google AdSense ads that came up during searches of names associated with white babies (Geoffrey, Jill, Emma) and names associated with black babies (DeShawn, Darnell, Jermaine). She found that ads containing the word “arrest” were shown next to more than 80 percent of “black” name searches but fewer than 30 percent of “white” name searches.

                    Is this a problem? Presumably the ads were placed based on click-through rate, and while CTR is an imperfect measure of relevancy, I don’t see a problem with delivering relevant ads. As I understand it, even TV ads are racially targeted based on channels and programs.

                    1. 8

                      She found that ads containing the word “arrest” were shown next to more than 80 percent of “black” name searches but fewer than 30 percent of “white” name searches.

                      […]
                      I don’t see problems with delivering relevant ads.

                      I’m not sure I understand your objection. Actually, I hope that I did not understand it at all.

                      Can you elaborate?

                      1. 4

                        It could be that African-American names like DeShawn or Darnell might show up more in articles about arrests, since, sadly, disproportionately many African-Americans are arrested in the US. Google is doing what Google does well: making correlations, which ends up exposing biases in our culture. It’s similar to the time Target sent baby-related coupons to an address because its data showed a high likelihood of pregnancy; and yes, the daughter was pregnant, but her father did not know (she was still a teen, if I remember correctly).

                        The right answer is to address this as a society and let Google do what it does (with respect to searching). The wrong answer is to blame the data (or the data collector) for what it’s showing.

                        1. 3

                          The right answer is to address this as a society…

                          For sure!

                          …and let Google do what it does (with respect to searching).

                          The problem is that Google (but not only Google!) replicates, reinforces, and spreads the biases we should fix.

                          So, to fix our culture we should “fix the IT business” too.

                          Instead of trying to build AIs that look ethical, we should ethically regulate businesses, so that people can’t profit from unethical behaviours (not even through undebuggable software proxies).

                        2. 2

                          Google is placing “X arrest records” ads more often for search “X” if “X arrest records” ads are clicked more often for search “X”. I don’t see why Google should place ads that are clicked less often.
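
                          In case it helps, here is a minimal sketch of the selection rule I mean, with made-up ads and counts (real ad auctions are of course far more complex):

                          ```python
                          # Hypothetical ads with invented impression/click counts.
                          ads = {
                              "X arrest records": {"shown": 300, "clicked": 24},  # observed CTR 0.08
                              "X public profile": {"shown": 300, "clicked": 15},  # observed CTR 0.05
                          }

                          def pick_ad(ads):
                              # Greedy rule: always show the ad with the highest observed CTR.
                              return max(ads, key=lambda a: ads[a]["clicked"] / ads[a]["shown"])

                          print(pick_ad(ads))  # -> "X arrest records", because its measured CTR is higher
                          ```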

                          1. 3

                            Or maybe “X arrest records” ads are clicked more often for search “X” because Google is placing “X arrest records” ads for search “X” 3 times more often?

                            But actually this is not the real cognitive issue.

                            The problem is that if you systematically propose a correlation between X and “arrest record”, people’s subconscious registers the association even if they don’t click the ads.

                            Now, even if Google proposes such ads just to maximise profit, the long-term effects of such an association are an externality that affects the beliefs and the social and political behaviour of many people.

                            The fact that such externalities are hard to measure protects Google from fair taxation that could cover the social costs, but it also makes it important to forbid such externalities by law.
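
                            To show the circularity concretely, here is a minimal simulation (all numbers invented): two ads with the same true click probability, where one merely starts with a small head start in measured CTR. A greedy “show the highest-CTR ad” rule then keeps favouring it, so the observed rates end up reflecting placement history at least as much as user behaviour.

                            ```python
                            import random

                            random.seed(1)
                            TRUE_CTR = 0.05                       # identical for both ads, by construction
                            stats = {"A": [20, 2], "B": [20, 1]}  # [impressions, clicks]: A got lucky early

                            for _ in range(10_000):
                                # Greedy placement: show the ad with the best observed click-through rate.
                                ad = max(stats, key=lambda a: stats[a][1] / stats[a][0])
                                stats[ad][0] += 1
                                if random.random() < TRUE_CTR:    # users click both ads at the same true rate
                                    stats[ad][1] += 1

                            for ad, (shown, clicked) in stats.items():
                                print(f"ad {ad}: shown {shown} times, observed CTR {clicked / shown:.3f}")
                            ```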

                            1. 1

                              I agree an externality is plausible in this case. I am in favor of taxing externalities, in principle. For example, a carbon tax.

                              In practice, a good tax policy is a hard problem, and this problem is probably far beyond our current taxation skill. I may reconsider when we get much better at taxing externalities, say, after a successful implementation of a carbon tax.

                              1. 1

                                In practice, a good tax policy is a hard problem

                                So we have a single viable solution: forbid the techniques that produce the externalities.

                                1. 1

                                  What? Another viable solution is to live with the externalities without doing anything. This is actually what we do with most externalities.

                                  1. 1

                                    Well, why not?

                                    I mean, the profit of a few companies is much more important than the lives of a few billion people, isn’t it?

                                    No.

                                    You can call this “viable” just because you have been lucky enough not to be systematically discriminated against, systematically associated with arrests, systematically underpaid.
                                    You know another externality we all live with? The suicides at Foxconn.
                                    Another one? The manipulation of democracies.

                                    In practice, you can accept externalities that poison the roots of democracy only because you profit from them. Or, more probably, because you have been convinced that you do.