Threads for honk

    1. 12

      I give Google absolutely zero benefit of the doubt here. Everyone should assume as a matter of course that the Google Play store cannot be relied upon to host software that offends the political sensibilities of either Google themselves, or a sufficiently motivated group of people who are willing to abuse the abuse-report mechanism to have Google censor it on their behalf.

      What I would like to know is why it is the case that the F-droid version of the app is “out of date”. Is there some reason it’s more difficult to ensure that F-droid has the latest released version, compared to the Google Play store? I would personally like to see the Element team treat free software distribution channels as first-class, and treat the Google Play store as only a secondary channel, used to make it as easy as possible for as many people as possible to obtain the app.

      1. 6

        What I would like to know is why it is the case that the F-droid version of the app is “out of date”.

        I believe apps in the official F-droid repository are updated/packaged based on “pulls” from the F-droid team, instead of triggered by “pushes” from individual app developers. The F-droid team is aware this sometimes results in some lag, and is working on improving the cycle time.

        Element could also host their own F-droid repo to speed up the process, but AFAIK that comes with a couple of signature-related challenges stemming from non-reproducible builds. I could swear I saw element_hq mention they were considering it somewhere, but now I can’t find a source. Here’s the GitHub issue for it if you’d like to add a thumb: https://github.com/vector-im/element-android/issues/1857
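
        For context, the signature problem is roughly this: F-droid builds and signs APKs itself, so unless a build is bit-for-bit reproducible, the F-droid APK won’t match the developer-signed Play Store one, and Android won’t upgrade between the two. As a very rough sketch of what “reproducible” means here, one could compare two builds while ignoring the v1 signing files (the file names below are hypothetical, and this heuristic glosses over the v2/v3 signing block that lives outside the zip entries):

            import hashlib
            import zipfile

            # Files produced by v1 (JAR) signing; skip them when checking
            # whether two builds are otherwise identical.
            SIGNING_SUFFIXES = (".RSA", ".DSA", ".EC", ".SF", ".MF")

            def content_digests(apk_path):
                """Map each zip entry in the APK to a SHA-256 digest,
                skipping the signature files under META-INF/."""
                digests = {}
                with zipfile.ZipFile(apk_path) as zf:
                    for name in zf.namelist():
                        if name.startswith("META-INF/") and name.endswith(SIGNING_SUFFIXES):
                            continue
                        digests[name] = hashlib.sha256(zf.read(name)).hexdigest()
                return digests

            def diff_builds(dev_apk, fdroid_apk):
                """List the entries that differ between two builds; an
                empty result suggests the build is reproducible."""
                a, b = content_digests(dev_apk), content_digests(fdroid_apk)
                return sorted(n for n in a.keys() | b.keys() if a.get(n) != b.get(n))

            # Hypothetical file names, for illustration only.
            for name in diff_builds("element-play.apk", "element-fdroid.apk"):
                print("differs:", name)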

        1. 2

          The announcement post linked above contains:

          Update: reminder that in the interim you can download a (slightly outdated) version of Element Android from F-Droid at https://f-droid.org/en/packages/im.vector.app. We’re also looking into running our own F-Droid repository going forwards so the most recent build is always available there.

        2. 2

          Yes, I have one app on F-droid, and they build everything from source to ensure “what you see is what you get”. Thus they have to actively poll every source repo and rebuild in a queue. Some repos don’t even have the automation enabled (for example, checking git release tags), so each version has to be submitted manually when no such release-detection method is available or when too many builds fail.
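
          As an illustration, the release-tag checking mentioned above boils down to polling each repo for new version tags, roughly like this (the tag pattern and the example URL are assumptions; the real F-droid tooling is more involved):

              import re
              import subprocess

              def latest_release_tag(repo_url):
                  """Poll a git remote for vX.Y.Z release tags and return
                  the highest one, or None if the repo has no such tags."""
                  out = subprocess.run(
                      ["git", "ls-remote", "--tags", repo_url],
                      capture_output=True, text=True, check=True,
                  ).stdout
                  versions = []
                  for line in out.splitlines():
                      # Lines look like "<sha>\trefs/tags/v1.2.3"; the $ anchor
                      # drops the duplicate "^{}" dereference entries.
                      m = re.search(r"refs/tags/v(\d+)\.(\d+)\.(\d+)$", line)
                      if m:
                          versions.append(tuple(int(x) for x in m.groups()))
                  return "v%d.%d.%d" % max(versions) if versions else None

              # Repos without tags matching the pattern return None and would
              # need each version submitted manually, as described above.
              print(latest_release_tag("https://github.com/vector-im/element-android"))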

      2. 3

        Never ascribe to malice that which can be better explained by blind algorithms.

        I think this app was targeted by a coordinated reporting campaign and Google’s automated system removed it.

        On Monday a human will see the social media shitstorm and reinstate it.

        1. 15

          Frankly, if you’re going to have robots take out applications with at least a hundred thousand active, long-term users on a weekend, you should probably have someone on duty to undo the damage.

          1. 17

            You can bet there are people working 24/7 at Google to serve the needs of advertisers.

            Users? Not so much. App store moderation is a cash sink. Let the bots handle it.

            1. 3

              It appears that after this shitstorm they brought someone in to help stop the PR bleeding:

              Update: we just got a call from a Google VP who explained the suspension was triggered by a report of extremely abusive content accessible on the http://matrix.org server. Our trust & safety team had already acted on it, and the app should be reinstated shortly.

              I wonder if they’ll fix the process. And by “wonder if” I mean “think it is unlikely that”.

              1. 2

                If you can think of a way to solve this (absolutely no false positives or false negatives during review) at the scale of any of the more popular app stores, please apply as SVP for that product area at any of the app store wielding companies ASAP.

                (“Don’t do an app store” won’t fly anymore now that users are used to it)

                1. 3

                  I can. And they can; they’re smarter than me. The reason I think it’s unlikely is that there are obvious ways to improve, and the only reason they wouldn’t is that these shitstorms don’t hurt them. It’s not a priority.

                  Some observations:

                  • Firefox displays a ton of extremely objectionable content. So does Vivaldi. So does Opera. They never auto-ban these applications.

                  • Google has employees who know how to look at an application like this on the weekend and fix it, as evidenced by the action I linked.

                  If they wanted to avoid this, there are two obvious paths:

                  1. They could let developers declare “my app is like Firefox, displaying content that I don’t control from arbitrary places on the internet”, have someone at the same level as the person who put this app back in the store verify that assertion, and then place the app in a bucket that is no longer subject to automated takedowns.

                  2. They could, whenever an application with a number of users above (insert threshold here) gets flagged for takedown, require someone at the same level as the person who put this app back in the store to verify that the takedown is appropriate.

                  If these steps are obvious to me, they’re obvious to anyone google has paid to think about it. They’d have already taken one of these steps last time this happened, if fixing this were a priority.
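
                  To make those two paths concrete, here is a rough sketch of how they might slot into an automated takedown pipeline (every name and the threshold are hypothetical; this merely restates the two proposals above as code):

                      from dataclasses import dataclass

                      USER_THRESHOLD = 100_000  # hypothetical cutoff for mandatory human review

                      @dataclass
                      class App:
                          name: str
                          active_users: int
                          # True only once a senior reviewer has verified the developer's
                          # claim that the app, like a browser, displays arbitrary
                          # third-party content it doesn't control (path 1).
                          verified_content_conduit: bool = False

                      def handle_abuse_flag(app, escalate_to_human):
                          """Decide what an automated abuse flag does to an app."""
                          if app.verified_content_conduit:
                              # Path 1: verified browser-like apps are exempt from
                              # automated takedowns; route the report to the content host.
                              return "forward report to the app's abuse contact"
                          if app.active_users >= USER_THRESHOLD:
                              # Path 2: widely used apps are never removed by a bot alone.
                              return escalate_to_human(app)
                          return "automated takedown"

                      element = App("Element", active_users=500_000)
                      print(handle_abuse_flag(element, lambda a: f"queue {a.name} for senior review"))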

                  1. 2

                    In this specific case, the content seems to have come from a server controlled by the same entity that controls the app (from one of the updates: “[Google person] explained the situation, which related to some extremely abusive content which was accessible on the default matrix.org homeserver”), so “my app can display stuff from anywhere” wasn’t sufficient this time.

                    That said, apparently the matrix.org operators have an abuse team and the Play Store folks now have the contact information on file.

                    I also fully expect metrics about users to be part of the assessment, but those could be relatively easily gamed to extend the lifetime of malware in the store. Rough sketch:

                    • Create a harmless do-nothing app and install it on a few thousand dummy devices somewhere. This step can be done as preparation long before actual use, perhaps by some app-shells-as-a-service outlet.

                    • Change the owner, add a “useful” purpose, and market it to death to get your victims to install it (a victim-user-base-as-a-service company).

                    • Update it with malware on Friday at noon US Pacific time, and have the problem identified on Friday evening.

                    But since the user metrics say the app is “important”, it gets deferred to the Monday team meeting - or needs VP intervention, which will only work so many times before the VP requests to be taken out of the loop by whatever means necessary ;-)

                    I think there has been a push for stricter content control after Jan 6 and they are still filling in some blanks in the process - which may also explain the VP involvement, which probably means that there has been lots of escalation behind the scenes (VPs don’t usually get on individual cases by themselves). This likely ruined the weekend of a sizable group of people in that team, not just that of some on-call engineer who happened to be scheduled and who probably ticked all the right boxes in their (re-)review of the app, especially since the hosting and the app are operated by the same group. I think this will serve as motivation to change the process so this particular scenario doesn’t happen again.

                    (Disclosure: I work at Google, but have no insights into how Play Store operates, just a few educated guesses about megacorp behavior)

                    1. 4

                      In this specific case, the content seems to have come from a server controlled by the same entity that controls the app (from one of the updates: “[Google person] explained the situation, which related to some extremely abusive content which was accessible on the default matrix.org homeserver”), so “my app can display stuff from anywhere” wasn’t sufficient this time.

                      That still feels to me like banning Firefox because some extremely objectionable content made it onto forums.mozillazine.org. Which is to say, I don’t think anybody would do that on an automated basis if they’d accurately characterized the application.

                      I think there has been a push for stricter content control after Jan 6 and they are still filling in some blanks in the process - which may also explain the VP involvement, which probably means that there has been lots of escalation behind the scenes (VPs don’t usually get on individual cases by themselves). This likely ruined the weekend of a sizable group of people in that team

                      That’s a very interesting point. I wasn’t thinking about the matrix ban in that context.

                      I think the malware concerns you brought up are also interesting, but there should be some other data points at play there that can help - data points that are missing when the content has to be parsed by humans.

                2. 1

                  If you can think of a way to solve this (absolutely no false positives or false negatives during review) at the scale of any of the more popular app stores, please apply as SVP for that product area at any of the app store wielding companies ASAP.

                  There’s a substantial difference between an inscrutable AI and human reviewers; this is at least the second random ban to show up on HN this month. Apple’s review process is not without issues, but they use human reviewers. Google could use manual review and alter the app store submission costs accordingly. They are certainly capable of this, but using AI instead of providing customer support seems to be a core value.