Threads for williballenthin

  1. 5

    Aren’t tools like VirusTotal just matching against (partial) hashes of the files? Shouldn’t any rewrite temporarily evade them?

    It seems more likely to me that this was a side benefit of something else (portability, development velocity, etc) than the goal, unless I’m misunderstanding.

    1. 4

      They typically do a bit of fuzzy matching because malware is often run through obfuscators that do some random permutations. I presume that the Rust code is a sufficiently different shape that it evades this matching.

      1. 3

        while every AV engine matches based on hash and byte signatures, some of the more advanced ones will use emulation to guess how a program will act within the first N instructions. there are still heuristics used to match the “shape” of a program but they’re less dependent upon the physical representation of the executable.

        VT (and other online services) also run executables in sandboxes to extract runtime behaviors. then they can find malware based on things like network connections to known bad infrastructure or destructive encryption of system files.

        ransomware is usually really obvious to identify, especially dynamically, because it just encrypts everything. but sandbox evasion is also pretty easy (is end user software installed? is the system running for more than five minutes? correct command line argument present? etc).

        so, i’d guess that the rust aspect bypassed static and emulation heuristics because the “shape” is still unusual. and there are probably a couple of trivially novel anti-sandbox checks or other guardrails to evade VT. if i get a chance i’ll poke at the sample today and update here.

        1. 3

          Ah:

          It requires a list of target directories to encrypt to be passed as command line parameters and then encrypts files using AES-256, with RSA used to protect the encryption keys.

          so in a sandbox the default behavior is to do nothing unless the appropriate command line arguments are supplied. the automated system probably doesn’t know about them, so the executable appears benign unless the right code path is triggered.

          there’s research on program analysis to figure out what CLI arguments to provide to maximize code coverage, but i don’t think it’s widely deployed.

      1. 7

        take a peek at Microsoft PowerToys, esp. the Run tool, for some useful utilities that are actively updated.

        probably also consider using the new Windows Terminal that’s also actively developed.

        1. 5

          How was there no support for RIGHT and FULL OUTER JOIN until now? How did I not know about the lack of it until now? So interesting!

          1. 3

            That shows how rarely RIGHT JOIN actually gets used.
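
            For illustration (a sketch of my own, not from the release notes; the RIGHT JOIN line needs the interpreter’s sqlite3 to be linked against SQLite ≥ 3.39.0): whatever a RIGHT JOIN expresses can usually be written by swapping the tables in a LEFT JOIN, which is why so few people missed it:

            import sqlite3  # RIGHT JOIN below requires SQLite >= 3.39.0 under the hood

            con = sqlite3.connect(":memory:")
            con.executescript("""
                CREATE TABLE people(id INTEGER PRIMARY KEY, name TEXT);
                CREATE TABLE pets(owner_id INTEGER, pet TEXT);
                INSERT INTO people VALUES (1, 'ada'), (2, 'grace');
                INSERT INTO pets VALUES (1, 'cat'), (3, 'dog');  -- owner 3 has no people row
            """)

            # RIGHT JOIN keeps the unmatched pets row...
            for row in con.execute(
                "SELECT name, pet FROM people RIGHT JOIN pets ON people.id = pets.owner_id"
            ):
                print(row)  # ('ada', 'cat') and (None, 'dog')

            # ...which is the same result as the long-supported LEFT JOIN with the tables swapped.
            for row in con.execute(
                "SELECT name, pet FROM pets LEFT JOIN people ON people.id = pets.owner_id"
            ):
                print(row)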

            1. 2

              This is a fantastic point. After reading your comment I thought back to when I last used a RIGHT JOIN, and it has been a very long time since I had circumstances so specific.

            2. 1

              those changes are from 3.39.0 released 2022-06-25, but that doesn’t really change your comment very much!

            1. 6

              I read this article a few days ago and it bounced around in my head a bit.

              I think it could be quite interesting to build a batch processing system like airflow on pure kubernetes resources and operators:

              • First I would create a JobTemplate that is like a CronJob but unscheduled; there is already kubectl create job <name> --from=cronjob/foo.
              • Then a DAGTemplate could be written like:
              start:
                - jobTemplateName: A
                  id: A
                - jobTemplateName: B
                  id: B-1
              steps:
                - type: fanout
                  follows: A
                  tasks:
                    - jobTemplateName: B
                      id: B-2
                    - jobTemplateName: C
                      id: C
                - type: fanin
                  follows:
                    - A
                    - B-1
                    - B-2
                  jobTemplateName: D
                  id: D
                - type: simple
                  follows: C
                  jobTemplateName: E
                  id: E

              My thinking is that kubernetes could generate a PV for every job to put results into (sizes etc. configurable) and mount that PV into subsequent jobs as ReadOnly. If the PVs or PVCs are configured to linger around after a run this would allow easy debugging by starting jobs from the job-template oneself.

              Instantiating a DAG from a DAG template can then be done with a “simple” API call to the kubernetes API.
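
              As a rough illustration of the kind of API call I mean (my own untested sketch, reusing the existing CronJob → Job mechanics rather than the hypothetical templates above; names are placeholders, and it assumes a recent kubernetes Python client where CronJob lives in batch/v1):

              # Equivalent of `kubectl create job <name> --from=cronjob/foo`: read the
              # CronJob's jobTemplate and submit a one-off Job built from it.
              from kubernetes import client, config

              config.load_kube_config()
              batch = client.BatchV1Api()

              cron = batch.read_namespaced_cron_job(name="foo", namespace="default")
              job = client.V1Job(
                  metadata=client.V1ObjectMeta(name="foo-manual-run"),
                  spec=cron.spec.job_template.spec,  # reuse the template's job spec as-is
              )
              batch.create_namespaced_job(namespace="default", body=job)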

              This is all non-actionable, I just needed to get it out of my brain :-D

              1. 4

                This seems similar to workflow description languages like snakemake and CWL.

                1. 3

                  This seems almost exactly like an Argo Workflows DAG

                  1. 1

                    This is what we use at $work to enable analysts to deploy bespoke tools (packaged into Docker images) to a fairly complex-but-extensible pipeline/DAG. This lets us lean into “containers as sandboxes” and “k8s as horizontal scaling platform” while keeping things fairly declarative. All in all, it’s worked quite well, in my opinion.

                1. 2

                  It contains more than 15,000 commits from more than 400 unique contributors (including more than 200 with multiple contributions)

                  wow!

                  1. 4

                    I’m a little confused by what it does. It gives a Rust wrapper library for embedding CPython and also gives a single static binary for distributing CPython (wrapped in its Rust library). So the primary benefits are easier interop with Rust programs and easier distribution because it’s a single binary to get “CPython”.

                    Do I have that right?

                    1. 3

                      This is approximately my understanding. The related PyOxidizer project seems to be like PyInstaller except that it’s built on Rust infrastructure and has some other tradeoffs (in-memory loading of packages vs. unpacking to a temp directory?). I think I could use another blog post, etc., to better explain when/why I’d reach for this toolset.

                      1. 2

                        In my understanding PyOxidizer is an “umbrella” project featuring much of what you’ve described, plus probably a lot more.

                        However, speaking strictly of pyoxy, what the article describes seems to serve the following use-case:

                        • having a self-contained Python run-time (CPython 3.9 at the moment it seems); self-contained in the sense that you don’t need anything else except that executable (and your code) to run it; (at the moment not being statically linked, you also need glibc on Linux, but this is not a major issue for a first release;)
                        • that self-contained executable can be relocated anywhere on your file-system, thus deploying a Python-based application in an uncontrolled environment becomes easier;
                        • being a single file, it’s also lighter on the file-system, especially at startup, as there aren’t hundreds of small files to be found and read off disk;

                        Basically, if before I wasn’t too comfortable writing operations scripts in Python (as opposed to bash), because I never knew which turn the Python on my distribution would take, now I can have pyoxy stored alongside my scripts and be sure they’ll keep on running as long as I need them, without needlessly breaking due to Python upgrades.

                        Besides giving you the Python runtime (with pyoxy run-python), it also allows the user to run pyoxy run-yaml, where the yaml refers to a custom YAML file the user can provide to instruct the interpreter how it should be set up. For example I usually use #!/usr/bin/env -S python2.7 -u -O -O -B -E -S -s -R in my scripts, and this always leaves a mess in ps and htop; with the new feature I would use something like #!/usr/bin/env pyoxy run-yaml (see the article), the process tree would be much cleaner, and I would also have more control over the interpreter setup.

                      1. 3

                        The codebase (including the lattice library, all the consistency levels, the server code, and client proxy code) amounts to about 2000 lines of C++ …

                        wow, I think it’s neat to see this collection of algorithms, with this level of performance, implemented in such a concise codebase. I’d have anticipated an order of magnitude more code.

                        1. 1

                          Hunting for malware on NPM. So far, there’s a lot of obvious stuff. But I’d anticipate a fair amount of subtle behavior, too. The trick is finding the right signal…

                          1. 2

                            PEP 654 … introduces ExceptionGroups. With them, we’re able to raise multiple exceptions simultaneously

                            I’m not familiar with the pattern of raising multiple exceptions. What other languages support this? I’m interested to see where and how they become useful.

                            try:
                                raise ExceptionGroup("Validation Errors", (
                                    ValueError("Input is not a valid user name."),
                                    TypeError("Input is not a valid date."),
                                    KeyError("Could not find associated project.")
                                ))
                            except* (ValueError, TypeError) as exception_group:
                                ...
                            except* KeyError as exception_group:
                                ...
                            
                            1. 4

                              ExceptionGroups have primarily been introduced to support proper error handling in asyncio TaskGroups.
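
                              A minimal sketch (my own example, Python 3.11+) of how the two fit together: the TaskGroup bundles its children’s failures into an ExceptionGroup, which you then pick apart with except*:

                              import asyncio

                              async def parse_name():
                                  raise ValueError("Input is not a valid user name.")

                              async def main():
                                  try:
                                      async with asyncio.TaskGroup() as tg:    # Python 3.11+
                                          tg.create_task(parse_name())
                                          tg.create_task(asyncio.sleep(1))     # cancelled once a sibling fails
                                  except* ValueError as eg:
                                      # the group re-raises child failures wrapped in an ExceptionGroup
                                      print("caught:", eg.exceptions)

                              asyncio.run(main())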

                              1. 1

                                That’s interesting. In Go, I have a multierror that I use to consolidate errors when running concurrent goroutines. Of course, errors are just a normal interface in Go, so it’s pretty easy to just add an Error method to a slice of errors. It doesn’t need any special language support.

                                I wrote a gist to try to combine Python exceptions once, but it was sort of a kludge. It’s nice that it will be a first class thing now.

                                1. 1

                                  That’s basically what asyncio.gather() already does, but PEP 654 exists to make the process less cumbersome.
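
                                  Roughly, the cumbersome version today looks like this (my own sketch), where you fish the exceptions back out of the results by hand:

                                  import asyncio

                                  async def parse_name():
                                      raise ValueError("Input is not a valid user name.")

                                  async def parse_date():
                                      raise TypeError("Input is not a valid date.")

                                  async def main():
                                      # gather() hands exceptions back as ordinary return values...
                                      results = await asyncio.gather(
                                          parse_name(), parse_date(), return_exceptions=True
                                      )
                                      # ...and you filter them out yourself, which PEP 654 tidies up.
                                      errors = [r for r in results if isinstance(r, BaseException)]
                                      print(errors)

                                  asyncio.run(main())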

                              2. 3

                                It’s not built-in, but if I’m understanding the feature correctly it wouldn’t be difficult to do with Common Lisp conditions.

                                1. 1

                                  This reminds me a bit of pydantic: if you have errors in your model it should give you multiple validation errors instead of blowing up on the first one. Maybe it would also be useful in that context?

                                  1. 4

                                    I don’t think it’s for that use case. Within a single library, you can do:

                                    from typing import List

                                    class ValidationError:
                                        line_number: int
                                        reason: str

                                    class ValidationErrors(Exception):
                                        # a single exception carrying every collected validation failure
                                        errors: List[ValidationError]

                                    So based on above discussion it sounds like it’s more about combinations of multiple different exceptions.

                                    1. 2

                                      Django’s forms library has something similar; instead of forms having a potential attached error, they have a potential list of attached errors, produced by catching and collecting every django.forms.ValidationError raised during the validation process.
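
                                      A standalone sketch of that pattern (hypothetical form, not from Django’s docs; the settings.configure() bit is only there so it runs outside a project):

                                      import django
                                      from django.conf import settings

                                      settings.configure(USE_I18N=False)  # just enough configuration to run standalone
                                      django.setup()

                                      from django import forms

                                      class SignupForm(forms.Form):
                                          username = forms.CharField()
                                          start_date = forms.DateField()

                                          def clean_username(self):
                                              value = self.cleaned_data["username"]
                                              if not value.isalnum():
                                                  # each ValidationError raised while cleaning is collected into
                                                  # form.errors instead of aborting validation of the other fields
                                                  raise forms.ValidationError("Input is not a valid user name.")
                                              return value

                                      form = SignupForm(data={"username": "not valid!", "start_date": "nope"})
                                      print(form.is_valid())        # False
                                      print(form.errors.as_data())  # errors for both username and start_date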

                                      1. 1

                                        Seeing as we’re talking about new features in recent versions of Python, you don’t need from typing import List anymore; since 3.9 you can just write:

                                        class ValidationErrors(Exception):
                                            errors: list[ValidationError]
                                        
                                        1. 1

                                          Good to know! And I guess I actually have one project where I can use Python 3.9 features, but everything else only just dropped 3.6. Going to be a while…

                                          1. 1

                                            TIL that you can do from __future__ import annotations as of Python 3.7 for this.
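
                                            A tiny sketch of why that works (my own example): with the future import, annotations are stored as strings and never evaluated, so the 3.9-style syntax parses fine on 3.7 and 3.8 too:

                                            from __future__ import annotations  # PEP 563: annotations are kept as strings

                                            class ValidationError:
                                                line_number: int
                                                reason: str

                                            class ValidationErrors(Exception):
                                                # list[ValidationError] is never evaluated at class-definition time,
                                                # so this also runs on Python 3.7/3.8 where list isn't subscriptable
                                                errors: list[ValidationError]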

                                    1. 2

                                      Loosely related: I recall using raw Python bytecode manipulation to deal with obfuscated malware, as described here: https://www.mandiant.com/resources/deobfuscating-python

                                      1. 4

                                        I find it interesting that the only discussion about this link is about marking the URL as having sensitive documents shown, which I understand is a pretty US-centric thing. As I’m not a security researcher, I’m curious why there is no other discussion around the content of the article. Is it because everyone already knew the NSA is backdooring Chinese systems, so it’s old news, or is it something else?

                                        1. 3

                                          The original report (English) is here (pdf) and is pretty good. Lots of detail recovered by reverse engineering, e.g. the use of BPF to implement magic packet recognition. As a security person, these low-level details interest me!

                                        1. 27

                                          Note: this article contains inline images of marked classified documents.

                                          This comment is not intended to spark a discussion; simply put, some people may want to avoid the article for this reason.

                                          1. 11

                                            I’m pro having this warning. Maybe we should have a specific tag if people object to this helpful comment text.

                                            1. 5

                                                I’ve posted this text before and hate to copy-pasta verbatim. If it’s inappropriate, please indicate and/or suggest an alternative method to flag.

                                              1. 3

                                                I personally would benefit from the absence of any flags and notices.

                                                1. 8

                                                  Can you explain why?

                                                  Not having the flags is a problem for US government contractors and employees because they’re required to segregate classified and non-classified information onto different machines. Accidentally getting classified material (and a leak does automatically cause declassification) on an unclassified, government-owned, machine can lead to jail in the worst case. In the best case it can lead to your hard drive being wiped and re-imaged, losing you any work that isn’t backed up. The latter happened to several folks I knew at government contractors after they clicked on links to news articles not realising that they included copies of the Snowden leaks.

                                                  1. 11

                                                    I think you mean that a leak does not automatically cause declassification.

                                                    1. 9

                                                      Accidentally getting classified material (and a leak does automatically cause declassification) on an unclassified, government-owned, machine can lead to jail in the worst case.

                                                      I believe you missed a load-bearing not there. A leak does not automatically cause declassification.

                                                      1. 5

                                                        I would differentiate between accidentally seeing what I am not supposed to see and (intentionally or not) breaking measures to see what I am not supposed to see.

                                                          Having your device wiped sounds unfortunate, but I don’t understand why solving that on the news aggregation site would be better than in some IT policy.

                                                        1. 8

                                                          I would differentiate between accidentally seeing what I am not supposed to see and (intentionally or not) breaking measures to see what I am not supposed to see.

                                                          You might differentiate, but David’s point is that the US government (and presumably others) does not. Unless you’re planning to change how they handle such instances, it seems appropriate to warn people early.

                                                        2. 5

                                                            Wait, I can go to jail because I clicked on the wrong link, and some document my government wanted to keep secret ends up in my browser cache? Like, really? What’s next, being arrested because my set top box recorded the news, which happened to contain footage from Collateral Murder?

                                                          Or is it specific to unclassified, government owned machines?

                                                          1. 13

                                                            If you hold a security clearance from the government, as part of the process whereby they clear you to handle classified information you agree that putting it onto systems which are not classified has certain consequences. Some of these consequences may include criminal punishment. I don’t think that is common for accidental spillage.

                                                            It will, at a minimum, involve an unpleasant conversation with your facility’s security officer.

                                                              I fully understand why some people appreciate warnings like this.

                                                            1. 2

                                                                I was not talking about accidental spillage. I was talking about storing, on a computer I’m using, information that was already spilled, and doing that from a publicly available source. Like clicking on a link from the front page of Hacker News and ending up downloading the image of a slide from some secret NSA surveillance project.

                                                                The way @david_chisnall was writing seemed to encompass that case. If it’s a genuine accidental spillage, of course I should be in trouble.

                                                              1. 4

                                                                  I was referring to getting classified data onto an unclassified machine, accidentally, by clicking a link from the front page of HN or some such, as “accidental spillage.” The fact that the data does already exist in a public channel doesn’t really matter. The cleared person putting it onto an unclassified machine is still party to spillage.

                                                                If you hold a security clearance from the government that marked the documents in the linked post, you received clear training on the possible consequences for this. If you don’t, as @pushcx mentioned, you don’t have anything to worry about from viewing them.

                                                                1. 2

                                                                    That’s a very strange definition of “spillage,” to be honest. One that we may suspect is tailored to facilitate prosecution if a document ends up where it should not. Training or not, the mistake is unavoidable. Literally, flat-out unavoidable.

                                                                  Here’s how it plays out:

                                                                  1. Research the Snowden leaks, because that’s your job or something.
                                                                  2. Stumble upon some well known aggregator such as HN or Reddit.
                                                                  3. See a link to a Guardian article or similar about Snowden leaks. Click on it.
                                                                    • Now your browser is downloading all assets from the page, including a very interesting, and very classified slide about one of NSA’s surveillance projects.
                                                                  4. Read the first few paragraphs, notice that it’s old stuff. Close the browser.
                                                                  5. Your boss now audits your PC, notices the slide in your browser cache, you’re now fired, arrested, perhaps even jailed.

                                                                    OK, OK, people with security clearance should never browse the web outside of private mode. The cache should be cleared after each session, cookies erased, the whole shebang. You’re still not out of the woods yet:

                                                                  1. Research the Snowden leaks.
                                                                  2. Read HN.
                                                                  3. Click on a link to a Guardian article.
                                                                    • Your browser downloads the classified slide, which does not fit in RAM, so it ends up written in a temporary cache.
                                                                  4. Ah, old stuff, close the page.
                                                                    5. Your boss now scans your hard drives, notices data blocks that match the slide, which weren’t quite erased from your disk drive. You’re screwed. Again.

                                                                  What silly precautions must people take to avoid being in trouble? Because if that’s the kind of risk involved, I don’t even want security clearance, to the point of being okay with losing a contract over this.

                                                                  1. 5

                                                                    The system is designed around the principle that everyone should know the least amount of classified info necessary, silo’ed as strictly as is feasible. If you have a US security clearance, you are part of this system and swear to keep it this way under risk of heavy penalties. The concept is that if you, say, find an open folder full of documents marked “secret” lying on a table in a coffee shop then you do not casually leaf through it to see what all the fuss is about, you close it immediately and take it to your security officer.

                                                                      How this translates to a computerized world, where it is possible to very easily copy stuff by accident, is imperfect to say the least. But if someone has a US security clearance they keep their lives much, much simpler by adhering to the strictest possible interpretation. The Gestapo won’t break down your door and drag you off with a bag over your head, but if someone asks “have you ever seen classified documents you shouldn’t have”, it’s a lot simpler to be able to honestly say “no” without qualifications.

                                                                    Because if that’s the kind of risk involved, I don’t even want security clearance, to the point of being okay with losing a contract over this.

                                                                    Correct, you probably don’t want it! That is the system working as intended.

                                                            2. 13

                                                              No. This conversation is about the U.S. government’s system for classified documents, which includes training on scenarios like this. For those of us who don’t hold a security clearance, there’s nothing to worry about, though there is the still-developing topic of law and journalism in their publishing; The Pentagon Papers are a good starting point, and continue with the Snowden disclosures.

                                                              1. 2

                                                                I do not intend to litigate anything. I am honestly confused about the scenario I describe. What is the purpose of taking such measures with respect to information that has effectively become publicly available?

                                                                1. 12

                                                                  The answers to why the system works that way are off-topic, a current hot-button topic for political argument, the subject of ongoing litigation, and are addressed in the links as well as many excellent government regulations, law articles, and books. I don’t think I understand the intended design and current state of the system well enough to provide a worthwhile answer.

                                                                  1. 5

                                                                    It’s silly, but individuals wisely refrain from throwing themselves in the wood chipper just for the sake of demonstrating that. ;P

                                                                    (I assume actual reasons revolve around having a simple bright-line rule.)

                                                      1. 3

                                                        It would be really interesting to learn what makes the author believe this set of relays is malicious. Though, of course, sharing this information would burn the technique…

                                                        1. 2

                                                          Definitely, like what does this mean:

                                                          and the fact that someone runs such a large network fraction of relays “doing things” that ordinary relays can not do (intentionally vague)

                                                          What kind of things can an ordinary relay “not do”? Are those things just not built into the typical implementations, or do the nodes maybe coordinate amongst themselves and act in a way that a single node wouldn’t?

                                                          1. 2

                                                            If you control every node in the onion chain, you can see the whole communication stream AND know which IP addresses are communicating with which services, and maybe fingerprint the browser using Tor based off the Tor config sent in (preferences as to routing) and the SSL negotiation traffic (it’s not a lot, but it’s not nothing) AND if there’s a self-signed cert at the service being accessed, you can also MITM the stream and access the cleartext.

                                                            At scale, that might yield some interesting information if you wanted to identify people with double lives that might be interesting (undercover spies, people worth blackmailing, clues to the future discovered via traffic analysis), especially if you use other surveillance techniques like buying DNS lookup data or operating major network infrastructure.

                                                            Edit: On reflection this would be feasible and not especially expensive for a consortium of international law enforcement agencies looking to see who’s using illegal marketplaces and distributing CSAM, with a smattering of maybe catching some terrorist activity.

                                                          2. 1

                                                            Well once you’ve identified a large group, if they keep coming back when removed from the directory, it’s not someone just doing it for fun.

                                                            The question is how do they identify the group is under common control.

                                                          1. 6

                                                            The MSRC article mentioned in the comments is also extremely interesting: Building Faster AMD64 memset Routines. In particular, the post does a great job of explaining the performance results despite how opaque modern CPU features are (cache lines, branch prediction, speculative execution, etc.).

                                                            Sidebar: I love the idea of optimizing at the scale of single instructions and yet having an effect on the total performance of the system.

                                                            1. 1

                                                              The automemcpy paper at ISMM this year was also interesting (and the code from it is now being merged into LLVM’s libc). The most surprising thing to me from both their work and Joe’s experiments on Windows was that 0 is one of the most common sizes for memcpy.

                                                            1. 15

                                                              Note: this article contains inline images of marked classified documents.

                                                              This comment is not intended to spark a discussion; simply put, some people may want to avoid the article for this reason.

                                                              1. 9

                                                                Those images are the same as those found on this webpage: https://nsa.gov1.info/dni/nsa-ant-catalog/usb/index.html which is the first hit for a web search.

                                                                  There is a wikipedia page on them https://en.wikipedia.org/wiki/NSA_ANT_catalog which says they were leaked in 2013 by Der Spiegel.

                                                                I can see that NDA being applied to “I took a peek at my bosses desk” or “I went on the dark web and paid 10 bitcoins for this information”. I can not see that being applied to “I did a web search and found it on wikipedia.”

                                                                  And in any case, I don’t know if those are authentic or made up by a teenager hoping to get money from Der Spiegel.

                                                                1. 5

                                                                  Out of curiosity… why?

                                                                  1. 12

                                                                      IANAL etc etc… My understanding is something along the lines of… those holding US clearances sign an NDA agreeing not to access classified documents that they are not authorized to access or do not need to access. I understand these people may want to avoid marked classified documents leaked online, for example because they may not have the “need to know”.

                                                                    I’m not here to dictate or judge, just to note for those who care about this material.

                                                                    1. 4

                                                                      Correct! It’s generally the same reason why prominent emulator developers won’t look at or access leaked documents/source code. It’s a whole can of beans that nobody should ever put themselves near.

                                                                  2. 1

                                                                    Good point, it would be polite to put up a “spoiler warning” if you’re going to do this. And there are plenty of publicly available examples they could have used to make the same point. Ah well.

                                                                  1. 5

                                                                    I had just read Dan Luu’s post Some reasons to work on productivity and velocity where he makes some similar arguments. I’ve enjoyed both these discussions.

                                                                    1. 2

                                                                      Now imagine the system can toggle which peer is the master node, thus transferring control flow over network, even right in the middle of a loop or deep closure. Photon achieves this.

                                                                        I’m interested to see how the authors approach security and trust with this model. It’s neat that the system can pass around execution environments from client to server to client again; what happens if the client is untrusted? Perhaps there is a virtual evaluator or sandbox that restricts access to resources on the server.

                                                                      1. 2

                                                                          I have the same question. One might use micro apps inside an SPA with different ACLs for each backend connection. Or more fine-grained, per-attribute ACLs, but how that would work is unknown to me (and I am very interested to know!)

                                                                      1. 5

                                                                        I’ve enjoyed the series of articles and appreciate that when the author received feedback, they incorporated it and used that as fodder for subsequent blog posts. As a reader, it’s felt as if I were along for the ride.

                                                                        1. 5

                                                                              The nom error handling section of this post is the best I’ve stumbled across so far. Some concrete examples of getting the span location and human-readable messages. Definitely will be using these tips in my projects.

                                                                          1. 8

                                                                            Am I alone in feeling frustrated that botnets are ubiquitous in the modern Internet but very little seems to be being done to combat them? Are botnet takedowns not well publicised, or is it simply too much effort for it to be economical? Perhaps someone with experience in the area can enlighten me.

                                                                            1. 14

                                                                                  Author here: you are not alone. This is the first time I have had to actually do anything, but any server is continually being bombarded with obviously malicious traffic. In this case, I am not sure what the botnet is even trying to achieve, but CloudFlare tells me that they are still out there, averaging about 1000 hits per hour.

                                                                              I sometimes see hand-wringing articles on why the hobby website seems to be dying out. Constant maintenance in the face of persistent attacks is one reason.

                                                                              1. 4

                                                                                Big mood. My website (christine.website) gets like 150 GB of traffic per month and Cloudflare only really makes me send out about 50 GB of that. Most of it is poorly configured RSS readers and scraper bots that don’t respect robots.txt. Huge pain. My gitea instance had to have Russia and China blocked at the Cloudflare level to avoid it pegging a core constantly. It constantly oomed my Kubernetes cluster back when I hosted things on it.

                                                                                1. 1

                                                                                  My gitea instance had to have Russia and China blocked at the Cloudflare

                                                                                  Life already sucks for people stuck in Russia and China, and then people in the West ban them from their websites. From my experience, botnets are more or less evenly distributed in the big picture. I’d prefer people to not discriminate against millions of legitimate users just because at the moment the botnet distribution is (or seems) skewed.

                                                                                      That’s especially bad for people in China who cannot set up a VPN due to the “great firewall”.

                                                                                  In our project, we have a number of contributors from China. I can’t imagine just telling them: “your country is so full of botnets that it makes your participation not worth it, go f*ck yourself”.

                                                                                  1. 2

                                                                                    Believe me, I didn’t do this as a first measure. I blocked user agent after user agent, throttled things with nginx rules but they kept scraping every single visible link on my git server. I just gave up and blocked the whole country until I could figure out a better way to do it. Maybe now that it’s been blocked for long enough the scraper bots will have given up trying to index my git server and I can re-enable it to Russia/China. The country of the IP address was the only common factor.

                                                                                2. 1

                                                                                      Also don’t forget that Cloudflare protection for your website is free; try securing your Minecraft/VoIP/other realtime stuff/non-HTTP-speaking server without investing money.

                                                                                3. 3

                                                                                      What’s being done to combat them is moving more of the Internet under the control of centralized corporations like CloudFlare. There is understandable discontent with that, but it is also not surprising given our political-economic trajectory.

                                                                                      Solving the problem in a satisfying or elegant way would not allow companies like CloudFlare to skim money off the top. And it’s not just CloudFlare: Big Tech in general benefits from the lack of a standardized distributed solution.

                                                                                  1. 2

                                                                                    Takedowns tend to be publicized pretty well when they happen, so that probably supports your point that they don’t happen often enough. They are difficult to do, both technically and legally. There’s an understandably high bar for exploiting software running on computers within your borders, for example.

                                                                                    Of course, there’s also a many billion dollar AV industry that should prevent such malware in theory. Or network appliances that again help in theory. But these don’t seem to protect the little people all that well.

                                                                                    1. 9

                                                                                      The problem is humans.

                                                                                      It would not be difficult for CloudFlare, Akamai, Fastly, and all the various honeypots in the world to round up the IPs that they have, say, a 50+% confidence are involved in a botnet and send a report to the WHOIS-listed owner of that netblock.

                                                                                      Then what?

                                                                                      Some networks are well-run and will respond quickly. I think this is a minority.

                                                                                      Some networks won’t have anyone reading that email. Or they don’t read the language that it was sent in, and it looks just like more spam.

                                                                                      Some networks don’t have anyone who is willing to take the responsibility for disconnecting/deauthorizing a client – might not even want to warn the client.

                                                                                      It’s the spam problem all over again, but on a much larger scale.

                                                                                      1. 4

                                                                                        Some networks don’t have anyone who is willing to take the responsibility for disconnecting/deauthorizing a client – might not even want to warn the client

                                                                                            But apparently also no one wants to just block them for good until they fix their things. I mean, this is how the big four are doing it with email. They even go so far as to just blackhole emails from IPs they don’t like. Try getting removed from Microsoft’s suspicious list, fueled by AI; you won’t get far. There is also a law in Germany that makes you personally liable for trash that comes from your home network; they may even disconnect your line.