Looks real neat, and also a great tool for distributing commands to botnets.
Why is this any better than another, more typical CnC server for a botnet?
If you have a known set of commands (which seems like a reasonable assumption) AnNotify might be an even better CnC system https://discovery.ucl.ac.uk/id/eprint/1574313/1/main.pdf
I hope botnet operators are taking notes.
This paper is fun! Have you submitted it to lobste.rs?
Sure, just did, with a TL;DR in the text if anyone wants a quick summary :) https://lobste.rs/s/meiruh/annotify_private_notification_service
At some level you can’t really replace IRC as the ultimate botnet CnC server.
I dunno about that. I can’t even connect to IRC from my mobile hotspot without a tunnel. The ubiquity of HTTPS seems ideal to me. It’s so normal, and ads, trackers, etc. so normalized, that I doubt most people seeing unexplained HTTP traffic would think much of it. But if I had a long-running connection to IRC and noticed it… that’s probably going to raise some eyebrows, probably even with slightly less savvy folk.
Might not be better, but it’s simple to use and someone else is already running it for you. :D Plus it’s just HTTP, which usually isn’t going to get blocked by firewalls.
It seems dangerous to not control your CnC server. A botnet that only attempts to communicate with something like this is easily thwarted by taking it offline or moving it, making the network useless. A botnet with a network of redundant CnC servers, operated in the guise of regular ol’ websites with TLS (so, not snoopable HTTP): that’s botnet engineering.
For those interested in the code: https://github.com/patchbay-pub/patchbay-simple-server/blob/master/main.go
Wow, this is really short!
I’m not a Go expert, but wouldn’t storing all channels in a map prevent them from being garbage-collected (as the channels stay strongly reachable) even after they’re no longer used? (That is, one could mount a DoS attack by enumerating a lot of URLs on this service.)
Why should they be garbage collected? If I create a queue with his service, I’d expect the queue not to be suddenly deleted.
He could have some logic to delete unused queues after a while though, especially if his project becomes widely used (or abused).
You answered your own question in the next paragraph:
As far as I can see, the queues are never deleted, so if I use a name it’s permanently bound and the memory consumption never decreases. The only way to prune unused channels is restarting the process.
At least that’s what I think; I’m no Go programmer.
Yes, I noticed that too (also not a Go dev). I think that can be solved by just deleting the channel from the map at the end of the handler. That also fixes PUT and DELETE requests filling up the map with channels that will never be handled.
Good points!
Since this isn’t distributed at all, I wonder how many concurrent pending requests this setup can handle. Both in regards to number of open connections, and less so in regards to memory usage on the server.
I guess http.ListenAndServe just invokes the passed-in handler in a new goroutine for each connection. I could be wrong; I haven’t written a single line of Go before…
you guessed correctly
Benchmarking time! I imagine it’s a goroutine per request in the handler, so RAM would be the upper limit… Throughput across a channel is almost definitely not the limiting factor, but I’d be curious to see the benchmarks, if we could generate them.
Note: This is a new thing, not the well-established distributed message client named ‘patchbay’ which connects to ‘pubs’.
Naming things is hard.
Convenience link for those who might be curious about the Scuttlebutt Patchbay: https://github.com/ssbc/patchbay
Yeah, I came here to learn about new functionality of SSB Patchbay, especially because of the .pub TLD.
That makes two of us.