Buy a small MikroTik (like the hAP lite) and configure it to put the traffic into a 100k queue.
You can also add some more specific rules, like putting video packets in a low-priority queue.
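For reference, this is a one-liner in RouterOS; a minimal sketch, assuming a simple queue on a guest subnet (the subnet and queue name are placeholders, and `100k/100k` is upload/download):

```
# RouterOS: cap everything from the guest subnet to 100kbit/s each way
/queue simple add name=guest-cap target=192.168.88.0/24 max-limit=100k/100k
```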
Artificial Intelligence: A Modern Approach
Bitcoin, Ethereum and Monero whitepapers.
The company I work for has a long history of doing unpaid “as much time as you can give” starter projects, in which you work closely with a member of the team on a task you’d actually be doing if you were hired. When I did mine, 3 years ago, I gave up two days, which is basically foolish, but showed my skills, and learned a lot more about the team, company and other things than I would have otherwise. This was valuable signal for both parties.
My team decided to drop the starter project when we started to hire again, and settled on a coding task that mimics the system the candidate would be supporting at a much smaller scale, but large enough that brute force solutions don’t work well. We give the candidate 4 hours (it took me 15 minutes, and some colleagues about 1.5 hours) to complete this (we haven’t had a single person refuse), and basically make a decision after this. In addition, the problem spec includes the discussion questions that we’ll chat about during the technical debriefing of the coding task.
We’ve had about 8 candidates go through this, and not a single one has complained about the time commitment, or said the problem was too tricky, or called it anything but fair. However, we’ve had a success rate of 2 out of 8. It doesn’t test data structures or algorithms, and there are very liberal bounds on acceptable runtime. It’s fundamentally: sum up fields in a file, grouping by ‘foo’ and ‘bar’. The part that has tripped people up is almost always the fact that you can’t store everything in memory. And the guy with the 64GB machine didn’t understand how to use Python’s dictionary type…
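The intended shape of a solution is a single streaming pass where only the running per-group totals live in memory, never the rows themselves. A sketch (column names and CSV format are made up for illustration; the real task’s format differs):

```python
import csv
from collections import defaultdict

def sum_by_group(path, group_cols, value_col):
    """Stream the file row by row; only the running per-group totals
    (not the rows themselves) are ever held in memory."""
    totals = defaultdict(float)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            key = tuple(row[c] for c in group_cols)
            totals[key] += float(row[value_col])
    return dict(totals)
```

The dict holds one entry per distinct (foo, bar) pair, so memory scales with the number of groups, not the file size.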
I do wish we paid the person for their time, but I’ve been quite happy with the results of this recent experiment. Lots of candidates that looked good on paper that were just… not… very great.
Working closely with the new team is also great for sizing up the employer, because you can get a feel for the team’s culture and philosophy.
At one of my previous jobs there were problems (narcissism, lack of competence, micromanagement, lack of empathy, etc.) that you could easily have spotted by working closely with the developers.
I’ll be changing jobs in a month, so now I’m studying to be ready for the new one.
– getting pmacct <> rabbitmq <> influxdb working, then putting a nice frontend on top of it
– auditing a Palo Alto install the MSP royally boned on the migration. I know BGP is somewhat obtuse on PANOS but…no excuse. in pre-sales you, unprompted, mentioned having one of four experts qualified to configure whatever is after the top of the line 5000 series. c'mon!
– quickly utilizing the last six days of my Azure $200/30 day credit to boot OpenBSD, get IPsec tunnels with BFD running, and do some iperf tests between regions for a PoC
– set up graylog to ingest wireless controller and firewall logs and make nice dashboards for front line support network troubleshooting
– continue building the class outline and coursework for a “python for network engineers” course (a working title, as it’s already in heavy use by Kirk Byers)
– lots of unikernel stuff. Kafka as a unikernel, pmacct as a unikernel. getting rumpkernels to boot with vmm on OpenBSD. getting ExaBGP into a unikernel, then doing ‘stress’ testing against OpenBGPd
– osm + packet clearing house IXP list + peeringDB + d3js = transform spreadsheet currently sitting at http://peering.exposed/ (after a particularly whiskey-infused discussion @ RIPE73)
– play with a couple of network verification tools I’ve been reading about: Propane and NetKAT
Is there some particular reason you’re going to rabbitmq first instead of tossing to influxdb via statsd or some such first? You just want to persist bits in flight?
mostly because pmacct speaks amqp natively, and slightly because I do not wish to run node.js in this instance.
What are you using for ingesting logs from rabbitmq to InfluxDB?
I’m looking forward to Paolo releasing Redis support.
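For the rabbitmq → InfluxDB leg, one lightweight approach is a small consumer that rewrites each pmacct JSON record into InfluxDB line protocol. A stdlib-only sketch of the transform step (the field names `ip_src`/`ip_dst`/`bytes`/`packets` are common pmacct defaults, but they depend on your configured aggregation primitives):

```python
import json

def pmacct_to_line(msg: bytes, measurement: str = "flows") -> str:
    # pmacct's amqp plugin emits one JSON object per flow record;
    # the keys below are assumed defaults and may differ per config.
    rec = json.loads(msg)
    tags = f"ip_src={rec['ip_src']},ip_dst={rec['ip_dst']}"
    fields = f"bytes={rec['bytes']}i,packets={rec['packets']}i"
    return f"{measurement},{tags} {fields}"
```

The `i` suffix marks integer fields in line protocol; feed the resulting strings to InfluxDB’s write endpoint in batches.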
@work: working with Python and RADIUS attributes for Change of Authorization
@home: writing Python software to automatically update BGP filters on an IXP peering router.
A friend of mine commented that DDoS attacks are typically a flood of UDP packets with forged IP headers. Why don’t ISPs simply block all packets with a forged origin? Since ISPs are the ones allocating addresses to end-users in the first place, detecting IP forgery would be dead simple.
This solution sounds too easy. Are there any problems that would arise from dropping packets with forged headers?
ISP engineer here.
The majority of DDoS attacks are made with UDP, but they’re not easy at all to detect, because the spoofed part is in the protocol payload (NTP, DNS). It would require very expensive hardware to inspect the application layer.
Furthermore, by the time a DDoS arrives at the ISP’s network it’s too late, because the upstream links or routers may already be saturated.
The closer to the source you block, the more effective it is.
Even blackholing the destination isn’t all that useful.
You can use some BGP tricks (like smart use of communities), but fighting DDoS is hard work.
I think ChadSki here is asking not how to detect and drop DDoS traffic at the receiving end, but why this problem isn’t solved at the source by the network providers, who know which IP addresses they have assigned, by filtering out the (egress) traffic leaving their network that claims a source IP from outside those ranges. If the sender is unable to get spoofed-source packets beyond their network provider’s borders, it kills the DoS at the source.
The answer is: they could. This is covered in BCP38, and when I used to follow the NANOG mailing list it had plenty of grumblings about the lack of uptake.
Typically this is implemented using reverse path filtering: before a router forwards the traffic, it looks at its own routing table to see whether, to send traffic back to the (maybe spoofed) ‘source’, it would send it over the network interface the packet arrived on. If it matches, the packet is OK to forward; if not, it is dropped.
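That reverse-path check can be sketched in a few lines. A toy model, where `routes` is a flat list of (prefix, interface) pairs standing in for the router’s actual FIB:

```python
import ipaddress

def urpf_permits(routes, src_ip, in_iface):
    """Strict uRPF: forward a packet only if the best (longest-prefix)
    route back to its source address points out the interface the
    packet arrived on."""
    src = ipaddress.ip_address(src_ip)
    best = None
    for prefix, iface in routes:
        net = ipaddress.ip_network(prefix)
        if src in net and (best is None or net.prefixlen > best[0].prefixlen):
            best = (net, iface)
    return best is not None and best[1] == in_iface

# toy routing table: a customer subnet on eth0, default route via the uplink
routes = [("203.0.113.0/24", "eth0"), ("0.0.0.0/0", "uplink")]
```

With this table, a packet arriving on eth0 claiming a source outside 203.0.113.0/24 fails the check (the route back points at the uplink) and gets dropped.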
This is something most ISPs (users behind xDSL, leased lines, fibre, etc.) and hosting providers (co-location, VPS, cloud, etc.) can do. It is functionality that has been baked into software and hardware routers for over a decade.
There are some reasons why this may not be straightforward or possible for some ISPs, typically if they also offer transit, but this is now pushing me past my rusty memory as an “ex-ISP network administrator” and I would need to do some catch-up reading before I declare all ISPs lazy/stupid/… :)
OpenBSD’s pf has an antispoof rule for just this. Maybe someone with more knowledge of it can comment on its effectiveness, but yeah, it seems plausible.
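For anyone curious, the pf rule is a one-liner; at ruleset load time `antispoof` expands into block rules for the networks attached to the interface (the interface name here is a placeholder):

```
# /etc/pf.conf -- "antispoof quick for em0" expands to rules like:
#   block in quick on ! em0 inet from <em0's network> to any
# i.e. drop traffic that claims em0's network as its source but
# arrives on some other interface.
antispoof quick for em0
```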
But I also highly doubt that spoofing source IPs is the leading DDoS technique. If I can purchase time on botnets across 20,000 different nets, I can simply use the source IP of each bot and get a ton of legitimate-looking, randomly distributed sources, which are not so easy to deal with without disruption. If I can get 200,000 different net sources, then any operator trying to block them all has a high probability of blocking legitimate customer traffic, which is denial of service in and of itself.
I am not trying to defend Cloudflare here, but its CAPTCHAs and Kill-Bots seem like really good strategies for dealing with this, unfortunately.
As far as I know, UDP-based reflection/amplification attacks with spoofed sources are still a big problem.
Some do, but there’s no incentive. Outbound traffic is rarely an issue.
For big carriers, I imagine you are often dealing with lots of transit and peering, so you may not always know the full sources of an eventually reachable AS being routed through you.
But for last mile connectivity networks, you would certainly think it would make sense. I wonder if it comes down to the fact that unless most people do it, it probably doesn’t help much…and until most people do it, it probably doesn’t make it worth the effort to do it and maintain it.
Big carriers (Tier 1) have complex network policies, but they must know how AS traffic is flowing through the network. There are BGP filters on AS paths on the incoming ports just to prevent DDoS.
A last-mile connectivity network has only a few BGP peers (providing a default route), so it’s easy to control traffic flows.
If you are interested in products that can fight DDoS, look at Arbor Networks or Radware.
There were also 100Gbit NIDS built a while back to enable such applications. So, it could be done. Priorities and pricing are key issues as usual.
Priorities and pricing are key issues as usual
You can’t expect to pay €20/month for a 20Mb ADSL line and also get DDoS protection.
It’s like buying a Fiat Panda and expecting it to have a Ferrari engine.
What are you talking about? I’m clearly talking about the Tier 1-3 backbones, who could actually afford or use a 100Gbps appliance. A customer with ADSL is screwed the second the traffic hits their line. Saturation attacks should be handled upstream of them, where the pipes are big and the pockets are deep.