1. 39

  2. 14

    This is actually a really good article. To summarize one of its key points: your intuition for how signal congestion works under wifi is wrong. A stronger signal won’t “drown out” noise from other networks, since wifi enforces collision avoidance (i.e. “waiting your turn to speak”) even between different networks. The better approach is a mesh of smaller, lower-powered networks, and keeping as many devices wired as you can.
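
    To make the “waiting your turn” point concrete, here is a toy Python sketch (station names and numbers are made up) of why raising transmit power doesn’t buy you more airtime under CSMA/CA – power never even appears in the contention logic:

    ```python
    import random

    def simulate(stations, slots=100_000, cw=15, frame_len=10):
        """Toy CSMA/CA: every station, regardless of which network it belongs to
        (or how loud its radio is), counts down a random backoff and defers while
        the channel is busy. Collisions are ignored to keep the sketch tiny."""
        backoff = {s: random.randint(0, cw) for s in stations}
        airtime = {s: 0 for s in stations}
        t = 0
        while t < slots:
            ready = [s for s, b in backoff.items() if b == 0]
            if ready:
                winner = random.choice(ready)       # whoever's counter hit zero transmits
                airtime[winner] += frame_len
                t += frame_len                      # channel busy: everyone else waits
                backoff[winner] = random.randint(0, cw)
            else:
                for s in backoff:                   # idle slot: all counters tick down
                    backoff[s] -= 1
                t += 1
        return airtime

    # Two separate networks sharing one channel: airtime splits roughly evenly,
    # and no "tx_power" knob exists anywhere above to change that.
    print(simulate(["net_A_ap", "net_B_ap", "net_B_client"]))
    ```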

    1. 6

      Yes, this article is quite good. But it (necessarily) omits some details, such as the effects of dynamic rate adaptation and the differences in how far different data rates propagate. This is very pronounced in the 2.4 GHz band: low data rate frames use a much less “fragile” modulation than fast data frames do. So you may see beacons (sent at the lowest rate) from other networks, but issues only arise if you are close enough to also see many of their data frames (unless the other networks protect all frames with RTS, which “reserves” the medium for a while; that’s yet another detail the article skipped).

      All those details fill entire books, which is a core problem of wifi: It is very complex and can’t easily be described in marketing terms without telling lies.
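
      As a back-of-the-envelope illustration of the propagation point above (the SNR thresholds are made up but roughly plausible, and the path-loss model is a crude textbook one):

      ```python
      import math

      REQUIRED_SNR_DB = {                 # rough decoding thresholds, 2.4 GHz band
          "1 Mb/s beacon (DSSS)":    4,
          "6 Mb/s data (BPSK OFDM)": 8,
          "54 Mb/s data (64-QAM)":   25,
      }

      def max_range_m(snr_db, tx_power_dbm=20, noise_dbm=-90, freq_mhz=2437, exponent=3.0):
          """Invert a log-distance path-loss model to find where SNR drops to the
          decoding threshold. Purely illustrative, not a planning tool."""
          allowed_loss = tx_power_dbm - noise_dbm - snr_db
          loss_at_1m = 20 * math.log10(freq_mhz) - 27.55   # free-space loss at 1 m
          return 10 ** ((allowed_loss - loss_at_1m) / (10 * exponent))

      for rate, snr in REQUIRED_SNR_DB.items():
          print(f"{rate:26s} decodable out to ~{max_range_m(snr):4.0f} m")
      # Beacons at 1 Mb/s stay decodable far beyond the range where 54 Mb/s frames do.
      ```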

      1. 4

        It is very complex and can’t easily be described in marketing terms without telling lies.

        Definitely, but one of the problems is that everyone lists the unicorn maximum speeds. It would be better to have speeds measured in a couple of realistic, controlled, standardized setups. Of course, it is hard to standardize this across the whole industry when there is no urge to do so.

        1. 12

          I was a QA manager for one of the enterprise wireless startups back in 2002. I can tell you that there certainly was an urge to do that across the industry at the time. It didn’t happen because, as the author of the article said, it is extremely difficult to come up with a wifi performance testing methodology that is realistic, standardized, and controlled all at the same time. Forget about the cost or the political reasons; it is technically difficult to do.

          Here’s how we tried to cover our realistic, controlled, and standardized testing:

          Realistic – We held a monthly lease on a large, unoccupied office space and installed our system in it. We also ‘ate our own dogfood’ at the office.

          Controlled – We ran 802.11 over 802.3 for repeatable protocol and scaling tests. (Yes, we ran the WiFi protocol over the Ethernet PHY, and a lot of our wireless testing was done on wired Ethernet.) We had an anechoic chamber for doing PHY-layer tests.

          Standardized – Once the test vendors started catching up, we had access points stuffed in little isolation chambers, antennas removed, radios cabled directly to 802.11 test gear provided by the likes of Spirent and Ixia.

          As a test guy, working on wifi was super fun. It is a complex, bodged-together protocol, and it has such a wide failure curve to play around on and explore… it’s not like any of the wired networking protocols that came before it. And that’s why the marketing around it is so jacked up. Marketing needs to be simple, and wifi simply isn’t.

          1. 1

            it is extremely difficult to come up with a <insert your technology here> performance testing methodology that is realistic, standardized, and controlled all at the same time.

            Measuring things in the software world always seems to be insanely hard, and even more so when hardware is involved. Sure, it is always easy to find something to measure, but when you think a bit further about how useful those metrics actually are… they usually aren’t.

          2. 2

            Yes, as soon as one vendor starts providing inflated numbers, the rest follow :(

            Boiling product performance down to a single number is the best possible situation as far as the marketing department is concerned. It just sucks for everyone else.

      2. 4

        Making wireless networks not suck kinda depends on managing airtime properly – this is why wifi sucks, and why LTE has thousands of pages of specs about a whole menagerie of control channels with heinous acronyms (ones even the spec writers sometimes typo) that allocate who transmits what, when, and at what frequency.
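
        For contrast, here is a toy sketch (made-up rates, grossly simplified) of what “allocating who transmits what and when” looks like when a base station schedules centrally instead of letting everyone contend:

        ```python
        from collections import defaultdict

        def proportional_fair(ues, ttis=1000, rbs_per_tti=6):
            """Tiny proportional-fair downlink scheduler: each resource block in each
            interval goes to the UE with the best current-rate / average-throughput
            ratio. Nothing here is contended for; the scheduler simply decides."""
            avg = {ue: 1e-6 for ue in ues}           # smoothed served rate per UE
            grants = defaultdict(int)
            for _ in range(ttis):
                for _ in range(rbs_per_tti):
                    winner = max(ues, key=lambda u: ues[u] / avg[u])
                    grants[winner] += 1
                    for u in ues:                    # update the moving averages
                        served = ues[u] if u == winner else 0.0
                        avg[u] = 0.99 * avg[u] + 0.01 * served
            return dict(grants)

        # instantaneous achievable rates per UE (hypothetical, Mb/s-ish units)
        print(proportional_fair({"ue_near_cell": 50.0, "ue_mid_cell": 20.0, "ue_cell_edge": 5.0}))
        ```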

        1. 4

          Given what you’ve said, it’s surprising that LTE works in practice, because I’d expect implementations to be buggy and screw everything up if the standard is hard to follow. Or are they unusually well-tested in practice or something? :)

          1. 8

            The standard is hard to follow only in that there are plenty of moving parts and many different control channels, because shared resources – such as airtime and radio bandwidth (and backhaul/network bandwidth) – need to be allocated and shared precisely among many different devices.

            If you want to avoid getting buggy implementations you can make formal models for all the layers – the RF layers, the bits that get modulated onto RF, and the protocols that get run over those bits. Formal models let you write software that can output valid protocol transcripts (that you can send to a radio) or validate a transcript (that you received from a radio) – all to make sure that every device is sending/receiving the right bits modulated the right way at the right times/frequencies.

            Once you somehow obtain software that implements a formal protocol model, you (or someone who makes or certifies LTE equipment) can verify/fuzz code that runs on UEs and eNBs – both when it’s just harmless source code and also (if you have some SDRs) when it’s transmitting real RF in real devices in some test lab. So yes, implementations are indeed well-tested (and are in fact required to be tested before they can be sold and allowed onto real LTE networks).
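
            To show the shape of the “validate a transcript” idea (the states and message names below are a hypothetical, drastically shrunk attach flow, not the real RRC/NAS machine), a formal model boils down to a transition table plus a replay loop:

            ```python
            # Toy transition table: (current state, observed message) -> next state.
            TRANSITIONS = {
                ("IDLE",       "rrc_connection_request"): "CONNECTING",
                ("CONNECTING", "rrc_connection_setup"):   "CONNECTED",
                ("CONNECTED",  "attach_accept"):          "ATTACHED",
                ("ATTACHED",   "rrc_connection_release"): "IDLE",
            }

            def validate(transcript, state="IDLE"):
                """Replay a captured message sequence, rejecting it at the first
                step the model does not allow."""
                for i, msg in enumerate(transcript):
                    nxt = TRANSITIONS.get((state, msg))
                    if nxt is None:
                        return False, f"step {i}: '{msg}' not allowed in state {state}"
                    state = nxt
                return True, f"ok, final state {state}"

            print(validate(["rrc_connection_request", "rrc_connection_setup", "attach_accept"]))
            print(validate(["rrc_connection_request", "attach_accept"]))  # out of order -> rejected
            ```

            A real conformance tool does the same thing with the full spec encoded as the model, and can also run the table “forwards” to generate valid transcripts for fuzzing the other side.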