1. 4

    We are: I’m co-founder of Ad Hoc. We came out of the HealthCare.gov rescue effort. We saw up close that if we’re going to make government work for people, it needs more folks like us building things in government on behalf of its users. We’re making better government digital services, such as Vets.gov, and we’re still improving HealthCare.gov. We’re remote-first, and we have open positions in engineering, product, user research, and management. Message me if you’re curious.

    1. 1

      Why is there a bandwidth limit on the outq? That shouldn’t be necessary. Maybe it’s just the implementation in pf, but it’s totally not required in FreeBSD’s IPFW.

      Point is that for incoming traffic you cannot control the flow of packets, so you have to fake a lower-bandwidth link by artificially dropping packets to slow down the sender. For outbound traffic you have full control of the sending rate and the ability to detect congestion early, so limiting your max outbound bandwidth should not be required.

      1. 1

        Your home router doesn’t know the uplink bandwidth of your cable modem connection to your ISP. So you have to dial it in for the FQ-CoDel algorithm to know how to achieve the right send rate to flush the buffers quickly and fairly enough.

        1. 1

          Your home router doesn’t know the uplink bandwidth of your cable modem connection to your ISP

          That doesn’t matter. The firewall has the ability to detect congestion of the outbound traffic and apply queueing/shaping to immediately control the sending rate and prevent buffer bloat. It shouldn’t need a bandwidth limitation. That’s only required for inbound traffic where you cannot control the sending rate, so you fake a smaller pipe with reasonable overhead (~10%) to drop packets early to slow down the sender and prevent severe congestion/buffer bloat.

          All I can tell you is that I am not restricting any bandwidth for outbound with FQ-CoDel via DummyNet+IPFW and I get passing test results every time. I only have to restrict on the incoming. So something is different between your pf implementation and my DummyNet implementation. Does the OpenBSD pf implementation include ECN (Explicit Congestion Notification)?

          ipfw pipe 1 config delay 0                    # outbound pipe: no bandwidth cap
          ipfw pipe 2 config bw 220Mbit/s delay 0       # inbound pipe: capped below the link rate
          ipfw sched 1 config pipe 1 type fq_codel      # attach an FQ-CoDel scheduler to each pipe
          ipfw sched 2 config pipe 2 type fq_codel
          ipfw queue 1 config sched 1                   # queues feed the schedulers
          ipfw queue 2 config sched 2
          $cmd 00100 queue 1 ip from any to any out via $pif   # outbound traffic -> queue 1
          $cmd 00101 queue 2 ip from any to any in via $pif    # inbound traffic -> queue 2
          

          Here is my test result: http://www.dslreports.com/speedtest/35535303

      1. 11

        I find it a little disappointing that most of the reactions here are wrapped around the axle on the particulars of 'true', rather than the (imo) more interesting examination of how complexity creeps into our software design and what that means.

        Also, I wonder if people think that they might have something to learn or gain from someone who has a lot of experience in our field and has created successful projects! (Note I’m not making an argument from authority – I’m not saying he’s correct or not, just that we might read it charitably and reflect that he might have something valuable to say before reacting strongly!)

        1. 13

          If he had something valuable to say, one would hope that his experience would have led him to offer an example of it. Instead we got this, which is essentially “old man yells at cloud.”

        1. 2

          Maybe a dumb question, but in semver what is the point of the third digit? A change is either backwards compatible or it is not. To me that means only the first two digits do anything useful? What am I missing?

          It seems like the openbsd libc is versioned as major.minor for the same reason.

          1. 9

            Minor version is backwards compatible. Patch level is both forwards and backwards compatible.
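            Those two rules can be sketched as a toy dependency check (hypothetical helper names; not how any particular package manager actually resolves versions):

```python
def parse(version):
    """Split 'MAJOR.MINOR.PATCH' into a tuple of ints."""
    major, minor, patch = map(int, version.split("."))
    return major, minor, patch

def satisfies(installed, required):
    """Majors must match; the installed minor may only be equal or newer;
    the patch level is ignored, since patch releases are interchangeable."""
    i, r = parse(installed), parse(required)
    return i[0] == r[0] and i[1] >= r[1]

def interchangeable(a, b):
    """Versions differing only in patch level can replace each other."""
    return parse(a)[:2] == parse(b)[:2]
```

            Under these rules, 1.4.2 satisfies a requirement for 1.3.0, but 2.0.0 does not, and 1.3.0 and 1.3.7 are interchangeable.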

            1. 2

              Thanks! I somehow didn’t know this for years until I wrote a blog post airing my ignorance.

            2. 1

              “PATCH version when you make backwards-compatible bug fixes.” See: https://semver.org

              1. 1

                I still don’t understand what the purpose of the PATCH version is? If minor versions are backwards compatible, what is the point of adding a third version number?

                1. 3

                  They want a difference between new functionality (that doesn’t break anything) and a bug fix.

                  I.e. if it was only X.Y, then when you add a new function but don’t break anything, do you change Y or do you change X? If you change X, then you are saying you broke stuff, so clearly changing X for a new feature is a bad idea. So you change Y, but if you look at just the Y change, you don’t know whether it was a bug fix or some new function/feature they added. You have to go read the changelog/release notes, etc. to find out.

                  With the 3 levels, you know if a new feature was added or if it was only a bug fix.

                  Arguably just X.Y is enough. But the semver people clearly wanted that differentiation: they wanted to be able to, by looking only at the version number, know if there was a new feature added or not.

                  1. 1

                    To show that there was any change at all.

                    Imagine you don’t use sha1s or git; this would show that there was a new release.

                    1. 1

                      But why can’t you just increment the minor version in that case? A bug fix is also backwards compatible.

                      1. 5

                        Imagine you have authored a library, and have released two versions of it, 1.2.0 and 1.3.0. You find out there’s a security vulnerability. What do you do?

                        You could release 1.4.0 to fix it. But, maybe you haven’t finished what you planned to be in 1.4.0 yet. Maybe that’s acceptable, maybe not.

                        Some users using 1.2.0 may want the security fix, but also do not want to upgrade to 1.3.0 yet for various reasons. Maybe they only upgrade so often. Maybe they have another library that requires 1.2.0 explicitly, through poor constraints or for some other reason.

                        In this scenario, releasing a 1.2.1 and a 1.3.1, containing the fixes for each release, is an option.
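                        As a toy illustration of that backporting scheme (a hypothetical function, not any real release tool), computing the patch release to cut for each affected minor line:

```python
def backport_releases(released, affected_minors):
    """For each released version whose (major, minor) line is affected
    by the vulnerability, bump the patch component to get the next
    security release for that line."""
    out = []
    for v in released:
        major, minor, patch = map(int, v.split("."))
        if (major, minor) in affected_minors:
            out.append(f"{major}.{minor}.{patch + 1}")
    return out

print(backport_releases(["1.2.0", "1.3.0"], {(1, 2), (1, 3)}))
# prints ['1.2.1', '1.3.1']
```

                        Users on either minor line get the fix without being forced onto a release that also contains new features.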

                        1. 2

                          It sort of makes sense, but if minor versions were truly backwards compatible I can’t see a reason why you would ever want to hold back. Minor and patch seem to me to be the same concept, just one has a higher risk level.

                          1. 4

                            Perhaps a better definition is that a library’s minor-version changes may expose functionality to end users that you, as the application author, did not intend.

                            1. 2

                              I think it’s exactly a risk management decision. More change means more risk, even if it was intended to be benign.

                              1. 2

                                Without the patch version it makes it much harder to plan future versions and the features included in those versions. For example, if I define a milestone saying that 1.4.0 will have new feature X, but I have to put a bug fix release out for 1.3.0, it makes more sense that the bug fix is 1.3.1 rather than 1.4.0 so I can continue to refer to the planned version as 1.4.0 and don’t have to change everything which refers to that version.

                      2. 1

                        I remember seeing a talk by Rich Hickey where he criticized the use of semantic versioning as fundamentally flawed. I don’t remember his exact arguments, but have semver proponents grappled effectively with them? Should the Go team be wary of adopting semver? Have they considered alternatives?

                        1. 3

                          I didn’t watch the talk yet, but my understanding of his argument was “never break backwards compatibility.” This is basically the same as new major versions, but instead requiring you to give a new name for a new major version. I don’t inherently disagree, but it doesn’t really seem like some grand deathblow to the idea of semver to me.

                          1. 2

                            IME, semver itself is fundamentally flawed because humans are the deciders of the new version number and we are bad at it. I don’t know how many times I’ve gotten into a discussion with someone where they didn’t want to increase the major because they thought high majors looked bad. Maybe at some point it can be automated, but I’ve had plenty of minor version updates that were not backwards compatible, and the same for patch versions. Or, what’s happened to me in Rust multiple times, is that the minor version of a package incremented but the new feature depends on a newer version of the compiler, so it is backwards-breaking in terms of compiling. I like the idea of a versioning scheme that lets you tell the chronology of versions, but I’ve found semver to work right up until it doesn’t, and it’s always a pain. I advocate pinning all deps in a project.

                            1. 2

                              It’s impossible for computers to automate. For one, semver doesn’t define what “breaking” means. For two, the only way that a computer could fully understand if something is breaking or not would be to encode all behavior in the type system. Most languages aren’t equipped to do that.

                              Elm has tools to do at least a minimal kind of check here. Rust has one too, though not as widely used.

                              I advocate pinning all deps in a project.

                              That’s what lockfiles give you, without the downsides of doing it manually.

                    1. 11

                      I’m wondering why people forgot about the unit/record separators in ASCII. It’s not as human readable/writeable, but it’s certainly less fragile.

                      1. 10

                        In case you’re wondering what this is, ASCII codes 0x1f (aka “unit separator”, used between fields) and 0x1e (aka “record separator”) were reserved for CSV-style tabular output use, the idea being that those codes, being in the “control characters” set of non-printable characters, would not appear in normal output and therefore could solve some of the reading problems in the OP, especially with regard to escaping delimiters.

                        It doesn’t solve the well-formedness problem, so you still have to parse in any case, instead of just blindly splitting on those values.
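                        A minimal round-trip sketch using those control characters (0x1f between fields, 0x1e between records), assuming the separators never appear in the data itself:

```python
US, RS = "\x1f", "\x1e"  # ASCII unit (field) and record separators

def dumps(rows):
    """Join fields with US and records with RS; no quoting or escaping,
    on the assumption that the data contains neither control character."""
    return RS.join(US.join(fields) for fields in rows)

def loads(text):
    return [record.split(US) for record in text.split(RS)]

table = [["name", "note"], ["alice", "likes, commas\nand newlines"]]
assert loads(dumps(table)) == table  # commas and newlines need no escaping
```

                        Note this blind split is exactly what breaks if a field ever does contain one of the separators, which is the well-formedness caveat above.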

                        1. 3

                          If you presume all data is human readable, then you can try to blindly split. Even if not, you can reserve a character for escaping, and simply repeat it to use that character literally.

                          1. 3

                            You don’t need to reserve a character for that. DLE is defined as “Data Link Escape”.

                        2. 6

                          I occasionally remember they exist, but they don’t really fit a sweet spot for me personally (neither does CSV, though). What I use are one of two things:

                          1. For simple tabular data I use TSV (tab-separated values). This makes things human-readable and also makes it easy to use traditional Unix command-line tools, and has the sole restriction that fields themselves cannot have embedded tabs (in TSV there’s no way to escape a tab, they’re simply not allowed in fields). I could use the ASCII record separators here, but it’d be more awkward to read/write the files, for the only advantage of being able to embed tabs in fields, which is something I rarely actually want.

                          2. For anything more complicated than basic tabular data where TSV is fine, then I go to a more full-fledged serialization format like XML or JSON, with well-defined escaping rules and support for things like hierarchical data.
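                          For case 1, a minimal TSV sketch reflecting that restriction (hypothetical helpers, assuming tabs and newlines are simply forbidden in fields rather than escaped):

```python
def to_tsv(rows):
    """Serialize rows to TSV; tabs and newlines are rejected rather than
    escaped, matching TSV's no-escaping rule described above."""
    for row in rows:
        for field in row:
            if "\t" in field or "\n" in field:
                raise ValueError("TSV fields cannot contain tabs or newlines")
    return "\n".join("\t".join(row) for row in rows)

def from_tsv(text):
    """Parsing is a blind split, which is the whole appeal of the format."""
    return [line.split("\t") for line in text.split("\n")]
```

                          The output works directly with cut, awk, sort and friends, which is the command-line friendliness mentioned above.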

                        1. 3

                          Anyone know if the Papers We Love Conf talks were recorded and if so, when they will be available?

                          1. 4

                            They were recorded. Keep an eye on our YouTube channel for the videos within the next couple of weeks.

                            1. 3

                              We should have them out in the next few days, we’re waiting on the captioning to come in. We will publicize on the pwlconf.org and paperswelove.org sites, as well as on Twitter.

                            1. 9

                              Microsoft released it under the Apache 2.0 license.

                              1. 6

                                Which includes a patent protection clause–something likely supremely relevant to many who have been worried about using .NET in the past.

                              1. 1