Threads for metafnord

  1. 3

    I wouldn’t say public CDNs are completely obsolete. What this article does not take into consideration is the positive impact of geographic locality (i.e. reduced RTT and packet loss probability) on transport layer performance. If you want to avoid page load times on the order of seconds (e.g. several MB worth of javascript over a transatlantic connection) either rely on a public CDN or run your own content delivery on EC2 et al. Of course this involves more work and potentially money.

    1. 2

      This would only apply if whatever you’re fetching from the CDN is really huge. For any reasonably small file the transport performance is irrelevant compared to the extra handshake overhead.

      1. 1

        It does apply for smallish file sizes (on the order of a few megabytes). It mainly depends on how far you have progressed the congestion window of the connection. Even with an initial window of 10 MSS it would take several RTTs to transfer the first megabyte.
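
        A quick back-of-the-envelope sketch of that claim (idealized slow start, no loss, 1460-byte segments; the numbers are illustrative, not measurements):

            MSS = 1460          # typical TCP payload bytes per segment
            cwnd = 10           # RFC 6928 initial window, in segments
            remaining = 1_000_000
            rtts = 0
            while remaining > 0:
                remaining -= cwnd * MSS   # one congestion window sent per round trip
                cwnd *= 2                 # slow start roughly doubles the window each RTT
                rtts += 1
            print(rtts)                   # -> 7 round trips before the first MB is on the wire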

        1. 3

          There’s a benefit if you use a single CDN for everything, but if you add a CDN only for some URLs, it’s most likely to be net negative.

          Even though CDNs have low latency, connecting to a CDN in addition to the other host only adds more latency, never decreases it.

          It’s unlikely to help with download speed either. When you host your main site off-CDN, then users will pay the cost of TCP slow start anyway. Subsequent requests will have an already warmed-up connection to use, and just going with it is likely to be faster than setting up a brand new connection and suffering TCP slow start all over again from a CDN.

          1. 1

            That is definitely interesting. I never realized how expensive TLS handshakes really are. I’d always assumed that the number of RTTs required for the crypto handshake was the issue, not the computational part.

            I wonder if this is going to change with QUIC’s ability to perform 0-RTT connection setups.

            1. 1

              No, the CPU cost of TLS is not that big. For clients the cost is mainly in roundtrips for DNS, the TCP/IP handshake and the TLS handshake, and then in TCP starting with a small window size.
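
              To put rough numbers on the roundtrip cost (assumed: TLS 1.3, a cold DNS cache and one round trip per step; real connections vary):

                  rtt_ms = 80                              # assumed RTT to a far-away server
                  round_trips = {
                      "DNS lookup": 1,
                      "TCP handshake": 1,
                      "TLS 1.3 handshake": 1,              # TLS 1.2 would add one more round trip
                      "HTTP request + first response": 1,
                  }
                  print(sum(round_trips.values()) * rtt_ms, "ms before the first response byte")  # 320 ms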

              A secondary problem is that HTTP/2 prioritization works only within a single connection, so when you mix 3rd party domains you don’t have much control over which resources are going to load first.

              QUIC 0-RTT may indeed help, reducing the additional cost to just an extra DNS lookup. It won’t solve the prioritization problem.

    1. 9

      Can someone sum up for me why one might like QUIC?

      1. 13

        Imagine you are visiting a website and try to fetch these files:

        • example.com/foo
        • example.com/bar
        • example.com/baz

        In HTTP 1.1 you needed a separate TCP connection for each of them to be able to fetch them in parallel. IIRC browsers allowed about 4 connections per host in most cases, which meant that if you tried to fetch example.com/qux and example.com/quux in addition to the above, one of the resources would have to wait. It did not matter that the 4 requests in flight could take a lot of time and block the pipe: the waiting request would do nothing until one of them was fully fetched. So if by chance your slow resources were requested before the fast ones, they could slow down the whole page.

        HTTP 2 fixed that by allowing multiplexing, i.e. fetching several files over the same pipe. That means you no longer need multiple connections. However, there is still a problem. Since TCP is a single ordered stream of data, every packet before the current one needs to be received before a given frame can be processed. A single missing packet can therefore stall resources that have already arrived, because everything has to wait for a straggler that may be retransmitted over and over again (head-of-line blocking).

        HTTP 3 (aka HTTP over QUIC with a few bells and whistles) is based on UDP, and the streaming is built on top of that. That means each “logical” stream within a single “connection” can be processed independently (the toy sketch after the list below illustrates the difference). It also adds a few other things, like:

        • always encrypted communication
        • multi-homing (useful for example for mobile devices, which can “reuse” a connection when switching networks, e.g. from WiFi to cellular)
        • reduced handshake for encryption
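
        As a toy model of the head-of-line blocking difference (just illustrative arithmetic, not a real protocol implementation), assume three resources are interleaved on one connection and the very first frame is lost and retransmitted much later:

            packets = [("foo", 0), ("bar", 0), ("baz", 0),
                       ("foo", 1), ("bar", 1), ("baz", 1)]
            lost = ("foo", 0)
            retransmit_at = 10

            def delivered_at(pkt, send_time):
                return retransmit_at if pkt == lost else send_time

            tcp_done, quic_done = {}, {}
            for t, pkt in enumerate(packets):
                arrive = delivered_at(pkt, t)
                # TCP hands data up only once every earlier byte has arrived as well.
                blocking = max(delivered_at(p, s) for s, p in enumerate(packets[:t + 1]))
                tcp_done[pkt[0]] = max(tcp_done.get(pkt[0], 0), blocking)
                # QUIC reassembles each stream independently.
                quic_done[pkt[0]] = max(quic_done.get(pkt[0], 0), arrive)

            print("TCP :", tcp_done)    # {'foo': 10, 'bar': 10, 'baz': 10} - everyone waits
            print("QUIC:", quic_done)   # {'foo': 10, 'bar': 4, 'baz': 5} - only foo waits
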
        1. 9

          Afaik, multihoming is proposed but not yet standardized. I know of no implementation that supports it.

          QUIC does have some other nice features though

          • QUIC connections are independent of IP addresses. I.e. they survive IP address changes
          • Fully encrypted headers: added privacy and also flexibility. Makes it easier to experiment on the Internet without middleboxes interfering
          • Loss recovery is better than TCP’s
          1. 4

            Afaik, multihoming is proposed but not yet standardized

            That is true, however it should be clarified that this only applies to using multiple network paths simultaneously. As you mentioned, QUIC does fix the layering violation of TCP connections being identified partially by their IP address. So what OP described (reusing connections when switching from WiFi to cellular) already works. What doesn’t work yet is having both WiFi and cellular on at the same time.

            1. 3

              Fully encrypted headers

              Aren’t headers already encrypted in HTTPS?

              1. 8

                HTTP headers, yes. TCP packet headers, no. HTTPS is HTTP over TLS over TCP. Anything at the TCP layer is unencrypted. In some scenarios, you start with HTTP before you redirect to HTTPS, so the initial HTTP request is unencrypted.

                1. 1

                  They are. If they weren’t, it’d be substantially less useful considering that’s where cookies are sent.

                  e: Though I think QUIC encrypts some stuff that HTTPS doesn’t.

              2. 3

                Why is this better than just making multiple tcp connections?

                1. 5

                  TCP connections are not free: they require handshakes at both the TCP level and the SSL level. They also consume resources at the OS level, which can be significant for servers.

                  1. 4

                    Significantly more resources than managing quic connections?

                    1. 4

                      Yes, QUIC uses UDP “under the table”, so creating a new stream within an existing connection is basically free: all you need is to generate a new stream ID (no communication between the participants is needed when creating a new stream). So from the network stack’s viewpoint it is “free”.
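
                      For the curious, a small sketch of the stream ID scheme that makes this work (as described in the QUIC transport spec, RFC 9000 §2.1; illustrative, not tied to any particular library):

                          # A new stream is just the next unused integer of the right type; the two
                          # low bits encode who opened the stream and whether it is bidirectional.
                          def next_stream_id(count, initiated_by_client=True, bidirectional=True):
                              type_bits = (0 if initiated_by_client else 1) | (0 if bidirectional else 2)
                              return (count << 2) | type_bits

                          print([next_stream_id(n) for n in range(3)])                # client bidi: [0, 4, 8]
                          print([next_stream_id(n, False, False) for n in range(3)])  # server uni:  [3, 7, 11]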

                      1. 3

                        Note that this is true for current userspace implementations, but may not be true in the long term. For example, on FreeBSD you can do sendfile over a TLS connection and avoid a copy to userspace. With a userspace QUIC connection, that’s not possible. It’s going to end up needing at least some of the state to be moved into the kernel.

                  2. 5

                    There are also some headaches it causes around network congestion negotiation.

                    Say I have 4 HTTP/1.1 connections instead of 1 HTTP/2 or HTTP/3 connection.

                    Stateful firewalls use 4 entries instead of 1.

                    All 4 connections independently ramp their speed up and down as their independent estimates of available throughput change. I suppose in theory a TCP stack could use congestion information from one to inform behaviour on the other 3, but in practice I believe they don’t.

                    HTTP/1.1 requires single-duplex transfer on each connection (don’t send the second request until the entirety of the first response arrives, can’t start sending the second response before the entirety of the second request arrives). This makes it hard for individual requests to get up to max throughput, except when the bodies are very large, because the data flows in each direction keep slamming shut and then opening all the way back up.

                    AIUI having 4 times as many connections is a bit like executing a tiny Sybil attack, in the context of multiple applications competing for bandwidth over a contended link. You show up acting like 4 people who are bad at using TCP instead of 1 person who is good at using TCP. ;)
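
                    A rough illustration of that point (Reno-style halving on a loss, made-up window sizes):

                        cwnd_single = 40                 # one warmed-up connection, window in segments
                        cwnd_four = [10, 10, 10, 10]     # four parallel connections on the same link

                        # one packet is lost on the link:
                        cwnd_single //= 2                # the lone connection cuts its rate by 50%
                        cwnd_four[0] //= 2               # only one of the four backs off
                        print(cwnd_single, sum(cwnd_four))   # 20 vs 35, i.e. -50% vs -12.5% aggregate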

                    On Windows the number of TCP connections you can open at once by default is surprisingly low for some reason. ;p

                    HTTP/2 and so on are really not meant to make an individual server be able to serve more clients. They deliberately spend more server CPU on each client in order to give each client a better experience.

                  3. 2

                    In theory, HTTP 1.1 allowed pipelining requests: https://en.wikipedia.org/wiki/HTTP_pipelining which allowed multiple, simultaneous fetches over a single TCP connection.

                    I’m not sure how broadly it was used.

                    1. 4

                      Pipelining still requires each document to be sent in order. A single slow request clogs the pipeline. Also, from Wikipedia, it appears not to have been broadly used due to buggy implementations and limited proxy support.

                      1. 3

                        QUIC avoids head-of-line blocking. You do one handshake to get an encrypted connection but after that the packet delivery for each stream is independent. If one packet is dropped then it delays the remaining packets in that stream but not others. This significantly improves latency compared to HTTP pipelining.

                    2. 5

                      A non-HTTP-oriented answer: It gives you multiple independent data streams over a single connection using a single port, without needing to write your own framing/multiplexing protocol. Streams are lightweight, so you can basically create as many of them as you desire and they will all be multiplexed over the same port. Whether streams are ordered or sent in unordered chunks is up to you. You can also choose to transmit data unreliably; this appears to be a slightly secondary functionality, but at least the implementation I looked at (quinn) provides operations like “find my maximum MTU size” and “estimate RTT” that you will need anyway if you want to use UDP for low-latency unreliable stuff such as sending media or game data.

                    1. 36

                      Better title, “don’t just check performance on the highest-end hardware.” Applies to other stuff too, like native apps — developers tend to get the fastest machines, which means the code they’re building always feels fast to them.

                      During the development cycle of Mac OS X (10.0), most of the engineers weren’t allowed to have more than 64MB of RAM, which was the expected average end-user config — that way they’d feel the pain of page-swapping and there’d be more incentive to reduce footprint. I think that got backpedaled after a while because compiles took forever, but it was basically a good idea (as was dog-fooding the OS itself, of course.)

                      1. 4

                        Given that the easy solution is often the most poorly performing and that people with high-end hardware have more money and thus will be the majority of your revenue, it would seem that optimising for performance is throwing good money after bad.

                        You are not gonna convince websites driven by profit with sad stories about poor people having to wait 3 extra seconds to load the megabytes of JS.

                        1. 6

                          depends on who your target audience is. If you are selling premium products, maybe. But then still, there are people outside of tech who are willing to spend money, just not on tech. So I would be very careful with that assumption.

                          1. 2

                            It’s still based on your users and the product you sell. Obviously Gucci, Versace and Ferrari have different audiences but the page should still load quickly. That’s why looking at your CrUX reports and RUM data helps with figuring out who you think your users are and who’s actually visiting your web site.

                            I don’t own a Ferrari but I still like to window shop. Maybe one day I will. Why make the page load slow because you didn’t bother to optimize your JavaScript?

                          2. 5

                            These days your page performance (e.g. Core Web Vitals) is an SEO factor. For public sites that operate as a revenue funnel, a stakeholder will listen to that.

                            1. 3

                              I don’t work on websites, but my understanding is that generally money comes from ad views, not money spent by the user, so revenue isn’t based on their wealth. I’m sure Facebook’s user / viewer base isn’t mostly rich people.

                              Most of my experience comes from working on the OS (and its bundled apps like iChat). It was important that the OS run well on the typical machines out in the world, or people wouldn’t upgrade, or buy a new Mac.

                              1. 2

                                Even if you were targeting only the richest, relying on high-end hardware to save you would be a bad strategy.

                                • Mobile connections can have crappy speeds, on any hardware.
                                • All non-iPhone phones are relatively slow, even the top-tier luxury ones (e.g. foldables). Apple has a huge lead in hardware performance, and other manufacturers just can’t get equally fast chips for any price.
                                • It may also backfire if your product is for well-off people, but not tech-savvy people. There are people who could easily afford a better phone, but they don’t want to change it. They see tech upgrades as a disruption and a risk.
                              2. 3

                                I’ve heard similar (I believe from Raymond Chen) about Windows 95 - you could only have the recommended spec as stated on the box unless you could justify otherwise.

                                1. 2

                                  It would be very useful if the computer could run at full speed while compiling, throttling down to medium speed while running your program.

                                  1. 1

                                    Or you use a distributed build environment.

                                    1. 1

                                      If you use Linux, then I believe this can be accomplished with cgroups.

                                    2. 2

                                      They might have loved a distributed build system at the time. :) Compiling on fast boxes and running the IDE on slow boxes would’ve been a reasonable compromise I think.

                                      1. 1

                                        most of the engineers weren’t allowed to have more than 64MB of RAM,

                                        Can OS X even run on that amount of ram?

                                        1. 15

                                          OS X 10.0 was an update to OPENSTEP, which ran pretty happily with 8 MiB of RAM. There were some big redesigns of core APIs between OPENSTEP and iOS to optimise for power / performance rather than memory use. OPENSTEP was really aggressive about not keeping state for UI widgets. If you have an NSTableView instance on OPENSTEP, you have one NSCell object (<100 bytes) per column and this is used to draw every cell in the table. If it’s rendering text, then there’s a single global NSTextView (multiple KiB, including all other associated state) instance that handles the text rendering and is placed over the cell that the user is currently editing, to give the impression that there’s a real text view backing every cell. When a part of the window is exposed and needs redrawing, the NSCell instances redraw it. Most of the objects that are allocated on the drawing path are in a custom NSZone that does bump allocation and bulk free, so the allocation is cheap and the objects are thrown away at the end of the drawing operation.

                                          With OS X, the display server was replaced with one that did compositing by default. Drawing happened the same way, but each window’s full contents were stored. This was one of the big reasons that OS X needed more RAM than OPENSTEP. The full frame buffer for a 24-bit colour 1024x768 display is a little over 2 MiB. With OPENSTEP, that’s all you needed. When a window was occluded, you threw away the contents and drew over it with the contents of the other window[1]. With OS X, you kept the contents of all windows in memory[2] . If you’ve got 10 full-screen windows, now you need over 20 MiB just for the display. In exchange for this, you get faster UI interaction because you’re not having to redraw on expose events.
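
                                          Quick arithmetic behind those numbers (3 bytes per pixel at 24-bit colour; just to make the comparison concrete):

                                              width, height, bytes_per_pixel = 1024, 768, 3
                                              framebuffer = width * height * bytes_per_pixel
                                              print(framebuffer / 2**20)        # ~2.25 MiB: the single shared frame buffer
                                              print(10 * framebuffer / 2**20)   # ~22.5 MiB if 10 full-screen windows are kept around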

                                          Fast forward to the iPhone era and now you’ve got enough dedicated video memory that storing a texture for every single window was a fairly negligible impact on the GPU space and having 1-2 MiB of system memory per window to have a separate NSView instance (even something big like NSTextView) for every visible cell in a table was pretty negligible and the extra developer effort required to use the NSCell infrastructure was not buying anything important. To make matters worse, the NSCell mechanisms were intrinsically serial. Because every cell was drawn with the same NSCell instance, you couldn’t parallelise this. In contrast, an NSView is stateful and, as long as the controller / model support concurrent reads (including the case that they’re separate objects), you can draw them in parallel. This made it possible to have each NSView draw in a separate thread (or on a thread pool with libdispatch), spreading the drawing work across cores (improving power, because the cores could run in a lower power state and still be faster than one core doing all of the work in a higher power state, with the same power envelope). It also meant that the result of drawing an NSView could be stored in a separate texture (CoreAnimation Layer) and, if the view hadn’t changed, be composited very cheaply on the GPU without needing the CPU to do anything other than drop a couple of commands into a ring buffer. All of this improves performance and power consumption on a modern system, but would have been completely infeasible on the kind of hardware that OPENSTEP or OS X 10.0 ran on.

                                          [1] More or less. The redraws actually drew a bit more than was needed that was stored in a small cache, because doing a redraw for every row of column of pixels that was exposed was too slow, asking views to draw a little bit more and caching it meant that you make it appear smooth as a window was gradually revealed. Each window would (if you moved the mouse in a predictable path) draw the bit that’s most likely to be exposed next and then that would be just copied into the frame buffer by the display server as the mouse moved.

                                          [2] Well, not quite all - if memory was constrained and you had some fully occluded windows, the system would discard them and force redraws on expose.

                                          1. 1

                                            Thanks for this excellent comment. You should turn it into a mini post of its own!

                                          2. 3

                                            Looks as if the minimum requirement for OS X 10.0 (Cheetah) was 128 MB (unofficially 64 MB minimum).

                                            1. 2

                                              Huh. You know, I totally forgot that OS X first came out 20 years ago. This 64M number makes a lot more sense now :)

                                            2. 1

                                              10.0 could, but not very well; it really needed 128MB. But then, 10.0 was pretty slow in general. (It was released somewhat prematurely; there were a ton of low-hanging-fruit performance fixes in 10.1 that made it much more useable.)

                                          1. 2

                                              On the mainboard there are 2 push buttons and 2 sliding jumpers. One is hidden beneath the black tape. I hit the reset button, slid both sliders up and down, and then pressed down on the 64GB chip. I flipped it over to see if that did anything, and surprisingly it’d turn itself on!

                                            I have a first gen regular Pinebook and have similar issues. After a deep discharge of the battery I am not able to charge it anymore and the charging circuit makes a buzzing noise when connected to the charger. I’ve read online that charging the LiPo with a proper charger fixes this. But since I don’t have a proper LiPo charger I won’t try this and accidentally burn my house down.

                                              The regular Pinebook only has one slider, which forces the headphone jack to be UART (serial console / debug), and the pushbutton enters FEL (recovery/USB bootloader) mode. What the other ones in the Pro do, I don’t know.

                                            1. 1

                                              One of the posts I read had someone toggling one of the switches and I just did both to be sure..

                                              I’d be worried about the battery too! So far it’s been fine doing the charge and discharge thing.

                                            1. 1

                                              I feel like just looking at the total number of TODOs in a huge code base like the Linux kernel doesn’t tell you much. What would be interesting is number of TODOs over total LOCs.

                                              Maybe also use information from source control to look at how long TODOs persist in the code base.

                                              1. 1

                                                  Every single one of the graphs shows projects accruing more todos over time, which we could assume means they have been added and for one reason or another forgotten about. You can also see some patterns where todos are being used as placeholders for work in progress that then gets done, with the todos subsequently removed.

                                                  I think the whole point of the page isn’t necessarily to show much more than “look, these projects all accrue todos over time, maybe that would be a good place to go looking if you want to provide assistance.” There are multiple reasons why todos remain un-done.

                                                1. 2

                                                    What I find interesting is the sudden jump in TODOs in just a couple of days. Maybe they were integrating external code in-tree? It’s pleasing though to see the number drop occasionally.

                                                  1. 1

                                                    I made the assumption that was due to a refactoring in progress, or some other maintenance being carried out and placeholders being committed, but integrating external code is also a good guess at what caused it.

                                                      I personally use todos as placeholders: I will write a todo, then open an issue with the body of the todo, update the todo to have the issue number in its copy, and then commit with the issue number referenced. This results in the issue being linked to the commit, making it trivial to find the relevant file and line(s).

                                                    1. 1

                                                        that makes sense if those todos were scraped, say, from the main branch and at some point a big chunk of work containing a bunch of todos was merged into it

                                              1. 78

                                                Backlash against Kubernetes and a call for simplicity in orchestration; consolidation of “cloud native” tooling infrastructure.

                                                1. 18

                                                  I’m not sure if we’ve reached peak-k8s-hype yet. I’m still waiting for k8s-shell which runs every command in a shell pipe in its own k8s node (I mean, grep foo file | awk is such a boring way to do things!)

                                                  1. 15

                                                      You must not have used Google Cloudbuild yet. They do… pretty much exactly that, and it’s as horribly over-engineered and needlessly complicated as you can imagine :-D

                                                    1. 4

                                                      I haven’t worked with k8 yet but to me all of this sounds like you’ll end up with the same problems legacy CORBA systems had: Eventually you lose track of what happens on which machine and everything becomes overly complex and slow.

                                                    2. 13

                                                      I don’t know if it will happen this year or not but I’ve been saying for many years that k8s is the new cross-language J2EE, and just like tomcat and fat jars began to compete we’ll see the options you’re discussing make a resurgence. Nomad is probably one that’s already got a good following.

                                                      1. 7

                                                        I understand where you’re coming from but I don’t think it’s likely. Every huge company I’ve worked with has idiosyncratic requirements that make simple deployment solutions impossible. I’m sure there will be some consolidation but the complexity of Kubernetes is actually needed at the top.

                                                        1. 1

                                                          We’ve been on k8s in some parts of our org for 2+ years, we’re moving more stuff that direction this year though primarily because of deployments and ease of operation (compared to alternatives).

                                                          We don’t use half of k8s, but things are only just now starting to fill that gap like Nomad. I think we’re probably at least a year off from the backlash though.

                                                        2. 4

                                                          I won’t be surprised if the various FaaS offerings absorb much of the exodus. Most people just want self-healing and maybe auto-scaling with minimal yaml. Maybe more CDN edge compute backed by their KV services.

                                                          1. 1

                                                            Which FaaS offerings are good? They are definitely less limited than they used to be, but do they deal with state well and can they warm up fast?

                                                            I haven’t seen any “reviews” of that and I think it would be interesting. Well there was one good experience from someone doing astronomy on AWS Lambda

                                                            https://news.ycombinator.com/item?id=20433315

                                                            linked here: https://github.com/oilshell/oil/wiki/Distributed-Shell

                                                            1. 2

                                                              The big 3 cloud providers are all fine for a large variety of use cases. The biggest mistake the FaaS proponents have made is marketing them as “nanoservices” which gives people trauma feelings instead of “chuck your monolith or anything stateless on this and we’ll run it with low fuss”.

                                                              “serverless” and “function as a service” are both terrible names for “a more flexible and less prescriptive app engine”, and losing control of the messaging has really kneecapped adoption up until now.

                                                              Just like k8s, there are tons of things I would never run on it, but there are significant operational savings to be had for many use cases.

                                                          2. 4

                                                            I wish, but I am not hopeful. But I have been on that bandwagon for years now. Simple deploys make the devops folks love you.

                                                            1. 1

                                                              For those in the AWS world, I like what I’ve seen so far of Amazon ECS, though I wish that Fargate containers would start faster (i.e. 5 seconds or less).

                                                            1. 7

                                                                  It is a bad idea, but it works for me quite well, so ¯\_(ツ)_/¯

                                                              1. 3

                                                                It basically depends on your network and also your application traffic characteristics. While it’s considered bad practice, it is still done in the real world when there is no other way. For more information I recommend reading RFC8229 https://tools.ietf.org/html/rfc8229#section-12.1

                                                                1. 3

                                                                  I understand that it may be problematic, but it is for me the only way to get a reliable VPN from the public internet into my home. It is a contraption of openvpn on tcp 443 + an ssh tunnel, but it works.

                                                                  1. 1

                                                                    Hey, I am not judging. I have done the same in the past. It should just be your last resort if nothing else works :) I posted rfc8229 so people understand that it can be done but also what the risks involved are.

                                                                2. 2

                                                                    Same here. I am using openconnect VPN, and with UDP I always get some weird packet drops (maybe MTU?). But TCP mode works just fine, with acceptable latency and much better stability.

                                                                1. 6

                                                                  Posts like these always make me feel like I’m living on another planet than some people. Why use docker? Why use a pi-hole at all? Is this all just for the web interface?

                                                                  I personally think it’s much better to run DNSCrypt Proxy and just either point it to an upstream adblocking DNS or host my internal one with its own set of blocklists that use the same lists as the Pi-hole. That could probably even be simplified to a set of firewall rules instead of DNS, or just a local DNS resolver without DNSCrypt.

                                                                  1. 3

                                                                    I signed up for NextDNS about two weeks ago due to some excited Slack chatter about it (and to test my Handshake domain) and I quite like it. I’m gonna see about applying it to my router, if possible, next week.

                                                                    1. 3

                                                                      Honestly I just use one of the public resolvers that does AdBlocking on my phone or mobile device and at home I run an internal resolver that blackholes using the uBlock origin lists and a tiny script that turns it into unbound format. All of these solutions seem… Massively complex for what they really are.
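
                                                                      For the curious, a rough sketch of what such a conversion could look like (hosts-style input assumed; the always_nxdomain policy and the use of stdin/stdout are illustrative choices, not necessarily what the parent’s script does):

                                                                          import sys

                                                                          def hosts_to_unbound(lines):
                                                                              for line in lines:
                                                                                  line = line.split("#", 1)[0].strip()    # drop comments and blank lines
                                                                                  if not line:
                                                                                      continue
                                                                                  parts = line.split()
                                                                                  # hosts format: "0.0.0.0 ads.example.com" (or sometimes a bare domain)
                                                                                  domain = parts[1] if len(parts) > 1 else parts[0]
                                                                                  if domain not in ("localhost", "localhost.localdomain", "broadcasthost"):
                                                                                      yield f'local-zone: "{domain}" always_nxdomain'

                                                                          if __name__ == "__main__":
                                                                              for statement in hosts_to_unbound(sys.stdin):
                                                                                  print(statement)

                                                                      Run from cron, with the output redirected into a file that unbound.conf includes, followed by a reload, and you get roughly the setup described above.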

                                                                      1. 1

                                                                        Oh that’s neat, thanks for sharing!

                                                                        1. 1

                                                                           Since public resolvers can see DNS requests originating from your network, the privacy impact can be quite severe. I’d suggest choosing your upstream provider wisely. That’s why I’d never choose a public DNS server from Google, for example. Since you are already running unbound, you could also choose to take another way:

                                                                          I’ve set up unbound to query the root dns servers directly and increased cache size to 128 megs. When the prefetch option is set, cache entries are revalidated before they expire. Not only does this increase privacy, but also dramatically reduces response times for most sites when the cache is warmed up. Be aware that the DNS traffic goes up by around 10 percent or so.

                                                                        2. 2

                                                                        Been a NextDNS user since the beta and now a paid user. I’ve set up DNS over HTTPS on devices that support it, and have added it to my router for devices that don’t. It’s blocked about ~15% of queries over the last month and that’s with all browsers running ad blockers. Well worth it to me.

                                                                        3. 3

                                                                        People don’t understand how things work, so instead of learning how to build something simple, they throw heaps of complex software on top of each other, because that is how things are done in 2020.

                                                                        I too have a cron job that creates an unbound block list. The great thing is that I can easily debug it, because I understand all of it.

                                                                          1. 1

                                                                            How many devices do you own that talk to the internet?

                                                                            If it’s literally just me, then I would configure a thing on my laptop and call it done. I live with a bunch of other people, and even if I could individually configure all of their devices (some of them are too locked down for that), I wouldn’t really want to have to learn how to configure ad blocking on six different operating systems from three different vendors.

                                                                          A centralized solution is actually easier, and it inherently gives ad blocking to everyone. It also has a web interface, so you can teach someone how to turn the ad blocker off if they really, really need to, but turning it off is enough of a pain in the neck that they usually just decide that reading such and such a listicle isn’t worth it.

                                                                            1. 1

                                                                            8 physical devices and 30 virtual machines (technically 20 talking to the internet, because the others are active directory labs for testing and they switch around depending on my needs). The reality is that if I were in your situation I’d just set my router’s DHCP to hand out dns.adguard.com as the nameserver, or point it at the local resolver and let that recurse upstream. That wouldn’t even require software installs, but it does rely entirely on a third-party resolver.

                                                                              1. 1

                                                                                That would’ve been an option, too. I did consider it.

                                                                                OTOH, as you mentioned, “is it just for the web interface?” Yes, that’s one of the biggest reasons.

                                                                          1. 8

                                                                            Counterpoint on thin: After years of using ThinkPads, I’m much happier with my MacBook Air as a laptop. I don’t need ports or performance on the go, I want battery life and lightness. It’s nice having a laptop I can lift effortlessly with one hand. The MBA makes appropriate compromises for a laptop; one of the few things I miss from my ThinkPad is a nub. If it’s your only system, then it probably isn’t an appropriate choice, but I have faster computers (my desktop at home and many servers) if I need more .

                                                                            1. 5

                                                                              One of the author’s points is that companies are trying to take away computing power and move it to their cloud. Working on a beefy server in a data center somewhere is kinda similar though. In the end, it is (most likely) not your computer, and you can’t be sure nobody is interfering with it.

                                                                              Not that I am not guilty of doing it myself.

                                                                              1. 2

                                                                                Does that really hold true for Apple though, who doesn’t have their own cloud business? I mean, most of the laptop hardware manufacturers aren’t directly involved in cloud services either, so I’m not sure that argument holds up. It’s not like most aren’t still offering beefy ‘portable workstation’ types, it’s just a tradeoff of battery/portability and power.

                                                                                1. 1

                                                                                  This point might not be true for Apple, however they are going in a direction which is problematic in other ways: To me it feels like macOS is becoming more restricted with every major OS release. My guess is that sooner or later they will take away the possibility to run unsigned code on their computers. My prediction is that with the advent of ARM Macs iOS and macOS will eventually merge into the same product, leaving you with a powerful device and not a computer anymore.

                                                                                  1. 3

                                                                                    Given how many of the MBP devices get sold to developers, I think it’s unlikely they’ll restrict unsigned code entirely. They will almost certainly make it more of a pain, but tbh, I’m personally fine with that. The “average” Mac OS user generally needs their hand held and is better served having safety nets to prevent them from doing something dangerous. Power users, who know what they’re doing, can figure out the mechanisms to do whatever they want. iOS and Mac OS may merge in the future, but I think it would result in iOS becoming more ‘open’ than Mac OS being more closed. Even the most recent releases of iOS have things like scripting automation, (finally!) decent file handling, changing default browser (still webkit afaik, uhg), etc..

                                                                                2. 1

                                                                                  I have that computing power at home. I can SSH into it, or if I’m at home, just use it, right there.

                                                                                  You’d be surprised how performant low voltage designs can be anyways.

                                                                                  1. 1

                                                                                    Some people don’t feel comfortable or cannot afford to leave their computer running 24/7

                                                                                    1. 1

                                                                                      So that’s when you leave a rpi running 24/7, hooked up to a USB-controlled finger which presses a key on your desktop’s keyboard and wakes it up when you want to ssh to it

                                                                                3. 2

                                                                                  Nothing portable about having to carry an additional “real” keyboard around …

                                                                                  1. 2

                                                                                    I’ve never needed to carry around a keyboard. Unless you actually have a disability that prevents you from using a traditional keyboard layout, I actually like the keyboard on my ThinkPads and don’t think mechanical keyboards are a huge leap. The MBA is a slight downgrade on that front, but it’s perfectly usable.

                                                                                    1. 3

                                                                                      I think the previous commenter is talking about Mac keyboards vs external keyboards. Employer requires me to use a Mac, and I’ve used the built-in keyboard for maybe thirty hours, and it’s starting to act weird already.

                                                                                      While I’m able to write code on my personal ThinkPad keyboard, the work Mac keyboard sacrifices usability for smaller size to the point that I find it unusable. I think that’s what the previous poster is trying to say.

                                                                                      1. 1

                                                                                        And you are so right. What a horrendous keyboard (and laptop in general). I’m shocked and disgusted by how bad it is.

                                                                                1. 8

                                                                                  Ok, now bring back phones with an integrated hardware keyboard like the HTC desire Z.

                                                                                  1. 4

                                                                                    My experience is that a fold-out bluetooth keyboard is much more comfortable and better.

                                                                                    1. 6

                                                                                      I guess it depends on your use case. If you have a proper table you’re right. If you want to have a purely hand held device for places like a crowded subway an integrated keyboard would be superior, I guess.

                                                                                      1. 2

                                                                                        I do tend to use various phones for multiple tasks as well. That being said, bluetooth fold-out keyboards were never a viable option for me (starting with some early folding ones for Palm handhelds etc up to the current Logitech Key-To-Go that isn’t foldable but portable). My biggest problem with all of them was that I use mobile devices mainly while commuting, and they are just not really usable on your lap without the phone falling out or everything being really shaky. A builtin keyboard might not be as comfortable as a separate bluetooth one, but it is fixed to your phone.

                                                                                        A notable exclusion of the “external keyboards don’t work when commuting” is the ipad Pro with a Smart Keyboard - the magnets are holding it in place as good as a fixed one. (Can’t say anything about the magic keyboard but I assume similar) edit: i actually wrote about my experience using the iPad here - not really using it “fullblown” with a VM and stuff ondevice like you do but rather as a remote shell: https://www.shift-2.com/journal/my-current-setup-learning-and-developing-rust

                                                                                        1. 1

                                                                                          Is it possible to use the one you linked on one’s laps? Or would I need a proper desk for that?

                                                                                      1. 14

                                                                                        While it might be true that 1500 bytes is now the de facto MTU standard on the Internet (minus whatever overhead you throw at it), not everything is lost. The problem is not that we lack the link layer capabilities to offer larger MTUs; the problem is that the transport protocol has to be AWARE of it. One mechanism for finding out what size MTU is supported by a path over the Internet is an algorithm called DPLPMTUD. It is currently being standardized by the IETF and is more or less complete: https://tools.ietf.org/html/draft-ietf-tsvwg-datagram-plpmtud-14. There are even plans for QUIC to implement this algorithm, so if we end up with a transport that is widely deployed and also supports detection of MTUs > 1500, we might actually have a chance to change the link layer defaults. Fun fact: all of the 4G networking gear actually supports jumbo frames; most of the providers just haven’t enabled the support for it since they are not aware of the issue.
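
                                                                                        For a feel of how the probing works, here is a highly simplified sketch of the search phase (the real draft defines explicit states, timers and a more careful search; send_probe() here is a stand-in for the packetization layer):

                                                                                            def search_pmtu(send_probe, base=1200, max_pmtu=9000, step=32, max_probes=3):
                                                                                                """Return the largest probe size that was acknowledged by the peer.

                                                                                                send_probe(size) should send a padded probe packet of `size` bytes and
                                                                                                return True if an acknowledgment arrives before the probe timer expires.
                                                                                                """
                                                                                                plpmtu = base                       # a size we assume already works
                                                                                                candidate = base + step
                                                                                                while candidate <= max_pmtu:
                                                                                                    if not any(send_probe(candidate) for _ in range(max_probes)):
                                                                                                        break                       # probes of this size keep getting dropped
                                                                                                    plpmtu = candidate              # confirmed: data packets may now grow
                                                                                                    candidate += step
                                                                                                return plpmtu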

                                                                                        1. 6

                                                                                          Wow, it might even work.

                                                                                          I can hardly believe it… but if speedtest.net were able to send jumbo frames and most users’ browsers support receiving it, it might get deployed by ISPs as they look for benchmark karma. Amazing. I thought 1500 was as invariant as π.

                                                                                          1. 5

                                                                                            I was maintainer for an AS at a previous job and set up a few BGP peers with jumbo frames (4470). I would have made this available on the customer links as well, except none of them would have been able to receive the frames. They were all configured for 1500, as is the default in any OS then or today. Many of their NICs couldn’t handle 4470 either, though I suppose that has improved now.

                                                                                            Even if a customer had configured their NIC to handle jumbo frames, they would have had problems with the other equipment on their local network. How do you change the MTU of your smartphone, your media box or your printer? If you set the MTU on your Ethernet interface to 4470 then your network stack is going to think it can send such large frames to any node on the same link. Path MTU discovery doesn’t fix this because there is no router in between that can send ICMP packets back to you, only L2 switches.

                                                                                            It is easy to test. Try to ping your gateway with ping -s 4000 192.168.0.1 (or whatever your gateway is). Then change your MTU with something like ip link set eth0 mtu 4470 and see if you can still ping your gateway. Remember to run ip link set eth0 mtu 1500 afterwards (or reboot).

                                                                                            I don’t think that DPLPMTUD will fix this situation and let everyone have jumbo frames. As a former network administrator reading the following paragraph, they are basically saying that jumbo frames would break my network in subtle and hard to diagnose ways:

                                                                                               A PL that does not acknowledge data reception (e.g., UDP and UDP-
                                                                                               Lite) is unable itself to detect when the packets that it sends are
                                                                                               discarded because their size is greater than the actual PMTU.  These
                                                                                               PLs need to rely on an application protocol to detect this loss.
                                                                                            

                                                                                            So you’re going to have people complain that their browser is working, but nothing else. I wouldn’t enable jumbo frames if DPLPMTUD was everything that was promised as a fix. That said, it looks like DPLPMTUD will be good for the Internet as a whole, but it does not really help the argument for jumbo frames.

                                                                                            And I don’t know if it has changed recently, but the main argument for jumbo frames at the time was actually that they would lead to fewer interrupts per second. There is some overhead per processed packet, but this has mostly been fixed in hardware now. The big routers use custom hardware that handles routing at wire speed and even consumer network cards have UDP and TCP segmentation offloading, and the drivers are not limited to one packet per interrupt. So it’s not that much of a problem anymore.

                                                                                            Would have been cool though and I really wanted to use it, just like I wanted to get us on the Mbone. But at least we got IPv6. Sorta. :)

                                                                                            1. 3

                                                                                              If your system is set up with an mtu of 1500, then you’re already going to have to perform link mtu discovery to talk with anyone using PPPoE. Like, for example, my home DSL service.

                                                                                              Back when I tried running an email server on there, I actually did run into trouble with this, because some bank’s firewall blocked ICMP packets, so… I thought you’d like to know, neither of us used “jumbo” datagrams, but we still had MTU trouble, because their mail server tried to send 1500 octet packets and couldn’t detect that the DSL link couldn’t carry them. The connection timed out every time.

                                                                                              If your application can’t track a window of viable datagram sizes, then your application is simply wrong.

                                                                                              1. 2

                                                                                                If your system is set up with an mtu of 1500, then you’re already going to have to perform link mtu discovery to talk with anyone using PPPoE. Like, for example, my home DSL service.

                                                                                                It’s even worse: in the current situation[1], your system’s MTU won’t matter at all. Most of the network operators are straight-up MSS-clamping your TCP packets downstream, effectively discarding your system’s MTU.

                                                                                                I’m very excited by this draft! Not only will it fix the UDP situation we currently have, it will also make tunneling connections much easier. That said, it also means that if we want to benefit from it, network administrators will need to quit MSS-clamping. I suspect this will take quite some time :(

                                                                                                [1] PMTU won’t work in many cases. Currently, you need ICMP to perform a PMTU discovery, which is sadly filtered out by some poorly-configured endpoints. Try to ping netflix.com for instance ;)

                                                                                                1. 2

                                                                                                  If your system is set up with an mtu of 1500, then you’re already going to have to perform link mtu discovery to talk with anyone using PPPoE. Like, for example, my home DSL service.

                                                                                                  Very true, one can’t assume an MTU of 1500 on the Internet. I disagree that it’s on the application to handle it:

                                                                                                  If your application can’t track a window of viable datagram sizes, then your application is simply wrong.

                                                                                                  The network stack is responsible for PMTUD, not the application. One can’t expect every application to track the datagram size on a TCP connection. Applications that use BSD sockets simply don’t do that; they send() and recv() and let the network stack figure out the datagram size. There’s nothing wrong with that. For UDP the situation is a little different, but IP can actually fragment large UDP datagrams and PMTUD works there too (unless, again, broken by bad configurations, hence DPLPMTUD).

                                                                                                  1. 3

                                                                                                    I disagree that it’s on the application to handle it

                                                                                                    Sure, fine. It’s the transport layer’s job to handle it. Just as long as it’s detected at the endpoints.

                                                                                                    For UDP the situation is a little different, but IP can actually fragment large UDP datagrams and PTMUD works there too

                                                                                                    It doesn’t seem like anyone likes IP fragmentation.

                                                                                                    • If you’re doing a teleconferencing app, or something similarly latency-sensitive, then you cannot afford the overhead of reconstructing fragmented packets; your whole purpose in using UDP was to avoid overhead.

                                                                                                    • If you’re building your own reliable transport layer, like uTP or QUIC, then you already have a sliding size window facility; IP fragmentation is just a redundant mechanism that adds overhead.

                                                                                                    • Even DNS, which seems like it ought to be a perfect use case for UDP with packet fragmentation, doesn’t seem to work well with it in practice, and it’s being phased out in favour of just running it over TCP whenever the payload is too big. Something about it acting as a DDoS amplification mechanism, and being super-unreliable on top of that.

                                                                                                    If you’re using TCP, or any of its clones, of course this ought to be handled by the underlying stack. They promised reliable delivery with some overhead, they should deliver on it. I kind of assumed that the “subtle breakage” that @weinholt was talking about was specifically for applications that used raw packets (like the given example of ping).

                                                                                                    1. 1

                                                                                                      You list good reasons to avoid IP fragmentation with UDP and in practice people don’t use or advocate IP fragmentation for UDP. Broken PMTUD affects everyone… ever had an SSH session that works fine until you try to list a large directory? Chances are the packets were small enough to fit in the MTU until you listed that directory. As breakages go, that one’s not too hard to figure out. The nice thing about the suggested MTU discovery method is that it will not rely on other types of packets than those already used by the application, so it should be immune to the kind of operator who filters everything he does not understand. But it does mean some applications will need to help the network layer prevent breakage, so IMHO it doesn’t make jumbo frames more likely to become a thing. It’s also a band-aid on an otherwise broken configuration, so I think we’ll see more broken configurations in the future, with less arguments to use on the operators who can now point to how everything is “working”.

                                                                                            1. 9

                                                                                              Also, I remember years ago I was trying to compile C code written in the 80s with a modern version of clang. I had to fiddle with the cflags for quite a while to get the pre-C90-style code to compile. Very large portions of the code are not valid C anymore and need explicit support from the compiler.

                                                                                              1. 5

                                                                                                My experience with “digital archeology” and raising old C programs from the dead confirms this. You can write future-proof C, but then you can write future-proof anything as long as a spec and/or open source compiler exists for it.