1. 4

    Good choice of name and domain name for the website. They’re used like the plague, and the negative connotation is most welcome these days, when people do things the way they’re supposed to rather than because they have a specific reason for it.

    Why do people use floats? Honest question. I don’t know any reason for using floats in any situation.

    1. 18

      Numeric processing with high dynamic range is simpler with floating-point numbers than fixed-point numbers. In particular, they have the ability to temporarily exceed range limitations with a fair amount of headroom and only a modest loss of precision.

      1. 2

        I agree this is the kind of thing they are appropriate for. A rather specific use case.

        1. 16

          I’m not sure that “any science, physics or simulation anywhere, ever” is a very specific use case. Just not one that overlaps much with current hip new computing tech.

          1. 14

            High dynamic range = most graphics, so it’s not actually very specific

        2. 10

          They require less memory and are adequate for some kinds of programming where higher precision isn’t necessary. For instance, https://gioui.org uses them for pixel offsets because fractional pixels don’t matter beyond a point. 0.00000000001 pixels isn’t usually worth worrying about in an application’s layout.

          I also think that there are some processors on which float32 operations are faster than float64, but I don’t think that’s true of conventional x86_64 processors.

          1. 3

            I also think that there are some processors on which float32 operations are faster than float64, but I don’t think that’s true of conventional x86_64 processors.

            It’s true that there are lots of cases where you won’t see a difference at all because you’re limited by something else (e.g. the cost and latency of arithmetic can be hidden by memory latency sometimes), but I would not state this with confidence.

            When you’re cache or memory bandwidth limited, you can fit twice as many float32 numbers into each cache line.

            Vector operations on float32s typically have twice the throughput. All the vector operations in SSE and SSE2 for example come in versions that work on float32 or float64 numbers packed into 128 bit registers. The 32 bit versions operate on twice as many numbers with the same or better latency and clocks-per-instruction (according to Intel’s documentation, at least).

            A few operations (such as division) have slightly worse latency noted in Intel’s docs for float64 versions.
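
            To make the memory-footprint half of this concrete, here is a rough NumPy sketch (the array length is arbitrary, purely for illustration):

              import numpy as np

              a32 = np.ones(1_000_000, dtype=np.float32)
              a64 = np.ones(1_000_000, dtype=np.float64)

              print(a32.itemsize, a64.itemsize)  # 4 vs 8 bytes per element
              print(a32.nbytes, a64.nbytes)      # ~4 MB vs ~8 MB for the same element count
              # A 64-byte cache line holds 16 float32 values but only 8 float64 values,
              # which is where the bandwidth advantage shows up when you are memory-bound.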

            1. 2

              In order to have an insignificant error like the example you give, you are using up more memory, not less.

              Having deltas orders of magnitude smaller than the precision you need is an argument against floats, not for them. There is nothing positive in brute-forcing the maximal error by throwing useless bytes at it.

              They do have high precision around the range people use them in. What they don’t have, and I suppose this is what people mean by precision, is exactness. Given that they are created by constructors accepting decimal notation in most programming languages, most common round decimal numbers are not representable with such data types. And that is why I don’t understand why they are so ubiquitous.

              1. 8

                I don’t think most floats are created to represent decimal numbers. Some are, like when representing currency or in a calculator, but most floats are representing light or sound levels, other sensor readings, internal weights in neural networks, etc.

                I’m guessing you may work in a domain like finance where decimal numbers seem ubiquitous, but you’re not considering the wider use cases.

                1. 3

                  Yes, I do work in domains where decimal numbers are ubiquitous, and floats are the plague. I see them even for representing natural numbers “in case we want to use a smaller unit”, and other such nonsense.

                  Even when used to store sensor readings (like image or sound), the only valid reason to use them is if dividing your scale exponentially serves you better than linearly. Which I would argue is the case perhaps half the time or less.

            2. 9

              In machine learning, it’s common to optimize your parameters for space, since in those cases you typically don’t care about the precision loss compared to doubles and it lets you halve your parameter size, but you don’t want to use fixed point because your parameter range can be large. There are some approaches that involve 8-bit or 16-bit fixed point, but it’s not a universal thing at all.

              In general, though, a lot of times they’re just Good Enough, and they save you from having to think about scaling constants or writing your own multiplication algorithms due to hardware support.

              1. 7

                Are you talking about the C float type, i.e. 32-bit IEEE floating-point, or all floating point types? If the latter, what commonly available data type should people use instead? Last I checked, few languages offer fixed-point types.

                32-bit float is often used internally in audio code (for example Apple’s CoreAudio) because it has as much precision as a 24-bit integer but (a) gives you a lot more dynamic range at low volume, and (b) doesn’t turn into garbage if a calculation overflows. (I don’t know if you’ve ever heard garbage played as PCM audio, but it’s the kind of harsh noise that can literally damage speakers or people’s hearing, or at least really startle the shit out of someone.)
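
                As a rough sketch of that headroom point (NumPy here, not CoreAudio; the tones and sample rate are arbitrary):

                  import numpy as np

                  t = np.linspace(0, 1, 48000)
                  a = np.sin(2 * np.pi * 440 * t)
                  b = np.sin(2 * np.pi * 660 * t)

                  # Mixing two full-scale tones as float32: the sum overshoots 1.0, but the
                  # waveform stays intact and can simply be scaled back down before output.
                  mix = (a + b).astype(np.float32)
                  out = mix / np.abs(mix).max()

                  # The same mix as int16 wraps on overflow instead -- that's the harsh
                  # garbage described above.
                  i16 = (a * 32767).astype(np.int16)
                  wrapped = i16 + i16  # silently wraps past 32767 into negative values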

                A general reason for using floats is because a general purpose system — like the JavaScript language, or the SQLite database — doesn’t know the details of every possible use case, so providing FP math means it’s good enough for most use cases, and people with specialized needs can layer their own custom types, like BCD or fixed-point, on top of strings or integers.

                1. 5

                  JavaScript is a typical case where floating point is a bad default. Typical use cases for numerics are user-facing values such as prices, not 3D graphics.

                  1. 2

                    I haven’t heard anyone say what should be used instead. Are you saying JavaScript should have provided a BCD DecimalNumber type instead of floating point? How would people doing any sort of numerics in JS have felt about this? Doing trigonometry or logarithms in BCD must be fun.

                2. 5

                  I’ve gone through a personal rollercoaster in my relationship with IEEE floating-point, and my current sense is that:

                  a) I’d love to have computers support a better representation like Unums or Posits or something else.

                  b) What we have available in mainstream hardware is fairly decent and certainly worth using while it’s the only option. Overflow and underflow in floating-point isn’t that different from overflow in integers, and a whole lot less likely to be encountered by most of us given the far larger domain of floating-point numbers.

                  c) The real problem lies in high-level languages that hide processor flags from programmers. C and C++ have some bolted-on support with intrinsics that nobody remembers to use. Rust, for all its advances in other areas, sadly hasn’t improved things here. Debug mode is a ghastly atavism, and having release builds silently wrap is a step back from gcc’s (also sub-optimal) -ftrapv and -fwrapv flags.

                  1. 8

                    Haha as the implementor of unums and posits, I’d say unums are too much of a pain in the ass. Posits might have better performance, though if you need error analysis, it might be strictly worse. Posits had a fighting chance with the ML stuff going on but I think that ship has sailed.

                    As for ignored processor flags: I think Zig is making an effort to make those sorts of intrinsics easily accessible as special-case functions in the language, and hopefully they take on a strategy of making polyfilling easy for platforms that have partial support.

                    1. 3

                      I use floats for GPU based computer graphics. I’ve read “Beating Floating Point at its Own Game: Posit Arithmetic”, and posits sound amazing: better numerical properties and more throughput for a given amount of silicon. But I’ve not used them, and I will never use them unless they are adopted by GPU manufacturers. Which I guess won’t happen unless some new company disrupts the existing GPU ecosystem with superior new GPU tech based on posits. Something like Apple with the M1, but more analogous to SpaceX with the Falcon and Starship. I don’t see any reason for the large entrenched incumbents to gamble on new float technology that is incompatible with existing graphics standards.

                      1. 4

                        Yeap. Sorry it didn’t work out. We tried though (I even have some verilog models for posit circuits).

                    2. 3

                      Swift’s default integer arithmetic operators panic on overflow. (There are alternate ones that ignore overflow, for performance hot spots.)

                      1. 1

                        Or when you actually need that behaviour, such as in hashing functions. But you don’t want your customer ids to actually wrap around silently.

                    3. 3

                      Why do people use floats? Honest question. I don’t know any reason for using floats in any situation.

                      They’re used to represent real numbers. It’s easy and convenient to have types like float that natively represent real numbers. It’s also nice to have statically allocated, roughly word-sized representation (as opposed to arbitrary precision).

                      1. 2

                        Why? What makes them more suited than integers for representing real numbers?

                        1. 1

                          Fractions, sqrt, etc. Fixed-point arithmetic drops a huge range of precision at either the high or the low end, and is also slower for many operations.

                          1. 1

                            I don’t understand what you mean. Integers have uniform precision throughout the scale. Choose the base unit as you see fit for the precision you want and that is what you get.

                            It always “drops the same range of precision”. If you need the precision of a float around zero, then set your base unit to that and there you have it: that’s your maximum error. Unlike with floats.

                            When are integers slower and why? You always have to at least perform the same operation in the mantissa of your floats, no?

                            1. 5

                              the problem with fixed point is that you have to choose one range of precision, otherwise you’re just inventing what is likely to be a suboptimal software version of floating point. While there are (were?) cases where fixed point is acceptable, in general floating point can do better, and is faster.

                              The reason fixed point is slower boils down to the lack of hardware support for fixed point, but there are a few other reasons - efficiently and accurately computing a number of real functions often requires converting fixed point to some variant of floating point anyway.

                              In general integer operations are faster for basic arithmetic (and I really mean the basics: +, -, *); complex functions are typically made “fast” in fixed point arithmetic by having lookup tables that approximate the results, because fixed point arithmetic is typically used in places where accuracy is less important.

                              Multiplication, addition, subtraction of floating point is only marginally slower than integer arithmetic, and once you add in the shifts required for fixed point arithmetic floating point actually outperforms it.
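
                              For anyone following along, here is a minimal sketch of the shifts being referred to, using a hypothetical Q16.16 fixed-point format (16 integer bits, 16 fractional bits):

                                SCALE_BITS = 16               # Q16.16: values stored as round(x * 2**16)
                                SCALE = 1 << SCALE_BITS

                                def to_fix(x):
                                    return int(round(x * SCALE))

                                def fix_mul(a, b):
                                    # The raw product carries 32 fractional bits, so it has to be
                                    # shifted back down -- the extra step a float multiply gets for
                                    # free from the hardware's exponent handling.
                                    return (a * b) >> SCALE_BITS

                                a = to_fix(1.5)               # 98304
                                b = to_fix(0.5)               # 32768
                                print(fix_mul(a, b) / SCALE)  # 0.75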

                              1. 1

                                I have no idea what you mean by “lack of hardware support”. Manipulating integers is literally everything a processor does at a low level.

                                What are you referring to?

                                1. 1

                                  It’s not a matter of just doing integer operations, because as you say everything is fundamentally integers in a cpu. The question is how many integer operations you have to do.

                                  If you’re doing fixed point arithmetic you have to do almost everything floating point logic requires, only without hardware support. Fixed point arithmetic isn’t simply integer arithmetic, it’s integer arithmetic plus large integer work, plus shifts. And there isn’t hardware support for that, because if you’re adding hardware you may as well do floating point, which is more generally useful.

                                  1. 1

                                    Not to be stubborn but I am still not getting your point.

                                    The question is how many integer operations you have to do.

                                    Fewer than half as many as if you use floats, obviously. Whatever operations your CPU does for integers, it needs to do for the mantissa of your floats, plus handle the exponents, plus move stuff out of the way and back in place.

                                    Fixed point arithmetic isn’t simply integer arithmetic

                                    I am not sure what you think I am suggesting, but to be clear it is: reduce all your variables to integers and do only integer arithmetic. It is, in the end, everything a processor is capable of doing. Integer arithmetic. Everything builds on it.

                                    I think the confusion here is the notion of “point”. A computer is capable of representing a finite number of states. A point is useful for us humans to make things more readable. But for a computer, a number is always an element in a finite set. You suggest I need to mess around with fixed point arithmetic because I reject floats. But what I mean is: unless you hit scale limitations, there is no reason for using anything other than integers.

                                    If the confusion is how the result is presented to the user… That is a non problem. Just format your number to whatever is most human readable.

                                    1. 2

                                      Not to be stubborn but I am still not getting your point.

                                      no worries

                                      Ok, the first problem here is that you can’t reduce everything to integer arithmetic: if I am doing anything that requires fractional values I need to adopt either fixed point or floating point arithmetic. Fixed point is inherently too inflexible to be worth creating a hardware back end for in a general purpose CPU, so it has to be done in software, and that gives you multiple instructions for each operation. If you are comparing fixed point to floating point in software, fixed point generally wins, but the reality is that floating point is in hardware, so the number of instructions you are dispatching (which for basic arithmetic is the bottleneck) is lower, and floating point wins.

                                      In this case point has nothing to do with what the human visible representation is. The point means how many fractional bits are available. It doesn’t matter what your representation is, floating vs fixed, the way you perform arithmetic is dependent on that decision. Fixed point arithmetic simplifies some of this logic which is why in software implementations it can beat floating point, but it does that by sacrificing range and precision.

                                      To help clarify things let’s use concrete examples: how are you proposing 1.5 gets represented, and how do you perform 1.5 * 0.5 and represent the result? I need to understand what you are proposing :D

                                      1. 1

                                        I think the claim that precision and range are sacrificed doesn’t really hold. There is no silver bullet. The range of floats is larger because it has less precision as you get closer to the limits. Arguably, it has more precision where it is most useful, but this can be very deceiving. Include a large number in your computation and the end result might have less precision than most people would think. They look at the decimal representation with a zillion decimal places and assume a great deal of precision. But you might have polluted your result with a huge error and it won’t show. This doesn’t happen with ints. You reach range limitations faster of course… But this isn’t very common with 64 bit ints.

                                        But your final question perfectly illustrates the problem. As a programmer, you need to decide what should happen ahead of time. If you mean those values as exact values then you pretty much need a CAS to handle fractions, roots and so on. Which obviously has no use for floats. If you mean approximate values, you need to be explicit and be in charge of the precision you intend. 1.5 * 0.5 is 0.7 or 0.8; it doesn’t make sense to include more decimal places if you are not doing exact calculation.

                                        We learn this in school and my pocket TI calculator does this. If you set precision to automatic and insert 1/3, the result is zero. But if you insert 1/3.0, the result is 0.3. Why would you want more decimal places if the number cannot possibly be stored with its exact value and is derived from numbers with less precision?

                                        If you write 1.000 kg, it doesn’t mean the same as 1 kg. If you mean the first, it implies precision to the gram, and the easiest thing when writing a computer program is to just reduce to grams and proceed with integer arithmetic.
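
                                        A minimal sketch of that “reduce to grams” approach, assuming gram precision is genuinely all that is needed:

                                          mass_g = 1500         # "1.5 kg" stored as an integer number of grams
                                          half_g = mass_g // 2  # 750 g, exact

                                          # Formatting back to kilograms only happens at the human-facing edge:
                                          print(f"{half_g // 1000}.{half_g % 1000:03d} kg")  # 0.750 kg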

                                        1. 3

                                          the claim that precision and range are sacrificed doesn’t really hold

                                          This is well studied. For example, I’ve seen the results of computational fluid dynamics simulation: taking f128 to be “ground truth”, f64 gets far closer to the correct answer than any fixed64 representation.

                              2. 3

                                Consider something like 1 / x^2, where x >> 1. You have to calculate x², which will be a very large number, and then take the reciprocal, which will be a very small number. You can’t pick a single fixed-point format to cover both, and there’s no opportunity in that one calculation to switch between two formats.

                                Situations like that are common in many scientific applications, where intermediate stages of computation are much bigger or smaller than both your inputs and your final output.
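
                                A small sketch of that dynamic-range problem (the value of x is arbitrary):

                                  import numpy as np

                                  x = np.float32(1.0e6)
                                  y = np.float32(1.0) / (x * x)  # intermediate x*x is 1e12, result is 1e-12
                                  print(y)                       # both fit comfortably in float32

                                  # A single 32-bit fixed-point format can't do this: with 16 fractional bits
                                  # the largest representable value is about 32768, so x*x overflows, while the
                                  # smallest nonzero step is about 1.5e-5, so the reciprocal underflows to zero.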

                                1. 1

                                  That is when one would use floats, yes. But let’s be clear: they are common in some scientific applications, specifically chemistry. The maxint of a 32 bit integer is plentiful for most usages.

                                  64 bit processors have been the standard for over a decade. Even those situations you mention hardly need a range larger than a 64 bit integer.

                                  1. 2

                                    That is when one would use floats, yes. But let’s be clear: they are common in some scientific applications, specifically chemistry. The maxint of a 32 bit integer is plentiful for most usages.

                                    I can’t think of a scientific field which wouldn’t prefer floats to 32 bit integers. What happens when you need to find a definite integral, or RK4 a PDE, or take the determinant of a matrix?

                                    64 bit processors have been the standard for over a decade. Even those situations you mention hardly need a range larger than a 64 bit integer.

                                    If we’ve got 64 bits, then why not use a double?

                                    1. 1

                                      Regarding your first paragraph. I don’t think you are getting that I am suggesting to adjust the base unit to whatever precision delta you intend. Otherwise I don’t understand your question. Could you be clear about what exactly happens if you use floats that wouldn’t happen otherwise? They are both data types made of a discrete set representing points on the real number axis. What limitations exactly are you suggesting integers have, other than their range?

                                      As for your second paragraph, isn’t it the other way around? Isn’t the point of floats to overcome integer range and precision limits and strike a balance between both? Why would you need that if you don’t have such limitations anymore? Floats were used all the time on 8 bit processors, even for things you would use integers for, because of range limitations. We don’t need to do that on our 32 and 64 bit processors.

                                      I think there is this wrong idea that ints are meant to be used for natural numbers and such only. Which is of course a misconception.

                                      1. 1

                                        Regarding your first paragraph. I don’t think you are getting that I am suggesting to adjust the base unit to whatever precision delta you intend. Otherwise I don’t understand your question. Could you be clear about what exactly happens if you use floats that wouldn’t happen otherwise? They are both data types made of a discrete set representing points on the real number axis. What limitations exactly are you suggesting integers have, other than their range?

                                        My point is that all three of those things involve working with both very large and very small numbers simultaneously. You can’t “just set the precision delta”. Or if you can, you’d have to provide a working demonstration, because I believe it’s much harder than you’re claiming it is.

                                        Also, lots of science involves multiplying very small by very large numbers directly, such as with gravitational force.

                                        As for your second paragraph, isn’t it the other way around? Isn’t the point of floats to overcome integer range and precision limits and strike a balance between both? Why would you need that if you don’t have such limitations anymore? Floats were used all the time on 8 bit processors, even for things you would use integers for, because of range limitations. We don’t need to do that on our 32 and 64 bit processors.

                                        I think we use them for lots of reasons, and one is that you don’t need to pick a basis in advance of computation, like you do with fixed width.

                          2. 1

                            Floating-point numbers can only represent (binary) fractions, but many real numbers need to be represented by computations which emit digits.

                          3. 3

                            One of the most important reasons is that floats are invariably literals whereas “proper” decimals are usually not

                            1. 1

                              How so?

                              1. 1

                                eg in Python

                                # literal reals in python are IEEE floats
                                >>> 0.2 + 0.1
                                0.30000000000000004
                                

                                vs

                                # Decimal is a wrapper around the GMP library - ie proper numbers
                                >>> from decimal import Decimal
                                >>> Decimal("0.2") + Decimal("0.1")
                                Decimal('0.3')
                                

                                Extra syntax and extra library (even though it’s in the stdlib!) is a huge barrier. I have seen a number of real world systems be written to use floats - and suffer constant minor bugs - simply because it was easier.

                                Once or twice I have ripped out floats for decimals. It’s not too hard but you do need a typechecker to keep things straight.

                            2. 2

                              Precision degrades much more gracefully with floating point operations (which round to approximate values or saturate to 0 or inf) than with integer or fixed width operations (which truncate or overflow).

                              If you have to do work with real numbers then floats are usually best of those three options.
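
                            A quick NumPy illustration of that difference in failure modes (values are arbitrary):

                              import numpy as np

                              floats = np.array([3.0e38], dtype=np.float32)
                              print(floats * 2)  # [inf] -- overflow saturates to infinity (with a warning)

                              ints = np.array([2**31 - 1], dtype=np.int32)
                              print(ints + 1)    # [-2147483648] -- overflow silently wraps around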

                            1. 52

                              For context, this is one of the most prominent Rust programmers, of ripgrep and xsv (https://github.com/BurntSushi/xsv) fame.

                              This is perhaps a good time to reiterate Lobsters’ moderation design as it was intended. I don’t mean that the concept should be challenged, but rather that we keep changing/improving the way moderation works so it stays true to the original vision. This site set itself the goal of overcoming problems with typical moderation features found on most websites, and specifically the orange site.

                              This is, to an extent, a matter of opinion, but if we look at this soberly, virtually everyone will agree that Burntsushi was not by any measure a problematic user. Hence, the message was a clear misfire and such a feature should be rethought. Such features exist to mitigate the damage of users that repeatedly post inflammatory, condescending or offensive comments. That is not at all the case here. Whatever burntsushi’s opinions are, whatever percentage of people agree with them, or however strongly he has stated them, we’re talking about a regular user with common online etiquette, not a person who hurls insults or engages in other abusive behaviour.

                              Online discussion is suffering gravely from the politically-correct stick. Dissent from it and hellfire will rain upon you. Lobsters rejected this mindset. It doesn’t have downvotes, for example. It values any unusual opinion rather than throwing it to an angry mob. Flagging is a clearly hijackable target in this regard.

                              I don’t know how I would react to such a warning, probably wouldn’t care as much as burntsushi did, but warning a user based on a number of people essentially not agreeing with him (or possibly not liking him for whatever reason) is clearly against the philosophy of this website.

                              1. 3

                                This is, to an extent, a matter of opinion, but if we look at this soberly, virtually everyone will agree that Burntsushi was not by any measure a problematic user.

                                Agreed: not on comments that I had seen. So which comments drew the recent ire of flags/downvotes/etc? Controversial ones?

                              1. 23

                                  I guess it’s better to leave on your own terms than to get your domain blocked or get kicked out.

                                The wording on the banner is far from being a friendly advice - I’d call it antagonistic and confrontational, hostile even.

                                BTW, the code itself has been added last year in this commit.

                                  Ironically, lobste.rs was created by /u/jcs as a response to HN’s heavy-handed moderation.

                                1. 40

                                    His engagement with lobste.rs was much more polarising than burntsushi’s. The latter didn’t jump into comment sections to deliberately kick off a flame war that may not have otherwise occurred; the former did so deliberately and unashamedly. I heartily respect both their views but I can understand why they might be moderated differently.

                                  1. 13

                                    Thank you for saying this in a far more polite way than I was about to.

                                    1. 8

                                      And why would that result in banning the domain? Drew wasn’t even the one posting his blog posts here and they were always upvoted.

                                      1. 11

                                        Because many of his posts were explicit flamebait; look at the last two posts on that domain for instance.

                                        1. 2

                                          Then clearly this community is not what the admin intended it to be before banning this domain because the stories from that domain were routinely getting above 30 points which is rare for most stories. It is time to shut this whole website down and just change it to be a private RSS feed of the admin.

                                          1. 3

                                            It’s an attempt to avoid the Repugnant Conclusion; the mere addition of a steady attractor of upvotes can degrade the quality of life for everybody else.

                                          2. 1

                                            Did you mean to include the one about a finger server and io_uring as one of the two? I found it interesting and informative.

                                            1. 5

                                                I meant what was submitted to Lobsters, which were the final straws.

                                              1. 2

                                                Thanks for the clarification. Not sure why I didn’t read it that way.

                                        2. 1

                                          This was just an example - there’s more in the moderation log if you care to look.

                                        3. 15

                                          Wow, this ban message from your second link:

                                          Please go be loudly disappointed in the entire world (and promote sourcehut) somewhere else.

                                          I really hope that this happened at the end of a process of attempting to politely engage, rather than as the immediate response. That reads like something from a burned-out moderator who needs to take a break.

                                          1. 26

                                            This was a sustained pattern of behavior over months.

                                            1. 2

                                              That reads like something from a burned-out moderator who needs to take a break.

                                              Pro tip: moderators are always burnt-out.

                                            2. 8

                                              oh wow, Drew got banned ..

                                              I don’t like anyone getting banned for anything. I have a lot of respect for how much DeVault puts into his open source contributions and am envious he can live off of it. That being said, he banned me on Mastodon forever ago because I reposted an open letter a professor made during the height of the 2020 US riots. We had a discussion over DMs and he blocked me in the end.

                                              The more I learn about some of the stuff he’s said and done, the more I realize I can still respect his work while still agreeing with all the others who’ve come to the conclusion that his actions are often inflammatory or childish. I’m not surprised he’s banned. He left the Fediverse a few months back too.

                                              1. 13

                                                Yup. I was actually pretty interested in Sourcehut, but in the end I didn’t really want to use a service run by someone that hot-headed.

                                                1. 1

                                                  because I reposted an open letter a professor made during the height of the 2020 US riots. We had a discussion over DMs and he blocked me in the end.

                                                  What was the nature of the letter?

                                                2. 7

                                                  There are two issues here:

                                                  • banning the user
                                                  • banning the domain

                                                  The reason for banning the user account was reported by the admin as apparently rude comments/encouraging arguments/arguing? The comments were usually upvoted though as far as I remember so I think the decision was mostly arbitrary.

                                                  The domain was blocked just because the admin banned the author from lobsters, not because there was something wrong with the content on that website. Drew wasn’t even the one posting his blog posts here.

                                                  Therefore at least one of those decisions is nonsensical.

                                                  You can try to create a website with semi-transparent moderation policies but that will never fix the standard power abuse by moderators like in this situation. The personal grievances usually win and no moderation log will fix this. The community enjoyed the content and @pushcx didn’t => the comments and the domain get nuked off the website.

                                                  I tried to get an answer at least to why the domain was banned but of course I never did (in the name of transparency).

                                                  1. 3

                                                    The reason for banning the user account was reported by the admin as apparently rude comments/encouraging arguments/arguing? The comments were usually upvoted though as far as I remember so I think the decision was mostly arbitrary.

                                                    The domain was blocked just because the admin banned the author from lobsters, not because there was something wrong with the content on that website. Drew wasn’t even the one posting his blog posts here.

                                                  I disagree with your opinion that his behavior on the site was not rude, though I didn’t look closely at all of his posts so I can’t say for certain. What I do agree with you on is the domain ban: it seemed unclear and arbitrary. Moreover, as you mentioned, a domain ban affects much more than just a user, it affects all content on that domain.

                                                    1. 1

                                                      Negative comments are deleted when users are banned or leave; you won’t find any of his egregious comments here.

                                                  2. 5

                                                    For my sins I’m tracking every submission to lobste.rs.

                                                  Here’s a gist with an extract of submissions matching ‘drewdevault’ in the URL. I consider a comments/score ratio above 1.25 “controversial”.

                                                    Hopefully this can give a sampling of how Devault’s content was received by the community here.

                                                  1. 3

                                                    Very cool! I had actually wanted to experiment with Rust to do a simple 4-op FM synth on an ARM microcontroller.

                                                    For anyone not familiar with FM synth sounds… you’ve definitely heard it if you listened any pop music from the 80s - either from the Synclavier or the famous Yamaha DX7. Yamaha’s cost-reduced 4-op FM synths (like the DX100) were also a staple of 90s house music.

                                                    1. 4

                                                      Or played on a Sega Genesis / Mega Drive.

                                                      1. 3

                                                        Can we get an in-browser version of the OP-1? ;)

                                                        1. 5

                                                          You just want the cow, be honest with yourself. :-)

                                                          But more seriously, I got a similar idea about making a sampler in the browser à la MPC or SP404. Not sure if the workflow would fit a no-pad, no-touch device. Or going back to a tracker like Renoise or Sunvox that fits the computer interface.

                                                        2. 2

                                                          You should def. go for it! It would be really cool to get something running on a microcontroller like that, and Rust seems like the perfect language to make that happen.

                                                        1. 3

                                                          Language facilities like defer can be considered a mitigation for some causes of use-after-free (and double-free?).

                                                          1. 11

                                                            That’s weird. My XPS 13 has been extremely well behaved - suspends / resumes flawlessly, wifi & bluetooth just work. Touchpad is fine. etc etc.

                                                            It’s kind of refreshing to not have to work round some random piece of hardware that just doesn’t work for obscure reasons.

                                                            Maybe it’s the dual GPU XPS laptops that are particularly bad? Or have they simply got worse again since I bought mine?

                                                            1. 5

                                                              An XPS 13 was the inspiration for this rant. I wasted 5 hours on it before washing my hands of the damn thing. Integrated GPU system.

                                                              1. 9

                                                                Which model? I have an XPS 13 9360 and haven’t had any problems running stock Fedora on it. Also curious what the problem is, I’m currently looking to buy a second laptop. My pinebook pro isn’t really working quite well enough to be that yet, and the XPS 13 is currently the top contender.

                                                                1. 4

                                                                  yikes.. any more info on this? i have an 8550 with win10 and I’ll be moving to Linux soon, but now is a good time to switch it if I’m gonna have issues

                                                                  1. 2

                                                                    I have the 9360 and an older 9343. I’ve had an ubuntu release on each since I received ’em. I think the 9343 had some functionality challenges out of the box, but there were BIOS fixes available to be applied by the time I purchased it.

                                                                    The integrated GPU is underpowered for games but otherwise I haven’t encountered any issues with it or anything else with the laptop, really.

                                                                    1. 3

                                                                      Currently I’m running a 9360 & used a 9343 at work before that. Both were / are perfectly well behaved.

                                                                      (I should have bought a 16Gb model though - 8Gb is a bit tight for dev work in the modern world sadly. Compiling anything involving LLVM is an exercise in patience.)

                                                                      1. 3

                                                                        8Gb is a bit tight for dev work in the modern world sadly. Compiling anything involving LLVM is an exercise in patience.

                                                                        Yeah, amen! My 9360 has 8GB and I use it for Chrome and gnome-terminal so I can remotely access a desktop with 32GB. I think I tried building clang+llvm once on the XPS, but never again.

                                                                        1. 4

                                                                          I think I tried building clang+llvm once on the XPS, but never again.

                                                                          It’s doable but you have to radically reduce the parallelism of the build, otherwise it eats all your memory & goes into swap hell. As a result it takes quite a while.

                                                                  2. 3

                                                                    How much of this was an issue with the hardware itself vs. trying to use linux on it?

                                                                    1. 5

                                                                      I have no idea. I only tried Linux and that should always be enough.

                                                                      1. 6

                                                                        Exactly, especially for a so called “developer” edition. You can’t say it’s a developer machine and then ignore about half the target audience. In the nineties/early naughts, I expected to be marginalized as a Linux user, but nowadays it’s the Windows developers that are often regarded as quaint (assuming web or mobile development; game or desktop development is a whole different ballpark of course).

                                                                  3. 5

                                                                I only run Linux so I can’t vouch for BSD/Plan 9/Haiku compatibility but I’ve been using higher-end Dell laptops for the past couple of years (from the Latitude and now Precision lines) and have been mostly happy with them. My only real complaints revolve around the keyboards, but nearly all laptop keyboards are awful in their own way.

                                                                    Some Lobsters have a seething hatred for all things Intel for whatever reason but when I spec a laptop, I look for one with an Intel CPU, Intel GPU (I do very little gaming), and Intel wifi because whatever their other faults, Intel is awesome at writing and maintaining Linux kernel drivers.

                                                                    I would like to check out some of the newer Thinkpad models but Lenovo’s website is such garbage that I can’t even tell which models they still sell these days.

                                                                    1. 4

                                                                      Intel is awesome at writing and maintaining Linux kernel drivers.

                                                                  It is very good, but not perfect. My dell precision was having random freezes for months due to a faulty intel GPU driver. This is a well-known bug in the driver that has been going on for months, without an available solution yet. I had to switch on my nvidia graphics (which I had never used on my lab-issued laptop) because the intel GPU was unusable.

                                                                      1. 4

                                                                        I’ve solved a lot of flickering and other weirdness on my old Toshiba laptop by uninstalling the intel driver and letting the system fall back to a generic modesetting driver. Debian and others have made this the default since.

                                                                    2. 2

                                                                      That’s weird. My XPS 13 has been a bastard of a thing to deal with. Different USB-C ports seem to have different capabilities each time it resumes (or is that reboots?) and I never know if my external display is going to appear as DP-1 or DP-2. Admittedly, I have (mostly) working suspend/resume, don’t use bluetooth and never buy dual-GPU laptops to avoid that rats’ nest of trouble.

                                                                      1. 3

                                                                        The 9360 only has one USB-C port, so I didn’t have this problem :)

                                                                    1. 38

                                                                      Rust.

                                                                      The dev experience is so much nicer than my usual C/C++. After spending a lot of time writing and doing code reviews of C, C++, and rust, I am pretty convinced that it is much easier to write correct code the first time in rust than it is in the others, and rust has equally nice performance properties but is much easier to deploy.

                                                                      I spend most of my day working on high performance network software. I care about safety, correctness, and performance (in that order). The rust compiler pretty much takes care of the first item without any help from me, makes it very easy to achieve the second one, and is just as good as the alternatives for the third.

                                                                      1. 6

                                                                        I’m curious if you’ve ever tried another — non C/C++/Rust — language (anything garbage collected or dynamically typed) for projects where you don’t necessarily care about the fastest runtime? Is that ever relevant, or do you really only work on “high performance network software”?

                                                                        1. 8

                                                                      I work in games, and my experience is very similar to mortimer’s. I would go Rust with no hesitation.

                                                                          I’ve done a lot of C# with Unity, and quite a bit of Go. I’d pick Rust over both of them any day of the week.

                                                                      The big thing with C# in games is that you lack control, and also generally have to do more memory management than even C++; working around the garbage collector is not fun.

                                                                          1. 7

                                                                            Sure, there is some stuff where performance doesn’t matter too much, and for those we’re free to choose something else. Python is pretty popular in this space, though even for these things I’d still consider using Rust instead just because the compiler makes it harder to screw up error handling and such.

                                                                            I did a transparent network proxy in ruby once, and that was super nice because ruby is super nice, but if I were to do it again today then I’d pick Rust. Most of the code wasn’t something you’d get from a library, and the vast bulk of bugs I had to handle would have been squashed by a better type system (this thing that is usually a hash is suddenly an array!) and better error handling (this thing you thought would work did not, and now you have a nil object!). Ruby (also python) just don’t help you at all with these things because it’s dynamically typed and will usually return nil to indicate error (or python will sometimes throw, which is just offensive). This paradigm where the programmer has to manually identify all the places where errors can happen by reading the documentation, and then actually remember to do the check at runtime is really failure prone - inevitably someone does not remember to check and then you get mystery failures at runtime in prod. Rust’s Result and Option types force the programmer to deal with things going wrong, and translate the vast bulk of these runtime errors into compile time errors (or super-obvious-at-code-review-time unwrap()s that you can tell them to go handle correctly).

                                                                            I haven’t really done any professional Java dev, but the people I know who do Java dev seem happy with it. They don’t have any complaints about performance - and they deploy in places where performance matters. When they do complain about Java, they complain about the bloat (?) of the ecosystem. FactoryFactoryFactories, 200 line backtraces, needless layers of abstraction, etc.. I don’t think they’re looking to change, so they must be happy enough. When I did Java in school I remember lots of NullPointerExceptions though, so I assume the same complaint I have about ruby / python / C / C++ error handling would apply to Java.

                                                                            For personal projects, it was usually ruby (because ruby is super nice), but lately all the new stuff is Rust because the error handling is so much better and it’s easier to deploy. Even when I don’t care about it being fast I do care about it being correct.

                                                                          2. 1

                                                                            Another reason: Attract good developers!

                                                                      That’s the flipside of all the good technical reasons, plus actually some of the bad ones - learning curve and newness.

                                                                            There are too few Rust and Haskell jobs, ok many C and C++ jobs, and absurdly many Java jobs.

                                                                            1. 11

                                                                              In order to validate the ‘learning curve for newbies’ concern, I actually gave Rust to a new employee (fresh out of uni) to see what would happen. They had a background in Java and hadn’t heard of Rust before then. I gave them a small project and suggested they try Rust, then sat back to see what happened. They were productive in about a week, had finished the project in about two weeks, and that project has been running in production ever since without any additional care or feeding for over a year now. This experience really cemented for me that Rust isn’t that hard to learn, even for newbies. The employee also seemed to enjoy it (this is a bit of an understatement), so if new staff can be both productive and happy then I’m not too concerned about learning curves and stuff.

                                                                              1. 4

                                                                                Vast majority of people that write about Rust online mention fighting the borrow checker. Your new folks didn’t have that problem?

                                                                                1. 8

                                                                            Having helped both a few co-workers and a fresh intern with answering rust questions as they learned it, I’ve come up with a theory: fighting the borrow checker is a symptom of having internalized manual memory management in some previous language before learning rust. And especially severe cases of it come from having internalized some aspect of manual memory management wrong. People who don’t have that are much more likely to be open to listening to the compiler than people who “know” they’re already implementing it right & they just need to “convince the compiler”.

                                                                                  1. 8

                                                                                    I find that I can often .clone() my way out of problems for now and still be correct.

                                                                                    Sometime later I can revisit the design to get better performance.

                                                                                    1. 4

                                                                                      Oh yes, new people fight the borrow checker but it just isn’t that bad (at least not in my experience) and they seem to get past it quickly. The compiler emits really excellent error messages so it’s easy to see what’s wrong, and once they get their heads around what kinds of things the borrow checker is concerned about they just adapt and get work done.

                                                                                      1. 3

                                                                                        I felt that I wasn’t fighting it. It was difficult, but the compiler was so helpful that it felt more like the compiler was teaching me.

                                                                                        (That said, I was coming from Clojure, which has terrible compilation errors.)

                                                                                        1. 1

                                                                                          Not sure about his employee’s perspective. But, I’m new to writing in Rust, and I think the frustration with the borrow checker is not understanding (or maybe just not liking?) what it is trying to do. My experience has been that at first I wanted to just try to build something in Rust and work through the documentation as I go. In that case the borrow checker was very frustrating and I wanted to just stop. But, instead I worked my way through the Rust book and the examples. Now I’ve picked up the project again, and it isn’t nearly as frustrating because I understand what the borrow checker and ownership stuff is trying to do. I’m enjoying working on the project now.

                                                                                        2. 2

                                                                                          This experience really cemented for me that Rust isn’t that hard to learn, even for newbies.

                                                                              Counter anecdata – we have a team at $job that works entirely in rust, and common complaints from the team are:

                                                                                          1. The steep learning curve and onboarding time for new team members
                                                                                          2. The Very Slow compile times

                                                                              We aren’t hiring many folks direct from uni though – so perhaps counter-intuitively, having more experience in other languages may make learning rust more difficult for some, and not less? Unsure.

                                                                                    1. 23

                                                                                I think Josh addresses a good point here: systemd provides features that distributions want, but that other init systems actively treat as non-features. That’s a classic culture clash, and it shows in the systemd debates - people hate it or love it (FWIW, I love it). I’m also highly sympathetic to systemd’s approach of shipping a software suite.

                                                                                Still, it’s always important to have a way out of a component. But the problem here seems to be that the scope of an init system is ill-defined and there are fundamentally different ideas about where the Linux world should move. systemd moves away from the “kernel with rather free userspace on top” model, others don’t agree.

                                                                                      1. 17

                                                                                        Since systemd is Linux-only, no one who wants to be portable to, say, BSD (which I think includes a lot of people) can depend on its features anyway.

                                                                                        1. 12

                                                                                          Which is why I wrote “Linux world” and not “Unix world”.

                                                                                          systemd has a vision for Linux only and I’m okay with that. It’s culture clashing, I agree.

                                                                                          1. 6

                                                                                            What I find so confusing - and please know this comes from a “BSD guy” and a place of admitted ignorance - is that it seems obvious the natural conclusion of these greater processes must be that “Linux” is eventually something closer to a complete operating system (not a bazaar of GNU/Linux distributions). This seems to be explicitly the point.

                                                                                            Not only am I making no value judgement on that outcome, but I already live in that world of coherent design and personally prefer it. I just find it baffling to watch distributions marching themselves towards it.

                                                                                            1. 6

                                                                                              But it does create a monoculture. What if you want to run service x on BSD or Redox or Haiku? A lot of Linux tools can be compiled on those operating systems with a little work, sometimes for free. If we start seeing hard dependencies on systemd, you’re also hurting new-OS development. Your service won’t be able to run in an Alpine docker container either, or on distributions like Void Linux, or default Gentoo (although Gentoo does have a systemd option; it too is in the mess of supporting both init systems).

                                                                                              1. 7

                                                                                                We’ve had wildly divergent Unix and Unix-like systems for years. Haiku and Mac OS have no native X11. BSDs and System V have different init systems, and OpenBSD has extended libc for security reasons. Many System V based OSes (looking at you, AIX) take POSIX to malicious-compliance levels. What do you think ./configure is supposed to do if not cope with this reality?

                                                                                            2. 2

                                                                                              Has anyone considered or proposed something like systemd’s feature set but portable to more than just linux? Are BSD distros content with SysV-style init?

                                                                                              1. 11

                                                                                                A couple of pedantic nits. BSDs aren’t distros. They are each distinct operating systems that share a common lineage. Some code and ideas are shared back and forth, but the big 3, FreeBSD, NetBSD and OpenBSD, diverged in the 90s. 1BSD was released in 1978. FreeBSD and NetBSD forked from 386BSD in 1993, and OpenBSD from NetBSD in 1995. So that’s about 15 years, give or take, of BSD before the modern BSDs forked.

                                                                                                Since then there has been 26 years of separate evolution.

                                                                                                The BSDs also use BSD init, so it’s different from SysV-style. There is a brief overview here: https://en.m.wikipedia.org/wiki/Init#Research_Unix-style/BSD-style

                                                                                                1. 2

                                                                                                  I think the answer to that is yes and no. Maybe the closest would be (Open)Solaris SMF. Or maybe GNU Shepherd or runit/daemontools.

                                                                                                  But IMNSHO there are no good arguments for the sprawl/feature creep of systemd - and people haven’t tried to copy it, because it’s flawed.

                                                                                              2. 6

                                                                                                It’s true that systemd is comparatively featureful, and I’ll extend your notion of shipping a software suite by justifying some of its expansion into other aspects of system management in terms of it unifying a number of different concerns that are pretty coupled in practice.

                                                                                                But, because of how this topic often goes, I feel compelled to provide the disclaimer that I mostly find systemd just fine to use on a daily basis. As I see it, the problem isn’t that it moves away from the “free userspace” model, but that its expansion into other areas seems governed more by political than by technical concerns, and with that comes an incentive to add extra friction to having a way out. I understand that there’s a lot of spurious enmity directed at Poettering, but I think the blatant contempt he’s shown towards maintaining conventions when there’s no cost in doing so, or even his sneering at simple bug reports, is good evidence that there’s a sort of embattled conqueror’s mindset underlying the project at its highest levels. systemd the software is mostly fine, but the ideological trajectory guiding it really worries me.

                                                                                                1. 1

                                                                                                  I’m also highly sympathetic to systemd’s approach of shipping a software suite.

                                                                                                  What do you mean here? Bullying all distro maintainers until they are forced to set up your software as default, up to the point of provoking the suicide of people who don’t want to? That’s quite heavy sarcasm you are using here.

                                                                                                  1. 12

                                                                                                    up to the point of provoking the suicide of people who don’t want to

                                                                                                    Link?

                                                                                                    1. 25

                                                                                                      How was anyone bullied into running systemd? For Arch Linux this meant we no longer had to maintain initscripts and could rely on systemd service files, which are a lot nicer. In the end it saved us work, and that’s exactly what systemd tries to be: a toolkit for initscripts and related system-critical services, and now also a way of unifying Linux distros.

                                                                                                      1. 0

                                                                                                        huh? Red Hat and Poettering strongarmed distribution after distribution and stuffed the Debian developer ballots. This is all a matter of public record.

                                                                                                        1. 10

                                                                                                          stuffed the debian developer ballots

                                                                                                          Link? This is the first time I am hearing about it.

                                                                                                          1. 5

                                                                                                            I’m also confused; I followed the Debian process and found it very thorough and good. The documents coming out of it are still a great reference.

                                                                                                      2. 2

                                                                                                        I don’t think skade intended to be sarcastic or combative. I personally have some gripes with systemd, but I’m curious about that quote as well.

                                                                                                        I read the quote as being sympathetic towards a more unified init system. Linux sometimes suffers from having too many options (a reason I like BSD). But I’m not sure if that was the point being made.

                                                                                                        Edit: grammar

                                                                                                        1. 5

                                                                                                          I value pieces that are intended to work well together and come from the same team, even if they are separate parts. systemd provides that. systemd has a vision and is also very active in making it happen. I highly respect that.

                                                                                                          I also have gripes with systemd, but in general I like to use it. And as long as no other project comes along with an attitude to move the world away from systemd - by being better, and also by being better at convincing people - I’ll stick with it.

                                                                                                        2. 2

                                                                                                          I interpreted it as having fewer edges where you don’t have control. Similar situations happen with omnibus packages that ship all dependencies and the idea of Docker/containers. It makes it more monolithic, but easier to not have to integrate with every logging system or mail system.

                                                                                                          If your philosophy of Linux is Legos, you probably feel limited by this. If your philosophy is platform, then this probably frees you. If the constraints are doable, they often prevent subtle mistakes.

                                                                                                      1. 2

                                                                                                        He’s not wrong about some facets of what he says, particularly that given a real-world identity one can just rely on the existing legal system to enforce contracts. But here’s the thing: the existing legal system is really, really expensive (in the U.S., we spend about 38% of GDP on government). Is it possible to use something like a blockchain to provide many of the same benefits more cheaply and/or more accountably and/or with less susceptibility to corruption?

                                                                                                        I don’t know, but it’s an interesting question. We’ll always need some form of physical government to provide physical security, but do we need a government to provide financial security?

                                                                                                        1. 4

                                                                                                          Is it possible to use something like a blockchain to provide many of the same benefits more cheaply and/or more accountably and/or with less susceptibility to corruption?

                                                                                                          I’m a big cryptocurrency fan but I don’t think that anything like blockchains could possibly provide any government services in any sort of useful fashion. It very quickly degrades to real world identity problems that cannot be solved well without trust. If you have trust, you don’t need a blockchain.

                                                                                                          1. 3

                                                                                                            the existing legal system is really, really expensive (in the U.S., we spend about 38% of GDP on government)

                                                                                                            Enforcing legal rules (courts and law enforcement) is a small part of that budget.

                                                                                                            https://www.thebalance.com/u-s-federal-budget-breakdown-3305789

                                                                                                            The discretionary budget will be $1.426 trillion. More than half goes toward military spending, including the Department of Veterans Affairs and other defense-related departments. The rest must pay for all other domestic programs. The largest are Health and Human Services, Education, and Housing and Urban Development.

                                                                                                          1. 13

                                                                                                            I feel like docker is too commercial. I worry about lock-in and how this business might turn.

                                                                                                            I have experimented with it and it seems like a nice feature set. I am also encouraged by appc/rkt/buildah/podman but I have not yet seen a really simple tutorial (though I admit I haven’t looked very hard). It doesn’t seem to me like these have seen wide use – likely because of docker’s popularity.

                                                                                                            1. 2

                                                                                                              Podman is often just alias docker=podman once you get it installed…. Quite the escape hatch if docker implodes.

                                                                                                            1. 4

                                                                                                              Is this the extent of the announcement?

                                                                                                              To the MIT community,

                                                                                                              I am resigning effective immediately from my position in CSAIL at MIT. I am doing this due to pressure on MIT and me over a series of misunderstandings and mischaracterizations.

                                                                                                              Richard Stallman

                                                                                                              If so, it’s interesting, but this news/announcement is really, really light on information, and hardly seems worth posting here.

                                                                                                              1. 7

                                                                                                                It’s from Stallman himself. Definitely worth posting and discussing IMO.

                                                                                                                But an article that covers the details would be valuable for context, I agree. Especially for those of us who haven’t followed this issue.

                                                                                                                Edit: for context: https://www.vice.com/en_us/article/mbm74x/computer-scientist-richard-stallman-resigns-from-mit-over-epstein-comments

                                                                                                                1. 6

                                                                                                                  Thing that bothers me most is the combination of these statements:

                                                                                                                  “When someone else in the email thread pointed out that victim Virginia Giuffre, who was 17 when she was forced to have sex with AI pioneer Marvin Minsky, Stallman said “it is morally absurd to define ‘rape’ in a way that depends on minor details such as which country it was in or whether the victim was 18 years old or 17.””

                                                                                                                  “Stallman is known as a pioneer of the free software community and movement, which is closely related to the open source movement.”

                                                                                                                  Great fucking P.R. for the “free software community and movement.” They probably should’ve said something that indicated he was different than many or most of the other people involved in F/OSS. Media wise, this might be making casual readers suspect this is a larger problem among advocates for these types of software.

                                                                                                                  1. 7

                                                                                                                    Great fucking P.R.

                                                                                                                    Thankfully open source as a concept and movement has become ubiquitous enough that Stallman’s writings won’t damn it.

                                                                                                                    Remember, this is the same guy who wrote about legalizing possession of child pornography (2012) and stated that bestiality, necrophilia and child pornography are illegal due to prejudice and narrowmindedness. He’s spent a lot more time talking about possession of child porn than one would expect. It’s rather unsettling to see the “It’s illegal because people are narrowminded!” line of argument. (No Richard, child porn specifically is illegal because children cannot consent. Ever. And such is the fruit of the poisoned tree…)

                                                                                                                  2. 5

                                                                                                                    Why is it worth posting or discussing?

                                                                                                                    Is it actionable? Unless you’re RMS, not really.

                                                                                                                    Is it going to foster productive discussion? Well, if you don’t agree with what happened or why, it seems like posting here to that effect would at best be irrelevant and at worst could get you flagged…or worse. It’s hard to have a discussion when everyone is violently agreeing with each other–that’s more of a circlejerk.

                                                                                                                    Is it something the practicing technologist can use? Unless we’re so cynical as to say “This is why you keep your head down in leadership roles”, no.

                                                                                                                    Is it covered elsewhere? Extensively, to varying degrees of accuracy.

                                                                                                                1. 2

                                                                                                                  Thanks for sharing, this is a handy reference.

                                                                                                                  Aside: IMO logical expressions with == should be bool and not int.

                                                                                                                  1. 2

                                                                                                                    This depends on the C standard – bool isn’t in C89, so it can become a portability issue.

                                                                                                                    1. 1

                                                                                                                      Y’know – I was tempted to add a snarky “unless you need to be compatible with C89 <scoff>”. I will acknowledge that there are some (hopefully very very few) legit use cases where bool support is missing.

                                                                                                                      1. 2

                                                                                                                        Indeed, and I’m inclined to agree otherwise.

                                                                                                                        I’ve worked on a couple projects that needed C89 compatibility (mostly in embedded), and I begrudgingly keep greatest at -std=c89 just so that it’s always an option. There are some features that are only available when it’s built with >= C99 though.

                                                                                                                  1. 12

                                                                                                                    Each clock cycle uses a little bit of power, and if all I’m doing is typing out Latex documents, then I don’t need that many clock cycles. I don’t need one thousand four hundred of them to be precise.

                                                                                                                    I recall there being some paper or article showing that doing the work quicker at a higher frequency/power draw and then dropping back to a low-power state costs less energy than doing the work slower at a lower frequency the whole time. Basically the CPU would spend more time in sleep states with an ‘on demand’ type governor (run at low freq, but elevate to high freq when utilization is high) vs a governor that always ran at the highest p-state (lowest freq) all the time. I’m having trouble finding the specific paper/article though… I’ll keep searching.

                                                                                                                    1. 15

                                                                                                                      Search for “race to idle”

                                                                                                                      1. 8
                                                                                                                      2. 4

                                                                                                                        I remember reading the same thing, but I think it had the added context of being on mobile. Having the CPU awake also meant having other hardware like the radios awake, because it was processing notifications and stuff. In this case, the rest of the hardware is staying awake regardless, so I think it’s really just reducing the number of wasted CPU cycles.

                                                                                                                        I’d be interested if you or someone else could find the original source for this again to fact check!

                                                                                                                        1. 3

                                                                                                                          The counterbalance here is the increasing cost for each 100MHz as frequencies get higher. This is old, but https://images.anandtech.com/doci/9330/a53-power-curve.png shows the measured shape of a real curve. This StackExchange response helps explain why it isn’t flat.

                                                                                                                          So factors around race-to-idle include how that power-frequency curve looks, how much stuff you can turn off when idle (often not just the cores; see artemis’s comment) and how CPU-bound you are (2x freq doesn’t guarantee 2x speed because you spend some time waiting on main memory/storage/network).

                                                                                                                          Some of that’s workload-dependent, and the frequency governor doesn’t always know the workload when it picks a frequency. Plus you’ve got other goals (like try to be snappy when it affects user-facing latency, but worry less about background work) and other limits (power delivery, thermals, min and max freq the silicon can do). So optimizing battery life ends up really, uh, “fun”!

                                                                                                                          (Less related to race-to-idle, but the shape of the power curve also complicates things at the low end; you can’t keep a big chip running at all at 1mW. So modern phones (and a chip Intel is planning) can switch to smaller cores that can draw less power at low load. Also, since they’re tiny you can spam the chip with more of them, so e.g. Intel’s planning one large core and four small. Fun times.)

                                                                                                                          1. 1

                                                                                                                            Ooh, today somebody happened to post a couple pretty charts of recent high-end Intel desktop chips’ power/frequency curves, and charted speed per watt as a bonus. They also fit a logarithmic curve to it, modeling power needs as growing exponentially to hit a given level of perf, and it looks like it worked reasonably well.

                                                                                                                          2. 1

                                                                                                                            Yes, I remember reading the same thing. But maybe a CPU this old isn’t as efficient at transitioning in and out of low-power states? Just a guess, assuming his claim of +1 hour is true.

                                                                                                                            1. 1

                                                                                                                              Maybe it’s not linear. The comparison between ‘low freq’ and ‘high freq’ in that study could be something like 40% (of the available clock range) vs 90%? And maybe at 1% the CPU power draw is so much lower that it’s even better than race-to-idle (but perhaps considered an unlikely/uncommon configuration).

                                                                                                                              1. 1

                                                                                                                                The power consumption of a CMOS circuit scales with the square of the operating voltage, though, so intuitively I would expect 100 ms at 0.5V to be more energy-efficient than 50 ms at 1.0V. Chips are extremely complex devices, though, and I’m probably ignorant of a power-saving strategy or physical effect that side-steps this. Please let me know when you find that article - I’m curious to see which of my assumptions have been violated.
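
                                                                                                                                To put a rough number on that intuition (a back-of-envelope sketch that only counts dynamic switching energy and ignores leakage plus whatever the rest of the platform draws - which is exactly what race-to-idle arguments lean on), the energy for a fixed amount of work depends on voltage but not on how long you take, with α the usual activity factor and C the switched capacitance:

                                                                                                                                E_dyn ≈ α · C · V² · N_cycles

                                                                                                                                Same work means the same N_cycles, so 100 ms at 0.5 V costs roughly a quarter of the dynamic energy of 50 ms at 1.0 V; the open question is whether leakage and platform power over the extra 50 ms eat up that saving.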

                                                                                                                                1. 1

                                                                                                                                  I found it and put the link in another comment

                                                                                                                                2. 1

                                                                                                                                  This makes a lot of sense and mirrors my experience with an X220 on OpenBSD. I got about one more hour of battery life (about 5 hours -> 6 hours ish) just by allowing it to run at a higher frequency but still spend 95% of its time at a lower one.

                                                                                                                                  Also, while the tools built into OpenBSD do a good job of controlling power usage, I found I was getting even better battery life in a standard Arch Linux install with no configuration of powertop or anything else.

                                                                                                                                1. 0

                                                                                                                                  #ifdef APPLE y’know truncating an mmap’d file actually leads to a lot of undefined behavior

                                                                                                                                  … file can be truncated overwritten before tail can actually read it …

                                                                                                                                  Yeah, at the outset it sounded like the user was expecting too much from tail -f and it did not sound like an OS bug to me. Regardless of the BSD manpage.

                                                                                                                                  1. 1

                                                                                                                                    Does anyone know about the kernel requirements for running podman (and its related software)? What’s the oldest ubuntu release that I could use it on?

                                                                                                                                    EDIT: nm it appears that the PPA includes trusty, bionic, xenial.

                                                                                                                                    1. 14

                                                                                                                                      Occam’s razor: which is more likely?

                                                                                                                                      1. You’ve been targeted by an undetectable son-of-Stuxnet cyberweapon.

                                                                                                                                      2. A cloud chat company is lying about their security.

                                                                                                                                      1. 1

                                                                                                                                        This fails to take into account the risk of delaying the remedy.

                                                                                                                                        1. 2

                                                                                                                                          There’s also the cost of the remedy to consider.

                                                                                                                                        2. 1

                                                                                                                                          I read that as advertising, as a statement of particularly high company security values.

                                                                                                                                        1. 2

                                                                                                                                          If the host sees that the user is attempting to use DisplayPort Alternate Mode with the wrong cable, rather than a silent failure (ie, the external display doesn’t light up), the OS should tell the user via a notification they may be using the wrong cable, and educate the user about cables with the right logo.

                                                                                                                                          Anyone know how this would actually work? Presumably, if the wires aren’t there, one end won’t know the other end is trying to send signal down them. Or is all the negotiation to set up alternate modes done over the 2.0 wire pair?

                                                                                                                                          1. 2

                                                                                                                                            I see “Uses the USB-C Power Delivery messaging protocol over the USB-C Configuration Channel (CC) to negotiate into and exit out of DisplayPort Alt Mode signaling.” in some slides for a webinar on the topic.

                                                                                                                                          1. 6

                                                                                                                                            This describes the different compliant varieties, but to make things yet more complicated, it sounded like for some time there were a lot of manufacturers producing incorrectly-terminated cables. Benson Leung was naming-and-shaming them for a while, but I don’t know if that kind of scrutiny is necessary anymore. http://bensonapproved.com redirects and I can’t seem to access that site anymore.

                                                                                                                                            1. 6

                                                                                                                                              For the record Benson Leung is the author of this very post.

                                                                                                                                              1. 2

                                                                                                                                                I bought a cord he approved, and it was a PoS. I think they got a bump from his endorsement and then cut quality to reap the profits. He’s only one person and he can’t continually test cables at his own expense; the USB licensors really need to implement some sort of QA process.

                                                                                                                                              1. 5

                                                                                                                                                It is mostly a change that empowers developers; but it also is a change that will cause some existing code to break.

                                                                                                                                                This has been my impression of Rust, and why I’ve avoided doing much with it. It seems like it’s just a constantly moving target. I remember in its earlier days whole features (like classes) just being nuked out of existence. It may have settled down some now; I haven’t been keeping as close an eye on it.

                                                                                                                                                1. 27

                                                                                                                                                  Since 1.0, this is the first major breaking change that I can recall. It’s also worth pointing out that any code that does break under NLL was fundamentally unsound – it shouldn’t have been compiling in the first place, but was, due to the limitations of the then-current lifetime analyzer. It’s also going to downgrade these breakages to just issuing warnings that this unsound code is going to stop compiling sometime in the future.
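
                                                                                                                                                  For readers who haven’t followed the feature itself: NLL (“non-lexical lifetimes”) makes the checker track how long a borrow is actually live instead of tying it to lexical scope. A minimal sketch of my own (not taken from the announcement) of code the old AST-based checker rejected but NLL accepts:

                                                                                                                                                  fn main() {
                                                                                                                                                      let mut v = vec![1, 2, 3];
                                                                                                                                                      let first = &v[0];     // shared borrow of `v`
                                                                                                                                                      println!("{}", first); // last use of `first`; under NLL the borrow ends here
                                                                                                                                                      v.push(4);             // accepted by NLL, rejected by the old scope-based checker
                                                                                                                                                  }

                                                                                                                                                  The breakage discussed above is the much smaller, opposite set: patterns the old checker accepted only because of its own bugs.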

                                                                                                                                                  My personal take is that, in a language that’s trying to prioritize safety and correctness, privileging correctness over the sanctity of “existing code that compiles” is the right move.

                                                                                                                                                  Sure, in pre-release, there was a lot of churn as all manner of different ideas were tried out (like Classes) and subsequently removed. That’s, uh, why it was in pre-release. You need to get feedback somehow.

                                                                                                                                                  1. 2

                                                                                                                                                    It’s also going to downgrade these breakages to just issuing warnings that this unsound code is going to stop compiling sometime in the future.

                                                                                                                                                    What’s the practical difference between “unsound” code that compiles and runs versus “undefined” or “implementation defined” behavior in C and C++?

                                                                                                                                                    I suppose it’s a moot issue because there’s only one Rust implementation?

                                                                                                                                                    1. 4

                                                                                                                                                      What’s the practical difference between “unsound” code that compiles and runs versus “undefined” or “implementation defined” behavior in C and C++?

                                                                                                                                                      I don’t think there’s any relationship there at all. Undefined/implementation defined code in C is code where the standard says, essentially, “in these cases the compiler can do whatever: compile, reject, crash, launch nethack”.

                                                                                                                                                      Unsound code that compiled under the AST-based borrow checker in Rust was simply a compiler bug — explicitly disallowed things that slipped past due to defects in the borrow checker. The analogous situation in C is, again, a compiler bug.

                                                                                                                                                      1. 1

                                                                                                                                                        What’s the practical difference between “unsound” code that compiles and runs versus “undefined” or “implementation defined” behavior in C and C++?

                                                                                                                                                        There might be a difference between “this is unsound, we don’t know if the code is valid”, and “there is a data race” or some other genuine undefined behaviour.

                                                                                                                                                        If you overlap borrows, one of which is mutable, your program should be rejected. But if you’re single threaded, overlapping borrows should be just fine. Perhaps confusing and bug prone, but as long as you don’t have any actual concurrent access, it should work.

                                                                                                                                                        That’s most likely why there’s an unsafe mode: it’s like telling the compiler “I know you can’t prove my program doesn’t have any data races, but trust me, there isn’t any”.
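
                                                                                                                                                        A concrete sketch of that contract (my own illustration, modeled on the standard library’s slice::split_at_mut, not something from this thread): the checker can’t prove the two mutable borrows below are disjoint, so the author vouches for it with unsafe:

                                                                                                                                                        use std::slice;

                                                                                                                                                        // Two mutable borrows into the same slice. The borrow checker cannot prove
                                                                                                                                                        // they are disjoint, so the author asserts it inside `unsafe`.
                                                                                                                                                        fn split_at_mut(v: &mut [i32], mid: usize) -> (&mut [i32], &mut [i32]) {
                                                                                                                                                            let len = v.len();
                                                                                                                                                            assert!(mid <= len); // uphold the promise: the two halves really don't overlap
                                                                                                                                                            let ptr = v.as_mut_ptr();
                                                                                                                                                            unsafe {
                                                                                                                                                                (
                                                                                                                                                                    slice::from_raw_parts_mut(ptr, mid),
                                                                                                                                                                    slice::from_raw_parts_mut(ptr.add(mid), len - mid),
                                                                                                                                                                )
                                                                                                                                                            }
                                                                                                                                                        }

                                                                                                                                                        Callers get two non-overlapping &mut slices without unsafe leaking into their own code; the promise is made once, in a small spot that can be audited.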

                                                                                                                                                        1. 4

                                                                                                                                                          But if you’re single threaded, overlapping borrows should be just fine.

                                                                                                                                                          Overlapping borrows are unsound even on a single thread. The canonical example of the borrow checker people use in intro talks is single threaded:

                                                                                                                                                          let v: &mut Vec<T> = ...;
                                                                                                                                                          let x0 = &v[0]; // overlapping borrow
                                                                                                                                                          v.push(...); // potential resize invalidating x0
                                                                                                                                                          println!("{:?}", x0); // use-after-free
                                                                                                                                                          

                                                                                                                                                          Or this more blatant case:

                                                                                                                                                          let o: &mut Option<String> = ...;
                                                                                                                                                          if let Some(s) = o { // overlapping borrow
                                                                                                                                                              *o = None; // string no longer has owner, is dropped
                                                                                                                                                              println!("{}", s); // use-after-free
                                                                                                                                                          }
                                                                                                                                                          
                                                                                                                                                          1. 1

                                                                                                                                                            Crap, didn’t think of pointer invalidation. Good point.

                                                                                                                                                      2. 1

                                                                                                                                                        It’s also worth pointing out that any code that does break under NLL was fundamentally unsound

                                                                                                                                                        I don’t think this is true:

                                                                                                                                                        https://github.com/rust-lang/rust/issues/59159

                                                                                                                                                        This issue doesn’t mention unsafety, just inconvenience for them.

                                                                                                                                                        1. 1

                                                                                                                                                          That’s not my reading of it.

                                                                                                                                                          From reading the various issues and reasoning behind stacked borrows it seems that:

                                                                                                                                                          That particular lint identifies an unintended pattern that was allowed to compile as a two-phase borrow. The pattern is unintended, and undesirable, because it violates fundamental validity guarantees that all Rust code, safe or unsafe, must adhere to in order for there to be a coherent set of rules (“stacked borrows”) about pointer aliasing in Rust that unsafe code and compiler optimizations can be asked to adhere to.

                                                                                                                                                          The two phase borrow pattern in question, in violating those rules, creates situations in which unsafe code or compiler optimizations that follow the rules will nevertheless result in safety and/or validity issues. This was argued about for several months with a bunch of people proposing various ways to make this particular 2PB pattern “work” with aliasing rules — but none of them seems to have managed a working solution.

                                                                                                                                                          Hence, the lint and eventual removal.

                                                                                                                                                      3. 12

                                                                                                                                                        oh, maybe my standard is lower than yours, but with the language itself I was always impressed by how remarkably well they managed stability. But yes, they fix compiler bugs eventually. I was once affected and got a pull request to my crate from the compiler team to fix it. The compiler release before had issued a warning.

                                                                                                                                                        So, this affects compiler bugs which might lead to unsafe code. The compiler issued warnings for some releases; now it will stop compiling that code.

                                                                                                                                                        In my view, a remarkable balance between stability and upholding the fundamental guarantees of Rust.

                                                                                                                                                        (the library ecosystem is a different story)

                                                                                                                                                        1. 4

                                                                                                                                                          Maybe not the best wording on the author’s part, but if you read the rest of the post you would have read that only unsound code is affected, and it will continue to compile for now; only a future-compatibility warning will be raised.

                                                                                                                                                          Rust has strong backwards compatibility guarantees, and has since the 1.0 release in 2015. It’s only a moving target if you want to use the latest and greatest features, which are admittedly being added at a considerable rate. But that’s only a drawback if the features are not worth the maintenance overhead, which so far has not been a problem for Rust.

                                                                                                                                                          1. -1

                                                                                                                                                            ‘Unsound’ does not mean incorrect, it just means they can’t prove it is sound.

                                                                                                                                                            only a future-compatibility warning will be raised.

                                                                                                                                                            It is not useful to be told working code will maybe fail to compile at some nebulous point in the future when you specifically chose a stable release and edition to work with.

                                                                                                                                                            1. 3

                                                                                                                                                              ‘Unsound’ does not mean incorrect, it just means they can’t prove it is sound.

                                                                                                                                                              That’s what the unsafe blocks are for. If you didn’t intend a particular piece of code to be unsound, it shouldn’t be, and you should fix it.

                                                                                                                                                              1. 0

                                                                                                                                                                This is a bad argument taken to the logical extreme - as a compiler gets more and more intelligent it can reject more and more code, to the point where it just rejects all code ever written, because all code we write has bugs.

                                                                                                                                                                My argument is simple: a Rust edition should keep compiling things it accepted in the past. If they want to fix soundness problems, they should emit very loud and serious warnings and make a new edition. They shouldn’t retroactively stop compiling things they used to accept without a new Rust edition.

                                                                                                                                                                1. 4

                                                                                                                                                                  My argument is simple: a Rust edition should keep compiling things it accepted in the past.

                                                                                                                                                                  That argument is wrong. The soundness rules are not just defined by the (only) reference implementation. The soundness rules stipulate that multiple borrows (one of which is mutable) are not allowed to overlap, and the old checker used lexical scope as an approximation. If the compiler allows such an overlap, this is a bug and it should be fixed.

                                                                                                                                                                  Likewise, if you unwittingly took advantage of this bug and wrote unsound code outside of an unsafe block, you have a bug. Perhaps not a genuine bug, but you at least did not abide by the soundness rules you should have. Thus, you should fix your code. I don’t care if it’s something you consider “done” and no longer want to maintain. At the very least, you should accept a patch. If you don’t, well… there’s always the possibility of forking.

                                                                                                                                                                  If they want to fix soundness problems, they should emit very loud and serious warnings and make a new edition.

                                                                                                                                                                  Well, this is almost what they did: there’s a legacy compatibility mode, and warnings that such code will not compile any more. That’s better than what C compilers do right now: when a new compiler spots an undefined behaviour it didn’t spot before, it can introduce a bug it didn’t use to introduce, without any warning. (The magic of optimisations.)

                                                                                                                                                                  But this is not about undefined behaviour. This is about soundness. Which, unlike undefined behaviour in general, is perfectly checkable statically. This won’t get worse and worse as compilers get better. It’s just a matter of fixing compiler bugs.

                                                                                                                                                                  1. 0

I think it might annoy people and cause reputational damage, when all they need to do is make an edition 2019-nll to avoid breaking the ecosystem.

                                                                                                                                                                    Any crate with unsafe in it already has as much risk as something that has been there for years and nobody noticed any problem. They just need a warning and some bold flashy lights instead of permanently breaking our ability to compile a portion of crates.io. To maintain soundness they could make calling 2018 crates from 2019-nll an unsafe operation.

                                                                                                                                                                    I think the right action depends how many crates they have obsoleted, which I don’t know. They should probably check and make it public, but I feel like they would rather not know.

                                                                                                                                                                    1. 3

I think it might annoy people and cause reputational damage, when all they need to do is make an edition 2019-nll to avoid breaking the ecosystem.

I dispute the assumption that they broke anything. Some code will stop compiling by default, but the old ways are just an option away. The warnings and the bold flashy lights are exactly what they have done. They have not broken your code, let alone permanently. Your code still compiles.

                                                                                                                                                                      Sure, some users will get the warnings. Of course those users will file a bug. But your code still compiles, and will do for quite some time.

                                                                                                                                                                      Any crate with unsafe in it already has as much risk as something that has been there for years and nobody noticed any problem

                                                                                                                                                                      That’s just unsafe doing its job: a promise from the programmer that there isn’t any undefined behaviour, even though the borrow checker can’t verify it.
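
A tiny illustration of that promise (nothing here is from the crates under discussion): the compiler cannot prove the dereference is valid, so the programmer vouches for it inside `unsafe`.

```rust
fn main() {
    let x = 42_u32;
    let p = &x as *const u32; // creating a raw pointer is safe

    // Dereferencing it is the part the borrow checker cannot verify, so it
    // must be wrapped in `unsafe`: the programmer promises the pointer is
    // valid, aligned, and points to a live `u32`.
    let y = unsafe { *p };
    assert_eq!(y, 42);
}
```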

                                                                                                                                                                      I think the right action depends how many crates they have obsoleted,

                                                                                                                                                                      “Obsoleted” is such a strong word. I bet the changes required to fix the soundness errors in those crates will be minimal. I strongly suggest you take a look at your own crates, see where the new borrow checker got displeased, and do whatever is needed to please it again. This should be a quick fix.
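
As a purely hypothetical sketch (the `Counter` type and its method are invented here, not taken from any affected crate), the usual shape of such a fix is just ending a borrow before the conflicting mutation:

```rust
struct Counter {
    total: u32,
}

impl Counter {
    fn bump(&mut self) {
        self.total += 1;
    }
}

fn main() {
    let mut c = Counter { total: 0 };

    // Rejected: a shared borrow of `c.total` stays live across the
    // `&mut self` call, so the two borrows overlap.
    //
    //     let snapshot = &c.total;
    //     c.bump();                  // error: cannot borrow `c` as mutable
    //     println!("{}", snapshot);
    //
    // Accepted: copy the value out so no borrow outlives the call.
    let snapshot = c.total;
    c.bump();
    println!("before: {}, after: {}", snapshot, c.total);
}
```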

                                                                                                                                                                      1. 0

                                                                                                                                                                        They have not broken your code, let alone permanently. Your code still compiles.

The warning specifically says they plan to stop it from compiling. Fine, it isn’t broken yet, but they told us the plan is to break things. They even seem to have proposed adding these changes to edition 2015, preventing still more code from compiling in the future.

                                                                                                                                                                        I bet the changes required to fix the soundness errors in those crates will be minimal.

You also need to backport the changes to every major version release of your package to keep builds working for people depending on older APIs of your crates. Then you need to spend time testing each release, and publish them all. “Minimal” can quickly add up to an hour per crate. I don’t think all authors will do it; nobody is paying them to do it. Some probably don’t program in Rust anymore. It is just a loss for everyone.

                                                                                                                                                                        “Obsoleted” is such a strong word. I bet the changes required to fix the soundness errors in those crates will be minimal.

Those crates simply won’t compile with newer versions of rustc 2018 edition without changes. The old versions just won’t work anymore without changes; that sounds like obsoleting to me.

Anyway, obviously there are positives, like a lower maintenance burden, so it isn’t totally bad.

                                                                                                                                                                        1. 2

                                                                                                                                                                          I don’t think all authors will do it,

I agree, they won’t. But I think it’s reasonable to assume that every single noteworthy crate will be addressed. The others can fall into oblivion, like they would have anyway.

                                                                                                                                                                          Those crates simply won’t compile with newer versions of rustc 2018 edition without changes.

                                                                                                                                                                          The original post says: the 2018 edition of Rust […] has had NLL enabled ever since its official release.

It’s a new edition. If you don’t want to update your code, well, just put a note that you’re only Rust 2015 compatible. C++ broke compatibility in similar ways, by the way: remember how it hijacked the auto keyword? Nobody complained, because everyone understood it was a major release of the language (and the auto keyword could easily be removed from old code).

                                                                                                                                                                          And then there’s the migration mode, that allows you to take advantage of Rust 2018 even if you fall into one of those soundness bugs. Yes, they will turn it off eventually. But really, if you stopped maintaining your package, you are still using Rust 2015, and you will be fine. If you do maintain your package, well… what’s an hour per crate, really?


It is not possible, nor (I think) even desirable, to add special cases to the NLL borrow checker so it is bug-compatible with the old AST borrow checker. It doesn’t work the same way at all, and even if you could reach bug compatibility, you’d still have soundness bugs, and with them the possibility of data races or pointer invalidation in supposedly “safe” code. Is bug compatibility worth sacrificing correctness? Not in my book.

                                                                                                                                                                          Then there are the benefits of the NLL borrow checker to begin with. Now pleasing the borrow checker will be much easier. Old complaints about it will likely fade away, and learning Rust will likely be less painful. Would you seriously sacrifice that just so you can be bug-compatible?
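
For instance (a small made-up example, not from the discussion), this pattern is rejected by the old scope-based checker but accepted under NLL, because the shared borrow ends at its last use rather than at the end of the block:

```rust
fn main() {
    let mut scores = vec![1, 2, 3];

    let last = scores.last(); // shared borrow of `scores`
    println!("{:?}", last);   // last use of that borrow

    // Accepted under NLL: the borrow of `scores` is no longer live here.
    // The old lexical checker kept it alive until the end of the block
    // and therefore rejected this `push`.
    scores.push(4);
    println!("{:?}", scores);
}
```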

Make no mistake, this is bug compatibility we’re talking about: breaking code that relied on a buggy version of the Rust compiler and as a result was not as safe as rustc claimed it was. They did not remove any feature; they fixed a bug (several bugs, actually). It sucks that the bugs led rustc to accept unsound code, but it’s really, really not the same as, say, removing support for some syntax sugar or whatever.

                                                                                                                                                          2. 3

                                                                                                                                                            Rust has been stable for a long time now, so the moving target you mention has long settled. And as /u/pkolloch mentioned, the breaking changes are just bug fixes essentially, and they are downgrading the errors to warnings for an indefinite time. You should definitely check the language again if you feel like it.

                                                                                                                                                            1. 3

I thought the same about Rust, but then I realized that it was only going to break things if I upgraded my toolchain. This wasn’t something where a “yum/apt/brew install” or even a rustup update would nuke my existing binaries; the breakage only shows up when you upgrade your toolchain and rebuild.

                                                                                                                                                              If the wrong version of python (2.6 instead of 2.7) is installed in a container or on a host I’m trying to use, I’ll see bugs or failures to run. God help me if I want to run Python 3 code. I hit that snag all the time.

                                                                                                                                                              With Rust, that problem is more or less nonexistent. I use Python A LOT. However, I still see the merit of Rust on this front. I’ve seen and used @yonkletron’s work written in Rust in an incredibly risk-averse environment and I was quite impressed at how little I had to think about it compared to my own Python work.

                                                                                                                                                              Rust may be less stable for me as a developer (and even that I’d question!) but it sure as hell seems pretty stable for me as an end user of apps written in it.

                                                                                                                                                              1. 2

                                                                                                                                                                It seems like it’s just a constantly moving target.

It’s definitely pretty active. But this particular case strikes me as fixing a compiler defect (permitting unsound code to compile); it was a defect not to emit an error for this case, IMO. Fixing defects in a way that results in new errors is not instability from the toolchain; it’s a bugfix that Rust customers should welcome (even though fixing their own bug is a prerequisite to accepting the new toolchain release).

Maybe I didn’t quite read this article right, but the fact that their 2015 example yields a warning explicitly stating that it was downgraded for compatibility makes the designation “will cause some existing code to break” sound too pessimistic.

                                                                                                                                                                1. 0

                                                                                                                                                                  They added warnings saying working code of mine in already released crates was going to stop compiling. I complained about it and nobody replied.

                                                                                                                                                                  To put it another way, rust already broke code in one of my ALREADY RELEASED crates WITHOUT AN EDITION CHANGE. It pissed me off so much.

Not everyone has the resources to go back and rewrite and retest code at their whim. In my opinion this change should be a strongly worded warning about potential unsafety + an edition change, if they were serious about language stability. Don’t tell end users of my crate “hey, we might break your code maybe sometime, we don’t know when.”

                                                                                                                                                                  If by future versions, they meant “future editions” I would be much more okay with it.

                                                                                                                                                                  1. 2

                                                                                                                                                                    If by “broke code”, you mean “emit warnings on newer versions of the compiler”, then maybe. What’s the crate? Where did you notify the developers (“complain”)? Are you sure the code is sound?

                                                                                                                                                                    1. 0

Here is the issue, a report from a user of a released package:

                                                                                                                                                                      https://github.com/andrewchambers/orderly/issues/20

                                                                                                                                                                      Here is the complaint in the official thread for complaints of this issue:

                                                                                                                                                                      https://github.com/rust-lang/rust/issues/59159

                                                                                                                                                                      Are you sure the code is sound?

My reading of that issue is that it is sound, but inconvenient for them. But I don’t really care; they should do an emergency edition change for unsound code if the change isn’t backwards compatible. If code can’t last 6 months, what use is it? I want to write code that keeps working as it did when I wrote it, for 20 years, 200 years if possible. They invented the edition system; why don’t they use it?

                                                                                                                                                                      I don’t like how it is deemed ok to disregard small projects who put faith in rust.

                                                                                                                                                                      1. 5

                                                                                                                                                                        they should do an emergency edition change for unsound code if the change isn’t backwards compatible

                                                                                                                                                                        No… Compiler bugs should be warned about once it becomes feasible to do so, with ample time before making them hard errors. Which is exactly what they’re doing… This looks like a great example of that system working. The warning drew your attention to your dependence on the compiler bug, and you fixed it the same day. All dependencies continued to compile, and assuming you released a patch version and yanked the previous version they always will.

                                                                                                                                                                        If fixing any compiler bug required an edition bump, then the compiler would be riddled with bugs. Don’t pretend that you would prefer that.

                                                                                                                                                                        1. 0

Now the 0.1, 0.2, 0.3, 0.4 and 0.5 versions of my project on crates.io are useless junk. What is the point of them being there?

If my crate were a library, this would also turn all crates depending on those versions into useless junk. In my complaint I asked if they checked how many crates they are not only deprecating, but outright obsoleting.

Note also that the issue I linked doesn’t say it is a soundness issue, just a mistake they made in accepting things, which makes some future changes more annoying for them.

                                                                                                                                                                          1. 1

                                                                                                                                                                            Traceability. Studying the history of the code. Reproducing the results if they download the needed version of Rust. Not too many practical reasons, though.

                                                                                                                                                                1. 4

musl libc: a project with no hype but huge impact

musl libc is an alternative to GNU libc for Linux, created by Rich Felker, with a healthy community of high-quality contributors. It’s been around for years, yet it makes less than V in donations.

                                                                                                                                                                  musl is super-great, without a doubt.

                                                                                                                                                                  In other, semi-related news: GOOG wants to add a libc to llvm, apparently for the sake of Fuchsia. It doesn’t sound like the initial scope is intended to match musl or glibc. But I wouldn’t be too surprised if it got there within a few years.

I love to hate on GOOG for abandoning convenient service offerings, but they definitely do some good open source work. Andrew’s pledge to support musl, and others’ in-kind pledges, are laudable. But it’s good to know that there’s commercial support for open source projects, too.

                                                                                                                                                                  EDIT: lobsters discussion of the llvm libc here

                                                                                                                                                                  1. 3

It’s not for Fuchsia; someone from Fuchsia, apparently as an outsider, came in and said ‘that would be useful, can you make an aarch64 port?’