1. -10

    I know you get a lot of pats on the back when you implement stuff for the disabled. But I just feel like it’s rarely worth it unless you are at a large scale where the disabled population will offset the man-hours. Not to mention that different segments of the disabled population have different requirements, and the same special interface won’t work for all of them.

    So I can’t help but think that whenever some megacorp implements these solutions, it’s more likely virtue-signalling than altruism or legit economic advantage.

    The problem, of course, is that if we could solve this problem economically, we would have solved it for good. But if it’s virtue-signalling, the incentive isn’t really to provide solutions but to provide the appearance of caring, and that mismatch will eventually result in the problem not really being solved long-term.

    1. 10

      I don’t understand this comment at all. If it’s not profitable, why do you think companies are “virtue signaling” and not caring? ISTM you’re reading an awful lot into their behavior, under the odd belief that doing something good has to be for egotistical reasons, and not because you want to help someone out.

      1. 3

        To expand on what I believe @LibertarianLlama is saying: it’s possible this comes out of their marketing budget as a kind of loss. The upside would be that the PR leads to other sales, not necessarily of this product, but of others.

        In the end it doesn’t really matter. It’s a local choice by the company, not an attempt to solve the problem globally in an economically sustainable way.

        It should also be remembered that helping people can be egotistical, in which case it’s a win-win! I find it personally strange when people sometimes boycott beneficial things because they’re suspicious of the underlying motives, when the motives clearly aren’t arming belligerents in a foreign war, or something else clearly evil.

        1. -2

          why do you think companies are “virtue signaling”

          because they think creating an image will give them financial rewards.

          1. 3

            I’m truly sorry you’ve never had the opportunity to work somewhere that prioritizes results over optics.

        2. 9

          Did you read the article? The controller is heavily customizable (it’s a platform, really), precisely to accommodate as many people’s needs as possible.

          1. 13

            I think you’re right, but I SO don’t care!

            As a partially blind person, there is SO much of the gaming world that’s closed off to me. That’s OK. I still sleep just fine at night knowing I will never be a Call of Duty GOD :)

            However, when game developers and console makers bother to make adaptations available to allow me and others with disabilities to enjoy the beautiful mix of art and science that is most modern video games, I really appreciate it.

            So, virtue signaling or not, this is a laudable move on Microsoft’s part, and I for one think we should all recognize that.

            Almost makes me want to own an Xbox again. Only problem is that I haven’t had time to play a game on any platform in ~6 months :)

            -Chris (Aside from iPad gaming in waiting rooms sometimes)

            1. 6

              I think you’ve put your finger on a significant contradiction in libertarianism. You want to judge the worth of the enterprise by economic returns: success is denominated in dollars and the market is the only neutral or efficient judge of value.

              However, the other name for “providing the appearance of caring” is marketing, and of course good marketing enormously multiplies the returns of a product, the world being annoyingly reticent to beat a path to the door of entrepreneurial mousetrap makers.

              Even in the very unlikely event that sales of this controller wouldn’t cover the costs to design and manufacture it (given that video gaming is measured in the tens of billions for the U.S. and this product looks overwhelmingly superior to competitors for the mostly-untapped wallets of tens or hundreds of millions of humans with motor control injuries), Microsoft could get a positive return on investment just from the increase in warm, fuzzy feelings from the majority of the market with no need for this product, if they go on to buy ever-so-slightly-more copies of OneDrive or Office.

              The existence of marketing and cross-promotion means that the value of these products can’t be judged solely by the invisible hand of the market discovering prices for goods and driving firms out of business. You make this point in reverse: the long-term existence of marketing points to it being economically valuable. There are externalities not on the books of a single product, just like how, in reverse, the market overvalues a polluter because the externality of cleaning up toxic waste or reversing climate change isn’t charged to the company and so can’t be reflected in the stock price.

              But whether or not the economics work, perhaps in this instance we can settle for helping make an entire art form accessible because it’s a small act of basic human decency and we’re not unthinking monsters.

              1. 4

                I’ll probably get downvoted, but here goes…

                I think you’ve put your finger on a significant contradiction in libertarianism. You want to judge the worth of the enterprise by economic returns: success is denominated in dollars and the market is the only neutral or efficient judge of value.

                However, the other name for “to provide the appearance of caring” is marketing […]

                Libertarianism is actually about the right to property and the freedom to act on it, where the individual is his or her own property. Economics is more a description of the market that emerges from that action and property, a free market or not, depending on how free the underlying rights are.

                So when you point out a contradiction, there really is no contradiction. It barely exists on the same plane of reality. Anyone in business, who wants to stay there, knows about marketing, cross-promotion and all that. It’s a business strategy.

                PS.

                Libertarianism is not a game of winners and losers where money is how we keep score.

                But in a hypothetical world where it were, Microsoft would likely end up winning with this device. As would the customer demographic.

              2. 1

                Even if I disagree with you, I don’t understand why you are being downvoted for this argument/opinion of yours. Anyway… thank you for expressing yourself on the topic.

                To me it’s mostly about having a customizable solution for gaming controls that can be used by players with disabilities. If you look at Nintendo, they recently launched this thing with customizable cardboard objects to enhance the gaming experience; this is just how the Microsoft gaming team is implementing it! Bold move from them!

                1. 9

                  Even if I disagree with you, I don’t understand why you are being downvoted for this argument/opinion of yours. Anyway… thank you for expressing yourself

                  Because it’s incorrect, and baseless bloviating in order to shit on the idea of not needlessly excluding the marginalized.

              1. 2

                ChromeOS with a full Linux terminal environment would handle 100% of my computing needs and would “just work”…but it involves selling even more of my soul to Google than I already do, so, I’m conflicted.

                EDIT: Well, not 100% because I need to be able to run VMs. But, you know, 80%.

                1. 1

                  If you only need to run Linux VMs then it may still be 100%, since that’s how this is being done…

                  1. 1

                    Alas, no.

                    1. 1

                      I guess there’s still the faint hope that VMware Workstation for Linux will run inside ChromeOS eventually.

                  2. -1

                    If Google was good, then Google would push their drivers for Chromebooks upstream. Google does not push their drivers for Chromebooks upstream.

                    1. 1

                      They’re still open source though, aren’t they?

                  1. 2

                    I’m curious what other lobsters think Facebook should be doing?

                    Let’s assume that it’s not profitable for them to offer their service to the EU if they can’t track their users, since that’s the basis of their business. Should they offer “opt in to tracking or pay a yearly fee”? Should they just leave the EU completely?

                    1. 14

                      The “what should Facebook do if this isn’t profitable” question reminds me of the response to taxi companies being upset at Uber/Lyft cannibalizing their business: you don’t have a moral right to your business model; if it’s not profitable, do something else. We shouldn’t reduce the quality of medical care because it victimizes undertakers.

                      If it’s not profitable, either don’t operate that service, or find some alternate business model that is profitable.

                      (FTR, I’m pretty dubious of the benefits of GDPR, but I think the “what about their business models” is one of the worst arguments against it)

                      1. 3

                        The “what should Facebook do if this isn’t profitable” question reminds me of the response to taxi companies being upset at Uber/Lyft cannibalizing their business: you don’t have a moral right to your business model; if it’s not profitable, do something else. We shouldn’t reduce the quality of medical care because it victimizes undertakers.

                        I think the Uber comparison isn’t half bad.

                        For example, in Europe, a frequent problem was that Uber tried to undercut reasonable regulations (like having proper insurance for passenger transport and adhering to service standards like having to take any passenger). Here, Uber’s approach was morally problematic (“moral” being local and all), and they tried to spin it as a moral issue and user choice.

                        1. 2

                          I’m not in the EU and don’t know enough about GDPR to comment on it specifically. I just asked what others thought Facebook should do if we assume that the restrictions placed on them by GDPR make their fundamental business model nonviable.

                          1. 2

                            Well, they should do what any other large company does when it suddenly finds its business model regulated :). It’s not the first time this has happened, and it won’t be the last.

                            It’s their job to figure out, just as it was in their hands to keep the discontent that led to the GDPR from growing.

                            I’m not precisely enjoying the GDPR either (I think it has vast flaws and actually plays into Facebook’s hands), but Facebook is a billion-dollar company. “What shall we do now that the winds are changing?” is really their question to answer.

                        2. 3

                          I’m curious what other lobsters think Facebook should be doing?

                          I can think of a few things, but monkeys will fly out of my butt before any of them happen. They could, for example…

                          • Mail everybody a copy of their data on solid-state storage.
                          • Destroy their databases.
                          • Shut down their data centers.
                          • Release all of their code into the public domain.
                          • Fire everybody with severance pay.
                          • Dissolve the corporation.
                          • Send Mark Zuckerberg back to his home planet.

                          Facebook is one of the cancers killing the internet, and should be treated like the disease that it is.

                          1. 2

                            Second option would be great, but enough of daydreaming :)

                            1. 1

                              You’re asking the wrong question.

                              1. 3

                                What is the right question?

                                1. 3

                                  @alex_gaynor has the right idea above: https://lobste.rs/s/krca7n/facebook_now_denying_access_unless_eu#c_si5pn0

                                  The question “well what do you suggest then?” posed to people arguing against Facebook’s business practises implies some kind of self-evident virtuous right Facebook has to exist at the expense of all humanity’s effort.

                                  I do not agree with this position. The world was fine before Facebook came along, for many people is fine without it, and will be fine if Facebook disappears. Facebook is a leech on people’s private lives, minds, and mental health.

                                  It is not up to the common person to provide Facebook with a position. It is up to Facebook to provide a position for itself by virtue of being wholesome and useful to society. If they cannot, then that’s the end of it. I owe them nothing, no-one does.

                                  1. 2

                                    It is not up to the common person to provide Facebook with a position. It is up to Facebook to provide a position for itself by virtue of being wholesome and useful to society. If they cannot, then that’s the end of it. I owe them nothing, no-one does.

                                    I agree, but if people continue to choose to use Facebook in the wake of the numerous controversies, then perhaps people just don’t value their privacy more than the services that sites like FB provide. FB is only as big as it is today because people use it.

                                    1. 1

                                      I implied no such thing, and haven’t made a value judgement on Facebook or GDPR anywhere here. I simply asked what others here think that Facebook should do given the changed situation; I’m just curious as to what Facebook’s next moves could be.

                                      I find that question much more interesting than your condescending replies and tired opinions about Facebook, a service that I don’t particularly like and am not trying to defend.

                              1. 4

                                I’m really thrilled that right up front this series is covering unit testing and writing safe abstractions. These practices would be valuable contributors to making a more stable and secure kernel.

                                I’d be thrilled if someone were to build a linux ABI compatible kernel in Rust, built on these ideas.

                                1. 8

                                  For me, as a browser security engineer, it’s striking that security is only mentioned once, and it’s about the server-side not the client. Rust shows its benefits just in the amount of time not wasted debugging C++’s various forms of unsafety.

                                  I wonder if this is quantifiable. Conventional wisdom is that Rust can be relatively difficult to learn compared to other languages, but if you can demonstrate that you save time on debugging and don’t have to deal with security issues, that’d be a powerful argument.

                                  1. 9

                                    It’s a whitepaper, so it isn’t intended to cover the whole gamut. I’m giving a talk on the security aspects of Rust next week though, which will be recorded; I may ping you if the recording goes up and I remember.

                                    conventional wisdom is that Rust can be relatively difficult to learn

                                    Depends on what your baseline and your goal is. It’s a language built for a medium pace, resulting in stable software.

                                    I teach Rust professionally and at a learners’ group. The general takeaway is that strict enforcement of single ownership is something people really have to get used to, although it’s often a line of thinking in general programming, too. I don’t find Rust hard, but it took some time for the community to get used to it. It isn’t Go, which is completely built around being easy to pick up.

                                    For example, a lot of early Rust 1.0 code put a lot of emphasis on borrowing; now, three years in, people are moving towards ownership everywhere and things get a lot easier. There’s now a lot of code to look at that can be considered idiomatic, and we have a lot of people around who are competent with the language and can steer people the right way. People became so hyper-focused on having to understand lifetimes that I now give a 30-minute lecture in my courses on how you are often semantically and computationally better off avoiding them. That makes the whole language much easier.

                                    Sooo, the whole thing became kind of a meme, and its foundations are questionable. People learn hard languages all the time, especially in a space where C++ is dominant.
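                                    Something like the following illustrates the lifetimes point (a made-up example with hypothetical types, not from the course): owning the data costs a copy, but it removes the lifetime parameter that would otherwise have to be threaded through every signature that touches the type.

                                    ```rust
                                    // Borrowed version: the lifetime parameter has to be threaded
                                    // through every type and function that touches a ParsedLine.
                                    #[allow(dead_code)]
                                    struct ParsedLine<'a> {
                                        key: &'a str,
                                        value: &'a str,
                                    }

                                    // Owned version: a little extra copying, but no lifetime
                                    // parameters leak into the rest of the program.
                                    #[derive(Debug, PartialEq)]
                                    struct OwnedLine {
                                        key: String,
                                        value: String,
                                    }

                                    fn parse_owned(line: &str) -> Option<OwnedLine> {
                                        // split_once yields borrowed slices; to_string makes them owned.
                                        let (key, value) = line.split_once('=')?;
                                        Some(OwnedLine {
                                            key: key.trim().to_string(),
                                            value: value.trim().to_string(),
                                        })
                                    }

                                    fn main() {
                                        let line = parse_owned("name = ferris").unwrap();
                                        assert_eq!(line.key, "name");
                                        assert_eq!(line.value, "ferris");
                                        assert!(parse_owned("no separator").is_none());
                                    }
                                    ```

                                    The borrowed version is still the right tool when the result never outlives the input; the point is just that borrowing shouldn’t be the reflexive default.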

                                    1. 2

                                      Do you have a link handy for your lecture about how it’s better to avoid lifetimes? I’m interested to know since the borrow checker is one of Rust’s most famous capabilities.

                                      1. 2

                                        I’d be interested in that, too, given I looked at it back when people were talking about borrowing and lifetimes a lot.

                                    2. 3

                                      They’re doing game development, which means most of the time security is their last priority.

                                      1. 2

                                        Well, crashes often were how consoles got rooted in the end. The game developers might not care, though perhaps the companies making the consoles do.

                                        1. 14

                                          In that case, we should encourage them all to use C/C++ to ensure the eventual freedom of our devices. Good news is they all switched to the very CPUs that have the most attacks and the most experienced attackers. Probably not going to be necessary. ;)

                                          1. 3

                                            Yeah, I for one hope that we continue to write games in unsafe languages so that consoles can be rooted with Long Horse Names

                                      2. 2

                                        “but if you can demonstrate that you save the time on debugging and not dealing with security issues, that’d be a powerful argument.”

                                        That’s the exact argument used by the quality- or correctness-improving methodologies I often mention, like Cleanroom. The older ones, like the Fagan inspection process, said the same thing. The reason is that problems are easier and cheaper to prevent or fix earlier in the lifecycle in most cases; they can actually be several times cheaper to prevent than to fix. There’s usually an upfront cost, but the savings in debugging or maintenance often wiped it out in industry usage. Not always, though: sometimes the quality did cost something extra by the end of the project. That came with other benefits, though, making it an investment with ROI rather than just a pure cost.

                                        So, there’s a lot of evidence to support your argument.

                                      1. 4

                                        It’s funny how for many years people hated on checked exceptions as the worst mistake ever – and now they’re back as Result types.

                                        I just wish this realization had come sooner, as now too many languages don’t have support for either.

                                        1. 26

                                          I think checked exceptions as implemented in Java had a number of flaws that Rust’s Result type corrects:

                                          • They don’t cover common exceptions, most notably NullPointerException, contributing to a feeling that they don’t add a lot of value.
                                          • Suppressing a “can never occur” exception was verbose, e.g. UnsupportedEncodingException on "utf-8". The Java spec says UTF-8 must be available, but you have to write the handful of lines of code to catch UnsupportedEncodingException anyways! In Rust the equivalent situation is handled with .unwrap() or .expect("..."), much less verbose.
                                          • If you have a function that can have multiple error conditions: say, a function that makes an HTTP request and parses a JSON response, you’ve got (at least) 3 categories of error: HTTP errors (e.g. a 404), JSON parse errors, and network IO errors. In Rust the convention would be to wrap those into an enum with three variants, and there’s a bunch of ergonomic tools for taking a Result and wrapping it into the correct one. In Java convention seems to be declaring that every function raises three different exception types, adding verbosity at every call definition.
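                                          A rough sketch of that third bullet (hypothetical error types, with stubbed-out logic standing in for real HTTP and JSON handling): one enum wraps the failure modes, and From impls let the ? operator do the wrapping automatically.

                                          ```rust
                                          // Hypothetical error types standing in for an HTTP client and a JSON parser.
                                          #[derive(Debug)]
                                          struct HttpError(u16);
                                          #[derive(Debug)]
                                          struct JsonError(String);

                                          // One enum wraps every way the request can fail.
                                          #[derive(Debug)]
                                          enum FetchError {
                                              Http(HttpError),
                                              Json(JsonError),
                                              Io(std::io::Error),
                                          }

                                          // From impls let `?` convert each underlying error automatically.
                                          impl From<HttpError> for FetchError {
                                              fn from(e: HttpError) -> Self { FetchError::Http(e) }
                                          }
                                          impl From<JsonError> for FetchError {
                                              fn from(e: JsonError) -> Self { FetchError::Json(e) }
                                          }
                                          impl From<std::io::Error> for FetchError {
                                              fn from(e: std::io::Error) -> Self { FetchError::Io(e) }
                                          }

                                          fn check_status(code: u16) -> Result<(), HttpError> {
                                              if code == 200 { Ok(()) } else { Err(HttpError(code)) }
                                          }

                                          fn parse_body(body: &str) -> Result<String, JsonError> {
                                              if body.starts_with('{') {
                                                  Ok(body.to_string())
                                              } else {
                                                  Err(JsonError("not a JSON object".into()))
                                              }
                                          }

                                          // Each `?` wraps the specific error into FetchError via From.
                                          fn fetch(code: u16, body: &str) -> Result<String, FetchError> {
                                              check_status(code)?;
                                              let parsed = parse_body(body)?;
                                              Ok(parsed)
                                          }

                                          fn main() {
                                              assert!(fetch(200, "{}").is_ok());
                                              assert!(matches!(fetch(404, "{}"), Err(FetchError::Http(_))));
                                              assert!(matches!(fetch(200, "oops"), Err(FetchError::Json(_))));
                                          }
                                          ```

                                          In real code the From impls are usually generated by a derive crate rather than written by hand, but the mechanism is the same.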
                                          1. 1

                                            I agree. It just saddens me that Kotlin makes all exceptions unchecked, even those coming from Java, instead of automatically wrapping the Java code in Result<T, E>.

                                            There’s a lot of things Rust does right that no JVM language currently does well.

                                            1. 1

                                              There’s a lot of things Rust does right that no JVM language currently does well.

                                              Such as? Scala is very Rust-like; it doesn’t do linear typing but that wouldn’t help you much on the JVM anyway.

                                          2. 8

                                            At least Rust has unwrap for when you know that errors should not happen if the code is correct, or for initial rough code. Java’s checked exceptions are frustrating simply because there’s no short syntax for re-raising one as an unchecked exception while preserving the stack trace (some IDEs even generate code that prints the stack trace to stderr in such unwrap-like handlers).

                                            1. 6

                                              I hated checked exceptions right up until I tried to write some software that had to be more reliable than a http worker that got restarted every request.

                                              Turns out that when I write to a file I really want to know exactly what can go wrong.

                                              1. 4

                                                My only experience with checked exceptions was Java, and that sucked… But inferred+merged checked exceptions could be cool. Do any languages have that?

                                                1. 3

                                                  If you use ExceptT in Haskell with a polymorphic error type you’ll get this.

                                                  1. 3

                                                    Yes, OCaml has it with polymorphic variants + the result monad. The one current downside is that the error messages can be less than ideal.

                                                    A few blog posts describing it:

                                                    http://functional-orbitz.blogspot.se/2013/01/introduction-to-resultt-vs-exceptions.html

                                                    http://functional-orbitz.blogspot.se/2013/01/experiences-using-resultt-vs-exceptions.html

                                                  2. 4

                                                    The difference is that results are plain old values that fit in the normal type system. You can call a higher-order function with a function that returns a result and it will just work. Checked exceptions were indeed a terrible mistake, not because they force you to handle errors, but because they were a secondary type system that didn’t interoperate properly with the primary type system.

                                                    (People who are proposing effect systems should take note)
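                                                    For instance (plain std, nothing fancy): a Result-returning function can be handed to map like any other function, and collect knows how to gather an iterator of Results into a Result of a Vec. A throwing Java method can’t be passed around this way without wrapper boilerplate.

                                                    ```rust
                                                    // A fallible parser is an ordinary function returning an ordinary value.
                                                    fn parse(s: &str) -> Result<i32, std::num::ParseIntError> {
                                                        s.trim().parse::<i32>()
                                                    }

                                                    fn main() {
                                                        // map() takes the Result-returning function like any other, and
                                                        // collect() turns an iterator of Result<i32, E> into a
                                                        // Result<Vec<i32>, E>, stopping at the first error.
                                                        let ok: Result<Vec<i32>, _> =
                                                            ["1", "2", "3"].iter().map(|s| parse(s)).collect();
                                                        assert_eq!(ok, Ok(vec![1, 2, 3]));

                                                        let bad: Result<Vec<i32>, _> =
                                                            ["1", "x"].iter().map(|s| parse(s)).collect();
                                                        assert!(bad.is_err());
                                                    }
                                                    ```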

                                                    1. 1

                                                      Sure, but the solution would have been to wrap checked exceptions in a Result type for interop, not, like Kotlin does today, to just swallow all of them.

                                                      1. 2

                                                        the solution would have been to wrap checked exceptions in a Result type for interop

                                                        There are a couple of problems with that - performing a JVM catch at every interop boundary is inherently inefficient, and exceptions don’t quite have the nice monadic composition you’d expect from results.

                                                  1. 1

                                                    Clever.

                                                    I don’t know the OpenBSD team or dev process very well; are there folks who work on OpenBSD with a background in exploit dev who would be able to approach this adversarially and see how hard it is to bypass?

                                                    1. 4

                                                      Perhaps a little, but probably not what you’re thinking. I think this is a problem with heuristic mitigation work: a lot of the review is “outsourced”.

                                                      So I tricked Theo into working on this by telling him I would do it and then slacking. The theory was just look at exploits, see that stack pivots are common, and make it not work. But not a lot of rigor. Can you bypass it? In some cases, probably. Always? Generically?

                                                      There’s some ongoing debate about the merit of mitigations. Are they always defeatable? Ever anything more than temporary roadblocks? Alas, I think there’s some survivor bias. We only see exploits with bypasses, not the exploits that couldn’t be made to work. But a review of history reveals all sorts of constraints on exploits. Must fit in a 96 byte payload, must not contain nil, etc. I remain hopeful that even if a mitigation can be bypassed, unknown future vulns may not always grant sufficient control to execute the bypass.

                                                      1. 2

                                                        FWIW, the way I try to break down mitigations described as “making exploits harder” is that it means either:

                                                        1. Requires a better quality vulnerability, some vulnerabilities will no longer be exploitable, or will need to be paired with a second vulnerability.

                                                        2. Requires a smarter exploit dev; no new bugs are required, but some exploit devs won’t have the skills to exploit it.

                                                        I think people use “make exploits harder” to describe both of these behaviours interchangeably, so I’m always trying to figure out which it is :-)

                                                        1. 2

                                                          That’s a good split. I’m uncertain this is #1.

                                                    1. 7

                                                      An interesting part of this: Fuchsia uses its own IPC schema/definition system named FIDL.

                                                      And since objects and messages are passed around describing … well … everything throughout the system, there are FIDL definitions for everything from “netstack” and “time_zone” to the types used in graphics display, to name a few.

                                                      Here’s a bunch of FIDL examples.

                                                      I’m also really digging how each component has its own namespace in place of a traditional global filesystem.

                                                      1. 2

                                                        Another Google IPC format? Why not Protobuf?!

                                                        1. 3

                                                          At first glance it does not seem to be a serialization format, but a binary RPC protocol.

                                                          1. 2

                                                            It looks like it not only handles data serialization but also generates stubs for interfaces whose methods can be implemented in whatever languages FIDL supports.

                                                            For example see the time_zone FIDL:

                                                            [ServiceName="time_zone::Timezone"]
                                                            interface Timezone {
                                                              // Returns local timezone offset (in minutes from UTC. Can be negative) for
                                                              // the supplied number of milliseconds since the Unix epoch. Returns a
                                                              // non-zero DST offset when appropriate.
                                                              1: GetTimezoneOffsetMinutes(int64 milliseconds_since_epoch)
                                                                  -> (int32 local_offset_minutes, int32 dst_offset_minutes);
                                                              // Sets the timezone for the machine based on an ICU ID.
                                                              2: SetTimezone(string timezone_id) -> (bool @status);
                                                              // Gets the timezone ID string.
                                                              3: GetTimezoneId() -> (string timezone_id);
                                                              // Watches for updates to the timezone ID.
                                                              4: Watch(TimezoneWatcher watcher);
                                                            };
                                                            
                                                            1. 1

                                                              Protobuf, or any serialization format like that, isn’t directly usable for IPC without significant modification because Protobuf doesn’t give you mechanisms for sending resources like file descriptors.

                                                              1. 1

                                                                Well, Unix sockets send fds out of band :) but the point is, it seems like they started from scratch instead of making that modification.

                                                                1. 1

                                                                  Sure, but if you want to use protobuf then you need to hack up protoc to emit file descriptors specially into cmsgbuf, at that point you’re not interoperable with any existing protobuf implementation, so what’s the point?

                                                            2. 2

                                                              See also mach’s mig.

                                                            1. 4

                                                              The LKML thread on this patchset is miserably long. Linus seems to be super upset about Secure Boot, and I can’t quite understand why.

                                                              https://www.spinics.net/lists/kernel/msg2766909.html is a nice succinct message from Kees Cook explaining why this is useful.

                                                              1. 2

                                                                Linus seems to be super upset about Secure Boot, and I can’t quite understand why.

                                                                My understanding was that he isn’t upset about secure boot itself, but rather why the patchset was necessary, and Garrett’s response ended up being something along the lines of “why not” and “just disable it if you don’t like it” rather than actual arguments to back it up. In his reply to Linus’s outburst yesterday, Garrett ended up accusing Linus of not accepting the patch due to “political” reasons instead of technical ones.

                                                                Like you said, the thread is miserably long and difficult to keep up with if you’re not fully aware of the technical details of what they’re talking about (like me). I’m certain I’ve missed some finer details on this subject, and I definitely don’t know the technical aspects of what’s going on - I’m just reporting what frustrations I’ve seen from Linus.

                                                                1. 3

                                                                  I found mjg’s explanation pretty clear:

                                                                  1. SecureBoot without kernel lockdown has glaring security holes, so we should plug them
                                                                  2. kernel lockdown without a secured bootchain has an easy bypass, so we shouldn’t enable it and give people a false sense of security
                                                                  3. Enabling both is good.
                                                                  4. Enabling neither is silly, but has very clear security properties that won’t confuse anyone.
                                                              1. 4

                                                                I’m skeptical, but I think they can pull it off.

                                                                In the end, they only need to reach half of Intel’s performance, as benchmarks suggest that macOS’ performance is roughly half of Linux’ when running on the same hardware.

                                                                With their own hardware, they might be able to get closer to the raw performance offered by the CPU.

                                                                1. 7

                                                                  they only need to reach half of Intel’s performance, as benchmarks suggest that macOS’ performance is roughly half of Linux’ when running on the same hardware

                                                                  I’m confused. Doesn’t that mean they need to reach double Intel’s performance?

                                                                  1. 11

                                                                    It was probably worded quite poorly, my calculation was like:

                                                                    • Raw Intel performance = 100
                                                                    • macOS Intel performance ~= 50
                                                                    • Raw Apple CPU performance = 50
                                                                    • macOS Apple CPU performance ~= 50

                                                                    So if they build chips that are half as fast as “raw” Intel, but are able to better optimize their software for their own chips, they can get way closer to the raw performance of their hardware than they manage to do on Intel.
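For what it’s worth, that back-of-the-envelope arithmetic can be written out explicitly (the numbers are the commenter’s illustrative figures above, not real benchmarks):

```python
# Illustrative figures from the comment above -- not measurements.
raw_intel = 100            # "raw" Intel performance
macos_on_intel_eff = 0.5   # assumed: macOS extracts ~50% of raw Intel perf
raw_apple = 50             # an Apple chip with half Intel's raw performance
macos_on_apple_eff = 1.0   # assumed: near-full utilization on own silicon

perf_on_intel = raw_intel * macos_on_intel_eff   # 50.0
perf_on_apple = raw_apple * macos_on_apple_eff   # 50.0

# Parity despite the slower chip: a better software/hardware fit closes the gap.
print(perf_on_intel == perf_on_apple)  # True
```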

                                                                  2. 7

                                                                    Why skeptical? They’ve done it twice before (68000 -> PowerPC and PowerPC -> Intel x86).

                                                                    1. 4

                                                                      And the PPC → x86 transition was within the past fifteen years and well after they had recovered from their slump of the ‘90s, and didn’t seem to hurt them. They’re one of the few companies in existence with recent experience transitioning microarchitectures, and they’re well-positioned to do it with minimal hiccups.

                                                                      That said, I’m somewhat skeptical, too; it’s a huge undertaking even if everything goes as smoothly as it did with the x86 transition, which is very far from a guarantee. This transition will be away from the dominant architecture in its niche, which will introduce additional friction which was not present for their last transition.

                                                                      1. 2

                                                                        They also did ARM32->ARM64 on iOS.

                                                                        1. 3

                                                                          That’s not much of a transition. They did i386 -> amd64 too then.

                                                                          (fun fact, I also did that, on the scale of one single Mac - swapped a Core Duo to a Core 2 Duo in a ’06 mini :D)

                                                                          1. 1

                                                                            My understanding is that they’re removing some of the 32-bit instructions on ARM. Any clue if that’s correct?

                                                                            1. 1

                                                                              AArch64 processors implement AArch32 too for backwards compatibility, just like it works on amd64.

                                                                              1. 1

                                                                                As of iOS 11, 32-bit apps won’t load. So if Apple devices that come with iOS 11 still have CPUs that implement AArch32, I’d guess it’s only because it was easier to leave it in than pull it out.

                                                                                1. 1

                                                                                  Oh, sure – of course they can remove it, maybe even on the chip level (since they make fully custom ones now), or maybe not (macOS also doesn’t load 32-bit apps, right?). The point is that this transition used backwards compatible CPUs, so it’s not really comparable to 68k to PPC to x86.

                                                                                  1. 1

                                                                                    I of course agree that this most recent transition isn’t comparable with the others. To answer your question: the version of macOS they just released a few days ago (10.13.4) is the first to come with a boot flag that lets you disable loading of 32-bit applications to, as they put it, “prepare for a future release of macOS in which 32-bit software will no longer run without compromise.”

                                                                      2. 3

                                                                        I didn’t know this. Do you know which benchmarks show macOS at half of Linux performance?

                                                                        1. 3

                                                                          Have a look at the benchmarks Phoronix has done. Some of them are older, but I think they show the general trend.

                                                                          This of course doesn’t take GPU performance into account. I could imagine that they take an additional hit there, as companies (that don’t use AAA game engines) would rather do …

                                                                          Application → Vulkan API → MoltenVK → Metal

                                                                          … than write a Metal-specific backend.

                                                                          1. 1

                                                                            I guess you’re talking about these? https://www.phoronix.com/scan.php?page=article&item=macos-1013-linux

                                                                            Aside from OpenGL and a handful of other outliers for each platform, they seem quite comparable, with each being a bit faster at some things and a bit slower at others. Reading your comments I’d assumed they were showing Linux as being much faster in most areas, usually ending up about twice as fast.

                                                                        2. 3

                                                                          The things they’re slow at don’t seem to be particularly CPU architecture specific. But the poor performance of their software doesn’t seem to hurt their market share.

                                                                        1. 3

                                                                          Best we can do without throwing entire hardware/software ecosystem away

                                                                          Can we at least start by moving away from producing new memory unsafe code?

                                                                          1. 10

                                                                            The Media have been acting very surprised about all the news around Facebook which has been popping up for the last few weeks.

                                                                            But frankly, I find the coverage far more surprising. I mean, didn’t everyone already kind of know that this has been going on if you use Facebook? People don’t have to be told that something unusual is going on. Just look at their app permissions (or their business model). What probably irritates most people is the fact that they can’t go on telling themselves that everything is fine.

                                                                            Edit: I would like to clarify – my issue isn’t who knew what and who didn’t. I am talking about the popular reaction and the narrative in which these events are being placed, which I believe to be wrong. I don’t understand why people see this as trolling?

                                                                            1. 31

                                                                              I see this sort of comment a lot, and I think it’s wrongheaded and counterproductive:

                                                                              1. There’s a difference between a general belief that Facebook doesn’t respect your privacy and a very specific “they collected this data unnecessarily, and stored it in perpetuity”

                                                                              2. Chastising people for not having been aware in the past doesn’t encourage them to be more proactive in the future, it pushes them to just stop caring entirely. If you want people to be more upset and take action, use this opportunity to push them forwards, not lecture them for having been late to the party.

                                                                              1. 3

                                                                                I’m not blaming Facebook users or trying to act as if I were superior. I mean, I use WhatsApp on a (far too) regular basis, and have a pretty good feeling that it is going on there too. And I understand why they are using it.

                                                                                But in the end, what else were they supposed to be doing with the data? The people I am “concerned” with are those who are talking about this the most, acting as if nobody could have guessed in a million years that this might be happening. If anything, that seems to be the harmful thing to do, since it neglects that Facebook isn’t doing this because they are evil or something; any social network with a similar history, size and mode of operation would have to do the same. The crime is intrinsic in the form.

                                                                                1. 1

                                                                                  Personally, I don’t know specifically what Facebook or other companies are doing. However, I know that they are in the business of data collection so this is not shocking. What they do specifically depends on what they are able to do technically.*

                                                                                  If they were doing something outside their scope of business, like raise a great old one from the void, then I might be shocked.

                                                                                  * That something might be technically feasible might be shocking, but that’s another story.

                                                                                  E.g., “Facebook scraped call data from Android” vs. “Android leaks call data to third party apps.”

                                                                                  1. 2

                                                                                    That is kind of what I am trying to say. It isn’t surprising, and this fact should be emphasized. Sadly, @alex_gaynor misunderstood me a bit: I want people to understand why this shouldn’t be surprising. It is their business model, and no matter who or what, something along these lines was ultimately unavoidable.

                                                                                    What they do specifically depends on what they are able to do technically

                                                                                    And what they have to do as a business to always be a step ahead of their competition! And again, this isn’t anyone’s individual responsibility, just as nobody is to blame when one player is ahead in Mensch ärgere Dich nicht and the others lose.

                                                                              2. 8

                                                                                Maybe a bit snarky, but let me draw some parallels with this take:

                                                                                “The Big Bang? Why are you interested in it now? It happened 13 billion years ago. It obviously happened, otherwise we wouldn’t be here at all. Why study it? Pretty much everybody knows about cosmology. Add some fundamental laws and, well the current state of the universe naturally follows.”

                                                                                The point is: nowhere near as many people as you think actually knew. Those who knew didn’t know details. Those who had some details didn’t have certainty. Those who knew, had details, and had certainty didn’t reach large enough numbers to have a public debate about this issue.

                                                                                1. 5

                                                                                  People see it as trolling because no one can be certain over the internet if anyone is actually surprised or not. A lot of people feign surprise to puff themselves up. An obnoxiously obvious version of this would be “Not only did I know about this breaking news before everyone else, I was so certain of it that I believed it was universal knowledge! I’m shocked, shocked that people did not understand this as well as me, a genius.” You didn’t write like this, but feigning surprise is common enough that any expression of surprise is received very skeptically.

                                                                                  1. 2

                                                                                    Ok, I understand that, but I hope I clarified my position in my other responses. Looking back at my original phrasing, I understand the possibility for misunderstanding. “Media coverage” might have been a better word to use instead of “reactions”, which could be understood to be too general.

                                                                                  2. 3

                                                                                    And despite Facebook being devoid of ethics and morality, despite them abusing their users and their data, people will keep using Facebook by the billions. It’s hopeless; people just don’t care enough.

                                                                                    1. 7

                                                                                      The network effects are so strong that competition is, for all intents and purposes, impossible. Google Plus is the canonical case study here, though I’m sure there’s an entire graveyard full of them. Facebook’s value is that it has all the people on it, and any competitor will by definition start without any people, which gives it no value proposition to pull people off Facebook.

                                                                                      1. 6

                                                                                        Unfortunately it’s still the most viable platform for certain things. I use Facebook almost exclusively to buy and sell event tickets at the last minute. In the past 2 years I have bought tickets from the actual ticket vendor for only 2 out of 20+ shows. Facebook provides a web of trust that no other platform can match. I would hesitate to buy a ticket from “edmfan1337,” but some random person with years of photos, a job, a school, and hundreds of friends is way more trustworthy. Often I’ll even have a mutual friend or two for events that are local. I’d love if there were some other platform equally viable, but I am not really interested in technical solutions involving third party guarantees or other “secure” systems. It’s better to deal with real people who can come to agreements and make compromises.

                                                                                      2. 1

                                                                                        I think there’s something far more sinister going on here. We don’t really have free media in today’s world. It looks free, but there are only a few major players and a lot of major advertisers controlling those outlets. At work we have a CNN feed in the entry way. 90% of the time the word Trump is on the screen. It’s all Trump all the time. Unlike 1984 with its 2 minute hate, for several decades we’ve been living in a 24/7 hate.

                                                                                        These types of stories are designed to keep us scared or to put the population down a certain path. I have a feeling Zuckerberg pissed off someone recently. Maybe it’s someone in the 1% trying to put him in his place after he talked about running for President. Maybe he pissed off some board members at Google. It doesn’t take much. Someone with the means just needs to get one or two publications to start down the path and soon the rest of the media follows because it’s what people want and it sells.

                                                                                        1. 0

                                                                                          I don’t believe in sinister masterminds controlling things from behind the scenes. And in retrospect, my use of the word “media” didn’t help much to clarify what I intended to say. Maybe “popular discourse” would have been better?

                                                                                          Regarding the points you brought up, I just believe that Trump is an easy-to-report topic that a lot of people (in some perverse sense) enjoy hearing about. And why shouldn’t a media network talk about it, if there’s a “marketplace of attention”? Also, one should avoid falling into cognitive biases. Trump gets mentioned a lot, on the one hand because his policies are controversial, but also because he is the president of the USA… It’s not like Obama or Bush were minor political actors. And “the 1%” is really an empty term. It means nothing, and just gives space for one’s own imagination. Some things just happen, randomly, and there isn’t an overarching narrative one can coherently place them in.

                                                                                      1. 2

                                                                                        Does this mean the spec is finished and we can start seeing it used in applications/servers?

                                                                                        1. 4

                                                                                          If I understand the IETF, it means assuming nothing is found to be totally broken, there’ll only be editorial changes from here on out, no technical changes.

                                                                                        1. 10

                                                                                          Couldn’t agree more strongly. I used to identify as a Python programmer, and took jobs primarily writing Python.

                                                                                          Then I took a job where I had no control of the tools I’d be using. In two years I wrote (approximately by lines of code): VBScript, Ruby, Java, PL/SQL, Python, Javascript, C++, and probably some other stuff for good measure.

                                                                                          The Python community is still my home base, but now I choose jobs and projects by whether their outcomes are important to me, not by language, and that allows for doing much much more rewarding work.

                                                                                          1. 5

                                                                                            Google contributes surprisingly little back in terms of open source, compared to the size of the company and the number of developers they have. (They do reciprocate a bit, but not nearly as much as they could.)

                                                                                            For example, this is really visible in areas where they do some research and/or set a standard, like compression algorithms (zopfli, brotli) or network protocols (HTTP/2, QUIC): the code and glue they release is minimal.

                                                                                            It’s my feeling that Google “consumes”/relies on a lot more open source code than they then contribute back to.

                                                                                            1. 10

                                                                                              Go? Kubernetes? Android? Chromium? Those four right there are gargantuan open source projects.

                                                                                              Or are you specifically restricting your horizon to projects that aren’t predominantly run by Google? If so, why?

                                                                                              1. 11

                                                                                                I’m restricting my horizon for projects that aren’t run by Google because it better showcases the difference between running and contributing to a project. Discussing how Google runs open source projects is another interesting topic though.

                                                                                                Edit: running a large open source project for a major company is in large part about control. Contributing to a project where the contributor is not the main player running the project is more about cooperation and being a nice player. It just seems to me that Google is much better at the former than the latter.

                                                                                                1. 2

                                                                                                  It would be interesting to attempt to measure how much Google employees contribute back to open source projects. I would bet that it is more than you think. When you get PRs from people, they don’t start off with, “Hey so I’m an engineer at Google, here’s this change that we think you might like.” You’d need to go and check out their Github profile and rely on them listing their employer there. In other words, contributions from Google may not look like Contributions From Google, but might just look like contributions from some random person on the Internet.

                                                                                                  1. 3

                                                                                                    I don’t have the hat, but for the next two weeks (I’m moving teams) I am in Google’s Open Source office that released these docs.

                                                                                                    We do keep a list of all Googlers who are on GitHub, and we used to have an email notification for patches that Googlers sent out before our new policy of “If it’s a license we approve, you don’t need to tell us.” We also gave blanket approval after the first three patches approved to a certain repo. It was ballpark 5 commits a day to non-Google code when we were monitoring, which would exclude those which had been given the 3+ approval. Obviously I can share these numbers because they’re all public anyway ;)

                                                                                                    For reasons I can’t remember, we haven’t used the BigQuery datasets to track commits back to Googlers and get a good idea of where we are with upstream patches now. I know I tried myself, and it might be different now, but there was some blocker that prevented me doing it.

                                                                                                    I do know that our policies about contributing upstream are less restrictive than other companies, and Googlers seem to be happy with what they have (particularly since the approved licenses change). So I disagree with the idea that Google the company doesn’t do enough to upstream. It’s on Googlers to upstream if they want to, and that’s no different to any other person/group/company.

                                                                                                    1. 2

                                                                                                      So I disagree with the idea that Google the company doesn’t do enough to upstream.

                                                                                                      Yeah, I do too. I’ve worked with plenty of wonderful people out of Google on open source projects.

                                                                                                      More accurately, I don’t even agree with the framing of the discussion in the first place. I’m not a big fan of making assumptions about moral imperatives and trying to “judge” whether something is actually pulling its weight. (Mostly because I believe it’s unknowable.)

                                                                                                      But anyway, thanks for sharing those cool tidbits of info. Very interesting! :)

                                                                                                      1. 3

                                                                                                        Yeah, sorry I think I made it sound like I wasn’t agreeing with you! I was agreeing with you and trying to challenge the OP a bit :)

                                                                                                        Let me know if there’s any other tidbits you are interested in. As you can tell from the docs, we try to be as open as we can, so if there’s anything else that you can think of, just ping me on this thread or cflewis@google.com and I’ll try to help :D

                                                                                                        1. 1

                                                                                                          FWIW I appreciate the effort to shed some light on Google’s open source contributions. Do you think that contributions could be more systemic/coordinated within Google though, as opposed to left to individual devs?

                                                                                                          1. 1

                                                                                                            Do you think that contributions could be more systemic/coordinated within Google though, as opposed to left to individual devs?

                                                                                                            It really depends on whether a patch needs to be upstreamed or not, I suppose. My gut feeling (I have no data for this; it’s an entirely personal opinion, not representative of my employer) is that teams as a whole aren’t going to worry about it if they can avoid it… often the effort to convince the upstream maintainers to accept the patch can suck up a lot of time, and if the patch isn’t accepted then that time was wasted. It’s also wasted time if the project is going in a direction that’s different from yours, and no-one really ever wants to make a competitive fork. It’s far simpler, and a 100% guarantee of things going your way, if you just keep a copy of the upstream project and link that in as a library with whatever patches you want.

                                                                                                            The bureaucracy of upstreaming, of course, is working as intended. There does have to be guidance and care to accepting patches. Open source != cowboy programming. That’s no problem if you are, say, a hobbyist who is doing it in the evenings here and there, where timeframes and so forth are less pressing. But when you are a team with directives to get your product out as soon as you can, it generally isn’t something a team will do.

                                                                                                            I don’t think any company that genuinely wants to contribute back to open source, as Google does, has solved this problem. And I don’t think the issue changes whether you’re a giant enterprise or a small mature startup.

                                                                                                            This issue is also why you see so many more open source projects released by companies than contributions to existing software: you know your patches will be accepted (eventually) and you know the project will go in your direction. It’s a big deal to move a project to community governance, because you then lose that guarantee.

                                                                                                2. 0

                                                                                                  Chromium?

                                                                                                  Have you ever tried to compile it?

                                                                                                  1. 2

                                                                                                    Yeah, and?

                                                                                                    1. 0

                                                                                                      How long did it take? On what hardware?

                                                                                                      1. 1

                                                                                                        90 minutes, on a mid-grade desktop from 2016.

                                                                                                        1. 1

                                                                                                          Cool! You should really explain your build process to Google!

                                                                                                          And to everybody else, actually.

                                                                                                          Because a long and convoluted build process concretely reduces the freedom that an open source license gives you.

                                                                                                          1. 1

                                                                                                            Cool! You should really explain your build process to Google!

                                                                                                            Google explained it to me actually. https://chromium.googlesource.com/chromium/src/+/lkcr/docs/linux_build_instructions.md#faster-builds

                                                                                                            Because a long and convoluted build process concretely reduces the freedom that an open source license gives you.

                                                                                                            Is the implication that Google intentionally makes the build for Chromium slow? Chromium is a massive project and uses the best tools for the job and has made massive strides in recent years to improve the speed, simplicity, and documentation around their builds. Their mailing lists are also some of the most helpful I’ve ever encountered in open source. I really don’t think this argument holds any water.
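
                                                                                                            For reference, the linked doc largely boils down to a couple of gn flags that trade optimization and debug symbols for build speed. A hedged sketch (flag names are taken from Chromium’s linux_build_instructions; verify against the current docs before relying on them):

                                                                                                            ```
                                                                                                            # A component build links many shared libraries instead of one huge
                                                                                                            # binary, and symbol_level=0 skips debug symbols entirely.
                                                                                                            gn gen out/Default --args='is_component_build=true symbol_level=0'
                                                                                                            autoninja -C out/Default chrome   # autoninja picks sensible parallelism
                                                                                                            ```

                                                                                                            With a checkout already in place, that configuration is what gets a mid-range desktop into the 90-minute range rather than multiple hours.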

                                                                                                3. 5

                                                                                                  The amount Google invests in securing open source software basically dwarfs everyone else’s investment, it’s vaguely frightening. For example:

                                                                                                  • OSS-Fuzz
                                                                                                  • Patch Rewards for OSS projects
                                                                                                  • Their work on Clang’s Sanitizers and libFuzzer
                                                                                                  • Work on the kernel’s self-protection project and syzkaller
                                                                                                  • Improvements to Linux kernel sandboxing technologies, e.g. seccomp-bpf

                                                                                                  I don’t think anyone else is close, either by number (and severity) of vulnerabilities reported or in proactive work to prevent and mitigate them.

                                                                                                  1. 6

                                                                                                    Google’s interest in open source is self-serving. They have a history of nourishing open source communities, fostering dependence and shutting them down. They are on a downward trend of supporting open protocols in their own software.

                                                                                                    1. 2

                                                                                                      Google does care a lot about security and I know of plenty of positive contributions that they’ve made. We probably could spend days listing them all, but in addition to what you’ve mentioned project zero, pushing the PKI towards sanity, google summer of code (of which I was one recipient about a decade ago), etc all had a genuinely good impact.

                                                                                                      OTOH Alphabet is the world’s second largest company by market capitalization, so there should be some expectation of activity based on that :)

                                                                                                      Stepping out of the developer bubble, it is an interesting thought experiment to consider whether it would be worth trading every open source contribution Google ever made for changing the YouTube recommendation algorithm to stop promoting extremism. (Currently I’m leaning towards yes.)

                                                                                                  1. 6

                                                                                                    I was thrilled that part 3 focused on building safe abstractions for things that would otherwise be unsafe.

                                                                                                    1. 12

                                                                                                      Not discussed here, but the next time Apple ends up in court (or the court of public opinion) needing to defend their insistence on provider-independent-security with keys for iPhones being purely in the users’ control, this will massively undermine their case.

                                                                                                      This has all the appearances of being a straightforward “profit over any sort of principle” decision. Lest anyone forget, Google exited the Chinese market following Operation Aurora, refusing to censor search results.

                                                                                                      1. 5

                                                                                                        That’s a great point. The U.S. LEOs would definitely make an argument that they should get access if China is.

                                                                                                        1. 3

                                                                                                          Isn’t this about iCloud data though, and not physical “on-device” data? Apparently the U.S. government/LEOs already have warrant-based access to this data.

                                                                                                          1. 0

                                                                                                            The U.S. LEOs would definitely make an argument that they should get access if China is.

                                                                                                            Gee, I wonder what “argument” they “made” to start the Five Eyes program, or to whisk Bradley Manning to torture prison without due process.

                                                                                                            It must have been extremely convincing. Perhaps something like: “If other countries have authoritarian regimes, why can’t we?”

                                                                                                          1. 3

                                                                                                            I wish more folks involved in packaging for Linux distros were familiar with Homebrew. Obviously not everything Homebrew does is applicable to Debian, but the ability for folks to show up and easily contribute new versions with a simple PR is game-changing. Last night I noticed that the python-paramiko package in Debian is severely out of date, but the thought of trying to learn the various intricacies of contributing to Debian well enough to update it turns me right off.

                                                                                                            1. 15

                                                                                                              As an upstream dev of code that’s packaged with Homebrew, I have noticed that Homebrew is by far the sloppiest of any packagers; there is basically no QA, and often the packagers don’t even read the instructions I’ve provided for them. I’ve never tried it myself, but it’s caused me a lot of headaches all the same.

                                                                                                              1. 2

                                                                                                                I just looked at the packaging information for paramiko and I have more questions than before:

                                                                                                                How does this setup even work in case of a security vulnerability?

                                                                                                                1. 4

                                                                                                                  Unfortunately, Debian still has a strong ownership model. Unless a package is team-maintained, an unwilling maintainer can stall any effort to update it, sometimes actively, sometimes passively. In the particular case of Paramiko, the maintainer has very strong opinions on this matter (I know that first hand).

                                                                                                                  1. 1

                                                                                                                    Strong opinions are not necessarily bad. Does he believe paramiko should not be updated?

                                                                                                                  2. 3

                                                                                                                    How does this setup even work in case of a security vulnerability?

                                                                                                                    Bugs tagged as security problems (esp. if also tagged with a CVE) get extra attention from the security team. How that plays out depends on the package/bug, but it can range from someone from the security team prodding the maintainer, all the way to directly uploading a fix themselves (as a non-maintainer upload).

                                                                                                                    But yeah in general most Debian packages have 1-2 maintainers, which can be a bottleneck if the maintainer loses interest or gets busy. For packages with a lot of interest, such a maintainer will end up replaced by someone else. For more obscure packages it might just languish unmaintained until someone removes the package from Debian for having unfixed major issues.

                                                                                                                1. 7

                                                                                                                  Neat idea! One question though: How do you handle renewals? In my experience, PostgreSQL (9.x at least) can only re-read its certificate on a full server restart, not on a mere reload. Therefore, all connections are interrupted when the certificate changes. With Let’s Encrypt this will happen much more frequently - did you find a way around this?

                                                                                                                  1. 5

                                                                                                                    If you put nginx in front as a reverse TCP proxy, Postgres won’t need to know about TLS at all and nginx already has fancy reload capability.
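
                                                                                                                      As a sketch, the stream-module version of that might look like the following (the hostname and paths are made up; note also that stock libpq negotiates TLS inside the Postgres protocol, so a plain TLS-wrapping proxy like this only works for clients that can speak TLS from the first byte):

                                                                                                                      ```
                                                                                                                      # Hypothetical nginx stream proxy terminating TLS in front of Postgres.
                                                                                                                      stream {
                                                                                                                          server {
                                                                                                                              listen 5433 ssl;
                                                                                                                              ssl_certificate     /etc/letsencrypt/live/db.example.com/fullchain.pem;
                                                                                                                              ssl_certificate_key /etc/letsencrypt/live/db.example.com/privkey.pem;
                                                                                                                              proxy_pass 127.0.0.1:5432;   # plaintext to the local Postgres
                                                                                                                          }
                                                                                                                      }
                                                                                                                      ```

                                                                                                                      A SIGHUP (or `nginx -s reload`) then picks up renewed certificates without dropping established connections.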

                                                                                                                    1. 3

                                                                                                                      I was thinking about that too - and it made me also wonder whether using OpenResty along with a judicious combination of stream-lua-nginx-module and lua-resty-letsencrypt might let you do the whole thing in nginx, including automatic AOT cert updates as well as fancy reloads, without postgres needing to know anything about it at all (even if some tweaking of resty-letsencrypt might be needed).

                                                                                                                      1. 1

                                                                                                                        That’s funny, I was just talking to someone who was having problems with “reload” not picking up certificates in nginx. Can you confirm nginx doesn’t require a restart?

                                                                                                                        1. 1

                                                                                                                          Hmm, I wonder if they’re not sending the SIGHUP to the right process. It does work when configured correctly.

                                                                                                                      2. 2

                                                                                                                        I’ve run into this issue as well with PostgreSQL deployments using an internal CA that did short lived certs.

                                                                                                                        Does anyone know if the upstream PostgreSQL devs are aware of the issue?

                                                                                                                        1. 20

                                                                                                                          This is fixed in PG 10. “This allows SSL to be reconfigured without a server restart, by using pg_ctl reload, SELECT pg_reload_conf(), or sending a SIGHUP signal. However, reloading the SSL configuration does not work if the server’s SSL key requires a passphrase, as there is no way to re-prompt for the passphrase. The original configuration will apply for the life of the postmaster in that case.” from https://www.postgresql.org/docs/current/static/release-10.html
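
                                                                                                                          In practice that pairs nicely with a certbot deploy hook. A sketch, assuming PG 10+ and an unencrypted key; all paths and the domain are examples, not anything prescribed by the docs:

                                                                                                                          ```
                                                                                                                          #!/bin/sh
                                                                                                                          # Hypothetical certbot deploy hook: copy the renewed certificate
                                                                                                                          # into place and ask PostgreSQL to reload it without a restart.
                                                                                                                          install -o postgres -g postgres -m 600 \
                                                                                                                              /etc/letsencrypt/live/db.example.com/privkey.pem /var/lib/postgresql/data/server.key
                                                                                                                          install -o postgres -g postgres -m 644 \
                                                                                                                              /etc/letsencrypt/live/db.example.com/fullchain.pem /var/lib/postgresql/data/server.crt
                                                                                                                          su postgres -c 'pg_ctl reload -D /var/lib/postgresql/data'
                                                                                                                          ```

                                                                                                                          Existing connections keep running; only new connections see the fresh certificate.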