1. 2

    If you’re an Apple customer aren’t you supposed to be migrating to the iPad Pro?

    1. 4

      This isn’t even a funny joke.

      1. 6

        Learning modern C++, with move-only semantics, rvalue references, and so on, let me understand the problem Rust is trying to solve.

          1. 23

            This is a bit disappointing. It feels a bit like we are walking into the situation OpenGL was built to avoid.

            1. 7

              To be honest we are already in that situation.

              You can’t really use GL on a Mac; it’s been stuck at the D3D10 feature level for years and runs 2-3x slower than the same code under Linux on the same hardware.

              It always seemed like a weird decision from Apple to have terrible GL support, like if I was going to write a second render backend I’d probably pick DX over Metal.

              1. 6

                I remain convinced that nobody really uses a Mac on macOS for anything serious.

                And why pick DX over Metal when you can pick Vulkan over Metal?

                1. 3

                  Virtually no gaming or VR is done on a mac. I assume the only devs to use Metal would be making video editors.

                  1. 1

                    This is a bit pedantic, but I play a lot of games on mac (mainly indie stuff built in Unity, since the “porting” is relatively easy), and several coworkers are also mac-only (or mac + console).

                    Granted, none of us are very interested in the AAA stuff, except a couple of games. But there’s definitely a (granted, small) market for this stuff. Luckily stuff like Unity means that even if the game only sells like 1k copies it’ll still be a good amount of money for “provide one extra binary from the engine exporter.”

                    The biggest issue is that Mac hardware isn’t shipping with anything powerful enough to run most games properly, even when you’re willing to spend a huge amount of money. So games like Hitman got ported, but you can only run it on the most expensive MBPs or iMac Pros. Meanwhile you have sub-$1k Windows laptops which can run the game (albeit not super well).

                  2. 2

                    I think Vulkan might not have been ready when Metal was first sketched out – and Apple does not usually like to compromise on technology ;)

                    1. 2

                      My recollection is that Metal appeared first (about June 2014), Mantle shipped shortly after (by a couple of months?), DX12 shows up mid-2015 and then Vulkan shows up in February 2016.

                      I get a vague impression that Mantle never made tremendous headway (because who wants to rewrite their renderer for a super fast graphics API that only works on the less popular GPU?) and DX12 seems to have made surprisingly little (because targeting an API that doesn’t work on Win7 probably doesn’t seem like a great investment right now, I guess? Current Steam survey shows Win10 at ~56% and Win7+8 at about 40% market share among people playing videogames.)

                      1. 2

                        Mantle got heavily retooled into Vulkan, IIRC.

                        1. 1

                          And there was much rejoicing. ♥

              1. 1

                I really miss when Apple keynotes announced interesting things.

                1. -1

                  I’m so old.

                  1. 3

                    I’m disappointed that companies who own significant copyright in Linux (like RedHat or Intel) and industry groups like the BSA don’t go after intellectual property thieves like Tesla. There are plenty of non-Linux choices if companies don’t want to comply with the GPL’s license terms. Other car companies seem to be happy with VxWorks and similar.

                    What’s the point of asking China to comply with American IP if the US won’t even police its own companies?

                    1. 10

                      I’m pretty unsurprised that a company like Intel or Red Hat wouldn’t sue. Lawsuits are expensive, and it’s not clear a GPL suit would produce any significant damages (can they show they’ve been damaged in any material way?), just injunctive relief to release the source code to users. So it’d be a pure community-oriented gesture, probably a net loss in monetary terms. And it could end up a bigger loss: with the modern IP regime functioning as a de facto armed standoff where everyone accumulates defensive portfolios, suing someone is basically firing the first shot, inviting them to dig through their own portfolios to see if they have anything they can countersue you over. So you only do that if you feel you can gain something significant.

                      SFC is in a pretty different position, as a nonprofit explicitly dedicated to free software. So these kinds of lawsuits advance their mission, and since they aren’t a tech company themselves, there’s not much you can counter-sue them over. Seems like a better fit for GPL enforcement really.

                      1. 8

                        “a GPL suit would produce any significant damages (can they show they’ve been damaged in any material way?)”

                        This is generally why the FSF’s original purpose in enforcing the GPL was always to ensure that the code got published, not to try to shake anyone down for money. rms told Eben in the beginning, make sure you make compliance the ultimate goal, not monetary damages. The FSF and the Conservancy both follow these principles. Other copyleft holders might not.

                        1. 3

                          Intel owned VxWorks until very recently. Tesla’s copyright violations competed directly with their business.

                          1. 2

                            I’m not a lawyer but the GPL includes the term (emphasis added)

                            4. You may not copy, modify, sublicense, or distribute the Program except as expressly provided under this License. Any attempt otherwise to copy, modify, sublicense or distribute the Program is void, and *will automatically terminate your rights under this License*. However, parties who have received copies, or rights, from you under this License will not have their licenses terminated so long as such parties remain in full compliance.

                            Even if monetary damages are not available (not sure if they are), it should be possible to get injunctive relief revoking the right to use the software at all, not just injunctive relief requiring them to release the source.

                            1. 3

                              This is from GPLv2.

                              GPLv3 is a bit more lenient:

                              “However, if you cease all violation of this License, then your license from a particular copyright holder is reinstated (a) provisionally, unless and until the copyright holder explicitly and finally terminates your license, and (b) permanently, if the copyright holder fails to notify you of the violation by some reasonable means prior to 60 days after the cessation.

                              Moreover, your license from a particular copyright holder is reinstated permanently if the copyright holder notifies you of the violation by some reasonable means, this is the first time you have received notice of violation of this License (for any work) from that copyright holder, and you cure the violation prior to 30 days after your receipt of the notice.”

                              Now, I think people should move to GPLv3 if they want this termination clause.

                              And in any case, five years is completely disrespectful of the various developers who contributed to Tesla through their contributions to the free software it adopted.

                              “To that end, we ask that everyone join us and our coalition in extending Tesla’s time to reach full GPL compliance for Linux and BusyBox, not just for the 30 days provided by following GPLv3’s termination provisions, but for at least another six months.”

                              As a developer, this sounds a lot like changing the license text for the benefit of big corporations without the contributors’ agreement.

                              When I read this kind of news I feel betrayed by the FSF.
                              I seriously wonder if we need a more serious strong copyleft.

                              1. 2

                                It is not without contributor agreement. Any contributor who does not agree is free to engage in their own compliance or enforcement activity. Conservancy can only take action on behalf of contributors who have explicitly asked them to.

                                The biggest problem is that most contributors do not participate in compliance or enforcement activities at all.

                                1. 1

                                  “Conservancy can only take action on behalf of contributors who have explicitly asked them to.”

                                  Trust me, it’s not that simple.

                                  “The biggest problem is that most contributors do not participate in compliance or enforcement activities at all.”

                                  Maybe contributors already agreed to contribute under the license terms and just want it to be enforced as is?

                                  I’m sincerely puzzled by Software Freedom Conservancy.

                                  Philosophically I like this gentle touch; I’d like to believe that companies will be inspired by their work.

                                  But in practice, to my untrained eye, they weaken the GPL, because the message to companies is that Conservancy is afraid to test the GPL in court to defend the developers’ will as expressed in the license. As if it were not that safe.

                                  I’m not a lawyer, but as a developer, this scares me a bit.

                                  1. 3

                                    If contributors want their license enforced, they have to do something about that. No one can legally enforce it for them (unless they enter an explicit agreement). There is no magical enforcement body, only us.

                                    Conservancy’s particular strategy wouldn’t be the only one in use if anyone else did enforcement work ;)

                                    1. 1

                                      You are right. :-)

                          2. 2

                            They’re asking China to comply with the kind of American IP that makes high margins, not FOSS. They’re doing it since American companies are paying politicians to act in the companies’ interests, too.

                          1. 1

                            If I’m going to stay in the 🇺🇲 I really need to improve my Spanish.

                            1. 1

                              I’m really amazed that PyCon US managed to do this. It’s rare to get even a few French talks at PyCon Canada. Is there really that much more Spanish in the US that PyCon could get a whole track in Spanish?

                            1. 5

                              It’s always nice to have Theo to remind us that Linus isn’t as bad of an asshole as people like to portray.

                              1. 11

                                Things I self-host now on the Interwebs (as opposed to at home):

                                • NextCloud
                                • Bookstack Wiki
                                • Various sites and smaller web apps (Privatebin, Lutim, Framadate etc)
                                • Mailu Mail Server
                                • Searx

                                Things I’m setting up on the Interwebs:

                                • Gitea on HTTPd
                                • OpenSMTPd
                                • Munin
                                • Pleroma
                                • Transmission
                                • DNS (considering Unbound for now)

                                  Over time I may move the Docker and KVM-based Linux boxes over to OpenBSD and VMM as it matures. I’m moving internal systems from Debian to OpenBSD or NetBSD because I’ve had enough of systemd.

                                1. 6

                                    Out of curiosity, why migrate your entire OS to avoid systemd rather than just switch init systems? Debian supports others just fine. I use OpenRC with no issues, and personally find that solution much more comfortable than learning an entirely new management interface.

                                  1. 11

                                    To be fair, it’s not just systemd, but systemd was the beginning of the end for me.

                                    I expect my servers to be stable and mostly static. I expect to understand what’s running on them, and to manage them accordingly. Over the years, Debian has continued to change, choosing things I just don’t support (systemd, removing ifconfig etc). I’ve moved most of my stack over to docker, which has made deployment easier at the cost of me just not being certain what code I’m running at any point in time. So in effect I’m not even really running Debian as such (my docker images are a mix of alpine and ubuntu images anyway).

                                    I used to use NetBSD years back quite heavily, so moving back to it is fairly straightforward, and I like OpenBSD’s approach to code reduction and simplicity over feature chasing. I think it was always on the cards but the removal of ifconfig and the recent furore over the abort() function with RMS gave me the shove I needed to start moving.

                                    1. 4

                                      Docker doesn’t work on OpenBSD though, so what are you going to do?

                                      1. 2

                                        For now I’m backing up my configs in git, data via rsync/ssh and will probably manage deployment via Ansible.

                                        It’s not as easy as docker-compose, but not as scary as pulling images from public repos. Plus, I’ll actually know what code I’m running at a given point in time.

                                        1. 1

                                          Have you looked at Capistrano for deployment? Its workflow for deployment and rollback centers around releasing a branch of a git repo.

                                          I’m interested in what you think of the two strategies and why you’d use one or the other for your setup, if you have an opinion.

                                          1. 1

                                            I don’t run ruby, given the choice. It’s not a dogmatic thing, it’s just that I’ve found that there are more important things for me to get round to than learning ruby properly, and that if I’m not prepared to learn it properly I’m not giving it a fair shout.

                                    2. 4

                                      N.B. You can partially remove systemd, but not completely remove it. Many binaries runtime depend on libsystemd even if they don’t appear like they would need it.

                                      When I ran my own init system on Arch (systemd was giving me woes) I had to keep libsystemd.so installed for even simple tools like pgrep to work.

                                      Some more info and discussion here. I didn’t want to switch away from Arch, but I also didn’t want remnants of systemd sticking around. Given the culture of systemd adding new features and acting like a sysadmin on my computer I thought it wise to try and keep my distance.

                                      1. 2

                                        The author of the article regarding pgrep you linked used an ancient, outdated kernel, and complained that the newest versions of software wouldn’t work. He/She used all debug flags for the kernel, and complained about the verbosity. He/She used a custom, unsupported build of a bootloader, and complained about the interface. He/She installed a custom kernel package, and was surprised that it (requiring a different partition layout) wiped his/her partitions. He/She complains about color profiles, and says he/she “does not use color profiles” – which is hilarious, considering he/she definitely does use them, just unknowingly, and likely with the default sRGB set (which is horribly inaccurate anyway). He/She asks why pgrep has a systemd dependency – pgrep and ps both support displaying the systemd unit owning a process.

                                        1. 3

                                          I’m the author of the article.

                                            “ancient, outdated kernel” … “all debug flags for the kernel” … “unsupported build of a bootloader”

                                          The kernel, kernel build options and bootloader were set by Arch Linux ARM project. They were not unsupported or unusual, they were what the team provided in their install instructions and their repos.

                                          A newer mainstream kernel build did appear in the repos at some point, but it had several features broken (suspend/resume, etc). The only valid option for day to day use was the recommended old kernel.

                                            “complained that the newest versions of software wouldn’t work”

                                          I’m perfectly happy for software to break due to out of date dependencies. But an init system is a special case, because if it fails then the operating system becomes inoperable.

                                          Core software should fail gracefully. A good piece of software behaves well in both normal and adverse conditions.

                                          I was greatly surprised that systemd did not provide some form of rescue getty or anything else upon failure. It left me in a position that was very difficult to solve.

                                            “He/She installed a custom kernel package, and was surprised that it (requiring a different partition layout) wiped his/her partitions”

                                          This was not a custom kernel package, it was provided by the Arch Linux ARM team. It was a newer kernel package that described itself as supporting my model. As it turns out it was the new recommended/mandated kernel package in the Arch Linux ARM install instructions for my laptop.

                                          Even if the kernel were custom, it is highly unusual for distribution packages to contain scripts that overwrite partitions.

                                            “He/She complains about color profiles, and says he/she ‘does not use color profiles’ – which is hilarious, considering he/she definitely does use them, just unknowingly”

                                            It looks like you have merged together multiple concepts under the words ‘colour profiles’ here.

                                            Colour profiles are indeed used by image and video codecs every day on our computers. Most of these formats do not store their data in the same format as our monitors expect (RGB888 gamma ~2.2, i.e. common sRGB), so they have to perform colour space conversions.

                                          Whatever the systemd unit was providing in the form of ‘colour profiles’ was completely unnecessary for this process. All my applications worked before systemd did this. And they still do now without systemd doing it.

                                            “likely with the default sRGB set (which is horribly inaccurate anyway)”

                                          1:1 sRGB is good enough for most people, as it’s only possible to obtain benefits from colour profiles in very specific scenarios.

                                          If you are using a new desktop monitor and you have a specific task you need or want to match for, then yes.

                                          If you are using a laptop screen like I was: most change their colour curves dramatically when you change the screen viewing angle. Tweaking of colour profiles provides next to no benefit. Some laptop models have much nicer screens and avoid this, but at the cost of battery life (higher light emissions) and generally higher cost.

                                            I use second hand monitors for my desktop. They mostly do not have factory provided colour profiles, and even then the (CCFL) backlights have aged and changed their responses. Without calibrated colour profiling equipment there is not much I can do, and it is not worth the effort unless I have a very specific reason to do so.

                                            “He/She asks why pgrep has a systemd dependency – pgrep and ps both support displaying the systemd unit owning a process.”

                                          You can do this without making systemd libraries a hard runtime dependency.

                                          I raised this issue because of a concept that seemed more pertinent to me: the extension of systemd’s influence. I don’t think it’s appropriate for basic tools to depend on any optional programs or libraries, whether they be an init system like systemd, a runtime like mono or a framework like docker.

                                          1. 2

                                            Almost all of these issues are distro issues.

                                            Systemd can work without the color profile daemon, and ps and pgrep can work without systemd. Same with the kernel.

                                            But the policy of Arch is to always build all packages with all possible dependencies as hard dependencies.

                                            e.g. for Quassel, which can make use of KDE integration, but doesn’t require it, they decide to build it so that it has a hard dependency on KDE (which means it pulls in 400M of packages for a package that would be fine without any of them).

                                          2. 1

                                              Why he/she instead of they? It makes your comment difficult to read.

                                            1. 1

                                              tbh, I dunno. I usually use third-person they.

                                      2. 3

                                        I really wish the FreeBSD port of Docker was still maintained. It’s a few years behind at this point, but if FreeBSD was supported as a first class Docker operating system, I think we’d see a lot more people running it.

                                        1. 4

                                          IME Docker abstracts the problem under a layer of magic rather than providing a sustainable solution.

                                            Yes, it makes things as easy as adding a line referencing a random GitHub repo to deploy otherwise troublesome software. I’m not convinced this is a good thing.

                                          1. 3

                                            As someone who needs to know exactly what gets deployed in production, and therefore cannot use any public registry, I can say with certainty that Docker is a lot less cool without the plethora of automagic images you can run.

                                            1. 2

                                                Exactly: once you start running private registries it’s not the timesaver it may have first appeared to be.

                                              1. 1

                                                  Personally, I’ll have to disagree with that. I’m letting GitLab automatically build the containers I need as a basis, plus my own. And the result is amazing, because scaling, development, reproducibility, etc. become much easier.

                                          2. 3

                                            I think Kubernetes has support for some alternative runtimes, including FreeBSD jails? That might make FreeBSD more popular in the long run.

                                          3. 1

                                              How is the Nextcloud video chat feature? Does it work reliably compared to Zoom.us?

                                            1. 1

                                              Works fine for me(tm).

                                              It seems fine both over mobile and laptop, and over 4G. I haven’t tried any large groups and I doubt I’ll use it much, but so far I’ve been impressed.

                                            2. 1

                                              Is bookstack good? I’m on the never ending search for a good wiki system. I keep half writing my own and (thankfully) failing to complete it.

                                              1. 2

                                                  Cowyo is pretty straightforward (if sort of sparse).

                                                  Being Go and working with flat files, it’s pretty straightforward to run and back up.

                                                1. 2

                                                  Bookstack is by far one of the best wikis I’ve given to non-technical people to use. However I think it stores HTML internally, which is a bit icky in my view. I’d prefer it if they converted it to markdown. Still, it’s fairly low resource, pretty and works very, very well.

                                              1. 3

                                                I stopped hosting my own email when I realized that I wasn’t reading my personal email because of the spam. And yeah I tried greylisting and spamassassin and all kinds of shit. At that time I was running my own DNS too (primary & secondary on different continents).

                                                These days I’m only really self-hosting web stuff though I’m pretty sure that’s a bad idea. Nobody offers the web hosting flexibility I want at the price I want to pay, though I think letsencrypt’s ubiquity may start to change that.

                                                1. 3

                                                  God how I miss the real MacOS.

                                                  1. 1

                                                    Great to see this! Are you planning to send a PR so this gets merged into upstream?

                                                    1. 4

                                                      It is already merged into upstream, thanks! :-)

                                                      1. 1

                                                        Yeah they were very responsive when I was upstreaming Fuchsia compatibility.

                                                    1. 12

                                                      Cool article, but not what I expected from the phrase window manager!

                                                      1. 9

                                                        In hindsight, I guess the tag should have given it away. But I enjoyed the surprise as well :)

                                                        1. 1

                                                          Missed the hardware tag too. Been looking at electric blinds recently and the off-the-shelf ones seem prohibitively expensive.

                                                          1. 2

                                                            TechCrunch Disrupt 2019: “Electric Blinds Meets Machine Learning Monitored and Controlled via a $5 Droplet!”

                                                            1. 2

                                                              Author here. Although the raw material I used is <$10, as a rule for all DIY projects you should always consider the time and equipment needed. I’ve seen some motorized shades for ~$200, which for a finished/professional product is quite OK.

                                                          2. 2

                                                            It doesn’t even support ICCCM!

                                                          1. 2

                                                            This is a great piece with some definitely actionable advice. The one problem I have with it is it implies issues like developers ignoring the warnings are problems with static analysis. Those are people problems that should be fixed by management. It’s long known that you get more quality by getting buy-in of senior management to put it in company culture or require at least a percentage of time spent on it. In my company, there’s certain reviews and reports that have to get done with managers checking a sample of them to make sure we didn’t pretend to do them. If someone ignores reviews or problems, management takes action to force them to address them ranging from a warning to a write-up with more management review of their activities to termination after lots of write-ups.

                                                            The same thing could, probably should, be done in a company like Google. That changes the bar to integrating static analysis as a step in their build system. Note that a lot of good advice in the article still applies at that point about suppressing false positives, prioritizing/triaging bugs, focusing on analyses that comes with fixes, allowing customization, and so on. These would still benefit given you get more out of tools the developers like than are just forced to use. They could even split analysis between the quick, compiler-focused tools they developed and the slower, more-thorough tools that run over time. About everything they’re doing still applies. They just get even more results when people are pushed to address quality issues.

                                                              And, while we’re at it, I’ve always wondered why Google didn’t just buy one of the top vendors of static analysis tools. They could buy one, make it a first priority to do the kind of work described in the article to improve usability and integration with Google, release that to the other, paying customers, and the tools would continue to improve with both the vendor’s revenue and Google’s contributions. That’s on top of the improvements from CompSci folks those vendors pick up regularly. At this point, though, they’ve put so much into their own tooling with good results that they probably couldn’t justify such a purchase to management. Might have been a good idea earlier.

                                                            1. 3

                                                              The article focused a lot on the experiments to integrate static analysis into developers workflow. That is addressing the people problem. Since when does senior management have much influence on how programmers work?

                                                              1. 2

                                                                  Of course management has a lot of influence on how programmers work. If not, management is incompetent.

                                                                1. 3

                                                                  Competent management will fund effort to incorporate improvements into existing workflows rather than telling programmers that they need to change their patterns and tools.

                                                                  1. 2

                                                                    Management can influence programmer incentive to change false positive rate tolerance, which Google reports as 10% in this article.

                                                                    Let’s say you have 20% false positive rate analysis. Google’s approach is to wait until the analysis improves. I think what nickpsecurity is saying is that with management buy-in, you can successfully deploy 20% analysis. Since the analysis catches real bugs 80% of the time, this can improve software quality a lot. I believe this actually happened with Microsoft and SAL.

                                                                    1. 1

                                                                        Exactly. Microsoft’s SDL was a great example that dramatically reduced 0-days and crashes in the Windows kernel by embedding strong reviews into each part of the development process. Interestingly enough, it was actually optimized for development pace. The mandatory use of their driver verifier also eliminated most blue screens. Earlier, OpenVMS teams alternated one week building features, a weekend running huge numbers of tests, a week fixing, and repeat. Great reliability. IBM also had mandated inspections and verifications under Fagan’s Inspection Process and Mills’ Cleanroom, respectively.

                                                                        Far as static analysis, it’s really common in the safety-critical industry to force developers to use and review the output of the tools. They know they lose some time to fighting with false positives. Yet the fact that they deliver low-defect code day in and day out with workflows that include strong reviews, machine analysis, and testing proves it can be done. One NASA team actually used four static analyzers, with the author saying they each caught things the others missed. There are also tools that focus on minimizing the number of false positives so they don’t overload developers. If that was a priority, then a company could use a mix of those to start with. That’s actually something I advocate more often these days given so much developer resistance.

                                                                      Edit: Wait, I thought you meant SDL with SAL being a typo. Your other comment makes me think you might have intentionally been talking about benefits of SAL. So, count this as complementary evidence since both brought major benefits.

                                                                2. 1

                                                                    “Since when does senior management have much influence on how programmers work?”

                                                                  A combo of senior and middle management already dictated all kinds of practices at Google from using the monorepo or internal tools to their job compensation and performance ratings. They can mandate more time addressing bugs, too. It’s happened at other companies. Most write-ups on getting companies to do more QA in software also mention the importance of senior management buy-in so everyone is pushed from top-down to keep it a priority. Without it, people ignoring it might get away with it.

                                                                3. 2

                                                                    To your first point, I think that they partially address this with their points on developer happiness. Google is big enough that they have to consider the effects of onboarding a developer that doesn’t have any experience with static analysis tools or that has had negative experiences with them. You want your tooling to be perceived by all developers as adding value (the vast majority of the time) so that everyone feels comfortable with it being part of the workflow. Anything short of that adds to the friction, and Google could lose a developer that has already gotten over the hurdles of the hiring process over… toolchain arguments. Just to be clear, I do think the answer is more and more usable static analysis, but getting people to change their mind about things is more than just a simple engineering challenge.

                                                                    And to your final point - I suspect it’s fear of messing with the secret sauce. I heard (somewhere) that Google, Microsoft and other places with gigantic C/C++ code bases already are the biggest paying customers of static analysis tools, but are content to buy without exposing what’s behind the curtain.

                                                                  1. 1

                                                                      “Anything short of that adds to the friction, and Google could lose a developer that has already gotten over the hurdles of the hiring process over… toolchain arguments.”

                                                                    I see where you’re going there as this is a valid concern. I think I just disagree about what they should be optimizing for. In this case, we’re talking one of the companies that almost everyone fights to join to get its salaries, perks, and prestige. I think a small fraction of the day vetting reports from static analysis won’t make most productive Googlers quit. If it does, I predict their places will be taken by others that will accept a QA step. In some companies, there’s usually even specific people or teams that do this sort of thing working alongside the other developers.

                                                                      “I heard (somewhere) that Google, Microsoft and other places with gigantic C/C++ code bases already are the biggest paying customers of static analysis tools, but are content to buy without exposing what’s behind the curtain.”

                                                                    Might be true but I can’t assume too much. Microsoft is heavily invested in them via Microsoft Research. They publish a lot of reports on their activities and even FOSS some of it. Microsoft SAL sanxiyn mentioned, PREfix/PREfast (older), Dafny, VCC (used in Hyper-V), Code Contracts (Design-by-Contract), F*, Midori project, and so on come to mind. A lot of their stuff requires more significant investment, though, since it aims for stronger correctness. The SDL process, driver verification, and Code Contracts were examples that didn’t add too much overhead for their standard, development pace and priorities. Microsoft also does writeups on using their tools with a random example I just got out of Google.

                                                                  2. 2

                                                                      If you’re going to start firing programmers for ignoring static analysis warnings instead of trusting their judgement, you’d wanna pray for a serious improvement in static analysis.

                                                                    And your best programmers will probably quit, so there’s that.

                                                                    1. 4

                                                                        Microsoft deployed SAL annotations top-down, resulting in much improved security of Windows. The initial SAL paper reports a 28% false positive rate.

                                                                        I claim: even with a high false positive rate, forcing programmers to fix static analysis warnings works. Serious improvement in static analysis is welcome but not necessary. Also, as far as I know, the best programmers at Microsoft didn’t quit over SAL.

                                                                      1. 1

                                                                        I think adding a contract system into your source tree wouldn’t really come under the umbrella of adding analysis to existing code, but either way I didn’t know about SAL and found it super-interesting, cheers!

                                                                  1. 2

                                                                      Amazing. I kind of can’t believe they built integer overflow into the Ethereum VM. But then I also totally can.

                                                                    Do the authors of those “contracts” have enough political clout to get a hard fork that patches & reverts or are they outside of the eth oligopoly?
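                                                                      Not the EVM itself, but a minimal sketch of this wraparound bug class in Go, assuming uint64 as a stand-in for the EVM’s 256-bit words; the transfer function and all names here are made up for illustration:

                                                                      ```go
                                                                      package main

                                                                      import "fmt"

                                                                      // transfer is a hypothetical token-balance update with no overflow
                                                                      // check, the same class of bug as unchecked EVM arithmetic.
                                                                      func transfer(balances map[string]uint64, from, to string, amount uint64) {
                                                                          balances[from] -= amount // silently wraps past zero instead of failing
                                                                          balances[to] += amount
                                                                      }

                                                                      func main() {
                                                                          balances := map[string]uint64{"attacker": 5, "victim": 100}
                                                                          transfer(balances, "attacker", "victim", 10) // more than "attacker" holds
                                                                          fmt.Println(balances["attacker"]) // 18446744073709551611, not an error
                                                                      }
                                                                      ```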

                                                                    1. 2

                                                                      I’ve heard rumblings that they’re planning to replace the EVM with a WebAssembly based vm.

                                                                      1. 2

                                                                        Link? Have not heard Ethereum guys looking at WebAssembly…

                                                                        1. 2

                                                                            Sorry, on my phone, don’t have a canonical link, just do a search for “Ethereum webassembly” on DDG/reddit/twitter

                                                                          1. 1

                                                                            WebAssembly integer operations also overflow.

                                                                            1. 0

                                                                              Oh god.

                                                                          1. 14

                                                                            Fuchsia is not Unix. ;-)

                                                                            1. 3

                                                                              Technically, Linux isn’t Unix either…

                                                                              1. 3

                                                                                Indeed.

                                                                                Not being Linux did not exclude its being a Unix (or a Plan 9), so I checked the syscall list.

                                                                                It’s a new operating system. Maybe Pike was wrong?

                                                                                1. 1

                                                                                  That was a good stack of slides. Thanks for sharing.

                                                                            1. 5

                                                                              Another “quirks” question: did you find any unexpected quirks of Go that made writing this emulator harder or easier?

                                                                              1. 5

                                                                                  In this particular case, it feels like the code isn’t too far from what C code would be: here are some basic data structures and here are some functions that operate on them, mostly at the bit level. No fancy concurrency models or exciting constructs. I think given the fact that this is an inherently low level program, most niceties from Go weren’t immediately needed.

                                                                                  I did use some inner functions/closures and hash maps, but could’ve just as well done without them. The bottom line is that the language didn’t get in the way, but I didn’t feel like it was enormously helpful, other than making it easier to declare dependencies and handling the build process for me.
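
                                                                                  For a concrete picture of the bit-level style described above, here’s a hedged sketch (illustrative names, and CHIP-8-style 16-bit opcodes rather than whatever this emulator actually decodes):

                                                                                  ```go
                                                                                  package emu

                                                                                  // fields holds the conventional pieces of a CHIP-8-style 16-bit opcode.
                                                                                  type fields struct {
                                                                                      x, y, n uint8  // register indices and a 4-bit immediate
                                                                                      kk      uint8  // 8-bit immediate
                                                                                      nnn     uint16 // 12-bit address
                                                                                  }

                                                                                  // decode extracts the fields with plain masks and shifts, the kind
                                                                                  // of C-like code the comment above describes.
                                                                                  func decode(op uint16) fields {
                                                                                      return fields{
                                                                                          x:   uint8(op >> 8 & 0x0F),
                                                                                          y:   uint8(op >> 4 & 0x0F),
                                                                                          n:   uint8(op & 0x0F),
                                                                                          kk:  uint8(op & 0xFF),
                                                                                          nnn: op & 0x0FFF,
                                                                                      }
                                                                                  }
                                                                                  ```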

                                                                                1. 4

                                                                                  Did you run into any issues with gc pauses? That’s one of the things people worry about building latency sensitive applications in go.

                                                                                  1. 3

                                                                                    Not the OP, but I would assume this kind of application generates very little garbage in normal operation.

                                                                                    1. 2

                                                                                        The gc pauses are so minuscule now, for the latest releases of Go, that there should be no latency issues even for real-time use. And it’s always possible to allocate a blob of memory at the start of the program and just use that, to avoid gc in the first place.
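
                                                                                        As a rough illustration of that preallocation idea (assumed names, not from the emulator being discussed): give the machine fixed buffers up front and reuse them, so the hot loop never allocates and the GC has nothing to collect:

                                                                                        ```go
                                                                                        package emu

                                                                                        // machine owns all of its memory up front; nothing on the hot path
                                                                                        // calls make or new, so steady-state execution produces no garbage.
                                                                                        type machine struct {
                                                                                            mem   [4096]byte    // guest memory, allocated once with the struct
                                                                                            frame [64 * 32]byte // framebuffer, overwritten in place each frame
                                                                                        }

                                                                                        // step would fetch, decode, and execute one instruction, writing only
                                                                                        // into m.mem and m.frame rather than allocating fresh buffers.
                                                                                        func (m *machine) step() {
                                                                                            // ...
                                                                                        }
                                                                                        ```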

                                                                                      1. 2

                                                                                        The garbage collector hasn’t been an issue either. Out of the box, I had to add artificial delays to slow things down and maintain the frame rate, so I haven’t done much performance tuning/profiling. I am interested in scenarios where this would be critical though.

                                                                                        1. 1

                                                                                          Go’s GC pauses are sub-millisecond so it’s not an issue.

                                                                                      2. 3

                                                                                      Interested in this as well. I’ve been toying with the idea of writing a CHIP-8 emulator in Go and would love to hear what the experience of writing emulators is like.

                                                                                        1. 3

                                                                                          I did exactly this as a project to learn Go! I used channels in order to control the clock speed and the timer frequency and it ended up being a really nice solution. The only real hangup I had was fighting with the compiler with respect to types and casting, but having type checking overall was a good thing.
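
                                                                                        Something like the following is one way that channel approach can look (a sketch under assumed rates: CHIP-8’s timers tick at 60 Hz, and ~500 Hz is a common choice for the CPU; step and tickTimers are stand-ins for the real emulator hooks):

                                                                                        ```go
                                                                                        package emu

                                                                                        import "time"

                                                                                        // run drives the CPU and the 60 Hz delay/sound timers from two
                                                                                        // tickers, letting select interleave them at their own rates.
                                                                                        func run(step, tickTimers func(), quit <-chan struct{}) {
                                                                                            cpu := time.NewTicker(time.Second / 500) // ~500 instructions per second
                                                                                            timers := time.NewTicker(time.Second / 60)
                                                                                            defer cpu.Stop()
                                                                                            defer timers.Stop()

                                                                                            for {
                                                                                                select {
                                                                                                case <-cpu.C:
                                                                                                    step() // execute one instruction
                                                                                                case <-timers.C:
                                                                                                    tickTimers() // decrement delay and sound timers
                                                                                                case <-quit:
                                                                                                    return
                                                                                                }
                                                                                            }
                                                                                        }
                                                                                        ```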

                                                                                      1. 4

                                                                                    I’m skeptical, but I think they can pull it off.

                                                                                    In the end, they only need to reach half of Intel’s performance, as benchmarks suggest that macOS’s performance is roughly half of Linux’s when running on the same hardware.

                                                                                        With their own hardware, they might be able to get closer to the raw performance offered by the CPU.

                                                                                        1. 7

                                                                                      “they only need to reach half of Intel’s performance, as benchmarks suggest that macOS’s performance is roughly half of Linux’s when running on the same hardware”

                                                                                          I’m confused. Doesn’t that mean they need to reach double Intel’s performance?

                                                                                          1. 11

                                                                                        It was probably worded quite poorly; my calculation was like:

                                                                                        • Raw Intel performance = 100
                                                                                        • macOS Intel performance ~= 50
                                                                                        • Raw Apple CPU performance = 50
                                                                                        • macOS Apple CPU performance ~= 50

                                                                                        So if they build chips that are half as fast as “raw” Intel, but are able to better optimize their software for their own chips, they can get way closer to the raw performance of their hardware than they manage to do on Intel.

                                                                                          2. 7

                                                                                        Why skeptical? They’ve done it twice before (68000 -> PowerPC and PowerPC -> Intel x86).

                                                                                            1. 4

                                                                                              And the PPC → x86 transition was within the past fifteen years and well after they had recovered from their slump of the ‘90s, and didn’t seem to hurt them. They’re one of the few companies in existence with recent experience transitioning microarchitectures, and they’re well-positioned to do it with minimal hiccups.

                                                                                              That said, I’m somewhat skeptical, too; it’s a huge undertaking even if everything goes as smoothly as it did with the x86 transition, which is very far from a guarantee. This transition will be away from the dominant architecture in its niche, which will introduce additional friction which was not present for their last transition.

                                                                                              1. 2

                                                                                                They also did ARM32->ARM64 on iOS.

                                                                                                1. 3

                                                                                                  That’s not much of a transition. They did i386 -> amd64 too then.

                                                                                                  (fun fact, I also did that, on the scale of one single Mac - swapped a Core Duo to a Core 2 Duo in a ’06 mini :D)

                                                                                                  1. 1

                                                                                                    My understanding is that they’re removing some of the 32-bit instructions on ARM. Any clue if that’s correct?

                                                                                                    1. 1

                                                                                                      AArch64 processors implement AArch32 too for backwards compatibility, just like it works on amd64.

                                                                                                      1. 1

                                                                                                        As of iOS 11, 32-bit apps won’t load. So if Apple devices that come with iOS 11 still have CPUs that implement AArch32, I’d guess it’s only because it was easier to leave it in than pull it out.

                                                                                                        1. 1

                                                                                                          Oh, sure – of course they can remove it, maybe even on the chip level (since they make fully custom ones now), or maybe not (macOS also doesn’t load 32-bit apps, right?). The point is that this transition used backwards compatible CPUs, so it’s not really comparable to 68k to PPC to x86.

                                                                                                          1. 1

                                                                                                            I of course agree that this most recent transition isn’t comparable with the others. To answer your question: the version of macOS they just released a few days ago (10.13.4) is the first to come with a boot flag that lets you disable loading of 32-bit applications to, as they put it, “prepare for a future release of macOS in which 32-bit software will no longer run without compromise.”

                                                                                              2. 3

                                                                                                I didn’t know this. Do you know which benchmarks show macOS at half of Linux performance?

                                                                                                1. 3

                                                                                                  Have a look at the benchmarks Phoronix has done. Some of them are older, but I think they show the general trend.

                                                                                              This of course doesn’t take GPU performance into account. I could imagine that they take an additional hit there, as companies (that don’t use AAA game engines) would rather do …

                                                                                                  Application → Vulkan API → MoltenVK → Metal

                                                                                                  … than write a Metal-specific backend.

                                                                                                  1. 1

                                                                                                    I guess you’re talking about these? https://www.phoronix.com/scan.php?page=article&item=macos-1013-linux

                                                                                                    Aside from OpenGL and a handful of other outliers for each platform, they seem quite comparable, with each being a bit faster at some things and a bit slower at others. Reading your comments I’d assumed they were showing Linux as being much faster in most areas, usually ending up about twice as fast.

                                                                                                2. 3

                                                                                                  The things they’re slow at don’t seem to be particularly CPU architecture specific. But the poor performance of their software doesn’t seem to hurt their market share.