1. 5

    I’m really amused they generate a C file and call out to gcc / clang. I wonder if they plan to move away from that strategy.

    1. 3

      This is a draft implementation of the concept, so probably yes.

      1. 2

        What would be gained by moving away from that?

        1. 6

          For one, no runtime dependency on a C compiler.

          C compilers are also fairly expensive to run, compared to other more targeted JIT strategies. And it’s more difficult to make the JIT code work nicely with the regular uncompiled VM code.

          Take LuaJIT. It starts by compiling the Lua code to VM bytecode. Then instead of interpreting the bytecode, it “compiles” the bytecode into native machine code that calls the interpreter functions that would be called by a loop { switch (opcode) { ... } }. That way when the JIT compiles a hot path, it directly encodes all entry points as jumps directly into the optimized code, and all exit conditions as jumps directly back to the interpreter code.
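
          To make that control flow concrete, here is a minimal C sketch of the two styles (the opcodes, VM struct, and helper names are all invented for illustration; this is not LuaJIT’s or Ruby’s actual code):

          #include <stdint.h>
          #include <stdio.h>

          enum { OP_PUSH, OP_ADD, OP_PRINT, OP_HALT };

          typedef struct { int64_t stack[64]; int sp; } VM;

          static void op_push(VM *vm, int64_t v) { vm->stack[vm->sp++] = v; }
          static void op_add(VM *vm)   { vm->sp--; vm->stack[vm->sp - 1] += vm->stack[vm->sp]; }
          static void op_print(VM *vm) { printf("%lld\n", (long long)vm->stack[--vm->sp]); }

          /* Classic interpreter: loop { switch (opcode) { ... } } */
          static void interpret(VM *vm, const int64_t *code) {
              for (size_t pc = 0;;) {
                  switch (code[pc++]) {
                  case OP_PUSH:  op_push(vm, code[pc++]); break;
                  case OP_ADD:   op_add(vm);              break;
                  case OP_PRINT: op_print(vm);            break;
                  case OP_HALT:  return;
                  }
              }
          }

          /* What a "threaded" compiled hot path boils down to: the same helper
             calls laid out straight-line, with no dispatch loop in between.
             A real JIT emits machine code for this, and on a guard failure it
             can jump back into the interpreter at a known bytecode offset. */
          static void jitted_hot_path(VM *vm) {
              op_push(vm, 2);
              op_push(vm, 3);
              op_add(vm);
              op_print(vm);
          }

          int main(void) {
              VM vm = { .sp = 0 };
              const int64_t program[] = { OP_PUSH, 2, OP_PUSH, 3, OP_ADD, OP_PRINT, OP_HALT };
              interpret(&vm, program);   /* prints 5 via the dispatch loop   */
              jitted_hot_path(&vm);      /* prints 5 with no dispatch at all */
              return 0;
          }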

          Compare this to an external compiled object, which can only exit wholesale, leaving the VM to clean up and figure out the next step. A fully external object—C compiled or not—can’t thread into the rest of the execution, so its scope is limited to pretty isolated functions that only call similarly isolated functions, or functions with very consistent output.

          1. 2

            Compare this to an external compiled object, which can only exit wholesale, leaving the VM to clean up and figure out the next step. A fully external object—C compiled or not—can’t thread into the rest of the execution, so its scope is limited to pretty isolated functions that only call similarly isolated functions, or functions with very consistent output.

            This doesn’t seem to be related to the approach Ruby is taking, though? They’re calling out to the compiler to build a shared library, and then dynamically linking it in. There shouldn’t be anything stopping the code in the shared object from calling back into the rest of the Ruby runtime.

            1. 2

              Right, it can use the Ruby runtime, but it can’t jump directly to a specific location in the VM bytecode. It has to call a function that can execute for it, and will return back into the compiled code when that execution finishes. It’s very limited, compared to all types of code being able to jump between each other at any time.
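
              Concretely, the generated shared object ends up shaped something like this (all names here are invented for illustration; this is not Ruby’s actual MJIT interface):

              #include <stdint.h>

              typedef struct vm_frame vm_frame;   /* opaque VM state                 */
              typedef int64_t VALUE;              /* stand-in for a tagged VM value  */

              /* Provided by the VM, resolved when the shared object is dlopen()ed. */
              extern VALUE vm_call_method(vm_frame *f, VALUE recv, int method_id, VALUE arg);

              /* Compiled body of something like `def double(x) x + x end`.
                 Anything it can't do inline becomes a call *into* the VM, which runs
                 bytecode on this function's behalf and then returns here; the
                 compiled code never jumps to a bytecode offset itself. */
              VALUE jit_double(vm_frame *f, VALUE self, VALUE x) {
                  (void)self;
                  return vm_call_method(f, x, /* hypothetical id for '+' */ 1, x);
              }

              Everything crosses that boundary as an ordinary call and return, which is exactly the limitation described above.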

          2. 4

            exec’ing a new process each time probably gets expensive.

            1. 3

              Typically any kind of JIT startup cost is quite expensive, but as long as you JIT the right code, the cost of exec’ing clang over the life of a long running process should amortize out to basically nothing.

              I’d expect that the bare exec cost would only become a significant factor if you were stuck flip-flopping between JITing a code section and deoptimizing it, and at that point you’d gain more from improving your JIT candidate heuristics rather than folding the machine code generator in process and continuing to let it flip-flop.

              There are other reasons they may want to move away from this C-intermediary approach, but exec cost doesn’t strike me as one of them.

        1. 1

          Walking around Tokyo, I often get the feeling of being stuck in a 1980’s vision of the future and in many ways it’s this contradiction which characterises the design landscape in Japan.

          Could this also be because many American films in the 80’s about the future used Japanese culture? Rewatching the original Blade Runner made me think about this.

          1. 3

            Japan is one of our favorite places to visit, but there is a definite retro-futuristic vibe going on. Cash everywhere, or single-purpose cash cards instead of credit cards, fax machines, high-speed Internet access on your feature phone, no air conditioning or central heat but a robot vending machine at 7/11.

            (We kept having children and so we haven’t gotten to travel internationally for a while now, but that’s our memory of it.)

            1. 2

              The feature phones have died – everybody on the train is staring at their iPhone or Android, now. Contactless smart cards (Suica, Pasmo, etc), used for train fares, are gaining momentum as payment cards in 7/11 etc, but otherwise it’s still mostly cash-only.

              Otherwise it’s pretty much the same.

            2. 2

              Living in NYC, it feels like the 70’s version of the future!

            1. 4

              A solid list, with one question mark.

              Lynn Conway started life as a man. Does this mean his/then her achievements get equally credited to men/women?

              1. 52

                No. Trans women are women.

                1. 10

                  Thank you. I want to live in a world where this is just taken as a given. Let’s start with our little world here, people.

                  1. 8

                    What is the goal of creating a list of women in CS? If it’s to demonstrate to young girls that they can enter the field, it seems unproductive to include someone who grew up experiencing life as a man.

                    If the goal of creating the list is some kind of contest, then it’s counterproductive for entirely different reasons.

                    1. 28

                      someone who grew up experiencing life as a man

                      Do you know any trans women who have said they grew up experiencing life as a man? I know quite a few and none of them have expressed anything like this, and my own experience was certainly not like that.

                      However, if you mean that we were treated like men, with the privilege it brings in many areas, then yes, that became even more obvious to me the moment I came out.

                      Regardless, trans folks need role models too, and we don’t get a lot of respectful representation.

                      1. 21
                        $ curl https://www.hillelwayne.com/post/important-women-in-cs/ | grep girl | wc -l
                        0
                        

                        The motivation for the post is clearly laid out in the first paragraph:

                        I’m tired of hearing about Grace Hopper, Margaret Hamilton, and Ada Lovelace. Can’t we think of someone else for once?

                        It’s a pretty pure writeup for the sake of being a list you can refer to.

                        On your statement about “girls”: it’s quite bad to assume a list of women is just for kids, and it’s also bad to assume trans women can’t be examples to (possibly themselves trans) girls.

                        1. 4

                          That’s not a motivation, that’s a tagline.

                          The primary reason I would refer to a list like this is if I was demonstrating to a young woman considering CS that, perhaps despite appearances, many women have historically made major contributions to the field. I’m not sure what else I would need something like this for.

                          1. 5

                            Maybe it’s not for you to distribute but for women to discover …

                          2. 1

                            I don’t see why it’s bad to assume that. It feels like it would be a pretty serious turn-off to me if I were looking for successful women and found people who were men into adulthood. I find it hard to imagine that I’m unique in that feeling. I’m sure it feels good for trans people, but if that’s your goal, admit the trade-off rather than just telling people they’re women and not trans women.

                            You can berate people for not considering trans women to be the same as born women, but it will likely just keep them quiet rather than convince them to be inspired.

                            1. 19

                              people who were men into adulthood

                              Now I’m curious what your criteria are, if not self-identification. When did this person cease to be a man, to you?

                              When they changed their name?

                              When they changed their legal gender?

                              When they started hormones?

                              When they changed their presentation?

                              When they got surgery?

                              What about trans people who do none of that? E.g. I’ve changed my name and legal gender (only because governments insist on putting it in passports and whatnot,) because I had the means to do so and it bothered me enough that I did, is that enough? What about trans people who don’t have the means, option, or desire to do so?

                              When biologists say that there’s not one parameter that overrides the others when it comes to determining sex¹, and that it makes more sense to just go by a person’s gender identity if you for whatever reason must label them as male/female, why is that same gender identity not enough to determine someone’s own gender?

                              1. http://www.nature.com/news/sex-redefined-1.16943
                          3. 16

                            If it’s to demonstrate to young girls that they can enter the field, it seems unproductive to include someone who grew up experiencing life as a man.

                            This is a misunderstanding of transsexuality. She grew up experiencing life as a woman, but also as a woman housed in a foreign-feeling body and facing a tendency by others to mistake her gender.

                            Does that mean she faced a different childhood from many other women? Sure. But she also shared many of the disadvantages they faced, frequently to a much stronger degree. Women face difficulty if they present as “femme” in this field, but it is much more intense if they present as femme AND people mis-bucket them into the “male” mental box.

                        2. 14

                          If they identified as a woman at the time of accomplishment, it seems quite reasonable that it’d count. For future work, just think about it in terms of trans-woman extends base class woman or at least implements the woman interface.

                          In any event, your comment is quite off-topic. Rehashing this sort of stuff is an exercise that while interesting is better kept literally anywhere else on the internet–if you have questions of this variety, please seek enlightenment via private message with somebody you think may be helpful on the matter, and don’t derail here.

                          1. 7

                            The point of this is not to give more achievements to women… It’s to showcase people who were most likely marginalized.

                            1. [Comment removed by author]

                              1. 9

                                This is definitely not what life is like for trans people pre-transition.

                            2. 12

                              It’s rude to talk about people’s gender like this fyi

                              1. 0

                                It’s ridiculous to allow this framing to suppress a reasonable point.

                                1. 10

                                  It’s not a reasonable point. This is not the place to make whatever point you’re trying to make.

                              2. 3

                                Depends on where a person is on the political spectrum. I’d probably note they’re trans if targeting a wide audience, not if a liberal one, and leave the person off if a right-leaning one.

                                1. 5

                                  What they don’t know won’t hurt them. As far as the right is concerned, she is a woman …

                                2. 2

                                  It is irrelevant, and you asking this is offensive.

                                  1. -1

                                    Interesting question. I think it may be met with hostility, as it brings to mind the contradiction inherent in both claiming that sex/gender is arbitrary or constructed and also intentionally emphasizing the achievements of one gender. Based on the subset of my social circle that engages in this kind of thing, these activities are usually highly correlated. Picking one or the other seems to get people labeled as, respectively, some slang variation of “nerd”, or a “TERF”.

                                    1. 34

                                      Can we please not for once? Every time anything similar to this comes up the thread turns into a pissfight over Gender Studies 101. Let’s just celebrate Conway’s contributions and not get into an argument about whether she “counts”.

                                      1. 10

                                        Much as I sympathize, transgender is controversial enough that merely putting a trans person on a list that claims all its members are a specific gender will generate reactions like that, due to a huge chunk of the population not recognizing the gender claim. That will always happen unless the audience totally agrees. So, one will always have to choose between not mentioning them to avoid noise or including them and combating the noise.

                                        1. 20

                                          I would like to live in a world where transgender isn’t controversial and we don’t have to waste energy discussing this. Can Lobsters be that world, please?

                                          1. 18

                                            Perhaps this is why we get accused of pushing some kind of agenda or bringing politics into things, by merely existing/being visible around people who find us “controversial” or start questioning whether our gender is legit or what have you. I usually stay out of such discussions, but sometimes I feel the need to respond to claims about trans folks that I feel come from a place of ignorance rather than bigotry or malice. Most of the time I’m proven wrong, though, and they aren’t really interested in the science or whatever they claim; they just want an excuse to say hateful things about us. I’ve had a better than average experience on this website when it comes to responses.

                                            1. 6

                                              I can’t speak for everyone on the side that denies trans identity. Just my group, I guess. For us and partly for others, the root of the problem is there is a status quo with massive evidence and inertia about how we categorize gender that a small segment are countering in a more subjective way. We don’t think the counters carry the weight of the status quo. We also prefer objective criteria about anything involving biology or human categorization where possible. I know you’ve heard the details, so I’ll spare you that.

                                              That means there will be people objecting every time a case comes up. If it seems mean, remember that there are leftists who will be quick to counter anything they think shouldn’t be tolerated on a forum (e.g. language policing) on their principles. For me, I’m just courteous with the pronouns and such, since it has no real effect on me in most circumstances: I can default to kindness until forced to be more specific by a question or debate happening. Trans people are still people to me. So, I avoid bringing this stuff up as much as possible.

                                              The don’t-rock-the-boat, kinder approach would’ve been for the person rejecting the gender claim to just not talk about the person he or she didn’t think was a woman and focus on others. The thread would’ve stayed on topic. Positive things would be said about deserving people. And so on. Someone had to stir shit up, though. (Sighs)

                                              And I agree Lobsters has handled these things much better than other places. I usually like this community, even on the days it’s irritating. Relatively, at least. ;)

                                              1. 6

                                                For us and partly for others, the root of the problem is there is a status quo with massive evidence and inertia about how we categorize gender that a small segment are countering in a more subjective way.

                                                I know you’re a cool dude and would be more than happy to discuss this with you in private, but I think we all mostly agree that this is now pretty outside the realm of tech, so continuing to discuss it publicly would be getting off topic :) I’ll DM you?

                                                1. 7

                                                  I was just answering a question at this point, as I had nothing else to say. Personally, I’d rather the political topics stay off Lobsters, as I voted in the community guidelines thread. This tangent can’t end soon enough, given how off topic and conflict-creating it is.

                                                  Here’s something for you to try that I did earlier. Just click the minus next to Derek’s comment. This whole thread instantly looks the way it should have in the first place. :)

                                                2. 4

                                                  I find the idea that everyone who disagrees with these things should avoid rocking the boat extremely disconcerting. It feels like a duty to rock it on behalf of those who agree but are too polite or afraid for their jobs or reputations to state their actual opinions, to normalize speaking honestly about uncomfortable topics.

                                                  I mean, I also think it’s on topic to debate the political point made by the list.

                                                  1. 4

                                                    I agree with those points. It’s why I’m in the sub-thread. The disagreement is a practical one a few others are noting:

                                                    “I mean, I also think it’s on topic to debate the political point made by the list.”

                                                    I agree. I told someone that in private, plus said it here in this thread. Whether we want to bring it up, though, should depend on what the goal is. My goal is that the site stays focused on interesting, preferably deep topics, with a pleasant experience and minimal noise. There are political debates and flamewars available all over the Internet, with the experience that’s typical of Lobsters being a rarity. So, I’d just have not brought it up here.

                                                    When someone did, the early response was a mix of people saying it’s off-topic/unnecessary (my side) and a group decreeing their political views as undeniable truth or standards for the forum. Aside from there being no consensus on those views, prior metas on these things showed that even those people believed our standards would be defined by what we spoke for and against, with silence itself being a vote for something. So, a few of us with different views on the political angle, who still opposed the comment, had to speak to ensure the totality of the community was represented. It’s necessary as long as (a) we do politics here and (b) any group intends to make its politics a standard or enforceable rule. Countering that political maneuvering was all I was doing, except for a larger comment where I just answered someone’s question.

                                                    Well, that plus reinforcing that I’m against these political angles being on the site, period, like I vote in metas. You can easily test my hypothesis/preference. Precondition: a site that’s usually low noise with on-topic, productive comments. Goal: identify, discuss, and celebrate the achievements of women on a list or in the comments while maintaining that precondition. Test: count the comments talking about one or more women versus the gender identity of one (aka political views). It’s easier to visualize what my rule would be like if you collapse Derek’s comment tree. The whole thread meets the precondition and goal. You can also assess who is more active on politics than the main topic by adding up who contributed something about an undisputed woman in CompSci and who just talked about the politics. Last I looked, there were more users doing the politics than highlighting women in CompSci. The precondition and goal failed on two measurements early on in the discussion. There are a lot of on-topic comments right now, though, so it has leaned back in a good direction.

                                                    Time and place for everything. I’d rather this stuff stay off Lobsters with me only speaking on it where others force it. It’s not like those interested can’t message each other, set up a gender identity thread on another forum, load up IRC, and so on to discuss it. They’re smart people. There’s many mediums. A few of us here just want one to be better than the rest in quality and focus. That’s all. :) And it arguably was without that comment tree.

                                                  2. 8

                                                    So, I avoid bringing this stuff up as much as possible.

                                                    Keep working on this

                                                    1. 2

                                                      The don’t-rock-the-boat, kinder approach would’ve been for the person rejecting the gender claim to just not talk about the person he or she didn’t think was a woman and focus on others. The thread would’ve stayed on topic. Positive things would be said about deserving people.

                                                      Do you believe the most deserving will be talked about most? If you have a population that talks positively about people whether or not they are trans, and you have a smaller population that talks only about non-trans people and ignores the trans people, which people will be talked about most in aggregate? It isn’t kinder to ignore people and their accomplishments.

                                                      It is also very strange for technology people to reject a technology that changes your gender. What if you had a magic gun and you could be a woman for a day, and then a man the next? Why the hell not? We have a technology now where you can be a man or a woman or neither or both if you want to. Isn’t technology amazing? You tech person, you!

                                          1. 7

                                             At that time, when you turned on your computer, you immediately had a programming language available. Even in the ’90s, there was QBasic installed on almost all PCs. Interpreter and editor in one, so it was very easy to enter the world of programming. Kids could learn it themselves with cheap books and magazines with lots of BASIC program listings. And I think the most important thing - kids were curious about computers. I can see that today, the role of BASIC is taken by Minecraft. I wouldn’t underestimate it as a trigger for a new generation of engineers and developers. Add more physics and more logic into it and it will be an excellent playground like BASIC was in the ’80s.

                                            1. 5

                                               Now we have the Raspberry Pi, Arduino, Python, Scratch, and so many other ways kids can get started.

                                              1. 10

                                                 Right, but at the beginning you have to spend a lot more time showing a kid how to set up everything properly. I admit that that itself is fun, but in the ’80s you just turned the computer on with one switch and the environment was literally READY :)

                                                1. 7

                                                   I think the problem is that back then there was much less competition for kids’ attention. The biggest draw was TV. TV – that played certain shows on a particular schedule, with lots of re-runs. If there was nothing on, but you had a computer nearby, you could escape and unleash your creativity there.

                                                  Today – there’s perpetual phones/tablets/computers and mega-society level connectivity. There’s no time during which they can’t find out what their friends are up to.

                                                  Even for me – to immerse myself in a computer, exploring programming – it’s harder to do than it was ten years ago.

                                                  1. 5

                                                     I admit that that itself is fun, but in the ’80s you just turned the computer on with one switch and the environment was literally READY :)

                                                     We must be using some fairly narrow definition of “the ’80s”, because this is a seriously rose-tinted description of learning to program at the time. By the late ’80s, with the rise of the Mac and Windows, the only way to learn to program involved buying a commercial compiler.

                                                    I had to beg for a copy of “Just Enough Pascal” in 1988, which came with a floppy containing a copy of Think’s Lightspeed Pascal compiler, and retailed for the equivalent of $155.

                                                    Kids these days have it comparatively easy – all the tools are free.

                                                    1. 1

                                                      Windows still shipped with QBasic well into the 90s, and Macs shipped with HyperCard. It wasn’t quite one-click hacking, but it was still far more accessible than today.

                                                    2. 4

                                                       Just open the web tools in your browser and you’ll have an already-configured JavaScript development environment.

                                                      I entirely agree with you on

                                                      And I think the most important thing - kids were curious about computers.

                                                       You don’t need to understand how a computer program is made to use it anymore, which is not necessarily a bad thing.

                                                      1. 4

                                                         That’s still not the same. kred is saying it was the first thing you saw, and you were immediately able to use it. It was also a simple language designed to be easy to learn. Whereas now, you have to go out of your way to get to a JS development environment, on top of learning a complex language and concepts. More complexity. More friction. Less uptake.

                                                         The other issue that’s not addressed enough in these write-ups is that modern platforms have tons of games that treat people as consumers, with psychological techniques to keep them addicted. They also build boxes around their minds where they can feel like they’re creating stuff without learning much in the way of useful, reusable skills, versus the prior generation’s toys. Kids can get the consumer and creator high without doing real creation. So, now they have to ignore that and do the high-friction stuff above to get to the basics of creating that existed for the old generation. Most won’t want to do it because it’s not as fun as their apps and games.

                                                        1. 1

                                                          There is no shortage of programmers now. We are not facing any issues with not enough kids learning programming.

                                                          1. 2

                                                             I didn’t say there was a shortage of programmers. I said most kids were learning computers in a way that trained them to be consumers vs. creators. You’d have to compare what people do on consumer platforms versus things like Scratch to get an idea of what we’re missing out on.

                                                    3. 4

                                                      All of those require a lot more setup than older machines where you flipped a switch and got dropped into a dev environment.

                                                       The Arduino is useless if you don’t have a project, a computer already configured for development, and electronics breadboarding to talk to it. The Raspberry Pi is a weird little circuit board that, until you dismantle your existing computer and hook everything up, can’t do anything – and when you do get it hooked up, you’re greeted with Linux. Python is large, and it’s hard to put images on the screen or make noises with it in a few lines of code.

                                                      Scratch is maybe the closest, but it still has the “what programmers doing education think is simple” problem instead of the “simple tools for programming in a barebones environment that learners can manage”.

                                                      The field of programming education is broken in this way. It’s a systemic worldview problem.

                                                      1. 1

                                                        Those aren’t even close in terms of ease of use.

                                                        My elementary school circa 1988 had a lab full of these Apple IIe systems, and my recollection (I was about 6 years old at the time, so I may be misremembering) is that by default they booted into a BASIC REPL.

                                                        Raspberry Pis and Arduinos are fun, but they’re a lot more complex and difficult to work with.

                                                      2. 3

                                                        I don’t think kids are less curious today, but it’s important to notice that back then, making a really polished program that felt professional only needed a small amount of comparatively simple work - things like prompting for all your inputs explicitly rather than hard-coding them, and making sure your colored backgrounds were redrawn properly after editing.

                                                        To make a polished GUI app today is prohibitive in terms of time expenditure and diversity of knowledge needed. The web is a little better, but not by much. So beginners are often left with a feeling that their work is inadequate and not worth sharing. The ones who decide to be okay with that and talk about what they’ve done anyway show remarkable courage - and they’re pretty rare.

                                                        Also, of course, back then there was no choice of which of the many available on-ramps to start with. You learned the language that came with your computer, and if you got good enough maybe you learned assembly or asked your parents to save up and buy you a compiler. Today, as you say, things like Minecraft are among the options. As common starting points I’d also like to mention Node and PHP, both ecosystems which owe a lot of their popularity to their efforts to reduce the breadth of knowledge needed to build end-to-end systems.

                                                        But in addition to being good starting points, those ecosystems have something else in common - there are lots of people who viscerally hate them and aren’t shy about saying so. A child just starting out is going to be highly intimidated by that, and feel that they have no way to navigate whether the technical considerations the adults are yelling about are really that important or not. In a past life, I taught middle-school, and it gave me an opportunity to watch young people being pushed away by cultural factors despite their determination to learn. It was really disheartening.

                                                        Navigating the complicated choices of where to start learning is really challenging, no matter what age you are. But for children, it’s often impossible, or too frightening to try.

                                                        I agree with what I took to be your main point, that if those of us who learned young care about helping the next generation to follow in our footsteps, we should meet them where they are and make sure to build playgrounds that they can enjoy with or without a technical understanding. But my real prediction is that the cultural factors are going to continue to be a blocker, and programming is unlikely to again be a thing that children have widespread mastery of in the way that it was in the 80s. It’s really very saddening.

                                                      1. 10

                                                        Some of us miss native desktop applications that worked well. It’s tragic that desktop platforms are utterly non-interoperable and require near-complete duplication of every app. But at the same time not everyone is satisfied with the solution of “build them all as electron apps starting with a cross-platform browser base plus web technology for the UI”. I can sympathize with app developers who in no way want to sign up to build for 2 or 3 platforms, but I feel like berating dissatisfied users is unjust here. Try comparing a high quality native macOS app like Fantastical with literally any other approach to calendar software: electron, web, java, whatever. Native works great, everything else is unbearable.

                                                        1. 8

                                                          I think people are just tired of seeing posts like Electron is cancer every other day. Electron is here, people use it, and it solves a real problem. It would be much more productive to talk about how it can be improved in terms of performance and resource usage at this point.

                                                          1. 2

                                                            One wonders if it really can be improved all that much. It seems like the basic model has a lot of overhead that’s pretty much baked in.

                                                            1. 2

                                                               There’s a huge opening in the space for something Electron-like, which doesn’t have the “actual browser” overhead. I’m certain this is a research / marketing / exposure problem more than a technical one (in that there has to be something that would work better that we just don’t know about, because it’s sitting unloved in a repo with 3 watchers somewhere).

                                                              Cheers!

                                                              1. 2

                                                                There’s a huge opening in the space for something Electron-like, which doesn’t have the “actual browser” overhead.

                                                                Is there? Electron’s popularity seems like it’s heavily dependent on the proposition “re-use your HTML/CSS and JS from your web app’s front-end” rather than on “here’s a cross-platform app runtime”. We’ve had the latter forever, and they’ve never been that popular.

                                                                I don’t know if there’s any space for anything to deliver the former while claiming it doesn’t have “actual browser” overhead.

                                                                1. 1

                                                                  “re-use your HTML/CSS and JS from your web app’s front-end”

                                                                  But that’s not what’s happening here at all - we’re talking about an application that’s written from the ground up for this platform, and will never ever be used in a web-app front end. So, toss out the “web-app” part, and you’re left with HTML/DOM as a tree-based metaphor for UI layout, and a javascript runtime that can push that tree around.

                                                                  I don’t know if there’s any space for anything to deliver the former while claiming it doesn’t have “actual browser” overhead.

                                                                   There’s a lot more to an “actual browser” than a JS runtime, DOM, and canvas: does an application platform need to support all the media codecs and image formats, including all the DRM stuff? Does it need always-on, compiled-in OpenGL contexts and networking and legacy CSS support, etc.?

                                                                  I’d argue that “re-use your HTML/CSS/JS skills and understanding” is the thing that makes Electron popular, more so than “re-use your existing front end code”, and we might get a lot further pushing on that while jettisoning webkit than arguing that everything needs to be siloed to the App Store (or Windows Marketplace, or whatever).

                                                                  1. 2

                                                                    But that’s not what’s happening here at all - we’re talking about an application that’s written from the ground up for this platform, and will never ever be used in a web-app front end. So, toss out the “web-app” part, and you’re left with HTML/DOM as a tree-based metaphor for UI layout, and a javascript runtime that can push that tree around.

                                                                    Huh? We’re talking about people complaining that Electron apps are slow, clunky, non-native feeling piles of crap.

                                                                    Sure, there are a couple of outliers like Atom and VSCode that went that way for from-scratch development, but most of the worst offenders that people complain about are apps like Slack, Todoist, Twitch – massive power, CPU, and RAM sucks for tiny amounts of functionality that are barely more than app-ized versions of a browser tab.

                                                                    “Electron is fine if you ignore all of the bad apps using it” is a terribly uncompelling argument.

                                                                    1. 1

                                                                      Huh? We’re talking about people complaining that Electron apps are slow, clunky, non-native feeling piles of crap.

                                                                      Sure, there are a couple of outliers like Atom and VSCode that went that way for from-scratch development, but most of the worst offenders that people complain about are apps like Slack, Todoist, Twitch – massive power, CPU, and RAM sucks for tiny amounts of functionality that are barely more than app-ized versions of a browser tab.

                                                                      “Electron is fine if you ignore all of the bad apps using it” is a terribly uncompelling argument.

                                                                      A couple things:

                                                                      1. Literally no one in this thread up til now has mentioned any of Slack/Twitch/Todoist.
                                                                      2. “Electron is bad because some teams don’t expend the effort to make good apps” is not my favorite argument.

                                                                      I think it’s disingenuous to say “there can be no value to this platform because people write bad apps with it.”

                                                                      There are plenty of pretty good or better apps, as you say: Discord, VSCode, Atom with caveats.

                                                                      And there are plenty of bad apps that are native: I mean, how many shitty apps are in the Windows Marketplace? Those are all written “native”. How full is the App Store of desktop apps that are poorly designed and implemented, despite being written in Swift?

                                                                      Is the web bad because lots of people write web apps that don’t work very well?

                                                                      I’m trying to make the case that there’s value to Electron, despite (or possibly due to!) its “not-nativeness”, not defending applications which, I agree, don’t really justify their own existence.

                                                                      Tools don’t kill people.

                                                                    2. 1

                                                                      we’re talking about an application that’s written from the ground up for this platform, and will never ever be used in a web-app front end.

                                                                       I’m really not an expert in the matter, just genuinely curious from my ignorance: why not? If it is HTML/CSS/JS code and it’s already working, why not just upload it as a webapp as well? I always wondered why there is no such thing as an Atom webapp. Is it because it would take too long to load? The logic and frontend are already there.

                                                                      1. 2

                                                                        I’m referring to Atom, Hyper, Visual Studio Code, etc. here specifically.

                                                                        I don’t think there’s any problem with bringing your front end to desktop via something like Electron. I do it at work with CEFSharp in Windows to support a USB peripheral in our frontend.

                                                                         If it is HTML/CSS/JS code and it’s already working, why not just upload it as a webapp as well?

                                                                         I think the goal with the web platform is that you could - see APIs for device access, workers, etc. At the moment, platforms like Electron exist to allow native access to things you couldn’t have otherwise; that feels like an implementation detail to me, and may not be the case forever.

                                                                        no such thing as an Atom webapp

                                                                        https://aws.amazon.com/cloud9/

                                                                         These things exist; the browser is just not a great place for them currently, because of the restrictions we have to put on things for security, performance, etc. But getting to that point is one view of forward progress, and one that I subscribe to.

                                                                2. 1

                                                                  I can think of a number of things that could be done off the top of my head. For example, the runtime could be modularized. This would allow only loading parts that are relevant to a specific application. Another thing that can be done is to share the runtime between applications. I’m sure there are plenty of other things that can be done. At the same time, a lot can be done in applications themselves. The recent post on the Atom development blog documents a slew of optimizations and improvements.

                                                              2. 4

                                                                It’s tragic that desktop platforms are utterly non-interoperable and require near-complete duplication of every app.

                                                                 It’s a necessary sacrifice if you want apps that are and feel truly native and belong on the platform; a cross-platform Qt or (worse) Swing app is better than Electron, but still inferior to an app with a UI designed specifically for the platform and its ideals, HIG, etc.

                                                                1. 1

                                                                  If we were talking about, say, a watch vs a VR system, then I understand “the necessary sacrifice” - the two platforms hardly have anything in common in terms of user interface. But desktops? Most people probably can’t even tell the difference between them! The desktop platforms are extremely close to each other in terms of UI, so I agree that it’s tragic to keep writing the same thing over and over.

                                                                  I think it’s an example of insane inefficiency inherent in a system based on competition (in this case, between OS vendors), but that’s a whole different rabbit hole.

                                                                  1. 2

                                                                     I am not a UX person and spend most of my time in a Terminal, Emacs and Firefox, but I don’t think modern GUIs on Linux (Gnome), OS X and Windows have that much in common. All of them have windows and a bunch of similar widgets, but the conventions of what goes where can be quite different. That most people can’t tell does not mean much, because most people can’t tell the difference between a native app and an Electron one either. They just feel the difference if you put them on another platform. Just look how disoriented many pro users are if you give them a machine with one of the other major systems.

                                                                    1. 1

                                                                      I run Window Maker. I love focus-follows-mouse, where a window can be focused without being on top, which is anathema to MacOS (or macOS or whatever the not-iOS is called this week) and not possible in Windows, either. My point is, there are enough little things (except focus-follows-mouse is hardly little if that’s what you’re used to) which you can’t paper over and say “good enough” if you want it to be good enough.

                                                                  2. 2

                                                                    It’s tragic that desktop platforms are utterly non-interoperable and require near-complete duplication of every app.

                                                                     There is a huge middle ground between shipping a web browser and duplicating code. Unfortunately that requires people to acknowledge something they’ve spent a lot of time working to ignore.

                                                                     Basically, C is very cross-platform. This is heresy but true. I’m actually curious: can anyone name a platform where Python or JavaScript run but C doesn’t?

                                                                    UI libraries don’t need to be 100% of your app. If you hire a couple software engineers they can show you how to create business logic interfaces that are separate from the core services provided by the app. Most of your app does not have to be UI toolkit specific logic for displaying buttons and windows.
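
                                                                     As a sketch of that separation (all names here are made up for the example, not taken from any real product): the core is plain portable C behind a small header, and each platform’s UI layer only calls through it.

                                                                     /* calendar_core.h -- hypothetical platform-neutral interface.
                                                                      * The core is plain portable C; only the thin UI layer per
                                                                      * platform differs. */
                                                                     #ifndef CALENDAR_CORE_H
                                                                     #define CALENDAR_CORE_H

                                                                     #include <stddef.h>
                                                                     #include <time.h>

                                                                     typedef struct cal_store cal_store;   /* opaque handle, defined in the core */

                                                                     typedef struct {
                                                                         time_t start;
                                                                         time_t end;
                                                                         char   title[128];
                                                                     } cal_event;

                                                                     /* Pure business logic: no UI toolkit types leak through this boundary. */
                                                                     cal_store *cal_open(const char *path);
                                                                     int        cal_add_event(cal_store *s, const cal_event *ev);
                                                                     size_t     cal_events_on_day(const cal_store *s, time_t day,
                                                                                                  cal_event *out, size_t max_out);
                                                                     void       cal_close(cal_store *s);

                                                                     #endif /* CALENDAR_CORE_H */

                                                                     The Cocoa, Win32, or GTK front end then only does rendering and event handling against this boundary, so the code that actually has to differ per platform stays small.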

                                                                     Source: I was on a team that shipped a cross-platform caching/network filesystem. It was a few years back, but the portion of our code that had to vary between Linux/OS X/Windows was not that big. Also, writing in C opened the door for shared business logic (API client code) on OS X/Linux/Windows/iOS/Android.

                                                                     Electron works because the web technologies have a low barrier to entry. That’s not always a bad thing. I’m not trying to be a troll and say web developers aren’t real developers, but in my experience, as someone who started out as a web developer, there are a lot of really bad ones, because you start your path with a bit of HTML and some copy-pasted JavaScript from the web.

                                                                    1. 1

                                                                      There’s nothing heretical about saying C is cross-platform. It’s also too much work for too little gain when it comes to GUI applications most of the time. C is a systems programming language, for software which must run at machine speed and/or interface with low-level machine components. Writing the UI in C is a bad move unless it’s absolutely forced on you by speed constraints.

                                                                    2. 1

                                                                      It’s tragic that desktop platforms are utterly non-interoperable and require near-complete duplication of every app.

                                                                      ++ Yes!

                                                                      Try comparing a high quality native macOS app like Fantastical with literally any other approach to calendar software: electron, web, java, whatever. Native works great, everything else is unbearable.

                                                                       Wait, what? I think there are two different things here. Is Fantastical a great app because it’s written in native Cocoa and ObjC (or Swift), or is it great because it’s been well designed, well implemented, meets your specific user needs, etc.? Are those things orthogonal?

                                                                      I think it’s easy to shit on poorly made Electron apps, but I think the promise of crossplatform UI - especially for tools like Atom or Hyper, where “native feeling” UI is less of a goal - is much too great to allow us to be thrown back to “only Windows users get this”, even if it is “only OS X users get this” now.

                                                                      It’s a tricky balancing act, but as a desktop Linux user with no plans to go back, I hope that we don’t give up on it just because it takes more work.

                                                                      Cheers!


                                                                      PS: Thanks for the invite, cross posted my email response if that’s ok :)

                                                                      1. 2

                                                                         Wait, what? I think there are two different things here. Is Fantastical a great app because it’s written in native Cocoa and ObjC (or Swift), or is it great because it’s been well designed, well implemented, meets your specific user needs, etc.? Are those things orthogonal?

                                                                        My personal view is that nothing is truly well designed if it doesn’t play well and fit in with other applications on the system. Fantastical is very well designed, and an integral part of that great design is that it effortlessly fits in with everything else on the platform.

                                                                        “Great design” and “native” aren’t orthogonal; the latter is a necessary-but-not-sufficient part of the former.

                                                                        1. 1

                                                                          “Great design” and “native” aren’t orthogonal; the latter is a necessary-but-not-sufficient part of the former.

                                                                           Have to agree to disagree here, I guess. I definitely can believe that there can be well-designed, not-native application experiences, but I think that depends on the success and ‘well-designed-ness’ of the platform you’re talking about.

                                                                          As part of necessary background context, I run Linux on my laptop, with a WM (i3) rather than a full desktop manager, because I really didn’t like the design and cohesiveness of Gnome and KDE the last time I tried a full suite. Many, many apps that could have been well designed if they weren’t pushed into a framework that didn’t fit them.

                                                                          I look at Tomboy vs. Evernote as a good example. Tomboy is certainly well integrated, and feels very native in a Gnome desktop, and yet if put next to each other, Evernote is going to get the “well-designed” cred, despite not feeling native on really any platform it’s on.

                                                                          Sublime Text isn’t “native” to any of the platforms it runs on either.

                                                                          Anyway, I feel like I’m losing the thread of discussion, and I don’t want to turn this into “App A is better than App B”, so I’ll say that I think I understand a lot of the concerns people have with Electron-like platforms better than I did before, and thank you for the conversation.

                                                                          Cheers!

                                                                    1. 25

                                                                      I used to do the things listed in this article, but very recently I’ve changed my mind.

                                                                      The answer to reviewing code you don’t understand is you say “I don’t understand this” and you send it back until the author makes you understand in the code.

                                                                      I’ve experienced too much pain from essentially rubber-stamping with a “I don’t understand this. I guess you know what you’re doing.” And then again. And again. And then I have to go and maintain that code and, guess what, I don’t understand it. I can’t fix it. I either have to have the original author help me, or I have to throw it out. This is not how a software engineering team can work in the long-term.

                                                                      More succinctly: any software engineering team is upper-bound architecturally by the single strongest team member (you only need one person to get the design right) and upper-bound code-wise by the single weakest/least experience team member. If you can’t understand the code now, you can bet dollars to donuts that any new team member or new hire isn’t going to either (the whole team must be able to read the code because you don’t know what the team churn is going to be). And that’s poison to your development velocity. The big mistake people make in code review is to think the team is bound by the strongest team member code-wise too and defer to their experience, rather than digging in their heels and say “I don’t understand this.”

                                                                      The solution to “I don’t understand this” is plain old code health. More functions with better names. More tests. Smaller diffs to review. Comments about the edge cases and gotchas that are being worked around but you wouldn’t know about. Not thinking that the code review is the place to convince the reviewer to accept the commit because no-one will ever go back to the review if they don’t understand the code as an artifact that stands by itself. If you don’t understand it as a reviewer in less than 5 minutes, you punt it back and say “You gotta do this better.” And that’s hard. It’s a hard thing to say. I’m beginning to come into conflict about it with other team members who are used to getting their ungrokkable code rubber stamped.

                                                                      But code that isn’t understandable is a failure of the author, not the reviewer.

                                                                      1. 7

                                                                        More succinctly: any software engineering team is upper-bound architecturally by the single strongest team member (you only need one person to get the design right) and upper-bound code-wise by the single weakest/least experience team member.

                                                                        Well put – hearing you type that out loud makes it incredibly apparent.

                                                                         Anywhoo, I think your conclusion isn’t unreasonable (sometimes you gotta be the jerk) but the real problem is upstream. It’s a huge waste when bad code makes it all the way to review and then needs to be written again; much better would be to head it off at the pass. Pairing up the weaker / more junior software engineers with the more experienced works well, but is easier said than done.

                                                                        1. 4

                                                                          hmm, you make a good point and I don’t disagree. Do you think the mandate on the author to write understandable code becomes weaker when the confusing part is the domain, and not the code itself? (Although I do acknowledge that expressive, well-structured and well-commented code should strive to bring complicated aspects of the problem domain into the picture, and not leave it up to assumed understanding.)

                                                                          1. 3

                                                                            I think your point is very much applicable. Sometimes it takes a very long time to fully understand the domain, and until you do, the code will suffer. But you have competing interests. For example, at some point, you need to ship something.

                                                                            1. 2

                                                                              Do you think the mandate on the author to write understandable code becomes weaker when the confusing part is the domain, and not the code itself?

                                                                              That’s a good question.

                                                                               In the very day-to-day, I don’t personally find that code reviews have a problem at the domain level. Usually I would expect/hope that there’s a design doc, or package doc, or something, that explains things. I don’t think we should expect software engineers to know how a carburetor works in order to create models for a car company; the onus is on the car company to provide the means to find out how the carburetor works.

                                                                               I think it gets much trickier when the domain is actually computer-science based, as we kind of just all resolved that there are people who know how networks work and they write networking code, there are people who know how kernels work and they write kernel code, etc. We don’t take the time to do the training and assume that if someone wants to know about it, they’ll learn it themselves. In that instance, I would hope the reviewer is also a domain expert, but on small teams that probably isn’t viable.

                                                                              And like @burntsushi said, you gotta ship sometimes and trust people. But I think the pressure eases as the company grows.

                                                                              1. 1

                                                                                 That makes sense. I think you’ve surfaced an assumption baked into the article which I wasn’t aware of, having only worked at small companies with lots of surface area. But I see how it comes across as particularly troublesome advice outside of that context.

                                                                            2. 4

                                                                              I’m beginning to come into conflict about it with other team members

                                                                              How do you resolve those conflicts? In my experience, everyone who opens a PR review finds their code to be obvious and self-documenting. It’s not uncommon to meet developers lacking the self-awareness required to improve their code along the lines of your objections. For those developers, I usually focus on quantifiable metrics like “it doesn’t break anything”, “it’s performant”, and “it does what it’s meant to do”. Submitting feedback about code quality often seems to regress to a debate over first principles. The result is that you burn social capital with the entire team, especially when working on teams without a junior-senior hierarchy, where no one is a clear authority.

                                                                              1. 2

                                                                                Not well. I don’t have a good answer for you. If someone knows, tell me how. If I knew how to simply resolve the conflicts I would. My hope is that after a while the entire team begins to internalize writing for the lowest common denominator, and it just happens and/or the team backs up the reviewer when there is further conflict.

                                                                                But that’s a hope.

                                                                                1. 2

                                                                                   It’s not uncommon to meet developers lacking the self-awareness required to improve their code along the lines of your objections. For those developers, I usually focus on quantifiable metrics like “it doesn’t break anything”, “it’s performant”, and “it does what it’s meant to do”. Submitting feedback about code quality often seems to regress to a debate over first principles.

                                                                                  Require sign-off from at least one other developer before they can merge, and don’t budge on it – readability and understandability are the most important issues. In 5 years people will give precisely no shits that it ran fast 5 years ago, and 100% care that the code can be read and modified by usually completely different authors to meet changing business needs. It requires a culture shift. You may well need to remove intransigent developers to establish a healthier culture.

                                                                                  The result is that you burn social capital with the entire team, especially when working on teams without a junior-senior hierarchy, where no one is a clear authority.

                                                                                  This is a bit beyond the topic at hand, but I’ve never had a good experience in that kind of environment. If the buck doesn’t stop somewhere, you end up burning a lot of time arguing and the end result is often very muddled code. Even if it’s completely arbitrary, for a given project somebody should have a final say.

                                                                                  1. 1

                                                                                    The result is that you burn social capital with the entire team, especially when working on teams without a junior-senior hierarchy, where no one is a clear authority.

                                                                                    This is a bit beyond the topic at hand, but I’ve never had a good experience in that kind of environment. If the buck doesn’t stop somewhere, you end up burning a lot of time arguing and the end result is often very muddled code. Even if it’s completely arbitrary, for a given project somebody should have a final say.

                                                                                    I’m not sure.

                                                                                     At the very least, when no agreement is found, the authorities should document very carefully and clearly why they made a certain decision. When this happens, everything goes smoothly.

                                                                                     In a few cases, I saw a really seasoned authority change his mind while writing down this kind of document, and end up choosing the most junior dev’s proposal. I’ve also seen a younger authority faking a LARGE project just because he took any objection as a personal attack. When the doom came (with literally hundreds of thousands of euros wasted), he kindly left the company.

                                                                                     Also, I’ve seen a team of 5 people work very well together for a few years despite daily debates. All the debates were respectful and technically rooted. I was junior back then, but my opinions were treated on par with those of more senior colleagues. And we were always looking for syntheses, not compromises.

                                                                                2. 2

                                                                                  I agree with the sentiment to an extent, but there’s something to be said for learning a language or domain’s idioms, and honestly some things just aren’t obvious at first sight.

                                                                                  There’s “ungrokkable” code as you put it (god knows i’ve written my share of that) but there’s also code you don’t understand because you have had less exposure to certain idioms, so at first glance it is ungrokkable, until it no longer is.

                                                                                   If the reviewer doesn’t know how to map over an array, no amount of them telling me they don’t understand will make me push to a new array inside a for-loop. I would rather spend the time sitting down with people and trying to level everyone up.

                                                                                   To give a concrete personal example, there are still plenty of usages of spreading and de-structuring in JavaScript that trip me up when I read them quickly. But I’ll build up a tolerance to them, and soon they won’t.
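
                                                                                   For instance, here is a generic TypeScript sketch of the two kinds of idiom in question (the values and names are made up purely for illustration):

                                                                                       // Mapping over an array vs. pushing into a new array inside a for-loop:
                                                                                       // same behaviour, but the first is the idiom a reviewer should be expected to know.
                                                                                       const prices: number[] = [100, 250, 400];

                                                                                       const withTax = prices.map(price => price * 1.2);

                                                                                       const withTaxLoop: number[] = [];
                                                                                       for (let i = 0; i < prices.length; i++) {
                                                                                         withTaxLoop.push(prices[i] * 1.2);
                                                                                       }

                                                                                       // Spread and destructuring: compact once you have built up a tolerance,
                                                                                       // genuinely easy to misread at first glance.
                                                                                       const defaults = { currency: "EUR", locale: "de-DE" };
                                                                                       const overrides = { locale: "en-GB" };

                                                                                       // Merge objects; later spreads win, so `config.locale` is "en-GB".
                                                                                       const config = { ...defaults, ...overrides };

                                                                                       // Pull out `locale`, rename `currency` to `currencyCode`, collect the rest.
                                                                                       const { locale, currency: currencyCode, ...rest } = { ...config, region: "EU" };

                                                                                   Neither style is wrong; the difference is just how much exposure the reader has had to the shorthand.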

                                                                                1. 7

                                                                                  I’m seeing a lot of people downplay Spectre compared to Meltdown – basically, the common claim seems to be that patching for Meltdown is super important but people shouldn’t worry too much about Spectre because it’s “hard to exploit in practice”.

                                                                                  This is the third working proof-of-concept in a VM or sandbox I’ve seen since yesterday.

                                                                                  1. 3

                                                                                    The base PoC mentioned here involves a very specifically crafted function within the same process as the attacker function.

                                                                                    Unlike Meltdown, Spectre is a class of attack which requires some pretty specific style of code path to be present in the victim process, and for that code path to be somewhat controllable from outside processes. It’s a really high bar!
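
                                                                                     For reference, the widely published Spectre variant 1 “gadget” gives a feel for how specific that code path has to be. Roughly, as a hedged TypeScript sketch (names invented; a real PoC additionally needs branch-predictor training, cache eviction, and a high-resolution timer, none of which are shown here):

                                                                                         // Hypothetical victim-style function, illustrative only. The bounds check is
                                                                                         // architecturally correct, but a mistrained branch predictor can speculatively
                                                                                         // run the body with an out-of-range index; the dependent access into `probe`
                                                                                         // then leaves a cache footprint keyed on the out-of-bounds byte, which a
                                                                                         // timing side channel can later recover.
                                                                                         const publicData = new Uint8Array(16);      // data the caller is allowed to read
                                                                                         const probe = new Uint8Array(256 * 4096);   // side-channel "probe" array

                                                                                         function victim(index: number): number {
                                                                                           if (index < publicData.length) {          // branch that may be speculated past
                                                                                             const value = publicData[index];        // speculative out-of-bounds read
                                                                                             return probe[value * 4096];             // which cache line is touched depends on `value`
                                                                                           }
                                                                                           return 0;
                                                                                         }

                                                                                     Unless the victim already contains something shaped like this, and the attacker can both steer the index and time the cache afterwards, variant 1 has nothing to latch onto.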

                                                                                     Though it’s a bit of a nightmare for sandboxing, since it gives easy ways to read a process’s own memory. To be honest, though, that’s probably more evidence of the futility of sandboxing in general than anything else. But cross-process attacks are tricky.

                                                                                    (The biggest danger I saw from Spectre was the possibility of exploitable code being in shared libraries, because that makes it a lot easier for the attacker process to poison the well).

                                                                                  1. 3

                                                                                    AMD claims “zero vulnerability due to AMD architecture differences”, but without any explanation. Could someone enlighten us about this?

                                                                                    1. 10

                                                                                      AMD’s inability to generate positive PR from this is really an incredible accomplishment for their fabled PR department.

                                                                                      1. 7

                                                                                        The spectre PoC linked elsewhere in this thread works perfectly on my Ryzen 5. From my reading, it sounds like AMD processors aren’t susceptible to userspace reading kernelspace because the cache is in some sense protection-level-aware, but the speculative-execution, cache-timing one-two punch still works.

                                                                                        1. 4

                                                                                           From reading the Google paper on this, it’s not quite true but not quite false. According to Google, AMD and ARM are vulnerable to a specific, limited form of Spectre. They’re not susceptible to Meltdown. The Google Spectre PoCs for AMD and ARM aren’t successful in accessing beyond the user’s memory space, so it’s thought that while the problem exists in some form, it doesn’t lead to compromise as far as we currently know.

                                                                                          1. 2

                                                                                            aren’t successful in accessing beyond the user’s memory space so … it doesn’t lead to compromise as far as we currently know.

                                                                                            Well, no compromise in the sense of breaking virtualization boundaries or OS-level protection boundaries, but still pretty worrying for compromising sandboxes that are entirely in one user’s memory space, like those in browsers.

                                                                                          2. 4

                                                                                            I just found this in a Linux kernel commit:

                                                                                            AMD processors are not subject to the types of attacks that the kernel page table isolation feature protects against. The AMD microarchitecture does not allow memory references, including speculative references, that access higher privileged data when running in a lesser privileged mode when that access would result in a page fault.

                                                                                            1. 4

                                                                                              Which is a much stronger statement than in the AMD web PR story. Given that it is AMD, I would not be surprised if their design does not have the problem but their PR is unable to make that clear.

                                                                                            2. 2

                                                                                              AMD is not vulnerable to Meltdown, an Intel-specific attack.

                                                                                              AMD (and ARM, and essentially anything with a speculative execution engine on the planet) is vulnerable to Spectre.

                                                                                            1. 7

                                                                                              Clojure. It felt like the natural progression, especially since I was interested in diving deeper into FP. Now I can’t not love s-exps and structural editing, as well as even more powerful meta-programming.

                                                                                              (Also notable that I saw Russ Olsen, author of Eloquent Ruby, moved to Clojure, and now works for Cognitect.)

                                                                                              1. 3

                                                                                                I’m really interested in Clojure, but compared to Ruby there seems to be an order of magnitude fewer jobs out there for it.

                                                                                                I can’t swing a dead cat without seeing 4 or 5 people a week looking for senior Rubyists. I’ve seen maybe 2 major Clojure opportunities in the last 6 months.

                                                                                                1. 4

                                                                                                  I can’t swing a dead cat without seeing 4 or 5 people a week looking for senior Rubyists.

                                                                                                  What’s been your success rate when bringing carrion to job fairs?

                                                                                                  1. 1

                                                                                                    The way the local job market is, I doubt it’d damage my chances that much.

                                                                                                2. 1

                                                                                                  Clojure is absolutely great and so is Russ. He still loves Ruby (as well) though :)

                                                                                                  I still maintain that one of the best books I ever read for my coding skills is Functional Programming Patterns in Scala and Clojure.

                                                                                                   Clojure never really got me personally. I would have liked it, but the weirdly short names, friends telling me that for libs tests are considered more “optional”, and other things were ultimately a bit off-putting to me. Still wouldn’t say no, just switched my focus :)

                                                                                                  1. 3

                                                                                                     Tests are definitely not considered optional by the Clojure community. However, you’re likely to see a lot fewer tests in Clojure code than in Ruby.

                                                                                                    There are two major reasons for this in my experience. First reason is that development is primarily done using an editor integrated REPL as seen here. Any time you write some code, you can run it directly from the editor to see exactly what it’s doing. This is a much tighter feedback loop than TDD. Second reason is that functions tend to be pure, and can be tested individually without relying on the overall application state.

                                                                                                    So, most testing in libraries tends to be done around the API layer. When the API works as expected, that necessarily tests that all the downstream code is working. It’s worth noting that Clojure Spec is also becoming a popular way to provide specification and do generative testing.

                                                                                                1. 7

                                                                                                  Really? 24 comments already and no one has even mentioned the phrase responsible disclosure?

                                                                                                  Bugs are bugs. No one can claim Apple deliberately shipped this behaviour. Yes it should have been caught but there is no malicious intent.

                                                                                                  This fucking clown knowingly publicises a vulnerability with instructions on fucking Twitter.

                                                                                                  This is 0% about security and 100% about some jackass getting his 15 minutes of fame. “Software Craftsman” my ass.

                                                                                                  1. 16

                                                                                                     Over on the Orange Site, half the people are having to explain to the other half what responsible disclosure even is. I wouldn’t be terribly surprised if this guy, who shows no sign of being familiar with standard security community practices, just didn’t know that there was a standard practice for disclosure, especially when these days the only way to get half-decent tech support is to publicly complain on Twitter.

                                                                                                     Yes, he should have done the research on responsible disclosure. But if we’re willing to, say, extend the benefit of the doubt to the richest tech company on the planet, we should extend the benefit of the doubt to a random guy who stumbled into this bug.

                                                                                                    1. 1

                                                                                                      Oh believe me, I saw the comments in that other place.

                                                                                                      And I agree the guy is clearly not a security researcher or anything close to it. But even so - he clearly understood the ramifications of the bug, heck he could have just stopped typing at the first full stop (period) and hit “Send Tweet”.

                                                                                                      Apple Support would have responded, asking for more details via DM.

                                                                                                       I understand you want to give him the benefit of the doubt, but I find it hard to believe that anyone who understands how powerful a local root account is didn’t comprehend the danger of publicising an exploit to the fucking world.

                                                                                                      As I said. He wants his 15 minutes of fame.

                                                                                                    2. 8

                                                                                                      Responsible disclosure is a “truthy” phrase like “responsible encryption”. It sounds good on the visceral level but once you unpack the arguments, not so much.

                                                                                                      Full disclosure is actually preferred by a lot of security people, because - especially in the case of a very simple bug - you never know who knows about the security issue already.

                                                                                                      In this case, before it was posted on twitter someone already mentioned this issue two weeks ago on apple’s developer forum and I find it hard to believe that adversaries do not already have pretty thorough test suites running against popular operating systems to discover vulnerabilities like this.

                                                                                                      Apple should also have a bug bounty program covering macOS. Reporting a vulnerability is a long and painful* task, where a security researcher is playing project manager for months with oftentimes unfriendly organizations. I perfectly appreciate the argument that people who discover vulnerabilities are under no obligation to spend a lot of time and effort helping random companies fix security issues. That’s where a bug bounty should come in.

                                                                                                      *I regularly watch security researchers ask their twitter followers to help them get in touch with organization x or y, after they already exhausted pretty much every other avenue of contacting people in an org who can deal with a security issue. See this for an example from today.

                                                                                                      (Disclaimer: I’m blue team infosec)

                                                                                                      1. 3

                                                                                                        Full disclosure is actually preferred by a lot of security people, because - especially in the case of a very simple bug - you never know who knows about the security issue already.

                                                                                                        I would seriously question the judgement or motivations of any researcher who believed that. You may not know who knows about the issue already, but following disclosure, you can be extremely confident that a whole lot more people know. If your goal is to minimize harm to users, letting a whole bunch of attackers know about a vulnerability before the vendor has a chance to patch it is not a rational route to that goal.

                                                                                                        1. 4

                                                                                                           More people know about it, therefore they can mitigate the impact even before a vendor patch is out. Once a vulnerability is publicly known, vendors usually react faster too. Compare that to a months-to-years-long process where some people know, no one knows to what extent blackhats know about it (they are heavily incentivised against disclosing what they know), and the people who would be impacted by a given issue generally don’t.

                                                                                                           For example, in the recent Infineon RSA vulnerability, the >6 month timeline put people relying on that faulty library at risk.

                                                                                                        2. 2

                                                                                                          I would totally buy this argument in general, but for Apple it’s not that hard. I did a search for “Apple security” and literally the first result had an email for the security team and even a PGP key you could use if you wanted. No excuses for this guy, he put lots of people at risk and acted like an ass for no reason.

                                                                                                        3. 3

                                                                                                          Responsible disclosure is for security researchers. When your bug is so bad that someone who has no idea what they’re doing finds it they’re just going to tell their friends, or maybe even just exploit it themselves quietly. We’re frankly lucky he decided to shout that out to the whole world, and that everyone didn’t think it was a prank.

                                                                                                          1. 0

                                                                                                            Right, like driving safely is only for bus drivers and safe sex is only for hookers?

                                                                                                            1. 1

                                                                                                              The difference is that driving safely and safe sex have personal implications. With responsible disclosure there really are absolutely no consequences for not doing it. So don’t expect people to do it, and definitely don’t depend on it.

                                                                                                          2. 2

                                                                                                             If Apple quietly fixed it without fanfare, Mac customers wouldn’t realize they bought from a shit company. That’s especially important since Apple has been hitting the security angle. If they can’t get this right, what chance in hell do they have of getting a neural-net-based face-detection-as-password system right?

                                                                                                            1. 2

                                                                                                              Apple has been ‘hitting’ the privacy angle.

                                                                                                              But frankly, I say bullshit to your entire premise. Responsible disclosure doesn’t mean:

                                                                                                              you can’t say shit about this

                                                                                                              It means (assuming the vendor responds and is cooperative):

                                                                                                              You can’t tell the world about this until an agreed upon date, before which the vendor will distribute a patch to users

                                                                                                              Once the embargo date passes, you can make any manner of publication about the bug you like.

                                                                                                              What this guy did is just fucking dickish.

                                                                                                              1. 7

                                                                                                                He’s not a security researcher, he’s just some dude. I don’t understand what incentive he has, or how he would even know to do what you’re saying. Just be grateful he didn’t sell it on the black market.

                                                                                                                1. 1

                                                                                                                   The responsible thing to do is publish it as soon as possible with a known workaround. Since users can protect themselves without waiting for Apple to fix the issue, they should know about it NOW. In this case the known workaround is to set a password for the root account. Another workaround is to never let anyone near your computer until it’s fixed.

                                                                                                                  Users can fix it themselves, they need to know NOW.

                                                                                                                   In regards to my premise, I disagree. In most cases users can take actions to protect themselves. It is irresponsible for the security community to keep these private to “protect the weak and useless user”.

                                                                                                                  These embargoes serve only the purpose of minimizing embarrassment and cost to the bottom line. “Look, we screwed up a month ago but fixed it, so don’t worry” doesn’t really bring about the same anger as “Look, you are exposed now, protect yourself by doing X, we are working on a fix.”, which brings less anger than “Hey guys, Apple has this huge issue and they didn’t even know about it”. The first costs Apple little. The second costs Apple more. The third costs Apple the most. Which do you think will push Apple to change their processes to prevent these problems?

                                                                                                                  1. 3

                                                                                                                     The responsible thing to do is publish it as soon as possible with a known workaround. Since users can protect themselves without waiting for Apple to fix the issue, they should know about it NOW. In this case the known workaround is to set a password for the root account. Another workaround is to never let anyone near your computer until it’s fixed.

                                                                                                                    Users can fix it themselves, they need to know NOW.

                                                                                                                    This really depends on what your mental model of a “user” is:

                                                                                                                     1. In a “Responsible Disclosure” scenario, there are certainly users who are being “hung out to dry” in that they would be in a position to do something to protect themselves a lot sooner if they were informed of the bug ASAP. These users are exposed to risk by not being informed immediately.

                                                                                                                    2. In an “Immediate Disclosure” scenario, there are a large number of users who will not hear about the security bug, and are not possessed of the kinds of skills that would allow them to mitigate the problem even if they did hear about it. These users are exposed to risk by not giving the vendor time to step in and provide them with mitigation via automated channels.

                                                                                                                     It is irresponsible for the security community to keep these private to “protect the weak and useless user”.

                                                                                                                    I would suggest that this is in no way a clear-cut conclusion, and that reasonable people have room for reasonable disagreement on this topic. It’s fundamentally a train-tracks Ethics problem, and there is no real “right answer” here, however passionately you believe that yours is the only correct one.

                                                                                                                    These embargoes serve only the purpose of minimizing embarrassment and cost to the bottom line.

                                                                                                                     I don’t think that’s a charitable reading. My own take is that the number of users exposed to risk in scenario #2 is larger than the number in scenario #1, and therefore scenario #1 is preferable. You’re free to disagree, of course, but attributing that disagreement to bad faith isn’t conducive to communication.

                                                                                                                    1. 1

                                                                                                                      I don’t believe your disagreement is in bad faith. I believe there are business reasons, not user reasons, that disclosure rules for bounties are the way they are.

                                                                                                                      Read the first paragraph of their disclosure. https://support.apple.com/en-us/HT208315

                                                                                                                      Might as well say “we will keep security issues secret from you so forget any thought you had of keeping yourself safe. Let us adults take care of you.”

                                                                                                                      If you are right that the #2 group is larger, doesn’t that bother you? What would have to change so most users have power to protect themselves vs rely on the paternal powers of the company?

                                                                                                            1. 2

                                                                                                              But what if you actively turn off location services, haven’t used any apps, and haven’t even inserted a carrier SIM card?

                                                                                                              Ok, but what if you haven’t logged into your Google Account? Then this was far less of an issue (not to say that it wasn’t one), at least for me.

                                                                                                              1. 3

                                                                                                                Not logging in doesn’t change much.

                                                                                                                The account information gives them a few more data points, but they’re not very important ones. They don’t need your account info to send you advertisements about nearby businesses, for example, or to know you’ve been searching for some type of product.

                                                                                                                Just because they don’t know your name doesn’t mean they haven’t been following you around the internet monitoring everything you’ve been doing.

                                                                                                                1. 5

                                                                                                                  The article describes Google being really invasive about collecting data on you.

                                                                                                                  Why would they give a fuck about whether you’re logged in or not? It’s not like being signed in signifies your acceptance of everything they’re doing to you either!

                                                                                                                  No one should be surprised by this. Google is basically an arm of the US surveillance state, and has always been. If you look into it, you’ll find they were funded by the CIA (In-Q-Tel) to begin with.

                                                                                                                  Ever wonder why no other search engine has come close to the quality of Google’s search results? No one in 2017 can do what Sergey and Larry did in the early 2000’s?

                                                                                                                  Investors wouldn’t fund a massive money making machine? People wouldn’t flock to a non-invasive alternative with roughly equal quality search results?

                                                                                                                  1. 4

                                                                                                                    People wouldn’t flock to a non-invasive alternative with roughly equal quality search results?

                                                                                                                    Correct. Unless that alternative can also provide maps, multimedia, try to satisfy sci-fi fantasies, perform nearly every service under the sun, and become just as big of a household name, no one is going anywhere.

                                                                                                                     People already see critics and those who use the smaller alternatives as “power-hungry loonies who demand privacy in the postprivacy age” and who “reject the inevitable”, as I keep getting told.

                                                                                                                    1. 3

                                                                                                                      Correct. Unless that alternative can also provide maps

                                                                                                                      Come on. Google Maps would still work just fine, even if you used something else for searches.

                                                                                                                      multimedia, try to satisfy sci-fi fantasies, perform nearly every service under the sun, and become just as big of a household name, no one is going anywhere.

                                                                                                                      Now you’re just listing some hand-wavy services that Google supposedly provides, that we couldn’t live without.

                                                                                                                      Again, as if you couldn’t use a search engine like you and everyone else started using Google back in the day. It gave you much better results than anything before, and you never looked back.

                                                                                                                      Somehow we all managed without maps, “multimedia”, or the search engine “satisfying sci-fi fantasies”, whatever that’s supposed to mean.

                                                                                                                      1. 2

                                                                                                                         Somehow we all managed without maps, “multimedia”, or the search engine “satisfying sci-fi fantasies”, whatever that’s supposed to mean.

                                                                                                                        Maps has positively impacted my life in a big way. I don’t ever feel lost even in a completely new city. Even in a place with utterly insane streets, it is trivial to get around. It’s pretty freeing to know how to get some place completely new and how long it will take you to get there. Hands down, the best feature modern phones have.

                                                                                                                        Gmail and Drive are nice too, but not nearly as important.

                                                                                                                        1. 2

                                                                                                                          Yes, maps is nice. Again, you could use both Google Maps AND someone else’s search.

                                                                                                                          1. 1

                                                                                                                            Agreed.

                                                                                                                    2. 4

                                                                                                                      You’re oversimplifying. Big corporations by necessity cooperate with the states in which they operate. That’s the reality of doing business, anyone who thinks anything different is deluding themselves.

                                                                                                                      Also, anyone who thinks they can own a modern smartphone and thinks they can’t be tracked, that their location isn’t being recorded somewhere, and that everything they send and receive isn’t being scanned is also deluding themselves.

                                                                                                                      We live in David Brin’s Transparent Society - best either get used to it, or learn to forego the conveniences such modern technological advances bestow.

                                                                                                                      1. 10

                                                                                                                        We live in David Brin’s Transparent Society - best either get used to it, or learn to forego the conveniences such modern technological advances bestow.

                                                                                                                        Brin’s Transparent Society was predicated on “transparency from below”, in which we had an equal view into the lives of those viewing us.

                                                                                                                        Our current society is merely an authoritarian surveillance state. It looks nothing like what he described. “Get used to it” is a disastrously passive response to the current situation.

                                                                                                                        1. 2

                                                                                                                          My understanding is that the paper outlines two models - one in which total transparency reigns, and everyone can see everyone all the time. I agree we are nowhere near there.

                                                                                                                           The other is the model where only certain parties (state agencies and big companies) see everything; we are getting there very quickly, IMO.

                                                                                                                          1. 4

                                                                                                                            The paper outlines those two models, labels the former “The Transparent Society” and presents the latter as, essentially, a dystopian hell on earth inimical to human rights and freedom.

                                                                                                                            Since you feel we’re very quickly ending up in the latter, why advocate “best either get used to it, or learn to forego the conveniences”? That really seems to fly in the face of Brin’s paper, which was presenting an alternative to the current state of affairs that we could only ever hope to engage with by ignoring the very “resign yourself or go luddite” attitude that your post reifies.

                                                                                                                             tl;dr it’s weird to cite his paper in an argument that someone should resign themselves to the current surveillance status quo, when the paper advocates a radical alternative to the current surveillance status quo.

                                                                                                                            1. 4

                                                                                                                              You’re right. Thanks for pointing that out.

                                                                                                                        2. 1

                                                                                                                          You’re oversimplifying. Big corporations by necessity cooperate with the states in which they operate. That’s the reality of doing business, anyone who thinks anything different is deluding themselves.

                                                                                                                          Oversimplifying how? You don’t seem to be refuting anything I said.

                                                                                                                          You know the “co-operation” you referred to is all about either: 1) the government controlling the masses, and/or 2) the government preventing competition to the BigCorp, right?

                                                                                                                          But you made it sound like a vaguely good thing. It’s not. It never is.

                                                                                                                          1. 2

                                                                                                                            In the sense that compliance does not imply ownership. Google no doubt cooperates with various US intelligence agencies, but that does not make them owned by them or an “arm” of the government. I don’t disagree at all, I’m just pointing out that the phrasing you use implies things that I do not think are true.

                                                                                                                            1. 2

                                                                                                                              Investment by In-Q-Tel does imply at least part-ownership by the government / CIA / surveillance apparatus. It’s not unreasonable to call Google an arm of the government.

                                                                                                                        3. 1

                                                                                                                          The problem is that people, for the most part, assess risk by how often they know of bad outcomes. When was the last time you heard that somebody was bitten by Google’s invasion of their privacy? Europe is a bit different with regard to a cultural memory of spying, and accordingly European policies usually favor privacy.

                                                                                                                          I don’t think things are looking up, either. As robots slowly eclipse humans in various kinds of labor, people’s opinions and attention will become increasingly valuable. If Facebook and Google’s revenue are any indication, there’s a lot of value in people’s privacy.

                                                                                                                          1. 1

                                                                                                                             I thought that it might be harder for them to accurately track a device without an account, but after thinking about it in more detail, a kind of artificial device ID really shouldn’t be that hard for them to implement if they’ve gotten this far. The second reason was that until recently my phone was rooted with CyanogenMod w/o Gapps, so unless they pulled a MINIX on my phone, they shouldn’t have been able to access my device directly.

                                                                                                                          2. 2

                                                                                                                            Have you used that sim on another phone?
                                                                                                                            Have you used that phone number on another phone?
                                                                                                                            Does someone have a contact in their phone/google contact/facebook that says “zge, phone number xxx-xxx-xxx”?
                                                                                                                            Have you visited/logged into some other website that uses some google API that could identify you?
                                                                                                                            Have you connected to a wifi network? Have you used bluetooth? In both cases what you connect to could easily identify you.
                                                                                                                            Have you had wifi or bluetooth turned on but not connected to a network?
                                                                                                                             Has your phone been turned on? Android and iOS will both search for networks/devices anyway, to either make connecting quicker when you do turn it on, aid location information in maps, etc., or track you.

                                                                                                                          1. 1

                                                                                                                            I’m not familiar with the studies about type systems, but I feel pretty confident many large projects I’ve worked on would fall apart if they were dynamically typed. So in the spirit of the article, what do those studies actually say? In particular, what do they say about projects of different code sizes? Are there even enough multi-million line dynamic language projects to analyze?

                                                                                                                            1. 13

                                                                                                                              The article quite literally links to a review of studies made. I can’t help but notice that you kind of prove their complaint: people not even doing the simplest review of linked sources.

                                                                                                                              https://danluu.com/empirical-pl/

                                                                                                                              1. 4

                                                                                                                                Yes, it’s true, I didn’t review the linked sources. Because they are long and I don’t quite care enough. I’m just curious enough to ask, since it’s likely someone here would already know and have an answer slightly more concise than several pages. :)

                                                                                                                                1. 3

                                                                                                                                   The problem with comparing static and dynamic languages is that that variable is hard to isolate from all the other differences between individual languages that happen to have static type systems and individual languages that happen to have dynamic type systems. I’ve seen danluu’s analysis before, and not been particularly impressed that any of the studies were really answering the question of “all else being equal, is static typing better than not?”, as opposed to showing that you can write code with fewer bugs in Haskell specifically than in Python specifically, or that Java’s type system doesn’t gain you all that much, or some result along those lines. Not all type systems are equally powerful or ergonomic in any case.

                                                                                                                                   I’m a big fan of strong typing myself, from my own experience of maintaining large codebases written in dynamically-typed languages and debugging things that a type-checker would’ve caught, but I think that has as much to do with the fact that a specific set of popular strongly-typed languages encourages a different mindset around creating code than a specific set of popular dynamically-typed languages does, as with the raw fact of having types or not.

                                                                                                                                  1. 2

The fact that it’s hard to analyze is itself evidence that static typing does not play a dominant role. If the type system were the defining feature of a language, it would dominate the other factors; in practice it clearly does not.

                                                                                                                                    1. 7

                                                                                                                                      That may not be a valid way to look at the problem. See McNamara Fallacy:

                                                                                                                                      https://en.wikipedia.org/wiki/McNamara_fallacy

                                                                                                                                      But I might not be completely understanding what you’re saying.

                                                                                                                                      1. 3

I’m not talking about disregarding anything. To put it another way: we have to observe a statistical difference for a language, or a group of languages, compared to others. For example, if projects written in Haskell were statistically of higher quality than those written in other languages, we could make the hypothesis that Haskell’s type system has a positive impact on quality and test that hypothesis.
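
To make that concrete, here is a rough sketch (in Python) of what testing such a hypothesis might look like; the defect numbers are invented placeholders and the Mann-Whitney test is just one reasonable choice, not anything taken from the linked studies:

    # Hypothetical sketch: compare per-project defect rates for two groups of
    # projects and check whether the difference is statistically significant.
    # The numbers are made-up placeholders, not data from any real study.
    from scipy.stats import mannwhitneyu

    defects_static = [0.8, 1.1, 0.9, 1.4, 0.7, 1.0]   # defects per KLOC (fabricated)
    defects_dynamic = [1.2, 0.9, 1.5, 1.1, 1.3, 1.0]  # defects per KLOC (fabricated)

    stat, p_value = mannwhitneyu(defects_static, defects_dynamic, alternative="two-sided")
    print(f"U={stat}, p={p_value:.3f}")
    # Only if p is small does it make sense to start arguing about *why* the
    # groups differ (type system, tooling, developer population, ...).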

                                                                                                                                        1. 4

This is a very different statement from the first one I responded to. Your first statement was that “X being hard to measure is evidence X isn’t a big factor”. But that is clearly wrong: being hard to measure only means you cannot say much about X; it is not evidence for one particular interpretation of it.

                                                                                                                                          1. 2

                                                                                                                                            That was not my statement. The statement was that X does not appear to play a more significant role than other factors.

                                                                                                                                            1. 1

                                                                                                                                              Does that mean you do know how to measure the factors?

                                                                                                                                              1. 1

I’m saying that you have to show that statistical trends exist before you can start having a discussion about different factors and their potential impact.

                                                                                                                                                1. 2

                                                                                                                                                  Does that mean you’re changing your original statement? Your original statement was:

                                                                                                                                                  The fact that it’s hard to analyze is itself evidence that static typing does not play a dominant role.

                                                                                                                                                  Being hard to analyze is not evidence for a particular interpretation other than the interpretation that it’s hard to analyze.

                                                                                                                                                  1. 1

I’m not changing anything. If a particular factor made a significant contribution, that would show up consistently across all languages that share the factor. I think at this point you know exactly what I’m saying, and you’re just playing word games with me.

                                                                                                                                                    1. 1

But that is only true if what one is measuring is well-defined, which it doesn’t appear to be, right? It feels like you’re saying the absence of evidence is evidence of absence, which is clearly false.

I’m not playing word games with you, I’m being precise in order to understand what you are saying. Based on what you’ve said, it seems like one would have to conclude that one cannot say one way or the other what effect the type system has on a project, rather than that it has no effect. Isn’t this like the difference between failing to reject the null hypothesis and accepting the null hypothesis?

                                                                                                                                                      1. 1

                                                                                                                                                        Once again, what I’m saying is that you have to demonstrate that there is a statistical difference before you can discuss the cause of that difference. What part of that statement are you having trouble with?

                                                                                                                                                        1. 1

                                                                                                                                                          That statement is perfectly reasonable, but that is not the first statement you made where you claimed that the mere fact that something is difficult to measure is evidence of a particular conclusion. Did I misinterpret what you said? The exact quote is:

                                                                                                                                                          The fact that it’s hard to analyze is itself evidence that static typing does not play a dominant role.

                                                                                                                                                          1. 2

That directly follows from the fact that statistical differences have not been demonstrated. If a particular language were shown to be a statistical outlier, then you could reasonably say that some feature or combination thereof is responsible for that. Since we are not seeing such outliers, that strongly implies that the choice of language is not a dominant factor.

                                                                                                                                                            It’s entirely possible that things like developer skill, development process, and testing practices eclipse any effect a language might have.

                                                                                                                              2. 14

                                                                                                                                I’m not familiar with the studies about type systems, but I feel pretty confident many large projects I’ve worked on would fall apart if they were dynamically typed.

                                                                                                                                Sure. And ancient Greek scholars felt pretty confident that rubbing garlic on a magnet would demagnetize it.

                                                                                                                                We need to start by recognizing that a personal feeling of confidence is an absolutely terrible basis from which to derive objective facts about the world.

                                                                                                                                1. 3

                                                                                                                                  Fall apart in what sense? I worked for 15 years on a codebase that was maintained by about 500 engineers. 90% of it was written in TCL.

                                                                                                                                  You might argue the codebase would have been better if it had been typed, but we continued to make money just fine, and despite competitors (internal and external) trying to rewrite or replace it, they couldn’t. It didn’t ‘fall apart’.

                                                                                                                                1. 21

The fundamental problem with USB-C is also seemingly its selling point: USB-C is a connector shape, not a bus. It’s impossible to communicate that intelligibly to the average consumer, so now people are expecting external GPUs (which run on Intel’s Thunderbolt bus) for their Nintendo Switch (which supports only USB 3 and DisplayPort as external buses), because hey, the Switch has USB-C and the eGPU connects with USB-C, so it must work, right? And hey, why can I charge with this port but not that port, when they’re “exactly the same”?

                                                                                                                                  This “one connector to rule them all, with opaque and hard to explain incompatibilities hidden behind them” movement seems like a very foolish consistency.

                                                                                                                                  1. 7

                                                                                                                                    It’s not even a particularly good connector. This is anecdotal, of course, but I have been using USB Type-A connectors since around the year 2000. In that time not a single connector has physically failed for me. In the year that I’ve had a device with Type-C ports (current Macbook Pro), both ports have become loose enough that simply bumping the cable will cause the charging state to flap. The Type-A connector may only connect in one orientation but damn if it isn’t resilient.

                                                                                                                                    1. 9

Might be crappy hardware. My phone and Thinkpad have been holding up just fine. USB-C seems a lot more robust than micro-B.

                                                                                                                                      1. 3

                                                                                                                                        It is much better, but it’s still quite delicate with the “tongue” in the device port and all. It’s also very easy to bend the metal sheeting around the USB-C plug by stepping on it etc.

                                                                                                                                      2. 6

                                                                                                                                        The perfect connector has already been invented, and it’s the 3.5mm audio jack. It is:

                                                                                                                                        • Orientation-free
                                                                                                                                        • Positively-locking (not just friction-fit)
                                                                                                                                        • Sturdy
                                                                                                                                        • Durable

                                                                                                                                        Every time someone announces a new connector and it’s not a cylindrical plug, I give up a little more on ever seeing a new connector introduced that’s not a fragile and/or obnoxious piece of crap.

                                                                                                                                        1. 6

Audio jacks are horrible from a durability perspective. I have had many plugs become bent and jacks damaged over the years, resulting in crossover or nothing playing at all. I have never had a USB cable fail on me because I stood up with it plugged in.

                                                                                                                                          1. 1

                                                                                                                                            Not been my experience. I’ve never had either USB-A or 3.5mm audio fail. (Even if they are in practice fragile, it’s totally possible to reinforce the connection basically as much as you want, which is not true of micro USB or USB-C.) Micro USB, on the other hand, is quite fragile, and USB-C perpetuates its most fragile feature (the contact-loaded “tongue”—also, both of them unforgivably put the fragile feature on the device—i.e., expensive—side of the connection).

                                                                                                                                          2. 4

                                                                                                                                            You can’t feasibly fit enough pins for high-bandwidth data into a TR(RRRR…)S plug.

                                                                                                                                            1. 1

                                                                                                                                              You could potentially go optical with a cylindrical plug, I suppose.

                                                                                                                                              1. 3

                                                                                                                                                Until the cable breaks because it gets squished in your bag.

                                                                                                                                            2. 3

3.5mm connectors are not durable and are absolutely unfit for any sort of high-speed data.

                                                                                                                                              They easily get bent and any sort of imperfection translates to small interruptions in the connection when the connector turns. If I – after my hearing’s been demolished by recurring ear infections, loud eurobeat, and gunshots – can notice those tiny interruptions while listening to music, a multigigabit SerDes PHY absolutely will too.

                                                                                                                                            3. 3

This. USB-A is the only type of USB connector that has never failed for me. All B types (normal, mini, micro) and now C have failed on me in some situation (breaking off, getting wobbly, loose connections, etc.).

That said, Apple displays their iPhones in Apple Stores resting solely on their plug. That alone speaks to some sort of good reliability design in their ports. Plus the port in the device doesn’t need a “tongue” that might break off at some point - the Lightning plug itself doesn’t have any intricate holes or similar and is made (mostly) of a solid piece of metal.

                                                                                                                                              As much as I despise Apple, I really love the feeling and robustness of the Lightning plug.

                                                                                                                                              1. 1

I’m having the same problem: the slightest bump will knock it out of charging mode. I’ve been listening to music a lot recently and it gets really annoying.

                                                                                                                                                1. 2

                                                                                                                                                  Have you tried to clean the port you are using for charging?

I have noticed that Type-C seems to suffer a lot more from lint in the ports than Type-A does.

                                                                                                                                              2. 6

                                                                                                                                                It’s impossible to communicate that intelligibly to the average consumer,

                                                                                                                                                That’s an optimistic view of things. It’s not just “average consumer[s]” who’ll be affected by this; there will almost certainly be security issues originating from the Alternate Mode thing – because different protocols (like thunderbolt / displayport / PCIe / USB 3) have extremely different semantics and attack surfaces.

                                                                                                                                                It’s an understandable thing to do, given how “every data link standard converges to serial point-to-point links connected in a tiered-star topology and transporting packets”, and there’s indeed lots in common between all these standards and their PHYs and cable preferences; but melding them all into one connector is a bit dangerous.

                                                                                                                                                I don’t want a USB device of unknown provenance to be able to talk with my GPU and I certainly don’t want it to even think of speaking PCIe to me! It speaking USB is frankly, scary enough. What if it lies about its PCIe Requester ID and my PCIe switch is fooled? How scary and uncouth!

                                                                                                                                                1. 3

Another complication is that making every port do everything is expensive, so you end up with fewer ports in total (Thunderbolt in particular). Laptops with 4 USB-A ports, HDMI, DisplayPort, Ethernet, and power are easy to find. I doubt you’ll ever see a laptop with 8 full-featured USB-C ports.

                                                                                                                                                1. 24

                                                                                                                                                  So, in some ways this reminds me of the care and feeding of interns. My philosophy on that is basically:

                                                                                                                                                  • Don’t give them “opinion” projects. Things like “compare these two frameworks” are just busywork, because they generally don’t have the experience to say anything of substance that you can’t read off of a blog somewhere. Further, they probably aren’t immersed enough in the bullshit business requirements of your environment to make really good analyses.
                                                                                                                                                  • Give them clear deliverables. Every project they get should have a clear GO/NO-GO completion criteria. There should be an easily-testable (automatically or manually) way of verifying that their work is done, if for no other reason than to give them immediate feedback on their progress.
                                                                                                                                                  • Never put them in the critical path of a project. They aren’t paid enough to justify the stress or responsibility associated with that, and the crazy bizarre shit they may come up with will become heinous legacy code overnight.
                                                                                                                                                  • Give them housecleaning tasks. Building on the above two points, doing things like making deployments easier, setting up CI, or writing isolated reports for business are all great things for them to do: useful to the team, easy to gauge progress on, and unimportant if they fuck up.
                                                                                                                                                  • Be constantly asking questions about how they approach things, even if they’re doing it right. Interns (and junior engineers) need to learn, and to learn they need introspection. Introspection is easiest if somebody external is asking you to explain your thought process and give constructive pushback.

                                                                                                                                                  Of course, there is another school of thought (which has its benefits) which goes:

                                                                                                                                                  • Use juniors to do the hard and backbreaking work of building a company. As Stalin showed us, nobody is too underskilled to be fed to the front-lines. They don’t know how bad things are, so they’ll keep working.
• Use a few senior devs (commissars?) to keep them pushing in the right directions and not retreating. Pay these devs well, since their job is basically to prevent the impressionable juniors from thinking too critically about why they’re rebuilding the frontend for the third total redesign in three weeks.
                                                                                                                                                  • If things get bad, start farming the juniors out to other departments to help fight fires (and there will be fires). Be wary of juniors that realize they have cross-disciplinary skills, because they can disrupt management and start requesting more money. Fire them or promote them.
                                                                                                                                                  • Use your juniors to recruit from their schools so you have a continual stream of warm bodies.

                                                                                                                                                  EDIT: I obviously don’t support the second operating model, but a lot of people have had success with it.

                                                                                                                                                  1. 8

                                                                                                                                                    Don’t give them “opinion” projects. Things like “compare these two frameworks” are just busywork, because they generally don’t have the experience to say anything of substance that you can’t read off of a blog somewhere. Further, they probably aren’t immersed enough in the bullshit business requirements of your environment to make really good analyses.

This is the only thing here I disagree with. I think “compare these two frameworks” can be a great project, if done well. The problem with reading off blogs is that they often tend to be pretty one-sided or, worse, written in the honeymoon phase of a technology. The best way to learn about a tech is to field-test it outside of its easy paths. So a comparison works if:

                                                                                                                                                    • The test is a moderately-complex specification, complicated enough that you can’t just rely on framework magic
• The implementations are hard enough that the assistant runs into difficulties with the frameworks, which they can then write up
                                                                                                                                                    • An engineer is keeping tabs and occasionally course correcting
                                                                                                                                                    • There’s some quantifiable benchmark that the engineer can run, or some additional way of testing the two examples (how hard is it to add one more feature?).
                                                                                                                                                    • While the assistant writes up an experience report, the engineer should run or oversee the quantifiable benchmarks.

                                                                                                                                                    One example: for a data warehousing project at work, I wrote complete ETLs and implementations for both Redshift as a data store and Postgres, and ran a set of business queries to compare runtimes. That’d be the kind of thing I’m thinking of.
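
A minimal sketch of the kind of timing harness I mean (the DSNs, tables, and queries are placeholders; Redshift speaks the Postgres wire protocol, so psycopg2 can talk to both):

    # Rough sketch: run the same business queries against two warehouses and
    # compare wall-clock times. Connection strings, tables, and queries are
    # placeholders, not the real ones from the project described above.
    import time
    import psycopg2

    TARGETS = {
        "postgres": "host=pg.example.internal dbname=warehouse user=analyst",
        "redshift": "host=rs.example.internal port=5439 dbname=warehouse user=analyst",
    }

    QUERIES = {
        "daily_revenue": "SELECT order_date, SUM(total) FROM orders GROUP BY order_date",
        "top_customers": "SELECT customer_id, SUM(total) FROM orders "
                         "GROUP BY customer_id ORDER BY 2 DESC LIMIT 100",
    }

    for target, dsn in TARGETS.items():
        with psycopg2.connect(dsn) as conn, conn.cursor() as cur:
            for name, sql in QUERIES.items():
                start = time.perf_counter()
                cur.execute(sql)
                cur.fetchall()
                print(f"{target:9s} {name:15s} {time.perf_counter() - start:7.2f}s")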

                                                                                                                                                    1. 4

                                                                                                                                                      One example: for a data warehousing project at work, I wrote complete ETLs and implementations for both Redshift as a data store and Postgres, and ran a set of business queries to compare runtimes. That’d be the kind of thing I’m thinking of.

That might be a reasonable task for a new graduate of a four-year college with a couple of semesters of DB-related courses under their belt, but I think you’re setting your expectations far too high for someone coming out of a bootcamp. The average bootcamp grad probably has a cursory understanding of a DB as “the place where your data is saved”, little if any direct exposure to SQL, and certainly no grasp of the underlying storage paradigms or the idea that queries have a time-complexity related to how data is modeled, stored, and queried against. It’s unlikely they’ll have ever interacted with a database without going through an ORM, honestly.

                                                                                                                                                      Unless you’re going to pair with them 100% of the project, you’re throwing them into the deep end.

                                                                                                                                                      1. 1

                                                                                                                                                        A lot of the full-timers where I work are in a similar boat :) The database is deep magic for advanced users, or something

                                                                                                                                                  1. 3

Other crafts have this somewhat figured out, so it doesn’t hurt to look to them for inspiration. Apprentice woodworkers were tasked initially with setup work (planing boards flat and the like), then with easy finishing work (simple sanding tasks, say), and with progressively more challenging parts of a build from there.

A lot of feature and bugfix work in a codebase involves a combination of “heavy lifting” and “straightforward setup & polish” work. You can productively use an assistant’s time by taking on the heavy-lifting portions of a feature implementation or bug fix yourself, and then handing it over to them to do the easy parts that get it the rest of the way towards done.

At first this is going to involve you essentially spelling out how to do the easy 20% (“You’re going to need to write a test in this file, called this, that tests for this in this way and fails, so that I have a regression test to check my work against. When I’ve got this passing, I will have put the elements on the page; you just need to style them to appear where they should. Here are some links to how that is done.”), but you’ll quickly be able to dial that back, and eventually ease them backwards into taking on progressively more of the hard 80%.

Walk them through how you approached the heavy-lifting portion of each task on a whiteboard at a fairly high level, but leave them to walk through your code on their own. If you directly walk them through the codebase, they’ll take longer to develop the critical skill of reading and understanding someone else’s code.
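
To make the hand-off concrete, the first failing test I ask for might look something like this (the module, function, and expected behaviour are all invented for illustration):

    # tests/test_invoice_totals.py -- a deliberately failing regression test,
    # written before the fix, so both of us can verify the work later.
    # The module, function, and expected behaviour are invented examples.
    import pytest

    from billing import compute_invoice_total  # hypothetical module under test


    def test_discount_is_applied_once():
        # Bug report: a 10% discount is currently applied twice on renewal invoices.
        total = compute_invoice_total(subtotal=100.00, discount=0.10, renewal=True)
        assert total == pytest.approx(90.00)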

                                                                                                                                                    This avoids giving them synthetic busywork that they’re not equipped to do well (fresh bootcamp grads are not going to be able to research and report on code in a way you find useful. If you give people tasks they’re not equipped to succeed at, that’s bad for them and reflects poorly on you), lets them feel like they’re making visible, real contributions, and lets them learn from the commits they’re inheriting from you.

                                                                                                                                                    Don’t give them complete vertical features or QoL projects to handle on their own. Left to their own devices, juniors will approach things in bizarre ways they aren’t equipped to recognize the shortcomings of, and without constant guidance they’re apt to internalize those approaches and turn into “10 years of 1 year of experience” programmers.

                                                                                                                                                    1. 6

                                                                                                                                                      The W3C has never been in a position to block DRM.

                                                                                                                                                      DRM could be blocked by convincing media companies that they don’t need it, by convincing browser vendors to reject it, or by convincing consumers to boycott it. The W3C’s blessing (or lack thereof) is completely irrelevant to all of these parties.

                                                                                                                                                      1. 14

W3C has been irrelevant since they abdicated progressing web standards in useful directions.

                                                                                                                                                        DRM was the absolute last thing we as devs and users needed.

Vast JavaScript libraries are NOT what we needed.

                                                                                                                                                        Personally, viewing the web landscape today… I think EFF did the right thing.

                                                                                                                                                        Ignore W3C and find an organization that actually wants to make the web a better place.

                                                                                                                                                        1. 6

                                                                                                                                                          IETF

                                                                                                                                                          1. 2

                                                                                                                                                            One issue that complicates this is the W3C’s copyright policy for their specifications: https://www.w3.org/Consortium/Legal/2015/doc-license

                                                                                                                                                        2. 6

                                                                                                                                                          TBH the whole W3C debate and protest over DRM was a boondoggle.

The three largest browser makers are all pro-DRM, and Mozilla is mostly funded by pro-DRM companies. W3C approval or not, web DRM was going to be implemented and rolled out. The only question was whether it would be standardized or whether each browser would implement its own.

                                                                                                                                                          Like it or not, the internet is just a revenue source these days.

                                                                                                                                                          1. 10

                                                                                                                                                            If each browser required its own DRM implementation, using DRM would become more expensive and challenging, which is the goal.

                                                                                                                                                            Why are people trying to save the DRM companies money? Standardization isn’t a goal in and of itself. You wouldn’t want to standardize malware APIs (although arguably that’s what W3C is doing) or JS APIs designed to help pop-up ads.

                                                                                                                                                            1. 3

                                                                                                                                                              If each browser required its own DRM implementation, using DRM would become more expensive and challenging, which is the goal.

                                                                                                                                                              You fundamentally misunderstand the role of standards bodies.

                                                                                                                                                              If the W3C had rejected EME…all of the browsers would have gone ahead and continued to use the EME standard they’d all already agreed upon and implemented. All that would be happening would be that the W3C had stuck its head in the sand and chosen to pretend a standard that existed in practice did not exist, because they didn’t like it.

                                                                                                                                                              Standards bodies exist to help facilitate coordination between the browser vendors. Standards bodies are only useful insofar as they facilitate that coordination. Standards bodies have zero power to force browser vendors to do, or not do, anything. If standards bodies cease to be useful for coordination purposes, they will be ignored and replaced by a new body that the vendors can actually cooperate through.

                                                                                                                                                              There was no universe in which the W3C could force browser vendors to not implement EME, or all do their own thing. The only choice was “acknowledge the existence of EME” or “stick head in sand and become irrelevant”.

                                                                                                                                                              1. 2

                                                                                                                                                                The browsers didn’t require W3C to approve EME to implement it. They implemented it long before it was approved. A ‘no’ vote would not have made anything more expensive or challenging.

                                                                                                                                                                1. 1

                                                                                                                                                                  If each browser required its own DRM implementation, using DRM would become more expensive and challenging, which is the goal.

                                                                                                                                                                  That’s not the goal, and that premise doesn’t hold anyway.

                                                                                                                                                                  The expenses around this are pocket change for Microsoft, Google, and Apple, and not having a standard means they SAVE money because they can use their existing DRM. They’d cross license each other’s DRM, and they’d all be compatible.

                                                                                                                                                                  The people hurt by not having a standard would be people developing new browsers. They’d have to implement multiple DRM technologies, instead of just one.

                                                                                                                                                                  I don’t like DRM, but it’s not going away, so it might as well be dealt with in a sane way.

                                                                                                                                                                2. 3

                                                                                                                                                                  I totally agree. To me, this seems like throwing the baby out with the bathwater. So, the EFF has abdicated its right to advocate for digital privacy rights as part of the W3C?

                                                                                                                                                                  I realize that sometimes groups like the EFF need to make a stand, I’m just not sure this is the best way to achieve what they’re looking for, or whether this is the right hill to die on, so to speak.

                                                                                                                                                                  1. 2

I don’t think it’s just a revenue source, because non-profits use it well. It is built for companies and organizations, because it is a client-server model, and running servers takes money, initiative, and persistence.

What I am always a bit confused about is: if the internet isn’t good enough for your project, you are necessarily missing money (if only a small amount), initiative, or persistence; but then why is your goal worth anything? Those are all reasonable signals of social usefulness.

                                                                                                                                                                    Thus, distributed/“libre” networks inherently have little value. You’ll know you’re doing something useful when a few other people are willing to help foot the aws bill.

                                                                                                                                                                    1. 11

                                                                                                                                                                      You’ll know you’re doing something useful when a few other people are willing to help foot the aws bill.

                                                                                                                                                                      I respectfully disagree.

                                                                                                                                                                      When I was younger, I enjoyed learning about things on all manner of odd private websites. To this day, when I’m feeling down, reading webcomics (many of which lack advertising!) cheers me up. Flipping through archives of essays and memos hosted by people gratis has taught me much.

                                                                                                                                                                      If I can help give back in that same way by hosting content myself (even silly things like my own blog), then I believe that I have done something useful–quite without consideration for profitability.

                                                                                                                                                                  2. 5

                                                                                                                                                                    There’s a difference between blocking DRM and refusing to support DRM.

                                                                                                                                                                    It’s not a pointless moral play. Refusing to standardize malware APIs makes it more expensive and inconvenient to write malware, even if it’s probably going to happen anyway.

                                                                                                                                                                  1. 5

                                                                                                                                                                    Yes, the above example was for building a very small project. But just because the project becomes larger, doesn’t mean the user experience of Make diminishes.

                                                                                                                                                                    Historically, no large project’s Makefile agrees with this assertion. Make scales very poorly to complex build scenarios, and once you need to build on platforms with any differences between them, it’s an absolute shitshow.

                                                                                                                                                                    1. 7

                                                                                                                                                                      This is an assertion that could benefit from more precision. OpenBSD is a fairly large and complex project, runs on many platforms with quite some differences, and uses make pretty much exclusively for building.

                                                                                                                                                                      1. 1

                                                                                                                                                                        I do have one minor jab to make at OpenBSD’s use of makefiles. (footnote 3) Reactions/corrections appreciated.

                                                                                                                                                                        1. 2

                                                                                                                                                                          It calls clean when you run “make build”, but you can also just run make. You’re right it shouldn’t be necessary, but there’s not much inclination to fix it because the point of make build is “from scratch”.

A bigger complaint I have is that recursive make slows down parallel builds when it gets to single-source-file utilities.

                                                                                                                                                                        2. 1

                                                                                                                                                                          Cheerfully amended to: Only one large project in the default install, in a heck of a long time!

                                                                                                                                                                          (Although recursive make considered harmful, and all that)

                                                                                                                                                                      1. 4

                                                                                                                                                                        Just for the sake of argument: YAGNI boils down to “prefer simplicity to complexity”. RDBMS and SQL are very complex. If I’m following YAGNI, why should I use an RDBMS instead of something much simpler, like Mongo?

((I’m strongly in favor of SQL over NoSQL in 99% of cases, just curious about how other lobsters answer the puzzle.))

                                                                                                                                                                        1. 18

Two acronyms that trigger me immensely after seeing a lot of devs abuse them are YAGNI and DRY, usually because they are parroted back blindly by people who aren’t thinking holistically about their systems and the people building and maintaining those systems. For DRY, as an example, a bunch of copy-paste config scripts or boilerplate can actually be a lot easier to troubleshoot and maintain than a byzantine architecture designed to abstract things away so that people can skip writing var window = new Window(0,0,200,200); var window2 = new Window(100,100,200,200);.

                                                                                                                                                                          More to your point, with YAGNI, the answer for me is that yeah, starting out it’s honestly faster to use a memory store (say, var sessions = Object.create(null)) instead of even Mongo! If you need persistence quick, use property files or json blobs flushed to disk periodically.
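
As a rough sketch (in Python rather than JS, for variety) of the “JSON blobs flushed to disk periodically” approach; the snapshot path and interval are arbitrary, and this is only meant to cover the gap until a real store is justified:

    # Tiny sketch: in-memory session store with periodic JSON snapshots to disk.
    # Good enough to get started; swap in a real database once you need one.
    import json
    import threading

    SESSIONS = {}                      # session_id -> session data
    SNAPSHOT_PATH = "sessions.json"    # arbitrary location
    FLUSH_INTERVAL_SECONDS = 30        # arbitrary interval


    def flush_to_disk():
        with open(SNAPSHOT_PATH, "w") as f:
            json.dump(SESSIONS, f)
        # Re-arm a timer so snapshots keep happening in the background.
        t = threading.Timer(FLUSH_INTERVAL_SECONDS, flush_to_disk)
        t.daemon = True  # don't keep the process alive just for snapshots
        t.start()


    def load_from_disk():
        try:
            with open(SNAPSHOT_PATH) as f:
                SESSIONS.update(json.load(f))
        except FileNotFoundError:
            pass  # first run, nothing to restore


    load_from_disk()
    flush_to_disk()
    SESSIONS["abc123"] = {"user": "alice", "logged_in": True}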

                                                                                                                                                                          But, and this is where people usually screw up, you use your experience to inform what you’re going to need. Things that every business needs within the first few months of development:

                                                                                                                                                                          • Monitoring, even a simple heartbeat 200 route.
                                                                                                                                                                          • Sending emails
                                                                                                                                                                          • Collecting user emails
                                                                                                                                                                          • Authenticating (not authorizing!) users
                                                                                                                                                                          • Metrics on pageviews to show traffic and conversion
                                                                                                                                                                          • Querying relationships between business domain entities
                                                                                                                                                                          • Logging for when things blow up
                                                                                                                                                                          • Persisting user data to disk

                                                                                                                                                                          Decades of work has shown that there are no special snowflakes in these regards!

                                                                                                                                                                          And yet, claiming YAGNI, a lot of places pretend that those things are not a concern right now and never will be a concern and end up doing really heinous shit that even a moment of reflection would’ve prevented. Example of this would be building an e-commerce site (one of the literal academic exercises for SQL) with a store like Mongo.

Like, yes, right now there is no need to do a rollup of quarterly sales by product line and vendor, but that is something we know you’re going to want as soon as you figure out that such a thing exists. But if people have been strict lean-startup YAGNI the whole time, you’re probably going to find out that the way forward is to retroactively bolt some hideous schema and relational model onto the application layer and hope that it gives you real numbers.
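
For what it’s worth, that rollup is close to a one-liner once the data is in a relational schema; here is a sketch using sqlite3 against an invented orders/products schema:

    # Sketch: "quarterly sales by product line and vendor", which is routine SQL
    # once the data is relational. Table and column names are invented.
    import sqlite3

    conn = sqlite3.connect("shop.db")  # placeholder database
    rollup = conn.execute("""
        SELECT strftime('%Y', o.order_date)                            AS year,
               (CAST(strftime('%m', o.order_date) AS INTEGER) + 2) / 3 AS quarter,
               p.product_line,
               p.vendor,
               SUM(oi.quantity * oi.unit_price)                        AS revenue
          FROM orders o
          JOIN order_items oi ON oi.order_id = o.id
          JOIN products p     ON p.id = oi.product_id
         GROUP BY year, quarter, p.product_line, p.vendor
    """).fetchall()
    # Bolting this onto a pile of denormalized documents after the fact is the
    # "hideous schema in the application layer" scenario described above.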

                                                                                                                                                                          Similar things that people cry YAGNI on:

                                                                                                                                                                          • “We don’t need transactions for our database yet.”
                                                                                                                                                                          • “We don’t need more than one prod server yet.”
                                                                                                                                                                          • “We aren’t going to need site analytics.”
                                                                                                                                                                          • “We aren’t going to need an HTTP API.”
                                                                                                                                                                          • “We don’t need linters and code climate stuff yet.”

One of the signs of seniority, in my opinion, is an engineer who recognizes when the business is, in fact, going to need it, and who, in all other cases, aggressively fakes it in such a way as not to hamper later fixes.

                                                                                                                                                                          1. 7

                                                                                                                                                                            There’s also a tendency to mistake “I don’t know it” for “it’s too complex,” when other people who you can hire are more likely to know it than the “simpler alternative.” Relational databases are the best example: I’ve never seen anyone argue against them who was comfortable with them. If you base your system on the Next New Thing, how likely is it that it will still be around in forty years, like SQL? Or that you’ll be able to hire someone to help you with it?

                                                                                                                                                                          2. 5

                                                                                                                                                                            Just for the sake of argument: YAGNI boils down to “prefer simplicity to complexity”. RDBMS and SQL are very complex. If I’m following YAGNI, why should I use an RDBMS instead of something much simpler, like Mongo?

The complexity of an RDBMS is isolated in a single unit that has been thoroughly tested and presents a simple API to the programmer. The complexity of a key-value store lands mostly on the programmer, who has to maintain a schema outside the database and deal with new and exciting bugs that usually end up with the kind of data loss that would make MySQL look sane.

                                                                                                                                                                            1. 4

                                                                                                                                                                              YAGNI boils down to “prefer simplicity to complexity”. RDBMS and SQL are very complex. If I’m following YAGNI, why should I use an RDBMS instead of something much simpler, like Mongo?

                                                                                                                                                                              “Simpler internally” is not the same as “simple to deal with.” The latter is more relevant.

                                                                                                                                                                              I’d ask two questions: 1) what needs are we most likely to have in the future? and 2) how much pain will we have if we’re wrong?

                                                                                                                                                                              For instance, you may need high scalability. You also may need relational integrity.

                                                                                                                                                                              Which one are you more likely to need? I’d guess “relational integrity”, as every system I’ve ever seen has had at least some relational data. (Even loosely-structured document data needs to belong to a specific user.)

Which one is harder to bolt on later? If you pick an RDBMS and need to scale it, indexes, caching, sharding and clustering are all things that can help. If you pick a NoSQL database and need to add relational integrity and transactions… you’re basically sunk.

                                                                                                                                                                              Which problem hurts more to have? If you have scaling problems (and your business model is sane) you have proportionally large revenue and can afford to work on scaling. If you have data integrity problems, they may be costing you the only customers you have.

                                                                                                                                                                              1. 3

It depends, doesn’t it? If you’ve already got an RDBMS humming along, adding a second type of database is definitely more complex. If you don’t, I can see a case to be made for grabbing MongoDB, but throwing random schemaless JSON documents at it can cause headaches unless you add complexity in the form of discipline, coordination of changes, monitoring, etc. Some of that you may already have, which further complicates the analysis. Either way, it’s easiest to work with the grain of your architecture, which fits the spirit of YAGNI.

                                                                                                                                                                                Looking at YAGNI in particular, does the term itself ever get invoked in discussion as something more than a tool to shut down conversation?

                                                                                                                                                                                1. 1

                                                                                                                                                                                  Just for the sake of argument: YAGNI boils down to “prefer simplicity to complexity”. RDBMS and SQL are very complex. If I’m following YAGNI, why should I use an RDBMS instead of something much simpler, like Mongo?

                                                                                                                                                                                  I think people misunderstand YAGNI. As an engineering principle the idea is that, when you find yourself asking “Hmm, should I do X or build Y now, because someone might want it”, then the answer should be No. It arose in opposition to the Java Factory Factory Factory Overapplication pattern of adding extra injection points “just in case” someone wanted to introduce a different kind of FooBean down the road, which usually never happened, leaving you with a lot of extra complexity to read through for zero real-world gain.

                                                                                                                                                                                  It doesn’t really apply to questions like “Do I want X or Y?”, in my opinion. It’s purely a heuristic for rejecting undertaking “just in case” work you don’t actually have a concrete use-case for.

                                                                                                                                                                                  (In the case of RDBMS vs Mongo, I propose a different heuristic: NENM. Nobody Ever Needs Mongo).

                                                                                                                                                                                1. 4

My concern is that as we add these sorts of features to Mongo, we keep pushing it outside of what it’s good at. One of the things I’ve seen in a similar vein, for example, is almost immediately adding some kind of schema mechanism on top of Mongo, schemalessness being one of its nominal strong points.

                                                                                                                                                                                  If you need transactions, use a real database.

                                                                                                                                                                                  Edit: Must. Be. Positive.

                                                                                                                                                                                  1. 6

                                                                                                                                                                                    My concern is that as we add these sorts of features to Mongo, we keep pushing it outside of what it’s good at.

                                                                                                                                                                                    You know, in years of experience cleaning up the use of Mongo at various employers, I’m still not clear on what that is, other than marketing to people who don’t know better.

To a first approximation, none of the shops who chose to use it seem to have actually wanted a non-transactional document store with data so heterogeneous and unreliable that they should have written (but didn’t) extremely granular key-existence checks every time they needed to deal with a record.
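
To give a sense of what those per-record checks end up looking like in application code, here is a minimal sketch; the document shape and field names are invented for illustration:

```typescript
// Defensive reads forced by heterogeneous, unvalidated documents:
// nothing about the shape can be assumed, so every field is checked.
interface MaybeUser {
  email?: unknown;
  profile?: { displayName?: unknown };
}

function displayNameFor(doc: MaybeUser): string {
  const name = doc.profile?.displayName;
  if (typeof name === "string") {
    return name;
  }
  // Fall back to the local part of the email, if there even is one.
  if (typeof doc.email === "string") {
    return doc.email.split("@")[0];
  }
  return "unknown user";
}
```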

                                                                                                                                                                                    All they really wanted was “DB but fast”, which they mistook Mongo for. But yeah, I agree, don’t staple a two-phase commit protocol to Mongo – just use something less ill-suited for anything anyone actually wants in real life.

                                                                                                                                                                                    1. 2

Well, if you like using a tool that doesn’t quite have all the features you want, there’s really nothing wrong with adding those features yourself; that’s why the MongoDB docs include a description of two-phase commits as a way to handle transactions. Also, most of these added features (schemas, for instance) are implemented via third-party libraries, so if they fit your use case, great; otherwise you can avoid them entirely and use the DB as you see fit.

                                                                                                                                                                                    1. 3

While I appreciate the removal of autotools, is making me have a Python interpreter any better? I guess it’s the case that many of us already do, so then I’ll ask: does it work on 2.x and 3.x? And, since I’m a bit out of the loop, has RedHat Enterprise started shipping 3.x as the system version?

                                                                                                                                                                                      1. 15

                                                                                                                                                                                        While I appreciate the removal of autotools, is making me have a Python interpreter any better?

                                                                                                                                                                                        I assume that the developers feel that making you install Python, which is dirt simple in most environments, is a lot easier than them having to continue to live through the endless hell that is autotools, forever.

                                                                                                                                                                                        1. 6

Well, the question, which @glesica answers here, is: which version? And my question about RedHat is because, for a while, RedHat wouldn’t ship anything newer than Python 2.4, which was ancient. If Meson doesn’t take into consideration the Python versions that are the default on a bunch of different OSes, then it could be a burden on people.

What autotools has going for it is acceptance, and therefore its requirements are readily available on most systems. That doesn’t mean we shouldn’t strive for something better (please, please, please, do!), but we should still be mindful of the dependencies we’re adding when we adopt a new tool.

                                                                                                                                                                                          1. 7

                                                                                                                                                                                            Redhat should bear some of the burden for stranding people on a desert island as well.

                                                                                                                                                                                            1. 2

No doubt! But that still doesn’t make it easy for some people to meet the dependency.

                                                                                                                                                                                            2. 4

That’s no longer true. And you can easily get Python 3 from Red Hat Software Collections.

                                                                                                                                                                                              1. 2

                                                                                                                                                                                                Excellent! I can adjust my preconceived notions that RedHat is too far behind the times to be useful.

                                                                                                                                                                                          2. 10

                                                                                                                                                                                            The change means that GTK+ master now has a build-time dependency on:

                                                                                                                                                                                            • Python 3.x
                                                                                                                                                                                            1. 4

                                                                                                                                                                                              I know nothing about meson, but it would not surprise me that the developers who maintain the build scripts find it easier to work with than autoconf.

                                                                                                                                                                                              1. 2

                                                                                                                                                                                                I don’t know much about Meson either, as I don’t really have use for it.

Having said that, I worked with Jussi, the Meson guy, for a long time not too long ago, and he would not have it any other way :)

Way cool that he made it through with GTK; it didn’t come easy.

                                                                                                                                                                                                Maybe this will inspire more people to look at it as an alternative.

                                                                                                                                                                                              2. 1

                                                                                                                                                                                                I think RedHat is trying to stop shipping python2 entirely. https://lwn.net/Articles/729366/