1. 2

    I made something like this a while back in Python, but I used RSA as the encryption layer which meant the carrier messages would have to be a bit larger. I also didn’t implement the compression they include in this library as my project was just a proof of concept.

    It’s a neat trick, but it’s very easy to discover by looking for text with an abnormal amount of zero-width characters.
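    For anyone curious, both the trick and the trivial detection can be sketched in a few lines of Python. This is a toy illustration of the general idea only: there's no encryption layer, and the characters and bit layout are my own choices, not what this library (or my old RSA version) actually uses.

```python
# Illustrative sketch of zero-width steganography and its detection.
# The specific characters and "append at the end" layout are my own
# simplifications, not this library's actual encoding.
ZW0, ZW1 = "\u200b", "\u200c"  # zero-width space = 0, non-joiner = 1

def hide(carrier: str, secret: str) -> str:
    bits = "".join(f"{byte:08b}" for byte in secret.encode("utf-8"))
    payload = "".join(ZW1 if bit == "1" else ZW0 for bit in bits)
    return carrier + payload  # real tools interleave between words

def reveal(text: str) -> str:
    bits = "".join("1" if ch == ZW1 else "0" for ch in text if ch in (ZW0, ZW1))
    data = bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))
    return data.decode("utf-8")

def looks_suspicious(text: str, threshold: float = 0.05) -> bool:
    # Detection really is this easy: normal text contains essentially
    # zero zero-width characters, while a stego carrier has many.
    zero_width = sum(ch in (ZW0, ZW1) for ch in text)
    return zero_width > threshold * max(len(text), 1)

stego = hide("Totally innocent sentence.", "hi")
assert reveal(stego) == "hi"
assert looks_suspicious(stego)
assert not looks_suspicious("Totally innocent sentence.")
```

    Note how the detector needs nothing but a character count, which is exactly why the scheme is so easy to discover.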

    1. 12

      This is missing the “why”.

      • What are the benefits of this whole mess of new build infrastructure?
      • Who claims these are the new best practices, or are these just the author’s preferences?
      • Why are these tools curl | bash-ed onto my machine instead of packaged with my distribution like a piece of dev tooling I can actually trust to base my entire project on?
      1. 16

        Agree. I now have a sudden urge to write a guide on how to start from something very simple (i.e. system interpreter, no packaging, minimal verification) and how to progress from there. And how to decide which tools you even need to consider, and at which stage.

        In hindsight as a somewhat experienced Python programmer I can see how each of these tools can be helpful – but if I was a beginner I would be completely overwhelmed.

        1. 4

          Please do. I am by no means a beginner at Python itself, but most of these “guides” confuse me. I’ve gotten pretty far with a few venvs and some other minor tools. Why would I need all of these other things?

          It feels very much like people write these kinds of guides where they try to score points by cargo culting a lot without themselves understanding why they are doing things the way they are.

        2. 1

          I think “why” kinda answers itself. People who are familiar with setuptools don’t need an answer, as they would gladly try something else. As for beginners, they don’t need an answer either, as you should really learn the newest and most popular standard.

          So who’s really asking “why”?

          1. 4

            Everyone, beginners especially, has a right to ask for a rationale for the recommendations they receive.

        1. 2

          Someone brought it up on the winget repo: https://github.com/microsoft/winget-cli/issues/353

          1. 4

            Why do people insist on taking stuff like this up on the MS GitHub repos? What do they think they accomplish other than making themselves look like petulant children? I understand the outrage, but nobody (and I mean literally nobody) who is working on that repo can do anything about this kind of thing. It’s the same as the last “controversy,” where Microsoft used the MAUI name that someone else already had a trademark for. Vitriol and shitposting like that does not make Microsoft act faster or necessarily take action in the direction you want. More likely the post eventually gets locked and MS doubles down on its position. The only way to actually push Microsoft is to make this more public in feeds where non-coders reside. Let the sysops who are going to use WinGet know that they are using a copycat that Microsoft just stole, and believe me, sysops don’t read GitHub issues unless they were linked from a Google search…

          1. 27

            I don’t get why people think Microsoft should discuss this on GitHub. Nobody with a GitHub account at Microsoft is qualified to speak on a trademark issue. The reality is that their legal department seems to have dropped the ball, and the only thing the devs can really say to this is “it’s been through legal,” even if they were wrong. It’s not like they can ignore a trademark, so we will probably see a rebrand pretty soon.

            1. 10

              It’s also not unheard of for Microsoft’s legal department to miss checking German trademarks. Modern UI was called Metro UI at some point, too (https://www.techpluto.com/metro-ui-renamed/)

              1. 2

                SkyDrive had to be renamed OneDrive because of a trademark dispute with Sky Group in the UK. Google had to rename Team Drives to Shared Drives for a similar reason. Big-company lawyers often get trademarks wrong. If you’re using such common words, they’re likely to be in use by other companies, too.

              2. 8

                I would have said “I didn’t know that, I’ll double check about this with legal” instead of “perhaps the Linux project should change its name […]”

                No reason to be an ass about it.

                1. 11

                  The person who was an ass about it didn’t work for Microsoft, and was banned from commenting on Microsoft repos for 7 days per the repo’s code of conduct. In addition, a Microsoft employee can’t admit to knowing or not knowing about the trademark on behalf of the company. If they did, legal would have their ass and they’d probably be in trouble. Remember, anything you say can and will be used against you.

                  1. 4

                    If I can say “I didn’t know about that” and that can be construed to mean “Microsoft didn’t know about that,” then that’s bananas. Not every person at a company knows every single thing. If I were speaking on behalf of Microsoft, all I would be saying is “at least one person at Microsoft does not know about the existing Maui project.” And I’m not claiming I will do anything about it, just that I personally will attempt to learn about it by asking someone else at the company.

                    Either way, in this position I would probably ask someone with more context how to respond before firing off something on GitHub. That could be exactly what happened here. So I see your point.

                    1. 5

                      If I can say “I didn’t know about that” and that can be construed to mean “Microsoft didn’t know about that,” then that’s bananas.

                      Yet you feel the need to express in your profile that opinions are your own and not Google’s?

                      1. 2

                        Yet you feel the need to express in your profile that opinions are your own and not Google’s?

                        Signatures to that effect go back to 1980s Usenet at the very least.

                        It’s likely a cargo cult but it’s one with a long history.

                        1. 2

                          On Lobste.rs I’m not constantly on guard about what I say or how I say it. I’m sure I’ve written comments that could be interpreted as an opinion held by Google, or Google employees in general. I really could not tell you if I’ve written “we…” anything or not. I don’t think about it and I don’t care to. But I sure wouldn’t say “we didn’t know about this trademark” when speaking as a Google employee on a Google repo that Google pays me to work on.

                        2. 1

                          It’s not so much about whether they knew about it or not. It’s more about acknowledging that there is a trademark issue. If he had acknowledged the other party’s trademark, they would be admitting to negligence from the get-go, severely limiting their options in court or settlement negotiations. Their best course of action, legally speaking, is to not acknowledge the other party’s existence. It looks pretty dumb from a PR point of view, but legal doesn’t care about our opinion of the company as long as it saves them a couple million from a lawsuit.

                    2. 3

                      Why do you think Microsoft’s projects should be any different from any other project on GitHub? Most projects on GitHub use the platform for all kinds of changes. The fact that their legal and PR departments don’t seem to be aware of GitHub doesn’t mean that the developer who filed the issue is doing it wrong; it means that MS has to catch up with the way software projects work today.

                    1. 11

                      I agree completely with the point of the article. But one thing I don’t understand: why do people have so much trouble building IKEA furniture? How am I surrounded by such incredibly smart and capable people who, when faced with furniture, can’t follow simple directions? It’s bananas.

                      1. 6

                        Yeah, this utterly baffles me. Perhaps it’s a self-reinforcing cycle — building IKEA furniture is said to be difficult, so people find it difficult, so people say it is difficult, so…

                        Was it genuinely difficult at one time, but now they can’t escape this trap?

                        1. 1

                          I also thought it was easy my whole life and always wondered the same thing, but a few years ago, I had a really hard time assembling a TV unit.

                          Looking back, I think the directions were clear, but after a long day at IKEA and late in the evening, all the pieces looked almost identical. So I ended up drilling holes in the wrong place and had to rebuild it a couple of times.

                          1. 1

                            I agree on that point, but I also find the lack of shading to be unhelpful sometimes. It’s easy to install things backwards and only notice in later steps because they weren’t explicit about the directionality of some component regarding holes or texture or whatever.

                        2. 1

                          It’s especially weird in that IKEA must put a lot of effort into making instructions that are as clear as possible. Granted, they are basically without text of any kind, but I’ve found that they’re generally designed to be clear and unambiguous. Where there is symmetry, it’s called out.

                          1. 1

                            It’s definitely the case with most no-name furniture kits I’ve bought. Perhaps the Wayfair effect is more correct?

                            IKEA is definitely a better experience. Less messing with rough edges or screwing things at odd angles. The instructions are just diagrams for easy localization, but they’re easy to understand.

                            1. 1

                              I think you want to build it quickly because you know it should be simple. So you don’t read as carefully as you should, make a mistake, and now spend hours debugging your furniture.

                              Reminds me of something…

                            1. 3

                              If you want to make an HTTP Request from scratch, you must first invent the universe.

                              1. 1

                                Just feel lucky that I didn’t go into how nerves and muscles work :D

                              1. 35

                                I have very mixed feelings about this article. Some parts I agree with:

                                It’s you, the software engineering community, that is responsible for tools like C++ that look as if they were designed for shooting yourself in the foot.

                                There is very little impetus to build tools that are tolerant of non-expert programmers (golang being maybe the most famous semi-recent counterexample) without devolving entirely into simple toys (say, Scratch).

                                Some of you have helped with a first round of code cleanup, which I think is the most constructive attitude you can adopt in the short term. But this is not a sustainable approach for the future.

                                […] always keeping in mind that scientists are not software engineers, and have neither the time nor the motivation to become software engineers.

                                Yep, software engineers pitching in to clean up academic messes after the fact definitely doesn’t work. One of the issues I’ve run into when doing this is that you can totally screw up a refactor in ways that aren’t immediately obvious. Further, honestly, a lot of “best practices” can really hamper the exploratory liminality required to do research spikes and feel out a problem.

                                But then, there’s a lot of disagreement I have too:

                                The scientists who wrote this horrible code most probably had no training in software engineering, and no funding to hire software engineers.

                                We expect people doing serious science to have a basic grasp of mathematics and statistics. When they don’t, we make fun of them (that is, when the peer review system works properly). If you’re doing computational models, you damned well should understand how to use your tools properly. No experimental physicist worth a damn that I’ve known couldn’t solder decently well; likewise, anybody doing science that relies on computers should be expected to know how to program competently and safely.

                                clear message saying “Unless you are willing to train for many years to become a software engineer yourself, this tool is not for you.”

                                Where’s the clear messaging in the academic papers saying “Yo, this is something that I can only reproduce on my Cray Roflcluster with the UT Stampede fork of Python 1.337”? Where’re the warnings “Our university PR department once again misrepresented our research in order to keep sucking at the teat of NSF and donors, please don’t discuss this incredibly subtle work you’re probably gonna misrepresent.” Where’s the disclaimer for “This source code was started 40 years ago in F77 and lugged around by the PI, who is now tenured and doesn’t bother to explain things to his lab anymore because they’re smart and should just get it, and it has been manhandled badly by generations of students who have been under constant pressure to publish results they can’t reproduce using techniques they don’t understand on code they don’t have the freedom to change.”?

                                The core of that research is building and applying the model implemented by the code; the code itself is merely a means to this end.

                                This callous disregard for the artifact that other people will use is alarming. Most folks aren’t going to look at your paper with PDEs and sagely scratch their chins and make policy decisions–they’re going to run your models and try to do something with the results. I don’t think it is reasonable to disavow responsibility for how the work is going to be used in the future if you also rely on tax dollars and/or bloated student tuition to fund your adventures.

                                There’s something deeply wrong with academic research and computing, and this submission just struck me as an attempt to divert attention away from it by harnessing the techlash.

                                1. 18

                                  I’m someone who’s done their (extremely) fair share of programming work in academia, but outside a CS department: I can guarantee that anyone insisting that the solution was simple and that it’s just “they should have hired real software engineers” has had zero exposure to “real software engineers” trying to write simulation software. Or if they had, it was either in exceptional circumstances, or they didn’t actually pay attention to what happens there.

                                  (This is no different to CS, by the way. The reason why you can’t just hire software engineers and expect they’ll be able to understand magnetohydrodynamics (or epidemiology, or whatever else) is the same reason why you can’t just hire electrical engineers or mechanical engineers and expect them to write a Redis clone worth a damn in less than two years – let alone something better.)

                                  As Dijkstra once remarked, the easiest machine applications are the technical/scientific computations. The programming behind a surprising proportion of simulation software is trivial. By the time they’re done with their freshman year, all CS students know enough programming to write a pretty convincing and useful SPICE clone, for example. (Edit: just to be clear, I’m not talking out of my ass here. For two years I’ve picked the short straw and ended up herding CS first-years through it, and I know from experience that two first-year students can code a basic SPICE clone in a week, most of which is spent on the parser). I haven’t read it in detail but from glossing over it, I think none of the techniques, data structures, algorithms and tools significantly exceed a modest second-year CS/Comp Eng curriculum.

                                  Trouble is, most of the domain-specific knowledge required to understand and implement these models far exceeds a CS/Comp Eng curriculum. You think epidemiologists who learned C++ on their own and coded by themselves for 10 years write bad simulation code? Wait ’til you see what software engineers who have had zero exposure to epidemiology can come up with.

                                  “Just enough Python” to write a simple MHD flow simulator is something you can learn in a few afternoons. Just enough electromagnetism to understand how to do that is a four-semester course, and the number of people who can teach themselves that is very low. I know a few, and I know for a fact that most companies, let alone public universities, can’t afford their services.

                                  This isn’t some scholastic exercise. No one hands you a two-page description of an algorithm for simulating how the flu spreads and says hey, can you please turn this mess of pseudocode into C++, I’m not that good at C++ myself. The luckiest case – which is how most commercial-grade simulation software gets written – is that you get an annotated paper and a Matlab implementation from whoever developed the model.

                                  (Edit: if you’re lucky, and you’re not always lucky, that person is not an asshole. But if you think translating Matlab into C++ isn’t fun, wait until you have to translate 4,000 lines of uncommented Matlab from someone who doesn’t like talking to software engineers because they’re not real engineers).

                                  However, by the time that happens, the innovation has already happened (i.e. the model has been developed) months before, sometimes years. If you are expected to produce original results – i.e. if you do research – you don’t get a paper by someone else and a Matlab implementation. You get a stack of 80 or so (to begin with) papers on – I’m guessing, in this case, epidemiology, biochemistry, stochastic processes and public health policies – and you’re expected to come up with something better out of them (and, of course, write the code). Yeah, I’m basically describing how you get a PhD.

                                  1. 7

                                    I can guarantee that anyone insisting that the solution was simple and that it’s just “they should have hired real software engineers” has had zero exposure to “real software engineers” trying to write simulation software.

                                    I totally agree with this. That’s also why my argument is “researchers need to learn to write better code” and not “we should hire software engineers to build their code for them”.

                                  2. 13

                                    …no funding to hire software engineers.

                                    Speaking as a grant-funded software engineer working in an academic research lab, it’s amazing what you can get money for if your PI cares about it and actually writes it into grant applications.

                                    My suspicion, and I have zero tangible evidence for this, just a handful of anecdotal experiences, is that labs outside of computer science are hesitant to hire software engineers. It’s better for the PI’s career to bring in a couple more post-docs or PhD students and expect them to magically become software engineers than to hire a “real” one.

                                    Another interesting problem, at least where I work, is that the pay scale for “software engineer” is below market. I’m some kind of “scientist” on paper because that was the only way they could pay the position enough to attract someone out of industry.

                                    1. 5

                                      Speaking as a grant-funded software engineer working in an academic research lab, it’s amazing what you can get money for if your PI cares about it and actually writes it into grant applications.

                                      Oh, totally agree. I’ve made rent a few times by being a consulting software engineer, and it’s always been a pleasure to work with those PIs. Unfortunately, a lot of PIs just frankly seem to have priorities elsewhere.

                                      I’ve also heard that in the US there’s less of a tradition around that, whereas European institutions are better about it. I’m unsure about this, though.

                                      Also, how to write code that can survive the introduction of tired grad students or energetic undergrads deserves its own consideration.

                                      1. 6

                                        Yeah, “Research Software Engineering” is a pretty big thing in the UK at least… https://society-rse.org.

                                        1. 11

                                          It is (I’m an RSE in Oxford). Within the University’s bizarre economic rituals, it costs a researcher as much to put (the equivalent of) one of us on a project full time as it would to hire a postdoc research assistant, and sometimes less; what they usually get is that time shared across a team of people with various software engineering skills and experiences. Of course they only do that if they know that they have a problem we can help with, and that we exist.

                                          Our problems at the moment are mostly that people are finding out about us faster than we’re growing our capability to help them. I was on a call today for a project that we couldn’t start before January at the earliest, which is often OK in the usual run of research funding rounds, less OK for spin-out and other commercial projects. We have broken the emergency glass for scheduling Covid-19 related projects by preempting other work: I’ve been on one since March, and another was literally a code review and improvement plan like the one the linked project got after it was shared. We run about 3 surgery sessions a week helping researchers understand where to take their software projects; again, that only lands with people who know to ask. But if we told more people they could ask, we’d be swamped.

                                          While we’re all wildly in agreement that this project got a lot of unfair context-free hate from the webshits who would gladly disrupt epidemiology, it’s almost certainly the case that a bunch of astrophysicists somewhere are glad the programming community is looking the other way for a bit.

                                          1. 3

                                            I’m an RSE in Oxford

                                            A lot of UK universities don’t have an RSE career track (I’ve been helping work to get one created at Cambridge). It’s quite difficult to bootstrap. Most academics are funded out of grants. The small subset with tenure are funded by the department taking a cut of all grants to maintain a buffer for when they’re not funded on specific ones. Postdocs are all on fixed-term contracts. This is just about okay if you regard postdoc as a position like an extended internship, which should lead to a (tenured) faculty position but increasingly it’s treated as a long-term career path. RSE, in contrast, does not even have the pretence that it’s a stepping stone to a faculty job. A sustainable RSE position needs a career path, which means you need a mechanism for funding a pool of RSEs between grants (note: universities often have this for lab technicians).

                                            The secondary problem is the salary. We (Microsoft Research Cambridge) pay starting RSEs (straight out of university) more than the UK academic salary scale pays experienced postdocs or lecturers[1]. RSEs generally expect to earn a salary that is comparable to a software engineer and that’s very hard in a university setting where the head of department will be paid less than an experienced software engineer. The last academic project I was on had a few software engineers being paid as part-time postdocs, so that they had time for consulting in the remaining time (a few others we got as contractors, but that was via DARPA money that is a bit more flexible).

                                            The composition of these two is a killer. You need people who are paid more than most academics, who you are paying out of a central pool that’s covered by overhead. You can pay them much less than an industry salary but then you can’t hire experienced ones and you get a lot of turnover.

                                            [1] Note for Americans: Lecturer in British academia is equivalent to somewhere between assistant and associate professor: tenured, but junior.

                                            1. 2

                                              Postdocs are all on fixed-term contracts.

                                              Happy to talk more: what we’ve done is set up a Service Research Facility, which is basically a budget code that researchers can charge grant money against. So they “put a postdoc” on their grant application, then give us the money and get that many FTEs of our time. It also means that we can easily take on commercial consultancy, because you multiply the day rate by the full economic cost factor and charge that to the SRF. A downside is that we have to demonstrate that the SRF is committed to N*FTE salaries at the beginning of each budget year to get our salaries covered by the paymasters (in our case, the CS department), making it harder to be flexible about allocation and side work like software surgeries and teaching. On the plus side, it gives us a way to demonstrate the value of having RSEs while we work to put those longer-term streams in place.

                                              The secondary problem is the salary […] so that they had time for consulting

                                              You’re not wrong :). I started by topping mine up with external commercial consultancy (I’ve been in software engineering much longer than I’ve been in RSE), but managed to get up to a senior postdoc grade so that became unnecessary. I’m still on half what I’ve made elsewhere, of course, but it’s a livable salary.

                                              Universities and adjacent institutions (Diamond Light Source, UKAEA, Met Office/ECMWF all pay more but not “competitive” more) aren’t going to soon be comparable to randomly-selected public companies or VC funded startups in terms of “the package”, and in fact I’d hate to think what changes would be made in the current political climate to achieve that goal. That means being an RSE has to have non-monetary incentives that being a FAANG doesn’t give: I’m here for the intellectual stimulation, not for the most dollars per semicolon.

                                              A sustainable RSE position needs a career path, which means you need a mechanism for funding a pool of RSEs between grants (note: universities often have this for lab technicians).

                                              I’m starting a DPhil (same meaning as PhD, different wording because Oxford) on exactly this topic in October: eliciting the value of RSEs and providing context for hiring, training, evaluating and progressing RSEs. I’ve found in conversations and panel discussions at venues like the RSE conference that some people have a “snobbish” attitude to the comparison with technicians, BTW. I’m not saying it’s accurate or fair, but they see making the software for research as a more academically-valid pursuit than running the machines for research.

                                              1. 2

                                                Thanks, that’s very informative. Let me know if you’re in Cambridge (and pubs are allowed to open again) - I’ll introduce you to some of our RSEs.

                                            2. 2

                                              Seeing as you seem to have experience in the field, from a very high-level view, do the complaints about this project seem valid or not? I understand that one could only make an educated guess considering this is 15K lines, hotly debated, and also a developing situation (the politics… whoo boy!), but I would love to have someone with experience calibrate the needle on the outrage-o-meter somewhat.

                                              1. 1

                                                I haven’t examined the code, which is perhaps a lesson in itself.

                                                1. 1

                                                  As a baseline I put the code through clang’s scan-build and it found 8 code paths where uninitialized variables may affect the model early in the run. It’s possible that not all of them can realistically be triggered (it doesn’t know all the dependencies between pieces of external data), but it’s not a great sign.

                                                  Among other things, that’s a reasonable explanation for why people report seeing different results even with well-defined random seeds. And I wouldn’t count “uninitialized variables” in the class of uniform randomness, so I’d be wary about just averaging it out.

                                              2. 2

                                                If you cannot pay somebody much, give them a fancy title, e.g., “Research Software Engineering”. It’s purely an HR ploy.

                                          2. 6

                                            It’s you, the software engineering community, that is responsible for tools like C++ that look as if they were designed for shooting yourself in the foot.

                                            There is very little impetus to build tools that are tolerant of non-expert programmers (golang being maybe the most famous semi-recent counterexample) without devolving entirely into simple toys (say, Scratch).

                                            I actually agree with the author on this.

                                            Let’s not even pretend that the only alternative to the absolutely mind-boggling engineering and design shit show that is C++ is “devolving entirely into simple toys”.

                                            1. 1

                                              Rust?

                                              1. 1

                                                One option.

                                            2. 4

                                              I think you put it very well. Look: if there’s a hierarchy of importance, I’m happy to put science far ahead of software development. But the fact remains: when it comes to producing scientific results using software, software developers do know a thing or two about how easy it is to fool yourself, and we are rightly horrified at someone handwaving away the lack of tests and input validation with “a non-programmer expert will look at this code and make sure not to hold it wrong.”

                                              I guess in that sense it’s not much different than the rampant misuse of statistics in science, it’s just that software misuse might be currently flying a little below the radar.

                                              1. 4

Exactly. It is the job of the researcher to be aware of the limits of his own ability to implement his model with a particular tool. To badly implement something and then make grandiose claims that the results of said badly implemented model should inform decisions that affect millions is his own fault.

You can’t blame a screwdriver ‘community’ if you use it badly and poke yourself in the eye. Not even the lack of a “do not poke eye with screwdriver” warning label counts as a failure.

                                                1. 1

                                                  This plays out in an interesting way at Google’s Research division. Whatever else you might think about the company, Google software engineers (SWEs) are generally pretty decent. Many of them are interested in ML research projects because they’re all the rage these days. The research teams, of course, just want to do research. But they can get professional SWEs to build their tools for them by letting them feel like they’re part of cutting edge research. So they end up with a mix of early-career SWEs building tools that aren’t inherently all that interesting or challenging but get used to do very interesting and impactful research work and a few more experienced SWEs who want to make the transition into doing research.

                                                1. 1

                                                  Are you sure you want to be handling passwords yourself? Shouldn’t you be using a third-party authentication provider? That way, you run no risk of getting compromised and leaking (reused) passwords.

                                                  1. 11

                                                    Handling passwords is really not that complicated. There are libraries around to do it, and quite frankly, it’s not magic. Just use bcrypt or something similar.
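To illustrate the point that it’s a couple of function calls, here is a minimal sketch using Python’s standard-library scrypt (bcrypt itself is a third-party package, so stdlib scrypt stands in here); the function names and the salt-plus-digest storage format are illustrative choices, not a prescribed API:

```python
import hashlib
import secrets

def hash_password(password: str) -> str:
    # Fresh random salt per password; stored alongside the digest so
    # verification needs nothing but the stored string.
    salt = secrets.token_bytes(16)
    digest = hashlib.scrypt(password.encode(), salt=salt,
                            n=2**14, r=8, p=1, maxmem=64 * 1024 * 1024)
    return salt.hex() + "$" + digest.hex()

def verify_password(password: str, stored: str) -> bool:
    salt_hex, digest_hex = stored.split("$")
    candidate = hashlib.scrypt(password.encode(), salt=bytes.fromhex(salt_hex),
                               n=2**14, r=8, p=1, maxmem=64 * 1024 * 1024)
    # Constant-time comparison to avoid leaking information via timing.
    return secrets.compare_digest(candidate, bytes.fromhex(digest_hex))
```

With bcrypt or argon2 libraries the shape is the same: one call to hash, one call to verify, with the salt embedded in the stored value.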

                                                    1. 2

                                                      I would note that it’s not so much just the handling of passwords, but getting all of the workflows for authentication and session management right too. That’s why I like libraries like Devise for Rails that add the full set of workflows and DB columns already using all best-practices to your application, with appropriate hooks for customization as needed.

                                                      1. 2

                                                        It’s not only the password in the database, but also the password in transit. For example, Twitter managed to log passwords:

                                                        Due to a bug, passwords were written to an internal log before completing the hashing process.

                                                        The risk remains, it’s just more subtle and in places you might not immediately think of instead.

                                                        1. 3

                                                          If anything that’s an argument against “just let someone else do it”.

                                                          You can review your own systems, you can organise an audit for them.

How do you plan to review Twitter’s processes to ensure they do it securely, given that they already have precedent for screwing the pooch in this domain?

                                                          1. 1

                                                            It’s easier in smaller systems.

                                                            1. 1

                                                              Well, there’s a risk with anything you do when dealing with secrets; you can leak tokens or whatnot when using external services too.

                                                              As I mentioned in another comment, the self-host use case makes “just use an external service” a lot harder. It’s not impossible, but I went out of my way to make self-hosting as easy as possible; this is why it can use both SQLite and PostgreSQL for example, so you don’t need to set up a PostgreSQL server.

                                                          2. 2

                                                            you run no risk of getting compromised and leaking (reused) passwords

                                                            You still have to handle authentication correctly, and sometimes having an external system to reason about can expose other bugs in your system.

                                                            I recall wiring up Google SSO on an app a few years ago and thinking configuring google to only allow people through who were on our domain was sufficient to stop anyone being able to sign in with a google account. Turns out in certain situations you could authenticate to that part of the app using a google account that wasn’t in our domain (we also had Google SSO for anyone in the same application, albeit at a different path.) Ended up having to check the domain of the user before we accepted their authentication from google, even though google was telling us they’d authenticated successfully as part of our domain.

                                                            1. 1

                                                              If password hashing is a hard task for your project, I’d argue that’s because your language of choice is severely lacking. In most languages or libraries (where it isn’t part of the stdlib) it should be one function call to hash a new password, or a different single function call to compare an existing hash to a provided password.

                                                              This idea that password hashing is hard and thus “we should use X service for auth” has never made any sense to me, and I don’t quite understand why it persists.

                                                              I have never written a line of Go in my life, but it took me longer to find out that the author’s project is written in Go, than it did for me to find a standard Go module providing bcrypt password hashing and comparison.

                                                              1. 1

                                                                And salting! So many of these libraries store the salt as part of the hash, making comparison easy but breaking hard.

                                                                1. 1

                                                                  I would consider it a bug for a library/function to (a) require the developer to provide the salt, or (b) not include the salt in the resulting string.

                                                              2. 1

                                                                Problem is what provider do you choose to use? Do you just go and “support everyone”, or do you choose one that you hope all your users use, and that you are in support of (I don’t support nor have accounts at Facebook, Twitter, and Google), which narrows it down quite a bit. And what about those potential users that aren’t using your chosen platform(s)? Are you gonna provide password-based login as an alternative?

                                                              1. 7

                                                                So, uhh…. what now? Shut down the Internet until this is fixed? Disconnect your wifi router? Never log on to another web site again?

                                                                1. 30

                                                                  It doesn’t matter at all unless you trust that certificate, or whoever published it. It’s just a self-signed certificate that is valid for any domain. If you don’t trust it, then you don’t trust it, and it will be invalid for any use where you come across it.

                                                                  1. 5

                                                                    Gotcha; I missed the critical detail that it’s self-signed. So to use this in an attack you’d have to trick someone into trusting the cert for some trivial site first.

                                                                    1. 3

                                                                      Exactly. And then they would have to serve some content with that cert that the target would access. There’s essentially no practical way this could be used in an attack except for a man-in-the-middle attack, but you would still need to get the target to trust the certificate first.

                                                                      1. 3

                                                                        Trusting the cert is easy with technical people. I link you guys to my site, with a self signed cert like this. You accept it because you want to see my tech content.

                                                                        This is a huge issue.

                                                                        1. 4

                                                                          How is this different from using any other self-signed certificate?

                                                                          1. 4

                                                                            Here’s what I think @indirection is getting at:

                                                                            1. Your connection to the net is MITMed.
                                                                            2. You visit sometechgeek.com, which is serving this wildcard certificate
                                                                            3. You think “weird, crazy tech bloggers can never take proper care of their servers” and click through the SSL warning
                                                                            4. Your browser trusts the wildcard cert. Next, you visit yourbank.com
                                                                            5. Since the wildcard cert is trusted by your browser, the holder of the key for that cert can intercept your communication with yourbank.com

However, I would hope SSL overrides are hostname-specific to prevent this type of attack…

                                                                            1. 2

                                                                              Yep that’s exactly it! Thank you.

                                                                      2. 2

                                                                        I missed the critical detail that it’s self-signed

                                                                        You didn’t quite miss it, it’s been misleadingly described by the submitter — they never explicitly mention that this is merely a self-signed certificate, neither in the title here, nor in the GitHub repository. To the contrary, “tested working in Chrome, Firefox” is a false statement, because this self-signed certificate won’t work in either (because, self-signed, duh).

                                                                        1. 2

                                                                          I never say that it’s signed by a CA either 😅 I wasn’t trying to mislead folks, but some seem to have interpreted “SSL certificate” as meaning “CA-issued SSL certificate”. It does work in Chrome and Firefox insofar as it is correctly matched against domain names and is valid for all of them.

                                                                    2. 11

                                                                      This isn’t signed by a trusted CA, so this specific cert can’t intercept all your traffic. However, all it takes is one bad CA to issue a cert like this and… yeah, shut down the Internet.

                                                                      1. 4

For any CA that has a death wish, sure!

                                                                        1. 8

                                                                          Or any CA operating under a hostile government, or any CA that’s been hacked. See DigiNotar for just one example of a CA that has issued malicious wildcard certs.

                                                                          1. 3

And as you can see, it was removed from all browsers’ trust stores and soon declared bankrupt (hence, death wish). And that wasn’t even deliberate. I can’t see a CA willfully destroying their own business. Yes, it’s a huge problem if this happens, though, and isn’t announced to the public, as was the case in the article.

                                                                      2. 2

                                                                        Normally, certificates are doing three separate things here:

                                                                        1. Ensuring nobody can read your communications.
                                                                        2. Ensuring nobody can modify your communications.
                                                                        3. Ensuring you’re communicating with the entity which validly owns the domain.

                                                                        Most people who are against HTTPS ignore the second point by banging on about how nobody’s reading your webpages and nobody cares, when ISPs have, historically, been quite happy to inject ads into webpages, which HTTPS prevents. This strikes at the third point… except that it doesn’t. It’s self-signed, which defeats the whole mechanism by which you use a certificate to ensure you’re communicating with the entity you think you are. The weird wildcard stuff doesn’t make it any less secure on that front, since anyone can make their own self-signed certificate without wildcards and it would be just as insecure.

                                                                        If you could get a CA to sign this, it would be dangerous indeed, but CAs have signed bad certificates before. Again, a certificate can be bad and can get signed by an incompetent or corrupt CA without any wildcards.

                                                                        So this is a neat trick. I’m not sure it demonstrates any weakness which didn’t exist already.

                                                                      1. 0

                                                                        This sure seems weird. wtf is going on with python lately?

                                                                        First twisted style async… which was like.. ok. I’m not a fan but fine.
                                                                        Then walrus operator, and now string eval’ing subinterpreters which … don’t really seem to do much other than break lots of modules? o.O

                                                                        1. 2

                                                                          A couple of recent features that have made their way into “Python” are features that already existed, in the sense that lower-level C code using the CPython APIs could already do these things, and are just bubbling those up to the level where actual Python code can use them too.

For example, positional-only arguments were a feature the C API already supported and that some of Python’s C-implemented built-ins already used; now they’re supported in pure-Python argument signatures as well. Subinterpreters are another thing that already existed in the C API, and they are now on track to also be exposed for creation and manipulation at the level of pure Python code.
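As a concrete sketch of that first example: the `/` marker from PEP 570 (Python 3.8+) lets pure-Python functions declare positional-only parameters, mirroring what C-implemented built-ins like `len()` have always done (the function below is just an illustration):

```python
def clamp(value, lo, hi, /):
    # Everything before the "/" is positional-only: callers cannot
    # pass these by keyword, just like the C-implemented built-ins.
    return max(lo, min(value, hi))

clamp(15, 0, 10)                   # fine, returns 10
# clamp(value=15, lo=0, hi=10)     # TypeError: positional-only arguments
```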

                                                                          1. 1

                                                                            I still can’t quite wrap my head around the async stuff. Whenever I’d have to use it I end up with just tossing awaits around when the interpreter complains… This is probably more of my own failing than the implementation of it, but it’s a bit of a curveball to how “normal” Python was written.

                                                                            I don’t mind the walrus operator to be honest. I know it only saves a line, but it implicitly tells the reader that this variable is only relevant in the following code block. If it was defined outside the block then you would have to make sure it’s not used anywhere else if you want to modify things.
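A small sketch of that reading of the walrus operator (the regex and variable names are just for illustration): binding and testing in one expression signals that the name is only meaningful inside the following block.

```python
import re

line = "error: disk full"
# ":=" binds m and tests it in the same expression, so a reader knows
# m is only relevant inside this if-block.
if (m := re.match(r"error: (.*)", line)):
    detail = m.group(1)
```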

                                                                            The subinterpreter though… I don’t know. Since it’s evaling strings I just see it as a massive pain in the ass as you would have to pass the code in as text, so strings inside that code will be annoying to deal with. On top of that you might get all the security issues of eval as well, and all this without actually getting it done faster since the GIL is still there… It honestly just seems like a shittier version of threads until they deal with the GIL.

                                                                            1. 1

For a more straightforward approach, see PyPy’s features borrowed from Stackless Python. Both Stackless Python and Go were inspired by Limbo’s approach to concurrency.

                                                                              1. 1

async/await makes sense to me, but that’s because I tried writing a generator-based coroutine library using yield from and send (actually I didn’t have yield from, being on Py 2 at the time, and had to inject every layer with a complex macro that would call generator.send to propagate exceptions/results up the stack). If you try doing that, await becomes “suspend this coroutine until the thing I’m awaiting is done” while handling all the exceptions, etc. that come from an inversion of control flow.

                                                                                As to sub interpreters, I think the real pain point is that there’s no easy way to:

                                                                                • create or memmove a complex python object in/to/from a Shared Memory mapping
                                                                                • attach/detach from the GC

                                                                                If that existed, it would be much easier to do multiprocess concurrency send/recv semantics.

                                                                                I would absolutely love something like:

                                                                                Sender process (assume both have the same exact definition of user class A, B)

fh = mmap.mmap("/dev/shm/myshared")
b = A("p")  # currently in the private process heap

with gc.foreign_heap(fh) as roots:
    a = A("A", B(1), {"1": 2})
    a.field = 6
    roots.move(b)
    gc_refs: Tuple[int] = roots.detach(a, b)
# a is now deleted or functionally invalid/cleared
pipe.send(pickle.dumps(gc_refs))
                                                                                

                                                                                Receiver:

fh = mmap.mmap("/dev/shm/myshared")
refs = pickle.loads(pipe.recv())
with gc.foreign_heap(fh) as roots:
    a, b = roots.attach(refs)
    assert a.field == 6
    assert b.foo == "p"
                                                                                

At least something like that would allow me to build nice, guarded Channel send/receive semantics without the immense overhead of pickle (I used pickle in this example for brevity — in reality, it would be simple enough to just send the gc root integers over the pipe by struct packing, or getting lazy and cffi-casting to char and back again).

                                                                                1. 1

                                                                                  If you try doing that, await becomes “suspend this coroutine until the thing I’m awaiting is done” while handling all the exceptions, etc that come from an inversion of control flow.

                                                                                  Oh absolutely. I understand the concept, but it seems nearly random when it makes a library incompatible or when you have to use await or wrap something around a function. I’m guessing it’s because I haven’t really spent enough time to understand what’s going on in the background and what limitations that arise from it. It just feels a bit weird when you’re not familiar with it.
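For what it’s worth, the “suspend this coroutine until the thing I’m awaiting is done” model from the parent comment can be seen in a tiny self-contained example (the function names and delays are made up for illustration):

```python
import asyncio

async def fetch(delay, value):
    # "await" suspends this coroutine here; the event loop is free to
    # run other coroutines until the sleep completes.
    await asyncio.sleep(delay)
    return value

async def main():
    # gather() runs both coroutines concurrently on a single thread
    # and returns their results in argument order.
    a, b = await asyncio.gather(fetch(0.01, "a"), fetch(0.02, "b"))
    return a + b

result = asyncio.run(main())
```

The incompatibility with ordinary libraries follows from the same model: a blocking call inside a coroutine never suspends, so it stalls the whole event loop, which is why libraries need async-aware variants.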

                                                                            1. 4

                                                                              mypackage/__init__.py preferred to src/mypackage/__init__.py

                                                                              I used to feel this way, and although I knew plenty of people had written reasonable, practical arguments for preferring the src/ version, I dismissed them.

                                                                              Then I was bitten by exactly the thing those people had warned me about, and now I evangelize the src/ version for repository layout.

                                                                              The problem with putting the mypackage/ folder top-level is also the thing that makes it convenient and tempting: if you’re working in the top-level of your repository, mypackage will be on your Python import path automatically, and you won’t have to do anything fancy to get import mypackage to work.

                                                                              This really is a problem, because it means that when you do things like invoke your test suite, you’re implicitly testing against the directory and file structure of the repository, which – for something you intend to distribute to others – is not necessarily the directory and file structure of the artifact packaging tools will produce. So you can have something that works when running tests from a checkout of the repository, and breaks when installed from a packaged artifact.

                                                                              Using src/, or any other conventional name for a top-level directory that contains your package’s source code, forces you to deal with getting mypackage into an importable state to run tests, at which point the natural thing to do is have your test run build and install the package. Yes, this can be a bit more work up-front than just putting the package at top-level. And I used to think that was unnecessary work, right up until the day I discovered one of my projects was in a broken state and the test suite didn’t reveal it, because the test suite was not confirming that my code could actually be packaged and installed. I learned the hard way. Learn from my example, so you don’t have to learn the hard way, too!

                                                                              1. 1

                                                                                I’m intrigued. Do you have any examples on how you have solved the building and installing for test runs in a repeatable way that doesn’t bleed out into the global interpreter? Do you just install the package into the same venv as you are developing in?

                                                                                1. 1

Automation tools like tox use virtualenvs by default, and if you have all your packaging config files set up correctly, they will also automatically build the package as the first step and install it into each virtualenv.

                                                                                  I’m not sure what your use case is, but I’m a major user of tox because I maintain things that need to work across a whole matrix of versions of Python, Django, etc., and tox is really good at managing that.
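For a concrete picture, a minimal tox config along these lines might look like the following (the env names and Python versions are illustrative, not prescriptions):

```ini
# tox.ini — with a src/ layout, tox builds an sdist from your packaging
# config and installs it into each virtualenv, so the test suite imports
# the *installed* package rather than the repository checkout.
[tox]
envlist = py310, py311

[testenv]
deps = pytest
commands = pytest tests/
```

Because `src/` is never on the import path, a test run that passes here has also exercised the build-and-install step.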

                                                                              1. 14

I’m as disappointed in the lack of e-ink ubiquity as I am in the fact that we never got the flying noodle-bars promised by science fiction. I want walls of e-ink. That stuff should be everywhere.

                                                                                1. 6

                                                                                  I agree. There was a post here a month or so ago where someone had made a pretty large e-ink display to hang on his wall. The entire thing cost around 4k USD if I remember correctly. It’s stupid expensive for something that should be ubiquitous. Same goes for OLED displays. I really want to have rooms plastered in them to have cool effects on the walls. Patterns and videos and whatnot.

                                                                                  And while we’re at it…. Where’s my flying cars?!

                                                                                  1. 1

                                                                                    We do have hover bikes to be fair

                                                                                  2. 2

Anyone have an idea on what the barriers for e-ink are? Is it the low refresh rate preventing it from wider use and subsequent economies of scale? Or is there some other issue?

                                                                                    1. 9

reMarkable made an e-ink drawing/reading tablet with amazing responsiveness at a fair price. I consider them a fairly small company, so it really is possible to make a widespread product based on e-ink.

                                                                                      1. 7

                                                                                        Mostly patents I think. Those that can afford to license them are mostly interested in selling commercial signage. It’s definitely not the refresh rate, the use-cases where refresh rates really matter are always going to be better suited to emissive displays, e-ink has value because it keeps state.

                                                                                        1. 4

IIRC there is only one company that really designs and makes e-ink panels, which might be a problem…

                                                                                          1. 4

+1. E Ink (the company) appears to have a monopoly on the technology via patents, though that could change in the future if competition authorities decide that the patent is harmful to consumer welfare (see this link).

                                                                                          2. 3

                                                                                            That’s the main problem, yeah. It’s lovely for things which don’t update very often but you have to think very carefully about your UI to make anything work.

                                                                                            1. 2

It’s certainly possible, as reMarkable showed: e-ink has a partial-refresh feature, so if you are clever enough you don’t have to do a full panel refresh. It certainly costs more effort, but e-readers have already proven that it can work.

                                                                                        1. 10

                                                                                          It’s a bit jarring to me that Pythonistas are so… authoritarian in their views. Maybe as a consequence of “There’s Only One Way To Do It” mentality, but anything remotely out of the ordinary seems to scare them into blogging about how everything else sucks…?

                                                                                          Languages and their communities are made up of people, and people are flawed so yes projects will have flaws. So does Python! There’s no standardized way of interacting with a language project, and forums will be all over the place, people on them will be cursed with a number of vices, but eventually you make it out with more knowledge.

                                                                                          How do I install it? The docs say brew install, but I’m on Windows.

                                                                                          You’ll have a lot of problems outside of Python/C++/C#/Java if you’re on Windows.

                                                                                          How do I read from a file? How do I parse JSON? How do I pull environmental variables?

                                                                                          Read the docs?

                                                                                          How am I supposed to be writing this? Do I download an IDE? Is there a Vim plugin?

                                                                                          Use whatever - there’s no single right way you’re “supposed to be writing this”.

                                                                                          What are the language quirks that will cost me an hour to discover?

                                                                                          Everyone gets tripped up on different things…? And even if not, it won’t be a wasted hour because a language feature (either technical, or of design) is behind the quirk, and you’ll be better off knowing it?

                                                                                          I could go on, but the questions are too lazy. Languages are not easy.

                                                                                          Screw this, I’m going back to Python.

                                                                                          Oh well.

                                                                                          1. 24

                                                                                            I think you are misunderstanding his point. I don’t think he’s using Python as the gold standard on how to do things (because it sure as hell isn’t, even though I love it). He’s using Python as a stand-in for “insert familiar preferred language”.

                                                                                            That list is a pretty decent summary of what a tutorial or reference for new programmers should contain. Go to any unfamiliar language, check out their tutorial, and see how many of these points they check off. I’m sure quite a few of the more important points will be left unchecked. You are probably capable of finding out those things yourself, considering the criticism you are levelling at the post, but imagine you are very new to programming: what even is JSON? Why wouldn’t brew install work on Windows? Environmental what? Do I use Word or Notepad to write the code? Why can’t I do if a > b > c?

                                                                                            Being familiar with another language does not really mean you are that proficient with it either. It just means you have managed to start coding in it and your programs mostly function when you start them up. Pretending everyone has 10 years of experience with C, Rust, JS, Java, Python and insert your flavor of functional language is absurd and elitist.
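That `a > b > c` question is a real one: comparison chaining is a spot where languages genuinely diverge, and the “obvious” reading only holds in some of them. A quick Python sketch (my own illustration, not from the post) of the two behaviors:

```python
# Python chains comparisons: `a > b > c` means `a > b and b > c`.
# C-family languages instead evaluate left to right, so `a > b > c`
# compares the boolean result of `a > b` (0 or 1) against `c`.
a, b, c = 5, 3, 1

print(a > b > c)    # True: 5 > 3 and 3 > 1
print((a > b) > c)  # False: True (i.e. 1) is not greater than 1
```

So a beginner coming from C would read the same expression and predict a different answer, which is exactly the kind of hour-costing quirk the list is about.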

                                                                                            1. 17

                                                                                              How do I read from a file? How do I parse JSON? How do I pull environmental variables?

                                                                                              Read the docs?

                                                                                              The first time I tried to use Swift, I ran into a bunch of these issues. Swift is a “app developer” language, at least as documented by Apple, so you don’t read files from a path, you read them from a bundle. What’s a bundle? Welcome to Foundation! Don’t know what that is? Down the rabbit hole of Apple specific APIs that don’t map to other languages we go.

                                                                                              1. 3

                                                                                                Yeah, it’s one thing to just ‘read the docs’ when you’re asking simple questions like that and the language you’re using is Yet Another System Call Wrapping Language With A Basic Standard Library And FFI, like Python, Ruby, Node.js, Rust, etc. It’s quite a different issue when you’re learning a new language but ‘new language’ is code for ‘an entirely new set of APIs, things you might want to do, etc.’ With Swift you’re not just learning a fairly mundane and simple language but also an enormous API surface, UI paradigm, set of app development conventions, operating system, and so on.

                                                                                                1. 1

                                                                                                  Pretty sure the docs for Swift cover all that.

                                                                                                2. 0

                                                                                                  There’s a difference between Swift and the APIs you call with it. You don’t need bundles to work with files:

                                                                                                  import Foundation

                                                                                                  let file = "file.txt"   // the file we will write to and read from
                                                                                                  let text = "some text"  // sample contents

                                                                                                  if let dir = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask).first {
                                                                                                      let fileURL = dir.appendingPathComponent(file)

                                                                                                      // writing
                                                                                                      do {
                                                                                                          try text.write(to: fileURL, atomically: false, encoding: .utf8)
                                                                                                      } catch {
                                                                                                          // error handling here
                                                                                                      }

                                                                                                      // reading
                                                                                                      do {
                                                                                                          let text2 = try String(contentsOf: fileURL, encoding: .utf8)
                                                                                                      } catch {
                                                                                                          // error handling here
                                                                                                      }
                                                                                                  }

                                                                                                  (from StackOverflow)

                                                                                                3. 9

                                                                                                  To me this post sounded like a bunch of made up reasons to avoid learning a new language. Yes, some languages will have better documentation than others, some languages will have better support for $OS than others, some languages will have better editor support for your favourite editor. But guess what, that’s all because someone put in the effort to make it so. If you believe for example that the editor support is not up to snuff, you can improve it for yourself and everyone else after you.

                                                                                                  And yes, there will be “that one package everyone uses” because it’s better than the others, but why should there be an official statement about that anywhere? This is basically a cultural thing. Also, you can’t expect to completely become a native in just a few days. Besides, what’s best today might not be best tomorrow (think e.g. Requests, which many people consider the de facto Python library for HTTP, but with the introduction of async/await in the language, it’s no longer so clear that Requests is the way forward).

                                                                                                  What do you do when you want to become more embedded in a culture? You make some friends and ask people to explain the local customs to you. Or maybe you buy a book about the culture and learn it from that. But mostly, you learn it through osmosis, by spending time in there.

                                                                                                  1. 7

                                                                                                    I suspect the post is more of a collection of common objections and stumbling blocks for people who might want to try out a new language. Any one of them might be the thing that turns away a single person, and as they pile up, fewer people are willing to jump over the hurdles.

                                                                                                    I don’t think anyone expects to be “native” in a few days, but I know for me, if I can’t find a way to get a foothold in a language or framework within 4-6 hours, my interest moves onward. A foothold here is a basic, but useful (ideally) program that I can use as a base for building greater understanding over time. (Useful being a bit subjective). There are some exceptions to this, especially at $WORK, but yeah.

                                                                                                  2. 8

                                                                                                    I don’t think this post is Python-specific. I think everything makes sense even if you substitute “Python” for “$MY_LANGUAGE_OF_CHOICE”

                                                                                                    1. 2

                                                                                                      My point was that looking for the one specific way you’re “supposed to be writing this” is a sentiment I get from a lot of Python coders.

                                                                                                      1. 2

                                                                                                        Is that so bad though? “You could do X in 10 different ways” would tend to confuse beginners (arguably) even more.

                                                                                                        1. 2

                                                                                                          Yes, it is bad for a general purpose language to have one specific way of doing things. I have enough of a hard time having to always work with the same cloud provider (AWS), don’t tell me my code doesn’t conform to some standard if it achieves the same result. Coding is a creative endeavor, not paint by numbers.

                                                                                                          For me, it’s an excuse to only support code written a certain way, and screw you for thinking differently.

                                                                                                          1. 0

                                                                                                            Yes, it is bad for a general purpose language to have one specific way of doing things.

                                                                                                            In the language camp that you’ve spent the most time in, that advice may apply. But claiming anything to be universally true (or false) is taking things a bit too far.

                                                                                                            For instance, Ruby folks are happy having 10 different ways of doing the same thing, and Python folks are happy having one single way. Different people feel productive in different ways. That doesn’t have to mean that one is better than the other one.

                                                                                                            1. 1

                                                                                                              claiming anything to be universally true (or false) is taking things a bit too far.

                                                                                                              We’re in agreement here. Python coders say there’s only one right way to do things, I reject that because different people feel productive in different ways.

                                                                                                              1. 0

                                                                                                                Python coders say there’s only one right way to do things

                                                                                                                … in the Python world

                                                                                                    2. 11

                                                                                                      It’s a bit jarring to me that Pythonistas are so… authoritarian in their views. Maybe as a consequence of “There’s Only One Way To Do It” mentality, but anything remotely out of the ordinary seems to scare them into blogging about how everything else sucks…?

                                                                                                      You’re missing the point. Python isn’t some magical language that has all of these fixed. In particular, the packaging situation in Python is an utter mess. The point is that learning a language means learning the ecosystem and the accidental complexity bundled with using a language in the real world. And that’s hard. Someone switching to Python from Java would have these exact same problems.

                                                                                                      You’ll have a lot of problems outside of Python/C++/C#/Java if you’re on Windows.

                                                                                                      In the 2018 SO survey, half of all developers said they used Windows. Dismissing them out of hand is a great way to show just how little you care about adoption.

                                                                                                      I could go on, but the questions are too lazy. Languages are not easy.

                                                                                                      Many languages are essentially complex. Most are far more accidentally complex, making it artificially hard for beginners to start using them.

                                                                                                      Perhaps this should be another bullet point under community: “Does the community think it’s the beginner’s fault for struggling?”

                                                                                                      1. 3

                                                                                                        I’ll stand as a Windows developer who has used quite a few languages on Windows outside of that group. Support varies, for sure.

                                                                                                        1. 1

                                                                                                          In the 2018 SO survey, half of all developers said they used windows. Dismissing them out of hand is a great way to show just how little you care about adoption.

                                                                                                          I didn’t dismiss them. I just pointed out that the large majority of them are using Python/C++/C#/Java. I guess I could have added PHP to the list.

                                                                                                          Perhaps this should be another bullet point under community: “Does the community think its the beginner’s fault for struggling?”

                                                                                                          There’s struggling and there’s “I expect every problem to be already fixed for beginners, so I don’t have to struggle.” Struggling while learning a new language is always going to happen. To dismiss it with “Screw this, I’m going back to my blanket” is unrealistic.

                                                                                                          1. 11

                                                                                                            There’s struggling and there’s “I expect every problem to be already fixed for beginners, so I don’t have to struggle.” Struggling while learning a new language is always going to happen. To dismiss it with “Screw this, I’m going back to my blanket” is unrealistic.

                                                                                                            Dude, my job is improving accessibility for formal verification languages. Look how many of the other commenters are talking about the same pain points. These are not “I want my blanket” problems, and you thinking they are is a sign you haven’t had to go through this in a very long time.

                                                                                                            1. 0

                                                                                                              Dude, my job is a bit immaterial to the point. And I’ve been learning a language that is still in alpha and makes me file Github issues when I use it because it explodes left and right. I don’t say “screw this, I’m going back to $COMFORTABLE_LANGUAGE_I_KNOW” because I realize a) it has future potential and b) creating languages and growing them, along with a community around it, is no easy feat. And it doesn’t have to be for me to consider learning said language worthwhile. The world doesn’t revolve around my needs (or my job). And plenty of other commenters see your post as whining, too.

                                                                                                              1. 7

                                                                                                                Dude, my job is a bit immaterial to the point.

                                                                                                                The point of that statement is that I’m not demanding anything radical, and I’m often the person who’s fixing these issues for formal verification languages. I’m not a no-stakes observer complaining about languages being hard, I’m talking about actual barriers to learning and adoption.

                                                                                                                The world doesn’t revolve around my needs (or my job). And plenty of other commenters see your post as whining, too.

                                                                                                                Okay, I don’t think this conversation is going to go anywhere. If you think these very real issues are “whining” or a “blanket”, then we don’t have any common ground.

                                                                                                                1. 0

                                                                                                                  If your point is that languages have to:

                                                                                                                  • Explain how to install them for every operating system available
                                                                                                                  • Tell you how you should be “writing” with it
                                                                                                                  • Tell you all the quirks that might cost you (personally) an hour or more to discover
                                                                                                                  • How the help is organized (not just where it is)
                                                                                                                  • Where to look for your specific problem X they don’t know about (docs, FAQ, community, Google)
                                                                                                                  • Teach you how to debug with it
                                                                                                                  • How to do unit testing
                                                                                                                  • How to build, package, manage your environment
                                                                                                                  • Package management
                                                                                                                  • Where the language community is, who the abusers are, tell you about the high-profile rivalries, all the in-jokes
                                                                                                                  • ….

                                                                                                                  for you to figure them out, or else you say “screw this” and go back to Python, then yeah, I don’t think we have common ground.

                                                                                                      1. 2

                                                                                                        Question: why is the binary representation “the true data” and not yet another representation in a (probably infinite?) set of possible representations? There are lots of neat properties afforded by representing an IP address as a 32-bit integer, for example, but those don’t seem like they elevate it beyond the category of “representations”.

                                                                                                        1. 1

                                                                                                          It is a representation, but it is also the way the computer works with it. It’s kind of the same reason we work in decimal when we have to do math manually: it’s the way we are trained to think, so that’s the way we are used to doing the math. You could train yourself to do math in hexadecimal, ternary, base64 or whatever other base system, and it would just be another representation and completely the same. The only difference between us using a new mental model and a computer using the binary system is that, as far as I know, only the computer’s binary model maps 1-to-1 to the physical representation of the math.

                                                                                                          Binary is literally the exact same way the computer thinks, which is why it’s useful to use as a base to understand what’s going on under the hood.

                                                                                                          1. 1

                                                                                                            I think you’re fixating on the “binary” aspect when I meant to highlight the “numerical” aspect. Why is number-as-binary truer than text-as-binary? Everything that runs on computers is implicitly “-as-binary”, so I suppose we can drop that suffix. Why is the numerical representation (and specifically the fixed-width numerical representation) the “true data”? I understand that there are useful properties, but the author seems to be driving at a more categorical distinction. And I guess perhaps a deeper mathematical/philosophical question might be “what, if anything, does ‘true data’ mean?”.

                                                                                                          2. 1

                                                                                                            No, the binary representation is still just a representation of the data. One can represent an IP address as radio frequencies, flag messages, ternary voltages, etc. Still encoding the same data, but in different ways. There is no “true” representation, though there can be a canonical one.
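To make the interchangeability concrete (my own sketch, not from the thread): the dotted-quad string and the 32-bit integer are just two encodings of the same IPv4 address, and the round trip between them loses nothing:

```python
import socket
import struct

# The same IPv4 address in two interchangeable representations:
# a human-readable dotted-quad string and a 32-bit unsigned integer.
addr = "192.168.1.1"

# String -> 32-bit integer (network byte order).
packed = socket.inet_aton(addr)          # 4 raw bytes
as_int = struct.unpack("!I", packed)[0]  # 3232235777

# Integer -> string: the round trip recovers the original exactly.
back = socket.inet_ntoa(struct.pack("!I", as_int))

print(as_int)  # 3232235777
print(back)    # 192.168.1.1
```

Neither form is more “true”; the integer is simply the canonical one for arithmetic like subnet masking.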

                                                                                                          1. 6

                                                                                                            I think it’s brave that the author tried this out seriously like they did. PowerShell is an excellent shell on Windows, and I’d believe it would be on UNIX as well when the kinks get ironed out with the migration, but this is still early days for it.

                                                                                                            I’d love it if the author could explain what issues they had with PSReadLine. The other issues seem to be either known, or just a result of things being new and not highly adopted yet. Most PowerShell modules around are designed to work on PowerShell v5 or earlier, which was before the PowerShell Core version. There are bound to be some teething issues here until the Core version becomes more widespread.

                                                                                                            1. 10

                                                                                                              This is super cool from a hardware perspective, but I would find it wildly stressful to have the day’s news plastered on my wall all day.

                                                                                                              1. 7

                                                                                                                yeah, I can see that.

                                                                                                but our phones do that anyway, and a static front page with a few stories wouldn’t be as bad (I think).

                                                                                                                Also that display is ~$1500 :/

                                                                                                                1. 3

                                                                                                                  Yeah, I think e-ink is super cool, being able to use it in full sunlight is fun and the battery life is pretty cool too… but the cost is just yikes.

                                                                                                  I did get myself a used Amazon Kindle (the older model with physical buttons to turn the page; why anyone would actually want to chafe their fingers swiping is beyond me) and it is actually pretty decent. But I wouldn’t mind being able to use one as like a Unix terminal too… just the price is yikes.

                                                                                                                  1. 1

                                                                                                                    You don’t swipe on the newer models. You just touch the screen on the left or right side to turn the page. It’s honestly a very nice experience.

                                                                                                                  2. 1

                                                                                                    The display is $1500, and they won’t sell it to you unless you’ve got a certain amount of clout. I’m eagerly waiting for someone to steal their manufacturing process and eat their lunch.

                                                                                                                    1. 1

                                                                                                      And you need the v5 board (mandatory) to use it, which is another $500.

                                                                                                                    2. 3

                                                                                                      So the front page of a newspaper is designed to sell you the paper. The idea is you see a story you actually want to read when it is sitting on the stand and then buy it to read more on page A-5 or whatever.

                                                                                                                      I guess it might be an amusing art display but saying “the best user interface is none” strikes me as silly. If you actually see anything you care about, you are gonna want to read the whole article but there’s no way to actually do that without a UI of some sort!

                                                                                                                      1. 2

                                                                                                                        It would be cool to have a remote control that runs on your computer or phone (or even better, a dedicated actual handheld remote control with only two buttons: forward and backward). That way, the display can stay completely void of any buttons, which would ruin the aesthetic effect. It already has wifi, so it’s connected.

                                                                                                                        Also, does anyone find this really depressing?

                                                                                                                        E-Ink’s NDA prevents me from sharing the source code, but you get the idea.

                                                                                                                      2. 2

                                                                                                                        Cool idea: do this, but instead of displaying today’s news, display the front page from this day 50 years ago.

                                                                                                                        1. 2

                                                                                                                          I would too, but I would love it if I could have a thing like this displaying all the slow-to-refresh things that we rely on screens for: the weather, bus schedules, et cetera. This really needs to be scaled up so it doesn’t cost $2K a unit.

                                                                                                                        1. 18

                                                                                                                          If you work at any of these places, you should quit.

                                                                                                                          I do work at Microsoft (but I’m posting this on my own initiative and speaking only for myself), and I won’t quit over this. Here’s why: I work on the Windows accessibility team, on software that helps blind people and people with other disabilities, particularly the Narrator screen reader. So I think it’s safe to say I’m doing a lot more good than harm in my current position.

                                                                                                                          1. 11

                                                                                                                            This comes up a lot these days. For whatever it’s worth, I think you are better placed than anybody else to evaluate whether you, personally, are doing more good than harm by staying in that role. I would never tell anybody that they have to quit. If everybody quit, there would be nobody able to work for change from within. I think that making change in the industry is going to require a broad coalition including both people who have built power within the system, and people who have built power outside it.

                                                                                                                            1. 5

                                                                                                                              Meh, I think it’s hard to really make a convincing argument one way or another based on “doing more good than bad”. Is someone who bought a laptop with Windows pre-installed helping Microsoft do crimes? It seems to me that, given Microsoft’s revenue and the relatively small part a single bundled Windows license contributes to it, and given that they would collaborate with whatever government they can regardless of your choice, your decision is quite negligible. Worse still, wouldn’t Microsoft be even more eager to assist whatever government they can if their consumer revenue decreased?

                                                                                                                              And to turn the argument around, what about all the blind bad people (nazis, pedophiles, drug dealers, …) who use Windows’ accessibility features? How many good people do you have to help to make up for every bad person whose computer use you’re enabling?

                                                                                                                              I personally do believe there are plenty of reasons not to use Windows or work for Microsoft, but these are mostly issues affecting individuals first (e.g. lack of software freedom) and society second (e.g. dependence on non-free software and vendor lock-in). Conflating the two and their relationship tends to lead to confusion.

                                                                                                                              1. -5

                                                                                                                                Ultimately, it would be better if the accessibility features (or any functionality) in Windows were worse: fewer people would use it, and demand for Microsoft products and services would fall. The additional functionality you add causes more money to flow to Microsoft than otherwise would. Microsoft uses that money to fund development of software which helps people incarcerate children and coordinate and orchestrate mass murder.

                                                                                                                                You can’t get out of a hole by digging harder.

                                                                                                                                1. 10

                                                                                                                                  You assume that the people who decide whether to use Windows actually care about the quality of its accessibility support, and other aspects that don’t affect executives and IT decision makers. The truth is that like it or not, some people are compelled to use Windows, for their education or for the only job that they managed to find. By working at Microsoft, I and other people are, in a small way, making these people’s lives better.

                                                                                                                                  1. 2

                                                                                                                                    Windows mandates were not handed down by some higher power; people made decisions. It was the same deal at one time with buying IBM. Those decisions can be changed.

                                                                                                                                    You can make a strong argument that, because of antidiscrimination laws, if Windows doesn’t have support for the appropriate accessibility features, it is perhaps illegal to require that people use it to do a job. You can always tell yourself a story to make working for a giant amoral military supplier seem like you’re doing the right thing, whether it’s making disabled lives better, “protecting your country”, “empowering developers”, whatever. But at the end of the day all you’re really practically doing is making sure Microsoft products remain competitive and that Microsoft’s revenue stream continues unimpeded, and that’s still a very, very bad thing that harms our entire society and planet, for all the reasons I wrote before.

                                                                                                                                    You really should stop doing that. People doing the work that you do is one of the reasons Windows remains a standard, and that is one of the major reasons Microsoft is still around to sell PowerPoint licenses to mass murderers and child rapists. (Well, that and Bungie.)

                                                                                                                                    1. 6

                                                                                                                                      all you’re really practically doing is making sure Microsoft products remain competitive and that Microsoft’s revenue stream continues unimpeded, and that’s still a very, very bad thing that harms our entire society and planet, for all the reasons I wrote before.

                                                                                                                                      Of all the things that “harm our entire society and planet”, Microsoft is quite far down the list.

                                                                                                                                      Besides, all the points in your article are just about facilitating others (US military, NSA, ICE). Do you think these organisations are incapable of running their own GitLab instance or whatnot? And what if the USS Yorktown started running Linux instead of Windows NT? Would Linux now be “harming our entire society and planet”? Let’s not forget that North Korea runs on Linux, for example, but no one is blaming Linux for the North Korean regime.

                                                                                                                                      We live in a very interconnected world. As long as the NSA and such exist they will use the existing tools and there will always be some vendor to point fingers at. It seems to me that actually fixing the underlying problems would be much more fruitful than going after vendors who facilitate this in some minor way, which is ineffective and a massive distraction from the actual problems.

                                                                                                                                      1. 4

                                                                                                                                        Chances are the USS Yorktown actually uses Linux now. I’ve seen interesting papers by the US military on that subject: vibration-protected racks for aircraft carriers, hiding commodity servers running Linux.

                                                                                                                                        Then the Russian military also officially uses Linux, so one doesn’t even need to carefully choose sides to be able to claim that Linux is harming our entire society and planet. ;)

                                                                                                                                        1. 1

                                                                                                                                          Then the Russian military also officially uses Linux, so one doesn’t even need to carefully choose sides to be able to claim that Linux is harming our entire society and planet. ;)

                                                                                                                                          You owe me a new keyboard, jerk! :)

                                                                                                                                        2. 1

                                                                                                                                          There is an ethical distinction between distributing software that anyone can use, and explicitly selling/providing software to an organization with which you have a direct relationship.

                                                                                                                                        3. 3

                                                                                                                                          I’m pretty sure most MIL and LEO agencies around the world are using FOSS products as well. Should you stop contributing to those products because you are enabling them to do harm?

                                                                                                                                          1. 5

                                                                                                                                            It seems to me if your overarching goal is to create software and prevent specific organisations from using it, the only practical way is to create proprietary software and then be picky about to whom you sell it.

                                                                                                                                            If OP is making a statement that developers need to stop creating software in general because it’s being used for unconscionable purposes, then that would be a pretty bold argument but I could entertain it.

                                                                                                                                            If they mean that we should be promoting FOSS instead (as hinted by the reference to Gitea in the article) then I really don’t understand at all. Gitea is more accessible to actors good and bad. Unless the real crime is making money off it? But given the “harm to society” mentioned in the comment above, I don’t think that’s it either.

                                                                                                                                            I’m forced to conclude the argument is that it’s good that Microsoft is selling proprietary software but bad that they’re selling it to particular customers. But it’s unclear to me how nobbling the accessibility features would help in this matter.

                                                                                                                                            1. 1

                                                                                                                                              There is an ethical distinction between distributing software that anyone can use, and explicitly selling/providing software to an organization with which you have a direct relationship.

                                                                                                                                        4. 1

                                                                                                                                          This is a horribly ableist statement. A better way to make MS unattractive would be for the platform to be bad for developers.

                                                                                                                                      1. 7

                                                                                                                                        I have plenty of complaints about PowerShell, but passing structured data around … isn’t among my complaints.

                                                                                                                                        If UNIX had standardized on whatever was the JSON of the early 70s for IPC, would it have survived and thrived?

                                                                                                                                        1. 2

                                                                                                                                          The concrete format could be replaced easily, especially if it’s only used within pipelines. Flip the switch and all utilities switch from (say) JSON to YAML. If people have stored data that originally came out of pipelines, just provide a conversion utility.

                                                                                                                                          Of course, there are going to be some leaky abstractions somewhere, but having every utility use its own custom text format is definitely more friction.

                                                                                                                                          The only strong counterargument that I can think of is that processing a structured format sometimes has a considerable overhead.
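                                                                                                                                          A minimal sketch of such a conversion utility (toy code under my own assumptions, not a real YAML emitter; actual code would use a proper library like PyYAML):

                                                                                                                                          ```python
import json

def to_simple_yaml(obj, indent=0):
    """Render nested dicts/lists in a YAML-like layout.

    Toy converter for illustration only: plain scalars,
    dicts, and lists of scalars; no quoting or edge cases.
    """
    pad = "  " * indent
    lines = []
    if isinstance(obj, dict):
        for key, value in obj.items():
            if isinstance(value, (dict, list)):
                # Nested structure: emit the key, then recurse one level deeper.
                lines.append(f"{pad}{key}:")
                lines.extend(to_simple_yaml(value, indent + 1))
            else:
                lines.append(f"{pad}{key}: {value}")
    elif isinstance(obj, list):
        for item in obj:
            lines.append(f"{pad}- {item}")
    return lines

def convert(json_text):
    """Pipeline filter: JSON text in, YAML-ish text out."""
    return "\n".join(to_simple_yaml(json.loads(json_text)))
                                                                                                                                          ```

                                                                                                                                          Drop something like this between two pipeline stages and neither end has to care which concrete format the other speaks.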

                                                                                                                                          1. 5

                                                                                                                                            I like S-expressions for this task. It seems like most things can be represented as trees. They’re not trivially easy to work with using “primitive” editors that you might have found on early Unices, but they’re far more readable than XML, and they can also be used as a programming language syntax, which is a testament to their usefulness. I couldn’t see myself using a programming language that uses YAML as a syntax.

                                                                                                                                            That said, nearly anything standardized is better than plain text. So long as you can edit it as plain text and it encodes some high-level structure, I think it could be useful for this application. PowerShell might take the structure idea a little too far, but you can still (mostly) pretend that it’s just a regular Unix shell.
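                                                                                                                                            To sketch the point (a toy serializer of my own invention, ignoring quoting/escaping details), here is how naturally nested records map onto S-expressions:

                                                                                                                                            ```python
def to_sexpr(obj):
    """Serialize nested Python data as an S-expression string.

    Minimal sketch: dicts become (key value) pairs, lists become
    plain lists, strings are double-quoted, everything else uses str().
    """
    if isinstance(obj, dict):
        items = " ".join(f"({k} {to_sexpr(v)})" for k, v in obj.items())
        return f"({items})"
    if isinstance(obj, (list, tuple)):
        return "(" + " ".join(to_sexpr(x) for x in obj) + ")"
    if isinstance(obj, str):
        return f'"{obj}"'
    return str(obj)

record = {"name": "report.txt", "size": 2048, "tags": ["draft", "todo"]}
print(to_sexpr(record))
# -> ((name "report.txt") (size 2048) (tags ("draft" "todo")))
                                                                                                                                            ```

                                                                                                                                            The result is still editable as plain text, but a consumer further down the pipeline gets real structure instead of column-counting.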

                                                                                                                                            1. 3

                                                                                                                                              Now my code needs to handle 2 input and output formats, depending on what it’s connected to. No thanks.

                                                                                                                                              1. 6

                                                                                                                                                not necessarily - it could be handled by something like libxo. not that that doesn’t have its own problems, but libraries do exist :)

                                                                                                                                                1. 2

                                                                                                                                                  You might not have to if there are suitable tools available to convert between the common formats.

                                                                                                                                            1. 2

                                                                                                                                              When tar has nearly as many flags as the UN headquarters, you know you have done something wrong at some point…

                                                                                                                                              1. 4

                                                                                                                                                Reading through tar(1), a lot of the flags deal with subtleties and complexities in various filesystems and the like. Nothing really stands out as “excessive bloat” to me.

                                                                                                                                                The same applies to something like curl, which has many more flags and it’s actually quite confusing at times, but they’re all there for a reason: to deal with the complexities of network transfers.

                                                                                                                                              1. 2

                                                                                                                                                I imagine this will end like every other time a government or local council has done it: baby duck syndrome will kill the project and everything will be back on Windows in 2 or 3 years.

                                                                                                                                                1. 2

                                                                                                                                                  They will most likely be back on Windows because there are probably a thousand or so systems that were only designed to function on Windows, and an old version at that. Linux itself might not be a problem (it probably will be since it’s quite frankly not on par with Windows’ user experience), but imagine all the tasks you can’t complete anymore because you didn’t plan for updating all those legacy systems. On top of that, think of how poor the IT skills of most support personnel are. Now imagine them having to support Linux instead…

                                                                                                                                                  Of course these kinds of projects are doomed to fail. Just look at their starting point:

                                                                                                                                                  “We will resolve our dependency on a single company while reducing the budget by introducing an open-source operating system,” said Choi Jang-hyuk, head of South Korea’s Ministry of Strategy and Finance.

                                                                                                                                                  Although most Linux distros are free, South Korean officials estimate that migrating their current fleet of approximately 3.3 million PCs from Windows 7 to Linux will cost about 780 billion won (approximately $655 million). The price tag will cover the implementation, transition, and purchase of new PCs.

                                                                                                                                                  They are, practically speaking, planning to fail here. They need to stand up an entirely new server infrastructure, replace (or code new versions of) all their domain apps (I’m thinking government functions here, like planning department mapping systems, archival systems, regulatory process systems, etc.), change out every single workstation, train the system administrators and support personnel in managing Linux, and on top of that, handle the expected drop in productivity in the transitional period, as none of the users will actually know how to do their job in ANY of the new systems.

                                                                                                                                                  This project is doomed to fail, not because of baby duck syndrome, but because $655 million is a rounding error compared to what’s needed to do all of the planned changes. There’s just not a snowball’s chance in hell that they have planned adequately for this.

                                                                                                                                                  1. 4

                                                                                                                                                    (it probably will be since it’s quite frankly not on par with Windows’ user experience)

                                                                                                                                                    That’s not been my experience, at all. I’ll grant you baby duck syndrome, but I’ve had four family members running on Linux for years now with no issues. My MIL has used both Linux and Windows, and was about equally stuck trying to administer either of them.

                                                                                                                                                    1. 3

                                                                                                                                                      I’ve had four family members running on Linux for years now with no issues

                                                                                                                                                      That’s great, but we’re not talking about using Facebook, watching videos on YouTube, or reading the odd email in Gmail. We’re talking about actual work in productivity software and domain applications. I know I tried switching to Linux a few times, and I always found things that didn’t work out for the tasks I wanted to do. Sure, there are some programs that would fulfil almost the same purpose, but that’s just not good enough when you need them to fill all of the uses of the other platform. A government can’t just stop handling taxation, immigration, zoning, healthcare, or any other function because it is switching to Linux. They need all of those functions working more or less exactly as before. The software can change, of course, but these people are migrating from Windows 7. I think it’s highly likely they still have Win 2K boxes handling some important function that they haven’t swapped out yet and aren’t likely to replace either, because replacement software doesn’t exist.

                                                                                                                                                      My MIL has used both Linux and Windows, and was about equally stuck trying to administer either of them.

                                                                                                                                                      I hope your MIL is not a system administrator for anything other than her own machine, and I hope the people who are going to manage this system actually get training in it, as managing Linux is quite different from managing Windows.

                                                                                                                                                      1. 4

                                                                                                                                                        My MIL has used…

                                                                                                                                                        I know I tried changing…

                                                                                                                                                        I think both of you are speaking on personal experience here :) None of us know what a typical workflow for a typical clerk in the “Ministry of Public Administration” looks like.

                                                                                                                                                        As much as I would like them to succeed, I think it might end up failing as Brekkjern has said. Munich’s attempt a while back teaches some nice lessons on that.

                                                                                                                                                        From some of what the people around here have told me, a lot of it can be attributed to Microsoft pushing really hard to not make this happen - but there were also technical issues that can be attributed to the long history of developing for Windows specifically.

                                                                                                                                                        Who knows? South Korea might be afraid of being too dependent on the US (like we all are, I guess), so there might be a political push to make this happen in any case. I don’t know what the stance in South Korea is on these issues. I guess we’ll see.

                                                                                                                                                        1. 2

                                                                                                                                                          I think both of you are speaking on personal experience here :)

                                                                                                                                                          All of Brekkjern’s points are perfectly valid :) The point I was _trying_ (and failing, apparently) to make was just that the (typical distro) Linux user experience is at least on par with the Windows user experience, and is in many ways superior.

                                                                                                                                                          And also, both are equally difficult to administer. I recently had to debug a WiFi module driver issue w/ power management on my MIL’s new Windows laptop, and it was an awful experience.

                                                                                                                                                          1. 1

                                                                                                                                                            The point I was trying to make was that $200 per machine doesn’t really cover the expenses of getting a new machine, let alone swapping to a different OS and all the other issues that entails.

                                                                                                                                                    2. 1

                                                                                                                                                      I’d never heard the term “baby duck syndrome” before, surprisingly, but after looking it up it makes a lot of sense. I’ve experienced and observed it myself numerous times but never had a good way to describe it succinctly! That’s also why I plan to wait until my kids are old enough to read before giving them computers (running Linux sans X11), so they develop some initial familiarity with the command line before using a GUI.