1. 9

    There are students that say “Why do I need computer science in my life? It’s useless! I shouldn’t have to remember all this crap!”.

    Everyone says this about everything in high school, and I don’t even think that it’s wrong; rather, that might be the point.

    I believe the mistake is in seeing high school as a kind of lite-university, when you basically start learning whatever you’re going to study and have as your career later on (which is great for all those who don’t know yet). Instead everything until high school should be seen as general education. This isn’t quite the case, I’m the first to admit, because of all the stress being put on the educational system, from parents who might just want the best, to schools and teachers who have to maintain some grade average.

    I had CS in “high school” (the German variant) too, and while I certainly disagreed with points and procedures in the curriculum (I thought Python might be better than Java for teaching algorithms, but I’m a bit more reserved about that now), I always held modesty up as a virtue in class: never saying stuff like “Well, actually, …” or laughing at teachers for not knowing about the newest frameworks. I was a tutor at university last year, and seeing people who do think that they are “too good for introductions” is really annoying for educators. Ultimately, I was right to be modest, for there were many things I did learn by not having an attitude that made me think I was above and beyond all of it.

    On a side note, one reason CS theory is nice is that it doesn’t change that much (at least when it comes to the basics; the halting problem isn’t turned on its head daily).

    the string functions from string.h, including the dreaded strtok

    I might have missed something, but what’s the issue with strtok? It’s a C function with state, but other than that it’s pretty average C.
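
    For reference, here is a minimal C sketch (my own illustration) of that state, plus the other property people usually grumble about, namely that it writes into the buffer you pass it:

        #include <stdio.h>
        #include <string.h>

        int main(void)
        {
            char line[] = "a,b,,c";     /* strtok modifies its input in place */
            char *tok;

            for (tok = strtok(line, ","); tok != NULL; tok = strtok(NULL, ","))
                printf("[%s]\n", tok);  /* prints a, b, c -- the empty field disappears */

            /* The hidden static state means two interleaved tokenizations (or two
             * threads) would stomp on each other; POSIX strtok_r takes an explicit
             * state pointer to avoid exactly that. */
            return 0;
        }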

    1. 7

      I believe the mistake is in seeing high school as a kind of lite-university, when you basically start learning whatever you’re going to study and have as your career later on (which is great for all those who don’t know yet). Instead everything until high school should be seen as general education.

      I agree, and I will say this: No part of college is job training. Vocational schools are job training. You can see this because it’s right there in their name, which should be your first clue. A college education is designed to do one thing: Create researchers and academics in an academic field. If someone can apply what they learned in a degree program to practical careers, that’s one thing, but it shouldn’t be the end goal of that degree program.

      1. 2

        I’m sympathetic to not viewing universities as purely job-training; among other reasons, if you really did want purely job-training, they are a pretty convoluted and inefficient way to deliver it. But I think you go a bit too far in saying that their primary purpose is to create researchers and academics, at least for the bachelor’s degree. Even in the era when far fewer people went to university, aspiring researchers alone would have been too few to account for everyone who attended!

        The specifics vary by country & era, but if you look at who got university/college degrees in, say, late 19th century America, a lot were on their way to just general “educated person jobs”. Stuff like school teacher or principal, civil servant, tutor, editor, etc. Jobs that came with a certain level of prestige and expectation of being educated beyond high-school level, but not necessarily as a researcher in a specific subject (hence the popularity at the time of broad liberal-arts degrees).

        1. 2

          Also worth noting that it’s relatively recent that people get a degree in a topic relating to their eventual jobs. Yes, college was about education, and not necessarily just educating researchers, but that doesn’t mean it became vocational.

      2. 2

        Everyone says this about everything in high school, and I don’t even think that it’s wrong; rather, that might be the point.

        With that quote, I was pointing out the irony that computer science (HS) students say that computer science feels useless to them. They are the ones who chose to study computer science in school, with either 5 hours a week or 7 for the “harder” variant; they voluntarily enrolled in this programme. Middle school together with the first two years of high school is considered general education, while the last two years of high school (the years the blog post applies to) really are considered lite-university. At the end you even get a “programmer helper” license; with it you can get a job as a sysadmin at other schools.

        1. 1

          If it’s anything like our system (after 7th grade we had to choose a focus between “natural sciences”, “languages” and “social sciences”), then I would guess there will always be people who choose “what they dislike the least” rather than what really interests them. Also CS has the “Computer” appeal, and as you mention, computers are used to play games, and there are certainly a lot of people who love to play games.

          1. 3

            Also CS has the “Computer” appeal, and as you mention, computers are used to play games, and there are certainly a lot of people who love to play games.

            Not to mention how many people get into the field by wanting to make their own video games.

      1. 13

        It’s interesting to see compile time framed as if it were a tooling issue. Language design affects it deeply. Go will compile fast no matter what. Languages like Scala and C++ have features that are inherently costly to compile. At that point, the real gains come from providing guidance about which features and dependency structures to avoid.

        1. 7

          A possible counter-example here is D, which has the same monomorphization compilation model as C++ but is prized for its fast compiler.

          1. 2

            Is there anything reasonably in-depth written on why it’s faster? It seems implausible that it’s entirely due to just being a really good compiler and 100% unrelated to the design of D as a language. Two heuristic reasons to guess that there must be some relationship: 1) the primary D designer was also the primary compiler implementer with a long history of compiler implementation, so seems likely to have been at least somewhat influenced in his design of D by implementation concerns, and 2) nobody has written a fast C++ compiler, despite there being a big market there (certainly bigger than the market for a fast D compiler). I could be wrong though!

            1. 1

              I unfortunately don’t know of such an article. The folklore explanation, I think, is that the dmd backend is much faster than LLVM or GCC.

        1. 3

          Have a day off in London on Saturday after a conference this past week, then flying back to the US on Sunday. And then the fall semester starts on Monday…

          1. 6

            Two things on this:

            I just recently started playing with Twitter’s APIs, and was a bit surprised to discover that you can only retrieve the most recent 3,200 tweets of a user. Any tweets older than that are forever inaccessible by the semi-public API. Note that you seem to need to provide a rather detailed description of what you intend to do with it in order to get developer keys. It seems that there are commercial APIs available for a very substantial price that may allow accessing older tweets, but nobody talks much about them. I did hear a rumor that their Firehose access - all tweets sent by anybody in realtime - costs 30% of your company’s revenue, whatever that is. I’m not sure if that’s true, but it does seem odd that so much of our history on Twitter is, for all practical purposes, forever locked away behind extremely expensive contracts.

            It also seems that Twitter’s Rules are being weaponized, by both sides of the political divide, in attempts to control the conversation. The ban lists seem semi-random, and the decisions of what is and is not considered hateful seem rather arbitrary, possibly depending on which particular moderator gets a particular case. Aside from the difficulty and expense of accessing old data in general, it’s entirely possible people are running algorithms to search Twitter for potentially actionable things said against their favorite figures, even if they were said very long ago.

            1. 2

              Twitter has been slowly cutting all the good bits out of itself ever since they looked up and said “oh HEY we gotta make some money!” a few years back.

              This is why decentralized platforms will prevail, because people are just not willing to pay for social media en masse.

              1. 3

                This is why decentralized platforms will prevail, because people are just not willing to pay for social media en masse.

                Users are not going to be paying for Twitter any time soon, either? Twitter’s selling their content, but the platform remains free to use, and I can’t see that changing.

                Companies, meanwhile, are perfectly willing to pay Twitter for access to their data.

                1. 2

                  That’s right. That’s why the Fediverse’s model will win, IMO, but I predict it will take much more work to get where it needs to be for truly widespread adoption: it has to become point-and-drool easy to start your own instance.

                  Right now you have to have some basic sysadmin skills in order to run one.

              2. 2

                I might misremember this, but I believe Twitter made a big splash about how you (as a logged-in user) could now access all your tweets. I seem to remember downloading an archive back when that happened.

                1. 3

                  You can download your own tweets, yeah. In the web interface, it’s under Settings->Account->Your Twitter data. What’s not possible to do easily is get the full tweet history of anyone else.

                  1. 1

                    What’s not possible to do easily is get the full tweet history of anyone else.

                    I’m certain that this is for performance reasons. No doubt if you pay for API access it’s possible.

              1. 3

                It’s always interesting to see people tip their hand by what terms they think should be used instead… because they are always wildly different terms! This one chooses the following:

                The hype on terms like “machine learning” and “AI” is a rebranding of the terms “statistics” and “general programming logic”.

                You could make an argument for there being some overlap between those fields, but they aren’t the same fields, historically or today, and the overlap is nowhere near 100%. Other proposals for what AI is “just”, or should be considered a subdiscipline of, have historically included: cybernetics, applied psychology, cognitive science, philosophy, systems engineering, HCI, computational logic, operations research, decision theory, …

                The other comment I’d make is that AI as a field isn’t “rebranded” from anything recently… the field has existed for 60+ years! There are indeed some unwarranted hyped up claims (often from people who also don’t know the history of the field they claim to be in) that could be criticized as shallow rebranding, but criticizing badly done or overhyped AI research is a bit different than trying to wholesale erase a field without really studying it.

                1. 12

                  This has nothing to do with tech; it happens everywhere in higher-paid jobs when comparing the US to the EU. Everyone is giving you gut-feel answers, but this is an economics question with answers we can back up with data.

                  The driving force for this is inequality. The inequality in the US is far larger than that in the EU. So people who have worse jobs have worse lives than in the EU, and people who have really good jobs do far better. Just to put this in context: you’re in the top 1% of earners in France with $215k per year, but you need $475k per year to be in the top 1% in the US.

                  You can also see this from another angle. You say that devs are paid doctor or lawyer salaries in the US compared to the EU. But those are EU doctors and lawyers. In the US they also make far more.

                  Another, much smaller, contributor is that the average salary in the US is 30% higher than in the richer EU countries.

                  1. 4

                    You can see this across almost all scales in almost all fields, too. As a professor in Denmark, I made significantly less than most American professors—but our PhD students and non-academic staff made significantly more than most American equivalents. In the US you often have 5x or more ratios between different levels, e.g.: cafeteria worker makes $20k, PhD student makes $25k, prof makes $100k, senior administrator makes $500k. In Denmark, it’s more often 0.5x than 5x, something like: cafeteria worker makes $40k, PhD student makes $55k, prof makes $70k, senior administrator makes $100k. By American standards, some of these salaries are low and some are high.

                    1. 2

                      The driving force for this is inequality. The inequality in the US is far larger than that in the EU.

                      It also has to do with global inequality.

                      1. 1

                        I was investigating software engineering jobs in the EU about 1-2 years ago; this was roughly the conclusion I came to. The EU has less inequality and, usually, better social benefits. While I like being paid US salaries, I can’t help but think the EU is generally a healthier place on most axes.

                      1. 13

                        Regarding this point:

                        Scheme seems to be an interesting compromise.

                        Scheme is relevant to this post, I think, but in a somewhat different way. It has basically tried all the responses to the dilemma outlined, at various points and sometimes simultaneously. Various Schemes added more or fewer things. The 6th revision (R6RS) tried to standardize large amounts of similar functionality, but was rejected by significant parts of the community for being too big/complex. Racket split off at this point, mostly going the “big” direction and no longer branding itself a Scheme. The 7th revision (R7RS) responded by issuing two language standards, one following each path, named R7RS-small and R7RS-large. Various implementations have chosen either of those standards, or some other point near or between them, or stuck with R6RS, or stuck with R5RS, etc.

                        I definitely think all this experimentation is interesting, but I’d argue the jury is still out on whether any kind of stable compromise in the design space has been reached.

                        1. 13

                          At least R7RS seems to be pretty much universally accepted, and makes it possible to write portable libraries. We’re not quite there yet, but I believe Scheme is definitely close to the point where it is easy enough to write portable code. As for all the other points, as I was reading the article I kept thinking “Scheme fits the bill” all the time, until of course the author mentioned it.

                        1. 18
                          1. Bellard is as impressive as always.

                          2. Someone found a use-after-free.

                          1. 8

                            A bit of context on the ‘someone’ for those interested: qwertyoruiop is the individual who created (half of) the Yalu jailbreak for iOS 10, and has contributed to many other big jailbreaking releases for both iOS and other platforms (e.g., PS4).

                            1. 2

                              I don’t understand that use-after-free. Isn’t that a legit use of JS? That is: isn’t the interpreter doing what it is supposed to do? Or not?

                              1. 1

                                A use-after-free means using memory through a pointer after that memory has been released, which here allows writing to arbitrary parts of the process running the JavaScript interpreter. That means it allows escaping the JavaScript sandbox, and as such allows a web page to take full control of your computer. Like any important security bug, it is a way for a virus or malware to install itself on a computer. So no, it is definitely not a legit use of JS.
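
                                To make the mechanics concrete, here is a tiny, deliberately contrived C sketch of the general pattern (not the QuickJS bug itself); in a JS engine the freed object is an internal structure, and an attacker arranges for data they control to be allocated on top of it:

                                    #include <stdio.h>
                                    #include <stdlib.h>
                                    #include <string.h>

                                    int main(void)
                                    {
                                        char *p = malloc(16);
                                        strcpy(p, "hello");
                                        free(p);                 /* the allocator may hand this block out again */

                                        char *q = malloc(16);    /* same size: often reuses the freed block */
                                        strcpy(q, "attacker");

                                        printf("%s\n", p);       /* use-after-free: undefined behaviour; p may now
                                                                    alias q, so we read attacker-chosen bytes */
                                        free(q);
                                        return 0;
                                    }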

                                1. 1

                                  I was asking if the use-after-free bug is in the JS or in the interpreter.

                                  1. 2

                                    The bug is in the interpreter here. The JS in the link is a proof-of-concept exploit for the bug.

                            2. 1

                              This is a quick reminder that script VMs are hard to develop, especially for complex languages such as JavaScript. Never ever run arbitrary code in these kinds of interpreters, even if you believe you have hardened them by removing privileged functions or I/O. FWIW, don’t even try to run arbitrary code in widely used engines such as SpiderMonkey or V8 if they are not sandboxed. RCEs still get found every now and then.

                            1. 10

                              Rolling one’s own Unicode! This library sounds like it could be useful on its own:

                              A specific Unicode library was developed so that there is no dependency on an external large Unicode library such as ICU. All the Unicode tables are compressed while keeping a reasonable access speed.

                              The library supports case conversion, Unicode normalization, Unicode script queries, Unicode general category queries and all Unicode binary properties.

                              The full Unicode library weighs about 45 KiB (x86 code).

                              1. 10

                                To my knowledge tmux doesn’t advertise/support privilege separation by pane, so I’m not sure how big a deal this is in practice (I’m fairly certain that if you start a tmux session as a regular user you cannot send commands to another tmux session which was started as root, for example).

                                1. 4

                                  Agreed, I’m not seeing the vulnerability here. At least, not with tmux

                                  1. 0

                                    I believe the issue is that, given one single tmux session with multiple windows/panes, you can send-keys to any pane. The user could have used su or sudo to open and keep a root shell in one of the panes. So, the send-keys is done as non-root, but the keystrokes/characters go into a root shell.

                                    As far as I know, you can’t send from one shell session to another (as the same user) with ssh or common shells like bash or zsh.

                                    1. 3

                                      I think you are confusing shells, remote connection protocols and terminal multiplexers. Tmux is a terminal multiplexer: its job is to allow users with access to the tmux server to access multiple shells or programs on a single display. This is also “scriptable” using the provided tmux commands. The pseudo-terminals that are accessible under tmux are under the user’s control; the user also decides what programs are running in these pseudo-terminals (and by decision, I mean he actually has to authenticate willingly as root, using su).

                                      I really like tedu’s comment above. The analogy is quite simple to explain: suppose you have multiple VTs, with a different user logged in on each. You now have a monitor and a keyboard attached to this system. Does this mean the keyboard is vulnerable to privilege escalation, since you can switch consoles with Ctrl+Alt+Fn?

                                      Here’s another analogy: you have a web browser logged in to your webmail. Does this mean that the shell spawning the browser is vulnerable to privilege escalation? Obviously, someone with shell access to that system, under your username, can read the browser cookies and use them to access your webmail account.

                                      1. 4

                                        I really like tedu’s comment above. The analogy is quite simple to explain: suppose you have multiple VTs, with a different user logged in on each. You now have a monitor and a keyboard attached to this system. Does this mean the keyboard is vulnerable to privilege escalation, since you can switch consoles with Ctrl+Alt+Fn?

                                        The thing that I think could actually trip people up is that VTs are routinely used to run different things at different privilege levels (especially while debugging system issues). For example, say you log on as root on VT1, and as a throwaway user on VT2 to try something out. On any Unix I know of, it wouldn’t be necessary to log out of VT1 before running untrusted code on VT2, because a script run as an untrusted user on VT2 shouldn’t be able to send keystrokes to the root shell on VT1, even though you, the operator, could switch to VT1 and type keystrokes there. (There are also programmatic ways to send stuff to other terminals, but on a typical Unix, the permissions to do that require running as root.)

                                        People sometimes treat tmux as a replacement for VTs, which might be surprising if code can potentially execute at the highest privilege level available in any pane of the tmux session, and/or as the owner of the tmux process itself. So it generally shouldn’t be used for sysadmin-type tasks where you might sometimes drop privileges to an untrusted user before running things. (Maybe nowadays those should all be automated and/or explicitly sandboxed anyway, but it’s not uncommon with traditional Unix system administration to sometimes have such tasks.)

                                        edit: This is wrong, see below…

                                        1. 7

                                          I think this reflects a misunderstanding of what’s happening. tmux does not allow a program from one pane to send keystrokes to another pane. tmux allows the user who ran tmux to send keystrokes to their tmux session.

                                          If user alice runs tmux, and then uses su to switch to bob in one session, bob cannot send keystrokes to any other alice session. alice, however, can send keystrokes to the session with a bob login, because alice still owns the terminal it’s running in.

                                          1. 2

                                            I think this reflects a misunderstanding of what’s happening. tmux does not allow a program from one pane to send keystrokes to another pane.

                                            Yes, if the above is truly not allowed, then I admit I have misunderstood the original post. However, I just tested the following script, after using su (including manual password entry) to set up pane 1 to have a root shell. It does indeed type into the root shell in pane 1, including pressing Enter to execute the command.

                                            #!/bin/sh
                                            tmux send-keys -t 1 "echo 'hello'"
                                            tmux send-keys -t 1 "Enter"
                                            

                                            I admire the knowledge level and logic skills of those for whom this is unsurprising, or who downplay it as equivalent to some other already-known and already-existing attack vector. I myself would not have conceived this use of tmux.

                                            1. 1

                                              Ah, right, sorry, I think I did misunderstand. So the case where you drop privileges in one tmux pane to a user who doesn’t own the tmux session is still safe-ish, which I thought this link had claimed wasn’t the case.

                                            2. 1

                                              It is possible to send keystrokes from VT1 to VT2 by using the uinput kernel module. This is exactly the behaviour happening in tmux. If the user is able to access /dev/uinput (or the tmux session), then they can send any keystroke to any VT (or any tmux window/pane).

                                              Edit: I do agree that a typical Linux system does not have this module loaded or the device accessible by default. Neither is the tmux session of a single user.
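
                                              For illustration, here is a minimal C sketch of that kind of injection (error handling omitted), loosely following the example in the kernel’s uinput documentation; it assumes reasonably recent kernel headers, that the module is loaded, and that /dev/uinput is writable, and the injected key press lands wherever keyboard focus currently is:

                                                  #include <fcntl.h>
                                                  #include <string.h>
                                                  #include <unistd.h>
                                                  #include <sys/ioctl.h>
                                                  #include <linux/uinput.h>

                                                  /* Write one input event to the virtual device. */
                                                  static void emit(int fd, int type, int code, int val)
                                                  {
                                                      struct input_event ie;
                                                      memset(&ie, 0, sizeof(ie));
                                                      ie.type = type;
                                                      ie.code = code;
                                                      ie.value = val;
                                                      write(fd, &ie, sizeof(ie));
                                                  }

                                                  int main(void)
                                                  {
                                                      int fd = open("/dev/uinput", O_WRONLY | O_NONBLOCK);

                                                      /* Declare a virtual keyboard that can emit one key. */
                                                      ioctl(fd, UI_SET_EVBIT, EV_KEY);
                                                      ioctl(fd, UI_SET_KEYBIT, KEY_A);

                                                      struct uinput_setup us;
                                                      memset(&us, 0, sizeof(us));
                                                      us.id.bustype = BUS_VIRTUAL;
                                                      strcpy(us.name, "fake keyboard");
                                                      ioctl(fd, UI_DEV_SETUP, &us);
                                                      ioctl(fd, UI_DEV_CREATE);
                                                      sleep(1);                        /* let the new device be picked up */

                                                      emit(fd, EV_KEY, KEY_A, 1);      /* key down */
                                                      emit(fd, EV_SYN, SYN_REPORT, 0);
                                                      emit(fd, EV_KEY, KEY_A, 0);      /* key up */
                                                      emit(fd, EV_SYN, SYN_REPORT, 0);

                                                      ioctl(fd, UI_DEV_DESTROY);
                                                      close(fd);
                                                      return 0;
                                                  }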

                                      2. 1

                                        Just to satisfy my curiosity, I logged into my ubuntu machine and started a tmux session as root. Then I started a session as my regular user. As expected, the sessions aren’t able to see one another.

                                        1. 1

                                          The problem scenario is different from that:

                                          • Start one session as a non-root user.
                                          • Open two windows/panes.
                                          • Use su or sudo to become root in one pane.

                                          Then you can send-keys from the non-root pane to the root pane. send-keys can be executed from the CLI (that is, from shell scripts or from programs executing shell commands).

                                          1. 1

                                            I understand that; it just doesn’t personally bother me all that much. It doesn’t seem conceptually different from running a sudo command and then running some shell script which then attempts to do something with sudo permissions. It is worth being aware of, but it seems more like an expected (if edge-case) property of how tmux works than like some security hole. That’s just my personal view though.

                                            1. 1

                                              Well, for me, the key difference is passwordless vs. not.

                                              1. 1

                                                But that would be the same with the sudo case too, right? I don’t know what the sudo password expiry time limit is, but on my system, if I issue a sudo command, there is a window of time in which a subsequent sudo command will not require a password (and if it comes from a script etc. issued by my user in that session, there is no difference).

                                                It will be interesting to see what, if anything, tmux changes about this. I’m not sure what I would do if it were my project. Even reading the comments here, I’m finding my opinion is still sort of in formation.

                                                1. 1

                                                  Perhaps one option is to offer two binaries, one with send-keys, one without. I’ve never used that feature, and probably wouldn’t, even now that I’ve found out about it. A similar pair of options could be offered for source builders via ./configure switches, perhaps.

                                      1. 12

                                        My brother who studies maths just took an exam for the programming course at his uni, which was taught in C using a terrible old IDE and seemed to mostly focus on undefined behavior, judging from the questions in the exam. The high school programming class was similar, from what he told me.

                                        I’m baffled that this is considered acceptable and even normal, and that Racket, with its beautiful IDE, its massive standard library and its abundance of introductory programming course material is not even considered. I know there’s a lot of understandable reasons for this, but it’s still so backwards.

                                        1. 8

                                          Ha! Yes. That reminds me how angry I used to get about mediocre, obsolete, industry-driven CS pedagogy as a student. I dealt with it in part by finding a prof who was willing to sponsor an independent study course (with one other CS student) where we worked through Felleisen’s How To Design Programs, using what was called Dr Scheme at the time. But eventually I gave up on CS as a major, and switched to Mathematics. Encountered some backwardness there too, but I’ve never regretted it – much better value for my time and money spent on higher ed. The computer trivia can always be picked up as needed, like everybody does anyway.

                                          From what I understand, my school now teaches the required intro CS courses in Python. This seems like a reasonable compromise to me, because average students can get entry-level Python jobs right out of school.

                                          1. 7

                                            As someone who has had to deal with a lot of code written by very smart non-computer-scientist academics, please be careful telling yourself things like “The computer trivia can always be picked up as needed”. Good design is neither trivial nor taught in mathematics classes.

                                            It usually isn’t taught in CS classes either, I confess, but the higher-level ones I’ve experienced generally at least try.

                                            1. 3

                                              I agree completely, and I actually took most of the upper-division CS courses that seemed genuinely valuable, even though they didn’t contribute to my graduation requirements after I switched. (The “software engineering” course was… disappointing.) But I’ve learned a ton about good engineering practices on the job, which is where I strongly suspect almost everybody actually learns them.

                                              I currently deal with a lot of code written by very smart CS academics, and most of it is pretty poorly engineered too.

                                          2. 4

                                            Racket is used in the intro course at Northeastern University, where several of the developers are faculty, so there’s at least one place it’s possible to take that route. I think this might be either the only or one of the only major universities using a Lisp-related language in its intro course though. MIT used Scheme in its intro course for years, but switched to Python a few years ago.

                                            I haven’t been seeing much C at the intro level in years though (I don’t doubt it’s used, just not in the corners of academia I’ve been in). We use Python where I teach, and I think that’s overwhelmingly becoming the norm. C is used here only in the Operating Systems class. When I was a CS undergrad in the early 2000s, seemingly everywhere used Java.

                                            1. 3

                                              Sounds like the exam was designed to teach the sorts of things he’ll be asked in programming interviews. Now he has great “fundamentals”!

                                              1. 3

                                              Same here. Professors suck at my university, which happens to be one of the top universities in China (it’s sponsored by Project 985). Our C++ exams are mostly about undefined behavior from an infamous but widespread textbook, the SQL course still teaches SQL Server 2008, which reached its EoL over 5 years ago and cannot be installed on a MacBook, and it’s mandatory to learn SAS, the Legendary Enterprise Programming Language (SAS is mostly used in legacy software). Well, I’m cool with it because I’m a fair self-learner, but many of my fellow students are not.

                                              I have a feeling that the professors are not really into teaching, and maybe they don’t care about the undergraduates at all. Spending time on publishing more papers for themselves is probably more rewarding than picking up some shiny “new technologies” that could benefit their students. I guess they would be more willing to tutor graduate students, who can help build their academic careers.

                                                1. 1

                                                Our first three programming courses were also in C (the first two were a general intro, the third one was intro to algorithms and data structures). After that, there was a C++ course. This was the first time I had an academic introduction to C++. I already knew it was a beast from personal use, but seeing it laid out in front of me over a few months of intense study really drove the point home. I was told this was the first year they were using C++11 (!)

                                                Programming education in math departments seems to be aimed at making future math people hate it (and judging by my friends, they’ve quite succeeded: literally everyone I ask says they’re relieved that they “never have to do any programming again”).

                                                  1. 2

                                                    Programming education in math departments seems to be aimed at making future math people hate it

                                                    Exactly! I can’t imagine how somebody with no background in programming would enjoy being subjected to C, let alone learn anything useful from such bad courses, especially at university age.

                                                    1. 2

                                                    I thought C was awesome when university gave us a 6-week crash course in it; we had to program these little car robots.

                                                      1. 4

                                                        “6-week crash course in it” “program these little car robots.”

                                                        The choice of words is interesting given all the automotive C and self-driving cars. Is it your past or something prophetic you’re talking about?

                                                1. 6

                                                  Please, please, please don’t use a GUI toolkit like this, that draws its own widgets rather than using platform standard ones, when developing a plugin for a digital audio workstation (e.g. VST or Audio Unit), as this author is apparently doing. Unless someone puts in all the extra effort to implement platform-specific accessibility APIs for said toolkit. Inaccessible DAW plugins are a problem for blind musicians and audio engineers because of GUI toolkits like this one. Maybe screen readers will eventually use some kind of machine learning to provide access to otherwise inaccessible GUIs, but that’s probably still a long way off. So for now, we need application developers to do their part.

                                                  I don’t actually know what the best cross-platform solution is. Is wxWidgets fully functional in a context where it doesn’t own the event loop? I suspect not on Windows in particular. Yes, yes, I know it’s ugly 90s C++, but really, what’s more important, code aesthetics or usability?

                                                  1. 5

                                                    Unfortunately, there’s no way to make platform standard widgets work in audio plugin UIs on Linux. On Mac and Windows, yes, you can do this, but I’m not aware of any company that does. Perhaps Sinevibes ships their Mac plugins with some custom skinning on top of AppKit widgets. On Linux, by the way, this is a combination of event loop issues and dynamic linking. Trying to run a GTK2 UI inside of a GTK3 app is apparently completely broken on account of this.

                                                    Aesthetics matter in these applications, and there are plugin companies that have built reputations on the back of their user interfaces (Fabfilter in particular comes to mind).

                                                    Say, for example, that I’ve developed a UI engine like this and I’d like to implement the platform-specific accessibility APIs for it. Do you have any good resources for what that code should look like?

                                                    1. 2

                                                      Isn’t this just a matter of adding the necessary accessibility hooks in the widgets? Not saying it is easy, but surely there is a way to write a GUI library that draws directly to the screen and has good accessibility features.

                                                      1. 2

                                                        In principle, yes. In practice it seems it’s a ton of work to DIY the integration with every platform’s hooks, and nobody has sufficiently abstracted the various a11y APIs to provide a simpler cross-platform library that you could plug into. As a result I believe the only non-native GUIs that have managed to produce decent cross-platform support for mapping their GUI widgets to the platform’s a11y systems are the major browsers, which are obviously huge codebases with a lot of dev resources.

                                                        There are some initial attempts in other toolkits, but I don’t think any of them work well on more than one platform. GNOME is cross-platform but has a good a11y story only on Linux, with a very basic start at porting any of it to Windows. Mono at various points had a start at hooking into things on multiple platforms, but I’m not sure what the current state of things in the post-Mono .NET world is.

                                                    1. 6

                                                      I notice my SSH key is RSA, because that seems to be the default generated by OpenSSH’s ssh-keygen. I usually trust the OpenSSH developers to make good decisions, so not sure what to think here. Should I generate a new key using a different system? Or is this just about not implementing RSA yourself, while using it through OpenSSH is fine, because the OpenSSH developers have avoided these pitfalls?

                                                      1. 14

                                                        You should think about using ed25519 keys if you’re going to make a change, but it’s not urgent.

                                                        1. 4

                                                          It’s not such a problem that you need to go change your keys right now, but next time you’re making one, choose ed25519 instead. It’s faster than RSA, and the public key string is much smaller.

                                                          1. 3

                                                            The latter. Your keys (assuming they’re at least 2048-bit) are fine. If you’re implementing RSA yourself, there’s lots of room to screw things up, but done correctly, the cryptography is very secure, and OpenSSH is probably one of the most security-audited pieces of software in existence.

                                                            1. 2

                                                              Long battle-tested implementations are fine.

                                                            1. 19

                                                              I work in bioinformatics. Bioinformatics is a fast-moving field where there is a lot of code written in academic and research settings that is quickly moved into a production environment. A lot of the code is written by scientists who have no formal software engineering training. In bioinformatics, even more than other fields, the code is something that is a means to an end, and a necessary evil. Almost everyone who is writing the code is trying to solve a bigger problem and the code is a cost center in terms of effort and energy and so on.

                                                              Researchers have to use code written by other researchers. Code has conflicting dependencies. Code has been compiled by different compiler versions and dynamically linked against different library versions. Code is written in different languages and frameworks requiring widely different toolchains.

                                                              Docker has been a great solution to the problem of how to quickly install these kinds of tools and make them interoperate without spending impractical amounts of time and energy.

                                                              Does this encourage bad engineering practices? Yes. However, the alternative is to drastically slow down the pace of research in this sector to an impractical degree. The “proper” way to do the research would then be to hand off the software to a team of software engineers who rewrite the code, possibly in a different language, using “proper” software engineering practices. Only then would the tool be released, with a “proper” distribution mechanism, such that everyone could install it on all their systems. Leaving aside the question of whether this is even possible engineering-wise, given all the different languages and paradigms out there, it separates the original tool developers from the code, slowing down the evolution of the tool.

                                                              In fast-paced fields like bioinformatics and the other *-omics, tools and tool ecosystems are, in a way, not designed; they evolve. Docker is a boon for interoperability, helping to overcome the design-interface mismatches that come from such an evolutionary process. This is an awkward analogy, but Docker allows us to mix a giraffe head with a fish body and get away with it.

                                                              1. 10

                                                                This. As a researcher, I don’t want the bloat of installing docker on my systems. But when other people’s research code can “just work” on my machine with a docker run, I can get to work.

                                                                A fixed amount of overhead gives me access to the algorithmic (big-O class) improvements of a publication. Far too much research code grows decrepit and unusable due to non-operability.

                                                                1. 3

                                                                  A fixed amount of overhead gives me access to the algorithmic (big-O class) improvements of a publication

                                                                  I’d be curious to hear about how this works! That is kind of the promise, but I’ve never found the “ship a docker version” approach to provide me with anything approaching reusability for other projects. If I want to simply replicate the paper itself, yes, the docker approach is viable. It’s also viable if I want to experiment with variations on the original paper’s ideas. Then I download their archive, make modifications inside it, and run my own tests.

                                                                  But if I want to use it as a library-like dependency for my own separate project, things become messy. Am I really going to write my own research code inside this docker archive? Probably not. So I could depend on it externally, but the Docker dependency story is not really well sorted out. And what if I need 2 or 3 other papers’ results? Will my code really depend on 2 or 3 external Docker archives? Now my potentially not that complex research result becomes truly monstrous to reproduce. In many common cases, the whole thing quickly becomes worse than actually just putting in the time to reimplement the original paper, and not having to depend on a pile of trash-can Docker archives.

                                                                  I’m not against using other people’s code, really, but docker comes so far down the list that it might be in negative territory. If someone shipped a proper CRAN package encapsulating their work, I’m definitely interested. But a docker archive? Zero interest, less than reading the paper’s pseudocode.

                                                                  1. 2

                                                                    In many common cases, the whole thing quickly becomes worse than actually just putting in the time to reimplement the original paper, and not having to depend on a pile of trash-can Docker archives.

                                                                    [repeating something I said below]

                                                                    If you find yourself needing to build a big pile of other people’s stuff in a way that won’t drive you crazy and you’re willing to invest a bit in learning some tooling, I’d look at the Spack project.

                                                                    It’s similar to Homebrew, Nix, and EasyBuild but is focussed on scientific/HP applications (e.g. you can have multiple versions of an application installed simultaneously). It’s not a silver bullet, but it goes a long way towards organizing chaos.

                                                                    1. 1

                                                                      You bring up a fair point. There’s a difference between prototyping work by calling into one or more docker runnable projects via a CLI and handing around some (usually text-based) file between them, and building a reusable tool that someone else can build off of. Both are valuable. Docker is one way to enable the former, and you’re right that everything would rapidly kludge if it was exclusively done for the latter. What I failed to clarify is that it can be useful to test the need for the latter through the quick-and-very-dirty approach of the former.

                                                                  2. 7

                                                                    [edit: fixed a confusing use of “it’s”]

                                                                TL;DR: NOT this. Bad engineering leads to bad science.


                                                                    “Bad engineering practices” in a bioinformatics lab are no more (or less) acceptable than bad lab practices at the bench. They’re a slippery slope to bad science.

                                                                There are many, many bioinformaticians, biologists who can code, computer scientists dabbling in biology, etc., who have taken the time to learn how to use their tools effectively. They may consciously take on a bit of technical debt now and then, but in general the goals of their computational work are the same as their goals for the lab: reliability, repeatability, and reproducibility.

                                                                    It makes me sad and embarrassed for the/my discipline to hear claims that bad practices are normal or acceptable.

                                                                    In this context, my biggest complaint about Docker isn’t that it enables/encourages some of those bad practices, but that folks can use it as a smoke screen and hide a multitude of sins behind it.

                                                                • If I only had a nickel (but see below) for every time I’ve heard: “I use Docker, so I can reproduce my software environment.” When “Broken by default: why you should avoid most Dockerfile examples” came across Hacker News and generated a flurry of discussion, I felt deja vu all over again.

                                                                    • Docker’s instability makes running it in environments where people depend on the computing infrastructure (“production”) a headache (and money sink, but see below) for the admin team.

                                                                    • Docker generally invalidates security on shared machines; you shouldn’t let someone run Docker containers on a machine if you wouldn’t give them sudo access on that machine.

                                                                      If you have data that should have restricted access (e.g. contractual obligations, or personally identifiable info, or …), you should really think about it.

                                                                      If you have HR paperwork for your reports in your home directory and it’s mounted on the computing cluster, oh dear….

                                                                All that said, I see good use cases for container tech. I’ve been following Singularity with interest; it avoids a lot of the complexity and security issues, but it’s still possible to put a bow on a container full of stink and smile innocently. It’s also definitely possible to use Docker cleanly, to great effect, but it requires more work, not less.

                                                                    On a related note, I’ve used Spack to great effect in setting up bioinformatics computing environments (containerized or not).

                                                                I shouldn’t be too grumpy, I suppose. I do have nickels (and more) from many of the times I’ve helped clean up Docker-related messes in various environments. There’s a nice niche cleaning up engineering (aka “devops”) problems in the bioinformatics world.

                                                                    Will Work For Money, feel free to drop me a note.

                                                                    1. 3

                                                                      It makes me sad and embarrassed for the/my discipline to hear claims that bad practices are normal or acceptable.

                                                                  Nonetheless, it is true. I work (part-time) as a research software engineer at Oxford Uni, and have run courses on software engineering practices for researchers. Plenty of techniques that commercial developers might think normal (e.g. iterative development, using version control) are not present in research teams. We encourage them to adopt such practices, and we tell them why. Often, we find that students and postdocs are willing to engage with changing their workflows, but more senior staff are not interested. And, as they always say in those management books with fake gold leaf on the cover you buy from airport bookstands, change has to come from the top if it’s going to stick.

                                                                      1. 1

                                                                        There’s no question that there are labs with poor practices. There are plenty of tech companies with marginal practices too.

                                                                        But:

                                                                        • I don’t think that it’s normal (but I haven’t surveyed the entire world recently…); and
                                                                        • I definitely don’t think that it’s acceptable (and I know that there are many, many groups in academia and industry that feel the same way).

                                                                        But^2:

                                                                        • eyes-open technical debt isn’t always bad; and
                                                                        • “sometimes things don’t need to be reproducible” (see below).
                                                                      2. 2

                                                                        This might be true in some cases. In others though it’s docker vs a pile of e.g. Perl scripts which take in an undocumented custom text file format and output an equally undocumented text file format.

                                                                        In my limited experience, bullets 2 and 3 have very little relevance to most research environments.

                                                                        The goal is absolutely reproducibility.

                                                                        1. 2

                                                                          This might be true in some cases. In others though it’s docker vs a pile of e.g. Perl scripts which take in an undocumented custom text file format and output an equally undocumented text file format.

                                                                          What really gets my goat is “a pile of e.g. Perl scripts which take in an undocumented custom text file format and output an equally undocumented text file format” that gets wrapped up in a Docker container.

                                                                      True, it’s easier to run the script, but the “solution” suffers from bullets 2 and 3, plus it’s pretty likely that the Docker image was built with a combination of hackery, skulduggery and prayer (if it weren’t, then installing it without Docker would probably be straightforward).

                                                                          In my limited experience, bullets 2 and 3 have very little relevance to most research environments.

                                                                          This has been the case in my past two gigs, covering about 4 years. I’ve been doing this stuff (bio + computers) for quite a while.

                                                                          It’s definitely true that there is a wide variety of environments out there.

                                                                      3. 2

                                                                Bioinformatics is a fast-moving field where there is a lot of code written in academic and research settings that is quickly moved into a production environment. A lot of the code is written by scientists who have no formal software engineering training. In bioinformatics, even more than other fields, the code is something that is a means to an end, and a necessary evil. Almost everyone who is writing the code is trying to solve a bigger problem and the code is a cost center in terms of effort and energy and so on.

                                                                        This should terrify people who depend on results in that field.

                                                                        1. 2

                                                                          Just to jump on the other side of the fence for a moment, sometimes non-reproducible work is ok.

                                                                  One way to understand a process is to break it and then compare the broken version to the “normal” one. It is/was common to expose a bunch of organisms (e.g. a vial of fruit flies) to a mutagen (radioactive, chemical), then search for ones that were different (e.g. had white eyes instead of red). Those experiments were never repeatable. But it didn’t matter: if you got something interesting, you could run with it. If not, you went fishing again.

                                                                          But, the downstream work you did needed to be repeatable/reproducible.

                                                                  Likewise, if your program’s task is to search for a list of possibilities and suggest a research candidate, then it might well be fine if it’s really consulting /dev/random to make its choice, or mistakenly skipping the first element in an array, or whatever. One hopes that there’s more intelligence going into it, but bad suggestions are just going to waste your resources (assuming good downstream science).

                                                                          On the other hand, if your selections are e.g. going to guide patient care then they had better be repeatable, reproducible and defensible, lest they:

                                                                          • hurt people
                                                                          • ruin your career

                                                                          E.g. https://en.wikipedia.org/wiki/Anil_Potti

                                                                          (you can choose which is more motivating).

                                                                      1. 4

                                                                        It’s an interesting read, but I think the problem with this article is just the problem with narratives in general: they oversimplify things to the point that they are often more wrong than they are right. You need to have a lot of data before you can truly explain things accurately. For example, the first line and very crux of the article:

                                                                        why the lisp computer language is no longer widely used

                                                                        At what point was this ever true? According to my data, the number of people who can program in Lisps has never declined. In fact, it has steadily risen. Of course, the percentage of programmers who can program in Lisp, relative to other languages, may have declined (my database isn’t good enough yet to make such an assertion), but Lisp in absolute numbers has continued to grow.

                                                                        1. 4

                                                                          That’s kind of my reaction as well; I’m not sure I buy the premise. Besides Lisp “proper”, the article also seems to be looking at broad families of descendants when determining Lisp to have “failed”, but I’m not sure I agree with the grouping. For example, it judges that Algol successors succeeded, unlike Lisp successors. I can accept that C is an Algol successor, and succeeded, so I agree that Algol has been successful in a sense (through descendants, even though Algol itself was never a big hit).

                                                                          But where would you place something like Python? Viewed from the lens of circa-1975 programming languages, Python is a lot more like a Lisp than like anything else of that era… certainly more than it could be called a successor of Algol, Cobol, or Fortran.

                                                                          I think some of this is that people put too much emphasis on the syntax, so it’s “curly brace languages succeeded, while paren languages failed”. But there’s a lot more to these language families than syntax. Some of the things that distinguished Lisp compared to its contemporaries in that era were: garbage collection, dynamic typing, reflection, closures, common use of higher-order functions, etc.; and all these have gone on to quite a bit of success.
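
                                                                          To illustrate (my own toy example, in Python rather than any Lisp): several of those Lisp-pioneered features are bread-and-butter in everyday Python, even though the surface syntax owes nothing to S-expressions.

                                                                          ```python
                                                                          # A few Lisp-pioneered features as they appear in everyday Python.
                                                                          # (Garbage collection is there too, just invisibly.)

                                                                          def make_counter(start=0):
                                                                              """Closure: the inner function captures and mutates `count`."""
                                                                              count = start
                                                                              def counter():
                                                                                  nonlocal count
                                                                                  count += 1
                                                                                  return count
                                                                              return counter

                                                                          # Higher-order functions: functions passed around as ordinary values.
                                                                          squares = list(map(lambda x: x * x, range(5)))

                                                                          # Dynamic typing: the same name can hold values of different types.
                                                                          thing = 42
                                                                          thing = "forty-two"

                                                                          # Reflection: inspecting objects at runtime.
                                                                          public_attrs = [n for n in dir(make_counter) if not n.startswith("_")]

                                                                          if __name__ == "__main__":
                                                                              tick = make_counter()
                                                                              print(tick(), tick())        # 1 2
                                                                              print(squares)               # [0, 1, 4, 9, 16]
                                                                              print(type(thing).__name__)  # str
                                                                              print(public_attrs)
                                                                          ```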

                                                                          1. 1

                                                                            Python is a lot more like a Lisp

                                                                            This is a good point! And the same would go for Javascript, which is a lot more Lisp-like even though it has C/Java-like syntax.

                                                                            IMO Lisp has wildly succeeded, and its best days are still ahead.

                                                                        1. 4

                                                                          The style of writing and the types of claims remind me of the GWAN web server.

                                                                          1. 2

                                                                            I remember that site from a decade or so ago! It actually seems to have been cleaned up to be comparatively respectable now. Still some pretty strong claims, but the original site had a big rant alleging that Microsoft was behind an anti-GWAN campaign that involved collusion with anti-virus makers and Wikipedia to blacklist GWAN, due to MS’s “jihad against efficiency”.

                                                                          1. 4

                                                                            I noticed an issue in the release notes that might be important to know if you have a long-running Debian server that was first installed with Debian 8.0 or earlier (mine dates to 7.0) and has been upgraded since without a fresh install. The eth0/eth1 style interface names are now unsupported, and you need to migrate manually to the new-style systemd “predictable interface names”. I followed the instructions in the release notes and it went smoothly.

                                                                            The history seems to be: with the release of Debian 9.0, the new-style names were used by default on new installs, but the upgrader avoided renaming interfaces on existing installs to prevent breakage, since many scripts and things like /etc/network/interfaces likely reference the existing names. A temporary measure was put in place to map the old-style names to the ones systemd/udev wants, so they kept working. With Debian 10.0, this configuration is no longer officially supported, seemingly because upstream doesn’t support it, and Debian doesn’t want to try to maintain support indefinitely on its own. However, due to risk of breakage, they still aren’t automatically changed, so things might keep working for now (they kept working fine for me after upgrading, despite the warning). Nonetheless, I migrated to avoid future breakage at an unexpected time.
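
                                                                            For anyone wondering whether this applies to their machine, here’s a rough Python sketch (my own, not from the release notes) that just flags interfaces still using legacy kernel-style names; the actual migration steps are the ones in the release notes.

                                                                            ```python
                                                                            # Rough, read-only sketch: flag interfaces still using legacy kernel
                                                                            # names (eth0, wlan0, ...) that predate systemd/udev "predictable"
                                                                            # naming. Assumes a Linux /sys filesystem; it changes nothing.
                                                                            import os
                                                                            import re

                                                                            LEGACY = re.compile(r"^(eth|wlan)\d+$")

                                                                            def legacy_interfaces(sysfs="/sys/class/net"):
                                                                                try:
                                                                                    names = os.listdir(sysfs)
                                                                                except FileNotFoundError:
                                                                                    return []
                                                                                return sorted(n for n in names if LEGACY.match(n))

                                                                            if __name__ == "__main__":
                                                                                found = legacy_interfaces()
                                                                                if found:
                                                                                    print("Legacy-named interfaces still in use:", ", ".join(found))
                                                                                else:
                                                                                    print("No legacy-named interfaces found.")
                                                                            ```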

                                                                            1. 5

                                                                              I did not expect Wayland to be included as the default option in the stable release of such a conservative distribution as Debian. I think the GNOME team has made lots of improvements to their Wayland compositor, Mutter.

                                                                              1. 8

                                                                                Although a major change in one way, sticking with GNOME-on-X would also be a somewhat risky choice for a stable distribution at this point, because GNOME upstream has moved to Wayland as default. So Debian shipping GNOME-on-X in a new stable release would commit them to a 5-year maintenance cycle for a configuration that deviates from upstream. Another alternative would be to move to a default desktop whose upstream targets X, but moving away from GNOME as default would also be a pretty major change.

                                                                                It’s maybe worth adding that this is specifically a GNOME default. If you choose a different desktop than GNOME, most other desktop options that Debian ships will default to running on X.

                                                                              1. 3

                                                                                I wonder if OpenBSD will pick up the X development work now that RedHat is dropping it.

                                                                                1. 2

                                                                                  Most likely not, as X is a huge pile of code and the OpenBSD project is not that large. It is more rational and economical to add a shim to be able to run Wayland, and move on along with the rest of the world.

                                                                                  1. 1

                                                                                    Isn’t X notoriously insecure? I seem to remember issues with unencrypted tokens being slung around so one could launch windows on other machines.

                                                                                    1. 2

                                                                                      In general, you don’t expose X11 sockets on the internet any more. They’re unix domain sockets in /tmp/.X11-unix/. Remote X is tunneled over SSH.

                                                                                      The insecurity that people tend to complain about in X11 comes from any window (such as a web browser) being able to look at the resources like clipboard contents of other windows run by the same user.

                                                                                      1. 2

                                                                                        Besides apps being able to look at each others’ stuff, X11 is also a perennial source of privilege-escalation bugs, because some of it needs to run SUID, and it’s a large pile of old, mostly un-audited code. E.g. a recent example that affected OpenBSD.

                                                                                      2. 1

                                                                                        I’m no expert in X, but I have the same impression. As we all know, if security is an afterthought, it is usually far more expensive to add to a system, if it is possible at all. Given RedHat’s reluctance to further support the project, it can be called dead. It has been in maintenance mode for at least 5 years now. Also, when I tried to configure X recently it hurt so badly that I had to give up, and I’ll be happy when it is gone. (There is no config file; autodetection is the thing now. You can override parts of it, but you need to know what would be in the config file, and you cannot get a dump of the config as understood by the server… it was a PITA. I wanted to tweak some input preferences, but it hurt so badly I gave up.)

                                                                                        1. 1

                                                                                          I understand the impetus to start using X in Linux, because it was there and it worked and it enabled graphical interfaces “out of the box”. But it’s woefully behind the times now, and a lot of its complexity is tied to an obsolete use case (running windows remotely on more powerful machines).

                                                                                          With that said I haven’t really followed the debate around its replacement(s). My Linux use is almost all command-line based nowadays.

                                                                                          1. 3

                                                                                            and a lot of its complexity is tied to an obsolete use case

                                                                                            I don’t think that’s true. Remote windows are not particularly complex – the bulk of the complexity comes from a woefully misdesigned font and keyboard handling architecture, and a protocol that wasn’t particularly designed, and as a result is an incredible mess of special cases to parse and handle correctly.

                                                                                            1. 2

                                                                                              Sure, I totally agree. Now with Wayland in the works, hopefully stuff will get done soon. To be honest, session migration between remote and local terminals, in the sense that Windows RDP offers it, would be pretty nice. Sometimes I find it useful.

                                                                                    1. 6

                                                                                      Apple donated a whopping $500-999:

                                                                                      https://www.freebsdfoundation.org/donors/

                                                                                      I guess every penny is welcome, but it is sad considering how much they benefited from FreeBSD. Also interesting and impressive that Intel donated $250,000+.

                                                                                      1. 3

                                                                                        I’d be curious whether that’s a result of their employee-donation matching program. Apple does 2-to-1 matches on employee donations, so if Apple employees donated a collective $250-499 as individuals, that’d explain a $500-999 Apple contribution – if the FreeBSD donors list counts things that way. I notice Google is in the same tier, possibly for the same reason?

                                                                                        1. 3

                                                                                          I’m surprised Juniper Networks is not even in that list. Or Sony with their PS4, for that matter.

                                                                                          1. 1

                                                                                            There are also more ways to contribute to a project than just financially. Apple was foundational and crucial in TrustedBSD’s MAC implementation, which is still used today in macOS for code signing and in FreeBSD (and its derivatives, like HardenedBSD and Juniper’s JunOS). Not many are aware of just how much Apple contributes to open source, going back at least as far as TrustedBSD and continuing today with Darwin and llvm. (Holy cow did I state that awkwardly. I blame the lack of sleep. Or perhaps the wonderful spa date night I just had with the missus. Or both.)

                                                                                            I’ve also come to realize just how beneficial it is not to have an entitlement mentality. Apple is a for-profit business, answerable only to its shareholders. Apple’s lack of monetary contributions (regardless of the accuracy of such a claim) demonstrates their priorities, which may end up including hiring open source contributors to keep doing their great work, in the open and paid. Instead of a 501(c)(3) receiving funding, a family of five with three-point-one-four-one-five-nine dogs and a dead parrot named Steve, who died in a horrible plane crash, gets a paycheck instead.

                                                                                            So, perhaps there’s another side to the story that paints a different picture. The world would be a very boring place if everyone thought like I did.