1. 5

    I have a file of jokes of mine and this one is appropriate.

    With regards to Linux kernel development practices:

    The Linux kernel, due to C, is a monument to convention.

    Absolutely none of this nonsense is necessary; it’s anti-knowledge. Imagine what operating systems would be like if people weren’t determined to use garbage from the 1970s to reimplement garbage from the 1970s.

    1. 8

      Vaporware is always much better, faster, and more provably correct.

      1.  

        Windows NT and commodity clouds weren’t vaporware. Even Midori got built and deployed in the field. It turned into vaporware in the general market because Microsoft wanted nothing threatening their cash cow.

        1.  

          True enough, nobody ever said Windows NT was better, faster, and more provably correct. But it was written in C/C++, so that probably explains both why it works and why it’s not so good?

          1.  

            People definitely said the user experience was better than DOS/UNIX, I don’t know if it was faster (or resource-efficient) unless you’re comparing old Windows to modern Linux, and Shapiro wrote the definitive piece on its [in-]security certification. He had some vaporware himself there, which happened due to a mix of the market reality for high-security OSes and… Microsoft hiring him. Oh the irony.

            Then again, I usually think of MS Research and MS Operations (Windows etc) like different groups. MSR hired him. They do great work. MSO fucks up theirs to maximize numbers in annual reports. His essay called out MSO before being hired by MSR. So, maybe no irony even though “Microsoft” is involved in both situations.

          2.  

            It turned into vaporware in the general market because Microsoft wanted nothing threatening their cash cow.

            Is there a reference for this being the reason? Midori was super interesting and I find it hard to find info on it outside the blog post series.

            1.  

              I can’t remember if I have a source. This might be an educated guess. Microsoft has done everything they can with marketing, their legal team, and illegal deals to make Windows desktop and Windows Mobile (at one point) succeed against everything else. They tried lawsuits against Linux users. They pulled billions in patent royalties from Android suppliers. They’ll do anything to protect their revenues or increase them.

              Most of their profits in the Windows ecosystem come from businesses that are locked in to legacy code that runs on Windows and/or consumers that want to run Windows apps. Their other software, which they cross-sell, is built around Windows. Any success of Midori threatens that in exchange for unknown upside. A turf war between the group wanting to maintain Windows and a group pushing Midori almost certainly results in Midori losing.

              Further, they’d have to port their existing offerings to Midori to keep the cross-sells. Vista already showed how much they sucked at pulling off such a transition. That adds major risk to a Midori port. So, the final decision by the business people will be: Windows is a huge asset, Midori is a minor asset with high liability, and they should just back Windows/.NET, porting advances from Midori into it.

              That’s my educated guess based on both their long-term behavior and the fact that I can’t buy Midori.

              1.  

                Thanks for the response! I’m aware of the company’s history and can of course see how one can project forward to that conclusion, I just wanted to know if there was anything solid written about why the project came apart.

                1.  

                  I know the developers kept leaving. That’s usually a bad sign. Then, Duffy wrote this cryptic message on his blog:

                  “As with all big corporations, decisions around the destiny of Midori’s core technology weren’t entirely technology-driven, and sadly, not even entirely business-driven.”

                  Usually a sign management is being incompetent, scheming, or both.

      1.  

        That’s a really great development. A number of people don’t try to do anything to make life easier for people who want to build from source or package their programs, and some seem to intentionally make their life worse.

        I don’t want to name names, but there was one really nasty situation, when I absolutely needed a fresh build of a certain program because a customer’s system suffered from a bug that was fixed recently and the distro packages had no updates. It would have been easy enough to build from a tarball, but to make things worse, their CDN got messed up and I couldn’t download one, so I had to build from git. And then it turned out that making a buildable release tarball from the git source required a number of additional steps no one had bothered to automate or even document!

        The maintainers in the IRC channel were online and told me the secret steps, and the problem was fixed, but since then I’ve sworn to always check that my own stuff can be built on a clean setup by following its README mindlessly, and that when the build fails, it explains what went wrong. Hard, but I’m trying.

        1.  

          A number of people don’t try to do anything to make life easier for people who want to build from source or package their programs, and some seem to intentionally make their life worse.

          That’s not what this is about. They’re trying to address Karger’s compiler subversion (very rare) that Thompson popularized with Wheeler’s reproducible builds which requires additional steps on top of this that most people don’t do from what I can tell. The other Karger attacks were vulnerabilities in the OS or software (99+% of attacks), compilers introducing security problems (still a thing), hardware failures/leaks (re-discovered around 2005), and subtle backdoors in any of them requiring strong requirements-to-design-to-code correspondence (common risk, near-nonexistent mitigation). Also, Karger said secure the distribution with a security-focused SCM, certifying compiler, transport security, and local builds after re-running security analysis on source.

          With that backdrop, the mainstream approach is… assuming easy-to-make-malicious hosts, servers, and build systems with buggy compilers… to put enormous effort into ensuring everyone’s running the same binary from the same easy-to-make-malicious source. Then, they feel safer and accomplished. Karger is probably rolling in his grave at that shit. He was probably rolling in his sleep while alive.

          At least Mozilla is gradually rewriting risky portions of Firefox in Rust to reduce risk on that part of the stack. Some other projects attempted to make mature OS’s, usually FreeBSD for some reason, safer with compiler transformations. INRIA did CompCert as a certifying compiler. Aegis did an SCM with security improvements. These other folks high-five each other over matching hashes/signatures before they get hit by the same 0-days over the same preventable vulnerabilities that they’d get hit with if the build was non-reproducible. The 0-day will also be reproducible in effect on most boxes. On the bright side, reproducibility will at least help with debugging.

          1.  

            I know.

            My point is, before you can have reproducible builds, you need to have repeatable builds to begin with, and there are still projects failing even at that!

            1.  

              I 100% agree with that part of your comment. I feel for all the developers that have to deal with that crap.

              1.  

                There are worse cases actually: companies intentionally hiding parts of the build toolchain so that no one can actually build exactly what they build. pfSense, for example, put their image build tools behind an NDA that prevents you from distributing the builds.

                1.  

                  There are a lot of companies in proprietary software that make builds hard or restricted. I didn’t know pfSense did it. Quite contrary to what I’d expect of an “open source distribution.”

                  1.  

                    That was the biggest reason for OPNSense maintainers to make the fork. pfSense’s response to the fork was… quite a sight: https://opnsense.org/opnsense-com/

            2.  

              That’s not what this is about. They’re trying to address Karger’s compiler subversion (very rare) that Thompson popularized with Wheeler’s reproducible builds which requires additional steps on top of this that most people don’t do from what I can tell.

              Really? Wikipedia, at least, claims that the primary reason for reproducible builds is to solve the “lol the source code I built the binary with has a backdoor that the open source code doesn’t” attack. While I’m pretty sure Mozilla isn’t going to pull something like that, it is such an obvious and easy attack that it’s embarrassing so few OSS projects have tried to address it.

              1.  

                Just making the source code available with a hashed, signed package that someone can build solves that. The vast majority of people, even security people, won’t perform the necessary steps to ensure something with reproducible builds is secure. Most people also don’t care to build from source. So, whatever security it provides is minimal over securing the host OS, browser software, the repo, the transport, and compiler transformations: the risk areas that lead to most hacks of Firefox users. Mozilla Corp also had $7-8 million in profit the year I looked at their financials assessing what they could spend on improving performance and security.

                Like you said, Mozilla is unlikely to subvert the code they give to their users in ways that do real damage. They’ll just accidentally introduce vulnerabilities that would be easier to catch if the amount of labor invested in reproducible builds was instead invested in preventing and catching those vulnerabilities. These people are ignoring what leads to most hacks to focus on a risk that rarely if ever happens for openly-developed products like Firefox. Far as compiler-compiler subversion, it’s happened about 2-3 times that I know of in decades. The resulting mobilization was a massive amount of attention with about nobody building trustworthy compilers and/or improving repo security against hackers. I can count each on one hand if we’re talking highly-robust approaches.

          1. 5

            Not going to say it’s the right tool for every job, but… here’s a convolutional neural net digit classifier in Excel: https://www.richardmaddison.com/2018/05/03/building-convolutional-neural-networks-excel/

            1.  

              “I would also argue that the speed of Excel gives you time to think as the failures manifest themselves.”

              Talk about selling a liability as an asset haha.

            1. 20

              And yet I have worked as a DevOps engineer whose job was orchestrating spreadsheets in a brokerage. I had a script that would spin up specific versions of Windows, install specific versions of Excel on them, copy the last day’s spreadsheets into clean folders, then start executing them in a specific order.

              Once that was done everything was nuked and the results were saved to a network drive for the next day and other data pipelines.

              Someone had cost the brokerage a few tens of millions by running the sheets in the wrong order, which was why they were now locked down, with no one having access to the ones that were used for trades any more.

              This was not the only, or worst, example of Excel insanity in industry I’ve been privy to.

              There is a reason why we don’t let people build Lego bridges for civil engineering and then put them in production. That we let it happen in software engineering in the name of ‘usability’ says a lot about how immature we are. Especially since spreadsheets have already killed thousands in Europe: https://www.washingtonpost.com/news/wonk/wp/2013/04/16/is-the-best-evidence-for-austerity-based-on-an-excel-spreadsheet-error/

              1. 19

                Are those issues because spreadsheets are conceptually bad or because we haven’t bothered to make better infrastructure around spreadsheets?

                1. 5

                  This is an important point. There’s no fundamental reason modern VCS and deployment techniques cannot be applied to the spreadsheet concept. Thanks to software patents, you might have a tough time building a startup around the concept, though.

                  1. 7

                    Licenses.

                    What I was doing at the brokerage was pretty clearly not allowed by the licences of the software. No one would ever hear about it because the system was completely isolated and couldn’t phone home.

                    If I turned this into a startup I’d be sued to hell and back before you could say MVP. Patents are so far down the list of blockers for this I wouldn’t even think about them.

                    1.  

                      Okay, but that’s for a specific instance of spreadsheet software (excel) and actual spreadsheet code. I’m talking about applying modern software engineering best-practices (e.g. VCS and dependency management) to the spreadsheet concept in general.

                      1.  

                        You might as well be applying modern software engineering to Forth. No one uses it in business and there is no money in it from the hobbyists. If you’re not bit-compatible with .xls files you might as well not exist.

                  2.  

                    There are constantly startups trying to replace spreadsheets with something similar but better. I haven’t seen any of them make it. The moderator of Hacker News, Dan Gackle, even did one for YC called Skysheets. I don’t know any details about it. There was one that combined spreadsheets with a database that kept everything in sync instead of scattered around various PCs. Lots of clever ideas. I think an interesting study would be a survey of all the failed companies to figure out what’s actually going on.

                    Meanwhile, I think it’s a situation combining a massively-popular tool, its usability, the herd mentality that brings new people into groups’ existing habits, and especially its format, which was designed for lock-in with ridiculous switching costs. A bunch of Excel licenses with plumbing built around them is way easier to justify than getting off Excel in an already Excel-heavy industry, with lots to lose in a failed transition, and with managers who have seen failed transitions eliminate jobs and bonuses.

                    1. 1

                      Yeah, I read the GP comment and I thought “start-up opportunity”. Spreadsheets / Excel aren’t going away. Maybe better infrastructure for managing them is really what people would want / be able to use.

                      1.  

                        Airtable.

                        1.  

                          This seems like a replacement for Excel? I guess I did say “Spreadsheets” but I really meant “infrastructure around Excel”.

                    2. 13

                      And if those spreadsheets had instead been glorious Haskell programs it would have been impossible to run them in the wrong order?

                      1. 1

                        Yes.

                        1. 0

                          Or better yet, Idris, where it is trivial to prove mathematically that everything is happening in the correct order, by encoding state machines in types which then don’t let your programs transition incorrectly. :D

                          1. 4

                            How does Idris prevent you from running other programs out of order?

                            1. 5

                              By wrapping calls to those programs in an interface / type class of its own, and then controlling calls to that interface. The interface is stateful, that’s how it ensures correct protocols.
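                              That mechanism can be sketched outside Idris, too. Here is a minimal analog in Rust rather than Idris (all names are hypothetical, invented for illustration): each step returns a zero-sized “evidence” value that only the next step accepts, so an out-of-order pipeline fails to compile instead of failing at runtime.

                              ```rust
                              // Zero-sized "evidence" types: owning one proves a stage completed.
                              // In a real design their constructors would be private to the module,
                              // so callers could not forge them.
                              struct RatesFetched;
                              struct SheetsRun;

                              // Each step consumes the proof of the previous step and returns proof
                              // of its own completion, so only one call order type-checks.
                              fn fetch_rates() -> RatesFetched {
                                  // ...download the day's conversion rates...
                                  RatesFetched
                              }

                              fn run_sheets(_prev: RatesFetched) -> SheetsRun {
                                  // ...execute the spreadsheets that depend on fresh rates...
                                  SheetsRun
                              }

                              fn archive_results(_prev: SheetsRun) -> &'static str {
                                  // ...save results to the network drive...
                                  "archived"
                              }

                              fn main() {
                                  // The only sequence that compiles: skipping or reordering a step
                                  // leaves you without the evidence value the next step requires.
                                  let rates = fetch_rates();
                                  let run = run_sheets(rates);
                                  println!("{}", archive_results(run));
                              }
                              ```

                              This is the same trick the typestate pattern uses. Note it only proves ordering within one program, not across separately launched spreadsheets, which is where the Idris claim above gets optimistic.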

                        2.  

                            Total noob on spreadsheets. How does one use the result of one spreadsheet in another? Is VBA magic involved?

                          1.  

                            In Excel you can refer to data in another file by including its filename and the sheet within the file you want to reference. The complexity progresses something like:

                            Same sheet: =A3

                            Same file, different sheet: =Sheet3!A3

                            Same filesystem, different file: =[other.xlsx]Sheet3!A3

                            1.  

                              Ah thanks!

                              One more question: In the case of the OP, how does running it out of order work? Wouldn’t it have referred to empty cells and thrown an error? Or does Excel make up random values if you access a non-value cell?

                              1.  

                                Daily updates got propagated through. A simple example was some conversion rates grabbed online. The cells weren’t empty; they just held the previous day’s values, so nothing errored.

                        1. 17

                          Ever since I was a child, I’ve been a night person. I get my best work done between about 11 PM and 3 AM, with maybe an hour or two of variance on either side.

                          Now I have kids, and they get up early. My wife (bless her) gets up with them in the morning, but they’re still noisy and such and so it’s hard for me to stay asleep.

                          People told me that I’d adapt and start falling asleep earlier. Nope, just turns out I’m tired all the time.

                          1. 3

                            This reminds me of a post about night owls; here’s the link, I’m sure you’ll agree with some aspects: The Dawning Truth About Night Owls.

                            1. 2

                              Wow, this is exactly my situation as well. I feel you buddy!

                              1.  

                                I’m with you on that. Even close on the timing, except maybe shift it an hour or two earlier. I would go to sleep by 3am on most nights back in the day. My current position has me getting up early in the morning and leaving in the afternoon or evening. Keeps my brain in a tired fog most of the time. It was an interesting experiment to see if I’d adapt and to learn some new things. I’ll probably try to switch shifts, positions, or something soon since it sucks so much.

                              1. 6

                                I have a pretty similar story with a backend focus. Throughout high school I taught myself the basic CS fundamentals. I had taken the MIT OpenCourseWare course on algorithms, as well as a number of other courses. To give you an idea I had completed ~200 problems on Project Euler.

                                When I graduated high school in 2015, I had taught myself roughly the first two years of a typical CS curriculum. If I went to college I would spend the first two years covering stuff I already knew. The topics I saw after those first two years didn’t appeal to me. I didn’t think it was worth it to spend four more years in school to learn about those topics and get a degree.

                                Due to my algorithms knowledge, I had an easy time interviewing with companies. I eventually found my way to Heap. When I joined Heap, I knew next to nothing about databases. I could explain to you what a join was, but had never performed one myself. When I joined Heap, they made me the person who was working on scaling Postgres. Since Heap has a large Postgres cluster with 100s of TB of data, I got to learn a ton about how to optimize and scale Postgres. Initially I was the only person focused on scaling Postgres. After about a year a database team was formed and I soon became the leader of that team.

                                In April, I left Heap to use my Postgres expertise to start a business. I started Perfalytics, which was recently backed by Y Combinator. Right now we are focused primarily on providing tech support for Postgres. A lot of teams are using Postgres, but aren’t sure how to solve performance issues as they come up. We advise these companies as they scale. Over time we plan to automate ourselves away by building tooling that would give the same advice we would otherwise give.

                                1.  

                                  We advise these companies as they scale. Over time we plan to automate ourselves away by building tooling that would give the same advice we would otherwise give.

                                  Love the idea. There’s a lot of people in mid-sized to big companies that do that for their own jobs. Gives them more fun time. ;)

                                1.  

                                  For those new to Genetic Programming, you might like the Humie Awards that catalog instances where evolved solutions matched or exceeded human designers. Main site here with tons of links.

                                  1. 13

                                    There’s something cool about having a whole community on one machine. It’s like we’re all on one space-ship. We share the same silicon. The bits stay on the mothership.

                                    1. 5

                                      If you are on the same machine, finger and .plan work just as well, no need to reinvent them. ;)

                                      1.  

                                        The beautiful thing about the tilde-verse, which you’d never know until you have an account there and use it, is that it’s like a giant art experiment.

                                        People build things out of sheer whimsy, and other people use them and share in that whimsy. The result can often be quite beautiful in ways that are hard to find on the larger intertubes.

                                        This is a fine example.

                                        Could you use plan and finger? Kind of, but that wouldn’t create a timeline like this does, or collect all the .plans together in a ‘feed’.

                                        1.  

                                          Could you use plan and finger? Kind of, but that wouldn’t create a timeline like this does, or collect all the .plans together in a ‘feed’.

                                          There would definitely be value in building those things on top of .plan/finger.

                                      2.  

                                        This has fun privacy implications too. I’d like my social network to better model my actual meatspace social network. The problem, of course, is that people have many different meatspace social networks.

                                        1.  

                                          Except the physics of computer security, especially covert/side channels, make that impossible. You’ll always be more private if your friends are in a room together instead of on a server. Especially if there are obstacles to seeing or hearing things that people can optionally use.

                                      1.  

                                        Here’s the paper by Danvy and Nielsen in case you want to add the link to your blog article.

                                        1. 5

                                          I really want to like Rust, I truly do. But I feel I might not be the target audience. Most of the time I don’t need to write safe code. I think most people don’t need to write safe code all the time. Rust is by default safe, making it hard to program most of the time so that certain bugs can be minimized. The question of course then is, does the effort spent writing safe code all the time outweigh the hypothetical reduction in bugs?

                                          You might not care about writing safe code, but the users of your code certainly do.

                                          This C/C++ mindset really needs to die, and it needs to die fast.

                                          1.  

                                            You might not care about writing safe code, but the users of your code certainly do.

                                            If they did, they wouldn’t buy or use unsafe, buggy apps. They do, though. Almost all money goes towards those kind of apps. So, that’s what most users want if it gets them whatever they’re getting out of the apps. They’d be fine with QA improvements. They usually won’t quit using the apps if those improvements don’t show up, though.

                                            That’s why I focus on things like Design-by-Contract, quick code reviews, test generation, and program analysis that take little time with a big improvement in quality and maintainability. Then, I can try to sell the managers on improved development velocity with more predictability in the delivery schedule. These practices can contribute to those goals the company and their customers actually care about.
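                                            As a hedged illustration of how lightweight Design-by-Contract can be (the function and its contract are an invented example, not from any particular project), the contract is just assertions at the function boundary:

                                            ```rust
                                            // Design-by-Contract sketch: the contract is stated as runtime checks.
                                            // debug_assert! compiles away in release builds, so the checks cost
                                            // nothing in production while still guarding tests and debug runs.
                                            fn integer_sqrt(n: u64) -> u64 {
                                                let mut r: u64 = 0;
                                                // Loop invariant: r * r <= n at the top of every iteration.
                                                while (r + 1).checked_mul(r + 1).map_or(false, |sq| sq <= n) {
                                                    r += 1;
                                                }
                                                // Postcondition: r is the largest integer with r * r <= n.
                                                debug_assert!(r.checked_mul(r).map_or(false, |sq| sq <= n));
                                                debug_assert!((r + 1).checked_mul(r + 1).map_or(true, |sq| sq > n));
                                                r
                                            }

                                            fn main() {
                                                println!("{}", integer_sqrt(10)); // prints 3
                                            }
                                            ```

                                            The same shape works for preconditions (assert at entry), and the stated contract doubles as an oracle for generated tests, which is part of why these practices pay for themselves quickly.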

                                            For real quality/security, that’s gonna take regulation or courtroom liability to move those requirements from externalities to things management actually cares about.

                                            1. 9

                                              To be fair, there usually aren’t any safe, unbuggy apps, so they don’t really have a choice. :)

                                              As a counterexample, in the world of music production, peoples’ professional reputations depend on doing a vast amount of live real-time computation with a collection of software by multiple vendors, live on stage. The stability of that software is very much a consideration for the buyer.

                                              1.  

                                                That’s true. Yet, most attempts to introduce high-reliability products to the market resulted in market share going to the alternatives. Probably why it’s true. Same with secure and private. It can even be something free like Signal or cheap/fast like FastMail. They’ll go for something else en masse to the tune of billions of dollars. The usage and revenue numbers back this up in about every market segment. Even safety-critical is optimizing on size, weight, power, and cost in risky ways.

                                                If you’re smart and trying to maximize users/revenue, you’ll be focusing on what they care about the most. It’s not quality or security. Even in the security market, it’s mostly buzzwords and features instead of actual security. There is a market for high quality and security. It’s just tiny, with a harder sell and slower growth. I think there’s potential for increasing it by cross-selling the same stuff to the luxury markets with beautiful exteriors and brand names, with them fueling the development of what’s on the inside. Got the idea from Volkswagen, which built one or more models of Porsches and Beetles reusing parts inside.

                                            2.  

                                              For me balance is key. Your statement is wrong because all it takes is one user to refute it, and I’m more than willing to be that user. I turn off Spectre mitigations on computers that aren’t connected to the net. We all have different threat models. I’ll take performance over security sometimes, and security over performance at other times. In areas where this decision affects people’s lives significantly, I welcome government regulation.

                                            1. 9

                                              Maybe we should defend the 1x engineer: http://1x.engineer

                                              1. 8

                                                This whole thing seems like nonsense. I thought a 10x engineer was defined as one who delivers 10x the value (and a myth, according to the internet)… and a lot of the things on that 1x list delivers value. Over the last couple days, I’ve seen “10x engineer” get defined (IMO redefined) as “jerk” (which now very much exists), and now 1x engineer - instead of just being “average skill” - is being redefined as “not a jerk” just to contrast with the alleged 10x.

                                                And it is now a useless term completely divorced from reality. A lot of extraordinarily productive programmers are team players. A lot of average programmers are arrogant jerks.

                                                1.  

                                                  Yeah, 10x was originally about 10x the value. It’s better to keep that definition since Nx programmers actually exist. They’re also rare and often problematic enough that we can continue critiquing companies whose HR focuses on them.

                                                2.  

                                                  I love the retro styling on that.

                                                  1.  

                                                    The mouse cursor is a nice touch

                                                  2.  

                                                    I notice that there are a lot of overlapping traits between 1x and 10x engineers.

                                                    1. 3

                                                    If nothing else (I digress), it provides a substantial list of tools relevant to solving the problem at hand, and where (and briefly how) they’re used. That alone is an invaluable start for further research into the topic, and a goldmine for a heads-up as to how secure systems can be architected.

                                                      1. 2

                                                        Like hell. They mention all kinds of tools for readers to look into, how they use some of them, and a bunch of articles and presentations. Can’t see how you equate that to zero content.

                                                        1. 9

                                                          This is heavy on promoting their use of formal methods. I think it’s a mix that mostly doesn’t involve formal methods. One component is probably customized stuff that attackers have no 0-days for since they simply don’t have the code. Another might be the fact that they do get breached but it’s not reported. That’s been very common in large companies for a long time.

                                                          1. 20

                                                          An additional factor, which the article touches on but only dismissively, is that the big three have a budget that allows them to “Get sleep, Eat right, and Exercise”. Google, AWS, and MS spend a lot of money on security and they regularly require issues to be addressed when they are found. Part of the reason they can afford to do so is that they have enough budget and market share that they can afford to delay a release, or put engineers on fixing a security bug. But part of it is just that they take it seriously. The magic bullet here is commitment and follow through.

                                                            1. 6

                                                              I guess we’ll see how well formal methods hold up once one of the big three starts losing their way into a Yahoo! position, and the ninjas have to be replaced with mall cops.

                                                              1. 3

                                                                I suspect the first thing they’ll do (at least in the AMZ case) is stop making changes to established products (I mean, they already barely ever make changes to established products). In that case, the security system is already built and doesn’t really need an awful lot of additional defence.

                                                                Most security flaws are introduced when you add new features to an existing thing.

                                                                1. 2

                                                                  Interesting idea. One difference is that they mostly have other lines of business that subsidize the cloud ambitions (ads/search, OS and office suite, etc.), whereas Yahoo didn’t.

                                                                2. 4

                                                                  The magic bullet here is commitment and follow through.

                                                                  As a metapoint, this is pretty much the magic bullet for nearly everything I’ve seen in a serious sense: skill at music, painting, staying fit, academics, you name it, that’s how to make meaningful improvements.

                                                                  The other metapoint is that you are mostly likely to succeed at what you prioritize: given the n hours in a day, the hours spent are what you prioritize (contra what you put on your sprint board, posters, whatever). Thus if you want security, you need to have k > 0 hours/day devoted to security in a good faith effort.

                                                                  1. 3

                                                                    That sounds about right.

                                                                  2. 1

                                                                    All US states require some notification for security breaches that affect their residents.

                                                                  1. 2

                                                                    I can’t remember if this is the one I submitted a while back that @andyc had good commentary on. Lobsters, DDG, and Startpage are giving me nothing for some reason. Andy, do you have a link to that Lobsters thread?

                                                                      1. 6

                                                                        Yup, that’s it. There are currently two different academic projects about shell:

                                                                        1. This submission, Smoosh, by Greenberg et al. I’ve been having some nice discussions with Greenberg about shell. He reported a good bug in OSH, and got me interested in running some more test suites.
                                                                        2. That one from a group in France. Morbig is a static POSIX shell parser (using a grammar and Menhir), and Colis is a shell-like language that you can translate certain shell scripts to.
                                                                        • In contrast, Smoosh uses the dash parser via “libdash”, i.e. its syntax is abstract rather than concrete.
                                                                        • In contrast, OSH’s parser is hand-written (although the lexer is generated and does a lot of the heavy lifting).

                                                                        I haven’t been in contact with group #2, but I gave feedback on this paper (#1) last week. I was impressed by the empirical evaluation, and it got me interested in the other test suites, as mentioned. I also found the written POSIX spec to be somewhat incomplete / underspecified, so it’s nice to have this executable semantics to use as an oracle.

                                                                        My main quibble was that I believe “word expansion” (string-based rewriting in stages) is essentially an implementation detail, and not a fundamental feature of the shell language, or what makes it good for anything in particular.

                                                                        I prefer to think of shell as a “normal” programming language, except that it has several sublanguages. I wrote this wiki page after reading the paper, and sent it to Greenberg:

                                                                        https://github.com/oilshell/oil/wiki/OSH-Word-Evaluation-Algorithm

                                                                        Basically in OSH, command / word / arith / bool are mutually recursive sublanguages, each with their own parser and evaluator. I don’t think there needs to be a notion of “expansion” separate from evaluation. For example, in the Oil language, there won’t be a splitting stage (use arrays instead), and globs will be statically parsed.
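
                                                                        That mutual recursion is easy to see in ordinary shell code: a command contains words, a word can embed arithmetic, and arithmetic can embed a command substitution, which starts the cycle over. A tiny illustration (POSIX shell; the values are made up):

                                                                        ```shell
                                                                        # command -> word -> arithmetic -> command substitution -> word ...
                                                                        echo "total=$(( $(echo 2) + 1 ))"   # prints total=3
                                                                        ```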

                                                                        Basically, I think splitting, dynamic globbing, and dynamic arithmetic are all mistakes. They’re the source of all the advice to quote everything that we’ve had to drill into every new shell programmer’s head for decades. Quoting inhibits some parts of the expansion pipeline.

                                                                        Try !qefs in #bash on Freenode. That gives you:

                                                                        "$Quote" "$Every" "$Fucking" "$Substitution"
                                                                        

                                                                        (I googled that and got one of my own blog posts back :) https://www.oilshell.org/blog/2017/02/26.html )
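
                                                                        The splitting and dynamic-arithmetic behavior described above is easy to reproduce in any POSIX shell; a minimal sketch (the variable names are only for illustration):

                                                                        ```shell
                                                                        # Word splitting: an unquoted expansion becomes multiple arguments.
                                                                        f='a b'
                                                                        set -- $f            # unquoted: splits on whitespace
                                                                        echo $#              # prints 2
                                                                        set -- "$f"          # quoted: stays one word
                                                                        echo $#              # prints 1

                                                                        # Dynamic arithmetic: a variable's string value is substituted into
                                                                        # the expression and re-parsed at evaluation time.
                                                                        x='2+3'
                                                                        echo $(( $x + 1 ))   # prints 6
                                                                        ```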

                                                                        1. 2

                                                                          Yup! Thanks! I’ll save it so it doesn’t happen again.

                                                                      1. 3

                                                                        Got this from HN. One of the authors is there answering questions. Interestingly, one claim was that the training could be done with “under $150 on cloud computing services.”

                                                                        1. 6

                                                                          I like the idea of Gopher as a suckless alternative to the more and more complex web. However, I wonder why they didn’t implement a new, simple protocol from scratch. Gopher suffers from a lot of legacy and had virtually no usage apart from sporadic support in some terminal web browsers.

                                                                          The tradeoff is critical in my opinion, and if Gopher takes off, it’s a lost chance to simplify the protocol drastically.

                                                                          1. 13

                                                                            There is a new protocol being developed called Gemini that is about as simple as gopher, but 1) includes status codes (not found, redirect, okay, temporary error, permanent error) 2) uses MIME types when delivering content and 3) is exclusively served up via TLS. It’s not finalized yet, but there are at least three gemini servers running that I know of.
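
                                                                            For context, a Gemini response begins with a header line: a two-digit status code, a space, and a meta field (the MIME type on success), terminated by CRLF. A hedged sketch of splitting such a header with plain parameter expansion (the sample header is invented, and the draft spec may still change):

                                                                            ```shell
                                                                            header='20 text/gemini; charset=utf-8'
                                                                            status=${header%% *}   # everything before the first space
                                                                            meta=${header#* }      # everything after the first space
                                                                            echo "$status"         # prints 20
                                                                            echo "$meta"           # prints text/gemini; charset=utf-8
                                                                            ```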

                                                                            1. 5

                                                                              The complexity of the web does not come from HTTP, so why focus on the protocol? The issue is more about HTML+CSS+Javascript. Why not build a web browser which only accepts Markdown instead of HTML?

                                                                              1. 1

                                                                                Yeah, HTTP is pretty simple. HTML 3.2 was pretty straight-forward. Even Dillo runs it. I figure a subset of HTML mixed with a non-Turing Complete subset of CSS1-3 could be fine. If scripting, make it optional, sandboxed, and native like Juice Oberon.

                                                                                1. 1

                                                                                  Yeah. I see nothing wrong with HTTP. In fact, it was actually pretty fun to develop an HTTP server (see quark). Gopher is fighting an uphill battle of course. For the whole benefit of the web, I think it makes more sense to encourage simplicity. Switching over to a completely new technology does not sound realistic, especially when you can’t serve ads with it.

                                                                                  1. 1

                                                                                    Not being able to serve ads is the point of Gopher. At least that’s the impression I get from proponents of the protocol here on Lobste.rs and elsewhere.

                                                                              1. 2

                                                                                Links in the description to referenced projects. Five submissions in one if you don’t count KLEE, which we mention often.

                                                                                1. 20

                                                                                  I’ve been using Racket as my daily driver for 10 months at this point and I much prefer it to Python, which I had previously been using professionally for a decade. Everything is more well thought out and well integrated. The language is significantly faster, the runtime is evented and the concurrency primitives are based on Concurrent ML, which I find to be a nice model. There is support for real parallelism (in the same manner that Python 3 is going to support real parallelism in the future: one interpreter per system thread, communicating between them via channels). I could go on and on. The overall experience is just much, much nicer.

                                                                                  The only real downsides are

                                                                                  • the learning curve – there is a lot of documentation and a lot of concepts to learn coming from a language like Python – and
                                                                                  • the lack of 3rd party libraries.

                                                                                  To be honest, though, I don’t consider the latter that big of a deal. My approach is to simply write whatever library I need that isn’t available. I’ve released 11 such libraries for Racket in the past year, but I only invested about two weeks total working on all of them and the upside is they all behave in exactly the way I want them to. Part of the reason for that is that you can get shit done quickly in Racket (not unlike Python in that regard) and part of it is knowing what I want and exactly how to build it which comes with experience.

                                                                                  EDIT: I just ran cloc over all of my racket projects (public and private) and it appears I’ve written over 70k sloc in Racket so I would say I’m way past the honeymoon phase.

                                                                                  1. 14

                                                                                    To be honest, though, I don’t consider [the lack of 3rd party libraries] that big of a deal. My approach is to simply write whatever library I need that isn’t available.

                                                                                    I use Python almost exclusively for its ecosystem, and I imagine that’s pretty common. As much as I would love to reinvent the world, I don’t think that reimplementing numpy (for example) is ever going to be on my plate. But, for projects with limited needs for external libraries, there are certainly many better languages, and Racket’s a great one.

                                                                                    1. 3

                                                                                      the learning curve – there is a lot of documentation and a lot of concepts to learn coming from a language like Python

                                                                                      I’ve been meaning to pick up Racket for a few weeks now. Also coming from a language like Python, are there any specific resources you would recommend?

                                                                                      1. 12

                                                                                        Racket’s documentation is excellent.

                                                                                        1. 5

                                                                                          I’ve been using a combination of the official guides, the reference and reading the source code for Racket itself and for whatever library I’m interested in using. I also learned a lot by joining the racket-users mailing list.

                                                                                          If you’re like me, you might be used to skimming the Python documentation for the information that you need. I learned the hard way that it’s not a good idea to do that with Racket. You’ll often save time by just taking a few minutes to read and absorb the wall-of-text documentation you hit when you look up a particular thing you are interested in.

                                                                                          You might also find this blog post by Alex Harsányi useful.

                                                                                          1. 2

                                                                                            They have an official book that teaches both programming and Racket. Might be worth looking at.

                                                                                        1. 12

                                                                                          My brother, who studies maths, just took an exam for the programming course at his uni, which was taught in C using a terrible old IDE and, judging from the exam questions, seemed to mostly focus on undefined behavior. The high school programming class was similar, from what he told me.

                                                                                          I’m baffled that this is considered acceptable and even normal, and that Racket, with its beautiful IDE, its massive standard library and its abundance of introductory programming course material is not even considered. I know there’s a lot of understandable reasons for this, but it’s still so backwards.

                                                                                          1. 8

                                                                                            Ha! Yes. That reminds me how angry I used to get about mediocre, obsolete, industry-driven CS pedagogy as a student. I dealt with it in part by finding a prof who was willing to sponsor an independent study course (with one other CS student) where we worked through Felleisen’s How To Design Programs, using what was called Dr Scheme at the time. But eventually I gave up on CS as a major, and switched to Mathematics. Encountered some backwardness there too, but I’ve never regretted it – much better value for my time and money spent on higher ed. The computer trivia can always be picked up as needed, like everybody does anyway.

                                                                                            From what I understand, my school now teaches the required intro CS courses in Python. This seems like a reasonable compromise to me, because average students can get entry-level Python jobs right out of school.

                                                                                            1. 7

                                                                                              As someone who has had to deal with a lot of code written by very smart non-computer-scientist academics, please be careful telling yourself things like “The computer trivia can always be picked up as needed”. Good design is neither trivial nor taught in mathematics classes.

                                                                                              Usually isn’t taught in CS classes either, I confess, but the higher level ones i’ve experienced generally at least try.

                                                                                              1. 3

                                                                                                I agree completely, and I actually took most of the upper-division CS courses that seemed genuinely valuable, even though they didn’t contribute to my graduation requirements after I switched. (The “software engineering” course was… disappointing.) But I’ve learned a ton about good engineering practices on the job, which is where I strongly suspect almost everybody actually learns them.

                                                                                                I currently deal with a lot of code written by very smart CS academics, and most of it is pretty poorly engineered too.

                                                                                            2. 4

                                                                                              Racket is used in the intro course at Northeastern University, where several of the developers are faculty, so there’s at least one place it’s possible to take that route. I think this might be either the only or one of the only major universities using a Lisp-related language in its intro course though. MIT used Scheme in its intro course for years, but switched to Python a few years ago.

                                                                                              I haven’t been seeing much C at the intro level in years though (I don’t doubt it’s used, just not in the corners of academia I’ve been in). We use Python where I teach, and I think that’s overwhelmingly becoming the norm. C is used here only in the Operating Systems class. When I was a CS undergrad in the early 2000s, seemingly everywhere used Java.

                                                                                              1. 3

                                                                                                Sounds like the exam was designed to teach the sorts of things he’ll be asked in programming interviews. Now he has great “fundamentals”!

                                                                                                1. 3

                                                                                                  Same here. Professors suck at my university, which happens to be one of the top universities in China (it’s sponsored by Project 985). Our C++ exams are mostly about undefined behavior from an infamous but widespread textbook, the SQL course still teaches SQL Server 2008, which reached its EoL over 5 years ago and cannot be installed on a MacBook, and it’s mandatory to learn SAS, the Legendary Enterprise Programming Language (SAS is mostly used in legacy software). Well, I’m cool with it because I’m a fair self-learner, but many of my fellows are not.

                                                                                                  I have a feeling that the professors are not really into teaching, and maybe they don’t care about the undergraduates at all. Spending time on publishing more papers for themselves is probably more rewarding than picking up some shiny “new technologies” which can benefit their students. I guess they would be more willing to tutor graduate students which can help to build their academic career.

                                                                                                  1. 1

                                                                                                    Our first three programming courses were also in C (the first two were a general intro, the third one was intro to algorithms and data structures). After that, there was a C++ course. This is the first time I had an academic introduction to C++; I already knew it was a beast from personal use, but seeing it laid out in front of me in a few months of intense study really drove the point home. I was told this was the first year they were using C++11 (!)

                                                                                                    Programming education in math departments seems to be aimed at making future math people hate it (and judging by my friends they’ve quite succeeded, literally everyone I ask says they’re relieved that they “never have to do any programming again”).

                                                                                                    1. 2

                                                                                                      Programming education in math departments seems to be aimed at making future math people hate it

                                                                                                      Exactly! I can’t imagine how somebody with no background in programming would enjoy being subjected to C, let alone learn anything useful from such bad courses, especially at university age.

                                                                                                      1. 2

                                                                                                        I thought C was awesome when university gave us a 6-week crash course in it; we had to program these little car robots.

                                                                                                        1. 4

                                                                                                          “6-week crash course in it” “program these little car robots.”

                                                                                                          The choice of words is interesting given all the automotive C and self-driving cars. Is it your past or something prophetic you’re talking about?