About the no silver bullet thing.
I’ve been wondering lately: if an organization wanted to maximize the number of features/products it could develop while minimizing the number of developers (they just cost money!), what should it do?
In my own experience, writing OCaml feels like a much more productive use of my time than writing Java. A lot of the things I hear Java programmers say are good about their environment tend not to be problems I notice in OCaml. But a 10x improvement? I don’t think so, not for me at least. And OCaml has weird syntax, so very few people want to use it in production.
But then one reads about those k programmers, who seem able to solve very big and complicated problems in small amounts of code quite quickly. But k is very odd-looking, so very few people seem to be into it (and there’s the licensing thing, but I don’t know of many people who got to the point of wanting to use k and were stopped by the license). k programmers might actually be doing 10x the work of the Java programmer.
Then, prior to being a Java monkey, I was an Erlang monkey. The system I worked on was a mess. A lot of the developers weren’t the most skilled, and a lot of bad code and architecture made it into production. But the Erlang model kept the thing resilient to terrible code, so the system kept working despite all of this. Sure, programmer errors that just did the wrong thing were an issue, but a lot of bad code works fine on the happy path and only fails in unexpected situations. The Erlang VM handles those well as long as you use OTP. The Java systems I’ve seen subjected to the same abuse tended to collapse in on themselves quite a bit earlier; at some point, adding features required a rewrite just because the system was too unstable. In some cases I think the Erlang programmer can do 10x what the Java programmer can. The cases where they can’t tend to revolve around specific integrations that Java happens to come with, or performance needs. But Erlang looks weird, so nobody is interested in using it. Maybe Elixir will save the day there, but who knows.
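(As a sketch of what I mean by the Erlang model: supervisors restart crashed workers, so one bad code path doesn’t take the whole system down. This is a toy Python imitation of the “let it crash” idea, not real OTP; all names here are made up.)

```python
# Toy imitation of OTP-style supervision: a worker that crashes on
# unexpected input, and a supervisor that absorbs the crash and keeps
# the rest of the system running. Hypothetical names, not real OTP.

def flaky_worker(task):
    """Works on the happy path, blows up on unexpected input."""
    if task < 0:
        raise ValueError("unexpected input")  # the 'bad code' failure mode
    return task * 2

def supervise(worker, tasks, max_restarts=3):
    """Run tasks; on a crash, 'restart' and keep going."""
    results, restarts = [], 0
    for task in tasks:
        try:
            results.append(worker(task))
        except Exception:
            restarts += 1
            if restarts > max_restarts:
                raise  # too many failures: escalate, like OTP restart intensity limits
            results.append(None)  # failed task is dropped, system lives on
    return results

print(supervise(flaky_worker, [1, -5, 3]))  # → [2, None, 6]
```

The failing task is isolated instead of taking out the whole run, which is the property that kept that messy system alive.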
So, I don’t know if there is a silver bullet, but it does make me think that the C heritage that seems required to make a language popular might also be blocking productivity wins. This is all based on my own observations (and I do dislike Java, so that doesn’t help), so maybe I’m way off. Also, even if the k programmer is 10x faster than the Java programmer, that doesn’t necessarily mean the project will deliver 10x faster. But it should be faster if that’s true. At the very least, one can iterate faster on customer needs/ideas.
You’ve identified two things here: (a) languages that make it easier to express the solution to your problem with minimal friction from the language itself; (b) platforms that respond to failure better. On (a), I’ll add that malleable languages with good tooling, such as commercial Smalltalk or Common Lisp, boost productivity even further. Then, you want to be able to find errors and get right to them; basic Design-by-Contract helps narrow problems down quickly. On platforms, OpenVMS clusters and the NonStop architecture have been used in systems that ran a long, long time.
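(To illustrate what I mean by basic Design-by-Contract: preconditions and postconditions checked at the boundary, so failures point right at the broken call instead of surfacing three layers down. A toy Python sketch with made-up names; real DbC systems like Eiffel’s do much more.)

```python
# Minimal Design-by-Contract sketch using asserts as contracts.
# The function and its parameters are hypothetical examples.

def transfer(balance, amount):
    # Preconditions: reject bad calls at the boundary.
    assert amount > 0, "precondition: amount must be positive"
    assert amount <= balance, "precondition: cannot overdraw"
    new_balance = balance - amount
    # Postcondition: the result is consistent with the inputs.
    assert new_balance >= 0, "postcondition: balance went negative"
    return new_balance

print(transfer(100, 30))  # → 70
```

When a caller violates the contract, the assertion names the exact broken obligation, which is the “narrow problems down quickly” part.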
The one thing missing is monitoring and update capabilities along the lines of Nygard’s book Release It!. Combine all of these to get fast-moving, robust systems; from there, trade one off against another as that’s always necessary. I’d also have experienced programmers at least on the review process to catch any obvious mistakes, and invest in the ones you have to get them good enough to raise the baseline. The cheapest will screw things up, while many of the better ones will leave for better offers. Maybe make design and coding standards part of the onboarding process, with the codebase itself teaching new hires some of them.
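(One example of the kind of stability pattern Nygard describes is the circuit breaker: after enough failures, stop calling the broken dependency and fail fast. A stripped-down Python sketch of the idea, with no half-open state or timers; all names are made up.)

```python
# Toy circuit breaker in the spirit of Release It!: after max_failures
# consecutive errors, stop calling the dependency and fail fast.

class CircuitBreaker:
    def __init__(self, call, max_failures=3):
        self.call = call
        self.max_failures = max_failures
        self.failures = 0

    def __call__(self, *args):
        if self.failures >= self.max_failures:
            # Circuit is open: don't even try the broken dependency.
            raise RuntimeError("circuit open: failing fast")
        try:
            result = self.call(*args)
        except Exception:
            self.failures += 1
            raise
        self.failures = 0  # a success resets the failure count
        return result
```

The point is containment: a dead downstream service costs you an instant error instead of a pile of threads stuck waiting on it.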
“They propose having a project saboteur deliberately introduce flaws into the program to see if your testsuite detects them. I’ve never heard of this idea. I wonder how well it works.”
I get to cite the early inventors of INFOSEC again, since they did exactly that in their developments to test the evaluators and their processes. Karger did it in MULTICS and in one other work I can’t remember.
He and Schell worried most about malicious developers. That was a major reason for formal methods, traceability, and so on. They also liked to simulate malicious developers. See Section 3.1, where he seeds trap doors into the system. He also invents the compiler-compiler subversion with PL/I as the target, but doesn’t implement it; he just tells them to watch out.
In the other one, the authors slipped one or more errors into the formal specifications on purpose. The evaluators were supposed to find mismatches between the formal specs and the implementation as part of their work. They’d just send a report about it asking for a justification, with a fix demanded for anything that didn’t seem justified. That’s where the hand-waving can kick in. In that case, though, the authors admitted the specs didn’t match, since it was a test of the pen-testers. The latter were clearly paying attention to detail.
It’s a good thing for security evaluations. The finds can even be a temporary boost of energy/morale for the evaluators if they don’t know they’re fake until the end. You’re right that it can get adversarial if they’re not expecting it, though: they’ll resort to name-calling, or say it’s unprofessional, or that people are wasting their time. Stuff like that.
Can’t access the page. Redirects to HTTPS but that fails. https://archive.is can’t access it either.
@tedu is his own certificate authority. See basically any comment thread for a post from his domain for more information.
Maybe we need to temporarily add something to the Lobsters engine that looks for “can’t access,” “HTTPS,” and so on in comments on posts from @tedu’s domain. Then it automatically deletes that comment or sub-thread and sends a private message to the poster with a link to tedu’s post on what he’s doing and why. Maybe add a new flag called Ted’s Prank for the ones the automated system doesn’t catch.
Then the DDoS of Lobsters comment sections might be eliminated. That he co-opted so many boxes to do it with just one blog post is reminiscent of DNS amplification attacks. As usual, the attacker also retains deniability. Impressive.
Google won’t index it, for your safety, but they still cache it.