See also: MAYHEM, the winner of the recent DARPA Cyber Grand Challenge, and SAGE, which has been in large-scale use at Microsoft for years (more).
Sadly neither is open source, which makes it hard to verify their effectiveness on real-world software. Comparatively dumb coverage-guided fuzzers (AFL, LibFuzzer, syzkaller, etc.) certainly have a much bigger impact on open-source software than any tool employing symbolic execution. This might simply be because no one has yet created a tool as streamlined and robust as AFL, but it might also reflect limitations of the technology itself: after all, most programs are orders of magnitude more complex than coreutils-style utilities.
I’m not so confident about evaluating ‘competing’ analysis technologies by the number of vulnerabilities they produce. Fuzzers offer enormous flexibility: they are frequently run without any modification to the underlying source code, without building incidental machinery, and without any real understanding of the codebase. Contrariwise, in the prevailing atmosphere of intense focus on a small number of programs that have been exhaustively analysed yet are still suspected to be vulnerable (browsers, PDF readers), it’s hard to imagine a scenario in which SAT solvers and symbolic analysis don’t play an important role.
I do understand the hesitation, but hyperbole is being traded in both houses: security analysts pan the state of the art while ‘Academia’ (whatever that designation might imply) prophesies a fundamental shift to symbolic or even artificially intelligent analysis.
In spite of this, I’ve personally used these tools and their siblings to check real-world code. They assisted me not only in finding vulnerabilities but also in generating several exploits (as well as plenty of ordinary bugs) in complex software like Plan9’s JPEG parser and the Erlang runtime system. Erlang has been reviewed by dozens of extremely intelligent people coding in a straightforward style, and fuzzers have been run on virtually every major component, along with incidental artifacts like EPMD. Yet I still found four remotely exploitable vulnerabilities (one stack overflow, three heap overflows), all of which I can say with confidence could never have been found by a fuzzer.
Comprehensively analysing even modestly complex software would have been impossible, just as your comment suggests, but by relying on my own intuition about the weaknesses of software and the people who write it, along with perseverance and the extraordinary speed of my machine, I could semi-exhaustively search for problems.
Yes, it was a huge hassle. The flashiest and most visible symbolic execution tools turn out either not to work or to be extremely special-case, while the stuff that does work (like KLEE) reads like Greek. I had to learn enough about SAT solvers to earn a Knuth check (literally). I’m confident that none of this is in any way ‘essential to the art’: all of it could be trivialized by a visionary team deciding to tackle this problem in the public domain.
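For readers unfamiliar with the technique being discussed: the core idea of symbolic execution is to collect the branch conditions along a program path (the “path constraints”) and then solve them to produce a concrete input that reaches the target. Here is a minimal, self-contained sketch in Python with a toy buggy parser and a brute-force “solver” standing in for the SMT solver a real tool like KLEE would use; the function names and the 32-bit-wrap bug are invented for illustration:

```python
def parse(n):
    """Toy 'parser' with a bug hidden behind two branches."""
    if n > 0:
        size = (n * 4) & 0xFFFFFFFF    # 32-bit wrap, like C arithmetic
        if size < n:                   # overflowed: buffer under-allocated
            return "overflow"
    return "ok"

# Path constraints for the buggy path, written out explicitly:
path_constraints = [
    lambda n: n > 0,
    lambda n: ((n * 4) & 0xFFFFFFFF) < n,
]

def solve(constraints, domain):
    """Find a concrete input satisfying every constraint (brute force)."""
    for candidate in domain:
        if all(c(candidate) for c in constraints):
            return candidate
    return None

# Searching powers of two keeps this toy 'solver' fast; a real engine
# would hand the constraints to an SMT solver such as Z3 or STP.
crashing_input = solve(path_constraints, (1 << k for k in range(32)))
print(crashing_input, parse(crashing_input))   # 1073741824 overflow
```

A random fuzzer is unlikely to stumble onto one of the few 32-bit values that satisfy both branches, which is exactly the kind of path where constraint solving earns its keep.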
Many of the machines you’ve listed here (notably HP’s and Dell’s) are fantastic computers! Even better, those manufacturers are also leaders in ethically sourced laptops!
I maintain a guide on ‘Conflict-Free Computers’, which I encourage you to visit if you are buying a new machine this year: http://zv.github.io/buyers-guide.html It also includes information such as each machine’s UEFI standards compliance, cryptographic coprocessor inclusion, and other details important to a technically discerning crowd.
If you are interested in conflict minerals, the African kleptocracies profiting from electronics, or the longstanding North Kivu province conflict, check out http://www.enoughproject.org/
Just like IBM and their clients have benefitted from Lenovo.
Have the customers benefitted? The ThinkPads got steadily worse as time passed after the sale. The old IBM-built machines were absolutely top class in build quality and absence of bloatware (minus some IBM-built updaters, which were bad but could be removed).
As time has progressed, ThinkPads have become notebooks like all other PC notebooks. Nothing special: some with better build quality, some with worse, but definitely no longer top class.
Yes. The machines got cheaper, but sometimes, cheaper isn’t the be-all, end-all. For some types of usage, build-quality matters much more than price.
I really hope that Apple can keep supporting the Mac for a very long time, as I still love their hardware very much, exclusively future-looking ports be damned.
My first reaction was very similar. While Lenovo kept the quality of Thinkpads high for a few years following the sale, they’ve now become (as you said) very much average. There may be cases in which this type of thing worked, but Lenovo + IBM is not one of those situations.
Yeah, I used to say that thinkpads were the only laptops worth buying, and now I feel none are worth buying. There probably won’t be a laptop I’m excited about for a very long time.
Hopefully someone raises tons of money and makes laptops targeting developers, without being shy about charging a ton of money for a top-class product aimed at people who use their machines for work all day.
Not just the quality of the products, but also the ethical standards and practices of the firm that makes them.
IBM was (and is) a leader in ethically sourcing minerals; Lenovo has continually purchased conflict minerals from African kleptocracies and genocidal terrorists, particularly in the Kivu region and the Congo at large.
If you are buying a new machine this holiday season, please check out http://zv.github.io/buyers-guide.html
To my recollection, SICP doesn’t really address the distinction between dynamic and lexical scope. There are the “what would Bob / Eva / So-and-so do?” questions when you are building the (first) evaluator, however, and those are proxies for different views on scoping.
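The distinction itself is easy to demonstrate. Here is a small sketch in Python (which, like Scheme, is lexically scoped); the function names are invented for illustration. A free variable refers to the binding visible where the function was *defined*, whereas a dynamically scoped language would look it up in whatever environment is active at *call* time:

```python
x = 10

def make_adder():
    x = 1                  # lexical scope: this is the `x` that adder sees
    def adder(y):
        return x + y       # `x` resolves to the defining environment
    return adder

def caller():
    x = 100                # under dynamic scope, adder would see this one
    return make_adder()(5)

print(caller())            # lexical scoping: 1 + 5 = 6
# A dynamically scoped language would find the caller's `x` at call
# time and return 105 instead.
```

This is exactly the kind of difference the evaluator exercises surface: the choice comes down to which environment the evaluator consults when it looks up a free variable.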