I’m working on a language for genetic programming which is also Lisp-like, but it won’t be stack based. Instead, I’m using S-expressions as predicates in a blackboard-architecture-inspired engine for game AI. One can easily map things like FSAs and behavior trees onto this, as well as mutate the collection of predicates and implement something analogous to gene crossover. Like Push, it doesn’t have crashes.
My intention is to use it to run the AI in this project: https://www.emergencevector.com/
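To make the idea concrete, here is a minimal sketch of the representation described above: predicates as S-expressions (nested tuples), evaluated against a blackboard dict, with subtree crossover. The operator set and all names here are my own illustrative assumptions, not the actual engine from the project.

```python
import random

def evaluate(expr, blackboard):
    """Evaluate a predicate S-expression against a blackboard dict."""
    if not isinstance(expr, tuple):
        # Leaf: look up a blackboard key, else treat it as a literal.
        return blackboard.get(expr, expr)
    op, *args = expr
    vals = [evaluate(a, blackboard) for a in args]
    if op == "and":
        return all(vals)
    if op == "or":
        return any(vals)
    if op == "not":
        return not vals[0]
    if op == "<":
        return vals[0] < vals[1]
    if op == ">":
        return vals[0] > vals[1]
    raise ValueError(f"unknown operator: {op}")

def subtrees(expr, path=()):
    """Yield (path, subexpression) pairs for crossover point selection."""
    yield path, expr
    if isinstance(expr, tuple):
        for i, a in enumerate(expr[1:], start=1):
            yield from subtrees(a, path + (i,))

def replace_at(expr, path, new):
    """Return a copy of expr with the subtree at path swapped for new."""
    if not path:
        return new
    i = path[0]
    return expr[:i] + (replace_at(expr[i], path[1:], new),) + expr[i + 1:]

def crossover(a, b, rng=random):
    """Graft a random subtree of b into a random position in a."""
    pa, _ = rng.choice(list(subtrees(a)))
    _, sb = rng.choice(list(subtrees(b)))
    return replace_at(a, pa, sb)

# An FSM-style rule: flee when health is low and an enemy is near.
rule = ("and", ("<", "health", 30), ("<", "enemy_distance", 10))
print(evaluate(rule, {"health": 20, "enemy_distance": 5}))  # True
```

Because every well-formed S-expression evaluates to *some* value, mutation and crossover can never produce a program that crashes, only one that scores badly — which is the property mentioned above.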
I’ve known about jwe’s situation for some time. I can probably answer any questions you might have on his behalf.
Octave is huge, he can monetize this by:
None of these things are easy, though. It will take him at least a year to get this off the ground. The story of Sidekiq is exceptional; that’s the 1% in super-effective monetization. Not quite sure why it’s working so well for Sidekiq.
Not quite sure why it’s working so well for sidekiq
Sidekiq is very close to money streams for businesses (web products), a scientific computing platform is not.
I’m not saying the Octave bloke shouldn’t do the things you suggest – he should try many/most of them, but squeezing money out of mathematicians and scientists working for gov/edu is going to be very hard compared to a programmer getting a yearly contract paid for by a product manager.
I thought this HN comment was actually spot-on. Has jwe thought about offering a “pro” version of Octave?
I responded to it. I’m not sure I’m understanding it correctly, but I know jwe is committed to free software and will not sell non-free software. That’s typically what people mean when they say “pro version”.
I know jwe is committed to free software and will not sell non-free software.
My understanding is that there’s nothing against “free as in speech” that precludes your charging for support and distribution. This is a misconception. It’s getting “free as in beer” mixed up with “free as in speech.” Free speech isn’t against the selling of books. Protecting free speech is about ensuring that speech is unencumbered by governmental and systemic shackles. Likewise, Free Software is about the right to knowledge, not the right to have stuff without paying for it.
That’s typically what people mean when they say “pro version”.
Cygnus made lots of money in the ’90s by selling support for GCC. jwe should be able to monetize and still keep the license free. It will be more difficult, however. It may also mean switching licenses. (Did he assign his copyright to the FSF?)
The article mentions his income currently comes largely from selling support services, but this has apparently been in decline.
But I must also face the reality of my financial situation. For the last 8 years I have been almost able to pay my expenses by offering support contracts. Recently though, the balance has shifted in the wrong direction so that I am using personal savings to maintain my ability to contribute to Octave development.
Edit: When I was in school the math department decided to drop their Matlab licenses in favor of Mathematica (they got a good deal). When they did that, they switched the numerical analysis course to Octave. I wonder if this is a common pattern: if you have money, you use Matlab; if you don’t, you use Octave. If so, support contracts wouldn’t be so lucrative, despite the software helping a lot of people.
Yes, I understand the difference. :-)
What I mean is that people always tell us we should keep some features proprietary and sell those as extra addons in a pro version. That’s what we’ll never do. Selling support or selling access is entirely ok.
the FSF’s insistence … is a hindrance to spreading the message of free software.
The general pattern!
Given what I’ve heard about the pitfalls of outsourcing, I wonder why outsourcing test implementation hasn’t taken greater hold in corporate America. All one would need is two independent companies: one writes the tests, while the other is tasked with “grading” a subset of those tests, assessing the testability of the code under test, and making recommendations for refactoring. If a contract can be written to make the profits of those companies dependent on their performance, this system could eventually be made to work well. It would take some doing and building of relationships and expectations. It would not be simple, but it could work.
Such a task would fit what I’ve heard about the work patterns, and even the work pathologies, of outsourcing companies in India. It’s a suitable task for 1st year hires. Such companies are motivated to produce large amounts of output, but can also be directed through proper incentives to produce decent output. This would be particularly true if compensation structures included a disincentive for “dead” tests that never found errors.
And yes, such a system can be gamed. Any system can be gamed. There is no substitute for building relationships and working relationships amongst good people that want to produce good results.
In my experience, writing a good test is more difficult than writing a good implementation. Perhaps for others it’s the other way around?
I’ve found that the difficulty of writing tests depends on several factors, three of the most important (in my unscientific experience):
The first is largely dealt with by breaking down the problem further, though sometimes you simply cannot: problems which are hard to verify are, indeed, hard to test (testing being essentially just verification by another name). In those cases I suspect your experience doesn’t hold, because both halves of the ‘problem’ of writing tested code are difficult.

More commonly, I suspect that #2 and #3 bite you (as they do me). #2 in particular is a very difficult problem: if I take as an a priori requirement that a given API be supplied, then writing the implementation might be utterly trivial, but mapping it to that particular API takes several intermediate steps. Take, for instance, the standard MVC architectural pattern. We have at least three layers of abstraction between the client-observed API (the “V”) and the storage API (the data store behind the “M”). Trying to write code that tests the V and involves the M is quite hard, usually brittle, and therefore quite painful. The mitigation strategy involves either confining tests and assertions to the V (e.g., using something like Capybara to poke a UI and then observe changes in the UI to confirm the expected behavior), or using mocks and the like to ensure that stuff which doesn’t result in UI changes at least gets the right set of messages sent down the line.
Fundamentally those are just hard problems to solve, but the tests doubtless add value – when they work. They do function as regression catchers (in a limited sense, particularly when refactoring), and they also serve as a way to work through building out the set of API transformations that turn a click into SQL.

Depending on your team, your domain, and your preference, sometimes tests make the most sense to provide this ability, sometimes types do, sometimes QA people do – each has costs and benefits. Tests, I think, stand in between types and QA. Types have a lot of power to make static assertions and prevent you from changing assumptions on the fly – and after all, programming is just managed, repeated assumption and assertion – but they can also constrain you when refactoring if not well designed, preventing you from making changes until all the details are worked out. They can force you into local minima w.r.t. complexity. QA people, on the other hand, allow you to freely change code, and changes which technically break expected behavior often ‘sneak past’ them, especially unintentionally. They are also open to human error in a way types and tests are not. Tests are a nice middle ground, allowing static assertions without necessarily tying you to preserving every behavior exactly as it was when refactoring (that is, they let you cut corners, as with QA). At the same time, they can be difficult to write, difficult to maintain (especially when written poorly), and – especially – difficult to believe. That is to say, it can be difficult to know that a particular test, especially one with mocks, is actually testing anything. There are tools that help address this problem in some areas, but it is a real problem.
I guess my beleaguered point here is that writing a good test can be more difficult than writing a good implementation, but it’s not necessarily the case, nor are the two correlated – that is, a bad test can effectively test a good implementation, a good test can effectively test a bad implementation, and so on. The point of the test is to verify, in some mostly-static sense, that your assumptions and assertions are correct.
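The mock strategy mentioned above – asserting that the right messages get sent down the line rather than exercising the real data store – can be sketched with Python’s `unittest.mock`. The controller and its method names here are hypothetical, purely for illustration:

```python
from unittest.mock import Mock

class TodoController:
    """Hypothetical 'C' layer: translates a UI action into messages
    sent to the model/store. Not from any real framework."""
    def __init__(self, store):
        self.store = store

    def complete(self, todo_id):
        todo = self.store.find(todo_id)
        todo["done"] = True
        self.store.save(todo)

# Stand in a Mock for the "M" layer; no database needed.
store = Mock()
store.find.return_value = {"id": 7, "done": False}

TodoController(store).complete(7)

# The test asserts the messages, not the UI and not the SQL.
store.find.assert_called_once_with(7)
store.save.assert_called_once_with({"id": 7, "done": True})
```

This is exactly the “believability” trade-off described above: the test is cheap and fast, but it only proves the controller sent plausible messages, not that a real store would have done the right thing with them.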
 I hate this term, or – at least – have come to hate it. Software is not ‘powerful’, it is sometimes ‘capable’ and often ‘incapable’, but capability and power are different concepts and it’s hard to say that something is narrowly capable using the language of ‘power’. For instance, a DSL can be very capable (and should be, by definition), but it is not necessarily “powerful” (in the sense of ‘power’ as being the ability to do (general) work, rather than specific work).
SSL libraries aren’t bananas, though, and I think this is where the argument falls down. For instance, SSL libraries don’t have immune systems – if a vulnerability exists in one instance, it exists in all of them.
Yes, having fewer implementations means that vulnerabilities affect more users – but, it also means that the vulnerabilities are easier to fix, and more effort is concentrated in the implementations that exist.
To put it another way: if we’re not (collectively) capable of writing one correct implementation, why would it be easier to write three, or five, or ten, correct implementations? The problem isn’t diversity, it’s the fact that we don’t know how to write a correct SSL implementation.
(Moreover: I would speculate that we don’t yet have the tools to do so – formally verified systems seem too cumbersome to use, C is too unsafe, and Haskell doesn’t seem to allow control of resources sufficient to prevent information leaks).
EDIT: added the word “yet” to the previous sentence: “we don’t” -> “we don’t yet”
formally verified systems seem too cumbersome to use
This was posted on Lobste.rs about a month ago, and seems more relevant now:
Not all the problems are technical. The way in which the organization behind the project is run probably matters more than any given piece of code, and that is closer to being ‘like bananas’ (i.e. we get benefits from diversity). I’ve a blogpost detailing some thoughts about that here: http://rz.github.io/articles/2014/apr/security-orgs.html
formally verified systems seem too cumbersome
fwiw, it is not impossible: see https://github.com/vincenthz/hs-tls
Is hs-tls actually formally verified, though? It looks like it’s just written in Haskell, which is a big bonus on ‘free theorems’, but falls short of formal verification. Moreover, I’d be afraid of information leaks via timing, etc., as noted.
SSL libraries don’t have immune systems
Use of valgrind and automatic methods for catching use after free are somewhat like immune systems. Sophisticated type systems, design by contract, or comprehensive unit tests are somewhat like immune systems.
To put it another way: if we’re not (collectively) capable of writing one correct implementation, why would it be easier to write three, or five, or ten, correct implementations?
That’s not the point. The point is that the various systems aren’t all vulnerable to the same single bug all at the same time, or at least to minimize the chances of that happening.
As a user of an SSL implementation, though, it doesn’t really matter to me whether I’m vulnerable to some particular bug – it matters whether there is a vulnerability.
Having everyone be vulnerable to different bugs instead of the same bug isn’t really much of a benefit – all the users are still vulnerable, just in different ways. It’s only a benefit if there are fewer bugs, and I don’t think that more implementations in total reduce the number of bugs in each implementation.
Yes, having fewer implementations means that vulnerabilities affect more users – but, it also means that the vulnerabilities are easier to fix
Why is this the case?
Formal verification can be annoying, but I definitely think we should still attempt at least some of it. Someone here in Boulder has recently started a very exciting Idris library for this:
With a little misdirection on his part, I wonder if this article would ever have been written. There isn’t a lot there to tie Mr. Nakamoto to the project. The money quote that he lets slip is really the anchor of Newsweek’s story.
“I am no longer involved in that and I cannot discuss it,” he says, dismissing all further queries with a swat of his left hand. “It’s been turned over to other people. They are in charge of it now. I no longer have any connection.”
edit: redundant link
That quote could easily have been the answer to a question about his government contracting work and not bitcoin.
Agreed. I wonder if he’s not pulling a prank.
Also, I wonder how that reporter got the police to go with her.
She said that when she went to his house, he called the police, saying she was endangering him. I imagine his life expectancy is pretty short now; how often is half a billion dollars stored in a regular suburban house by somebody with few friends and no security guards?
Oh, got it.
This guy needs to cash out some BTC and move his family to Grand Cayman.
I’ve seen this idea elsewhere too, and it’s sometimes given as meaning that Satoshi can just protect himself if he wants to. But there are several things wrong with such an argument:
(1) He might not still have access to the BTC. Even if he does, Newsweek doesn’t know that.
(2) He might not even be the BTC Satoshi; it’s not established beyond all doubt
(3) Moving house, changing your life, is hard at the best of times, and Satoshi is ill—a fact known to Newsweek before publication
(4) Satoshi appears to have made the commitment not to spend his BTC. Why force him to spend his own money on security that he ought not to have needed?
(5) As you point out yourself, it’s not only Satoshi’s security at stake here. Does anyone think that the whole Satoshi family should have to change their lives over this?
Not to mention that a life in the Grand Caymans doesn’t appeal to everyone. Maybe he’s happy with his life there (or at least he was)?
I didn’t mean to imply that Newsweek’s behavior is somehow defensible, or that it will be easy for this person to run away to safety.
This Satoshi and his family are basically in for deep upheaval. It’s sort of like a pop star becoming super famous, but not quite rich yet. Dealing with it will be extremely difficult for him and his family (hopefully he does have access to the bitcoins).
Does anyone think that the whole Satoshi family should have to change their lives over this?
Rich people do exist in this country (even richer than this guy). And, in general, I don’t think that their families have to live in fear all the time, or live in something like “witness protection”. The police crack down real hard on kidnapping and other types of extortion. Otherwise, everyone would be a criminal.
When I was in Sligo, Ireland back in 2000, I met this guy on the dole who invited me over to a house party. We were walking around the neighborhood and he pointed out the house of the local millionaire. It was basically the same design as his house, and it was simply on the corner at the end of the same block.
Original article: http://mag.newsweek.com/2014/03/14/bitcoin-satoshi-nakamoto.html
What I find remarkable is that it took real “shoe leather” journalism to find this guy (if he is the right guy). All the armchair speculation and chasing of clues on the internet wasn’t sufficient to find him. This reporter actually had to call people and get the police involved.
What I mean by that is that we geeks were trying to find him by analyzing block chains, analyzing email text, and so on. Instead, an old-fashioned journalist goes out and finds him. Nice.
Of course, not so nice for this guy and his family.
There are parallels with how the feds found the Silk Road people.
There is an important lesson to be learned here about coding standards. This particular bug would have been obvious if either:
IMO, banning goto outright is too extreme, but enforcing the second one is a good idea, and it can be automated as part of a test framework.
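As a sketch of what such automation might look like, here is a crude heuristic check – my own illustration, not a real C parser or any existing tool – that flags a braceless `if` followed by more than one statement indented under it, which is exactly the shape of the duplicated `goto fail;` bug:

```python
import re

def find_braceless_if_suspects(source):
    """Flag a braceless `if (...)` followed by two or more statements
    indented beneath it. A crude heuristic sketch, not a C parser."""
    lines = source.splitlines()
    suspects = []
    for i, line in enumerate(lines):
        stripped = line.strip()
        # An `if (...)` line with no opening brace on it.
        if re.match(r"if\s*\(.*\)\s*$", stripped):
            indent = len(line) - len(line.lstrip())
            body = 0
            for nxt in lines[i + 1:]:
                if not nxt.strip():
                    continue
                if len(nxt) - len(nxt.lstrip()) > indent and "{" not in nxt:
                    body += 1
                else:
                    break
            if body > 1:
                suspects.append(i + 1)  # 1-based line number
    return suspects

code = """\
if ((err = hash(&ctx)) != 0)
    goto fail;
    goto fail;
if (ok)
    goto fail;
"""
print(find_braceless_if_suspects(code))  # [1]
```

A production version would of course use a real parser (clang’s AST, say), but even a heuristic like this running in CI would have flagged the duplicated line.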
Personally, I’m very much against optional syntax; saving a few lines of code is not worth this bug.
Agreed. Fancy character-saving syntax tricks are about ego and programmer pissing matches. I say: Save that stuff for the obfuscated code contest. If you want to work for me or with me, then your cleverness had better be directed towards furthering our mutual goals. Code should be written for easy readability, not for showing off your knowledge of the ins and outs of the language.
I’m less concerned with Apple’s coding standards than I am by the fact that they don’t appear to have a single negative unit test checking that a wrong cert is detected.
From the original article:
A test case could have caught this, but it’s difficult because it’s so deep into the handshake. One needs to write a completely separate TLS stack, with lots of options for sending invalid handshakes. In Chromium we have a patched version of TLSLite to do this sort of thing but I cannot recall that we have a test case for exactly this. (Sounds like I know what my Monday morning involves if not.)
Good point. I wonder if this is good news. Did anyone discover this by accident and exploit it?
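What’s missing in such cases is the negative-test pattern: asserting that invalid input is *rejected*, not merely that valid input is accepted. A full invalid-handshake TLS harness of the kind described in the quote won’t fit in a sketch, so here is the same pattern illustrated on signature verification with Python’s standard library (the key and messages are made up for illustration):

```python
import hashlib
import hmac

KEY = b"test-key"  # illustrative fixture, not a real secret

def sign(message):
    return hmac.new(KEY, message, hashlib.sha256).digest()

def verify(message, tag):
    return hmac.compare_digest(sign(message), tag)

# Positive test: the happy path most suites cover.
msg = b"client hello"
assert verify(msg, sign(msg))

# Negative tests: a *wrong* tag must be rejected, and a tampered
# message must not verify under the original tag.
bad = bytearray(sign(msg))
bad[0] ^= 0xFF
assert not verify(msg, bytes(bad))
assert not verify(b"tampered hello", sign(msg))
print("negative tests pass")
```

The goto-fail bug is precisely the kind of failure that only the negative half catches: the broken code still accepted every *valid* handshake, so all the positive tests kept passing.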
I spent a while as an embedded engineer, where we made heavy use of goto and braceless ifs, particularly for the construct
as opposed to long if (condition_a) && (condition_b) ... statements.
if (condition_a) && (condition_b) ...
I don’t really see the benefit there. It’s not like the number of symbols in the source means much for the compiled code.
At first, and for no good reason, I disliked the idea of bcrypting a password hash. It just felt weird.
Kudos for overcoming the weird feeling and investigating further. It “feels weird” to hear, but the reality is that lots of IT decisions are made on the basis of truthyness and not on the basis of fact and rationality.
I suggested we do this at a previous job and was turned down – without a good reason. We did it the traditional way: maintain two code paths such that if a user logged in with an old hash, we’d upgrade them. Not only was this stupid, it was also dangerous, since the old hashes were significantly weaker (just MD5 with a 5-character salt), and most users never returned, given the type of application.
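The alternative being discussed – wrapping every stored legacy hash in a strong KDF in one offline pass, instead of waiting for each user to log in – looks roughly like this. The thread talks about bcrypt; this sketch substitutes the standard library’s `hashlib.scrypt` so it’s self-contained, but the structure is identical, and all the salts and parameters here are illustrative:

```python
import hashlib
import hmac
import os

def legacy_hash(password, salt):
    # The weak scheme being migrated away from: MD5 over salt + password.
    return hashlib.md5(salt + password).hexdigest()

def wrap(legacy_digest):
    """Strengthen a stored legacy digest in place, offline, without
    waiting for the user to log in again."""
    salt = os.urandom(16)
    key = hashlib.scrypt(legacy_digest.encode(), salt=salt,
                         n=2**14, r=8, p=1)
    return salt, key

def check(password, md5_salt, kdf_salt, stored_key):
    # On login, recompute the inner legacy hash, then the outer KDF.
    inner = legacy_hash(password, md5_salt)
    key = hashlib.scrypt(inner.encode(), salt=kdf_salt,
                         n=2**14, r=8, p=1)
    return hmac.compare_digest(key, stored_key)

md5_salt = b"ab4f2"                       # illustrative 5-char salt
digest = legacy_hash(b"hunter2", md5_salt)
kdf_salt, stored = wrap(digest)
print(check(b"hunter2", md5_salt, kdf_salt, stored))  # True
print(check(b"wrong", md5_salt, kdf_salt, stored))    # False
```

The key property is that *every* stored hash is strengthened immediately, including those of users who never return – exactly the population the two-code-path approach leaves exposed.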
Helping to implement a testing cloud and also working on my “pseudo-roguelike game with a huge procedurally generated world and huge procedurally generated tech tree and everything else that can foster emergence thrown into it” project. In Clojure.
Pretty cool. I’d love to give it a play when it’s ready. Have you considered doing the front-end in ClojureScript so that you can use the same language throughout?
That makes sense.