Hah, I was actually curious whether AST would make a move. Good to see he did.
Still, it’s sad that he doesn’t seem to care about ME.
Whether he cares about ME is irrelevant here. By releasing the software under most (all?) free software and open source licenses, you forfeit the right to object even if the code is being used to trigger a WMD - with non-copyleft licenses you agree not to even see the changes to the code. That’s the beauty of liberal software licenses :^)
All that he had asked for is a bit of courtesy.
AFAIK, this courtesy is actually required by BSD license, so it’s even worse, as Intel loses here on legal ground as well.
No, it is not - hence the open letter. You are most likely confused by the original BSD License, which contained the so-called advertising clause.
Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.
Correct. The license requires Intel to reproduce what’s mentioned in the parent comment. The distribution of Minix as part of the IME is a “redistribution in binary form” (i.e., compiled code). Intel could have placed the parts mentioned in the license into those small paper booklets that usually accompany hardware, but as far as I can see, they haven’t done so. That is, Intel is breaching the BSD license Minix is distributed under.
There’s no clause in the BSD license to inform Mr. Tanenbaum about the use of the software, though. That’s something he may complain about as lack of courtesy, but it’s not a legal requirement.
What’s the consequence of the license breach? I can only speak for German law, but the BSD license does not include an auto-termination clause like the GPL does, so the license grant remains in place for the moment. The copyright holder (according to the link above, this is Vrije Universiteit, Amsterdam) may demand compensation or acknowledgment (i.e. fulfillment of the contract). Given the scale of the breach (it’s used in countless units of Intel’s hardware, distributed all over the globe by now), he might even be able to revoke the license grant, effectively stopping Intel from selling any processor containing the then unlicensed Minix. So, if you ever felt like the IME should be removed from this world, talk to the Amsterdam University and convince them to sue Intel over BSD license breach.
That’s just my understanding of the things, but I’m pretty confident it’s correct (I’m a law student).
Actually, they may have a secret contract with the University of Amsterdam that has different conditions. But that we don’t know.
University of Amsterdam (UvA) is not the Vrije University Amsterdam (VU). AST is a professor at VU.
I’ve read the license - thanks! :^)
The software’s on their chip and they distribute the hardware so I’m not sure that actually applies - I’m not a lawyer, though.
Are you saying that if you ship the product in hardware form, you don’t distribute the software that it runs? I wonder why all those PC vendors were paying fees to Microsoft for so long.
Yes, software is licensed. It doesn’t mean that if you sell hardware running software, you can violate that software’s license.
This is the “tivoization” situation that the GPLv3 was specifically created to address (and the BSD licence was not specifically updated to address).
No, it was created to address not being able to modify the version they ship. Hardware vendors shipping GPLv2 software still have to follow the license terms and release source code. It’s right in the article you linked to.
BSD license says that binary distribution requires mentioning copyright license terms in the documentation, so Intel should follow it.
Documentation or other materials. Does including a CREDITS file in the firmware count? (For that matter, Intel only sells the chipset to other vendors, not end users, so maybe it’s in the manufacturer docs? Maybe they’re to blame for not providing notice?)
You have a point with the manufacturers being in-between Intel and the end users that I didn’t see in my above comment, but the outcome is similar. Intel redistributes Minix to the manufacturers, which then redistribute it to the end-users. Assuming Intel properly acknowledges things in the manufacturer’s docs, it’d then be the manufacturers that were in breach of the BSD license. Makes suing more work because you need to sue all the manufacturers, but it’s still illegal to not include the acknowledgements the BSD license demands.
Edit:
Does including a CREDITS file in the firmware count?
No. “Acknowledging” is something that needs to be done in a way the person that receives the software can actually take notice of.
You’re correct, my bad. But “reproduce the above copyright notice” etc. aims at the same thing. Any sensible interpretation of the BSD license’s wording has to conclude that the recipients of the software must be able to view the parts of the license text mentioned, because otherwise the clause would be worthless.
If they don’t distribute that copyright notice (I can’t remember last seeing any documentation coming directly from Intel as I always buy pre-assembled hardware) and your reasoning is correct, then they ought to fix it and include it somewhere.
However, the sub-thread started by @pkubaj is about being courteous, i.e. informing the original author about the fact that you are using their software - MINIX’s license does not have that requirement.
Still, it’s sad that he doesn’t seem to care about ME.
Or just refrains from fighting a losing battle? It’s not like governments would give up on spying on and controlling us all.
Do you have a cohesive argument behind that or are you just being negative?
First off, governments aren’t using IME for dragnet surveillance. They (almost certainly) have some 0days, but they aren’t going to burn them on low-value targets like you or me. They pose a giant risk to us because they’ll eventually be used in general-purpose malware, but the government wouldn’t actually fight much (or maybe at all, publicly) to keep IME.
Second off, security engineering is a sub-branch of economics. Arguments of the form “the government can hack anyone, just give up” are worthless. Defenders currently have the opportunity to make attacking orders of magnitude more expensive, for very little cost. We’re not even close to any diminishing returns falloff when it comes to security expenditures. While it’s technically true that the government (or any other well-funded attacker) could probably own any given consumer device that exists right now, it might cost them millions of dollars to do it (and then they have only a few days/weeks to keep using the exploit).
By just getting everyday people to adopt marginally better security practices, we can make dragnet surveillance infeasibly expensive and reduce damage from non-governmental sources. This is the primary goal for now. An important part of “marginally better security” is getting people to stop buying things that are intentionally backdoored.
Do you have a cohesive argument behind that or are you just being negative?
Behind what? The idea that governments won’t give up on spying on us? Well, it’s quite simple. Police states have happened all throughout history, governments really, really want absolute power over us, and they’re free to work towards it in any way they can, so they will.
They (almost certainly) have some 0days, but they aren’t going to burn them on low-value targets like you or me.
Sure, but do they even need 0days if they have everyone ME’d?
They pose a giant risk to us because they’ll eventually be used in general-purpose malware
Yeah, that’s a problem too!
Defenders currently have the opportunity to make attacking orders of magnitude more expensive, for very little cost. [..] An important part of “marginally better security” is getting people to stop buying things that are intentionally backdoored
If you mean using completely “libre” hardware and software, that’s just not feasible for anyone who wants to get shit done in the real world. You need the best tools for your job, and you need things to Just Work.
By just getting everyday people to adopt marginally better security practices, we can make dragnet surveillance infeasibly expensive and reduce damage from non-governmental sources.
“Just”? :) I’m not saying we should all give up, but it’s an uphill battle.
For example, the blind masses are eagerly adopting Face ID, and pretty soon you won’t be able to get a high-end mobile phone without something like it.
People are still happily adopting Google Fiber, without thinking about why a company like Google might want to enter the ISP business.
And maybe most disgustingly and bafflingly of all, vast hordes of Useful Idiots are working hard to prevent the truth from spreading - either as a fun little hobby, or a full-time job.
It reads to me like he just doesn’t want to admit that he’s wrong about the BSD license “providing the maximum amount of freedom to potential users”. Having a secret un-auditable, un-modifiable OS running at a deeper level than the OS you actually choose to run is the opposite of user freedom; it’s delusional to think this is a good thing from the perspective of the users.
Oh, it’s still not lost. ME_cleaner is getting better, Google is getting into it with NERF, Coreboot works pretty well on many newish boards and on top of that, there’s Talos.
I’ll add to this that being on call when it’s quiet limits your ability to live your life as you please outside of office hours - you can’t disappear into the wilderness, you can’t go to the movies and turn your phone off, you can’t go out to dinner and not take your laptop, you can’t go out to a party and get drunk so that you sleep through the beeping.
That’s the best scenario. When things are broken you might lose a lot of sleep. You might have to interrupt dinner with friends. You might have to jump in a cab and head home so you can get properly online and work. You come into the office tired; your partner is grumpy because they got woken up, too; you feel like crap because you haven’t had an evening all week where you didn’t have to deal with something.
On-call can be a scourge. It’s random, unpaid work, demanding your full attention at the worst of times. The best thing I can recommend is: don’t be on call. Don’t get in that critical path. If you are a manager with on-call staff you should be telling people to come in late, or not at all, if they’ve had a night of activity.
And make fixing that issue so it never wakes anyone up again your biggest priority.
It’s random, unpaid work, demanding your full attention at the worst of times.
Is this something specific to the States? Where I live, I’m paid (a constant amount) for the fact that I’m on call even if nothing happens. And 150% of my hourly rate if I have to work.
In the US, it varies by the job and by the state.
Some employees are paid hourly, and there are state and federal labor rules about how many hours a week (and sometimes how many hours per day) an employee can work before an overtime rate has to be paid.
There are other workers, however, who are paid ‘on salary’ instead of hourly. That means they get paid monthly or bi-weekly at a fixed rate, and hours worked aren’t tracked and don’t enter into the pay equation. They are called ‘exempt’ employees, because they are not covered by the minimum wage and overtime rules that apply to hourly employees under the Fair Labor Standards Act.
Exempt employees are often preferentially asked to go on call because, if they’ll do it, they aren’t required to be paid extra for the work like an hourly employee would be. Some jobs choose to pay their exempt employees an on-call bonus, or to compensate them in other ways (extra time off, for example), but not all do. If you work at one of those places, you have to decide if your salary makes up for the hassle and inconvenience of putting up with on-call work.
In the US, by an unfortunate quirk of labor regulations, software engineers are considered “clerical” and are exempt from the requirement that they be paid overtime. Consequently, for all intents and purposes all are salaried and not paid for hours actually worked.
Yeah, likewise. I’m a massive advocate for putting devs on-call, but I won’t enter a rotation unless it’s compensated: at a minimum, a base rate per hour, regardless of incidents.
In the US there are a lot more people working as salaried, non-hourly employees than in other places I’m aware of in Europe and SE Asia. It’s rare for a salaried job to pay any sort of overtime, or additional compensation for on-call.
All places I’ve worked at, including startups and small companies, paid for you to be on call. And you matched hours for hours if there was night work (i.e. come in late the next day), and you got an extra day off at the end of your on-call shift.
I don’t support the FSF when so much of their income ends up in the pockets of lawyers and not with programmers.
I find that point incredibly weird. Most of the FSF’s work is policy work and legal counsel to programmers doing open source.
You might not agree with what they do, but yes, that’s mostly the place where lawyers are appropriate.
I found rcm a couple of years ago, and haven’t looked back.
Right now we just install the Mesos agent and run everything on container images. Some prometheus exporter images and that’s about it.
In case anyone wants to cross-check, out of the 23 curl CVEs in 2016, at least 10 (1, 2, 3, 4, 5, 6, 7, 8, 9, 10) are due to C’s manual memory management or weak typing and would be impossible in a memory-safe, strongly-typed language. (Note that, while I like Rust and it seems to have been the motivator for this post, many modern languages meet this bar.) While “slightly more than half” as non-C-related vulnerabilities may technically be “most”, I’m not sure it’s fitting the spirit of the term.
There are some very compelling advantages to C, certainly, which the author enumerates; in particular, its portability to nearly every platform in existence is a major weakness of Rust (and, to the best of my knowledge, any other competitor) at the moment. But it’s very important to note that nontrivial C code practically always contains serious vulnerabilities, and nothing we’ve tried (especially “code better”, the standard advice for avoiding C vulnerabilities) works to prevent them. We should be conscious that, by writing C, we are trading away security in favor of whatever benefits C provides at that moment.
edit: It’s worth noticing and noting, as I failed to, that 2016 was an unusual year for curl vulns. /u/amaurea on Reddit helpfully counted and cataloged all the vulns on that page, and 2016 is an obvious outlier for raw count, strongly suggesting an audit or new static analysis tool or something. However, the proportion of C to not-C bugs is not wildly varied over the entire list, so the point stands.
[…] 2016 is an obvious outlier for raw count, strongly suggesting an audit or new static analysis tool or something.
especially “code better”, the standard advice for avoiding C vulnerabilities
If the curl codebase is as bad as its API then this is honestly a completely fair response.
We had this code recently:
int status;
void * some_pointer;
curl_easy_getinfo( curl, CURLINFO_RESPONSE_CODE, &status );
which trashes some_pointer on 64bit Linux because curl_easy_getinfo( CURLINFO_RESPONSE_CODE ) takes a pointer to a long and not an int. The compiler would normally warn about that, but curl_easy_getinfo is a varargs function, which brings no benefits and means the compiler can’t check the types of its arguments. WTF seriously? Why would you do that??
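For the record, the fix is just to use the type the documentation asks for. Here’s a sketch (get_response_code is an invented wrapper, and the libcurl call is left commented out so the snippet stands alone without the library):

```c
#include <stddef.h>

/* The fixed call site: CURLINFO_RESPONSE_CODE writes through a long *,
   so the destination variable must be a long, not an int. On LP64
   Linux, long is 8 bytes while int is 4, which is why the int variant
   scribbled over the some_pointer that happened to sit next to it. */
static long get_response_code(void /* would take CURL *curl */) {
    long status = 0;  /* long, not int */
    /* curl_easy_getinfo(curl, CURLINFO_RESPONSE_CODE, &status); */
    return status;
}

/* Compile-time sanity check of the width mismatch the bug exploits:
   this array has negative size (a compile error) if long were ever
   narrower than int. */
typedef char long_at_least_as_wide_as_int
    [sizeof(long) >= sizeof(int) ? 1 : -1];
```

With a non-varargs accessor per info kind, the compiler would have rejected `&status` with the wrong pointee type outright.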
I also recall reading somewhere that curl is over 100k LOC, which is insane. If the HTTP spec actually requires the implementation to be that large (and it wouldn’t surprise me if it does), then you are free to, and absolutely should, just not implement all of it. If the spec is so unwieldy that nobody could possibly get it right, then why try? Implement a sensible subset and call it a day.
If you know you’re not going to be using many HTTP features, it’s not hard to implement it yourself and treat anything that isn’t part of the tiny subset you chose as an error. For example, it’s only a few hundred lines to implement synchronous GET requests with non-multipart responses and timeouts, and that’s often good enough.
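To make “a few hundred lines” concrete, the request side of such a minimal client is almost nothing. build_get_request below is an invented helper; the real bulk of those few hundred lines goes into socket handling, timeouts, and response parsing:

```c
#include <stdio.h>

/* Format a minimal HTTP/1.1 GET request into buf. Returns the number
   of bytes written, or -1 if the buffer was too small. "Connection:
   close" lets a tiny client treat EOF as end-of-response instead of
   implementing chunked transfer decoding. */
static int build_get_request(char *buf, size_t len,
                             const char *host, const char *path) {
    int n = snprintf(buf, len,
                     "GET %s HTTP/1.1\r\n"
                     "Host: %s\r\n"
                     "Connection: close\r\n"
                     "\r\n",
                     path, host);
    return (n < 0 || (size_t)n >= len) ? -1 : n;
}
```

From there, the client is a connect(), one write() of this buffer, and a read-until-EOF loop, plus parsing the status line and headers; anything outside the chosen subset gets treated as an error.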
I also recall reading somewhere that curl is over 100k LOC, which is insane. If the HTTP spec actually requires the implementation to be that large (and it wouldn’t surprise me if it does), then you are free to, and absolutely should, just not implement all of it.
curl supports a lot more protocols than just http though.
Indeed. From the man page.
curl is a tool to transfer data from or to a server, using one of the supported protocols (DICT, FILE, FTP, FTPS, GOPHER, HTTP, HTTPS, IMAP, IMAPS, LDAP, LDAPS, POP3, POP3S, RTMP, RTSP, SCP, SFTP, SMB, SMBS, SMTP, SMTPS, TELNET and TFTP).
curl is highly compatible with a lot of the strange behaviors that browsers support, which are usually outside of (or even prohibited by) the spec/standard. Just implementing the spec doesn’t quite make it useful to the world, when the world isn’t even spec compliant. Even if you write down the standard, the real standard is what all the other browsers do, not what a piece of paper says.
But it is useful even if you only implement a tiny subset of HTTP, because most use cases involve sending trivial requests to sensible servers.
The point is that cURL isn’t a project that supplies that subset, regardless of it being useful or not. cURL supplies a complete and comprehensive package that runs pretty much anywhere and supports pretty much any protocol you might need at some point (and some you might not need).
Nothing wrong in making a slimmed down works-most-of-the-time-and-will-be-enough-for-most-people project, it might be very useful indeed, but thats not the goal of the cURL project. There’s space for both.
This is the way. Start small. I would assume that 90% of the use cases for curl are just some simple HTTP(S) queries, and those can be implemented in any language quite quickly.
For example, D currently has curl in its standard library, which will probably be deprecated and removed. For simple HTTP(S) queries, there is requests, which is pure D except for the ssl and crypto stuff.
Verifying seL4 took a few years and it was roughly 10000 LoC. Curl has an order of magnitude more. 113316 as counted by sloccount on the Github repo right now. Verification is getting easier, but only very slowly.
There is no immediate commercial advantage since curl works fine. This leaves it to academia to get the ball rolling.
Verifying seL4 took a few years and it was roughly 10000 LoC.
Formally verifying 15,000ish lines of Haskell-generated C in seL4 took ~200,000 lines of proof, actually, per this. Formally verifying all of curl would easily run into the millions of lines of proof – and you’d basically be rewriting it into C-writing Haskell to boot.
seL4 has two versions, a Haskell version that’s used to verify model safety and a C version that’s just a translation of the Haskell version. It may actually be a bit of a counter-example to your claim (that formal verification on C works in practice).
This is incorrect. The seL4 project actually proved the C version is equivalent to (technically, refines) the Haskell version. And then they (semi-automatically) proved the generated assembly is equivalent to (refines) the C, so that they don’t need to rely on C compiler correctness.
Yes but a lot of these are only published and fixed because curl is so widely used—and scrutinized. For example number 2 on your list:
If a username is set directly via CURLOPT_USERNAME (or curl’s -u, --user option), this vulnerability can be triggered. The name has to be at least 512MB big in a 32bit system. Systems with 64 bit versions of the size_t type are not affected by this issue.
Literally this doesn’t matter.
Also, how would Rust prevent this? I’m pretty sure multiplication overflow happens in Rust too.
Not quite yet; or at least, it’s not all in one place. While all those universities are working on formalisms, we’re not working hard to get one in place, since it’d have to take that work into account, which would mean throwing stuff out and re-writing it that way, I’d imagine.
There is some work going on to make the reference (linking to nightly docs since some work has recently landed to split it up into manageable chunks) closer to a spec; there’s also been an RFC accepted that says before stabilization, we must have the reference up-to-date with the changes, but we have to backfill all the older ones. So currently, it’s always accurate but not complete.
This area is well-specified though, in RFC 560 https://github.com/rust-lang/rfcs/blob/master/text/0560-integer-overflow.md (one RFC I refer to so often I remember its number by heart)
That’s neat! Still, I find it hard to believe anything would have coverage of all multiplication errors in allocations, even if it were written in Rust. If anyone can show me a single Rust project that deliberately trips the debug panic for multiplication errors during allocation in its unit tests, I’ll be impressed. But I’ll bet the only way to really be robust against this class of error is to use something like OpenBSD’s reallocarray. That’s equally possible in C and Rust.
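For concreteness, the reallocarray-style defense is just an explicit overflow check before the multiply. This is a sketch of the pattern (checked_alloc is a made-up name), not OpenBSD’s actual implementation:

```c
#include <errno.h>
#include <stdint.h>
#include <stdlib.h>

/* malloc(nmemb * size) with the multiplication checked for overflow,
   in the spirit of OpenBSD's reallocarray: on overflow, fail with
   ENOMEM instead of silently allocating a too-small buffer. */
static void *checked_alloc(size_t nmemb, size_t size) {
    if (size != 0 && nmemb > SIZE_MAX / size) {
        errno = ENOMEM;
        return NULL;
    }
    return malloc(nmemb * size);
}
```

The curl CVE quoted above is exactly the case this catches: a huge count makes nmemb * size wrap around a 32-bit size_t, malloc happily returns a tiny buffer, and the subsequent copy overflows it. The check is equally possible, and equally necessary to remember, in C and Rust.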
I do have a few overflow tests in one of my projects, but not for that specifically: https://github.com/steveklabnik/semver-parser/blob/master/src/range.rs#L682
We have pretty decent fuzzer support, seems like that might be something it would be likely to find.
I guess that depends on how often you run your fuzzer on 32-bit systems long enough for it to accumulate gigabytes of input.
The example here triggers after half a gig, but many of this class of bug would need more.
A problem with stow is that, very often, I only want the files to be symlinked, with the directories created rather than symlinked. Otherwise, too many applications have a habit of writing to temporary and log files within the config directories, and these files appear inside the dotfile repository, which I do not want.
For example, my configuration file for foo is .foo/config. Unfortunately, foo will also write a file .foo/history. If I create foo/.foo/config in my dotfile repo and stow it, ~/.foo is made into a symlink to the directory foo/.foo. So the file ~/.foo/history actually appears under foo/.foo/history in the repository.
Stow unfortunately does not support making directories (it makes a directory only if it is shared with another application). I currently get by with some scripting on top of stow, but it would have been nice if this could have been implemented.
Unless I’m fundamentally misunderstanding something, shouldn’t stow’s “--no-folding” argument do what you want?
It seems it does. My version of stow (1.3.3) doesn’t seem to have it, hence I missed it. Thank you for pointing it out.
I also use .gitignore to deal with this, but I would prefer to have the directory structure copied and the files symlinked. Still, it works reasonably well for personal use.
If you do this with your .emacs.d directory, the .gitignore can get extensive.
I have been learning terraform for the past two weeks or so, doing some migration work for a client from colocation to AWS. It’s been frustrating and fun.
This is one of the reasons that I switched to 1Password from LastPass. The idea of keeping my passwords on a website and using a different website to authenticate made me nervous.
LastPass only syncs to and from their servers. 1Password has a few options for transferring without hitting their servers.
Except their options are gone. Macs, while still friendly, are now UNIX. VMS is deader than a doornail. Windows is the only reasonable option left for UNIX haters, and that has problems of its own.
Most of the traditional haters (of the UNIX-Haters Handbook) hated UNIX and its concepts. Plan 9 took those concepts to the extreme and, to them, would be even worse.
Plan 9 from User Space? http://en.wikipedia.org/wiki/Plan_9_from_User_Space
There is also this presentation https://www.youtube.com/watch?v=2S0k12uZR14
As sebcat mentioned on HN: “I’m not sure how to put it mildly, but I think you might have been scooped on this some 1-2 decades ago…”
That’s very cool. You can remove a couple of the pipes with awk:
git log --raw | awk '/^Author: /{sub(/^Author: /, ""); print "- " $0}' | sort | uniq
I am currently listening to a lot of Mitch Murder and Power Glove. Retro 80s electronic with reverbed synths and very little singing, which tends to be a plus when trying to focus.
fwiw, colin disagrees.
imo, scrypt is clear overkill for storing passwords. the value of an authorization password is equal to the value of what it grants access to, and generally access to a password hash implies access to whatever the password protects. anybody who can read /etc/spasswd is already root, they can login as me just by calling setuid(). in the event of a breach, i can change my password and revoke access.
encryption keys are a little different. having access to my encrypted hard drive does not imply the ability to read its contents. and in the event of theft, i can’t simply change my password. a pbkdf needs to be more durable than a password hash.
Never heard of /etc/spasswd before - is it the equivalent of /etc/shadow on most Linux/GNU systems?
Because /etc/passwd is world readable.
Yeah, shadow, whatever it is. The one with the secrets. I figured that would be more recognizable than spwd.db. oops.
If the only problem with Erlang is the syntax, has there been any effort to make a C-style language that compiles down to Erlang?
Edit: A quick googling answered my question for me.
lfe – lispy Erlang
Efene – Python/JS syntax
Mercury – declarative (maybe? the website isn’t too clear)
Elixir – metaprogramming
It looks like Efene is the closest to what I would be interested in, although it still has the libraries problem. Does anyone have any experience with it?
There is also reia ( http://reia-lang.org/ )
I don’t see why Erlang’s syntax is much of a problem. It’s just different from C; in my limited use it’s been much easier to get a handle on than Haskell was.
As we are moving to Nomad for cluster orchestration we also use their Batch/Periodic jobs for scheduling tasks. (We also wrote our own based on something from Spring, I’m not touching that…)