To this day I’m surprised that Postgres cannot be upgraded without downtime. I guess there are maintenance windows, but it feels like so many DBs out there have uptime requirements.
EDIT: don’t want to be too whiny about this, Postgres is cool and has a lot of stuff. I guess it’s mostly the webdev in me thinking “well yeah of course I need 100% uptime” that made me expect DBs to handle this case. But I guess the project predates these sorts of expectations.
I don’t disagree… but just to be clear:
Minor versions (i.e. bug fixes, e.g. 9.4.6 -> 9.4.7) don’t really need any downtime: you just replace the binaries and restart.
Major versions (9.4 -> 9.5) do need a dump/restore of the database, which is annoying. You can avoid this almost completely now with logical replication, which is included with PG 10 (before that version it’s available as a module back to PG 9.4, I think).
Ah thanks for the information, super helpful! Previously, when reading up on upgrading PG I got the impression I couldn’t do this on major versions.
it’s one of the use-cases.
Major versions (9.4 -> 9.5 ) do need a dump/restore of the database
pg_upgrade has been available and part of the official codebase since 9.0 (7ish years). It’s still not perfect, but it’s been irreplaceable for me when migrating large (45+TB) databases.
True, I had forgotten. I’ve been using PG since the 8.x days. pg_upgrade didn’t work for me from 9.0 -> 9.1 (or thereabouts, definitely at the beginning of pg_upgrade’s existence) and I haven’t tried it since. I should probably try it again and see if it works better for us!
There have also been numerous logical replication tools (Slony for example) that allowed upgrades without downtime since at least around 8.0, but probably earlier.
Also posted yesterday as https://lobste.rs/s/8783gk/cve_2017_17482_openvms_security_notice
I find the debate around this weird. This code is protected by a license, just like the Linux kernel is protected by a license (and the Linux kernel has 20-year-old bits still protected by that license today). Anyone who would be angry with a company for violating the GPL should applaud sending the disc back. Agreeing with a license is different from honouring it, but we should honour others’ licenses just as we expect the open source code we love and use to be honoured.
It’s actually not hard to understand. If you support the GPL because it is a pragmatic way to, within our legal system, push for more software to be open source, then it is not inconsistent at all to also want people to violate proprietary licenses in order to make more software open source. You have a goal (people should be able to read the source code of software they use) and you use whatever means you have available to make that happen.
The idea that this is hypocritical (not saying you are necessarily saying this, but it’s a common argument) is based on a particular liberal political philosophy (liberal in the individualist, equal-application-of-rules sense, not its use to mean further to the left on the political spectrum). Certain people take that political philosophy as somehow a ground truth, and then claim people are hypocritical if they don’t fit their beliefs into it.
When can I expect food neutrality next, so I can get steak and lobster at the same price as the guy who got a salad?
Maybe when eating a salad prevents you from using the infrastructure your taxes paid for.
Or when your internet connection and mobile plan cost twice as much because you have to tick the Gmail, Facebook, Netflix and Youtube boxes that were once provided for free.
While it’s easy for me to scoff at anything you post because of your username, I’m gonna guess that you didn’t mean any harm with this joke. But over 17 million American households were food insecure in 2012, and 18 million Americans live in a food desert, where access to perishables is either overwhelmingly expensive or simply absent.
Maybe food neutrality wouldn’t be such a bad idea :)
That’s an incorrect comparison; food neutrality in your hypothetical restaurant already exists since the restaurant doesn’t charge differently for using their cutlery and crockery depending on what you eat.
You aren’t a libertarian, you’re just a capitalist.
Comes with among other things:
This was 16 years ago. Did they complete the stuff under Future Work or are there still gains to be had there?
Yep, I believe that most of that work is now complete, although it took a long time. For example, VFS giant lock compatibility was still in the 9 tree, ten years later.
For those that missed them, the FreeBSD 5.x releases (the first with SMPng) were painful…
Not sure about all the details, but one big thing that happened soon after was Matthew Dillon’s Dragonfly fork:
Dragonfly forked off FreeBSD 4 rather than the 5 branch in which the SMPng work took place, so it’s not entirely relevant for this.
The fork took place two years after this paper. For many reasons, but a large part of it was the difficulty realizing the course plotted here. So very relevant, IMO. DFly went back and forked from 4 because 5 was floundering.
Correct, reading my comment again I realize I was very unclear. By “not entirely relevant” I meant that the code in Dragonfly isn’t based on the ideas/realizations in the paper but on a previous version, so anyone wanting to read/trace code shouldn’t expect to find SMPng in Dragonfly.
This seems like more or less a copy-paste of Jacques Mattheij’s blogpost, https://jacquesmattheij.com/sorting-two-metric-tons-of-lego, adding nothing but advertising and webtrackers. The blogpost is awesome, read that instead of click-revenue republishing.
the only bit that surprised me was the intro
The idea of Ikea plus internet security together at last seems like a pretty terrible one, but having taken a look it’s surprisingly competent.
why would ikea not be expected to do it right? they have a reputation for being extremely competent when it comes to getting all the small details correct.
For furniture, sure, but do they have previous IOT experience? I wouldn’t expect Ikea to produce super high quality IOT lighting software, or any software, just because it’s not their thing.
You’d be surprised:
They do seem like a company that cares about their software.
75% of their catalog is CGI
I’m obviously getting old - I initially thought “75% of their catalogue uses CGI scripts - that’s not terribly modern!”.
Albeit many years ago, I’ve worked with IKEA as a consultant on their catalog printing, and at least back then they had very competent software engineers in the company to support that effort (making that catalog is no simple task). Writing software might not be where their revenue stream comes from, but don’t underestimate the in-house capabilities of such a monster company, where everything from catalog production to website to logistics and inventory management runs on… software.
do they have previous IOT experience?
So many companies with long and storied histories suck ass at security. “Experience” might just mean “Oh, hey, we know we can leave gaping security holes and the market will let us get away with it”.
The thing to always remember is that a culture of competent and thorough engineering is nearly always going to trump buzzword “experience”.
This is 100% correct. It was even true when INFOSEC was invented as the founders started as teams of smart people just carefully thinking about security effects on each aspect of the lifecycle. A small, almost-fringe number of people doing that caused emergence of high-assurance security. They also reused anything proven to help subgoals as engineers do.
I’m obviously not expecting Ikea to repeat that. However, just right culture and time/effort invested could lead engineers to Google their ass off, read INFOSEC books/articles, and talk to people in field. They’d then apply what they could within their constraints. Many IT people do this, esp in smaller firms. Hell, they have to do that for everything since they can’t afford specialists. Works pretty well, too, for most I meet.
i am a bit sceptical about the idea of IOT needing to be “in a company’s dna” for them to do a good job of it. getting things right is very much in ikea’s dna; i’d trust them to hire the right people to make that happen.
What sort of reputation does IKEA have with respect to software?
that’s the thing - the actual software implementation is something you can hire for. what you need from ikea’s side is a willingness to identify the people who know what’s important to get right, hire them, and then listen to what they tell you. from what else i’ve seen of ikea i’d definitely trust them not to override the people who say “look, we need to get security right before shipping anything”
Wait, what? Aren’t they known for incredibly-hard-to-assemble, fiddly-as-hell furniture kits with such laughably tiny tolerances and complex instructions that the notion of most customers sitting in a pile of screws and bolts crying into their hands is almost a cliché?
that’s the popular joke, yes, but in reality i’ve found their furniture startlingly well-made for flat-pack stuff, and relatively easy to assemble as long as i get another person to help me (it’s a pain with just one person).
Fair enough. Me, not so much. I don’t think it’s easy to assemble and often found bad stuff like threads just not machined properly, tolerances so tight that tears and breaks are inevitable, etc, to the point where I stopped buying and using it. Maybe it’s better these days. I just know that I saved myself aggro by not buying it any more.
Also, where would a “popular joke” come from if it had no basis at all in truth? Do people joke that, I don’t know, Apple products are poorly designed and don’t work properly?
Also, where would a “popular joke” come from if it had no basis at all in truth?
Assume the following:
a) Flat pack furniture is, in general, very difficult to assemble.
b) IKEA is the most well-known manufacturer of flat pack furniture, serving as a readily identifiable eponym for the genre.
Now consider proposition c: IKEA furniture is incredibly easy to assemble, and !c: IKEA furniture is very difficult to assemble.
(a ^ b) ^ !c allows the joke to hold quite nicely. (a ^ b) ^ c does not; but that means you need to find another readily identifiable company to serve as a specimen for “flat pack furniture” - I’ve actually considered this for many seconds and can’t, can you?
Therefore the joke is made irrespective of c! \qed
no, they joke that you cannot rename an MP3 in iTunes without a personal phone call to apple HQ for permission. again, a slight exaggeration.
Are we using the same OSX?
IKEA wouldn’t be the global giant it is if their products were that poor or hard to assemble (hint: they’re not, and the instructions are very well done).
Well, in my experience that’s very much not the case. It’s time-consuming and fiddly, and I’ve bought pieces from other places that were so much better designed, prepared and tooled, with so much better quality materials, that it really stunned me how much better they were than IKEA - and how much hassle IKEA’s things are and how much time they take. Personally I think they’re the global giant they are because they’re cheap and they use aspirational styling. Which is obviously totally fine, and if you like their stuff that’s fair and great for you. But, “hint”: that doesn’t make your opinion fact and mine “incorrect”.
Could you tell me what those pieces are and where you got them from? I’ve found IKEA uniformly excellent and I’d be delighted if I could find something even better.
Main one was a WaterRower. You’d think a rowing machine of all things would be complicated. I was just blown away by how easy and solid it was to put together, everything just slipped into place and the thing is built like a tank.
There is a tabloid-style summary at The Register as well: https://www.theregister.co.uk/2017/03/31/researchers_steal_data_from_shared_cache_of_two_cloud_vms/
In case anyone wants to cross-check, out of the 23 curl CVEs in 2016, at least 10 (1, 2, 3, 4, 5, 6, 7, 8, 9, 10) are due to C’s manual memory management or weak typing and would be impossible in a memory-safe, strongly-typed language. (Note that, while I like Rust and it seems to have been the motivator for this post, many modern languages meet this bar.) While “slightly more than half” as non-C-related vulnerabilities may technically be “most”, I’m not sure it’s fitting the spirit of the term.
There are some very compelling advantages to C, certainly, which the author enumerates; in particular, its portability to nearly every platform in existence is a major weakness of Rust (and, to the best of my knowledge, any other competitor) at the moment. But it’s very important to note that nontrivial C code practically always contains serious vulnerabilities, and nothing we’ve tried (especially “code better”, the standard advice for avoiding C vulnerabilities) works to prevent them. We should be conscious that, by writing C, we are trading away security in favor of whatever benefits C provides at that moment.
edit: It’s worth noticing and noting, as I failed to, that 2016 was an unusual year for curl vulns. /u/amaurea on Reddit helpfully counted and cataloged all the vulns on that page, and 2016 is an obvious outlier for raw count, strongly suggesting an audit or new static analysis tool or something. However, the proportion of C to not-C bugs is not wildly varied over the entire list, so the point stands.
[…] 2016 is an obvious outlier for raw count, strongly suggesting an audit or new static analysis tool or something.
It was an audit.
especially “code better”, the standard advice for avoiding C vulnerabilities
If the curl codebase is as bad as its API then this is honestly a completely fair response.
We had this code recently:
void * some_pointer;
int status;
curl_easy_getinfo( curl, CURLINFO_RESPONSE_CODE, &status );
which trashes some_pointer on 64bit Linux because curl_easy_getinfo( CURLINFO_RESPONSE_CODE ) takes a pointer to a long and not an int. The compiler would normally warn about that, but curl_easy_getinfo is a varargs function, which brings no benefits and means the compiler can’t check the types of its arguments. WTF seriously? Why would you do that??
curl_easy_getinfo( CURLINFO_RESPONSE_CODE )
I also recall reading somewhere that curl is over 100k LOC, which is insane. If the HTTP spec actually requires the implementation to be that large (and it wouldn’t surprise me if it does), then you are free to, and absolutely should, just not implement all of it. If the spec is so unwieldy that nobody could possibly get it right, then why try? Implement a sensible subset and call it a day.
If you know you’re not going to be using many HTTP features, it’s not hard to implement it yourself and treat anything that isn’t part of the tiny subset you chose as an error. For example, it’s only a few hundred lines to implement synchronous GET requests with non-multipart responses and timeouts, and that’s often good enough.
I also recall reading somewhere that curl is over 100k LOC, which is insane. If the HTTP spec actually requires the implementation to be that large (and it wouldn’t surprise me if it does), then you are free to, and absolutely should, just not implement all of it.
curl supports a lot more protocols than just http though.
Indeed. From the man page.
curl is a tool to transfer data from or to a server, using one of the
supported protocols (DICT, FILE, FTP, FTPS, GOPHER, HTTP, HTTPS, IMAP,
IMAPS, LDAP, LDAPS, POP3, POP3S, RTMP, RTSP, SCP, SFTP, SMB, SMBS,
SMTP, SMTPS, TELNET and TFTP).
damn, that’s a juicy attack surface
CURL is highly compatible with a lot of the strange behaviors that browsers do support and are usually outside of (or even prohibited by) the spec/standard. Just implementing the spec doesn’t quite make it useful to the world, when the world isn’t even spec compliant. Even if you write down the standard, the real standard is what all the other browsers do, not what a piece of paper says.
But it is useful even if you only implement a tiny subset of HTTP, because most use cases involve sending trivial requests to sensible servers.
The point is that cURL isn’t a project that supplies that subset, regardless of it being useful or not. cURL supplies a complete and comprehensive package that runs pretty much anywhere and supports pretty much any protocol you might need at some point (and some you might not need).
Nothing wrong with making a slimmed-down, works-most-of-the-time-and-will-be-enough-for-most-people project; it might be very useful indeed, but that’s not the goal of the cURL project. There’s space for both.
This is the way. Start small. I would assume that 90% of the use cases for curl are just simple HTTP(S) queries, and that can be implemented in any language quite quickly.
For example, D currently has curl in its standard library, which will probably be deprecated and removed. For simple HTTP(S) queries, there is requests, which is pure D except for the ssl and crypto stuff.
nothing we’ve tried works to prevent them
Formal verification actually works. seL4 exists.
Verifying seL4 took a few years, and it was roughly 10,000 LoC. curl has an order of magnitude more: 113,316 as counted by sloccount on the GitHub repo right now. Verification is getting easier, but only very slowly.
There is no immediate commercial advantage since curl works fine. This leaves it to academia to get the ball rolling.
Verifying seL4 took a few years and it was roughly 10000 LoC.
Formally verifying 15,000-ish lines of Haskell-generated C in seL4 took ~200,000 lines of proof, actually, per this. Formally verifying all of curl would easily run into the millions of lines of proof – and you’d basically be rewriting it into C-writing Haskell to boot.
seL4 has two versions, a Haskell version that’s used to verify model safety and a C version that’s just a translation of the Haskell version. It may actually be a bit of a counter-example to your claim (that formal verification on C works in practice).
This is incorrect. The seL4 project actually proved that the C version is equivalent to (technically, refines) the Haskell version. And then they (semi-automatically) proved the generated assembly is equivalent to (refines) the C, so they don’t need to rely on C compiler correctness.
Yes but a lot of these are only published and fixed because curl is so widely used—and scrutinized. For example number 2 on your list:
If a username is set directly via CURLOPT_USERNAME (or curl’s -u, --user option), this vulnerability can be triggered. The name has to be at least 512MB big in a 32bit system. Systems with 64 bit versions of the size_t type are not affected by this issue.
Literally this doesn’t matter.
Also, how would Rust prevent this? I’m pretty sure multiplication overflow happens in Rust too.
Rust specifies that integer overflow is a program error: debug builds are required to panic on overflow, while release builds wrap with defined two’s-complement semantics.
In the future, if overflow checking is cheap enough, this gives us the ability to require it. Who knows when that’ll ever be :)
Also note that this means it might lead to a logic error, but not a memory safety error. Just by making it defined helps a lot.
Is there a formal or semi-formal Rust specification anywhere?
Not quite yet; or at least, it’s not all in one place. While all those universities are working on formalisms, we’re not working hard to get one in place, since it’d have to take that work into account, which would mean throwing stuff out and re-writing it that way, I’d imagine.
There is some work going on to make the reference (linking to nightly docs since some work has recently landed to split it up into manageable chunks) closer to a spec; there’s also been an RFC accepted that says before stabilization, we must have the reference up-to-date with the changes, but we have to backfill all the older ones. So currently, it’s always accurate but not complete.
This area is well-specified though, in RFC 560 https://github.com/rust-lang/rfcs/blob/master/text/0560-integer-overflow.md (one RFC I refer to so often I remember its number by heart)
That’s neat! Still, I find it hard to believe anything would have coverage of all multiplication errors in allocations, even if it were written in Rust. If anyone can show me a single Rust project that deliberately trips the debug panic for multiplication errors during allocation in its unit tests, I’ll be impressed. But I’ll bet the only way to really be robust against this class of error is to use something like OpenBSD’s reallocarray. That’s equally possible in C and Rust.
I do have a few overflow tests in one of my projects, but not for that specifically: https://github.com/steveklabnik/semver-parser/blob/master/src/range.rs#L682
We have pretty decent fuzzer support, seems like that might be something it would be likely to find.
I guess that depends on how often you run your fuzzer on 32-bit systems long enough for it to accumulate gigabytes of input.
The example here triggers after half a gig, but many of this class of bug would need more.
The NAME and SYNOPSIS sections have typos - the devil is in the detail.
Thanks for letting me know! Fixed them.
For what workload? The article neglects to say what kind of query they are doing.
I’d be willing to bet that full table scans are faster than both hash and btree indexes.
(for databases with 2 or 3 elements.)
What do you mean? The first paragraph covers it:
There are multiple ways in which we can compare the performance of Hash and Btree indexes, like the time taken for creation of the index, search or insertion in the index. This blog will mainly focus on the search operation.
He also shows the pgbench command lines that he used, so it should be possible for others to reproduce the results.
Then the paragraph after the second graph covers it again, mentioning community feedback where hash indexing has improved performance on integer IDs and varchar fields.
Not sure what else you could ask for.
This blog will mainly focus on the search operation.
Range queries are very different from querying for a specific key. Hash indexes help with one, and do not help with the other. If hash indexes were faster in general, that would be remarkable.
And maybe I’m not familiar enough with pgbench, but the information in the post does not seem to indicate anything about the queries or the data.
You might then want to do your homework and read a paragraph or two from the pgbench documentation before claiming that the post author didn’t do his homework?
It seems reasonable to expect that performance claims where you expect tradeoffs would address this. Either by showcasing the tradeoff, or showing that the expected tradeoff has been solved.
If the reader sees a universal statement and:
Maybe the article could use work. And if the performance boost was universal in the way that this post implied, then I’d be very interested to see far more information on how that was actually accomplished, because fast range queries with hash indexes would be amazing.
I, for one, don’t want every technical article and blog post to be written for people with zero experience.
And if the performance boost was universal in the way that this post implied, …
TFA didn’t imply that at all.
If it were an intro to databases article, I think you would have a point.
However, the author doesn’t mention range queries at all, and it’s safe to assume that a PostgreSQL developer’s blog is targeted at people who know that range queries and searching for a single item are different.
No point being snarky simply because somebody didn’t qualify every little thing they said and go into mundane details of easily researched tools like pgbench.
It’s worth noting that this release marks the EOL for the 9.1 series; no further releases will be made.
Considering that the code is AGPL, it will be interesting to see what happens with the install base once the company is closed down.
It’s worth noting that PGAdmin4 1.0 was released yesterday https://www.pgadmin.org/
Not to forget the obligatory logo http://security.360.cn/cve/CVE-2016-6304/
“I think the people getting parking tickets are the most vulnerable in society. These people aren’t looking to break the law. I think they’re being exploited as a revenue source by the local government,”
I can’t speak for London and New York, but I’m pretty sure that’s not a universal truth. There are, for example, cities in Sweden where 10% of all cars are registered to a “goalkeeper”, generally homeless and other easily exploitable people. These people then amass the parking tickets while the actual offenders and owners go free. The Wikipedia page on this translates reasonably well to English: https://sv.wikipedia.org/wiki/M%C3%A5lvakt_(brottslighet). (Due to the parenthesis the link must be copy/pasted). Not saying a good appeal process is bad, but universally blaming local governments for draconian oppressiveness isn’t helping.
In Philadelphia it’s pretty easy to see who gets tickets, because the parking authority is well staffed. Just hang out on the sidewalk for a while and you’ll see somebody come by and ticket all the “vulnerable” people. As in, people who think they’ll be able to slip into Starbucks and grab their latte before the meter maid makes another round. There actually is 30 min free parking nearby, but it’s all the way across the street from the Starbucks, unlike the reserved bus stop directly in front.
Anyone have a link to the technology behind this? I did not find an ‘about’ page on the website and the guardian article has no information. Thanks!
What technology? It’s a web site with a few options. You pick the “I didn’t see the sign” option and it gives you a form letter that says “I didn’t see the sign” which you mail to the parking authority.
It’s astonishing and wonderful that something so simple is so effective.
I suspect it’s effective because it’s below the threshold of caring. At some point, the city is going to do an inventory of every parking sign and start rejecting appeals.
Other jurisdictions have already solved this by having the ticket writer take a picture of the car and the sign.
In Stockholm, Sweden, the officer writing the ticket takes a photo of the car (or a series of photos) as well as records measurements with a laser instrument (for violations where the car is too close to a pedestrian crossing etc).
Whoa! Man, from all the description and the tag line I thought it had some database of legal theory, and that it looked at your citation and parsed it… boy did I overthink. A little disappointing, but thanks @tedu.
When there are satellite workers to an office (I work exactly like that), there will always be informal meetings over coffee that aren’t recorded on Slack or email etc - trying to fight that will inevitably be counterproductive for everyone. I don’t have any silver bullets, but for me, proactively seeking out contact (over whatever means are available in the company) and leading by example in using the mailing list etc works pretty well.
Changing culture is hard and is usually made infinitely harder by any attempts to formalize how to change said culture.
Even though the Internet has never been so densely populated with amazing Ninja Rockstar Superstar Experts to guide our path through this computer software industry (which, surprisingly enough, didn’t exist until AJ, Anno jQuery, ~10 years ago), taking a firm step back, looking at your requirements and needs, thinking for a bit, and actually selecting the right tool for the job is still a thing. Good on you, beets.io, for trusting your own judgment.