Experimental broken code enabled in release versions!? So much for the audit process.
Default install has a hole and they initially refused to disclose it despite supposedly believing in full disclosure. It would probably be a good idea to admit to both of these things.
Apparently you’re relying on ASLR (which can be broken) and malloc’s features (most of which are not enabled by default and sometimes do not apply).
Default install has a hole and they initially refused to disclose it despite supposedly believing in full disclosure.
Where did you get that information from?
The initial information was to add the undocumented “UseRoaming no” to ssh_config, with no other information provided on the “upcoming” CVE. This occurred (at least 5 hours?) before a fix was committed.
http://www.openbsd.org/security.html has a whole section on Full Disclosure.
And you seriously thought that no other information was going to be made available?
The “initial information” was in no way an attempt to “refuse to disclose it”. It was a heads-up to try to protect as many people as possible with a quick fix before a proper one could be made (ripping out the code), which included coordinating a proper release between a number of people in different countries.
“Full disclosure” does not mean “instantly release all details to everyone with no warning.” Most people involved in security would probably agree with that. Full disclosure can take time, as long as everything is eventually released.
I have no idea how one could interpret “here’s a workaround for a forthcoming bug” as “we refuse to disclose this bug”.
If this was a bug in Linux, we would never hear the end of the smugness from OpenBSD devs.
methinks scarlett might have an agenda to push…
Instead of calling me a troll please explain what you think is wrong with my comment.
inetd is pretty unused these days, isn’t it? I can’t even recall the last time I used it.
Maybe it should be removed from a few base installs, and put into ports or something?
Aside: That is one thing I really like about the OpenBSD project. They actually remove crufty old things sometimes!
OpenBSD has inetd(8) in base. It is just disabled by default.
One handy trick I’ve used it for recently in production is wiring up port redirections using nc as the server spawned by inetd.
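As a rough sketch of that trick (the target host `example.org` is hypothetical, and some inetd variants want a name from /etc/services rather than a bare port number), a single inetd.conf line can spawn nc for each connection:

```
# inetd.conf: accept on local port 8080, hand each connection to nc,
# which relays it to example.org:80 (hypothetical target)
8080 stream tcp nowait nobody /usr/bin/nc nc example.org 80
```

Each incoming connection forks one nc that just pipes bytes to the target — crude next to a real relay, but handy in a pinch.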
OpenBSD still includes a CGI daemon too.
Like CGI, you just wish that fork, exec, and reap were faster. :-)
This is such a bougie problem to have. “My ridiculously expensive storage array might need to be a bit bigger because of the ridiculously bloated filesystem I’m using…”
I wouldn’t say ZFS is ridiculously bloated. It moves the needle closer to “use current technology to improve data integrity” rather than “try to avoid consuming resources”.
The article is certainly something of a whinge, though.
[Comment removed by author]
If you are poor, you do not have a NAS in the first place.
I have had a NAS while poor. It involved a 5 year old 1TB WD Green, sitting on the floor, connected via USB2 to a low-power headless box, without backups. My cat peed on it.
The system didn’t have enough memory for ZFS, but I’d be more likely to try HAMMER anyway.
Sure - my “NAS” is an old desktop. Anyone who is in the position of choosing which filesystem to use is probably middle rather than upper class - rich people will just buy an off-the-shelf NAS system, plug disks into it, and let it do its thing.
I don’t want to be Debbie Downer, but is this page new? heavily updated? or was it just now discovered by someone and deemed interesting?
Regardless of which it is, I’m not complaining, just curious if I missed something.
It looks like a tutorial that appeared on BSDNow a few years ago. According to the commit message from 2 days ago, it was written by the same person.
Are these pages in a CVS repo somewhere? Where did you find a commit message?
Some of this is good, but it’s poisoned by a smug, patronizing tone that is a) all too common in this sort of material, b) extremely off-putting.
I agree with what the author says about the abstractions of Unix and that they could be better, but… it’s all phrased in the most contempt-culture-y way imaginable, and it’s really, really icky.
I may have a little bit of confirmation bias, but I think I might have thought that way at different times. Just shameful. Of course the irony is that I switched languages fairly often.
In general I think I understand what the author is saying, but I am entirely unconvinced that his replacement is much better. For example, he wants users to use a command line system and even a scripting language. I really don’t think he has seen real users.
Now his statement about using languages that are not memory safe being engineering malpractice – I would tend to agree, if we had a real developed software engineering field. But I don’t think we are anywhere close to where that should be.
Unix must be destroyed indeed.
I always saw memory safety as an implementation problem rather than a language problem. There’s no reason a C implementation can’t guarantee that the application will immediately crash when out-of-bounds memory is accessed, like many languages with exceptions and such do. This is pretty similar to what some malloc implementations already do with guarding allocations and unmap upon free, although the ones that currently exist aren’t really silver bullets.
However, concerns about type safety are definitely more of a language problem…
There’s no reason a C implementation can’t guarantee that the application will immediately crash when out-of-bounds memory is accessed
I am afraid that the C type system is unable to deal with these issues, so it would be left to a runtime system check or a static analysis tool. The runtime system would be unacceptable because it would slow things down, and a static analysis tool cannot be guaranteed to find all instances.
An invalid array access is usually left as undefined behaviour for this very reason. I can see malloc being the basis for a runtime check, though.
Looking at what rust or haskell do with memory access is interesting. Rust tries to contain ownership and Haskell just wraps it in a function call.
Maybe finding most instances is enough - being able to ask for a large object and treating it like malloc treats an address space is a fact of life loophole in most general purpose safe languages, and a common technique in areas like embedded runtimes and emulators. Runtime checks are also a common solution in a few popular languages, but I suppose the trade-off is more acceptable there. :)
Runtime checks are a great example of why the ‘culture’ of C++ is a benefit for performance: if you check the bounds in some manner yourself (to be explicit) and then access the element, you definitely do not want it checked again. So on a vector you can use .at(…) to get the check or operator[] to avoid it.
I think what would be nice would be a way of proving you have checked the bounds for a static analyzer in the compiler, and it can assume a certain range is thus valid. That would be a great way to discount the areas you don’t need to worry about.
I love how he promotes the usage of VLAs only to give plenty of warnings later on how easily this can fail for larger objects and that a user can exploit that to crash your program.
At least with malloc, you can check to see if the allocation failed. With VLAs, you literally have to just live with the stack overflow in case you requested too much stack at once.
The only thing I agree on is the stdint.h usage. It greatly improves readability. Most of the other points are more experimental in nature or don’t matter (personal taste).
1) “C99 allows variable declarations anywhere” / “C99 allows for loops to declare counters inline”
Why is it “bad practice” to declare the variables at the top of the function?
Just because you can doesn’t mean you should. And if your functions grow too large, you might have to think about splitting them up a bit, not scattering your variable declarations all over the place. Otherwise you end up with more cruft in the end, not less.
2) “#pragma once”
Way to go for portability. If you only care about the gcc/clang monoculture, this may seem logical, but it’s non-standard, so don’t use it. :P
3) “restrict”
If you do numerical mathematics, go ahead, use it. I use it for my work as well. Most people, however, don’t even know how restrict works exactly and just add it anywhere, thinking it’s safe. In most cases the speed benefit won’t matter anyway, because your program is stuck in I/O 99% of the time.
4) “Return Parameter Types”
The convention ‘0’ for success and ‘1’ for error is common knowledge. The bool proposal is kind of stupid, because you end up setting up conventions there as well: does returning ‘true’ mean error or success?
5) “Never use malloc, use calloc”
Seriously? This can actually mask bugs in your program (forgotten 0-terminators on dynamic strings) which can fuck things up later on. Also, it’s slower. If you use calloc everywhere, you basically admit that your data structures are messed up and you have let your program grow too much. Or that you simply haven’t understood the language/machine.
And the most important point: make up your own minds, people! If you prefer your own coding style, then use it. If it’s too weird, there’s no guarantee people will contribute anything, but in C you can’t go too wrong anyway.
Nevertheless, I like the gofmt approach. :)
Also, take those “how to’s” always with a grain of salt. This is merely a reflection of the author’s opinion. Hell, take what I say with a grain of salt. Read the docs, read the standards(!) and inform yourself. C is simple enough that you can make up your own mind on those technical details.
If you are still thinking about using VLAs in your code, take a look at the GCC implementation.
Guides like this are the reason so many people are still writing bad code, because they let others think for them instead of informing themselves.
I don’t think 1 for error is that common a convention, though I agree non-zero is a relatively common way to signal failure. In a lot of the code I work on, almost everything returns an int which is 0 for success and -1 for failure.
I personally think it’s a “bad practice” (whatever that means – to be avoided, I guess) to declare variables outside the scope in which they are used. If you need a variable inside one arm of an if statement, put it in there, not at the top of the block. Inline loop counter declaration is essentially the same thing.
Regarding (1), declaring variables as needed instead of at the beginning of the block can help you, in my experience. In ANSI C, it is easy to miss that a variable has not been initialized or has actually vanished from the code. Also, patterns of variable reuse (like “I am going to reuse i here…”) probably don’t emerge as often.
So it’s not necessarily that declaring variables at the top is bad; it is just nicer to declare them as you go.
In the end it doesn’t matter. I often reuse my loop variables, you probably don’t. I guess even if we worked on a project together this wouldn’t be too much of an issue, anything else is not important.
Way to go for portability. If you only care about the gcc/clang-monoculture, this may seem logical, but it’s non-standard, so don’t use it. :P
I actually think of this as a nice bonus! :P
Every time I have used some compiler other than gcc/clang, there have been horrible headaches around every corner (especially with IAR, damn it!). Although I must say that all my experience outside the gcc/clang world has been with proprietary compilers. I might be somewhat biased.
Try PCC. You might be pleasantly surprised? :)
This seems to be more of a guide of what not to do.
I would be more likely to point to this with the disclaimer “You see this guy’s opinions? Do the opposite of what he says.”
Some of the compiler features he mentions are non-standard. This matters for me. I actually use a C compiler that isn’t GCC or clang on a regular basis (pcc). -march=native is often unacceptable for downstream distributors, and generally I’m annoyed when programs ignore my CFLAGS in favor of their own ridiculous optimizations. Usually I value a fast compilation far more than non-hot parts of the code being sprinkled with magic. As others have mentioned, “#pragma once” is also non-standard, and variable size arrays (i.e. alloca) can be a security risk.
No specific comments on types (though you should certainly use char to refer to UTF-8 octets, otherwise people who have to use your libraries or read your code will be annoyed). I use “unsigned” when I want an integer that’s at least 16 bits and don’t care about the specifics. That’s in line with the standard.
There are valid arguments for separating declarations from code, especially when you have resources you want to allocate and free. for loops are perhaps a case when this rule can be broken - not sure I have a strong opinion here.
“You see this guy’s opinions? Do the opposite of what he says.”
That’s exactly what I meant. My sentence was obviously ambiguous.
Ah, ok, that makes more sense. Thanks.
Isn’t that at least half of any effective programming guide? Knowing how to write a program that compiles and runs in a given language is easy. Knowing how to write a good program that minimizes errors and maximizes readability, performance, security, and refactorability is hard.
Package manager for the C programming language
Oh, you mean pkgsrc?