1.  

I can’t help but be surprised that authenticated encryption/AEAD doesn’t seem to be intended to become the default for symmetric encryption in Percival. crypto_secretbox in NaCl and libsodium, as well as crypto_lock in Monocypher, all default to an AE or AEAD construction.
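
For anyone unfamiliar with those interfaces, here is a minimal sketch (using libsodium’s documented crypto_secretbox API, nothing Percival-specific) of what “authenticated by default” looks like in practice: the Poly1305 tag is handled for you, and decryption simply fails on a forged or corrupted message.

```
#include <sodium.h>

int main(void) {
    if (sodium_init() < 0) return 1;

    unsigned char key[crypto_secretbox_KEYBYTES];
    unsigned char nonce[crypto_secretbox_NONCEBYTES];
    crypto_secretbox_keygen(key);
    randombytes_buf(nonce, sizeof nonce);

    const unsigned char msg[] = "attack at dawn";
    unsigned char ct[crypto_secretbox_MACBYTES + sizeof msg];
    unsigned char out[sizeof msg];

    /* Encrypts *and* authenticates in one call (XSalsa20-Poly1305). */
    crypto_secretbox_easy(ct, msg, sizeof msg, nonce, key);

    /* Returns -1 if the ciphertext or tag has been tampered with. */
    if (crypto_secretbox_open_easy(out, ct, sizeof ct, nonce, key) != 0)
        return 1; /* forgery detected; no plaintext is released */
    return 0;
}
```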

    1. 3

      do not use your real name

      Yeah, sure. (And I especially hate stuff like CLA forms that demand your “real” name, phone number and fscking POSTAL ADDRESS)

      In addition, hide your location, gender, race, political alignment, and sexual orientation. Create multiple email addresses, create multiple github accounts, and use hacker names.

      That is just excessive paranoia though.

      1. 1

        What is “CLA” in this context?

        1. 2

Contributor license agreement. You never saw these forms? Try contributing to a Google project :D

          1. 1

Or Canonical. Thankfully GitLab removed theirs.

        2. 1

          and fscking POSTAL ADDRESS

          The postal address is probably there to be able to efficiently sue you.

        1. 14

          In addition, hide your location, gender, race, political alignment, and sexual orientation.

I agree with this sentiment, but for another reason: they’re wholly and entirely irrelevant to the purpose of writing software. If the “normal” class (I presume that’s cishet white men) starts hiding these too, we can make software communities effectively blind to these attributes. That would take a lot of wind out of the sails of the drama that has loved rearing its head these past few years.

          create multiple github accounts

Side note: That’s going to get expensive. From GitHub’s terms of service: “you may not have more than one free Account.”

          1. 8

            “Blind to attributes” is not the Ultimate Solution for Everything.

            It can be very useful in specific circumstances to avoid discrimination — like hiring processes and conference talk selection.

            It doesn’t seem like a good idea to try to apply it to, like, whole communities doing their everyday things.

          1. 4

            Is the category useful? Is there anyone who can’t stand a mention of markup languages and wants to filter all related stories from their browsing, or someone who wants to subscribe to an only-textprocessing rss feed? As much as I love categorization, I’m skeptical that the benefit of this outweighs the added complexity for submitters.

            1. 5

              I’m mainly proposing this for two reasons:

              Primarily, the submission guidelines clearly spell out “If no tags clearly apply to the story you are submitting, chances are it does not belong here. Do not overreach with tags if they are not the primary focus of the story.” The “LaTeX Tooling Guide”, the “New mandoc -mdoc -T markdown converter” and the “DocBook rocks! a gentle introduction” stories are all overreaching tags. This could discourage valuable submissions in this general area.

Secondarily, yes, I personally do want a way to search for textprocessing posts. It’s a special area of interest of mine. As seen in the examples above, the submissions are very heterogeneous and hard to find without issuing multiple searches by term. I suspect there may be other people with an interest like that.

              1. 3

the submission guidelines clearly spell out “If no tags clearly apply to the story you are submitting, chances are it does not belong here.”

                Maybe we just get rid of that instead. We already define community norms by votes, comments, and flags. We also evolve. Formal methods and osdev tags are examples. I also try to sneak hardware design in here periodically with it getting a little more attention (i.e. votes) now than in the past. So, the quoted sentence doesn’t reflect what actually goes on.

                The other thing I do is send new people that What Lobsters Is and Isn’t post to give them a heads up on what kinds of things are good candidates for submission. Especially if their stuff is already getting flagged in Recent.

                1. 1

                  Even if that were taken out, there’s still the informal burden to tag a story. If you don’t find any tag matching your submission, refraining from submitting would seem like the most obviously correct course of action.

                  1. 2

                    So exclude ideas that don’t fall into a hierarchy?

                    The OP (you) is talking about a subset of what could be called “programming”.

                    refraining from submitting would seem like the most obviously correct course of action.

                    I can’t even. I think you are being overly pedantic in trying to find a classification.

                    1. 1

That’s the point I was making. We wouldn’t have a lot of these tags if people refrained from submitting when there’s no matching tag. Notice that your tag suggestion is done, like many others, by highlighting all the stories it would’ve applied to. The stories already submitted are being used to justify the creation of the tag. So, the stories come before the tag, and the lack of a tag doesn’t mean “don’t submit”, or at least it didn’t to the people who submitted them.

                      If anything, a tag seems to just indicate something is common enough on this site to filter or highlight it.

                2. 5

                  Too many tags also become a burden for submitters having to look at or add them. I’m opposed since binary and text are the default things programmers will be working with. I also don’t see an overwhelming number of articles on it justifying filtering.

                1. 4

                  If adopted, I suggest a different name. textprocessing makes me think of regexps, parsing, full-text search, stuff like that.

                  Maybe document-processing ?

                  1. 1

                    I’d say that’s the more accurate description and a good middle ground between textprocessing and typesetting. One does wonder if documentation/man pages still count as documents in that sense, but I’d say it’s clear enough. Though, as far as I know, tags cannot have hyphens/dashes, so it’d have to be documentprocessing.

                  1. 4

                    As far as I know, XP is still commonly found in developing countries. Firefox was the last browser to still support XP, so this is rather unfortunate.

                    1. 3

                      Hopefully XP users will fork Firefox to support XP, just like TenFourFox people did.

                      1. 3

That would seem nice for those people. But having an unmaintained, unpatched, old Microsoft system on the internet will bite them in the end. I’m hoping that people will find something else instead. Maybe a somewhat-usable Linux distribution + Wine? Maybe something built on ReactOS?

                        I’m just hoping that we, as humanity, will be able to get rid of Windows XP :)

                        1. 4

Yeah, they need to get off Windows entirely. Someone just needs to keep making low-cost, usable Linux boxes for developing countries. Alternatively, a Linux distro that keeps drivers for as much old hardware as possible. I’m not sure if one already does. Each upgrade I do seems to kill something off on a random, old machine.

                        2. 3

There is RetroZilla, which supports as far back as Windows 95. Though it is based on SeaMonkey, not Firefox.

                      1. 2

                        It’s worth noting that some of the games in OpenBSD base are broken. For example, hack(6) states:

                        hack is currently unusable because it relies on setgid(2) to allow multiple users read and write access to the same files.

                        Somebody on the openbsd-tech mailing list was trying to sort out various issues with hack in May, June or July, but none of the stuff actually got committed.

                        1. 5

                          It may be worth noting that copyright on Windows 95 is still intact. In all likelihood, this infringes upon Microsoft’s copyright.

                          1. 3

                            It’s worth noting that some people consider inheriting from Exception rather than StandardError to be bad style. Because catching Exception includes things like ScriptError::SyntaxError and SignalException::Interrupt (^C), the advice seems to be to catch StandardError instead if you don’t know what exceptions to expect and thus your custom exception classes would need to inherit from StandardError to match that.

                            1. 3

                              Yep, and some common libraries (looking at you, nokogiri) don’t follow this advice and instead throw things like SyntaxError that should be limited to the Ruby parser itself.

                              1. 0

Actually, it’s better to create a custom error that makes sense within the execution context.

Anyway, thank you for the clarification. :-)

But the purpose of my articles is just to explain how it works, not how to use it.

I believe developers are not robots; they’ll learn all the aspects of a notion by using it, repeatedly. Instead of telling them what’s good or not, I prefer to let them figure it out by themselves.

I prefer to put all of my energy into explaining the main concept and leave the 5% of edge cases aside.

But, again, thanks for your precious feedback :-)

                              1. 1

There’s another point of note that this post seems to ignore.

It’s a common occurrence that Google serves an intentionally unsolvable CAPTCHA (the correct solution is rejected) or even outright refuses to serve a CAPTCHA at all. In that case, there’s just no way to access your site. And these things usually happen for months at a time.

                                1. 7

                                  A better test bed. Although my work focus on developing programs on Linux, I will try to compile and run applications on OpenBSD if it is possible.

                                  I feel like the lack of valgrind does hurt OpenBSD as a testbed. I know there’s malloc.conf(5), but that doesn’t seem to help much in the case of, say, out of bounds access of a stack-allocated variable.

                                  a) Patches. Although most of them are trivial modifications, they are still my contributions.

                                  Don’t claim it’s just trivialities. The small things and adding polish is what really makes OpenBSD stand out (or any software project, really), and every “trivial” modification helps.

                                  1. 14

                                    OpenBSD does have Valgrind.

                                    1. 4

                                      I stand corrected. Oops. Thank you.

                                      1. 1

                                        What about ASan and the other sanitizers?

                                    1. 3

I’m so happy to read about the progress on this, but we need this on Linux too! Who is interested in developing this?

                                      1. 4

                                        I’d assume that the answer you’d get if you ask a Linux kernel developer is “use SELinux or AppArmor”.

                                        But if we’re wishing for arbitrary things, then I’d also like for strlcpy, strlcat and the arc4random family to be an actual part of POSIX and subsequently adopted by glibc/Linux.
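
For readers who haven’t used them, here is a minimal sketch of what those interfaces buy you; on Linux today they come from libbsd rather than glibc (link with -lbsd), which is part of the annoyance:

```
/* OpenBSD: in libc. Linux: install libbsd and link with -lbsd. */
#include <bsd/string.h>   /* strlcpy, strlcat */
#include <bsd/stdlib.h>   /* arc4random family */
#include <stdio.h>

int main(void) {
    char buf[8];

    /* strlcpy always NUL-terminates and returns the length it tried
       to copy, so detecting truncation is a single comparison. */
    if (strlcpy(buf, "a string that is too long", sizeof buf) >= sizeof buf)
        fprintf(stderr, "input truncated\n");

    /* arc4random needs no seeding, never fails, and doesn't block. */
    printf("%u\n", arc4random());
    return 0;
}
```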

                                        1. 3

                                          Or Smack for those favoring simplicity. Looking at the link, I just found out it got big in automotive Linux, too.

                                        2. 3

                                          The closest equivalent on Linux are probably the systemd filesystem sandboxing options: https://www.freedesktop.org/software/systemd/man/systemd.exec.html#ReadWritePaths=
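
As a rough illustration (the unit name and paths are hypothetical, but the directives are real systemd options), restricting a service’s view of the filesystem looks something like this:

```
# /etc/systemd/system/myservice.service (hypothetical example)
[Service]
ExecStart=/usr/local/bin/myservice
# Mount the whole filesystem read-only for this service, except the
# paths explicitly opened up below; also hide /home, /root, /run/user.
ProtectSystem=strict
ProtectHome=yes
ReadWritePaths=/var/lib/myservice
ReadOnlyPaths=/etc/myservice
InaccessiblePaths=/srv/secrets
NoNewPrivileges=yes
```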

                                          1. 2

                                            I think it’d be easier to make OpenBSD as good as Linux. I’m not sure what it’s missing, though. Momentum, I guess.

                                            1. 5

Culture and priorities are different. Linux specifically targets what brings in mainstream and corporate audiences. OpenBSD explicitly rejects a lot of that in favor of being simpler, more UNIX-like, or better in quality and security. Lastly, there are more attempts to sell Linux-based systems that generate revenue that can fund development.

                                          1. 13

Don’t forget that performance enhancements, security enhancements, and increased hardware support all add to the size over what was done a long time ago with some UNIX or Linux. There’s cruft, and there are necessary additions that appeared over time. I’m actually curious what a minimalist OS would look like if it had all the necessary or useful stuff. I’m especially curious whether it would still fit on a floppy.

                                            If not security or UNIX, my baseline for projects like this is MenuetOS. The UNIX alternative should try to match up in features, performance, and size.

                                            1. 13

                                              We already have a pretty minimalist OS with good security, and very little cruft: OpenBSD.

                                              1. 7

The base set alone is over 100 MB, though. That’s a lot more than OP wants.

                                                1. 5

                                                  Can you fit it with a desktop experience on a floppy like MenuetOS or QNX Demo Disc? If not, it’s not as minimal as we’re talking about. I am curious how minimal OpenBSD could get while still usable for various things, though.

                                                2. 12

A modern PC OS needs an ACPI script interpreter, so it can’t be particularly small or simple. ACPI is a monstrosity.

                                                  1. 2

                                                    Re: enhancements, I’m thinking Nanix would be more single-purpose, like muLinux, as a desktop OS that rarely (or never) runs untrusted code (incl. JS) and supports only hardware that would be useful for that purpose, just what’s needed for a CLI.

                                                    Given that Linux 2.0.36 (as used in muLinux), a very functional UNIX-like kernel, fit with plenty of room to spare on a floppy, I think it would be feasible to write a kernel with no focus on backwards hardware or software compatibility to take up the same amount of space.

                                                    1. 3

Your OS or native apps won’t load files that were on the Internet or on hackable systems at some point? Or is it purely personal use with only outgoing data? Otherwise, it could be hit with some attacks. Many come through things like documents, media files, etc. I can imagine scenarios where that isn’t a concern. What are your use cases?

                                                      1. 5

                                                        To be honest, my use cases are summed up in the following sentence:

                                                        it might be a nice learning exercise to get a minimal UNIX-like kernel going and a sliver of a userspace

                                                        But you’re right, there could be attacks. I just don’t see something like Nanix being in a place where security is of utmost importance, just a toy hobbyist OS.

                                                        1. 4

                                                          If that’s the use, then I hope you have a blast building it. :)

                                                          1. 3

                                                            It pretty much sounds like what Linus said back then, though, so who knows? ;)

                                                      2. 2

                                                        Linux 2.0 didn’t have ACPI support. I doubt it will even run on modern hardware.

                                                        1. 2

It seems to work. I just booted the muLinux ISO (admittedly not the floppy; I don’t have what’s needed to make a virtual floppy image right now) in Hyper-V and it runs fine, even showing 0% CPU usage at idle according to Hyper-V.

                                                          1. 2

                                                            Running in a VM is not the same as running on hardware.

                                                    1. 18

                                                      I don’t like the design of Enchive.

                                                      The process for encrypting a file:

                                                      1. Generate an ephemeral 256-bit Curve25519 key pair.
                                                      2. Perform a Curve25519 Diffie-Hellman key exchange with the master key to produce a shared secret.

                                                      OK.

3. SHA-256 hash the shared secret to generate a 64-bit IV.

Kinda OK; this complexity can be justified by the need for a quick check before decryption (“validate the IV against the shared secret hash and format version”) that we got the correct key.

4. Add the format number to the first byte of the IV.

                                                      OK.

5. Initialize ChaCha20 with the shared secret as the key.

This is using the raw multiplication result as a key. It’s recommended to hash the result (though not with plain SHA-256, since 56 bits of that hash are already exposed as the IV) before using it as a cipher key (for example, NaCl uses HSalsa20 as a quick hash for that).

6. Write the 8-byte IV.
7. Write the 32-byte ephemeral public key.
8. Encrypt the file with ChaCha20 and write the ciphertext.

                                                      OK. But for big files, it may be worth using chunked authenticated encryption to avoid spilling out unauthenticated plaintext or wasting time (see https://www.imperialviolet.org/2014/06/27/streamingencryption.html and my implementation https://github.com/dchest/nacl-stream-js).

9. Write HMAC(key, plaintext).

                                                      Here we have three problems.

The first is that it uses the same key for HMAC as for encryption. I don’t think there’s a particular interaction problem between HMAC-SHA-256 and ChaCha20 that would lead to something scary, but this design is not ideal. To fix this and the previous issue in one shot, the authors could use a 64-byte hash function to derive both encryption and authentication keys from the Curve25519 shared key: encr_key || mac_key = SHA512(shared_key); or use HMAC-SHA256 with different personalization strings (encr_key = HMAC-SHA256(“EncrKey”, shared_key) and mac_key = HMAC-SHA256(“AuthKey”, shared_key)); or use HKDF.

Secondly, it’s MAC-then-encrypt, which exposes the cipher to various attacks before there’s a chance to authenticate. Finally, I would also authenticate everything, not just the ciphertext. So I’d use HMAC(mac_key, everything), where everything is the IV, the ephemeral public key, and the ciphertext. This way, the HMAC will be checked before decrypting, and a malicious payload will be rejected early.
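
To make that concrete, here is a minimal sketch of the suggested fix (illustrative only, using libsodium’s SHA-512 and HMAC-SHA-256 rather than Enchive’s own primitives): split the shared secret into separate encryption and MAC keys, and MAC the header together with the ciphertext so everything can be verified before any decryption happens.

```
#include <sodium.h>
#include <string.h>

/* Derive encr_key || mac_key = SHA-512(shared_key), then compute an
   encrypt-then-MAC tag over header (IV + ephemeral pubkey) and ciphertext. */
static void derive_and_tag(const unsigned char shared_key[32],
                           const unsigned char *header, size_t header_len,
                           const unsigned char *ciphertext, size_t ct_len,
                           unsigned char encr_key[32], unsigned char tag[32])
{
    unsigned char keys[64];
    crypto_hash_sha512(keys, shared_key, 32);
    memcpy(encr_key, keys, 32);                 /* first half: cipher key */

    crypto_auth_hmacsha256_state st;            /* second half: MAC key */
    crypto_auth_hmacsha256_init(&st, keys + 32, 32);
    crypto_auth_hmacsha256_update(&st, header, header_len);
    crypto_auth_hmacsha256_update(&st, ciphertext, ct_len);
    crypto_auth_hmacsha256_final(&st, tag);

    sodium_memzero(keys, sizeof keys);
}
```

On the receiving side, the tag would be recomputed over the received header and ciphertext and compared in constant time before any ChaCha20 decryption happens.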

                                                      Enchive uses an scrypt-like algorithm for key derivation, requiring a large buffer of random access memory.

                                                      If it’s scrypt-like, why not just use scrypt? I haven’t checked the whole algorithm, but I can already see a drawback: it uses SHA-256 to perform work on memory. Scrypt specifically uses a very fast function (8-round Salsa20) so that it can perform this computation as quickly as possible, which is very important for a memory-hard function.


                                                      To summarize: there’s nothing particularly broken with this design, as far as I can tell from a quick look, but it’s not a solid design, unfortunately.

                                                      1. 5

Enchive’s author here. These are all good points. Most of the mistakes are me not knowing any better when I designed it, but, fortunately, none of them are fatal as far as I know.

                                                        But for big files, it may be worth using chunked authenticated encryption to avoid spilling out unauthenticated plaintext

                                                        I did eventually figure out chunked authentication for myself months later, but too late for Enchive. If I ever redesign the file format, it would definitely use chunked authentication, among other corrections like using EtM.

                                                        If it’s scrypt-like, why not just use scrypt?

                                                        At the time (early 2017) I couldn’t find a drop-in scrypt library with a friendly license, and I didn’t want to try implementing it myself. A major design goal was ANSI C and no dependencies. As a result, Enchive can easily be compiled just about anywhere, probably even decades into the future (to, say, decrypt some old archives). As evidence of this, you can build it and run it on Windows 98 decades in the past.

                                                        1. 5

                                                          I get the feeling most of those shortcomings are caused by direct use of primitives. I suspect that the author was trying to:

                                                          1. minimize dependencies – especially looking at optparse.h, which is (mostly) redundant on a POSIX system due to getopt(3) existing – and source files, and
2. keep the license unencumbered (all third-party code seems to be in the public domain), but then ended up making dangerous decisions given raw primitives.

Argon2 not being in there is probably not an accident but a result of how difficult it is to implement and how he’d have had two hash functions (SHA-256 and BLAKE2 for the Argon2 state).

                                                          The author might’ve had a better result and less work with naive use of Monocypher, libsodium or TweetNaCl, though TweetNaCl still would’ve let him shoot himself in the foot with raw X25519.

                                                          1. 1

                                                            If it’s scrypt-like, why not just use scrypt?

                                                            Yeah, it’s like they’re not aware that scrypt comes with a file encryption utility.

                                                            1. 3

I didn’t mean using the file encryption utility itself, but the KDF primitive. Although, indeed, the scrypt utility is great (I use it for my files), it doesn’t do asymmetric encryption, which seems to be the point of Enchive.

                                                              1. 1

but it doesn’t do asymmetric encryption, which seems to be the point of Enchive.

                                                                Ah, I missed that part. Hmm, well in that case Enchive seems pretty alright as far as goals are concerned. Hopefully the author will incorporate your suggestions.

                                                          1. 3

                                                            Fortunately, arc4random_uniform(3) has mostly solved the range problem. If you have it available.
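
For context, a minimal sketch of the range problem being referred to, the classic modulo bias:

```
#include <stdlib.h>   /* arc4random(), arc4random_uniform() on BSD;
                         on Linux they're in libbsd's <bsd/stdlib.h> */

/* Biased: 2^32 isn't divisible by 6, so some results are slightly
   more likely than others. */
unsigned biased_d6(void)  { return arc4random() % 6 + 1; }

/* Unbiased: arc4random_uniform() internally rejects the values that
   would wrap unevenly and returns a uniform result in [0, 6). */
unsigned uniform_d6(void) { return arc4random_uniform(6) + 1; }
```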

                                                            1. 5

                                                              “missing” out of the box for composition and revision are tools for version control

                                                              There’s RCS and CVS in the base system for that.

                                                              One thing that I find somewhat unfortunate is that OpenBSD has a lot of great text editing tools, yet it’s missing any kind of typesetter (troff, TeX) in the base system.

                                                              1. 3

                                                                …editing because @xorhash had been kind enough to remind me of rcs(1) and cvs(1)…

                                                                OpenBSD’s base system doesn’t provide dictionary searches or spell check, either, but I’m fine with that. I’m grateful they provide X Window as part of the base system. Stuff like git, troff, aspell, diction, pandoc, and dictd I’m happy to install using the package mangler.

                                                                What I would love to know is why OpenBSD ports has the dict server but none of the dictionaries. If I want a dict daemon on my laptop so I can check definitions offline, I have to get the actual dictionary archives out of the FreeBSD port’s distfiles because ftp.dict.org is dead. While I can do that, I’d rather not have to. :)

                                                                1. 3

                                                                  I second xorhash’s mention of RCS. (Though, I’m no BSD user.)

                                                                  I heard somewhere that RCS was designed with your sort of use case in mind! Maybe it was this post (2009)?

                                                                  It’s certainly an easily understood, unixy tool. Maybe I’ll try using it one day. ;)

                                                                  1. 3

                                                                    That’s an excellent introduction. Thanks.

                                                                    However, RCS isn’t actually suited to my use case because I don’t use one file per novel. Instead, I write novels the way I code at my day job, with text distributed across various files in a directory tree. Yes, it’s probably overkill, but it beats paying a shitload of money for a Mac so I can use Scrivener or Ulysses.

                                                                    My hierarchy currently looks somewhat like this:

                                                                    $SERIES/
                                                                      $TITLE/
                                                                        title
                                                                        dedication
                                                                        disclaimer
                                                                        acknowledgements
                                                                        $SUBPLOT1/
                                                                          01.scene
                                                                          02.scene
                                                                          01.revision01.sed
                                                                        $SUBPLOT2/
                                                                          01.scene
                                                                    

When I’m ready to read what I’ve done as a whole, I’ll assemble the whole mess using cat and fmt. Likewise when I’m done with all revisions and am ready to submit to a publisher. At that point I’ll put everything together into a file like “submission01”, mark it up with Markdown or reStructuredText (depending on whether I was pretentious enough to include footnotes), run it through pandoc to convert it to Word format (unless the publisher is hip enough to accept an OpenDocument Text file), and then edit the output in LibreOffice to suit the publisher’s house style.

You can’t manage something like this with RCS. CVS would be more appropriate, but as I mentioned in another comment, I’m already familiar with git. I use it when tinkering with static site generators, building websites, and at my day job.

                                                                    1. 4

                                                                      I don’t know much about the BSDs but I use Scrivener on Debian via WINE, flawlessly! Just a note.

                                                                      1. 3

                                                                        Apparently there’s an AppImage of the unfinished Linux version for people who don’t want to use WINE.

                                                                        Believe it or not, I’ve tried Scrivener. It’s not a bad app, but I don’t like that it stores everything in RTF files. When I’m drafting something, I’d rather work in plain text.

                                                                        Also, as @qznc noted, a tool like ed(1) is great if you have a tendency to go back and edit unfinished work. I have this tendency in spades.

                                                                      2. 2

                                                                        I don’t see why you can’t use RCS.

                                                                        % ed test
                                                                        a
                                                                        this is a test of using
                                                                        RCS for version control.
                                                                        .
                                                                        w
                                                                        49
                                                                        !ci -l % 
                                                                        ci -l test
                                                                        test,v  <--  test
                                                                        enter description, terminated with single '.' or end of file:
                                                                        NOTE: This is NOT the log message!
                                                                        >> test check in
                                                                        >> .
                                                                        initial revision: 1.1
                                                                        done
                                                                        !
                                                                        ,n
                                                                        1	this is a test of using
                                                                        2	RCS for version control.
                                                                        a
                                                                        
                                                                        Now we add a new paragraph.
                                                                        .
                                                                        w
                                                                        78
                                                                        !ci -l %
                                                                        ci -l test
                                                                        test,v  <--  test
                                                                        new revision: 1.2; previous revision: 1.1
                                                                        enter log message, terminated with single '.' or end of file:
                                                                        >> new paragraph 
                                                                        >> .
                                                                        done
                                                                        !
                                                                        ,n
                                                                        1	this is a test of using
                                                                        2	RCS for version control.
                                                                        3	
                                                                        4	Now we add a new paragraph.
                                                                        q
                                                                        

                                                                        Compared to git, the only thing that’s missing is keeping track of contents that get moved from one file to another.

                                                                        1. 2

                                                                          RCS is one repository per file. That’s not what I want. I want one repository for the entire project. And I want the master repository to live on BitBucket (or some other provider I trust because I’m too lazy to self-host on a VPS). This lets me sync between multiple machines.

                                                                          This way, when I’m dead because somebody got upset about me typing in public and decided to beat me into the ground with my laptop, it’s possible that some other nerd who overdosed on JRPGs and Blue Öyster Cult albums as a kid might find it and take over. :)

                                                                          1. 1

                                                                            In the true spirit of unix, you use one tool for one purpose only. Just use a separate tool for syncing. scp(1) works. rsync(1) works better. unison(1) beats everything.

You can’t really call RCS a ‘repository’. It is, after all, just one ‘,v’ file for the version history of a single file. You can set up rsync or unison to sync up ‘,v’ files exclusively, which essentially turns RCS into a hand-rolled CVS.
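
For example (hypothetical paths), an rsync filter that syncs only the ,v history files would look like:

```
rsync -av --include='*/' --include='*,v' --exclude='*' \
    ~/novels/ user@host:backup/novels/
```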

                                                                1. 2

                                                                  (Preface: I didn’t know much, and still don’t, about the *Solaris ecosystem.)

                                                                  So it seems like the evolution of *Solaris took an approach closer to Linux? Where there’s a core chunk of the OS (kernel and core build toolchain?) that is maintained as its own project. Then there’s distributions built on top of illumos (or unleashed) that make them ready-to-use for endusers?

                                                                  For some reason, I had assumed it was closer to the *BSD model where illumos is largely equivalent to something like FreeBSD.

If I wanted to play with a desktop-ready distribution, what’s my best bet? SmartOS appears very server-oriented - unsurprising given *Solaris was really making more in-roads there in recent years. OpenIndiana?

                                                                  1. 3

                                                                    If Linux (kernel only) and BSD (whole OS) are the extremes of the scale, illumos is somewhere in the middle. It is a lot more than just a kernel, but it lacks some things to even build itself. It relies on the distros to provide those bits.

Historically, since Solaris was maintained by one corporation with lots of release engineering resources and many teams working on subsets of the OS as a whole, it made sense to divide it up into different pieces, the most notable being the “OS/Net consolidation”, which is what morphed into what is now illumos.

                                                                    Unleashed is still split across more than one repo, but in a way it is closer to the BSD way of doing things rather than the Linux way.

                                                                    Hope this helps clear things up!

If I wanted to play with a desktop-ready distribution, what’s my best bet? SmartOS appears very server-oriented - unsurprising given *Solaris was really making more in-roads there in recent years. OpenIndiana?

                                                                    OI would be the easiest one to start with on a desktop. People have gotten Xorg running on OmniOS (and even SmartOS), but it’s extra work vs. just having it.

                                                                    1. 1

                                                                      Solaris is like BSD in that it includes the kernel + user space. In Linux, Linux is just the kernel and the distros define user space.

                                                                      1. 1

                                                                        So…. is there no desktop version of Illumos I can download? Why does their “get illumos” page point me at a bunch of distributions?

                                                                        Genuine questions - I’m just not sure where to start if I want to play with illumos.

                                                                        1. 3

                                                                          illumos itself doesn’t have an actual release. You’re expected to use one of its distributions as far as I can tell, which should arguably be called “derivatives” instead. OpenIndiana seems to be the main desktop version.

                                                                          1. 1

                                                                            I don’t know. I know there are some people who run SmartOS on their desktop, but I get the feeling it’s not targeting that use case, or at least there isn’t a lot of work going into supporting it.

                                                                      1. 4

                                                                        If the book is so bad, then what is the publisher doing? Isn’t it their job to weed out bad content?

                                                                        1. 6

                                                                          I wanted to explore that question some more in the post, but it got out of scope and is really its own huge topic.

                                                                          The short version is that perhaps, as readers, we think they are asking “Is this content any good?” when what they’re really asking is, “Will this sell?”

                                                                          1. 5

                                                                            In the preface of the second edition it says that the first edition was reviewed “by a professional C programmer hired by the publisher.” That programmer said it should not be published. That programmer was right, but the publisher went ahead and published it anyway.

                                                                            Can you expand slightly on this? I understand that the second edition contains a blurb that someone they hired reviewed the 1st edition and decided it should never be published. I’m slightly lost in meaning here.

                                                                            1. Did they hire a person for the second edition, to review the first edition where the conclusion was ‘that should have not been published’?
                                                                            2. Hired a person to review the first edition, the conclusion was to not publish but they still decided to publish and included a blurb about it in the second edition?

I guess the question is: did they know before publishing that it was this bad?

Additionally, was the second edition reviewed by the same person and considered OK to publish?

                                                                            1. 5

                                                                              Here’s a longer excerpt from the second edition’s preface.

                                                                              Prior to the publication of the first edition, the manuscript was reviewed by a professional C programmer hired by the publisher. This individual expressed a firm opinion that the book should not be published because “it offers nothing new—nothing the C programmer cannot obtain from the documentation provided with C compilers by the software companies.”

                                                                              This review was not surprising. The reviewer was of an opinion that was shared by, perhaps, the majority of professional programmers who have little knowledge of or empathy for the rigors a beginning programmer must overcome to achieve a professional-level knowledge base of a highly technical subject.

                                                                              Fortunately, that reviewer’s objections were disregarded, and “Mastering C Pointers” was released in 1990. It was an immediate success, as are most books that have absolutely no competition in the marketplace. This was and still is the only book dedicated solely to the subject of pointers and pointer operations using the C programming language.

                                                                              To answer your question, then, all we can conclude is that a “professional C programmer” reviewed the first edition before it was published, recommended against publishing it, but the book was published anyway. If the quoted portion were the reviewer’s only objection, then we could surmise that the reviewer didn’t know much either, or didn’t actually read it.

                                                                              1. 1

                                                                                little knowledge of or empathy for … a beginning programmer

This is an important point that I feel has been left out of the discussion of this book. Yes, the book contains harmful advice that should not be followed. It is probably a danger to make this text available to beginners, and it serves as little more than an object of ridicule for more experienced readers.

                                                                                However, I think there is something to be gained from a more critical analysis that doesn’t hinge on the quality or correctness of the example. This reviewer takes a step in the right direction by trying to look at Traister’s background and trying to interpret how he arrived at holding such fatal misconceptions about C programming from a mental model seemingly developed in BASIC.

                                                                                Traister’s code examples are in some cases just wrong and non-functioning, but in other cases I can understand what he wanted to achieve even if he has made a serious mistake. An expert C programmer has a mental model informed by their understanding of the memory management and function call semantics of C. A beginner or someone who has experience in a different sort of language will approach C programming from their own mental model.

Rather than pointing and laughing at his stupidity, or working to get this book removed from shelves, maybe there’s something to be gained by exercising empathy for the author and the beginner programmer. Are the mistakes due to simple error, or do they arise from an “incorrect” mental model? Does the “incorrect” mental model actually make some sense in a certain way? Does it represent a possibly common misconception for beginners? Is it a fault of the programmer or of the programming language?

                                                                                1. 1

                                                                                  …an opinion that was shared by, perhaps, the majority of professional programmers who have little knowledge of or empathy for the rigors a beginning programmer must overcome…

                                                                                  What utter nonsense. This is inverse-meritocracy: claiming that every single expert is blinded by their knowledge & experience. Who are we to listen to then?

                                                                                  It seems like they’d prefer lots of terrible C programmers cropping up right away, to a moderate number of well-versed C programmers entering the craft over time. Which, now that I think about it, is a sensible approach for a publisher to take.

                                                                            2. 3

Cynically? The publisher’s job is to make money. If bad content makes them money, they’ll still publish it.

                                                                              1. 2

Exactly. There are tons of news outlets, magazines, and online sites that make most of their money on fluff. We shouldn’t be surprised if computer book publishers try it. The managers might have even sensed IT books are BS, or that they can risk being wrong individually, given the piles of new books that appear every year on the same subjects. “If they want to argue about content, let them do it in the next book we sell!” ;)

                                                                                1. 2

I recommend a scene from Hal Hartley’s film “Fay Grim” (the sequel to “Henry Fool”) here. At one point, Fay questions the publisher’s decision to publish a work (‘The Confessions’) of her husband - she only read “the dirty parts” but still recognized the work as “really, really bad”.

                                                                                  Excerpted from a PopMatters review: “One proposal, from Simon’s publisher Angus (Chuck Montgomery), will lead to publication of Henry’s (admittedly bad) writing and increased sales of Simon’s poetry (on which royalties Fay and Ned depend to live). (Though the writing is, Fay and Angus agree, “bad,” he asserts they must press on, if only for the basest of reasons: “We can’t be too hard-line about these things, Fay. Anything capable of being sold can be worth publishing.”)”

                                                                            1. 17

You’d save yourself a lot of trouble upfront by not borrowing the FileZilla name - it’s trademarked. There’s already an argument over whether an “-ng” suffix constitutes a new mark, so why bother even having it? Just rename it completely.

Hilariously, their trademark policy seems to prohibit their use of their own name.

                                                                              1. 3

                                                                                Oh, great point. We will need to think of a new name.

How about godzilla-ftp?

                                                                                1. 14

How about filemander? It’s still in the same vein as “zilla,” but far more modest. The fact that you’re refusing cruft provides a sense of modesty.

                                                                                  Also, “mander” and “minder” — minder maybe isn’t exactly right for an FTP client, but it’s not completely wrong…

                                                                                  1. 4

                                                                                    filemander

                                                                                    Great name! A quick ddg search does not show any existing projects using it.

                                                                                    1. 1

                                                                                      And it sounds a bit like “fire mander”, which ties in well with the mythological connections between salamanders and fire.

                                                                                      1. 1

                                                                                        Yeah, the intention was to have a cute salamander logo–way more modest a lizard than a “SOMETHINGzilla!”

                                                                                    2. 8
                                                                                      1. 5

                                                                                        Just remember to make sure it’s easy for random people to remember and spell. They’ll be Googling it at some point.

                                                                                    1. 11

                                                                                      Nice. If you distribute pre-compiled binaries, please gpg-sign them and perhaps provide sha512 checksums of them as well.

                                                                                      1. 5

                                                                                        Thank you. I was planning on GPG signing and using SHA256. Is that OK?

I also hope to make the build reproducible on Linux, using Debian’s reproducible-builds tools.

                                                                                        1. 3

                                                                                          Reproducible builds would be awesome.

As for SHA-256 vs. SHA-512, from a performance point of view, SHA-512 seems to perform ~1.5x faster than SHA-256 on 64-bit platforms. Not that that matters much in a case like this, where we’re calculating it for a very small file, and very infrequently. Just thought I’d put it out there. So, yeah, SHA-256 works too if you want to go with that :)

                                                                                          1. 2

Also, remember that defaulting to SHA-1 or SHA-256 means hardware acceleration might be possible for some users.

                                                                                            1. 2

                                                                                              SHA-1 has been on the way out for a while, and browsers refuse SHA-1 certificates these days. It might be a good idea to just skip SHA-1 entirely and rely on the SHA-2 family.

                                                                                              1. 1

True. I was just noting that there are accelerators for it in many chips.

                                                                                              2. 2

                                                                                                Isn’t SHA-512 faster on most modern hardware? ZFS uses SHA-512 cut down to SHA-256 for this reason, AFAIK.

                                                                                                A benchmark: https://crypto.stackexchange.com/questions/26336/sha512-faster-than-sha256

                                                                                                1. 1

Oh, idk. I haven’t looked at the numbers in a while. I recall some systems, especially cost- or performance-sensitive ones, stuck with SHA-1 over SHA-256 years ago when I was doing comparisons. It was fine if basic collisions weren’t an issue in the use case.

                                                                                                  1. 4

Anecdotal, but I just timed running SHA-512 and SHA-256 10 times each, on a largeish (512 MB) file. I made sure to run them a couple of times before starting the timer so the file was in cache. Results for SHA-512 were:

                                                                                                    27.66s user 2.86s system 99% cpu 30.562 total
                                                                                                    

                                                                                                    And 256:

                                                                                                    42.18s user 2.72s system 99% cpu 44.943 total
                                                                                                    

So it looks like SHA-512 pretty clearly wins. (CPU is an i3-5005U.)
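
If anyone wants to reproduce this without depending on whichever sha256sum/sha512sum binaries their system ships, here is a rough sketch using OpenSSL’s EVP interface (the buffer size and link flag are assumptions; this is not a rigorous benchmark):

```
/* cc sha_bench.c -lcrypto */
#include <openssl/evp.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main(void) {
    size_t len = 512u * 1024 * 1024;              /* 512 MB of zeroes */
    unsigned char *buf = calloc(1, len);
    if (buf == NULL) return 1;

    unsigned char md[EVP_MAX_MD_SIZE];
    unsigned int mdlen;
    const EVP_MD *algs[]  = { EVP_sha256(), EVP_sha512() };
    const char   *names[] = { "SHA-256", "SHA-512" };

    for (int i = 0; i < 2; i++) {
        clock_t t0 = clock();
        EVP_Digest(buf, len, md, &mdlen, algs[i], NULL);
        printf("%s: %.2fs\n", names[i], (double)(clock() - t0) / CLOCKS_PER_SEC);
    }
    free(buf);
    return 0;
}
```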

                                                                                                    1. 2

                                                                                                      Cool stuff. Modern chips handle good algorithms pretty well. What I might look up later is where the dirt-cheap chips are on offload performance and if they’ve upgraded algorithms yet. That will be important for IoT applications as hackers focus on them more.

                                                                                                    2. 0

                                                                                                      You should probably be sure to have your facts straight before giving security advice.

                                                                                                      1. 1

I said there are hardware accelerators for SHA-1 and SHA-2. Both are in use in new deployments, with one sometimes used for weak-CPU devices or legacy support. Others added more points to the discussion with current performance stats, something I couldn’t comment on.

                                                                                                        Now, which of my two claims do you think is wrong?

                                                                                                        1. 3
1. As noted, SHA-1 has been on its way out for a while and shouldn’t be suggested.
2. I don’t know if your claim about weak-CPU devices or legacy support is true, and since you mentioned IoT in a response elsewhere, it clearly doesn’t apply in the context of FileZilla, an FTP app people will be running on desktops/laptops. Even if one is using a new ARM laptop that is somewhat underpowered…
3. As the comment you responded to points out, one installs new software quite infrequently, so the suggestion based on performance seems odd, especially since that comment already points out that SHA-512 is generally faster to compute than SHA-256. In any case, suggesting SHA-1 for performance reasons seems insecure.
                                                                                            2. 2

Ideally, OP would also get a code-signing certificate (from a CA trusted by Windows) to decrease the number of warnings Windows spouts about the executable.