1. 3

    It’s been on my todo list to do a similar reverse engineering of the earlier device (for the Gold / Silver games), the Pokémon Pikachu 2 GS, which similarly has never, to my knowledge, been dumped.

    I’m slightly worried Dmitry will get to it before me. ;)

    1. 1

      No time like the present! Never let somebody coming before you stop you from doing what you want to.

      1. 1

        You’re right, and I’m inspired by this write-up with some ideas to try that don’t involve having to deblob the board.

    1. 5

      My handwriting is absolutely the product of years of hand-written math note-taking since college. Notice the blackboard-bold and calligraphic fonts, which are also very common. Some of my own thoughts / habits:

      • I can’t for the life of me write ζ or ξ. Many professors share this deficiency, making chalkboard lectures on complex analysis quite difficult to follow. Recently, I have been using the Japanese る and そ in their place, and no one even notices!

      • For the purposes of communicating with others, I am overzealous with parentheses. I’d rather have them and not need them than need them but not have them. I also use whitespace quite frequently to group different parts of a longer expression.

      • I never use the prime (’) symbol. Some authors use x’ (read “x prime”) to indicate a quantity which is conceptually similar to x, but I prefer to use numbers (x_0, x_1, x_2, …) or well-known letter groupings like (x,y,z) or (p,q) or (f,g) or (α, β) or (\phi, \varphi). I avoid using prime for differentiation as well, preferring the notation ∇_x f or D_x [ f ].

      • Redundancy is good, and helps catch mistakes. Ideally, it should be possible to understand the gist of my proofs even if all the mathematical expressions are deleted.

      • “Put a hook on the x to distinguish it from a times sign.” – not really needed

      • “Put a loop on the q, to avoid confusion with 9”. – also unnecessary, since mathematicians use exclusively the digits 0, 1, 2 !

      • The number two shouldn’t be written with a curl, otherwise it’s easily confused with δ or α. As long as the letter Z is crossed, there should be no ambiguity.

      1. 5

        Redundancy is good, and helps catch mistakes. Ideally, it should be possible to understand the gist of my proofs even if all the mathematical expressions are deleted.

        Coming from a different background (implementing cryptography), I agree with this but for another reason: Readers of papers may not even have enough mathematical background to discern the notation because they come up from (possibly hobbyist) programming, rather than down from math. Therefore, having redundancy not only helps you catch mistakes, but also helps familiarize people with notation they’ve never seen before.

        1. 4

          Yes, absolutely! I once spent two days debugging a point multiplication on an elliptic curve because x / 2 on an integer did not mean a division by two (a bit shift) but rather multiplication by the inverse of 2 in the field. It was a really frustrating error, although finding the solution proved quite gratifying.
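
          A minimal sketch of the distinction, assuming arithmetic modulo a prime p (the modulus and names are just illustrative, not from any particular library):

          # "x / 2" in a prime field means multiplying by the inverse of 2 mod p, not shifting bits.
          p = 2**255 - 19                 # example prime modulus
          x = 7
          print(x >> 1)                   # 3  -- integer "division by two"
          print((x * pow(2, -1, p)) % p)  # (p + 7) // 2  -- field "division by two" (Python 3.8+)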

        2. 3

          Could you post a direct link to the image? My browser is unable to render the Imgur web page.

          1. 2
            1. 2

              thanks

        1. 5

          For custom written rules, I personally find that nothing beats the simplicity of redo. The build system is language agnostic in that you can execute a build step in any language and redo only tracks what needs to be rebuilt. For the example at hand, you could write the conversion rule as (in a file named default.pdf.do):

          redo-ifchange "$2.svg"  # declare the dependency so redo rebuilds the pdf when the svg changes
          svg2pdf "$2.svg" "$3"
          

          Then running redo on the corresponding .pdf targets (for example from an all.do that calls redo-ifchange on every .pdf) takes care of the entire process.


          I would recommend using rsvg-convert -f pdf $2.svg > $3 in the example above. It is a much faster way to convert SVGs to PDFs. If I remember correctly, it’s part of librsvg, an SVG library written in Rust.

          1. 4

            What implementation are you using? If I were going to use Redo, I would want to keep it in-tree because it’s not nearly as widely deployed as Make/Ninja.

            1. 4

              One of the nice things about redo is that there’s a single-file pure-shell implementation that only knows how to call all the build scripts in the right order, which is great for shipping to end-users so they can compile it and move on with their lives.

              Meanwhile, developers who are invested enough to build your software multiple times can install redo, which does all the incremental-build and parallelisation magic to make repeated builds efficient.

              1. 1

                Do you have a link?

                1. 3

                  Not your parent commenter, but maybe they meant https://github.com/apenwarr/redo/blob/main/minimal/do

                  1. 2

                    Aha! Thanks. Currently I am using a build script by our very own @akkartik, which has worked for me so far:

                    #!/bin/sh -e
                    
                    test "$CC" || export CC=cc
                    export CFLAGS="$CFLAGS -O0 -g -Wall -Wextra -pedantic -fno-strict-aliasing"
                    
                    # return 1 if $1 is older than _any_ of the remaining args
                    older_than() {
                      local target=$1
                      shift
                      if [ ! -e $target ]
                      then
                        echo "updating $target" >&2
                        return 0  # success
                      fi
                      local f
                      for f in $*
                      do
                        if [ $f -nt $target ]
                        then
                          echo "updating $target" >&2
                          return 0  # success
                        fi
                      done
                      return 1  # failure
                    }
                    
                    update_if_necessary() {
                      older_than ./bin/$1 $1.c greatest.h build && {
                        $CC $CFLAGS $1.c -o ./bin/$1
                      }
                      return 0  # success
                    }
                    
                    update_if_necessary mmap-demo
                    update_if_necessary compiling-integers
                    # ...
                    
                    exit 0
                    
          1. 1

            3.4.0-rc6 was in 2017. Still no release. Even with the blocker bugs gone.

            I’m sad Minix is in such a state.

            They need someone to take over as release manager. Even a part timer would do. The university where Tanenbaum taught seems to not care about Minix anymore.

            1. 3

              Honestly, at this point, Intel might as well take ownership of the project since they’re the largest user of Minix anyway.

              1. 3

                As far as I’m aware, Intel has made zero contributions to Minix.

                Not surprising, Intel being a scummy company.

            1. 2

              How is portable identity achieved? I see some indication that it is done via OpenPGP? So for all non-JS users, is signing manually via gpg or similar required to post?

              1. 3

                portable identity is achieved by relying on pgp as the authentication method.

                create profile = gen key and post pubkey (text file)

                new post = signed text (in text file)

                vote, reply, change title, change config = tokens in signed text (in text file)

                when you copy these text files to a new instance and import them into your relational database, you get a portable account.
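
                To make that concrete, here is a rough sketch of how an instance might check an imported post before adding it to its database (the file names are my own invention, and this just shells out to gpg rather than showing the project’s actual code):

                import subprocess

                # 1. import the author's public key (the "create profile" text file)
                subprocess.run(["gpg", "--import", "alice.pubkey.asc"], check=True)

                # 2. verify the detached signature on the post before inserting it into the
                #    local relational database; check=True raises if verification fails
                subprocess.run(["gpg", "--verify", "post1.txt.asc", "post1.txt"], check=True)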

                1. 3

                  I’m interested in why OP chose gpg (and all the fragility that comes with parsing its output) over signify/minisign (cf. Latacora recommending avoiding PGP as a whole as best practice); they must have been aware of its existence going by the files in misc/.

                  1. 9

                    mainly because

                    it is a stable standard

                    it’s been proven to work

                    it’s supported by just about everyone

                    it’s easy to integrate due to copious tooling

                    it’s accessible to noobs and hackers alike

                    it’s plaintext

                    it’s backwards compatible for at least 10 years

                    there are hundreds of toolkits for it

                    it’s reliable

                    i already have experience with it

                    i have a wide range of features to choose from and i can choose to only use the basics

                    maybe just a little bit to annoy anyone who says they have a better solution to all the problems pgp solves and it’s all here in my 200-commit repository

                    1. 1

                      I see. Thank you for your response.

                    2. 1

                      Using OpenPGP does not in any way require gpg.

                      1. 3

                        i use both. one of the reasons i chose pgp and gpg is compatibility and the wide range of toolkits

                        1. 0

                          OpenPGP does not require gpg, yes. Nonetheless, the README clearly refers to it, as does the code. And PGP is still discouraged either way.

                          1. 3

                            excuse me?

                            discouraged by whom, exactly?

                            someone struggling over the decision whether to make business cards or not? ))

                            1. 3

                              discouraged by whom, exactly?

                              I gave this information in the initial reply: Latacora discourages PGP. See also:

                              Not all of these affect your scenario directly or even indirectly. Seeing PGP is more of a smell, like seeing MD5 in a use case for which it is technically still fine.

                              1. 3

                                Here is what I’m getting out of these articles:

                                PGP is a long-lived, still-living, backwards-compatible, audited/hardened library/standard with a wide range of features, nearly all of them optional.

                                There are many pitfalls in implementing truly secure PGP solutions. However, I think that in a low-stakes project like small-scale internet forum, it’s good enough.

                                I think that PGP is a great library which lacks a good interface. No one has yet built an accessible interface on top of it, but I don’t think it is impossible.

                    1. 2

                      Finally catch up on sleep.

                      1. 2

                        Maybe finally get over my dislike of business cards and try to design one. Maybe just nothing. We’ll see.

                        1. 3

                          I hope that this is an end to the general prohibition on Apache 2-licensed code in the tree. The explanation offered in their copyright policy always felt weak to me in light of the specific clause they were discussing, and it would be an improvement IMO if that stopped being a consideration across the board.

                          1. 4

                            It’s in the gnu/ subtree, where licensing containment starts in general. GCC also lives/lived there. I doubt they’ll change their minds just because their compiler (again) changed its license to an undesirable one out from under them.

                            1. 2

                              Minor pedantry, but it’s not quite the Apache 2 license; it adds some extra exemptions for linking with GPLv2 code. Apache 2 and GPLv2 are incompatible. One of my concerns early in the relicensing was that there are things like QEMU that might want to use LLVM but are GPLv2-only. This was addressed with the exemption. The resulting license is compatible with pretty much anything.

                            1. 1

                              An authenticated, local attacker could modify the contents of the GRUB2 configuration file to execute arbitrary code that bypasses signature verification.

                              If the attacker can do this, they can also overwrite the whole bootloader with something that bypasses signature verification. If you can do this, your system is already compromised.

                              1. 2

                                No, they can’t. Or rather they can, but if Secure Boot is on, the UEFI firmware will refuse to load the modified grub.efi image, so the system won’t boot.

                                1. 1

                                  So, this vulnerability allows jailbreaking, but does not affect security against an attacker without a root password?

                                  1. 3

                                    How about an attacker armed with a local root privilege escalation vulnerability aiming to achieve persistence?

                                    1. 1

                                      https://xkcd.com/1200/

                                      To do what? They already have root for plenty of persistence. I mean, yeah, they can penetrate deeper. They can also exploit a FS vulnerability or infect HDD or some other peripheral.

                                      But that’s just not economical. In most cases, it’s the data they are after, either to exfiltrate them or to encrypt them.

                                      In other cases they are after some industrial controller. Do you seriously believe there to be anyone competent enough to catch the attack but stupid enough not to wipe the whole drive?

                                      The only thing I can imagine TEE being used for is device lockdown.

                                    2. 1

                                      Not sure what you mean by jailbreaking - can you clarify? We’re talking about a laptop/desktop/server context, not mobile. Secure Boot does not necessarily imply that the system is locked down like Apple devices are. See Restricted Boot.

                                      If the attacker cannot write to /boot, then they can’t exploit this vulnerability. If the attacker has physical access this probably doesn’t hold true, regardless of whether they have a root password. If the attacker is local and has a root password or a privilege escalation exploit then this also doesn’t hold true, and can be used to persistently infect the system at a much deeper level.

                                1. 18

                                  I have nothing but admiration for your work and the dedication it took to educate yourself well enough to produce what is apparently a good working crypto library. I also think you are literally one in a million for not only being able to do that, but investing the time and effort to actually do it. You are nowhere near the target for the advice to not roll your own crypto.

                                  In other words, replacing the advice “don’t roll your own crypto“ with “don’t roll your own crypto unless you can spend years learning how to do it properly” is not a significant change to the advice.

                                  Not to mention that even if you do know what you’re doing, at any moment you might find out you don’t know what you’re doing. Remember when everyone seemed to suddenly realize constant timing was important? Cryptography is a continuous study, not like a data structure you learn once.

                                  1. 10

                                     One fear I have is that some people might be discouraged from even starting because of the negativity. Though my bigger fear is that the wrong kind of people end up charging ahead.

                                    1. 11

                                      I’m just not sure discouraging people is a bad thing. Cryptographic code is extremely unusual, perhaps even unique, in that it can fail catastrophically and silently, years after a perfect test suite passes perfectly. The gap between amateur and professional quality is huge, yet nearly invisible by normal code quality standards. Even the normal tools don’t work: I mean, have you ever seen a unit test framework that automatically tests timing attacks in addition to inputs and outputs? So if you’re going to propagate a one-liner meme about “rolling your own”, there are very good reasons to make it a discouraging one.

                                      1. 3

                                        How you discourage people is important in my opinion. One way to do it is to show the gap between casual and professional. Takes more than a one liner, though.

                                    2. 5

                                      suddenly realize constant timing was important?

                                       Suddenly? There were papers at least as early as 1996 (Kocher’s “Timing Attacks on Implementations of Diffie-Hellman, RSA, DSS, and Other Systems”, which appeared at CRYPTO 1996). AES was standardized in 2001. The writing was on the wall if anyone had cared to look.

                                      1. 11

                                        That’s why I said people “suddenly realized it was important”, not “it was suddenly important”. It was always important, but the news seemed to take a while to reach a lot of implementors who had thought they knew what they were doing (and probably continued to think that).

                                        1. 7

                                          Yep. As late as 2017 (maybe later, that was when I checked), the most popular Java bcrypt library did not compare the hash in constant time when verifying passwords.
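
                                           The usual fix is a comparison whose running time does not depend on where the first mismatching byte is. A quick Python illustration of the general idea (not that Java library’s actual API):

                                           import hmac

                                           def naive_equal(a: bytes, b: bytes) -> bool:
                                               return a == b                     # may return as soon as one byte differs

                                           def constant_time_equal(a: bytes, b: bytes) -> bool:
                                               return hmac.compare_digest(a, b)  # examines every byte regardless of mismatches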

                                    1. 10

                                      Wasn’t there an article the other day explaining that GNU crypto projects have a lot of issues regarding security?

                                        1. 7

                                          The products criticized were developed at different times, by completely different teams, and their perceived insecurity differs. The greatest issue with GnuPG seems to be its poor usability, but that didn’t stop the author from spouting off about it. The only thing that these projects have in common is their attachment to GNU.

                                          As far as I can remember, GnuTLS has had a poor reputation - I remember one issue involving it using zero-terminated strings for binary data. Which is why OpenSSL was seen as the ‘serious’ choice.

                                        1. 11

                                          Oh cool, I opened Lobste.rs to find something to read and saw this headlining the site! It was a pleasure to work on this audit.

                                          1. 3

                                            I couldn’t let your work go unseen after all. :-)

                                            1. 3

                                              But now for the question burning under everyone’s nails: How much did it cost?

                                              1. 3

                                                 Looking at my application, it cost $7,000, all paid by the OTF.

                                          1. 1

                                             Could someone set me straight on post-quantum cryptography: is it something to bother with this early? I feel like this is for CryptoPeople to tell NonCryptoPeople about, rather than the other way around.

                                             I have the impression that it is more about studying how well ciphers face the threat than about finding a Golden Bullet.

                                            1. 3

                                              Should we be bothering with research and serious implementations? Yes. Quantum computers are an inevitability and it’d be nice to be ready when they’re there.

                                              Should we be putting them in production? Probably not. Many NIST post-quantum cryptography candidates are still getting attacked left and right. And there’s a non-zero chance that the result will still either be impractical, patent-encumbered or both.

                                              1. 2

                                                Being able to build large enough quantum computers to break current asymmetric cryptography is definitely not inevitable. There are many issues that may end up making it physically impossible to make such a computer that runs long enough to do such a computation. Of course, it is prudent to assume it will happen and develop resistant cryptography in the meantime.

                                            1. 1

                                              As a non-cryptographer, I am curious about the promises blake3 offers, and whether it is worth considering it instead of blake2. I saw blake2 in monocypher?

                                              Given it is a crypto primitive, it may not work as simply as bumping a dependency from version 2 to 3 (unique output length, variant…) or it may! Any major change like this one could also require a new audit (no idea).

                                              1. 3

                                                Blake3 is Blake2s, with 2 differences:

                                                • The core loop can be run in parallel. That enables vector instructions, making it much faster on modern processors.
                                                • The number of rounds is reduced. This reduces the security margin, but it is also faster.

                                                Personally, the reduced rounds make me a little nervous. The parallelisation however is a killer feature. This allows Blake3 to stay 32-bits and fast on big processors. That makes it a one-size fits all algorithm, much like Chacha20.

                                                 Bumping from Blake2b to Blake3 would not require a new audit in my opinion. Blake3 is a symmetric primitive based on a RAX design (Rotate, Add, Xor), which makes it easy to implement, and very easy to test: just try every possible input length from zero to a couple of block lengths, then compare the results with a reference implementation.
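
                                                 As a rough sketch of that kind of test, assuming a candidate implementation my_hash and the blake3 package from PyPI as the reference (both names are stand-ins, not Monocypher code):

                                                 import blake3                        # reference implementation (PyPI "blake3" package)
                                                 from my_library import my_hash       # hypothetical implementation under test

                                                 BLOCK = 64                           # Blake3 block size in bytes
                                                 for n in range(0, 3 * BLOCK + 1):    # every length from empty up to a few blocks
                                                     msg = bytes(i % 256 for i in range(n))
                                                     assert my_hash(msg) == blake3.blake3(msg).digest(), f"mismatch at length {n}"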

                                                Now if I were to redo Monocypher now, I would consider Blake3. There’s just one problem: Argon2i, which is based on Blake2b. I could repeat the shenanigans I did for EdDSA (allow a custom hash interface, provide Blake3 by default, as well as a Blake2b fall back), but that would mean more optional code, a more complex API, all for a limited benefit. I believe Blake2b makes a slightly better trade-off for now, even though many of my users are embedded programmers.

                                                1. 4

                                                   and very easy to test: just try every possible input length from zero to a couple of block lengths, then compare the results with a reference implementation.

                                                  There are machine-parseable test vectors that test various edge cases as well.

                                                  The existing BLAKE2b API in Monocypher would need to be broken anyway because of the mandated API design with a context string.


                                                   Edit: Also, why is Argon2i an issue? As far as I’m aware, Monocypher implements it from the spec, which is notoriously incompatible with the reference implementation. So if Monocypher is already incompatible with every other implementation under the sun (which are all just derivatives of the reference implementation), why would you bother caring about the hash function used in Argon2i?

                                                  1. 2

                                                    Monocypher is compatible with both the reference implementation and Libsodium. The divergence with the spec is explicitly noted in the source code.

                                                     Also, one of the authors said the specs “will be fixed soon”, so that’s a clear sign that everyone should conform to the reference implementation’s error rather than the spec as written. (And yeah, he made that promise over 3 years ago, and the specs still have not been fixed.)

                                                  2. 1

                                                    Thank you for the overview. I understand the balance better now.

                                                    And obviously, thank you for Monocypher!

                                                1. 1

                                                   Would there be benefits to using it for existing projects, such as the classics (TLS, SSH, PGP…)? Or is the benefit only noticeable for new projects, for which there is not yet a (too) large crypto code base in use?

                                                  1. 2

                                                    Monocypher is focused on a small number of modern primitives. That makes it incompatible with most of what those old standards need. No AES, no RSA… So I’d say new projects only.

                                                     In addition, Monocypher is a low-level crypto library: a toolkit with which you can build higher-level protocols. For instance, I’m currently working on authenticated key exchange with Monokex. Or you could build Noise.

                                                    1. 2

                                                      Forgive the possibly ignorant question, but would Monocypher be useful for encrypting traffic between two servers? I’m in need of encryption in a distributed system where SSL certificates would be unreasonably expensive and self-signed is not acceptable.

                                                      1. 3

                                                        It would be, but you’d need to implement an existing protocol (such as a suitable Noise pattern) that provides the security guarantees you want.

                                                      2. 1

                                                         I like the idea of small, strongly built, loosely coupled building blocks on top of which to implement higher-level parts.

                                                    1. 2

                                                      Congrats!

                                                       Aside from it being a good best practice (which I guess implies a higher level of trust from security-conscious people?), are there any other benefits to being audited? Does this make it eligible to be run in some environments? Or make it a viable option for some standards?

                                                      1. 9

                                                        Thanks :-)

                                                        As far as I know, eligibility tends to increase with the user base: the more users use it, the more people feel safe using it. An audit certainly increases confidence, and with it, eligibility. Personally, the audit is a big reason why I now feel confident recommending Monocypher in a corporate setting. Before the audit, my own conflict of interest (choosing the best option vs choosing my option) always gave me pause.

                                                        Standards are different. Monocypher only implements existing standards & specifications. It could be used as a reference implementation, but that’s about it.

                                                        1. 6

                                                          Does this make it eligible to be run in some environments?

                                                          The audit is unfortunately relatively meaningless in that context. Highly regulated environments tend to insist on either ISO or NIST standards and require specific certification for them. Monocypher does not implement any of them (though Ed25519 may become part of FIPS 186-5 the way things have been going).

                                                          1. 3

                                                            Yeah, there tends to be a fairly long delay between “good” and “standard”. I get the vibe that standardisation bodies don’t trust themselves to assess cryptographic primitives and constructions. Being overly conservative is the only rational choice in this circumstance.

                                                        1. 7

                                                          The section “Finite Field Definition” is unfortunately imprecise. The right way to look at algebraic structures is that you have a fixed set S and then define operations on it:

                                                          • A nullary (0-ary) operation is just a fixed element of S.
                                                          • A unary operation is a function S -> S.
                                                          • A binary operation is a function S x S -> S.
                                                          • A ternary operation is a function S x S x S -> S.
                                                          • And so on.

                                                          The way the author phrases the “closed property” makes it seem as though operations could, at least in principle, return things that are not elements of S. This is not a very good way to look at algebraic structures. Math is not like programming, where a function that “supposedly returns int” could have, ahem, “interesting results” like throwing an exception, or your program remaining stuck in an endless loop, or even returning a string if you are using a funny enough programming language.

                                                          In addition to a set and a bunch of operations, we need axioms, or else there is not much that one can do with the resulting structures. The list of field axioms is a little bit long (I think it has about 10 axioms, although it can be shortened at the price of using even heavier abstraction), and very few of them are mentioned in the article. Sadly, the axioms are not something you can omit, because they are a part of the definition of the structure. Without the axioms, you do not get the theorems, and the theorems are precisely what you need to justify that the algorithms actually work.

                                                          The way the examples are presented is also a bit iffy. Sadly, here ring theory has a history of abuse of notation that makes its conventions hard to explain. (Ring theory is not particularly bad. Other parts of mathematics are worse.) The symbols -2, -1, 0, 1, 2, 3, etc. do not mean the usual integers, but rather their images under the canonical ring map Z -> A (where Z is the ring of integers and A is an arbitrary ring). So, in general, it does not make sense to say “1 + 2 = 3, hence the set {0,1,2} is not closed under addition”. If you have a ring of characteristic 3, then 3 = 0 and 4 = 1 in that ring, hence 1 + 2 = 0 and 2 + 2 = 1, hence the set {0,1,2} is closed under addition in that ring! In particular, all finite fields are rings of characteristic p for some prime number p.

                                                          The author’s assertion that the order of a finite field is always a prime number is incorrect. For every prime p and every positive integer n, there exists a field of order p^n. (And it is unique up to isomorphism.) But maybe it is the case that only fields of order p are of interest in cryptography.
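
                                                           For a concrete example of a non-prime order (my illustration, not from the article): the field with four elements arises as a quotient of a polynomial ring over GF(2),

                                                           \[
                                                             \mathrm{GF}(4) \;\cong\; \mathbb{F}_2[x]/(x^2 + x + 1) \;=\; \{0,\ 1,\ \alpha,\ \alpha + 1\}, \qquad \alpha^2 = \alpha + 1,
                                                           \]

                                                           which is emphatically not the ring Z/4Z (there, 2 · 2 = 0 and 2 has no multiplicative inverse, so it is not a field).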

                                                          Back to proper programming concerns, I do not think it is particularly useful to declare specific data types for doing modular arithmetic. Types (both static or dynamic) are supposed to give you automated checking that you are only passing around arguments that make sense. However, in number-theoretic routines, the proposition “passing this argument makes sense” is usually a much deeper theorem than what a normal type system can prove. So there is no substitute for clear documentation, and actually understanding why and how the algorithms work.

                                                          1. 2

                                                            But maybe it is the case that only fields of order p are of interest in cryptography.

                                                            Prime-order fields are of primary interest for elliptic curve cryptography, yes. However, extension fields (BLS signatures and other pairing-based cryptography) are sort of popular in some niches recently. Binary fields are also found on occasion for elliptic curve implementations (though the current speed winners are all GF(p) curves).

                                                          1. 3

                                                             This is probably not the reason, but it’s still something worth keeping in mind: X.509 certificates are large in their default state because RSA keys are large. These examples use DER since PEM is just a thin base64 wrapper around DER.

                                                            $ openssl req -nodes \
                                                                -newkey rsa:2048 \
                                                                -keyout x.key -out x.crt \
                                                                -days 365 -outform DER -x509 \
                                                                -subj '/CN=API client #1337'
                                                            $ ls -l x.crt
                                                            -rw-r--r-- 1 xh xh 795 MMM DD TT:TT x.crt
                                                            

                                                            And you have that overhead on every connection, even more if you have a certificate chain. Plus the overhead of actually doing the asymmetric cryptography, which may or may not be faster or slower than whatever people do on the OAuth backend depending on the exact loads involved. OAuth tokens are substantially smaller.

                                                            You can at least alleviate this to some extent by using elliptic curve keys, but then you’re off the beaten path. What’s more, ECDSA is notoriously fragile.

                                                            $ openssl genpkey -genparam \
                                                                -out ec.param \
                                                                -algorithm EC \
                                                                -pkeyopt ec_paramgen_curve:P-256
                                                            $ openssl req -nodes \
                                                                -newkey ec:ec.param \
                                                                -keyout x.key -out x.crt \
                                                                -days 365 -outform DER -x509 \
                                                                -subj '/CN=API client #1337'
                                                            $ ls -l x.crt
                                                            -rw-r--r-- 1 xh xh 399 MMM DD TT:TT x.crt
                                                            

                                                            The actual public key is just 65 bytes (04 to indicate uncompressed key, 32 bytes of x-coordinate and 32 bytes of y-coordinate); compression isn’t widespread either due to patent issues that only somewhat recently got resolved by patent expiry. This means that there are 334 bytes of overhead, a lot of which is related to ASN.1/DER encoding and other X.509 metadata.
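
                                                             To make that layout concrete, a small sketch (mine, not the parent’s) that splits such an uncompressed SEC1 point into its coordinates:

                                                             # Split a 65-byte uncompressed SEC1 point: 0x04 || x (32 bytes) || y (32 bytes).
                                                             def split_uncompressed_point(pub: bytes):
                                                                 if len(pub) != 65 or pub[0] != 0x04:   # 0x04 marks the uncompressed form
                                                                     raise ValueError("not an uncompressed P-256 point")
                                                                 x = int.from_bytes(pub[1:33], "big")
                                                                 y = int.from_bytes(pub[33:65], "big")
                                                                 return x, y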

                                                            RFC 7250 lets you use raw public keys in place of certificates (RFC 8446, section 2, p. 13 for TLSv1.3), but support is not very widespread, you’re very much off the beaten path and have no way to indicate expiry other than manual revocation. And you want to be on the beaten path because otherwise you’ll probably run into some issue or another with implementation support. Certificates for EdDSA keys (yay!) theoretically exist, too (RFC 8446, section 4.2.3, p. 43), but you can basically pack it up if you need to interoperate with anything using either an off-beat TLS library, anything in FIPS mode or anything older than two years.

                                                            1. 5

                                                              This means that there are 334 bytes of overhead

                                                              I have a solution: remove all the tracking cookie junk we’re getting forced on us and add this instead, win-win! Browser-controlled session cookies for first-party connections only could be so very good…

                                                            1. 4

                                                              Interesting read, but I don’t understand one detail of the argument: what makes Perl more secure than the other scripting languages mentioned?

                                                              1. 13

                                                                 Taint checking comes to mind, and Perl has it. I think OpenBSD folks prefer tech where it’s easier to do the right thing; doing the right thing in shell or PHP can require painstaking, error-prone effort to avoid pitfalls.

                                                                1. 2

                                                                  ruby might be an interesting alternative, but I would assume it doesn’t support nearly as many platforms or architectures as perl.

                                                                  EDIT: huh. Apparently ruby removed taint checking in 2.7.

                                                                  1. 10

                                                                    Ruby code ages poorly compared to perl, though. I’ve been on too many projects where code written in ruby a year or two earlier at most had already been broken by language or dependency changes.

                                                                    1. 2

                                                                      To be fair, OpenBSD controls base, so they could keep a check on the dependency changes. Language changes are rarely breaking with Ruby, when was the last one?

                                                                      1. 5

                                                                        Now, you’ve got to start auditing upstream for bug and security fixes, and backporting them, rather than just updating when needed.

                                                                        Forking is a bunch of work – why do it when there’s a suitable alternative?

                                                                        1. 1

                                                                           We may be talking past each other here. I said that they could keep a check on the dependency changes, by which I meant that they would author code in such a way that it does not require external dependencies (or at least few enough that they could vendor them), which wouldn’t be any different from what they’re doing with Perl already. This means that this downside of the Ruby ecosystem could be mitigated. And language changes they’d just have to accept and roll with, but I hold that Ruby rarely introduces breaking changes.

                                                                          OpenBSD will have to vendor $language_of_choice in any case because that’s how the BSDs’ whole-OS approach works.

                                                                          1. 2

                                                                            Yes. I thought you meant essentially forking the shifting dependencies instead of fully avoiding them.

                                                                            In any case, perl is there and in use, so switching would be a bunch of work to solve a non-problem.

                                                                      2. 1

                                                                        Yeah, you’re not wrong. Excellent point.

                                                                  2. 4

                                                                    Maybe the “use warnings” and “use strict”?

                                                                    1. 3

                                                                       That doesn’t bring any security though: it may give you a bit of safety, catching bugs earlier than in some other scripting languages.

                                                                      1. 6

                                                                        What would bring any security then, as opposed to just helping catch bugs? Barring “legitimate” cases of correct algorithms outliving their usefulness (e.g. the math behind crypto algorithms getting “cracked” to the point where it’s feasible to mount attacks on reasonably-priced hardware) virtually all security issues are bugs. Things like shell injections are pretty easy to miss when writing shell code, no matter how careful you are.

                                                                        1. 1

                                                                          Probably the taint mode that the other commenter mentioned

                                                                          1. 3

                                                                            But that’s exactly what taint checking does: it helps you catch bugs that occur due to executing stuff that’s under the user’s control. Some of these can be exploited for security reasons, but security isn’t the only reason why you want this – it’s just as good at preventing a user enumeration attack as it is at preventing accidental “rm -rf /”

                                                                        2. 2

                                                                          I thought the same. I figure the OpenBSD people know what they are talking about but I am still not really clear on what Perl has over Tcl, for example. Hopefully a Perl monk will show up and clarify.

                                                                    1. 9

                                                                      you need something under and acceptable licence, so python is out.

                                                                      What’s wrong with python’s license? This is the first time I’ve heard anyone say there’s issues with it.

                                                                      Also, I think he forgot to mention Rust. Must definitely rewrite everything in Rust. /s

                                                                      1. 2

                                                                        Marc Espie elaborates a bit on this in another post on the openbsd-misc mailing list:

                                                                        As for the license, python’s license appears fairly similar to Perl’s artistic license. I would worry a bit about the strong terms in

                                                                        1. This License Agreement will automatically terminate upon a material breach of its terms and conditions.

                                                                        for which no equivalent is visible in Perl’s license.

                                                                          1. 12

                                                                            That was fixed in Python 2.0.1, released in June 2001…