1. 18

    RIP. The last good version of Windows, as opposed to an ad platform strapped to a program loader. So much for “if it’s free, you’re the product”; if you pay for a Windows device, you’re still the product.

    1. 6

      Free upgrade from Win7 is not available, so you have to pay at least $200 for the chance to become a product if you want to keep using Windows on your old device as well.

      1. 6

        Free upgrade from Win7 is not available

        Officially, it isn’t. However, the servers will still happily churn out digital licenses if you do an in-place upgrade with media you can freely download from Microsoft themselves.

        Are you in violation of copyright when doing this? Maybe, pretty hairy question when you think about it. Is Microsoft going to do anything about it? Given they’ve had a few years to, probably not.

        1. 4

          It is important to note this is not “free”; it’s extralegal price discrimination. It’s akin to releasing a torrent of your own game on The Pirate Bay to quality-control the pirated copies, so that people willing to “break the law” can have it for free while everyone else must pay, the hope being that the people who broke the law to play it are excited enough to talk to others about it. Economics is a complex beast that often has little care for human laws.

          1. 1

            To this point, pirated copies of Windows are probably riddled with pernicious viruses.

            1. 1

              You usually just install DAZ loader.

        2. 3

          It still is AFAIK - and there are so many loopholes (i.e: Windows 7 keys can be used to activate 10, a11y based free upgrades, etc.) that there’s no real reason to buy a Windows 10 license if you already have a Windows 7 license.

          1. 1

            can’t you keep using Windows 7?

            1. 1

              so you have to pay at least $200 for the chance to become a product …

              This is exaggerated, a Windows 10 Pro Retail version costs 100€ on amazon and you can get a valid key for like 3$ on ebay.

              1. 3

                A valid key doesn’t necessarily mean a legit one.

                1. 1

                  Interesting. Why are the prices on store.microsoft.com so much higher?

              2. 1

                I don’t understand - can’t most of these ads be trivially removed with a bunch of end user visible settings?

                1. 4

                  Having to change settings in five or more different places to remove ads from a paid OS is much less trivial than it should be, especially given the glaring usability issues that could have benefited from the organizational resources that were instead invested in promoting shovelware.

                  1. 3

                    Ah the age old fight between the bottom line and merchantability. It’s eternal, at least until money goes away and we transition to some kind of post scarcity [U/Dys]topia :)

                    Honestly, and I know I’m a minority view here, I think Windows 10 is by far the most usable Windows version ever. They’ve actually finally added accessibility features that make it usable to me as a partially blind person.

                    1. 2

                      I’m glad to hear that accessibility has improved significantly with Windows 10, thanks for bringing attention to that. I think there’s a less objectionable middle ground between ubiquitous advertising and Star Trek than what we have right now, but at least this revision of the OS isn’t entirely a regression.

                      1. 1

                        To be clear, I totally agree that the ads in Windows 10 are an affront and we should all strenuously oppose them. I’ve personally given them quite a bit of feedback on the topic, specifically around making it easier to disable it all permanently.

                        But for me its existence, especially since it can be turned off, doesn’t get in the way of my using the very stable and usable work environment Windows 10 + WSL represents.

                  2. 1

                    The average end user struggles to complete their daily tasks, let alone audit every settings screen to remove tracking they might not even know exists.

                    1. 1

                      Does the average user care though? In most cases the answer is no.

                      I’ll warrant that the morality of adding such advertisements and tracking to a paid product with a non-trivial consumer cost is questionable, but I suspect we’d need a fairly revolutionary change in the way our industry is regulated to get any traction on changing that.

                      1. 1

                        The average user doesn’t know that they can care, because they’re so numb from having their software change out from under them all the damn time.

                        1. 2

                          Speaking as a personal privacy advocate who has spent years trying to explain things like software freedom and the importance of privacy-first computing environments: even when you invest huge amounts of time in helping people understand, my anecdotal experience says they really, REALLY couldn’t care less.

                1. 9

                  I’m quite uncomfortable with the idea of discord recording voice calls. Keeping records of chat logs is obviously necessary with the way Discord is designed, which is around long duration searchable history of channels, anyone being able to invite anyone to the server, etc.

                  But voice calls are totally ephemeral. And people expect them to be treated that way. Someone keeping logs of a text conversation in Discord wouldn’t be considered odd. Someone recording a voice call they were in, without telling anyone? That’d be considered a breach of trust in every Discord community I’ve been in. So Discord the company having the ability to do so is just creepy.

                  1. 6

                    I’m not sure what drives you to expect privacy from a communications platform fueled by venture-capital money. I wouldn’t be surprised if they’re doing at least two things:

                    1. Applying a censor to voice depending on server/user DM configuration. I know they’ve got some kind of OCR that tries to identify and block offensive words contained in images, such as the N word, when people are not friends and at least one side hasn’t changed the “safe direct messaging” option down to “I live on the edge”.
                    2. Store records at least temporarily for law enforcement.

                    And the other obvious possibilities are keeping the data for post-processing to derive user interests for advertising, or batching and forwarding the information to intelligence agencies.

                    It’s hard to tell, really.

                    1. 4

                      If voice calls are being recorded, users should be shown a very clear warning, at the very least.

                      On a side note, the fact that a behavior is not surprising does not make it acceptable or not worthy of discussion.

                      1. 2

                        Is there a mention of this in the ToS? (I don’t get a hit for the string “audio” there).

                        At least in Sweden (and maybe in the EU in general), if you call a contact center that employs “sentiment analysis” and “quality control”, you are informed of this beforehand.

                        If Discord does record voice but doesn’t inform beforehand (through a ToS), they could get in big trouble in the EU.

                        1. 2

                          I’m not a lawyer, get a lawyer for good advice.

                          I couldn’t find anything related to recording and retention, or user deletion, outside of copyright-infringement contexts (DMCA, etc.), which is what a good section of this document covers.

                          There is a dense “Your Content” paragraph, which I have modified to bullet by sentence, and also bold the major points:

                          You represent and warrant that:

                          • Your Content is original to you and that you exclusively own the rights to such content including the right to grant all of the rights and licenses in these Terms without the Company incurring any third party obligations or liability arising out of its exercise of such rights and licenses.
                          • All of Your Content is your sole responsibility and the Company is not responsible for any material that you upload, post, or otherwise make available.
                          • By uploading, distributing, transmitting or otherwise using Your Content with the Service, you grant to us a perpetual, nonexclusive, transferable, royalty-free, sublicensable, and worldwide license to use, host, reproduce, modify, adapt, publish, translate, create derivative works from, distribute, perform, and display Your Content in connection with operating and providing the Service.

                          @gerikson, this appears to be full grant and indemnification, which also covers traditional voice chat.

                          1. 2

                            Thanks for this. The “content” section seems to be standard boilerplate that many content platforms include to allow them to duplicate content over CDNs etc. Periodically there’s a panic in the form of “OMG Facebook owns all your content!!!” based on misunderstanding of these clauses.

                            Possibly Discord reserves the right to terminate service if they can determine that someone is abusive in voice chat. It would be interesting to hear if anyone has lost access in this way - i.e. been unfailingly polite in text but violated the ToS in voice. That would be fairly strong proof that audio is recorded and monitored, at least after complaints are made.

                          2. 1

                            Putting some fine print in the ToS that nobody reads doesn’t count as ‘notifying beforehand’ in my opinion.

                      2. 2

                        If they’re up-front with it, I say there’s nothing wrong. Otherwise, I agree. I use discord all the time because many communities are using it these days, but never the voice chat, just because text is more consistent and easier to communicate with many people and ideas.

                        1. 10

                          If they’re up-front with it, I say there’s nothing wrong.

                          Muggers are often quite up-front too, and less opaque than most web TOSes these days.

                          1. 3

                            Thanks for that comment, you made my morning :)

                            1. 3

                              Muggers and TOSes are not comparable…

                              1. 1

                                Honest Americans offering a service of stress release, with clear and direct terms of service agreements. God bless

                            2. 2

                              There’s nothing suggesting they do any recording of voice calls. I wouldn’t at all be surprised if they have the ability to; they own the server and the proprietary service you’re using to communicate with.

                              1. 5

                                Discord provides a policy regarding user privacy, which explains it may capture “transient VOIP data”. While it’s a bit unclear what this may entail, our research shows that this “data” includes all voice and video data.

                                This suggests to me they’re recording voice calls.

                                1. 2

                                  They could be doing literally anything with this unspecified data, and I’d baselessly assert it’s probably related to audio-processing features like noise cancelling and echo reduction, rather than vague terminology for nefarious purposes.

                              2. 2

                                Are there any well-polished and E2EE (or selfhosted) voice + video call applications that people here on lobste.rs would recommend? The ones I could find don’t seem to work very well on slow connections (dynamic video bitrate pls), so I’m looking for more alternatives.

                                1. 2

                                  The only thing I can recommend right now is Matrix.org. You can self-host it, and compared to many, many other solutions, the protocol is rather consistent and nothing is bolted on. I like how encryption keys are first-class citizens, compared to XMPP and others.

                                  1. 1

                                    Does matrix.org support reactions in text chat (thumbs up, etc.)?

                                    I tried the Fractal client and I couldn’t find a way to see or create reactions.

                                    1. 2

                                      i currently use the riot client and it supports emoji-style reactions in text. so i assume it’s part of matrix itself and maybe some clients haven’t implemented it (or it’s buried in the UI?)

                              1. 3
                                1. Yay, it exists!
                                2. We just got somewhere pushing people to provide BLAKE2 in libraries. Now we get to do the whole exercise again…
                                1. 12

                                  Incidentally, archive.codeplex.com (still owned by Microsoft!) has been marked as containing harmful programs by Google Safe Browsing. As in, all of it. This is mildly entertaining to me. If inactive/archived code repositories are now getting flagged, how come code.google.com/archive isn’t?

                                  And finally, I am also providing my binaries on my Discord server in a special #releases channel so that there’s a method of obtaining the binaries outside of web browsers where pages and files can be blocked.

                                  Infosec Twitter has been trying to convince Discord to actually scan executables for malware. I wonder if this won’t end up with Discord going down the code signing route, too.

                                  1. 12

                                    Infosec Twitter has been trying to convince Discord to actually scan executables for malware. I wonder if this won’t end up with Discord going down the code signing route, too.

                                    Article author here: I think that’s a great idea to scan binaries for malware. Google’s Safe Browsing flags binaries as “harmful content” without scanning them at all, solely on the basis that it hasn’t seen those particular binaries ‘much’ before.

                                    If it were to run a VirusTotal scan like this on my file before flagging it, it would have seen the file was safe in 70 of 70 different scanners. If it had considered that my domain is 14 years old and has never once hosted anything harmful, that would have also been great.

                                    Unfortunately, Safe Browsing takes a shoot-first approach; they don’t ask questions, let alone ever contact you =/

                                  1. 2

                                    Disclaimer: I haven’t run this, mainly because I can’t be arsed to install Nim and fight it and its build system to get things running. I know nothing about Nim.


                                    Cryptography: First of all, big props for sticking with the high-level primitives that libsodium gives you. This makes everyone’s work much easier if someone were to seriously audit this (what follows may look like some kind of audit, but I’m not qualified to audit in any capacity).

                                    The key management is a bit weird, which is kind of the point of the tool I guess. Do I understand this correctly?

                                    1. You generate 14 random words from a predefined list of 2048 words, picked uniformly at random (sodium randombytes_uniform()).
                                    2. You run 12 of those random words through crypto_pwhash() with a random salt (sodium RNG) to form a key K; the other two form an identifier.
                                    3. You generate a signing keypair and an encryption keypair.
                                    4. You use K to encrypt the secret keys with sodium crypto_secretbox_easy in some opaque format dictated by Nim (?).
                                    5. You store the encrypted+authenticated secret keys along with plaintext salt and public keys in the same format.
                                    6. You output the 14 words, which the user remembers/writes down and uses those to recover the asymmetric keys.

                                    Your code asserts that assert entropy_bits_per_word*num_secret_words >= 128, which seems reasonable enough to me. This does seem, however, oddly elaborate and computationally expensive; the keys are high entropy to begin with: I assume you target 128-bit symmetric key strength. What you’re doing is generating a 128-bit key, then encoding it using base 2048, then using a password-based key derivation function to derive the final key from that. That’s needless complexity; it may feel like a password, but it’s really just base 2048 with a predefined English wordlist, so you can skip the hash entirely. You may just as well generate the 128-bit key and encode that in base 2048 directly, skipping the expensive and slow password hash and the need to store a salt. When you need to actually use it, pad it with zero bytes (on either end, doesn’t matter) to get the 256 bits required for libsodium’s cryptography functions. (Base 2048 decoding isn’t entirely trivial to write, however, but that’s another story.)
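
                                    The direct base-2048 encoding suggested above can be sketched in Python. The word list and helper names here are mine for illustration, not the tool’s; a real implementation would use the BIP-0039 English list of 2048 words.

```python
# Sketch: encode a 128-bit key directly in base 2048, skipping pwhash entirely.
# WORDS is a hypothetical placeholder list; use the BIP-0039 list in practice.
WORDS = [f"word{i:04d}" for i in range(2048)]
INDEX = {w: i for i, w in enumerate(WORDS)}

def key_to_words(key: bytes) -> list[str]:
    """Encode a 16-byte (128-bit) key as 12 base-2048 words (12 * 11 = 132 bits)."""
    n = int.from_bytes(key, "big")
    words = []
    for _ in range(12):
        words.append(WORDS[n & 0x7FF])  # take the low 11 bits
        n >>= 11
    return words[::-1]  # most significant word first

def words_to_key(words: list[str]) -> bytes:
    n = 0
    for w in words:
        n = (n << 11) | INDEX[w]
    return n.to_bytes(16, "big")  # the top 4 of the 132 bits are always zero here

# To feed libsodium functions expecting 256 bits, pad with zero bytes:
# key256 = words_to_key(words) + bytes(16)
```

                                    Decoding is just the reverse; as noted, the zero padding can go on either end as long as both sides agree.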

                                    Your encrypted data contains a plaintext header that specifies:

                                    1. The two paperkey identifier words of the expected paperkey.
                                    2. The salt in base64.
                                    3. The actual ciphertext in base64.
                                    4. An ephemeral public key (used for crypto_box_seal_open) in base64.

                                    The first thing to note is that you have “only” 22 bits of key identifier (two words). This may be an issue: It’s not unlikely to get a natural collision of two key identifiers. While I don’t immediately see an issue with this security-wise, it’s still a bit iffy. If two people have the same paperkey, they’ll have to do some weird manual agreement on a new paperkey identifier.
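
                                    To put a number on how likely that natural collision is: with an identifier space of 2^22, the usual birthday approximation (the function name here is mine, for illustration) says a collision becomes likely surprisingly fast.

```python
import math

def collision_probability(num_keys: int, id_bits: int = 22) -> float:
    """Birthday approximation: P(collision) ~= 1 - exp(-k*(k-1) / (2*N))."""
    n = 2 ** id_bits
    k = num_keys
    return 1 - math.exp(-k * (k - 1) / (2 * n))

# With only ~2400 paperkeys in the wild, the odds of at least one
# identifier collision are already about 50%.
print(round(collision_probability(2400), 2))
```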

                                    I don’t know why there is any JSON involved here. It adds 33% overhead. You definitely should be doing binary files instead. You gain nothing from JSON other than having even less memory to actually decrypt data in.

                                    Your design also leaks metadata by encoding the expected paperkey. I think you know why you do it (to avoid expensive key derivation and decryption), but perhaps having a verification vector coupled with skipping the expensive pwhash instead would be a less leaky solution (e.g. decrypting 32 zero bytes with the paperkey before the actual decryption and seeing if that’s still all-0). Alternatively, brand it a usability feature and get people to search for paperkeys that start with the two words; still leaks metadata though. This may be okay, again depending on your threat model.
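
                                    The verification-vector idea could look something like the following sketch. I’m using HMAC from the standard library purely as a stand-in for “encrypt 32 zero bytes under the paperkey”; the helper names are hypothetical, not part of the tool.

```python
import hashlib
import hmac

def make_check_value(key: bytes) -> bytes:
    # Stand-in for encrypting 32 zero bytes under the paperkey and storing the
    # result: any keyed, deterministic function of the key works as a check.
    return hmac.new(key, bytes(32), hashlib.sha256).digest()

def key_matches(key: bytes, stored_check: bytes) -> bool:
    # Constant-time comparison; a wrong paperkey fails here before any
    # expensive decryption is attempted.
    return hmac.compare_digest(make_check_value(key), stored_check)
```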

                                    Finally, you’re not guaranteeing anything about the sender by using an ephemeral keypair. This may be a conscious design decision for plausible deniability, maybe not. Depends on your threat model, I guess. However, you also make it difficult to actually add a guarantee for the sender: there’s nothing that uses crypto_box proper, and neither is there an easy way to sign+encrypt. With your current JSON encoding, encrypting and signing together make for a ~77% increase in data size. That’s just downright painfully wasteful.

                                    Signatures are straightforward. You might perhaps want to have an option to have detached signatures so that you don’t have to carry the entire message around (e.g. for big files transferred separately).

                                    You’re also not wiping secrets from memory with sodium_memzero(). I don’t know if it can be made to work in Nim at all, however, and FiloSottile’s age seems to be happy to forgo that entirely.

                                    (libsodium crypto_secretbox_easy takes a nonce parameter; the Nim version doesn’t. I had to look up where the nonce comes from; that’s not your issue though. Apparently it’s from a CSPRNG and then prepended to the ciphertext, which works. But if people are supposed to write compatible implementations for this later, you’ll need to know these things explicitly. Nonce reuse is fatal, so I’m glad there’s no issue on that front.)
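
                                    For anyone writing a compatible implementation later, the framing described above (a fresh random 24-byte nonce prepended to the ciphertext) can be sketched as follows. The helper names are mine, and the actual encryption would of course happen in libsodium.

```python
import secrets

NONCE_BYTES = 24  # crypto_secretbox uses a 24-byte nonce

def frame(ciphertext: bytes) -> tuple[bytes, bytes]:
    """Draw a fresh random nonce and prepend it. The nonce is public, but it
    must never repeat under the same key."""
    nonce = secrets.token_bytes(NONCE_BYTES)
    return nonce, nonce + ciphertext

def unframe(blob: bytes) -> tuple[bytes, bytes]:
    """Split a framed blob back into (nonce, ciphertext)."""
    return blob[:NONCE_BYTES], blob[NONCE_BYTES:]
```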


                                    Usability: This thing will just straight up fail on files that don’t fit in memory. If files fitting in memory is all you’re going to use it for, okay. If not: This is probably going to turn out to be an issue later down the road.

                                    There’s also no documentation other than some help output. Consider adding a man page to better integrate with *NIX systems.


                                    I’m not including an english.txt so that I don’t have to worry about licensing/copyright concerns.

                                    Ignoring the issue of whether you can copyright a list of 2048 words that is required for interoperability reasons, it seems mildly ironic that you’d worry about copyright concerns yourself, but then completely fail to attach a license to your own code.

                                    Incidentally, BIP 0039 links to python-mnemonic, licensed under the MIT license, which contains the word list. So you can probably just take the list and comply with the MIT license’s attribution/license notice preservation requirements if you’re that worried.

                                    1. 1

                                      Thanks for having a read through and putting this together.

                                      Ignoring the issue of whether you can copyright a list of 2048 words that is required for interoperability reasons, it seems mildly ironic that you’d worry about copyright concerns yourself, but then completely fail to attach a license to your own code.

                                      Dang, I forgot that step when uploading. MIT license added. (The paperkey.nimble file still says Proprietary but I think the LICENSE file in combination with commit history is clear enough.)

                                    1. 10

                                      Is your Makefile a GNU makefile, or a BSD makefile?

                                      This question is why I recommend mk (the successor to Make).

                                      1. 8

                                        The best successor to Make I’ve seen is redo. State maintenance is much more explicit and the shell DSL is beautiful.

                                        1. 3

                                          djb writes embarrassingly good software.

                                          1. 3

                                            DJB writes embarrassingly good drafts. Then he leaves the rest of us to actually turn that into a product. 3 examples:

                                            • NaCl signature code is to this day marked as “experimental”
                                            • TweetNaCl has two uncorrected instances of undefined behaviour (left shifts of negative integers, lines 281 and 685).
                                            • the Redo link above is not from DJB, it’s from someone who re-implemented DJB’s idea.

                                            This is not a criticism. He’s very good at leading the way, and time taken to polish software is time not taken to lead the way.

                                            1. 2

                                              I’ve been thinking about this for a while. Why is so much of what he makes so embarrassingly good? I think it’s a combination of a few things:

                                              1. He is able to rid himself of all preconceived notions to reach a goal. Salsa20 and Chacha20 is probably the most striking example there: An ARX cipher that mortals have an actual chance of implementing correctly and securely without any notable bumps in the way, despite it also being highly performant. Similarly, djbhash came from the other end: Starting with the simplest possible construction and fiddling with the operations and constants until the result worked well.
                                              2. He has seen “both ends” extensively, the perspective of the implementor and the mathematical perspective of the algorithm and its properties. The result is that his designs end up being extremely pragmatic, especially as to how much complexity you can expect from people to be able to actually follow. A striking example is probably the Poly1305 paper, where it’s clear that he thinks both in “how would an implementation actually go about executing this” and in mathematical terms.
                                              3. He has a highly analytical mind. His papers (especially his design papers) tend to be very accessible, even for people not deep into the subject matter. This is only possible because he understands what he’s saying at a fundamental level: this allows him to decompose complex thoughts and constructions into simpler parts. As a side effect, he will probably often recognize redundancy or opportunity for simplification.
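
                                              The hash mentioned in point 1 survives today as the widely copied “djb2” function; a minimal Python version, for illustration:

```python
def djb2(data: bytes) -> int:
    """The classic djbhash: start from 5381 and fold in each byte as h*33 + b."""
    h = 5381
    for b in data:
        h = (h * 33 + b) & 0xFFFFFFFF  # keep it a 32-bit value
    return h
```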

                                              This started getting increasingly clear to me as I read the Salsa20 and ChaCha20 papers. The Curve25519 documentation still seems a bit obtuse, which can just be blamed on elliptic curves and their moon math themselves; maybe someday, someone, somewhere will come up with a scheme that’s a bit easier to follow. Looking at post-quantum cryptography efforts, however, the trend seems to be going in the opposite direction.

                                          1. 4

                                            Better than 14 mutually incompatible implementations of the same standard.

                                            1. 7

                                              Yeah, but you can solve that with a make file. ;)

                                            2. 5

                                              Sure, and we should be okay with that. Competition is healthy.

                                              1. 4

                                                In what way do build systems “compete”? The fragmented ecosystem of open-source build systems looks nothing like a market to me; it’s really strange to ascribe the ideals of markets onto that ecosystem, especially when people just use or build the toolchain that makes them happy and almost never worry about other toolchains. There’s no real competitive pressure between the systems.

                                                1. 1

                                                  They compete for your attention; that’s really the point of writing a “new, improved” build system. It doesn’t have anything to do with market economics: competition is a concept relevant to both, but it’s not something exclusive to markets.

                                                  1. 2

                                                    If nobody is paying attention, what attention are you competing for? What’s the point? How is that good?

                                                    1. 1

                                                      So why is it healthy?

                                                      I want competition when the existing solutions are poor. When the existing solutions are good, or even fine, I would much prefer standardization.

                                                      1. 1

                                                        It’s healthy because it leads to people implementing solutions that are better than the preexisting solutions. If something becomes standardised across an industry, I think we call that winning…

                                                        1. 1

                                                          So if existing solutions are good, but 30 less effective, specialized solutions nonetheless pop up, you consider that winning?

                                                  2. 3

                                                    I’m not saying that it’s bad, just that the way you phrased your comment sounded a lot like “If you have to choose between A and B, take C” for any value of A, B, C.

                                                    But otherwise, I’ve never really recognized any major benefit that GNU Makefiles (since I use those the most) offer over Plan 9’s mk. A quick look at Hume’s paper on the topic didn’t really convince me that it’s so much more advanced, especially when considering that GNU Make has features like functions and Guile integration.

                                                2. 2

                                                  Could you share a link to it? I tried searching, and all I get are Michael Kors, Mario Kart, and Macedonia-related articles, plus some Android build system…

                                                  1. 7

                                                    mk is available in plan9port (my preferred version). There is a standalone version written in Go, but it is marginally incompatible (changes the regex flavor, allows extra whitespace), which I don’t recommend (Go regex sux) but would be fine with becoming the default.

                                                    1. 2

                                                      Neat, thanks. Indeed it is way simpler and with clean semantics, which I appreciate a lot. Make has so many special cases that after a month of not using it, I have to reach for documentation to understand even simple things like assignment :/

                                                      I would note a nice parallel in implementing Mk in go — both assume multiple platforms and are small.

                                                      1. 2

                                                        Go regexps are guaranteed to not be stuck in an eternal loop, which is nice.

                                                        1. 1

                                                          It’s a bit sad to reflect that mk already has two incompatible variants, despite being much newer and less adopted than Make.

                                                          (Not meaning to bash mk specifically here, this is not a make-specific problem as much as a universal problem.)

                                                          1. 2

                                                            mk appeared in Unix version 9, more than 30 years ago. Not that much newer :-)

                                                            1. 1

                                                              Ah, right. It’s doing OK, then :)

                                                            2. 2

                                                              Honestly, I think the developer of the Golang version didn’t want to implement a Plan 9 regex engine (probably the simplest regex I’ve ever used).

                                                      1. 1

                                                        https://grumpy.website/post/0SysyXDMH

                                                        I love this one. There are so many websites which would have been functional (if a bit annoying) if they just showed the desktop view on mobile, but they explicitly added <meta name="viewport" content="width=device-width"> yet their HTML/CSS assumes the screen is wider than a phone screen. The “correct” solution would’ve been less work, yet they actively work to fuck it up.

                                                        1. 1

                                                          My guess is that these people just copied a template and didn’t really think about what the boilerplate does.

                                                        1. 1

                                                          a copy of HP-UX 10.20 for 700 series machines. This was tricky to find! A friendly computer historian was willing to send me an ISO, but I’m not aware of one that’s publicly downloadable. Dear HP: please just post these.

                                                          This stuff is so old that HP themselves may no longer have them anyway.

                                                          1. 12

                                                            Protobufs are an attempt at a solution for a problem that must be solved at a much lower level.

                                                            The goal that Protocol Buffers attempt to solve is, in essence, serialization for remote procedure calls. We have been exceedingly awful at actually solving this problem as a group, and we’ve almost every time solved it at the wrong layer; the few times we haven’t solved it at the wrong layer, we’ve done so in a manner that is not easily interoperable. The problem isn’t (only) serialization; the problem is the concept not being pervasive enough.

                                                            The absolute golden goal is having function calls that feel native. It should not matter where the function is actually implemented. And that’s a concept we need to fundamentally rethink all of our tooling for, because it is useful in every context. You can have RPC in the form of IPC: why bother serializing data manually if a native-looking function call can take care of all of it for you? That requires a reliable, sequential, datagram OS-level IPC primitive. But from there, you could technically scale this all the way up: your OS already understands sockets and the network—there is no fundamental reason for it to be unable to understand function calls. Maybe you don’t want your kernel to serialize data, but then you could’ve had usermode libraries help along with that.

                                                            This allows you to take a piece of code, isolate it in its own module as-is and call into it from a foreign process (possibly over the network) without any changes on the calling sites other than RPC initialization for the new service. As far as I know, this has rarely been done right, though Erlang/OTP comes to mind as a very positive example. That’s the right model, building everything around the notion of RPC as native function calls, but we failed to do so in UNIX back in the day, so there is no longer an opportunity to get it into almost every OS easily by virtue of being the first one in an influential line of operating systems. Once you solve this, the wire format is just an implementation detail: Whether you serialize as XML (SOAP, yaaay…), CBOR, JSON, protobufs, flatbufs, msgpack, some format wrapping ASN.1, whatever it is that D-Bus does, or some abomination involving punch cards should be largely irrelevant and transparent to you in the first place. And we’ve largely figured out the primitives we need for that: Lists, text strings, byte strings, integers, floats.

                                                            Trying to tack this kind of thing on after the fact will always be language-specific. We’ve missed our window of opportunity; I don’t think we’ll ever solve this problem in a satisfactory manner without a massive platform shift that occurs at the same time. Thanks for coming to my TED talk.

                                                            1. 5

                                                              You might want to look into QNX, an operating system written in the 80s.

                                                              1. 1

                                                                It should not matter where the function is actually implemented.

                                                                AHEM OSI MODEL ahem

                                                                /offgetlawn

                                                              2. 3

                                                                I’ve been thinking along the same lines. I’m not really familiar with Erlang/OTP but I’ve taken inspiration from Smalltalk which supposedly influenced Erlang. As you say it must be an aspect of the operating system and it will necessitate a paradigm shift in human-computer interaction. I’m looking forward to it.

                                                                1. 2

                                                                  cap’n proto offers serialisation and RPC in a way that looks fairly good to me. Even does capability-based security. What do you think is missing? https://capnproto.org/rpc.html

                                                                  1. 2

                                                                    Cap’n proto suffers from the same problem as Protobuffers in that it is not pervasive. As xorhash says, this mechanism must pervade the operating system and userspace such that there is no friction in utilizing it. I see it as similar to the way recent languages make it frictionless to utilize third-party libraries.

                                                                  2. 2

                                                                    Well, the fundamental problem IMHO is pretending that remote and local invocations are identical. When things work you might get away with it, but mostly they don’t. What quickly disabuses you of that notion is that some remote function calls have orders of magnitude higher turnaround time than local ones.

                                                                    What does work is asynchronous message passing with state machines, where failure modes need to be carefully reasoned about. Moreover, it is possible to build a synchronous system on top of async building blocks, but not the other way around…

                                                                  1. -1

                                                                    According to section 6.2.5.12, integers are arithmetic types. This, in combination with the second rule, makes that i will now be 0.

                                                                    But the text you cite says: “If an object that has automatic storage duration is not initialized explicitly, its value is indeterminate. If an object that has static storage duration is not initialized explicitly, then:”. I fail to see how i; is equivalent to static i;. As far as I can tell, i being initialized to zero just happened to be done by the compiler and/or leftover memory contents, conveniently, but there’s no actual guarantee of that.

                                                                    Plus i is implicitly (signed) int, so --i; is signed integer overflow, and also well into undefined behavior territory. I’d imagine a compiler would be well within its rights to just optimize the entire function to nothing because UB occurs first thing, since once you hit UB, all bets are off.

                                                                    1. 6

                                                                      i has external linkage, and static storage duration.

                                                                      C99 6.2.2p5

                                                                      If the declaration of an identifier for an object has file scope and no storage-class specifier, its linkage is external.

                                                                      C99 6.2.4p3

                                                                      An object whose identifier is declared with external or internal linkage, or with the storage-class specifier static has static storage duration. Its lifetime is the entire execution of the program and its stored value is initialized only once, prior to program startup.

                                                                      The static storage-class specifier means that the linkage is internal or none, depending on whether the declaration is at file scope or block scope, and the storage duration is static. Objects with external linkage (for example extern int i;), or no linkage (for example static int i; at block scope), also have static storage duration.

                                                                      1. 3

                                                                        Thank you so much for taking the time to look up the relevant pieces in the standard!

                                                                        1. 5

                                                                          No problem :) I spent a long time studying these pieces of the standard when writing cproc, and I know how tricky they are.

                                                                        2. 2

                                                                          Drats, I actually got out-language-lawyered. Learned something new, cheers!

                                                                          1. 2

                                                                            Great link to an HTML version of the standard, I’ve been using the PDF and it’s much harder to navigate. I’m very impressed by your compiler too, it’s much further along than mine: https://github.com/jyn514/rcc.

                                                                            I noticed your compiler is a little inconsistent about functions without prototypes:

                                                                            $ ./cproc-qbe
                                                                            int f() { return 0; }
                                                                            int main() { f(1); }
                                                                            export
                                                                            function w $f() {
                                                                            @start.1
                                                                            @body.2
                                                                            	ret 0
                                                                            }
                                                                            <stdin>:2:17: error: too many arguments for function call
                                                                            $ ./cproc-qbe
                                                                            int f();
                                                                            int main() { return f(1); }
                                                                            export
                                                                            function w $main() {
                                                                            @start.1
                                                                            @body.2
                                                                            	%.1 =w call $f(w 1)
                                                                            	ret %.1
                                                                            }
                                                                            
                                                                            1. 3

                                                                              The difference between

                                                                              int f();
                                                                              

                                                                              and

                                                                              int f() { return 0; }
                                                                              

                                                                              is that the first declaration specifies no information about the parameters, and the second specifies that the function has no parameters. When calling a function, the number of parameters must match the number of arguments, so I believe the error message is correct here.

                                                                              C99 6.7.5.3p14

                                                                              An identifier list declares only the identifiers of the parameters of the function. An empty list in a function declarator that is part of a definition of that function specifies that the function has no parameters. The empty list in a function declarator that is not part of a definition of that function specifies that no information about the number or types of the parameters is supplied.

                                                                              C99 6.5.2.2p6

                                                                              If the number of arguments does not equal the number of parameters, the behavior is undefined.

                                                                              I’m very glad C2X is removing function definitions with identifier lists (n2432). So int f() { return 0; } will actually be the same thing as int f(void) { return 0; }.

                                                                              1. 3

                                                                                I missed 6.7, thank you! That makes things easier for me to implement I think :)

                                                                          2. 4

                                                                            I am always afraid to answer detailed questions like these, because I am still learning a lot about C and am not always sure. However, I believe that i actually has a static storage duration, because it is defined outside the scope of any function. This means that the variable will persist throughout the whole program.

                                                                            Edit: I said that i would have an automatic storage duration, but the arguments are for a static storage duration. This is what I actually meant to write. My apologies for the inconvenience.

                                                                            The second point is a good one, about which I responded earlier in a comment under my post. You depend on your compiler for that indeed. This line was written for gcc specifically however.

                                                                            1. 0

                                                                              I am always afraid to answer to detailed questions like these, because I am still learning a lot about C and not always sure.

                                                                              Very few people actually know C. Given that writing it is actually just an extremely elaborate exercise in language lawyering, it truly is a language only a lawyer could love. The worst that could happen is that you get corrected if you give a wrong response—meaning that you’d learn something from it.

                                                                              However, I believe that i actually has an automatic storage duration, because it is defined outside of the scope of a function or whatsoever. This means that the variable will be persistent throughout the whole program.

                                                                              Indeed so, that’s my understanding as well. But that also means that its initial value is indeterminate: automatic storage duration is mutually exclusive with static storage duration. Therefore, you cannot actually get the zero-initialization you’d get from static storage duration, and instead the value is indeterminate because it has automatic storage duration.

                                                                              1. 6

                                                                                Every object declared at file scope has static storage duration. The only objects that have automatic storage duration are those declared at block scope (inside a function) that don’t have the storage-class specifier static or extern.

                                                                                1. 3

                                                                                  The worst that could happen is that you get corrected if you give a wrong response—meaning that you’d learn something from it.

                                                                                  I agree, which is why I always answer anyway. Thank you.

                                                                                  Indeed so, that’s my understanding as well. But that also means that its initial value is indeterminate: Automatic storage donation is mutually exclusive with static storage donation.

                                                                                  I changed my post above. I made a mistake while writing it, due to my lack of time. The fact that i is defined outside the scope of any function means that it is static, as mcf described.

                                                                            1. 7

                                                                              NACL is a library wrapping crypto primitives in a manner that’s easy to use. It provides public key crypto, secret key crypto, and cryptographic signing. And because it’s built on simple concepts, there’s a version in almost every language.

                                                                              1. 11

                                                                                You’ll find that libsodium, which forked from NaCl (and is now almost unrecognizable as a fork save for a few areas), is the thing that actually caught on. NaCl barely missed the mark in terms of ease of use; libsodium filled in the missing gaps.

                                                                              1. 7

                                                                                It’s like someone looked at JavaScript and Go, took their defining characteristics, made them as pure as possible, and then thought “what’s the worst possible consequence I could take from this?”. And then they sell it completely straight, without even flinching at the sheer nonsense of the result, while also responding to current trends in programming languages.

                                                                                I just wonder how long until someone comes along who doesn’t realize this is satire perfected to a work of art and actually uses it in production.

                                                                                1. 3

                                                                                  The original INTERCAL was basically the same thing, but for the “hip” mainframe languages of the 1970s; I feel like IntercalScript is a worthy successor in that regard.

                                                                                1. 11

                                                                                  Packaging can be very, very hard. Alpine and Arch may have fairly simple systems that make providing your own into a community repository easy, but other distributions may not.

                                                                                  Debian, for example, not only has a very complex system, it also has byzantine policies: e.g. the wiki links to the new maintainer’s guide, which states in chapter 1 that it’s getting outdated, instead linking to the Guide for Debian Maintainers, which is supplemented by a policy manual and a developer’s reference, each having multiple pages of tables of contents alone.

                                                                                  How you’re going to get anything into Red Hat Enterprise Linux is beyond me, since you’ve got no control over it and they’re notoriously picky.

                                                                                  1. 2

                                                                                    I’m curious: is it just the fragmentation of the documentation that makes Debian packaging feel complex?

                                                                                    In general I find that sensible software can be packaged in minutes since it’s all automated.

                                                                                    1. 5

                                                                                      I’ll dig out the low-down as far as I can tell, and you can be the judge of whether it feels complex (CC @federico3, who may of course correct me if I’m wrong about any of this or have omitted something major):

                                                                                      1. To contribute a package, you must become a “maintainer”, i.e. a person who maintains a package. That is either a “Debian Maintainer”, who can upload specific packages directly, or a “sponsored maintainer”, who needs a sponsor to upload packages. When you start out, you’ll be the latter, so you’ll need a sponsor. What nearly nobody tells you: you’re allowed to become the maintainer of your own software as well. But the usual follow-up objection is that if nobody asked for it on WNPP, chances are it will just get rejected for having no demand.
                                                                                      2. Try to generate most of the package automatically. If you’re lucky, you’re in one of the cases where it all works automatically and you “only” have the policy to worry about. If you’re unlucky, you first have to make it work.
                                                                                      3. Fill in the policy-mandated fields: a description (note: RFC 822-style format, so you can’t use truly empty lines; a blank line must be written as a line containing a space followed by a single dot), dependencies, build-time dependencies.
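                                                                                      The description-folding convention can be sketched as follows (the package text here is invented for illustration):

```
Description: short one-line synopsis goes here
 The long description follows as continuation lines, each one
 beginning with a single space.
 .
 A line containing a space and a single dot stands in for a blank
 line, because a truly empty line would end the RFC 822-style
 field block.
```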

                                                                                      Now comply with the rest of the policy, such as:

                                                                                      1. If the source package contains hard links, those need to be removed or replaced.
                                                                                      2. Split the package if necessary (always necessary for C/C++ libraries since you’ll need at least -dev and the main library).
                                                                                      3. If you’re packaging a library, also maintain a separate, semi-automated account of all symbols and the associated version.
                                                                                      4. If your package includes third-party code, remove that as well and instead patch it to use Debian’s copies of the third-party code if you have to.
                                                                                      5. Fill in the debian/copyright file. If you’re lucky, every file has a header that identifies the license. If you aren’t, you’re off to checking every single file’s license and try to infer it from other files (namely the COPYING or LICENSE file).
                                                                                      6. Write a package changelog, which is apparently a mix of changes made to packaging and upstream changes.
                                                                                      7. If the program is not covered by man pages, you should write them yourself; bugs can be filed against the package for missing ones, and if upstream is not cooperative, you’ll be writing them yourself. Upstream refusing is not an excuse.
                                                                                      8. A lot of mess regarding init systems, generally technically requiring you to still write init scripts in a systemd-centric distribution.
                                                                                      9. If your package falls in the scope of a team, join that first and check their policies as well.
                                                                                      10. Sign your package with GPG, so you also get all the extra fun of GPG key management.

                                                                                      etc. etc. There’s almost no way you’ll get all of this right on your own and that you won’t have a special case, so you should join the debian-mentors mailing list, but that means you get all the overhead of the list and the usual caveats apply (lurk for a while and see what the usual tone is before you ask a question, search the archives, etc.).

                                                                                      Now that you’re reasonably certain that you’ve handled all the corner cases in the documentation, which will usually get even the seemingly easiest of packages, you get to the fun part:

                                                                                      1. Create an account on mentors.debian.net.
                                                                                      2. Upload your signed package there.
                                                                                      3. Find a sponsor by filing a bug against the sponsorship-requests pseudo-package. Since the ratio of Debian Developers that can and want to be sponsors to prospective sponsored maintainers is rather imbalanced, chances are you’ll just get ignored entirely unless you can make a good case that your package is both low-risk for them and already well-made. If your first language isn’t English, this necessarily becomes comparatively much harder for you.
                                                                                      4. If you fail to find a sponsor, wait a few weeks, post a follow-up.
                                                                                      5. If that still fails, either give up or start bothering sponsors that work on related packages.

                                                                                      There is no equivalent to the Arch User Repository, so either your package eventually gets in, or it’s just in limbo forever if you can’t get anyone to bother sponsoring you.

                                                                                  1. 3

                                                                                    Interleaved deltas are an underrated gem of efficiency for retrieving old revisions and doing blame operations. It’s a bit unfortunate that they haven’t caught on, but part of it is probably that manipulating a weave is surprisingly difficult.

                                                                                    1. 2

                                                                                      This was before ZFS was a thing on Linux. I wonder what Linus’s opinion is on how ZFS behaves in this area.

                                                                                      1. 1

                                                                                        Since ZFS isn’t in mainline, I could conceivably see him just straight up not caring and going “we have enough filesystems in the kernel that are varying degrees of broken, I don’t have time to look at another one”.

                                                                                      1. 23

                                                                                        Have you considered not putting in hard end-of-lines at all? That way your mail should be readable by all readers: mobile, tablet, horizontal/vertical, terminal, etc. If your mail reader doesn’t support soft wrapping, it would only be a matter of setting your terminal to 80 characters wide.

                                                                                        Sentences in the mail are not code. It’s natural to wrap them. We do it all the time in books, articles, lobste.rs comments, etc. It allows people to use different fonts, different sizes (some people see worse than others, have to use bigger fonts), different devices, etc.

                                                                                        Is it natural to
                                                                                        write a comment
                                                                                        like this? What
                                                                                        is the point of
                                                                                        the newline cha-
                                                                                        racter here?

                                                                                        That is, of course, with respect for rules imposed by mailing lists. E.g. if the OpenBSD mailing lists require 72 characters, then everyone should respect that setting.

                                                                                        1. 5

                                                                                          Have you considered not putting hard end-of-lines at all?

                                                                                          I think this is the way to go. Every client will be able to render the text the way it looks best. How about limiting lobste.rs to 72 characters? Where’s the difference? That was a rhetorical question :-)

                                                                                          On the other hand: This is a classic debate that will most likely never have an end. It’s preference. And thus I have to live with badly readable mail, when reading on mobile.

                                                                                          But I tend to prefer tabs over spaces, so… what sane person would take me seriously?

                                                                                          1. 1

                                                                                            Sentences in the mail are not code. It’s natural to wrap them.

                                                                                            Sentences in e-mail are, however, often quoted. It’s much easier to take one sentence and quote it if it’s on a separate line. Newlines after punctuation are a feature, not a bug when it comes to e-mail.

                                                                                            That is, of course, with respect for rules imposed by mailing lists

                                                                                            Since people composing their e-mail in e-mail clients would have to go and hunt down the setting to control the line length limit to match the rules imposed by mailing lists, there’s kind of a natural race to the bottom for settings that comply with the most mailing lists.

                                                                                            1. 3

                                                                                              I’m pretty sure you can select text with a mouse and copy it…

                                                                                              1. 2

                                                                                                Sentences in the mail are not code. It’s natural to wrap them.

                                                                                                Sentences in e-mail are, however, often quoted. It’s much easier to take one sentence and quote it if it’s on a separate line. Newlines after punctuation are a feature, not a bug when it comes to e-mail.

                                                                                                Do you suggest that long lines without hard end-of-lines and quoting are incompatible, or hard to perform?

                                                                                                But even if they were, I still believe it’s more important that lots of people can conveniently read what I write than that editing such an e-mail is convenient for me personally.

                                                                                            1. 1

                                                                                              Given that OpenBSD mailing lists require 72 characters per line, that’s what I send, but not necessarily what I expect to receive. Considering that mobile clients seem to aggressively render mail in non-monospace fonts, from what I’ve heard, this should be reasonable on mobile as well, so I myself see no reason to change it.

                                                                                              1. 1

                                                                                                What’s “aggressive” about rendering non-code text nicely?

                                                                                                And no, as someone who sometimes reads mail on a phone, 72-char wrapped non-monospace text on a phone doesn’t look reasonable. Phones usually don’t have the horizontal space for 72 characters even when using a proportional font.

                                                                                                EDIT: To illustrate, here’s how an email from launchpad looks in my phone’s email client: https://cloud.mort.coffee/s/LN2C34S6wTntJQ2/preview - the 72 characters end up being a bit less than 1.5 actual lines, so you end up with a jarring reading experience with unnatural pauses every other line, because some piece of software decided there had to be a physical newline there.

                                                                                              1. 1

                                                                                                A great use I found for this is a script that lets me scp files to an embedded machine without typing the password each time.

                                                                                                1. 6

                                                                                                  I’m glad that your workflow works for you, and I know there may be constraints that I’m not aware of, but I feel like I should mention ssh-agent as another way to solve this problem, just in case.
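
                                                                                                  In case it’s useful, the key-based setup is only a couple of commands. A sketch, assuming an OpenSSH client; the key path and hostname are placeholders:

                                                                                                  ```shell
                                                                                                  # Generate a passwordless ed25519 keypair in a scratch directory
                                                                                                  # (in practice you'd use ~/.ssh and a real filename).
                                                                                                  keydir=$(mktemp -d)
                                                                                                  ssh-keygen -q -t ed25519 -N '' -f "$keydir/id_ed25519"

                                                                                                  # Installing the public key on the target is one more step; once it's
                                                                                                  # in authorized_keys, scp stops prompting ("embedded.local" is made up):
                                                                                                  #   ssh-copy-id -i "$keydir/id_ed25519.pub" user@embedded.local
                                                                                                  ```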

                                                                                                  1. 6

                                                                                                    The ssh fingerprint changes with every boot on the device, as we don’t persist much right now. It’s on the list of things to fix one day. :)

                                                                                                    1. 1

                                                                                                      Without wanting to invalidate your use case: This sounds like the machine is in development. Wouldn’t it be possible to have an authorized key baked into the filesystem from which it boots (which the build system only does on development/testing builds of the image)?

                                                                                                      1. 1

                                                                                                        It is! It is a bit of a process and I don’t work directly with the hardware side of things, so the expect script is a hack. :)

                                                                                                  2. 1

                                                                                                    You can’t use public key authentication?

                                                                                                  1. 14

                                                                                                    I’m not sure about this. In the category of “yet another build system nobody has heard of” there are plenty of strong contenders. I use make only because it’s ubiquitous, not because it’s good.

                                                                                                    1. 5

                                                                                                      I hattteeee make. The idea is fine, but Makefiles tend towards disaster. I rarely see one that is properly maintained and commented. There is something about the Makefile format that causes them to grow organically, way out of control, very quickly. And since every recipe line is just handed off to the shell, they’re Turing complete and can do anything.

                                                                                                      You then don’t know which build rule to use because there are three that look similar and no one remembers which does what. Worse, they drift from implementations and just stop working after a refactor and no one knows.

                                                                                                      I strongly believe that this is due to the Makefile format and not developers, because I’ve seen it so many times that it just isn’t isolated to one set of devs.

                                                                                                      I feel like build files that are built on the language you are primarily building for (e.g. Mage for Go) are the right choice. Unit testing is there. Type safety if you can get it. Obvious error checking requirements. Readable function composition. Heck go nuts and write an integration test that uses reflection to run every single build rule in your CI chain to ensure no breakage.

                                                                                                      I’m pretty tired of *nix usability being held back by the requirement of ubiquity. Things like the fish shell are obviously a better choice for most devs, but so many people still just use Bash and POSIX because they’re ubiquitous. I want to see a distribution that notices you’re trying to execute a program that doesn’t exist, checks a whitelist of known-good packages, and just pulls the right one before execution.
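
                                                                                                      For contrast, here’s roughly what a Makefile has to look like to stay maintainable: phony targets, a comment on every rule, nothing clever. A sketch for a hypothetical Go project (target and path names are made up):

                                                                                                      ```make
                                                                                                      # Build the binary; first rule, so a plain `make` does the right thing.
                                                                                                      build:
                                                                                                      	go build -o bin/app .

                                                                                                      # Run unit tests with the race detector enabled.
                                                                                                      test:
                                                                                                      	go test -race ./...

                                                                                                      # Delete build artifacts.
                                                                                                      clean:
                                                                                                      	rm -rf bin

                                                                                                      # These name actions, not files on disk, so mark them phony.
                                                                                                      .PHONY: build test clean
                                                                                                      ```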

                                                                                                      1. 2

                                                                                                        I’m not sure that learning N good build systems is better than learning 1 not-great build system for life.

                                                                                                        Just like bash, it’s really easy to jump right into writing fresh Makefiles. This is a trap for both Bash and Make.

                                                                                                        Anecdotally, I’ve never seen someone study Bash or Make like they would Go or Python. I don’t think Make is mostly to blame. Just like you wouldn’t blame Go if your new coworker writes shitty Go code. Additionally, people don’t spend as much time writing Bash or Make as they do Go or Python, so they also practice less.

                                                                                                        Not taking the time to learn something and hardly ever practicing seems like a bad idea with any build tool. I’m willing to bet someone who doesn’t know Go can write a pretty shitty Mage file.

                                                                                                        Yes, the Make syntax isn’t the sexiest thing out there, but by putting in my time now, I’m betting that I’ll know something relevant 5, 10, 50 years from now. I don’t want to learn a bunch of different build systems for different projects every few years.

                                                                                                        1. 2

                                                                                                          This! Most of the times we deal with machines that are under our control. If we can’t have tools that make us productive there, we’re losing so much. For example, if we need to sporadically inspect/debug our fleet of machines, wouldn’t it be better to have already all frequently used tools there? Not just for debugging, but for all the things that make us effective? Things that are installed by default are there either to satisfy POSIX or because distro maintainers thought the tools are needed. However, much of that works for the averages, and many uses fall outside of the comfort of two sigma. One of the reasons I started liking Nix is exactly because it is so simple to describe the composition of tools.

                                                                                                          1. 1

                                                                                                            The power of redo is the fact that the build script can be in any language you want. Anything that supports #! can be a redo build script.
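
                                                                                                            Concretely, a redo rule is an ordinary script whose interpreter you choose via `#!`. A sketch of a `default.o.do` that builds any `.o` from the matching `.c` (redo passes the target name, the target minus its extension, and a temporary output file as `$1`, `$2`, `$3`):

                                                                                                            ```sh
                                                                                                            #!/bin/sh
                                                                                                            # default.o.do — rebuild foo.o whenever foo.c changes.
                                                                                                            redo-ifchange "$2.c"            # declare the dependency
                                                                                                            cc -O2 -Wall -c -o "$3" "$2.c"  # write to redo's temp file, not $1
                                                                                                            ```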

                                                                                                          2. 3

                                                                                                            i use make solely for its ubiquity and familiarity, because those are the good parts of it. my makefiles are typically calls to the build system i’m actually using, because it might be easier and better to configure my project’s build using some other build tool, but it’s nice to just be able to type make to kick things off.

                                                                                                            1. 3

                                                                                                              Well, this version of redo has a pure shell fallback that you can just include in your project.

                                                                                                              1. 1

                                                                                                                That’s similar to why I use autotools. They’re the devil everybody knows how to deal with and carries no dependencies other than a POSIX environment at a user’s build time (unlike, say, CMake, which also wants itself to be present at build time; at least it’s slowly becoming more ubiquitous, but I’m not changing my stance until it’s in Debian build-essential).

                                                                                                              1. 1

                                                                                                                Authentication as a whole is broken.

                                                                                                                As a developer, your first and foremost priority should be integrating with a third-party identity provider wherever possible; they probably know better than you how to secure user authentication. Alternatively, go for something that supports U2F, e.g. ed25519-sk (currently in OpenBSD-current) or WebAuthn. Hardware tokens give you much better guarantees than anything else, and with Windows Hello and the general shift towards smartphones (which have their own built-in hardware), this is becoming an increasingly realistic option. Failing that, a password (ideally verified with a PAKE, otherwise a CPU/memory/cache-hard hash, and rate-limited with exponential backoff in any case) + TOTP is the bare minimum you need to have; the phishing protection is still lessened, but at least you’re better off if the server’s database is compromised.
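
                                                                                                                The rate-limiting half is easy to get concrete about. A minimal sketch in shell; the one-second base and five-minute cap are arbitrary illustrative choices, not a standard:

                                                                                                                ```shell
                                                                                                                # Delay to impose after n consecutive failed logins: base * 2^(n-1),
                                                                                                                # capped so the value stays bounded.
                                                                                                                backoff_seconds() {
                                                                                                                  n=$1; base=1; cap=300
                                                                                                                  delay=$(( base << (n - 1) ))   # 1, 2, 4, 8, ...
                                                                                                                  if [ "$delay" -gt "$cap" ]; then delay=$cap; fi
                                                                                                                  echo "$delay"
                                                                                                                }

                                                                                                                backoff_seconds 1   # -> 1
                                                                                                                backoff_seconds 4   # -> 8
                                                                                                                backoff_seconds 12  # -> 300 (capped)
                                                                                                                ```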

                                                                                                                As a user, you should be using a password manager. Yes, it’s a single point of failure, but that’s why you make backups of your password manager database. Ultimately, you’ll only need five passwords: the firmware password to boot a device, the encryption passphrase for volume encryption, the user authentication password for logging in and unlocking the screen, the password manager master passphrase, and the passphrase for the backup volumes that contain your password manager database and other backups.