1. 1

    Lack of any feedback from the repo maintainers would be such a let down for me.

    1. 2

      The last time this project had significant activity was around 20 days ago. The pull request is 19 days old - I think it’s just unlucky timing. Perhaps the maintainer went on holiday with his family? Perhaps he just finished a lot of work on the project and needs a rest from it? Perhaps he just doesn’t feel like looking at it right now - it’s volunteer time.

      Point is, don’t assume your work is unappreciated because you didn’t get a reply - open source is hard work but once you put things into the public time works for you, someone will see it sooner or later :)

      1. 2

        Rubygems.org is maintained by the RubyTogether non-profit, which pays devs hourly for maintenance. The money is nice but it likely means it will take a bit to get approved for work. I’m guessing this also isn’t high priority – Rails 5.1 is working just fine.

    1. 1

      What I’m curious to know is whether “Mark” is a contributor or just someone who uses Redis (or doesn’t even do that). In my eyes this changes the debate from a discussion about the project and its values to a random interjection from outside.

      1. 4

        “Mark” is a pretty well known person in the database world and a quite prolific FOSS contributor who has raised this concern multiple times.

        1. 3

          Apparently it will be decided by a Twitter poll, which is an absurd place to settle this, as people completely unrelated to Redis (not even in the IT field) can manipulate the result. This is a farce.

        1. 3
          • remotely attending as technical support a demo/testing event of a work product
          • playing urban terror with #openbsd-gaming this evening (drop by on IRC!)
          • reading https://nostarch.com/seriouscrypto
          • trekking a mountain trail with wife and the dog (tomorrow)
          1. 15

            Congrats and great work. I would like to emphasise the pull request as it’s also a nice read!

            edit: @pushcx do you have stats pre & post update? I’m curious what impact this had on system load and performance if you have metrics for that.

            1. 1

              Didn’t track any stats beyond what alynpost linked already, no.

            1. 26

              Fight for Pareto’s law, look for the 20% of effort that will give you the 80% of results.

              Perfect is enemy of good, first do it, then do it right, then do it better.

              It’s either worth doing right or not worth doing at all.

              I don’t identify with this manifesto, it feels more like something written for startup cowboys slinging code out than people wanting to write minimal, clean code.

              Here is my take on a minimalist software engineer manifesto.

              1. Keep it correct - measure twice, code once
              2. Keep it simple - aim to solve a single task well
              3. Keep it secure - only allow things required to perform your task
              4. Keep it documented - your tool is useless if no one knows how to use it
              1. 3

                When something isn’t already a solved problem, being able to identify potential correctness pitfalls depends upon having a functional implementation. In other words: write a prototype and throw it away (i.e., first do it and then do it right).

                Of course, most code written for businesses isn’t actually original or solving original problems – the ‘prototype’ is somebody else’s implementation, written years before.

                1. 2

                  Sure. This is engineering 101. Build a scale model.

                  However, the way this manifesto is written, it sounds like he would build a bridge and then patch it up when cracks start to show while people are already driving over it - that is not a prototype and that is not engineering.

                  1. 1

                    About half the crustaceans are interpreting this manifesto the way you are, and about half are interpreting it the way I am, which probably means it’s poorly written!

                    I’ve got a tendency to steelman essays like this, so it’s absolutely possible that this guy is actually defending cowboy coding, but that’s absolutely not how I read it.

              1. 1

                oh this is a delight! had no idea there were so many re-implementation efforts. would be nice to have this outside of a pdf, hah!

                1. 4

                  The link is actually a regular site, not a pdf. I agree that it resembles the in-browser pdf renderers a lot in style :D

                  1. 1

                    hahahah oh my goodness. got me. a lovely list indeed!

                1. -3

                  “Considered Harmful” Essays Considered Harmful (I think “considered dangerous” falls in the same category)

                  It’s not difficult to use C correctly. Don’t blame your vulnerabilities on C when the real culprit is your own sloth.

                  I’ll concede that C (and its API) has quite a few foot guns, but I’ve learned how to avoid them pretty effectively, and I should be able to expect the same from kernel devs. The whole “rewrite everything in <insert promising new lang here>” mentality doesn’t work for large projects (like kernels). To rewrite the Linux kernel in Rust would take months (even if you had all hands on deck). And, who’s to say that Rust wouldn’t change incompatibly three times in the middle?

                  1. 21

                    It’s not difficult to use C correctly.

                    [citation needed]

                    There is no evidence to suggest that large codebases written in C can maintain memory safety in the face of that. The counter-evidence, that writing code in C/C++ tends to produce large volumes of vulnerabilities, for reasons that are explained by language choice, is plentiful. To wit: every major OS (Windows, Linux, macOS), every major browser (Chrome, Firefox, Edge, Safari), every major anti-virus program, every major image parsing library; I can keep going for a while.

                    Denialism about the dangers of memory unsafety is not productive, we need to move on to discussing how we address this.

                    1. 0

                      There is no evidence to suggest that large codebases written in C can maintain memory safety in the face of that.

                      Using C correctly means not making large codebases. C isn’t a language for programming in the large.

                      1. -1

                        There is no evidence that large codebases in any language produce anything better.

                        1. 7

                          Yes there is. The default failure mode of safe languages doing common things is not potential code injection; the default for C is. Given the same bug count, using C will lead to more severe problems. The field results confirm that, from fuzzing to CVEs.

                          1. 4

                            Yes there is. The default failure mode of safe languages doing common things is not potential code injection.

                            I don’t think this is wrong, exactly, but there are a hundred exploits related to Python pickle, etc. as counterexamples. And Java serialization, etc.

                            1. 3

                              Do the memory-safe parts have the memory errors of C (a) at all or (b) as much? And do libraries in concurrency-safe languages show the same number of races as, or fewer than, the equivalent multithreaded C?

                              You’re going to find vulnerabilities in all of them. My side is saying that C amplifies that number by default while others greatly reduce it by default. That’s all we’re saying. I think the evidence already supports that.

                              1. 1

                                “Amplifies” requires some comparative numbers.

                                1. 2

                                  The numbers on using C are that the common operations lead to piles of vulnerabilities with code injection. This happens a lot on average. It happens less with veterans but still happens. That’s irrefutable. The numbers on safe languages show the problems mostly lead to compiler failures or DoS’s from runtime checks. The burden of proof is on your side, given your side’s stuff is getting smashed the hardest all the time, whether the app is small or big.

                                  What numbers do you have showing C is safer for average developer than Ada, Rust and so on? And I’m especially interested in fuzzing results of software to see how many potentially lead to code injection among new, half-ass, or just time-constrained programmers in C vs the same in safe, systems languages.

                                  1. 1

                                    you don’t even have good examples of large scale systems built using some other language that are substantially safer. Until you do, it’s just folklore.

                            2. 0

                              I see a real shortage of examples of large-scale systems constructed in any language that are secure and bug-free, but I am happy to look at references. Like, what do we have comparable to Qmail written in something better that has fewer bugs? I know that C has numerous limitations, but in CS we tend to embrace projects that claim a win by hiding a problem, e.g. using pragmas to do the things that are the most buggy, as if pushing the problem into the corner made it go away.

                              And the code injection bugs I see are all examples of bad engineering - not of bad programming.

                              1. 3

                                There are bugs and there are serious bugs that the language causes. The latter are what hackers hit the most. The latter are what we’re talking about, not just bugs in general. The size of the program also doesn’t matter since the safe language is immune to the latter by design. Scaling code up just increases the odds of severe vulnerabilities in the unsafe, control language.

                                Java and .NET apps are what to look at if you want big ones. Very few CVEs are posted on those apps of the kind you see in C apps. The ones that are posted are usually in the C/C++ runtimes or support libraries of such languages. That just illustrates the problem more. The languages whose runtimes aren’t C have fewer of those since they’re immune or contain them by design.

                                1. 1

                                  My impression is that a) the reason those C/C++ runtimes show up so much is that these languages delegate the most dangerous code, such as parsing of raw input or packets or complex interaction with the OS, to the C/C++ runtimes where it is possible to do that work, and b) the same errors show up in different forms in different languages. The massive prevalence of scripting exploits is not due to C but to lazy interface construction where, for example, user inputs are treated as parts of database scripts etc etc. I do not think that “do all the hard stuff in pragmas or C libraries” actually does limit vulnerabilities.

                                  1. 1

                                    “where it is possible to do that work”

                                    The first part is true. That part isn’t. They think a lower-level language is better for speed, bit handling, or OS interfacing. The second part implies you need C to do that work. There are systems languages that can do that work with more safety than C. So, it’s “possible to do that work” in them without C’s drawbacks. Many low-level programs and OS’s were written in PL/0, PL/S, Ada, Modula-2, Oberon, Modula-3, Clay, and so on. They’re safe by default, turning it off only where you need to. C doesn’t do that since its designers didn’t care when they were hacking on a PDP-11 for personal use.

                                    “b) the same errors show up in different form in different languages. The massive prevalence of scripting exploits is not due to C but to lazy interface construction where, for example, user inputs are treated as parts of database scripts etc etc.”

                                    Aside from something language-specific, the logic errors that happen in scripting languages can happen in C, too. You get those errors plus C’s errors plus the catastrophic effect that comes with them being in C. Let’s say you wrote the interpreter in Ada or Rust with safety checks on. Most of the errors in the interpreter won’t lead to hacks. The extensions would have the same property if built on the base language, just as extensions to C-based programs are often written in C and share the same problems. Platforms like Java that built libraries on C are hit heavily in those C dependencies.

                                    Additionally, the extensions could leverage aspects of these languages, such as type or module systems, designed for knocking out integration errors. Finally, if it’s Ada 2012 and SPARK, they can eliminate runtime checks in performance-critical code by using the provers to show they’re not needed if specific pre-conditions pass early on. Unlike Frama-C, they get a good baseline on code they hurried and highest assurance of what they proved.

                                    1. 1

                                      Data would help. These arguments by what seems sensible to different people don’t go anywhere.

                        2. 16

                          To rewrite the Linux kernel in Rust would take months (even if you had all hands on deck).

                          Months? It would take at least 10 years, regardless of headcount.

                          I’ve learned how to avoid them pretty effectively, and I should be able to expect the same from kernel devs.

                          I’m impressed with your abilities, but then something nags me about the order-of-magnitude mistake in your rewrite estimate. Hmm.

                          1. 13

                            It’s not difficult to use C correctly. Don’t blame your vulnerabilities on C when the real culprit is your own sloth. I’ll concede that C (and its API) has quite a few foot guns, but I’ve learned how to avoid them pretty effectively, and I should be able to expect the same from kernel devs. The whole “rewrite everything in <insert promising new lang here>” mentality doesn’t work for large projects (like kernels). To rewrite the Linux kernel in Rust would take months (even if you had all hands on deck). And, who’s to say that Rust wouldn’t change incompatibly three times in the middle?

                            I suggest you read the linked article first. The title is clickbait but the content is solid. No one even mentioned Rust or anything else… The guy talks about their effort to reduce the foot guns in the kernel code…

                            Here is a quote for the lazy:

                            Kees Cook gave a presentation on some of the dangers that come with programs written in C. In particular, of course, the Linux kernel is mostly written in C, which means that the security of our systems rests on a somewhat dangerous foundation. But there are things that can be done to help firm things up by “Making C Less Dangerous” as the title of his talk suggested.

                            1. 4

                              I suggest you read the linked article first.

                              Ok, you got me, I only skimmed the article and I didn’t see any mention of rewrite until the comments (it was literally the first response to the second comment). Although I do hear that mentality about other large projects (such as Firefox) as well. I guess I should’ve said “Clickbait considered harmful” ;-)

                              I’ve read some more of the article and he seems to know what he’s talking about but I would like to see the original talk.

                              As far as reducing foot guns goes, I guess Linux did start out as just one guy so I can understand a lot of foot shooting, but it’s been years and I would’ve thought that things like VLAs would’ve been avoided in the kernel. Then again, I’ve never worked on a project as large as Linux so I guess I’m not the best judge of such things.
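
                              For anyone wondering why VLAs keep coming up, here’s a rough sketch of the problem (my own illustration, not from the talk): the stack frame grows with a runtime value, so an attacker-controlled size can jump right past the stack guard page.

                              #include <stdio.h>

                              static void handle_request(size_t n) {
                                  if (n == 0)
                                      return;
                                  char buf[n];                      /* VLA: stack usage depends on n at runtime */
                                  snprintf(buf, sizeof buf, "len=%zu", n);
                                  puts(buf);
                              }

                              int main(void) {
                                  handle_request(32);               /* fine */
                                  /* handle_request(1u << 26); */   /* would likely blow the stack */
                                  return 0;
                              }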

                              1. 4

                                Ok, you got me, I only skimmed the article and I didn’t see any mention of rewrite until the comments (it was literally the first response to the second comment). Although I do hear that mentality about other large projects (such as Firefox) as well.

                                Agreed. It’s annoying as hell, and the loud-mouths never do the work.

                                I guess I should’ve said “Clickbait considered harmful” ;-)

                                Funny, because the talk is titled ‘Making C Less Dangerous’ - the LWN reporter is actually responsible for the horrible title that misrepresents the content and invites rewrite talk. I think this is the first time I’m using the lobste.rs ‘suggest a new title’ option to rename the link to ‘Making C Less Dangerous’, overriding the reporter’s chosen title. This is an abstract of the talk, so keep the title close to the content.

                            2. 7

                              Literally 20+ years of unending computer security exploits disagree with you.

                            1. 6

                              He asked: why is there no argument to memcpy() to specify the maximum destination length?

                              That’s the third one.

                              If you really insist, #define safe_memcpy(d, s, dn, sn) memcpy(d, s, min(dn, sn))?
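
                              If anyone wants to play with that, here’s a compilable sketch of the same idea; min() isn’t standard C, so a helper macro is defined, and the names are purely illustrative:

                              #include <stdio.h>
                              #include <string.h>

                              /* Copy at most the smaller of the two sizes so the destination cannot be overrun. */
                              #define MIN_SZ(a, b) ((a) < (b) ? (a) : (b))
                              #define safe_memcpy(d, s, dn, sn) memcpy((d), (s), MIN_SZ((dn), (sn)))

                              int main(void) {
                                  char dst[8] = {0};
                                  const char src[] = "longer than dst";
                                  safe_memcpy(dst, src, sizeof dst, sizeof src);  /* copies only 8 bytes */
                                  dst[sizeof dst - 1] = '\0';                     /* keep it printable */
                                  puts(dst);
                                  return 0;
                              }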

                              1. 4

                                Yeah, also, I don’t understand why they would want that.

                                Imagine calling memcpy(d, 10, s, 15), and having your data not copied entirely, having your d buffer with cropped data. Garbage, essentially. How would that be better?

                                edit: to be clear, I’m not complaining about your suggestion, but about the reasoning of the presenter on this.

                                1. 4

                                  Yeah, also, I don’t understand why they would want that.

                                  Imagine calling memcpy(d, 10, s, 15), and having your data not copied entirely, having your d buffer with cropped data. Garbage, essentially. How would that be better?

                                  Cropped data would be a logic error in your application. With standard memcpy the additional 5 bytes overwrite whatever is in memory after the d buffer. This can even enable an attacker to introduce execution of their own code. That’s why e.g. Microsoft ships a memcpy_s.

                                  Reading materials:
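
                                  For illustration: memcpy_s comes from C11’s optional Annex K, which glibc doesn’t ship, so here’s a rough stand-in for the idea rather than Microsoft’s actual implementation - refuse, and fail closed, when the source doesn’t fit the destination.

                                  #include <errno.h>
                                  #include <string.h>

                                  /* Illustrative only: zero the destination and return an error instead of
                                   * writing past the end of dst, roughly what memcpy_s specifies. */
                                  static int checked_memcpy(void *dst, size_t dst_size, const void *src, size_t n) {
                                      if (dst == NULL)
                                          return EINVAL;
                                      if (src == NULL || n > dst_size) {
                                          memset(dst, 0, dst_size);
                                          return src == NULL ? EINVAL : ERANGE;
                                      }
                                      memcpy(dst, src, n);
                                      return 0;
                                  }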

                                  1. 7

                                    But the unanswered question is why you’re calling memcpy(d, s, 15) instead of memcpy(d, s, 10)? At some level the problem is calling the function with the wrong argument, and adding more arguments maybe doesn’t help.

                                    1. 4

                                      Every security exploit can be drilled down to “why were you doing this!”. If there were an obvious answer, security exploits would be a thing of the past. Meanwhile, advocating harm reduction is as good as we can get, because even if calling memcpy with a smaller destination is wrong to begin with, truncated data still has a better chance of ending up as a non-exploitable crash than a plain old buffer overflow, which often ends up as reliable code exec.

                                      1. 3

                                        But why do we assume this extra parameter is better than the other parameter which we have assumed is incorrect? Why not add another extra parameter? memcpy_reallysafe(dest, src, destsize, srcsize, destsize_forserious, doublechecksize)

                                        1. 3

                                          Because in ten years a line of code can change and the assumptions that made one variable the right one will break. Suddenly you got the wrong variable in there. Personally, I think this is where asserts belong, to codify the assumptions over a long span of time and multiple developers.
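
                                          A tiny sketch of that point (the names are made up): the assert records the “src fits in dst” assumption right next to the copy, so a change years later trips it in testing instead of silently overflowing.

                                          #include <assert.h>
                                          #include <string.h>

                                          struct packet {
                                              char payload[64];
                                          };

                                          static void store_payload(struct packet *p, const char *src, size_t srclen) {
                                              assert(srclen <= sizeof p->payload);   /* the codified assumption */
                                              memcpy(p->payload, src, srclen);
                                          }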

                                          1. 3

                                            A common use case of memcpy is to copy one buffer over another. The way programs are structured, we often end up with srcsize and dstsize that match their buffers. The error comes from the implicit contract that dstsize is always at least as big as srcsize. Sure, good code would ensure this is always true. Actual code has many instances where it is not. Adding dstsize to memcpy means that this contract is now explicit and can be asserted by the actual function that put this contract in place.

                                            I mean, at this point we are not arguing about a hypothetical scenario; we have a whole history of this bug class happening over and over again. Simply keeping track of the semantics (copy one buffer to the other) and asking for all the properties required (the buffers and their sizes) is a low-effort and easy way to prevent many of those bugs.

                                            1. 1

                                              Yeah, keeping track of the buffer size is a very good idea. But if you want it to always be correct, it should be done without requiring the programmer to manually carry the buffer size along in a separate variable from the buffer pointer.

                                              Either something like “Managed C++”, where the allocator data structures are queried to figure out the size of the buffer, or something like Rust slices:

                                              #include <assert.h>
                                              #include <stddef.h>
                                              #include <string.h>

                                              typedef struct {
                                                  char *ptr;
                                                  size_t len;
                                              } slice_t;
                                              /* Narrow a slice to the half-open range [start, end). */
                                              slice_t slice(slice_t slice, size_t start, size_t end) {
                                                  assert(start <= end);
                                                  assert(end <= slice.len);
                                                  slice.ptr += start;
                                                  slice.len = end - start;
                                                  return slice;
                                              }
                                              slice_t slice_front(slice_t slice, size_t start) {
                                                  assert(start <= slice.len);
                                                  slice.ptr += start;
                                                  slice.len -= start;
                                                  return slice;
                                              }
                                              slice_t slice_back(slice_t slice, size_t end) {
                                                  assert(end <= slice.len);
                                                  slice.len = end;
                                                  return slice;
                                              }
                                              void slicecpy(slice_t dest, slice_t src) {
                                                  assert(dest.len == src.len);
                                                  memcpy(dest.ptr, src.ptr, src.len);
                                              }
                                              

                                              The point being to make it harder to mix up which len goes with which ptr, plus providing assert-assisted pointer manipulation in addition to the safe memcpy itself. A safe abstraction needs to account for the entire life cycle of its bounds check, not just the point of use.

                                              Also, this would really, really benefit from templates.

                                1. 4

                                  I tried everything. From Remember the Milk, todoist and a plethora of other online web apps to org-mode, task warrior & a full implementation of GTD using various tools (including a cork board for pinning notes).

                                  The bullet journal approach is the only thing that really worked for me and that’s what I am doing now. The difference is amazing, I have things actually planned a month ahead and regularly executing on them (which was almost never the case in the past).

                                  Software didn’t work for me for two main reasons:

                                  1. It was never handy (split between phone & desktop) and a pain in the ass to capture notes on the go or in a format that the tool didn’t expect.
                                  2. It was too easy to overfill it with tasks and the tools were hiding how much they already had in them (lacking a good overview).

                                  GTD failed with too many items to book keep and it encouraged me to just continue filling in more items to-do.

                                  In contrast with the bullet journal:

                                  1. I have a single capture point - the bujo, got a bag to carry it around and never leave the house without it. A5 is not that much of a hassle to carry around.
                                  2. I stopped overcommitting; when you are migrating tasks daily you really think twice about what is worthy of a place in the journal - that’s the best thing that doing a bujo gave me.
                                  3. It’s a good quick overview and just a single place to check.

                                  If you are having issues with picking a method or a tool, give bullet journalling a try - it worked for me.

                                  1. 3

                                    @Yogthos why invite an obvious spammer?

                                    He mass spammed people on twitter to get an invite:

                                    https://screenshots.firefox.com/nZG3MDpLsp6V90oW/twitter.com

                                    He most likely had this account (discovered by Nei on IRC):

                                    https://lobste.rs/u/niravseo

                                    Can we get some admin action here? @pushcx, @irene, @alynpost

                                    1. 17

                                      I strongly disagree with a CVE tag. If a specific CVE is worth attention it can be submitted under security, most likely with a nice abstract discussing it like the Theo undeadly link or the SSH enumeration issue. Adding a CVE tag will just give a green light to turning lobste.rs into a copy of a CVE database - I see no added value in that.

                                      1. 7

                                        I agree. I think it comes down to the community being very selective about submitting CVEs. The ones that are worth it will either have a technical deep-dive that can be submitted here, or will be so important that we won’t mind a direct link.

                                        1. 2

                                          Although I want to filter them, I agree this could invite more of them. Actually, that’s one of @friendlysock’s own warnings in other threads. The fact that the tags are simultaneously for highlighting and filtering, two contradictory things, might be a weakness of using them vs some alternative. It also might be a fundamentally inescapable aspect of a good design choice. I’m not sure. Probably worth some contemplation.

                                          1. 2

                                            I completely agree with you. I enjoy reading great technical blog posts about people dissecting software and explaining what went wrong and how to mitigate. I want more of that.

                                            I don’t enjoy ratings and CVSS scores. I’d rather not encourage people by blessing it with a tag.

                                          1. 14

                                            I will repeat what I said on twitter here:

                                             Are you vetting the jobs? E.g., ‘Email Security Support Engineer at Cisco Systems’ in Krakow lists email and phone support, asks for Linux experience, and just mentions FreeBSD scripting among a ton of other keywords… Feels like a ‘call center’ job listing, not a ‘bsd job’.

                                             It’s a nice initiative; however, I am a bit sceptical about the sudden number of items that showed up on the listing. Hope this doesn’t turn into a ‘list of jobs containing the bsd keyword’.

                                            1. 3

                                               You are right, right now it’s a keyword-based list. Will fix in the next update :) What do you think would be a good definition of a “BSD job”?

                                              1. 8

                                                 Something that gets you direct, preferably daily exposure to the system. Building or supporting/administering a product that directly runs on a BSD would be fair game imho and would make this more worthwhile. I don’t want to scroll through 100 entries when 80% of them are Linux jobs.

                                                1. 2

                                                  Thanks!

                                            1. 3

                                              Duplicate from 3 days ago: https://lobste.rs/s/qnmrs2/happy_bob_s_libtls_tutorial

                                              Perhaps merge?

                                              1. 1

                                                I marked it as already posted. sorry for the noise.

                                              1. 13

                                                I think I understand where the author’s coming from, but I think some of his concerns are probably a bit misplaced. For example, unless you’ve stripped all the Google off your Android phone (which some people can do), Google can muck with whatever on your phone regardless of how you install Signal. In all other cases, I completely get why Moxie would rather insist you install Signal via a mechanism that ensures updates are efficiently and quickly delivered. While he’s got a point on centralized trust (though a note on that in a second), swapping out Google Play for F-Droid doesn’t help there; you’ve simply switched who you trust. And in all cases of installation, you’re trusting Signal at some point. (Or whatever other encryption software you opt to use, for that matter—even if its something built pretty directly on top of libsodium at the end of the day.)

                                                That all gets back to centralized trust. Unless the author is reading through all the code they’re compiling, they’re trusting some centralized sources—likely whoever built their Android variant and the people who run the F-Droid repositories, at a bare minimum. In that context, I think that trusting Google not to want to muck with Signal is probably honestly a safe bet for most users. Yes, Google could replace your copy of Signal with a nefarious version for their own purposes, but that’d be amazingly dumb: it’d be quickly detected and cause irreparable harm to trust in Google from both users and developers. Chances are honestly higher that you’ll be hacked by some random other app you put on your phone than that Google will opt to go after Signal on their end. Moxie’s point is that you’re better off trusting Signal and Google than some random APK you find on the Internet. And for the overwhelming majority of users, I think he’s entirely correct.

                                                When I think about something like Signal, I usually focus on, who am I attempting to protect myself from? Maybe a skilled user with GPG is more secure than Signal (although that’s arguable; we’ve had quite a few CVEs this year, such as this one), but normal users struggle to get such a setup meaningfully secure. And if you’re just trying to defend against casual snooping and overexcited law enforcement, you’re honestly really well protected out-of-the-box by what Signal does today—and, as Mickens has noted, you’re not going to successfully protect yourself from a motivated nation-state otherwise.

                                                1. 20

                                                  and cause irreparable harm to trust in Google from both users and developers

                                                   You have good points except this common refrain we should all stop repeating. These big companies were caught pulling all kinds of stuff on their users. They usually keep their market share and riches. Google was no different. If this were detected, they’d issue an apologetic press release saying either that it was a mistake in their complex distribution system or that the feature was for police with a warrant and was used accordingly or mistakenly. The situation shifts from “everyone ditch evil Google” to a more complicated one most users won’t take decisive action on. Many wouldn’t even want to think too hard about it or would otherwise assume mass spying at the government or Google level is going on. It’s something they tolerate.

                                                  1. 11

                                                    I think that trusting Google not to want to muck with Signal is probably honestly a safe bet for most users.

                                                    The problem is that moxie could put things in the app if enough rubberhose (or money, or whatever) is applied. I don’t know why this point is frequently overlooked. These things are so complex that nobody could verify that the app in the store isn’t doing anything fishy. There are enough side-channels. Please stop trusting moxie, not because he has done something wrong, but because it is the right thing to do in this case.

                                                     Another problem: Signal’s servers could be compromised, leaking the communication metadata of everyone. That could be fixed with federation, but many people seem to be against federation here, for spurious reasons. That federation & encryption can work together is shown by Matrix, for example. I grant that it is rough around the edges, but at least they try, and for now it looks promising.

                                                    Finally (imho): good crypto is hard, as the math behind it has hard constraints. Sure, the user interfaces could be better in most cases, but some things can’t be changed without weakening the crypto.

                                                    1. 2

                                                      many people seem to be against federation here, for spurious reasons

                                                      Federation seems like a fast path to ossification. It is much harder to change things without disrupting people if there are tons of random servers and clients out there.

                                                      Also, remember how great federation worked out for xmpp/jabber when google embraced and then extinguished it? I sure do.

                                                      1. 2

                                                        Federation seems like a fast path to ossification.

                                                        I have been thinking about this. There are certainly many protocols that are unchangeable at this point but I don’t think it has to be this way.

                                                        Web standards like HTML/CSS/JS and HTTP are still constantly improving despite having thousands of implementations and different programs using them.

                                                        From what I can see, the key to stopping ossification of a protocol is to have a single authority and source of truth for the protocol. They have to be dedicated to making changes to the protocol and they have to change often.

                                                        1. 2

                                                          I think your HTTP example is a good one. I would also add SSL/TLS to that, as another potential useful example to analyze. Both (at some point) had concepts of versioning built into them, which has allowed the implementation to change over time, and cut off the “long tail” non-adopters. You may be on to something with your “single authority” concept too, as both also had (for the most part) relatively centralized committees responsible for their specification.

                                                          I think html/css/js are /perhaps/ a bit of a different case, because they are more documentation formats, and less “living” communication protocols. The fat clients for these have tended to grow in complexity over time, accreting support for nearly all versions. There are also lots of “frozen” documents that people still may want to view, but which are not going to be updated (archival pages, etc). These have also had a bit more of a “de facto” specification, as companies with dominant browser positions have added their own features (iframe, XMLHttpRequest, etc) which were later taken up by others.

                                                        2. 1

                                                          Federation seems like a fast path to ossification. It is much harder to change things without disrupting people if there are tons of random servers and clients out there. Also, remember how great federation worked out for xmpp/jabber when google embraced and then extinguished it? I sure do.

                                                          It may seem so, but that doesn’t mean it will happen. It has happened with xmpp, but xmpp had other problems, too:

                                                          • Not good for mobile use (some years back when messenger apps went big, but mobile connections were bad)
                                                          • A “kind-of-XML”, which was hard to parse (I may be wrong here)
                                                          • Reinventing of the wheel, I’m not sure how many crypto standards there are for xmpp

                                                          Matrix does some things better:

                                                          • Reference server and clients for multiple platforms (electron/web, but at least there is a client for many platforms)
                                                          • Reference crypto library in C (so bindings are easier and no one tries to re-implement it)
                                                          • Relatively simple client protocol (less prone to implementation errors than the streams of xmpp, imho)

                                                           The Google problem you described isn’t inherent to federation. It’s more of a people problem: too many people being too lazy to set up their own instances, just using Google’s, essentially forming a centralized network again.

                                                      2. 10

                                                        Maybe a skilled user with GPG is more secure than Signal

                                                         Only if that skilled user communicates solely with other skilled users. It’s common for people to reply in plaintext, quoting the whole encrypted message…

                                                        1. 3

                                                          And in all cases of installation, you’re trusting Signal at some point.

                                                          Read: F-Droid is for open-source software. No trust necessary. Though to be fair, even then the point on centralization still stands.

                                                          Yes, Google could replace your copy of Signal with a nefarious version for their own purposes, but that’d be amazingly dumb: it’d be quickly detected and cause irreparable harm to trust in Google from both users and developers.

                                                          What makes you certain it would be detected so quickly?

                                                          1. 5

                                                            “Read: F-Droid is for open-source software. No trust necessary”

                                                             That’s nonsense. FOSS can conceal backdoors if nobody is reviewing it. That is often the case. Bug hunters also find piles of vulnerabilities in FOSS just like in proprietary software. People who vet the stuff they use have limits on skill, tools, and time that might make them miss vulnerabilities. Therefore, you absolutely have to trust the people and/or their software even if it’s FOSS.

                                                             The field of high-assurance security was created partly to address being able to certify (trust) systems written by your worst enemy. They achieved many pieces of that goal but new problems still show up. Almost no FOSS is built that way. So, it sure as hell can’t be trusted if you don’t trust those making it. Same with proprietary.

                                                            1. 3

                                                              It’s not nonsense, it’s just not an assurance. Nothing is. Open source, decentralization, and federation are the best we can get. However, I sense you think we can do better, and I’m curious as to what ideas you might have.

                                                              1. 4

                                                                There’s definitely a better method. I wrote it up with roryokane being nice enough to make a better-formatted copy here. Spoiler: none of that shit matters unless the stuff is thoroughly reviewed and proof sent to you by skilled people you can trust. Even if you do that stuff, the core of its security and trustworthiness will still fall on who reviewed it, how, how much, and if they can prove it to you. It comes down to trusting a review process by people you have to trust.

                                                                In a separate document, I described some specifics that were in high-assurance security certifications. They’d be in a future review process since all of them caught or prevented errors, often different ones. Far as assurance techniques, I summarized decades worth of them here. They were empirically proven to work addressing all kinds of problems.

                                                            2. 2

                                                              even then the point on centralization still stands.

                                                              fdroid actually lets you add custom repo sources.

                                                              1. 1

                                                                The argument in favour of F-Droid was twofold, and covered the point about “centralisation.” The author suggested Signal run an F-Droid repo themselves.

                                                            1. 8

                                                              Speaking as a C programmer, this is a great tour of all the worst parts of C. No destructors, no generics, the preprocessor, conditional compilation, check, check, check. It just needs a section on autoconf to round things out.

                                                              It is often easier, and even more correct, to just create a macro which repeats the code for you.

                                                              A macro can be more correct?! This is new to me.

                                                              Perhaps the overhead of the abstract structure is also unacceptable..

                                                              Number of times this is likely to happen to you: exactly zero.

                                                              C function signatures are simple and easy to understand.

                                                               It once took me 3 months of noodling on a simple http server to realize that bind() saves the pointer you pass into it, and so has certain lifetime expectations of it. Not one single piece of documentation I’ve seen in the last 5 years mentions this fact.

                                                              1. 4

                                                                It once took me 3 months of noodling on a simple http server to realize that bind() saves the pointer you pass into it

                                                                Which system? I’m pretty sure OpenBSD doesn’t.

                                                                https://github.com/openbsd/src/blob/4a4dc3ea4c4158dccd297c17b5ac5a6ff2af5515/sys/kern/uipc_syscalls.c#L200

                                                                https://github.com/openbsd/src/blob/4a4dc3ea4c4158dccd297c17b5ac5a6ff2af5515/sys/kern/uipc_syscalls.c#L1156
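
                                                               For what it’s worth, the textbook usage pattern also assumes no retention: the sockaddr is a stack local that goes out of scope right after the call, and the kernel copies it during the syscall. A minimal sketch, not tied to any particular system:

                                                               #include <arpa/inet.h>
                                                               #include <netinet/in.h>
                                                               #include <stdint.h>
                                                               #include <string.h>
                                                               #include <sys/socket.h>
                                                               #include <unistd.h>

                                                               /* Bind a TCP socket to a port; addr lives on the stack and is never
                                                                * referenced again after bind() returns. */
                                                               int listen_on(uint16_t port) {
                                                                   int fd = socket(AF_INET, SOCK_STREAM, 0);
                                                                   if (fd < 0)
                                                                       return -1;
                                                                   struct sockaddr_in addr;
                                                                   memset(&addr, 0, sizeof addr);
                                                                   addr.sin_family = AF_INET;
                                                                   addr.sin_addr.s_addr = htonl(INADDR_ANY);
                                                                   addr.sin_port = htons(port);
                                                                   if (bind(fd, (struct sockaddr *)&addr, sizeof addr) < 0) {
                                                                       close(fd);
                                                                       return -1;
                                                                   }
                                                                   return fd;
                                                               }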

                                                                1. 2

                                                                  Linux (that’s the manpage I linked to above). This was before I discovered OpenBSD.

                                                                  Edit: I may be misremembering and maybe it was connect() that was the problem. It too seems fine on OpenBSD. Here’s my original eureka moment from 2011: https://github.com/akkartik/wart/commit/43366d75fbfe1. I know it’s not specific to that project because @smalina and I tried it again with a simple C program in 2016. Again on Linux.

                                                                    1. 1

                                                                      Notice that I didn’t implicate the kernel in my original comment, I responded to a statement about C signatures. We’d need to dig into libc for this, I think.

                                                                      I’ll dig up a simple test program later today.

                                                                      1. 2

                                                                        Notice that I didn’t implicate the kernel in my original comment, I responded to a statement about C signatures. We’d need to dig into libc for this, I think.

                                                                        bind and connect are syscalls, libc would only have a stub doing the syscall if anything at all since they are not part of the standard library.

                                                                2. 2

                                                                  Perhaps the overhead of the abstract structure is also unacceptable..

                                                                  Number of times this is likely to happen to you: exactly zero.

                                                                  I have to worry about my embedded C code being too big for the stack as it is.

                                                                  1. 1

                                                                    Certainly. But is the author concerned with embedded programming? He seems to be speaking of “systems programming” in general.

                                                                    Also, I interpreted that section as being about time overhead (since he’s talking about the optimizer eliminating it). Even in embedded situations, have you lately found the time overheads concerning?

                                                                    1. 5

                                                                      I work with 8-bit AVR MCUs. I often found myself having to cut corners and avoid certain abstractions, because that would have resulted either in larger or slower binaries, or would have used significantly more RAM. On an Atmega32U4, resources are very limited.

                                                                  2. 1

                                                                    Perhaps the overhead of the abstract structure is also unacceptable..

                                                                    Number of times this is likely to happen to you: exactly zero.

                                                                    Many times, actually. I see FSM_TIME. Hmm … seconds? Milliseconds? No indication of the unit. And what is FSM_TIME? Oh … it’s SYS_TIME. How cute. How is that defined? Oh, it depends upon operating system and the program being compiled. Lovely abstraction there. And I’m still trying to figure out the whole FSM abstraction (which stands for “Finite State Machine”). It’s bad enough to see a function written as:

                                                                    static FSM_STATE(state_foobar)
                                                                    {
                                                                    ...
                                                                    }
                                                                    

                                                                    and then wondering where the hell the variable context is defined! (a clue—it’s in the FSM_STATE() macro).
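
                                                                     For readers who haven’t hit this pattern, here’s a hypothetical reconstruction of what such a macro might look like (not the actual codebase being described): the parameter named context only exists inside FSM_STATE(), so nothing in state_foobar() tells you where it comes from.

                                                                     typedef struct fsm_context fsm_context;   /* hypothetical context type */

                                                                     /* The macro hides the whole function signature, including the context parameter. */
                                                                     #define FSM_STATE(name) void name(fsm_context *context)

                                                                     static FSM_STATE(state_foobar)
                                                                     {
                                                                         /* `context` appears out of nowhere unless you expand the macro */
                                                                         (void)context;
                                                                     }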

                                                                     And that bind() issue is really puzzling, since that hasn’t been my experience at all, and I work with Linux, Solaris, and Mac OS X currently.

                                                                    1. 1

                                                                      I agree that excessive abstractions can hinder understanding. I’ve said this before myself: https://news.ycombinator.com/item?id=13570092. But OP is talking about performance overhead.

                                                                      I’m still trying to reproduce the bind() issue. Of course when I want it to fail it doesn’t.

                                                                  1. 5

                                                                     Kinda surprised that reddit - a site which hosts some of the rougher parts of the internet - didn’t have a Head of Security until 2.5 months ago.

                                                                    1. 8

                                                                      Their headcount has always been kinda small I think? You need to hit a certain size before carving out a specific position.

                                                                      1. 7

                                                                         “Kinda small” is ~250 people. They have data on 330 million users.

                                                                         I wouldn’t tie the position directly to headcount; the question is how much of a security need you have.

                                                                        1. 2

                                                                          They seemed to have done pretty well for a long time without having one though.

                                                                          1. 1

                                                                            Did they? How do you know there weren’t previous leaks/breaches that simply went undetected?

                                                                            1. 2

                                                                              That’s probably not a good way to measure it, but maybe the number of posts like this? But that’s true.

                                                                              1. 3

                                                                                 My point is, they could have been regularly infiltrated for years and only noticed now thanks to new talent in house. There’s only so much a jack-of-all-trades team can do while firefighting all the needs.

                                                                                1. 1

                                                                                  I’ll add to mulander’s hypothetical that this happened in all kinds of big companies with significant investments in security. They were breached for years without knowing they were compromised. They started calling them “APT’s” as a PR move to reduce humiliation. It was often vanilla attacks or combos of those with some methods to bypass monitoring that companies either didn’t have or really under-invested in. Reddit could be one if they had little invested in security or especially intrusion detection/response.

                                                                        2. 3

                                                                           Because reddit is not hosting financial data or (for the most part) deeply personal data that is not already out in the open, I would assume that they are not that interesting a target for hackers looking for financial gain, but more interesting for script kiddies who are looking to dox or harass other users.

                                                                          1. 5

                                                                            Many subreddits host content and discussions that people don’t want to be attached to. The post even appreciates that and recommends deletion of those posts.

                                                                             I find it telling that you go out of your way to push people interested in gaining personal data into the script-kiddie corner. Yes, SMS-based attacks are in the range of “a script kiddie could do that”, which makes it even worse.

                                                                            1. 2

                                                                               Criminals are using this type of information for targeted extortion and other activities. The general view that this is mostly the realm of “script kiddies” detracts from the seriousness and provides good cover for their activities.

                                                                              1. 1

                                                                                I made an assumption, but reading your reply and that of @skade you are right that there are lots of uses for the data from a criminal perspective, especially for a site the size of reddit.

                                                                          1. 2

                                                                            Why do they even still have backups from 2007 in this post-GDPR world? They have no authority to retain that data, surely.

                                                                            1. 3

                                                                               I was thinking exactly this when I read about the breach. Backups from 2017 maybe, but almost ten-year-old backups are useless, right?

                                                                              1. 4

                                                                                Maybe it was a seed for a staging/testing system? It’s not uncommon for many places to flop around a data seed for developers - usually they would be anonymized but that’s not always the case in all places.

                                                                                1. 3

                                                                                  It’s not too surprising to me. When changing over to a new system, it’s fairly common to dump the old pile of spaghetti into an archive labeled Someone Sort This Mess Out Later, if you aren’t 100% sure that it doesn’t still have something important in it that needs to be ported over to the new system. Naturally, nobody ever gets around to sorting through it.

                                                                              1. 22

                                                                                After writing Go for 5 years, I’d recommend Rust for C developers. It’s more complicated than Go for sure, but also has more to offer. The lack of garbage collection and the support for generics are definitely a plus compared to Go.

                                                                                Go is a better language for junior devs, but I wouldn’t call C programmers junior. They should be able to digest Rust’s complexity.

                                                                                1. 9

                                                                                  They should be able to digest Rust’s complexity.

                                                                                  A non-trivial number of C programmers are still doing C to avoid additional complexity. Not everyone wants a kitchen-sink programming language.

                                                                                  1. 6

                                                                                    Rust can definitely get overly complex if the developers show no restraint (e.g. type golf), but the control afforded by manual memory management makes up for it, IMHO. Unless it’s a one-run project, performance will eventually matter, and fixing bad allocation practices after the fact is a lot harder than doing it right from the beginning.

                                                                                    1. 1

                                                                                      Couldn’t they just start with a C-like subset of Rust and add whatever extra features they like to their arsenal from there? It’s what I was going to recommend to those trying it for safety-critical use, since they likely already know C.

                                                                                      1. 9

                                                                                        I think it’s rather difficult to write rust in a C like manner. This contrasts with go, where you can basically write C code and move the type declarations around and end up with somewhat unidiomatic but working go.

                                                                                        1. 3

                                                                                          I think C++ as a better C works because you still have libc besides the STL, etc. The Rust standard library uses generics, traits, etc. quite heavily and type parameters and lifetime parameters tend to percolate to downstream users.

                                                                                          Though I think a lot of the value in Rust is in concepts that may initially add some complexity, such as the borrow checker rules.

                                                                                          1. 3

                                                                                            The problem with C++ is its complexity at the language level. I have little hope of teams of people porting to it the various tools for static analysis, verification, and refactoring that C and Java already have. Certifying compilers either. C itself is a rough language, but smaller. The massive bandwagon behind it caused lots of tooling to be built, especially FOSS. So for low-level stuff I now push for either safer C or something that ties into C’s ecosystem.

                                                                                          2. 4

                                                                                            You could argue the same for C++ (start with C and add extra features). Complexity comes with the whole ecosystem, from platform support (OS, arch) and compiler complexity (and hence subtle differences in feature implementations) to the language itself (C++ templates, Rust macros). It’s challenging to limit oneself to a very specific subset on a single-person project; it’s exponentially harder for larger teams to agree on a subset and adhere to it. I guess I just want a safer C, not a new C++ replacement, which seems to be the target for newer languages (like D & Rust).

                                                                                            1. 4

                                                                                              It’s challenging to limit oneself to a very specific subset on a single-person project; it’s exponentially harder for larger teams to agree on a subset and adhere to it.

                                                                                              I see your overall point. It could be tricky. It would probably stay niche. I will note that, in the C and Java worlds, there’s tools that check source code for compliance with coding standards. That could work for a Rust subset as well.

                                                                                              “I guess I just want a safer C not a new C++ replacement which seems to be the target for newer languages (like D & Rust).”

                                                                                              I can’t remember if I asked you what you thought about Cyclone. So, I’m curious about that plus what you or other C programmers would change about such a proposal.

                                                                                              I was thinking of something like it with Rust’s affine types, and/or reference counting for when borrow-checking sucks too much and the performance hit is acceptable. Also, unsafe stuff if necessary, with the module prefixed as such like Wirth would do. Some kind of module system or linking types to avoid linker errors, too. Seamless use of existing C libraries. Then, an interpreter or REPL for the productivity boost. It extracts to C to use its optimizing and certifying compilers. I’m unsure what I’d default to for error handling and concurrency. A first round at error handling might be error codes, since I saw a design for statically checking their correct usage.

                                                                                              1. 3

                                                                                                I can’t remember if I asked you what you thought about Cyclone. So, I’m curious about that plus what you or other C programmers would change about such a proposal.

                                                                                                I looked at it in the past and it felt like a language built on top of C similar to what a checker tool with annotations would do. It felt geared too much towards research versus use and the site itself states:

                                                                                                Cyclone is no longer supported; the core research project has finished and the developers have moved on to other things. (Several of Cyclone’s ideas have made their way into Rust.) Cyclone’s code can be made to work with some effort, but it will not build out of the box on modern (64 bit) platforms.

                                                                                                However if I had to change Cyclone I would at least drop exceptions from it.

                                                                                                I am keeping an eye on zig and that’s the closest to how I imagine a potentially successful C replacement - assuming it picks up enough community drive and gets some people developing interesting software with it.

                                                                                                That’s something Go had nailed down really well. The whole standard library (especially their crypto and http libs) being implemented from scratch in Go instead of being bindings was a strong value signal.

                                                                                                1. 2

                                                                                                  re dropping exceptions. Dropping exceptions makes sense. Is there another way of error handling that’s safer or better than C’s that you think might be adoptable in a new, C-like language?

                                                                                                  re Zig. It’s an interesting language. I’m watching it at a distance for ideas.

                                                                                                  re standard library of X in X. Yeah, I agree. I’ve been noticing that pattern with Myrddin, too. They’ve been doing a lot within the language despite how new it is.

                                                                                                  1. 4

                                                                                                    Dropping exceptions makes sense. Is there another way of error handling that’s safer or better than C’s that you think might be adoptable in a new, C-like language?

                                                                                                    Yes, I think Zig actually does that pretty well: https://andrewkelley.me/post/intro-to-zig.html#error-type

                                                                                                    edit: snippet from the zig homepage:

                                                                                                    A fresh take on error handling that resembles what well-written C error handling looks like, minus the boilerplate and verbosity.

                                                                                                    1. 2

                                                                                                      Thanks for the link and tips!

                                                                                        2. 7

                                                                                          Short build/edit/run cycles are appreciated by junior and senior developers alike. Go currently has superior compilation times.

                                                                                          1. 10

                                                                                            Junior and senior developers also enjoy language features such as map, reduce, filter, and generics. Not to mention deterministic memory allocation, soft realtime, forced error checking, zero-cost abstractions, and (of course) memory safety.

                                                                                            1. 3

                                                                                              Junior and senior developers also enjoy language features such as map, reduce, filter, and generics.

                                                                                              Those are great!

                                                                                              deterministic memory allocation, soft realtime, forced error checking, zero-cost abstractions, and (of course) memory safety.

                                                                                              Where are you finding juniors who care about this stuff? (no, really - I would like to know what kind of education got them there).

                                                                                              1. 8

                                                                                                I cared about those things, as a junior. I am not sure why juniors wouldn’t care, although I suppose it depends on what kind of software they’re interested in writing. It’s hard to get away with not caring, for a lot of things. Regarding education, I am self-taught, FWIW.

                                                                                              2. 1

                                                                                                Map, reduce and filter are easily implemented in Go. Managing memory manually, while keeping the GC running, is fully possible. Turning off the GC is also possible. Soft realtime is achievable, depending on your definition of soft realtime.
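
                                                                                                 A minimal sketch of the kind of control being referred to here, using only the standard runtime/debug package (the buffer size is just an illustrative placeholder):

                                                                                                   package main

                                                                                                   import (
                                                                                                       "fmt"
                                                                                                       "runtime/debug"
                                                                                                   )

                                                                                                   func main() {
                                                                                                       // Disable the collector entirely (equivalent to running with GOGC=off);
                                                                                                       // the heap then grows until GC is re-enabled or runtime.GC() is called explicitly.
                                                                                                       old := debug.SetGCPercent(-1)
                                                                                                       defer debug.SetGCPercent(old)

                                                                                                       // Preallocate buffers up front so the steady state performs no new allocations.
                                                                                                       buf := make([]byte, 0, 1<<20)
                                                                                                       fmt.Println(cap(buf))
                                                                                                   }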

                                                                                                1. 1

                                                                                                  Map, reduce and filter are easily implemented in Go

                                                                                                  How? Type safe versions of these, that is, without interface{} and hacky codegen solutions?

                                                                                                  1. 1

                                                                                                    Here are typesafe examples for Map, Filter etc: https://gobyexample.com/collection-functions

                                                                                                    Implementing one Map function per type is often good enough. There is some duplication of code, but the required functionality is present. There are many theoretical needs that don’t always show up in practice.

                                                                                                    Also, using go generate (which comes with the compiler), generic versions are achievable too. For example like this: https://github.com/kulshekhar/fungen
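
                                                                                                     For readers less familiar with pre-generics Go, a minimal sketch of the per-type approach described above; the function name is illustrative:

                                                                                                       // mapInts is a monomorphic map for int slices; a mapStrings, mapUsers, etc.
                                                                                                       // would have to be written (or generated) separately for each element type.
                                                                                                       func mapInts(in []int, f func(int) int) []int {
                                                                                                           out := make([]int, len(in))
                                                                                                           for i, v := range in {
                                                                                                               out[i] = f(v)
                                                                                                           }
                                                                                                           return out
                                                                                                       }

                                                                                                       // Usage: mapInts([]int{1, 2, 3}, func(n int) int { return n * 2 }) returns [2 4 6].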

                                                                                                    1. 9

                                                                                                      When people say “type safe map/filter/reduce/fold” or “map, reduce, filter, and generics” they are generally referring to the ability to define those functions in a way that is polymorphic, type safe, transparently handled by the compiler and doesn’t add runtime overhead compared to their monomorphic analogs.

                                                                                                      Whether you believe such facilities are useful or not is a completely different and orthogonal question. But no, they are certainly not achievable in Go and this is not a controversial claim. It is by design.
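
                                                                                                       For contrast, a minimal sketch of the closest pre-generics Go gets without code generation - an interface{}-based version that compiles fine but gives up static type checking at the call site (names are illustrative):

                                                                                                         // mapAny accepts any element type only by erasing it: callers must convert
                                                                                                         // their slices to []interface{} and type-assert every element and result.
                                                                                                         func mapAny(in []interface{}, f func(interface{}) interface{}) []interface{} {
                                                                                                             out := make([]interface{}, len(in))
                                                                                                             for i, v := range in {
                                                                                                                 out[i] = f(v)
                                                                                                             }
                                                                                                             return out
                                                                                                         }

                                                                                                       A caller doubling ints has to write f as func(v interface{}) interface{} { return v.(int) * 2 }, and passing a non-int element only fails when that assertion panics at runtime.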

                                                                                                      1. 1

                                                                                                        Yes, I agree, Go does not have the combination of type safety and generics, unless you consider code generation.

                                                                                                        The implementation of generics in C++ also works by generating the code per required type.

                                                                                                        1. 5

                                                                                                          The implementation of generics in C++ also works by generating the code per required type.

                                                                                                           But they are not really comparable. In C++, when a library defines a generic type or function, it will work with any conforming data type. Since the Go compiler does not know about generics, with go generate one can only generate ‘monomorphized’ types for a set of predefined data types that are defined in an upstream package. If you want different monomorphized types, you have to import the generic definitions and run go generate for your specific types.
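
                                                                                                           To make that workflow concrete, here is a hypothetical sketch of the downstream side; the generator name and flags below are made up for illustration and are not a real CLI:

                                                                                                             package users

                                                                                                             // Each downstream package has to request its own monomorphized copy;
                                                                                                             // the //go:generate directive is real Go tooling, but "mygen" and its
                                                                                                             // flags are placeholders for whatever generator is actually used.
                                                                                                             //go:generate mygen -type=User -out=user_map_gen.go

                                                                                                             type User struct {
                                                                                                                 Name string
                                                                                                             }

                                                                                                             // The generated file would then contain a concrete
                                                                                                             // mapUsers(in []User, f func(User) User) []User for this one type.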

                                                                                                          unless you consider code generation

                                                                                                          By that definition, any language is a generic language, there’s always Bourne shell/make/sed for code generation ;).

                                                                                                          1. 1

                                                                                                            That is true, and I agree that go does not have support for proper generics and that this can be a problem when creating libraries.

                                                                                                          2. 3

                                                                                                            That’s why I said “transparently handled by the compiler.” ;-)

                                                                                                            1. 0

                                                                                                              I see your point, but “go generate” is provided by the go compiler, by default. I guess it doesn’t qualify as transparent since you have to type “go generate” or place that command in a build file of some sort?

                                                                                                              1. 1

                                                                                                                Yes. And for the reasons mentioned by @iswrong.

                                                                                                               My larger point here really isn’t a technicality. My point is that communication is hard and not everyone spells out every point in precise detail, but it’s usually possible to infer the meaning based on context.

                                                                                                                1. -1

                                                                                                                 I think the even larger point is that for a wide range of applications, “proper” and “transparent” generics might not even be needed in the first place. It would help, yes, but the Go community currently thrives without it, with no lack of results to show for it.

                                                                                                                  1. 1

                                                                                                                    I mean, I’ve written Go code nearly daily since before it was 1.0. I don’t need to argue with you about whether generics are “needed,” which is a pretty slimy way to phrase this.

                                                                                                                    Seems to me like you’re trying to pick a fight. I already said upthread that the description of generics is different from the desire for them.

                                                                                                                    1. -2

                                                                                                                      You were the first to change the subject to you and me instead of sticking to the topic at hand. Downvoting as troll.

                                                                                                2. 1

                                                                                                  By superior, I guess you meant shorter?

                                                                                                  1. 2

                                                                                                    Compiling a very large go project with a cold cache might take a minute (sub-second once the cache is warm).

                                                                                                    Compiling a fairly small rust app with a warm cache has taken me over a minute (I think it’s a little better than that now).

                                                                                                    1. 1

                                                                                                     Yes, and superior to Rust in that regard. Also, the strict requirement to not have unused dependencies helps counteract dependency rot in larger projects.
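
                                                                                                     As a small example of that strictness, an unused import is a compile error in Go, not a warning:

                                                                                                       package main

                                                                                                       import (
                                                                                                           "fmt"
                                                                                                           // Uncommenting the next line makes `go build` fail with an
                                                                                                           // "imported and not used" error - there is no flag to downgrade it.
                                                                                                           // "os"
                                                                                                       )

                                                                                                       func main() {
                                                                                                           fmt.Println("unused imports do not compile in Go")
                                                                                                       }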

                                                                                                1. 18

                                                                                                  I suppose I know why, but I hate that D is always left out of discussions like this.

                                                                                                  1. 9

                                                                                                    and Ada, heck D has it easy compared to Ada :)

                                                                                                    1. 5

                                                                                                      Don’t forget Nim!

                                                                                                    2. 3

                                                                                                      Yeah, me too. I really love D. Its metaprogramming alone is worth it.

                                                                                                      For example, here is a compile-time parser generator:

                                                                                                      https://github.com/PhilippeSigaud/Pegged

                                                                                                      1. 4

                                                                                                        This is a good point. I had to edit out a part about how a language without major adoption is less suitable, since it may not get the resources it needs to stay current on all platforms. You could have the perfect language, but if it somehow failed to gain momentum, it turns into somewhat of a risk anyhow.

                                                                                                        1. 4

                                                                                                          That’s true. If I were running a software team and were picking a language, I’d pick one that appeared to have some staying power. With all that said, though, I very much believe D has that.

                                                                                                        2. 3

                                                                                                          And OCaml!

                                                                                                          1. 10

                                                                                                            In my opinion, until OCaml gets rid of its GIL, which they are working on, I don’t think it belongs in this category. A major selling point of Go, D, and Rust is their ability to easily do concurrency.

                                                                                                            1. 6

                                                                                                              Both https://github.com/janestreet/async and https://github.com/ocsigen/lwt allow concurrent programming in OCaml. Parallelism is what you’re talking about, and I think there are plenty of domains where single process parallelism is not very important.

                                                                                                              1. 2

                                                                                                                You are right. There is Multicore OCaml, though: https://github.com/ocamllabs/ocaml-multicore

                                                                                                            2. 1

                                                                                                              I’ve always just written off D because of the problems with what parts of the compiler are and are not FOSS. Maybe it’s more straightforward now, but it’s not something I’m incredibly interested in investigating, and I suspect I’m not the only one.

                                                                                                              1. 14
                                                                                                            1. 1

                                                                                                              Has anyone seen what the other two packages mentioned in the email are/were?

                                                                                                              (Seems even if they were accidentally installed by someone they won’t do any harm, but seems odd not to name them so people can check.)

                                                                                                              1. 3

                                                                                                                I found someone on reddit mentioning balz and minergate as the other two packages.

                                                                                                                1. 1

                                                                                                                  Thanks!