1. 2

    One possible strategy is to design it to fail during testing on anything not in the contract. Obviously you want to be cautious about whether and how you do that at all; I just remember some people using it. Netflix’s Simian Army and other fault injection in distributed systems are examples where reliable, ordered messaging might be assumed in code despite there being no contract or implementation for it.

    1. 4

      It’s hard to test for stuff that’s not in the contract but is in the code. Let me tell you a story of something that happened to me yesterday.

      I have a script that updates Mu on my students’ servers. However, my students sometimes modify Mu themselves (they are encouraged to do so). So the way my script works is that it runs rsync -n (dry run), shows me what would happen, then asks me for confirmation that all is well before performing a real rsync. If I see changes from my students, I cancel the operation and perform a more manual merge.

      This workflow has been pretty much unchanged for a couple of years. The only change was that I switched the script to use /bin/sh a year or so ago, as part of a recent kick to minimize dependencies. And everything continued to work fine. A month ago I upgraded Ubuntu on my machine to 16.04, and yesterday (teaching has been slow recently because life) I ran my script for the first time after the upgrade – and it ran the rsync without prompting me at all. A little digging showed that my approach of waiting for a prompt by running just read was a violation of Posix, and on Ubuntu Xenial /bin/sh now hews closer to the letter of Posix, raising this error:

      read: arg count
      

      read needs to be passed a variable to save the input in, or you can use read _ to discard the input.

      I’m not sure what the lesson is here. My bias is to think contracts are shit, because people don’t read contracts. But maybe this is a learning experience to change my mind.

      1. 1

        Remember that contracts a la DbC in a language with good tooling can be runtime checks or can generate tests to ensure they’re being honored. People can ignore the contracts, but the check you leave in won’t ignore the bad input.
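        As a minimal sketch of the runtime-check flavor (hypothetical function, not tied to any particular DbC tooling), the precondition lives in the code as a check rather than only in the docs:

        #include <cassert>
        #include <cstddef>

        // Contract: buf must not be null and n must be positive.
        // A caller can skip reading the docs, but this check still runs
        // (in builds where NDEBUG is not defined).
        int sum_first_n( const int * buf, size_t n ) {
            assert( buf != nullptr && n > 0 );
            int total = 0;
            for( size_t i = 0; i < n; i++ ) total += buf[ i ];
            return total;
        }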

        Some things, especially in build systems, need human review to catch, though. What you described seems to be a side effect of the UNIX style of composing programs without contracts, in a mix of unsafe and informal languages. The problems that came from that approach are why things like contracts were invented and deployed.

        1. 1

          Oh I see, by “contract” you mean “formal contract”. I think the point of OP is that it’s impossible to enumerate the “intended contract” in all particulars as a “formal contract”. Because if you could, you’d just make those scenarios well-defined.

          More rigorous and formal languages will make these corner cases rarer, but they don’t actually obsolete OP. They just push up the “sufficient number of users”.

          1. 1

            Hmm, on reflection my comment is bad. OP conflates Hyrum’s law with XKCD 1172 and I was mindlessly following along, but really they’re separate scenarios. The XKCD is about unintended uses of a piece of code. Hyrum’s law is about intended divergence between code and some spec. Between the two Hyrum’s law is actually easier. If you have a spec it is possible in principle to catch violations as you surmise. There’s just the question of what the costs are. The XKCD is however about violations you didn’t even know you cared about. Hyrum’s law is about known unknowns. XKCD 1172 is about unknown unknowns.

            1. 1

              “Oh I see, by “contract” you mean “formal contract”.”

              I mean both depending on what we’re talking about. It was originally API but can also be formal contracts. The thing is that unspecified or poorly-specified things won’t be checked by default. So, you gotta mandate they get checked, make them fail in ways that nobody relies on them versus correct behavior, or deal with them plus what people are doing yourself. These are a few possibilities that come to mind.

              1. 1

                I wrote a DNS packet decoder library and to ensure safety, I check every bit of the incoming packet. There’s one bit left undefined (no RFC defines it as far as I know), and if it’s not 0, I reject the packet. Am I too intolerant of the DNS packet contract?
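                For concreteness, the check is roughly this shape (a simplified sketch, not the library’s actual code, and it assumes the undefined bit in question is the Z bit left between RA and AD once RFC 2535 defined AD and CD):

                #include <cstdint>
                #include <cstddef>

                // Reject a packet whose reserved header bit is set. Flag byte 3 of the
                // 12-octet header is RA | Z | AD | CD | RCODE (RFC 1035 4.1.1, RFC 2535).
                bool header_ok( const uint8_t * pkt, size_t len ) {
                    if( len < 12 ) return false;         // too short to even hold a header
                    if( pkt[ 3 ] & 0x40 ) return false;  // the reserved Z bit must be zero
                    return true;
                }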

                1. 1

                  I can’t give you a right answer on how to handle stuff in Internet protocols, given that what users will demand and what implementations will do is so far out of most of our control. Internet standards are a much bigger problem than an API for your personal project or commercial product. I will tell you some things that came to mind reading the question:

                  1. The stance I have on APIs is pro formal specification. That’s because just formally specifying things has caught problems. Comparing the output of random implementations against executable, formal specifications also caught problems. In your article, you report that your code that follows the spec… which leans toward an executable specification… caught problems in other implementations. That’s a valuable thing that matches the prediction.

                  2. There’s a middle ground that says you can accept something but log it to analyze the situation further. What you find might lead to changes in your spec, if not the official one. Also, maybe run it with more checks on, or in isolation, as your protocol engine goes through it. In micro and separation kernel schemes, there’s often simple functionality that’s trusted due to its well-vetted spec/implementation. Everything else gets re-routed through user-mode components whose output is validated for sanity. Any explosions the unusual features cause will be contained.

                  3. Postel’s law. If your app needs to integrate well with third parties’ apps, then you might need to accept any crap the products around you do, just to succeed financially or with uptake if it’s FOSS. Alternatively, your solution might be replacing another one that users have built a lot of code on, code that expects a bit of the original’s out-of-spec behavior. For FOSS, the struggles of OpenOffice to make headway in a Microsoft world designed for lock-in to existing doc files are a perfect example. On the protocol side, another approach is comparing the behavior of your protocol implementation against many others, either to create a superset spec containing all of it or a series of profiles the user can select based on what their internal environment expects. People without legacy baggage get your most robust version, with others taking on as much risk as they already chose to.

                  So, those are the three things that come to mind reading your question followed by your article. On the DNS side, I think people commonly try to copy whatever behavior is accepted by the most popular clients with the least trouble from middleboxes. I have no idea what that set is, though. I’d imagine it changes over time as well, so you’d probably have to run tests constantly, with help from other vendors or their customers.

      1. 2

        I do know the distraction problem, but is it really necessary to solve it with a program? Can’t you just pull the Ethernet cable and/or disconnect from wifi? After all, the main cause of distraction is notifications from all kinds of programs, and they are effectively removed by that. If you need the documentation for your programming language or library, there’s often a way to download it and read it offline, and some languages offer documentation via the command line (like Ruby’s ri). I often make use of this when travelling by train, where Internet tends to be wonky.

        Side note: For distractionless writing of anything that is not source code, I have – not kidding – switched to a physical, mechanical typewriter. It’s a relief. No distraction, just you, the keys, and the paper. For source code that doesn’t work due to lack of special keys, but otherwise it’s great if you really want to concentrate on a specific topic and just write.

        1. 2

          The all or none approach is a bit more difficult when you have actual work that needs to happen on GitHub or some remote server.

          Though I’ve been a bit more honest about the ratio of that kind of work. Most work can happen offline

          (I have a Pomera DM200 for typing up paragraphs in a concentrated way. Not perfect but good enough for my needs)

          1. 1

            can you recommend a good typewriter that’s old enough to be simple and durable, but also new enough to be easy to use?

            1. 2

              Um … any manual typewriter that still works? They’re not complicated to use …

              1. 1

                I have helped in courses on C programming, and when I explained the difference in newlines between Linux and Windows, I used to bring along a typewriter so the students could see what a “Carriage Return” really is. Of course, I left the typewriter available for playing around with, and from that I can definitely tell you that a typewriter is not self-explanatory anymore. By far the most common question I was asked was:

                “How do I advance to a new line?”

                It’s not obvious if you’re used to having an Enter key.

              2. 2

                Any mechanical typewriter that was manufactured before 1970 should be good and durable; after that date quality appears to decline, and for electric/electronic typewriters it’s much more difficult to repair things if they break. I’m happily typing on a 1960s Olympia SM9, but really, take a look at eBay, your local antiques shop or similar and just buy one that looks nice to you. You probably shouldn’t start with a “Standard” (i.e. full-size) typewriter, because they’re very heavy and hard to sell again. Don’t worry if the ribbon is dried out; it’s easy to get replacement ribbons, e.g. on Amazon. If everything else works, the typewriter is probably fine.

                Also, don’t go with extremely old models (pre 1920) if you want to actually use them and not just look at them.

                If you handle your machine with care (never clean it with WD40!), it will last for decades more, as these machines already have. Apart from my Olympia I have a completely functional 1930s Continental typewriter that I occasionally type on, but it requires much more pressure on the keys, which I find uncomfortable.

                Lobsters isn’t a typewriter community (yet), so I think that should cover it. You might want to register at a typewriter forum like this one if you have further questions; they can be answered there much better, and by more competent people than me.

            1. 25

              Original linker here.

              I think it’s ridiculous. Literally every link aggregator and forum that has NSFW/“sensitive” tagging quickly realizes that nobody defines it the same and they should have been more specific!

              If you want to filter out anything having to do with sex, then have a sex tag. Same goes for graphic/gory images. Also, we should differentiate between the word “sex” in writing and photos of people having sex, so we’ll want a sexual imagery tag (and then, to be fair to the weebs, hentai, yaoi, yuri, futa and the rest so they can still see just the types they approve of). Plus a tag or, better yet, trigger warnings for my acute trypophobia. And then a tag for profanity because I might have children walk behind me while I’m at a bus stop and I don’t want some kid picking up new words because of me. Oh, and tag anything mentioning my employer’s competitor, because I don’t want to be caught with their logo big as day on my screen when my boss walks by. Plus any posts linking to Linux newsgroups will need a threats of physical violence tag because of that truculent Finn!

              Or we can just remain a technology-focused link aggregator and flag+remove anything off-topic and leave the things that are reasonable for that site description, even if they have the horrible, no-good, very bad word “sex”.

              1. 8

                Please don’t go overboard. This response drags in a lot of unrelated things that didn’t happen here, and we generally react to actual issues. Many of the things you describe have not happened, so there’s no use bringing them into the discussion. For example, no one mentioned trigger warnings; you’re the one introducing them. (We can have a trigger warning discussion elsewhere; I find them useful for $reasons, but have had no practical need here.)

                As useless as I find an NSFW or sensitive tag, keeping the discussion at a serious and constrained level is also important. It’s a valid point to raise; please don’t make it seem like it’s not.

                My stance on the issue is that your title made it sufficiently clear what the topic of the linked post is.

                1. 12

                  This is not about what any of us may think—it’s about what our respective employers may think, and I’m pretty sure they are, with few exceptions, pretty conservative on the issue.

                  I don’t think your slippery slope is very compelling. What’s being proposed is a single tag to broadly indicate to employed lobsters—most of us, by all indications—that a given story could generate awkward conversations with one’s boss. I think it’s pretty clear what “NSFW” means, and objective criteria aren’t required—the suggestion mechanism will handle edge cases just fine.

                  1. 5

                    Yep. So, I look at aggregators on my phone so nobody can see any stuff that pops up. Few workplaces would ban smartphones but allow people to goof off on computers. Seems like it’s easy to solve for people worrying about it. Plus, I don’t force others to put work into meeting preferences that came with the job I chose.

                    I don’t object to an nsfw tag, though. It’s pretty common practice on social media. I’m for courtesy. I’m just also for realism. People concerned about a web page getting them fired should take precautions, because this is random people on the Internet posting stuff.

                    1. 4

                      What is Not Safe For Work? Here in the United States, nudity is pretty much Not Safe For Work, but in Europe, maybe not (I don’t know, I don’t live in Europe). Conversely, violence is okay here in the United States (sadly) but it’s probably Not Safe For Work in Europe.

                      Much better, then, to have tags like “nudity”, “sexual imagery”, “violence”, etc. than just one NSFW tag.

                      1. 3

                        If it’s not safe for your work, suggest the tag. If it is, don’t worry about it. I’d rather get some false positives than some false negatives. After all, I can always open it on my phone with the tag not hidden. I think tagging with nudity, sexual imagery, etc. is way too complicated, and frankly I don’t care why it’s not safe for work. I just care that someone felt they couldn’t show it at their job.

                      2. 3

                        it’s about what our respective employers may think

                        I’ll bite - your employer’s unreasonable work-monitoring policies should not be our problem or nuisance.

                        1. 8

                          I completely fail to see how an nsfw tag rises to the level of a problem or a nuisance.

                          This is not about “work monitoring”. My workplace is fairly permissive, but it would still be awkward if my boss happened to see an article about smart dildoes on my screen. Many, many workplaces would go beyond just an awkward moment. I think it’s safe to say that most users here are employed, and I think it’s also safe to say that most are not employed at a workplace so free-wheeling as to be completely unconcerned if its employees are visiting inappropriate pages.

                          1. 5

                            Sure, but if an article about smart dildoes is on your screen, you already clicked a link that says “Deldo is a sex toy control and teledildonics mode for Emacs”. How would the tag have helped you? It’s not like someone hid the nature of the content.

                            1. 3

                              That title is on the front page of lobste.rs regardless, and there’s nothing resembling a guarantee that titles are always so explicit.

                      3. 8

                        A rather sanctimonious response to someone who just wants to be able to look at a programming site at their job. If you think it could be NSFW, then mark it; if you don’t and someone else does, they’ll mark it. I was the one who made the comment on your post, and I read the article at home. It’s really great that you work at a place where you can scroll through titles about dildos, or are willing and wealthy enough to get fired out of principle. To those of us without those liberties, you sound like an asshole.

                        1. 3

                          I find that problem description weird. If you can run into the problem of getting fired over link titles on a news page, we cannot reliably save you from that.

                          1. 4

                            Cool to ignore the thing that I said would work, and that works for literally nearly every site on the web. Why is there pushback on this? I’m not saying we should hide content or censor anything. I merely would like to be able to filter out NSFW things at work. I find this whole conversation super weird. If there’s no way to filter NSFW content on Lobsters, then I’m going to have to start reporting every “NSFW” article, and that seems frankly draconian. A lot of American jobs are like this; you are the one in the bubble. I don’t think it’s right that our workplaces are like this, I think it’s shitty and regressive, but I also am not in denial about the reality of the average American workplace.

                        2. 5

                          I agree. It’s impossible to come up with a consensus about what is “sensitive” and what’s not. I think that by looking at the title and the URL that is being linked to, a reasonable person should be able to decide if it’s “safe” for them to open the link. If it’s borderline, then don’t open it or click the “save” button and view it at home.

                          1. 8

                            The linked poster wants the tag so that the title itself can be filtered from the homepage, not as a warning not to open it.

                            1. 1

                              I understand the purpose of a filter. The filter will always be flawed because it will filter out what the hivemind/mods/vocal minority think is sensitive, not what the user thinks is sensitive, and it will generate all sorts of low-value meta discussion about whether an article is/isn’t sensitive.

                          2. 2

                            Hey, I agree with your position–just running the process. :)

                            1. 13

                              It’s already tagged with emacs; that should make most reasonable people not want to open it anyhow 😉

                          1. 7

                            Is anyone still adopting CMake since Meson exists?

                            1. 2

                              CMake is still in wide use in the field and I can personally say I’ve seen it adopted on a bunch of new projects.

                              1. 1

                                I had never heard of Meson, but in checking it out, it seems to have existed since 2011. Might I suggest a marketing campaign? What advantages does Meson have over CMake? Why are there two commands, meson and ninja? Is meson opinionated (it seems to be) or flexible?

                              1. 2

                                A video that really resonated with me is Greg Young’s The Art of Destroying Software. It’s a refreshing talk because it’s just that—a talk (no slides) about software development.

                                1. 6

                                  stick to pre-ansi c instead of c++17, this is the only way to make sure that your code will run on ancient unix platforms that died well before you were born

                                  1. 3

                                    If it wasn’t in Unix V7 it’s not worth using.

                                  1. 2

                                     We used an open source third party program for one of our components. Because it was open source, we were able to modify it (add some additional logging for our needs). It wasn’t without issues, though (resource leaks over an extended period of time), and it did way too much—way more than we used (which I read as: a larger attack surface).

                                     We looked at using the latest version of the software (we were at x.y.z and were looking at x.y.z+1), but so much had changed that we would be starting over with adding our changes, and if we’re going to do that, why not just scrap the program entirely (there’s no indication that the new version would fix the issues we had) and roll our own? It does just what we need, and because the code is simpler, we will have an easier time debugging it.

                                    1. 4

                                      Or you could go the phlog route. I did that as a lark. It wasn’t that difficult.

                                      1. 12

                                        I would rewrite this as a shell function:

                                        spaces2underscores() {
                                           for i in "$@"; do
                                              mv -iv "$i" "${i// /_}";
                                           done
                                        }
                                        

                                        Usage:

                                        spaces2underscores *
                                        spaces2underscores */*.txt
                                        

                                        It’s also easy to make the default *:

                                        if test $# -eq 0; then
                                           set -- *  # set "$@" to all files in the current directory
                                        fi
                                        

                                        The bash manual even says that functions can do everything aliases can do, and are preferred.

                                         Aliases are a bit hacky from an implementation point of view and don’t compose as nicely. Shell aliases and functions are somewhat analogous to C macros and C functions – aliases are a lexical construct, not really part of the language proper. There’s also a somewhat confusing rule about trailing spaces in aliases (if an alias’s value ends in a space, the next word on the command line is checked for alias expansion too).

                                         That said, I am resigned to implementing aliases in Oil, because even I have aliases in my .bashrc from 10 years ago! :-/

                                        1. 3

                                           Yes, my alias doesn’t take an argument. That’s an oversight… Thanks for the idea, args will be useful!

                                          You’re right, functions > aliases.

                                           Using a function rather than an alias gives me the opportunity to more easily evolve the thing over time, to handle new cases I hadn’t considered… Like this one:

                                          "A very important document - FINAL.docx"
                                          

                                          Naively that would become:

                                          "A_very_important_document_-_FINAL.docx"
                                          

                                          …But I’d prefer:

                                          "A_very_important_document-FINAL.docx"
                                          

                                          Since that construct is common, why not address it? I’ll just do a second replacement with "_-_" as the match pattern.

                                          It’s done easily enough as an alias…<– NOPE

                                          Okay, a function:

                                           function spaces2underscores() {
                                             # quote "$@" so filenames with spaces stay intact;
                                             # if no args are given, default to "all the files with spaces"
                                             if [ $# -eq 0 ]; then
                                               set -- *' '*
                                             fi
                                             for i in "$@"; do
                                               s2u_temp="${i// /_}"
                                               mv -iv "$i" "${s2u_temp//_-_/-}";
                                             done
                                           }
                                          

                                          For anything much more complicated, I’d just do it manually with dired or simply mv.

                                          1. 1

                                            And here I use alias to fix common typos I make:

                                            alias mroe=more
                                            alias maek=make
                                            alias amke=make
                                            alias rm='echo That command is not available'
                                            

                                            That last one is to remind me to double check what I’m about to delete.

                                            1. 1

                                               Why not just rm="rm -i"?

                                              1. 1

                                                I would find that exceedingly annoying. There are times when I delete thousands of files (generated data for test cases that need to be regenerated between runs because the tests modify the files).

                                                1. 2

                                                   So use rm -f if you don’t want the confirmation.

                                                   How does making it unavailable to type easily work for you?

                                                   If you’re using GNU rm, there’s also rm -I (capital I), which will prompt once before removing more than three files.

                                          1. 4

                                               The devil’s in the details and there are plenty of places where you can mess up (as I found out writing my own DNS encoder/decoder). The author briefly mentions the domain name encoding scheme when decoding, but if you blindly assume valid data, you can end up in an infinite loop following compression pointers. Another issue I found out the hard way: the spec allows multiple questions per query, but I have yet to find a nameserver that actually supports them (which is a shame, since it could be used to reduce DNS queries).
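                                               For the infinite-loop case, the guard can be as simple as capping the number of compression-pointer jumps. This is a simplified sketch, not my actual decoder (requiring pointers to only point backwards works too):

                                               #include <cstdint>
                                               #include <cstddef>
                                               #include <string>

                                               // Decode a possibly-compressed domain name, bailing out instead of
                                               // looping forever when pointers chase each other (RFC 1035 4.1.4).
                                               bool decode_name( const uint8_t * pkt, size_t len, size_t off, std::string & out ) {
                                                   size_t jumps = 0;
                                                   while( off < len ) {
                                                       uint8_t c = pkt[ off ];
                                                       if( c == 0 ) return true;                      // root label: done
                                                       if( ( c & 0xC0 ) == 0xC0 ) {                   // compression pointer
                                                           if( off + 1 >= len || ++jumps > 128 ) return false;
                                                           off = ( ( c & 0x3F ) << 8 ) | pkt[ off + 1 ];
                                                           continue;
                                                       }
                                                       if( ( c & 0xC0 ) != 0 || off + 1 + c > len ) return false;
                                                       out.append( reinterpret_cast< const char * >( pkt + off + 1 ), c );
                                                       out.push_back( '.' );
                                                       off += 1 + c;
                                                   }
                                                   return false;                                      // ran off the end of the packet
                                               }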

                                            1. 11

                                               This is quite disappointing; his solution for each point is basically “just get it right!”

                                              Here are some actual solutions for the problems he brings up:

                                              Not freeing memory after allocation:

                                               Try to avoid the “free everything individually” pattern where possible, because it’s the easiest to get wrong. Use RAII if you like. Try to centralise your resource management higher up the call stack so leaf code does not call malloc/free; people coming from other languages tend to do that because it’s fine if you have a GC, but you can make big messes for yourself in C. Use easier allocation strategies like “free everything allocated after this point when this statement falls out of scope”, which works very well for temp scratch space allocations.
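                                               A minimal sketch of that scratch-space idea (hypothetical names, ignoring alignment for brevity): a linear arena plus a scope object that rewinds everything allocated after a given point when it goes out of scope.

                                               #include <cstddef>

                                               struct Arena {
                                                   char * base;
                                                   size_t used, capacity;

                                                   void * alloc( size_t n ) {
                                                       if( used + n > capacity ) return nullptr;  // out of scratch space
                                                       void * p = base + used;
                                                       used += n;
                                                       return p;
                                                   }
                                               };

                                               // Everything allocated from the arena after this is constructed gets
                                               // "freed" in one go when it falls out of scope.
                                               struct TempScope {
                                                   Arena * arena;
                                                   size_t mark;
                                                   TempScope( Arena * a ) : arena( a ), mark( a->used ) { }
                                                   ~TempScope() { arena->used = mark; }
                                               };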

                                              All of your allocations should go through memory managers that check for leaks. The least intrusive way to do it is by keeping a hashtable that maps from the pointer to some info struct, which at least contains file and line of the allocation.
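                                               A hypothetical sketch of that hashtable approach, assuming every allocation is routed through macros like these (the names are made up):

                                               #include <cstdio>
                                               #include <cstdlib>
                                               #include <unordered_map>

                                               struct AllocInfo { const char * file; int line; size_t size; };
                                               static std::unordered_map< void *, AllocInfo > live_allocs;

                                               void * tracked_alloc( size_t n, const char * file, int line ) {
                                                   void * p = malloc( n );
                                                   if( p != nullptr ) live_allocs[ p ] = { file, line, n };
                                                   return p;
                                               }

                                               void tracked_free( void * p ) {
                                                   live_allocs.erase( p );
                                                   free( p );
                                               }

                                               // Call at shutdown: anything still in the table never got freed.
                                               void report_leaks() {
                                                   for( const auto & a : live_allocs )
                                                       printf( "leak: %zu bytes from %s:%d\n", a.second.size, a.second.file, a.second.line );
                                               }

                                               #define ALLOC( n ) tracked_alloc( ( n ), __FILE__, __LINE__ )
                                               #define FREE( p ) tracked_free( p )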

                                              Freeing already freed memory (double-freeing):

                                              Largely mitigated by what I said before. I have no specific suggestions for this because in practice I never have problems with it.

                                              Invalid memory access, either reading or writing:

                                              I don’t have any recommendations for NULL pointers, but stuffing asserts/NULL checks all over the place is a signal you have no idea what your code is doing. TBH it’s not really been a problem for me.

                                              For uninitialised data, making all your allocators call memset( 0 ) is a good start (try to make your classes have some valid default state when they’re all 0), beyond that it’s not really a problem because it’s pretty easy to catch. Use RAII if you like.

                                              Also lobby the C++ committee for something like #pragma primitive_types_initialise_to_zero_unless_you_tell_them_not_to.

                                              Buffer-overflow, either reading or writing:

                                              Implement sane primitives, such as:

                                              template< typename T >
                                              struct array {
                                                  T * elems;
                                                  size_t n;
                                                  T & operator[]( size_t i ) { ASSERT( i < n ); return elems[ i ]; }
                                              };
                                              

                                              Replace any usage of pointer + size with that class. You will probably want StaticArray< size_t, T >, DynamicArray, array2d, etc too.

                                              You’ll also want a sane string class, a sane string parsing library (Lua’s patterns library is very very good), and a sane stream writer/parser class. Use these classes instead of C strings/pointers + sizes.

                                              1. 2

                                                 As for your first solution, I would check to see if the runtime can do the checks for you. I know that the GNU C library can do more checks if you set certain environment variables, and using valgrind is wonderful if you have access to it. Check first before writing your own memory wrapper (that is what led to Heartbleed, by the way).

                                                 Second, pick a better value to initialize memory with than just 0. If you really want to help with debugging on the x86 platform, I suggest using the value 0xCC. As a signed quantity, it’s a large negative value that is easy to spot. As an unsigned quantity, it’s a huge number. As a character, it’s invalid ASCII (it is a valid UTF-8 initial byte, but two 0xCC’s in a row are invalid UTF-8). As a pointer, it’s probably an invalid pointer (so you’ll most likely get a SIGSEGV), and if it’s accidentally executed, it’s the INT3 debugging instruction.
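                                                 A minimal sketch of what I mean (a hypothetical wrapper, not anything from the article):

                                                 #include <cstdlib>
                                                 #include <cstring>

                                                 // Fill fresh allocations with 0xCC so uninitialised reads stand out:
                                                 // big negative ints, bogus pointers, and INT3 if accidentally executed on x86.
                                                 void * debug_alloc( size_t n ) {
                                                     void * p = malloc( n );
                                                     if( p != NULL ) memset( p, 0xCC, n );
                                                     return p;
                                                 }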

                                                I don’t really have any good solutions to string handling in C other than “don’t do that!” Which is why I use Lua if possible.

                                              1. 2

                                                I’m on vacation so I get to work on some personal projects.

                                                 Project 1) I went antique shopping with the SO and came across a few Linksys routers for $2 a piece. I couldn’t pass that up, and I’ve been playing with DD-WRT, which is something I’ve always wanted to do but was reluctant to try for fear of bricking a router. I’m finding the software so much better than what came with the router that I’m now seriously considering upgrading my main router.

                                                 Project 2) an operating system for a simulated 8-bit computer of my own design. It’s based upon a Motorola 6809, a serial chip, and a timer, and it has been quite fun (and a nice change of pace from what I do). I even got dynamic linking working, although the tool chain I’m using could use some serious work.

                                                1. 4

                                                  Is it trustworthy? Or is it just another kind of clickbait? (I know nothing about networking and these claims look incredibly significant, so… I would be pleased if someone can confirm this)

                                                  1. 5

                                                     so, it’s in the style of the various IoT botnet scanners/hackers we’ve seen in the skiddie space; even if it’s from a strange source, it definitely fits the style of tools you’ll see, usually prefaced with PRIV8PRIV8PRIV8PRIV8PRIV8 or gr33tz 2 mah krew sirPWN, leetjar, ....

                                                    Furthermore, as someone who works in the penetration testing & adversarial simulation (aka “red team”) space, nothing of the document is terribly surprising: many places rely on terribly-configured infrastructure, there’s a lot of garbage floating around in networks, and teams very often take a “we don’t have money to fix that” approach to security. For example, I’ve had more clients than I care to count receive report after report detailing high or critical findings ala NIST 800-30, and yet claim to not have money for the same. I mean simple things like “sshv1 running on all internal routers” or “world-readable anonymous FTP server contains sensitive client information.”

                                                     I’ve discussed this with colleagues in the space and the general consensus is one of malaise; everyone knows this to be the case, but no one really cares. What impact did Equifax have? None; no one even thinks about these things anymore. Businesses write off these risks via Risk Acceptance, and move on. The government is more concerned about critical infrastructure, but that is a double-edged sword (and I say that as someone who used to work in gov).

                                                    tl;dr: even if not credible, the source is relatively spot on with similar “posts from the underground,” and no one really cares, because so much is broken, but businesses often can just accept the risk and move on.

                                                    1. 2

                                                      I worked at a small business oriented ISP/web hosting company from around 2003 to 2010. What I remember was getting a “security audit” once that was 500 pages of crap like “OH MY GOD YOU HAVE PING ENABLED! DO YOU KNOW PEOPLE CAN FIND THOSE COMPUTERS?” and “OH FOR XXXX SAKE YOU’RE RUNNING DNS DO YOU KNOW HOW HORRIBLE THAT IS THAT PEOPLE CAN FIND YOUR COMPUTERS?” to even “XXXXXXXXXXXX YOU’RE RUNNING A WEB SERVER! ANONYMOUS PEOPLE CAN ACCESS THIS COMPUTER YOU XXXXXXX XXXXNUT!” Yeah, hard to take seriously page after page of “just cut the network cables if you want to be safe” crap.

                                                      So here’s how I would respond to the “OH XXXX YOU HAVE SSHv1 ON INTERNAL ROUTERS!” claim—“Hey boss, we need to upgrade all our Cisco routers.”

                                                      “Do you have $NNNNNN to upgrade the infrastructure?”

                                                      “You’re the one with the money.”

                                                      “Do the best that you can. I’m dealing with customers that are late with their payments.”

                                                       We were buying equipment on the second-hand market because we couldn’t afford to deal directly with Cisco. So, for the sake of the Internet, we’re supposed to shut down and go quietly into the night? But in the meantime, I just restricted SSH (when we got SSH on the routers—early on we were stuck with TELNET) to only accept connections from known hosts.

                                                      1. 3

                                                         Oh ja, I’m not surprised at all by this either. For every good pentester I know, there are dozens or more of the ZOMG LE TOOL SAYS YOU HAS 0DAY sort. Honestly, the infosec industry is one of shills, and the infosec community is one of hero-worshipping cliques. It’s pretty rough at times to be a simple professional.

                                                        Wrt your example of SSHv1, the overall risk for me would depend on what other environmental controls are in place. For example, I worked at an ISP that had all management interfaces exposed only to a special administration VLAN for routers. So, the likelihood in that case would be very low; an attacker would either have to transit multiple security boundaries and launch a fairly noisy attack, or it would have to be a malicious internal attacker who would likely already have legitimate access to those same devices. The impact is high regardless because this could impact core business functionality. Very low x High = low, please fix it during your next upgrade cycle.

                                                         And that’s my problem with the “hah! they should have just patched everything!” mentality: people don’t have the $ or time to take infrastructure down. I mean, good heavens, Equifax blamed one person… clearly that’s a sign of a dysfunctional org if there ever was one.

                                                  1. 7

                                                    “Compiled languages are always faster.”

                                                     This link seems to indicate problems in one of the most overly-complicated languages ever designed (C++) rather than evidence for that bullet point. Plus, the comparisons rarely use a profile-guided optimizer for the AOT compiler when comparing to JITs, which are essentially that, applied to bytecode. Since the AOT compiler has more time to optimize, with even more information than the JIT, it should always produce faster code in theory. The only exception I can recall is code whose runtime patterns change a lot. Even then, there could be an AOT-style solution combined with a JIT that periodically recompiles the code, or that load-balances in a heterogeneous way, matching workloads to ideal AOT-compiled processes. I don’t think I’ve seen one.

                                                    1. 9

                                                      “Compiled languages are always faster”

                                                      But Java has a compiler.

                                                      We’re left trying to interpret what the author meant. Does “compiled languages” mean “AOT compiles to native code” vs. “JIT compiles to native code” as you reasonably assume based on the linked Forbes article? Or is it about interpreters, as I’ve suffered this debate before? Who knows!

                                                      Gas on the fire.

                                                      1. 1

                                                         That’s true in general, but these are supposed to be memes. So, the meme would be an AOT compiler for a mainstream language vs one doing JIT. The author gave C++ vs Java as a very common example. I always knock that down as apples vs oranges, with one side not allowed to use profiling even though it’s possible for AOT.

                                                      2. 6

                                                        I’ll add a misconception to the list: “The term ‘compiled language’ is a useful concept”.

                                                        You don’t have languages that are compiled[1]; you can have a language for which a compiled implementation exists, or a language for which no compiled implementation exists, or a language for which the primary implementation is compiled but not always, etc. And even that is not a very useful category, because the performance implications of a compiler that targets machine code can be very different from one that compiles to bytecode.

                                                        I think most people mean “a language which is typically AOT-compiled directly to machine code” when they say this, but I can’t be sure because the terminology is so vague.

                                                        1 - OK, so technically there are languages like Forth where the compiler is part of the language spec, but A) these are very unusual and B) no one who uses the term “compiled language” is actually talking about this.

                                                        1. 2

                                                          Of note, it is possible for a language to be uncompilable[0] — c.f. some of the reflective-tower languages of the 1980s (Lisp in Small Pieces has an example of an uncompilable language).

                                                          [0] Really, it’s not so much that they’re uncompilable as that compilation has no real utility as applied to them.

                                                        2. 2

                                                          It also depends on your definition of “faster”. Generally, higher level representations of software are more compact - source is smaller[*] than bytecode which is smaller than machine code. If your performance is constrained by transferring your program representation (for example over a slow network, from slow storage medium) or if your interpreter + bytecode can fit in cache better than a compiled representation then compiled code may be slower.

                                                          [*] At least once gzipped or converted to a minimal representation.

                                                          1. 1

                                                           On that note, the Juice project that substituted Oberon for Java had about that effect, I think. They wanted fast compilation and small transfers over dial-up more than an ultra-optimized result. Their innovation was sending the app as compressed ASTs, so the compiler still had more info to work with versus bytecodes.

                                                            I wonder about that last part, though. How often is it a problem that AOT-compiled code is worse at caching than interpreted/JIT’d bytecode?

                                                          2. 1

                                                            But it’s interpreters all the way down, and some interpreters are faster than others.

                                                            1. 1

                                                              Yeah, the fully-analog ones are still kicking the rest of them’s asses. People are just too picky about accurate results. ;)

                                                          1. 1

                                                           I read the article and the comments here, and I can’t help but think of USENET. In any arbitrary group, people would ask questions, others would answer. Enough time would pass and one person (or a few) would basically create a curated list of questions and answers (“Frequently Asked Questions”, aka FAQ) that would be posted periodically (usually monthly). I am seriously surprised something like this hasn’t popped up on Stack Overflow.

                                                            1. 2

                                                              The value of the FAQ is significant: it essentially archives a set of questions and moves them off the discussion table. This lets experienced users move on to something new, while still providing an answer to new users with those questions.

                                                              This would be a great SO feature. I think the suggested question list you get when writing a question tries to accomplish this.

                                                              1. 1

                                                               Wasn’t that the reason Stack Overflow Documentation was created? (I am not sure, I did not follow that part closely.)

                                                              1. [Comment removed by author]

                                                                1. 8

                                                               Somehow, I was able to program without Google and Stack Overflow for twenty years or so. And perhaps you went to a better college than I did, but the assembly taught there was minimal, we never did learn low-level network details, and compiler writing was a graduate-level course (and as an undergrad, I helped a few grads with their compilers for that class).

                                                                  In fact, I’ve learned more on my own than I ever did in college (Programming 101 for me was in Fortran).

                                                                  1. 7

                                                                    As a person who switched into CS from Physics, I don’t feel like not having a formal grounding has been a huge problem. Most of that you can learn in books and essays. Looking back, the most helpful classes were in the humanities. That’s where I learned a lot of important soft skills:

                                                                    • Editing and proofreading things I wrote
                                                                    • Making (somewhat more) watertight arguments
                                                                    • Finding obscure or missing primary sources
                                                                    • Detecting bias, agendas, or holes in secondary sources
                                                               • Identifying That Kid
                                                                    • Ruining discussions by appealing to Wittgenstein

                                                                    But, if one’s a js dev, one should have enough problems with the mataphysical concept of equality, to bother with actual equations /s.

                                                                    Don’t be rude. Knowing a harder language doesn’t make you smarter than a person who uses JavaScript. It just makes you a person who knows a harder language.

                                                                    1. 4

                                                                      Don’t be rude. Knowing a harder language doesn’t make you smarter than a person who uses JavaScript. It just makes you a person who knows a harder language.

                                                                 It helps to know “harder” languages, as you gain an understanding of what happens at the lower levels (with C, for example). It also helps with creative thinking if you know a broad spectrum of languages (and maybe some things about languages and compilation), as it exposes you to different ways of thought.

                                                                      1. 3

                                                                   Oh, I definitely agree! Learning harder languages can stretch your mind and expand your skills. I was objecting to the idea that being a js dev means you’re not smart or don’t understand programming. You can’t judge a person’s ability just by how many languages they know.

                                                                        1. 2

                                                                     So should I learn Malbolge? ;)

                                                                          1. 5

                                                                            Learning Malbolge definitely counts as exposure, but I think it’s measured in Sievert.

                                                                            1. 1

                                                                              the ternary number system sounds interesting enough ;)

                                                                              not that i’m in any position to give advice:

                                                                         • knowing a bit of C has helped me
                                                                              • (ba)sh, together with other classic tools is sometimes exactly the right tool.
                                                                              • a general purpose scripting language. i like python for this.
                                                                              • any modern compiled language like go or rust. i still have to give rust a look.
                                                                              • something functional is always nice for the different approach to problems. i never used one for anything of relevance, but worked through a few tutorials.

                                                                              related: https://pragprog.com/book/btlang/seven-languages-in-seven-weeks

                                                                      1. 2

                                                                        DNS can do most of this already. It can’t check to see if a URL or interface is up, but doing so would make the reply take longer anyway.

                                                                        1. 1

                                                                          OpenDNS already does this… how is this service better?

                                                                          1. 1

                                                                             And NortonDNS also. Competition in this area is nice.

                                                                            1. 1

                                                                              Is running your own DNS server that hard?

                                                                              1. 3

                                                                                Not every user on the Internet understands RFC1034 and family. Most do not even know what a DNS server is. They are the potential users for such services.

                                                                                1. 2

                                                                                  Weird argument there—someone who doesn’t know what a DNS server is, so this DNS service is for them?

                                                                                  While running a program like bind would be a bit beyond most people, a DNS server that does nothing but resolve should be easy enough to install and run. The primary root servers are at well known addresses, and for resolving, there’s not much to configure for the simple DNS server itself. Okay, getting the computer to use the simple DNS server is an issue (DHCP? What’s that? /etc/resolv.conf? What’s that?) but that’s the only problem I can see.

                                                                                2. 1

                                                                                  Running your own DNS doesn’t help you stop malware

                                                                            1. 1

                                                                              At one point in the article, it states about software applications that “It’s the primary way we communicate our thoughts and feelings to our friends and family.”

                                                                               Am I the only one who finds this deeply shocking? I am surely not alone in refusing to use software (instant messaging, social media, e-mail, phone, long-distance video calling, online gaming) as a primary means of contact. I’d rather meet with my friends and family in person and have a good time than communicate indirectly. I still use those tools, of course, for the logistical part of meeting: agreeing on a time and location, or letting them know where I am.

                                                                               Can people give me examples of why one would use software as a primary means to communicate with one’s intimates?

                                                                              1. 4

                                                                                Distance is one reason; many of my friends and family don’t live anywhere near me, since I’ve moved a few times, and I’d like to talk to them more than once or twice a year. This might be more idiosyncratic, but for me, I also find it easier to have personal conversations via methods that are more mediated, especially text chat and email. I find that with people I mainly hang out with in person, the relationship is more like them being acquaintances, while close friends are people I’ve gotten to know at least in part through a lot of textual communication.

                                                                                1. 4

                                                                                 18 years ago I started writing some software so I could update my website easily, because I was planning on taking a contracting job for six months about a thousand miles away (in the end I did not get the job) and I felt this would be an easy way to keep in contact with friends and family while I was away. I still have the blog, though.

                                                                                  These days my best friend lives two thousand miles away. Until Niven-style teleportation becomes a reality, how else am I to remain in contact with my best friend (whom I’ve known for almost 40 years now)?

                                                                                  1. 3

                                                                                    Because flights are much more expensive than XMPP messages

                                                                                  1. 6

                                                                                    I’ve been using semantic versioning for my own projects now for a few years, and in my experience with it, it’s great (if used correctly as tscs37 mentions) for libraries (or modules) but less so for applications.

                                                                                     My gripe with semantic versioning is that it allows pre-release and build metadata as part of the version string. I think this is wrong, as it’s ill-defined and complicates dependency tracking (I’m also not a fan of pre-releases—why prolong the release? It won’t get much additional testing because of a general reluctance to use pre-releases). Just use X.Y.Z.

                                                                                    Also, for an extended meditation on backwards compatibility, read The Old New Thing, which explains the lengths that Microsoft goes to to maintain backwards compatibility (and the problems that can cause).

                                                                                    1. 4

                                                                                      Build metadata SHOULD be ignored when determining version precedence.

                                                                                      Pre-release versions have a lower precedence than the associated normal version.

                                                                                       So only pre-release versions affect precedence, and most implementations I know of only use a pre-release version if you explicitly ask for it.
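                                                                                       For example, per the spec’s precedence rules: 1.0.0-alpha < 1.0.0-alpha.1 < 1.0.0-beta < 1.0.0-rc.1 < 1.0.0, while 1.0.0+build.1 and 1.0.0 have the same precedence.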

                                                                                      1. 1

                                                                             I’m also not a fan of pre-releases—why prolong the release? It won’t get much additional testing because of a general reluctance to use pre-releases).

                                                                                        Our customers extensively test our release candidates. Linux kernel release candidates are also pretty thoroughly tested. And no, I don’t think there is a reasonable alternative to RCs.