1. 7

    You know what, IMO the spookiest thing about this story is NoMachine. I’d never heard of it before; this is the first reference I’ve seen to it. The website is a lot of marketing fluff, with nothing about how exactly they went about securing it. So the first reference I’ve ever seen to it is a story about how somebody managed to bypass its security somehow to get remote access to a machine that had important credentials on it. This suggests I should stay far, far away from it, at least until I see a deep dive into how this was possible and how the company responsible for it is working to ensure that it can never happen again.

    I can’t say I’ve ever heard of anybody breaking into a server with access secured through SSH configured with best practices.

    I mean, you can bash Google a bit for this I guess, but with how huge their systems and attack surface are, it’s hard to believe they could ever secure things enough that letting somebody unauthorized get your account credentials won’t result in very bad things happening. Maybe we should try to stop it before that happens.

    1. 1

      NX is typically secured with…SSH. The same best practices should apply.

    1. 2

      I have started trying to outline the process for candidates on their first interview, to hopefully make them more comfortable with the process. I’m not sure about maintaining a written document of it in general.

      However, I do think that the described process of expecting the candidate to have a development environment set up on their local system that you can collaborate with them in is not a very good idea. If you do choose to do that, the candidate should have a written description of what is expected of their setup well before the interview. In general, this sounds like a great way to spend all of the time you expected to spend on coding and debugging on troubleshooting their development environment setup instead. Either that, or you’ll turn off half of the candidates by expecting them to spend several hours getting a dev env set up to very specific requirements just to do an interview.

      Any coding collaboration should take place in a pre-configured environment. The interviewer’s actual device, a standard environment like repl.it or coderpad.io, or some standard instance spun up by CloudFormation would all be fine. Having both the interviewer and the candidate troubleshoot environment setup is a waste of everyone’s time.

      1. 1

        I suggest that this is a social problem, more than a technical one.

        About dependencies: There are tools to see the dependency tree and the total size of any npm library. Fewer dependencies mean less inherited code, and less code is better. A little bit of code is better than a little dependency.

        About the runtime: Some things can be done, like what Deno is doing: https://deno.land/

        Secure by default. No file, network, or environment access, unless explicitly enabled.

        Beyond that, it’s Node’s runtime design flaws that make it impossible to be really secure against the dependencies you choose. For example: Any dependency can re-assign methods from Node’s stdlib https package, intercepting web requests that were supposed to be secure.

        As a final suggestion to any Node developer out there: Write your own stuff more often. Node’s stdlib is great.

        If you’re a frontend developer, you’re doomed anyways.

        1. 1

          Secure by default. No file, network, or environment access, unless explicitly enabled.

          Now that leads to some interesting ideas: What if there was a dependency management and security system combined that let you specify packages to depend on, and you had to explicitly grant each package desired sub-permissions, such as access to files outside the project’s own files, env vars, network URLs it could communicate with, global-level variables it could touch, etc. Each package could have sub-package dependencies, but a permission could only be granted to a sub-package if the parent package had been granted it by the top-level project.

          Maybe an option somewhere too for whether any package trying to do something it doesn’t have permissions for should blow up the program, or be silently black-holed.

          I guess this would be tough to do with any existing dependency management system. But hey we can dream right?
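
          As a toy sketch of the idea (every field name here is invented; no real package manager works this way), permissions could flow strictly downward, with each package’s grants checked against its parent’s:

```javascript
// Hypothetical manifest: a sub-package can never hold a permission
// that its parent package was not itself granted.
const manifest = {
  permissions: { net: ['my-api-endpoint.com'] },
  dependencies: {
    'http-client': {
      permissions: { net: ['my-api-endpoint.com'] },
      dependencies: {
        // Pure-computation dependency: no net, file, or env access at all.
        'string-utils': { permissions: {} },
      },
    },
  },
};

// Enforce the "subset of parent" rule recursively over the tree.
function validate(pkg, parentPerms) {
  for (const [cap, values] of Object.entries(pkg.permissions || {})) {
    const granted = parentPerms[cap] || [];
    if (!values.every((v) => granted.includes(v))) {
      throw new Error(`permission '${cap}' not granted by parent`);
    }
  }
  for (const dep of Object.values(pkg.dependencies || {})) {
    validate(dep, pkg.permissions || {});
  }
}

validate(manifest, manifest.permissions); // passes: every grant is a subset
```

          A dependency asking for net access to evil-server.com anywhere in the tree would then fail at resolution time instead of silently phoning home at runtime.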

          1. 1

            Maybe this can be done if our imaginary runtime only gives access to “the outside” (disk, net, env, stdout/err…) to the main function. In an OOP world, let’s say those are unique, irreproducible instances of some classes.

            Then if any other part of the program needs access to that, you need to pass it as parameters, or using some dependency injection, inversion-of-control containers, etc.

            Maybe we even improve global testability with this.

            1. 2

              This is more or less how I understand at least part of capability-oriented programming to work. I think E and Monte are the current big instances of this in practice, though last I checked, Monte was still in alpha. @Corbin would know more about this space.

              1. 1

                That’s pretty cool from a testability standpoint too. I was envisioning more strict restrictions, like only network requests to explicitly authorized URLs, disk to explicitly authorized paths, etc. Authorize your HTTP client to only talk to my-api-endpoint.com, and you’ll know for sure that there’s no sub-dependency secretly sending data to evil-server.com.

                1. 2

                  Those restricted accesses could be instances extracted from the “global instance”.

                  For example, the disk-access instance could generate other instances that allow access to only a portion of the disk, such as the user’s home directory.

          1. 9

            Short version: In a department full of developers, one particular developer’s code can’t be understood by anybody even after multiple hours of descriptions.

            It may be my own prejudices, but I’m inclined to think that in the situation described, the code itself is the problem, not the explanation strategy. IMO it’s a common problem with people who are smart and well-educated, but lack the wisdom that comes from long experience, to over-use complex, elaborate architectures and too many design patterns for simple and straightforward projects. The resulting code is usually excessively difficult to understand, prone to obscure bugs, and even more difficult to extend to cover unforeseen edge cases than the simple version would have been.

            A few rules of thumb I’ve found useful to avoid the anti-pattern of too many patterns (heh):

            Rule of 3. Don’t create an abstraction for a pattern until you have at least 3 examples of it. This helps ensure that it will be used enough times to make sense to abstract, and that you actually understand the problem well enough to create a generalized solution. It’s okay, once you actually have 3+ implementations, to decide the cases are not similar enough to be worth extracting.

            Prefer to start with the simple and dumb solution, if needed solving only the simple case of the problem. Push refactoring back until you have it at least sort-of working. Refactor with a focus towards shortening methods and splitting up places that have too many variables in scope when needed.

            Bias your decisions towards You Ain’t Gonna Need It. Ideally to the point where you sometimes consciously ignore the full implications of something you are quite sure you really are going to need down the line.

            1. 2

              Except that I don’t think the example of the code that was hard to explain was especially hard to understand? It looked like overkill for the problem, sure, but it didn’t look so difficult to follow as to warrant hours of explanation.

              1. 1

                His example wasn’t all that hard, that’s right. But it certainly makes me wonder what sort of things he was doing in larger segments of code.

            1. 2

              I mostly agree, and go out of my way to hardwire everything where I can’t find a major benefit to being wireless.

              Desktop is wired, because duh. Why pay for a Gigabit FiOS connection and then pipe it over WiFi that can’t possibly be that good, to a desktop a foot away that would need a wireless card? Keyboard too, since those never move much either, plus batteries, plus sending passwords over sniffable interfaces. I also use a wired speaker, mic, and camera at home, though I have my own place.

              I do like wireless headphones and mice though. Headphone wires seem so unwieldy and likely to get tangled or snagged. I haven’t had any issues with latency or dropouts over Bluetooth luckily. And mice have proved to be pretty good with the custom wireless adapters, and no wire to get in the way of moving around is nice.

              1. 3

                Many are criticizing the post for missing nuance. Which is true. I have my own opinions on exactly what it’s missing, though.

                Building a good framework for something as complex and with as high an attack surface as the web is really tough. Evidence seems to suggest that basically nobody can just build one that’s free from security bugs. What this means is that if you want a web framework to actually be secure, you have to first build it and host some things on it that attract actual attacks. There will be bugs and exploits, and you’ll have to find them and fix them. Serious bugs being found and fixed doesn’t mean your framework is bad. Your worry should be for the ones that haven’t had any: how many bugs do they have that haven’t been found yet because they haven’t hosted something big enough for anybody to care? How will the project team respond when one of those bugs is found? The frameworks that deserve our trust are battle-tested.

                If you wanna build your own, you need to be prepared to pay that price. Unless you’ve hired somebody with like 20 years of experience in running web framework projects, who for some reason doesn’t want to use the one he’s been working on, you’re gonna have bugs.

                1. 2

                  One of the things I’ve been thinking about lately is whether SQLite could be used as the configuration file as well. That avoids many problems with parsing, but makes editing by end users quite a bit more painful.

                  1. 15

                    I’ve had a bunch of people suggest this to me over the years in response to my YAML post, and I think both editing and reading it will be painful.

                    With a simple config file (e.g. key = value # comment) it’s quite easy to make even extensive configs readable, add explanation/context where needed, etc. But with SQLite that seems a lot harder to me, even if you add a comments column for this.

                    Want to update a few words in that comments column or remove a sentence? Not so easy.

                    If your application is intended to be primarily modified through a GUI or some such then it’s probably a reasonable choice, but even there a simple config file would do just as well in most cases (it’s not that hard to parse a config file).
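
                    For what it’s worth, the key = value # comment style above really is only a few lines to parse. A minimal sketch (deliberately ignoring edge cases like a ‘#’ inside a quoted value):

```javascript
// Parse "key = value  # comment" lines into an object, skipping
// blank lines and comment-only lines.
function parseConfig(text) {
  const config = {};
  for (const rawLine of text.split('\n')) {
    const line = rawLine.split('#')[0].trim(); // strip trailing comments
    if (!line) continue;                       // skip blanks and pure comments
    const eq = line.indexOf('=');
    if (eq === -1) throw new Error('invalid config line: ' + rawLine);
    config[line.slice(0, eq).trim()] = line.slice(eq + 1).trim();
  }
  return config;
}

const sample = `
# server settings
host = example.com   # public hostname
port = 8080
`;
console.log(parseConfig(sample)); // { host: 'example.com', port: '8080' }
```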

                    1. 8

                      Reading a config file is easy, but robustly reading and writing one might be hard.

                      • When serializing after your software changes its config, do you make sure to keep the original order (which the user would expect)? Does your config file format library make this easy?
                      • When serializing after your software changes its config, do you keep comments? Does your config file format library make this easy?
                      • When writing your new config file, do you do that atomically? I’ve both written and encountered programs which would just corrupt their config file when saving while the disk is full.
                      • What happens if a user changes the config while the software is running? Do you attach an inotify listener to re-read the file when it changes? (What if the user writes an invalid config file which they didn’t intend to take effect yet?) Or do you just not care? (Do you make sure not to overwrite the user’s config if they change it? How will your config file conflict resolution UI work?)
                      • What happens when a user writes an invalid config file? You can’t just crash a graphical program; does your config file parsing library make it easy to read the data that’s not corrupt while informing the user of which parts are corrupt?

                      I’m not against using human-read/writable config files for all kinds of software. It’s probably what I would do, to be honest. There are good (or at least good enough) answers to all of these questions. However, if you want the config to also be modifiable through a GUI, using a simple text config file format is not as straightforward as you’re implying.

                      1. 5

                        All valid points as such, but I’m not sure this is really a big problem in most cases. If you allow manual editing and also have an interface to edit it, then a # DO NOT EDIT WHILE THE APPLICATION IS RUNNING comment at the top is an easy way to solve most confusion/problems. It’s simplistic and crude perhaps, but generally speaking, people mostly want text config files so they can bootstrap their machines and/or share extensive configurations, rather than fiddle with their config while the app is running.

                        Another solution is to use two files: one that’s generated by the GUI, and one that the user can edit which overrides the settings from the automatically generated one. This is what Firefox does with prefs.js and user.js, for example.
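
                        The merge logic for that two-file scheme is close to a one-liner if the user file simply wins (the values here are just for illustration):

```javascript
// Values loaded from the generated file come first; values from the
// user-edited file override them key by key.
const generated = { theme: 'light', port: 8080, telemetry: true }; // written by the GUI
const userPrefs = { theme: 'dark' };                               // hand-edited by the user
const effective = { ...generated, ...userPrefs };                  // later spread wins
console.log(effective); // { theme: 'dark', port: 8080, telemetry: true }
```

                        It also means the developer can change defaults in an update by rewriting only the generated file, so user overrides survive.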

                        I think there’s a place for SQLite configuration, but you’re making sharp trade-offs with it, and it’s probably only worth it in fairly specialized cases.

                        1. 2

                          I know that the point of your comment isn’t really for those questions to be answered, because you do say that you know answers exist, but some of the answers really are pretty trivial.

                          • I don’t think that software should change its own configuration file. If it does though then it probably should maintain the contents of the original file other than the bit it changed, which is actually not that hard if you put a little thought into the parser and parse to a CST that retains the spans of input text on the syntax tree nodes. But probably you should have two separate files, one of which is only touched by the programme and the other only by the user, where the user’s preferences override the programme’s. This also fixes the problem of the developer changing the defaults in an update, resetting the preferences of users.

                          • If your software can’t robustly read and write files to the disk then sort that out, because the whole programme is unreliable if that is true, not just the configuration. Sort out your primitives. If you have good primitives they’ll work fine on config files.

                          • If the user wants to reload the config file they run reload.

                          • If you try to reload with malformed configuration, do nothing and print an error. If you try to start with malformed configuration, do nothing and print an error. What you should not do is try to ‘DWIM’.
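
                          On the “maintain the contents of the original file other than the bit you changed” point: even without a full CST, a line-oriented key = value format gets most of the benefit by keeping untouched lines byte-for-byte (a sketch; the regex is deliberately simplistic):

```javascript
// Rewrite only the line whose key changed; every other line, including
// comments and blank lines, passes through untouched.
function setKey(text, key, value) {
  let found = false;
  const lines = text.split('\n').map((line) => {
    const m = line.match(/^(\s*)([\w.]+)(\s*=\s*)/);
    if (!m || m[2] !== key) return line; // untouched lines survive byte-for-byte
    found = true;
    const hash = line.indexOf('#');
    const comment = hash === -1 ? '' : ' ' + line.slice(hash); // keep the comment
    return m[1] + key + m[3] + value + comment;
  });
  if (!found) lines.push(key + ' = ' + value); // append brand-new keys
  return lines.join('\n');
}
```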

                          1. 3

                            All those answers are valid for some kinds of applications, but they’re not universal.

                            • Most graphical software written for non-developers (and honestly most graphical software written for developers too) wants a way to change preferences through a GUI. You’re probably right that it’s fairly easy to write a parser from scratch which preserves the spans of text, but do existing config deserialization/serialization libraries do that? My experience is that they don’t, meaning you have to write an entire parser/serializer yourself just for your config file. If you wanna use a standard like YAML or TOML, writing a correct parser isn’t a small amount of work.
                            • My experience is that most software uses the POSIX open/write/close/rename interfaces directly (or thin wrappers), and does atomic writes by hand, creating a temp file and renaming it. Maybe an atomic file write function would be better, but you’re probably going to use your file format library’s writeToFile method anyways, which probably isn’t going to be atomic, meaning you still have to do the temp file + move dance manually.
                            • The next two points are OK.

                            Curiously, the top comment on one of the posts on the front page discusses how Firefox fails to make sure it writes JSON data atomically, leaving you with corrupt or empty file: https://lobste.rs/s/xt82a0/performance_avoid_sqlite_your_next#c_eslys1

                      2. 10

                        I’ve worked with applications that did this before (or used similar formats), and honestly it’s a pain because it makes configuration management incredibly difficult. You have to either (a) keep the binary db file in your repo, making it harder to track changes; or (b) use scripts to execute commands against the db. And then build a bunch of error handling into your scripts for the tool you’re calling to configure the app…

                        For single-user desktop apps, where you don’t do as much configuration management, db-based config can be less painful. Until one of your users decides to manage a fleet of laptops with the app; or hack it into some automated pipeline and run it in a VM in a data center, so now it’s actually a prod app.

                        … and as I’m typing, I realize I’ve actually been paged for production incidents involving all of those scenarios except the fleet of laptops one. Sigh. So yeah, I still strongly prefer text configuration.

                        1. 2

                          Exactly this: the review process + git makes this incredibly awkward.

                          pseudo-code example:

                          case App.env do
                            :dev  -> Config.find(api: "dev")   # -> dev.api.com
                            :prod -> Config.find(api: "prod")  # -> prod.ap.com
                            _     -> raise "unknown App environment " <> App.env <> ", refusing to boot"
                          end
                          

                          Would make the fact that ap.com was committed a non-trivial thing to catch in review, and it would not appear during dev/staging testing.

                          1. 1

                            I don’t disagree, but there are solutions, such as a --csv option that will read a CSV file of the config, or a --sql option that will read a SQL file/run a command. Then it’s no big deal to store the config(s) in plain text in a VCS, etc.

                            We don’t have a --csv option, but we do have a --sql option.

                            There are definitely trade-offs; it’s not magically delicious, but then nothing in tech usually is.

                          2. 3

                            I guess you could, but I don’t really love the idea. Configurations are usually small, simple, and seldom modified. That doesn’t seem to fit the RDBMS paradigm well or play to SQLite’s advantages. Maybe if you already had SQLite in your codebase for something else and/or you have a really complex configuration for some reason. I’d prefer JSON/YAML/TOML or something most of the time.

                            1. 2

                              I do exactly this. It’s not overly hard to configure; we give a --get/--set key value CLI to the config table, plus encourage them to use sqlite3 themselves. For GUIs, it’s just a table view. The config table holds the default values as well, so there is never any doubt as to what the value is.

                              1. 1

                                It’s not overly hard to configure; we give a --get/--set key value CLI to the config table, plus encourage them to use sqlite3 themselves.

                                You say it’s not hard to configure, but that sounds a lot harder to configure to me than just editing a simple text file. It seems okay for things where the configuration variables are all simple booleans, strings, or integers. But what if they’re more complicated? Imagine trying to configure Postfix with a CLI:

                                $ sudo postfix config --set smtpd.recipient_restrictions 'permit_mynetworks, permit_sasl_authenticated, reject_unknown_client_hostname, reject_unknown_sender_domain, reject_unknown_recipient_domain, reject_invalid_hostname, reject_non_fqdn_sender'
                                $ sudo postfix config --get smtpd.helo_restrictions
                                permit_mynetworks
                                permit_sasl_authenticated
                                reject_non_fqdn_helo_hostname
                                reject_invalid_helo_hostname
                                reject_unknown_helo_hostname
                                permit
                                $ sudo postfix config --set smtpd.relay_restrictions '
                                >permit_mynetworks 
                                >permit_sasl_authenticated 
                                >reject_unauth_destination'
                                

                                The cases where SQLite configuration isn’t an issue are the cases where text configuration is trivial, and the cases where it does create big issues are the cases where text configuration is necessary.

                                1. 1

                                  That’s pretty much http://www.postfix.org/postconf.1.html.

                                  But yes, I too prefer editing the configuration file with an editor. (And postconf has more options, and is presumably more intended as a scripting target than as an interactive command?)

                                  1. 1

                                    Your example is a touch annoying, but it’s not difficult. What becomes difficult is when your config has complex relationships between config items. It’s easy to store (YAY FKs!), but it’s not obvious how to expose that easily with a CLI interface.

                                    In those cases, if we can’t figure out a nice CLI interface, we generally just use the GUI, or allow them to execute SQL directly against the config table:

                                    myprog --sql 'insert …'

                                    But we try to avoid those sorts of config options if possible.

                                    The upside is, it’s very easy when debug time comes around: they just ship us the .db file, as we store the last-ran args, etc., and we store logs in an audit table as well. (We still do the standard stdout logging too.) Also, we have none of the issues where IO nightmares come to roost, like files being half-written or corrupted, etc. It happened often enough in our deployments before we switched that it was annoying for sure.

                                    There are definitely trade-offs; it’s not magically delicious, but then nothing in tech usually is.

                                2. 2

                                  Arcan (desktop framework) does this: https://github.com/letoram/arcan/wiki/Configuration-Support

                                  It provides a FUSE layer into the database, too.

                                  1. 2

                                    If you provide nice GUI and CLI tools or an API to manage such configuration, it might be a great option. Except for one detail: version control – many people manage their configuration using a VCS (Mercurial, Git, Fossil, etc.) and want to see what has changed. Classic diff is quite useless on binary files and databases. Text formats (e.g. XML) are much more VCS-friendly. You might version SQL dumps, which are also text. Or you can provide a diff tool for databases… It depends on the situation – if your users are consumers, they will probably never tweak the config files by hand or manage them in a VCS.

                                    1. 3

                                      Version control should be easier to do on a database, as we have much more granular data. You can say this field changed, which is better than saying this whole line changed. It’s just that our tools for text diffs are better “right now”.

                                      1. 1

                                        Yes, you’re right, that is a pretty important point: most configurations now sit in version control systems, and seeing the differences between deployments is quite important for ops teams.

                                        True, I could write a script that populates such a database when I’m deploying a service, and that would be the new config. However, that would just add more complexity to our already complex systems.

                                        1. 2

                                          Actually, this is not uncommon, and it has been used in the real world for a long time – e.g. in Postfix you have pairs of files like:

                                          $ file /etc/postfix/client_checks*
                                          /etc/postfix/client_checks:    ASCII text
                                          /etc/postfix/client_checks.db: Berkeley DB (Hash, version 9, native byte-order)
                                          

                                          You edit the first, text one. And during runtime, the database is created and used: the text form is converted to the more efficient database form. In your VCS you just ignore the .db files and diff the text ones.

                                          Regarding the complexity: it is a question of whether the features provided by a DBMS (or any other library) outweigh its complexity. This differs from project to project; there is no universal answer to this question.

                                        2. 1

                                          if your users are consumers, they would probably never tweak the config files by hand or manage them in a VCS.

                                          Some will want to, though. If your users are consumers, then they’re many and varied. Now, you might not want to put a lot of effort into letting your users control their configuration with git, but it doesn’t hurt anyone or force you to make negative tradeoffs, so why not?

                                          1. 1

                                            We solved this with a --sql option. So you store the plaintext .sql file in VCS/etc. and just read it in. Since we store more than just config, we can do a lot of end-to-end testing this way as well, since we can set up our state to be whatever we want with a given SQL file, and then we just have to test the output.

                                        1. 4

                                          I’ve written some Go and some Rust. I feel like I usually enjoy Rust more, though I struggle to explain why.

                                            I think, for Rust, I find the error handling really ergonomic. Using ? in a function that does a bunch of things that can fail is just so much nicer than having every other line be an if err != nil { return err }. I also find it easier to follow how references work in Rust, oddly enough. And using modules through Cargo is just so nice, while Go modules are kind of a messy hack in comparison. Oh, and the macros are just so nice too.

                                          But on Go’s side, Go concurrency is really awesome and smooth, especially compared to the half-complete hacks that are tokio and the Rust async system. Did I mention how nice the built-in channels are, and how a bunch of places in the standard lib use them? And easy cross-compilation is pretty nice too. And you gotta love that massive standard library. And I suppose not having to wrestle with complex too-clever generic hierarchies is nice sometimes too.

                                          1. 16

                                            side-note: i think it’s a bit off-topic (and meme-y, rust strike force, etc. :) to compare to rust when the article only speaks of go :)

                                              Using ? in a function that does a bunch of things that can fail is just so much nicer than having every other line be an if err != nil { return err }.

                                            i really like the explicit error handling in go and that there usually is only one control flow (if we ignore “recover”). i guess that’s my favorite go-feature: i don’t have to think hard about things when i read them. it’s a bit verbose, but that’s a trade-off i’m happy to make.

                                            1. 7

                                              i really like the explicit error handling in go

                                              I would argue that Go’s model of error handling is a lot less explicit than Rust’s - even if Go’s is more verbose and perhaps visually noticeable, Rust forces you to handle errors in a way that Go doesn’t.

                                              1. 1

                                                  I have just read up on Rust’s error handling; it seems to be rather similar, except that return types and errors are put together as a “Result”: https://doc.rust-lang.org/book/ch09-00-error-handling.html

                                                my two cents: i like that i’m not forced to do things in go, but missing error handling sticks out as it is unusual to just drop errors.

                                                1. 4

                                                    Well, since it’s a Result, you have to manually unwrap it before you can access the value, and that forces you to handle the error. In Go, you can forget to check err for nil, and unless err goes unused in that scope, you’ll end up using the zero value instead of handling the error.

                                                  1. 1

                                                    i like that i’m not forced to do things in go, but missing error handling sticks out as it is unusual to just drop errors

                                                    The thing is, while it may be unusual in Go, it’s impossible to “just drop errors” in Rust. It’s easy to unwrap them explicitly if needed, but that’s exactly my point: it’s very explicit.

                                                2. 3

                                                  The explicit error handling is Very Visible, and thus it sticks out like a sore thumb when it’s missing. This usually results in better code quality in my experience.

                                                  1. 2

                                                    It did occur to me that it may come off like that :D It’s harder to make interesting statements about a language without comparing it to its peers.

                                                      IMO, Rust and Go being rather different languages with different trade-offs that are competing for about the same space almost invites comparisons between them. Kind of like how tempting it is to write comparisons between Ruby, Python, and JavaScript.

                                                    1. 1

                                                      I think Swift fits in quite well in-between. Automatic reference counting, so little need to babysit lifetimes, while using a powerful ML-like type system in modernised C-like syntax.

                                                  2. 15

                                                    But on Go’s side, Go concurrency is really awesome and smooth

                                                    Concurrency is an area where I feel Go really lets the programmer down. There is a simple rule for safe concurrent programming: no object should be both mutable and shared between concurrent execution contexts at the same time. Rust is not perfect here, but it uses the unique ownership model and the Send trait to explicitly transfer ownership between threads so you can pass mutable objects around, and the Sync trait for safe-to-share things. The only things that are safe to share in safe Rust are immutable objects. You can make other things implement Sync if you’re willing to write unsafe Rust, but at least you’re signposted that here be dragons. For example, Arc in Rust (atomic reference counting) gives you a load of read-only shared references to an object and the ability to create a mutable reference if there are no other outstanding references.

                                                    In contrast, when I send an object down a channel in Go, I still have a pointer to it. The type system gives me nothing to help avoid accidentally aliasing an object between two threads. To make things worse, the Go memory model is relaxed-consistency atomic, so you’re basically screwed if you do this. To make things even worse, core bits of the language semantics rely on the programmer not doing this. For example, if a slice lives in an object that is shared between two goroutines, both can racily update it. The slice header contains a base pointer and a length, so you can see tearing: the length from one slice and the base from another. Now you can copy it, dereference it, and read or write past the end of an array. This is without touching the unsafe package: you can violate memory safety (let alone type safety) purely in ‘safe’ Go, without doing anything that the language helps you avoid.
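
                                                    The aliasing half of that complaint is easy to demonstrate deterministically (the slice-tearing race itself needs two racing goroutines, so this sketch only shows the shared mutable state that makes it possible):

                                                    ```go
                                                    package main

                                                    import "fmt"

                                                    func main() {
                                                    	ch := make(chan []int, 1)
                                                    	s := []int{1, 2, 3}
                                                    	ch <- s   // "transfer" the slice to another goroutine...
                                                    	s[0] = 99 // ...but nothing stops the sender from mutating it afterwards
                                                    	got := <-ch
                                                    	fmt.Println(got[0]) // the receiver observes the sender's write: 99
                                                    }
                                                    ```

                                                    Both sides now hold a live, mutable view of the same backing array; Rust’s Send would have rejected this at compile time.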

                                                    I wrote a book about Go for people who know other languages. It didn’t sell very well, in part because it ended up being a long description of things that Go does worse than other languages.

                                                    1. 2

                                                      That’s a worthwhile point. I haven’t been bitten by the ability to write to Go objects that have already been sent down a channel yet, but I haven’t worked on any large-scale, long-term Go projects. I’ve found it straightforward enough to just not use objects after sending. But then, the reason why we build these fancy type systems with such constraints is that even the best developers have proved to be not very good at consistently obeying these limits on large-scale projects.

                                                      I’m hoping that the Rust issues with async and tokio are more like teething pains for new tech than a fundamental issue, and that eventually, it will have concurrency tools that are both as ergonomic as Go’s and use Rust’s thread safety rules.

                                                      1. 4

                                                        I’ve found it straightforward enough to just not use objects after sending.

                                                        This is easy if the object is not aliased, but that requires you to have the discipline of linear ownership before you get near the point that sends the object, or to only ever send objects allocated near the sending point. Again, the Go type system doesn’t help at all here, it lets you create arbitrary object graphs with N pointers to an object and then send the object. The (safe) Rust type system doesn’t let you create arbitrary object graphs and then gives strong guarantees on what is safe to send. The Verona type system is explicitly designed to allow you to create arbitrary (mutable or immutable) object graphs and send them safely.

                                                    2. 9

                                                      And using modules through Cargo is just so nice, while Go modules is kind of a messy hack in comparison.

                                                      I have always found Rust’s module system completely impenetrable. I just can’t build a mental model of it that works for me. I always end up just putting keywords and super:: or whatever in front in various combinations until it happens to work. It reminds me of how I tried to get C programmes to compile when I was a little kid: put more and more & or * in front of expressions until it works.

                                                      And of course they changed in Rust 2018 as well which makes it all the more confusing.

                                                      1. 3

                                                        Yeah, I’ve had the same experience. Everything else about Cargo is really nice, but modules appear to be needlessly complicated. I have since been told that they are complicated because they allow you to move your files around in whatever crazy way you prefer without having to update imports. Personally I don’t think this is a sane design decision. Move your files, find/replace, move on.

                                                        1. 2

                                                          And of course they changed in Rust 2018 as well which makes it all the more confusing.

                                                          One of the things they changed in Rust 2018, FYI, was the module system, in order to make it a lot more straightforward. Have you had the same problem since Rust 2018 came out?

                                                        2. 6

                                                          For me, Go is the continuation of C with some added features like CSP. Rust is/was heavily influenced by the ML family of languages, which is extremely nice. I think the ML group is superior in many ways to the C group. ADTs are the most trivial example of why.

                                                          1. 4

                                                            I generally agree. I like ML languages in theory and Rust in particular, but Rust and Go aren’t in the same ballpark with respect to developer productivity. Rust goes to impressive lengths to make statically-managed memory user-friendly, but it’s not possible to compete with GC. It needs to make up the difference in other areas, and it does make up some of the difference in areas like error handling (?, enums, macros, etc and this is still improving all the time), IDE support (rust-analyzer has been amazing for me so far), and compiler error messages, but it’s not yet enough to get into a competitive range IMO. That said, Rust progresses at a remarkable pace, so perhaps we will see it get there in the next few years. For now, however, I like programming in Rust–it satisfies my innate preference to spend more time building something that is really fast, really abstract, and really correct–but when I need to do quality work in a short time frame in real world projects, I still reach for Go.

                                                            1. 9

                                                              To me Go seems like a big wasted opportunity. If they’d only taken ML as a core language instead of a weird C+gc hybrid, it would be as simple as (or simpler than) it is now, but much cleaner, without nil or the multi-return hack. Sum types and simple parametric polymorphism would be amazing with channels. All they had to do was to wrap that in the same good toolchain with fast compilation and static linking.

                                                              1. 2

                                                                Yeah, I’ve often expressed that I’d like a Go+ML-type-system or a Rust-lite (Rust with GC instead of ownership). I get a lot of “Use OCaml!” or “Use F#”, but these miss the mark for a lot of reasons, but especially the syntax, tooling, and ecosystem. That said, I really believe we overemphasize language features and under-emphasize operational concerns like tooling, ecosystem, runtime, etc. In that context, an ML type system or any other language feature is really just gravy (however, a cluster of incoherent language features is a very real impediment).

                                                                1. 1

                                                                  Nothing is stopping anyone from doing that. I’d add that they make FFI to C, Go, or some other ecosystem as easy as Julia for the win. I recommend that for any new language to solve performance and bootstrapping problem.

                                                                2. 3

                                                                  Then, you have languages like D that compile as fast as Go, run faster with LLVM, have a GC, and recently an optional borrow checker. Contracts, too. You get super productivity followed by as much speed or safety as you’re willing to put in effort for.

                                                                  Go is a lot easier to learn, though. The battle-tested, standard libraries and help available on the Internet would probably be superior, too.

                                                                  1. 3

                                                                    I hear a lot of good things about D and Nim and a few others, but for production use case, support, ecosystem, developer marketshare, tooling, etc are all important. We use a lot of AWS services, and a lot of their SDKs are Python/JS/Go/Java/dotnet exclusively and other communities have to roll their own. My outsider perspective is that D and Nim aren’t “production ready” in the sense that they lack this sort of broad support and ecosystem maturity, and that’s not a requirement I can easily shrug off.

                                                                    1. 2

                                                                      I absolutely agree. Unless they’re easy to hand-roll, those kinds of things far outweigh advantages in language design. It’s what I was hinting at in the 2nd paragraph.

                                                                      It’s also why it’s wise for new languages to plug into existing ecosystems. Clojure on Java being best example.

                                                            1. 6

                                                              It’s cool that they were able to collaborate so smoothly with the V8 team to get the whole thing working better for everyone. But on the other hand, I wonder how concerned I should be that more and more of the infrastructure of web browsers, and therefore the effective definition of the web itself, seems to be converging into a single codebase.

                                                              1. 4

                                                                Thankfully, WebKit still maintains a different regular expression engine than Blink. It happens to be the same YARR which Gecko abandoned in favor of Irregexp.

                                                                https://github.com/WebKit/webkit/tree/master/Source/JavaScriptCore/yarr

                                                                1. 3

                                                                  I wonder how concerned I should be

                                                                  You should be very concerned. Monocultures don’t make things “better for everyone,” they subvert standards, stifle innovation, and restrict your freedom of choice. They can make genuine catastrophes of the occasional security flaw, too. We have much less diversity in browsers than would be healthy for such a pervasive and essential platform.

                                                                  1. 1

                                                                    I agree, and that’s also why I was disappointed when Microsoft decided to abandon their Edge browser code and move to a Chromium fork. That leaves us with only 2 truly independent web browser implementations in the world.

                                                                    I didn’t like Microsoft’s monopolistic shenanigans back in the ’90s, but I don’t trust Google with a monopoly over the web either.

                                                                1. 7

                                                                  I was really hoping this article would be about how Mozilla decided to write a new, awesome RegExp library written in Rust that implemented the ECMAScript standard. I guess it makes sense to re-use Irregexp, since it’s already written and optimized…

                                                                  1. 1

                                                                    Well, if they were going in that direction, I would hope they would have used the already-existing Rust regex library and added any features needed to make it do the job.

                                                                  1. 30

                                                                    Not entirely on topic, but related: your website has a banner which says

                                                                    By continuing to browse the site, you are agreeing to the use of cookies.

                                                                    The EU data protection body recently stated that scrolling does not equal consent, see for instance https://techcrunch.com/2020/05/06/no-cookie-consent-walls-and-no-scrolling-isnt-consent-says-eu-data-protection-body/

                                                                    1. 25

                                                                      Then again, he is the type who “cares about SEO”.

                                                                      1. 3

                                                                        Wait, what’s wrong with caring about SEO?

                                                                        1. 5

                                                                          There was a time when SEO was synonymous with tricking search engines into featuring your site. The running theme was that SEO was a set of dark patterns and practices to boost your ranking without investing in better content.

                                                                          For many people SEO still has similar connotations.

                                                                          1. 16

                                                                            There was a time …

                                                                            Did that change?

                                                                            1. 0

                                                                              Did that change?

                                                                              Based on my recent efforts at looking into these things from a developer point of view, I would say yes it’s changing.

                                                                            2. 6

                                                                              AFAIK, there’s still considered to be “White hat” and “Black hat” SEO. White hat SEO involves stuff like organizing links and URLs well, including keywords appropriate to what you actually do, writing quality content, and per this article, encouraging links to your domain and paying attention to whether they use nofollow. Generally, stuff that doesn’t go against the spirit of what the search engine is trying to do, tries to get more legitimate users who genuinely want your product to find it and learn about it more easily etc.

                                                                              Black hat SEO involves stuff like spinning up link farms, spamming links across social media and paying for upvotes, adding a bazillion keywords for everything under the sun unrelated to what you’re doing, etc. Generally trying to trick search engines and visitors into doing things against their purposes.

                                                                              It may feel a little dirty at times, but it’s probably tough to get a business going in a crowded market without paying attention to white hat SEO.

                                                                              1. 2

                                                                                It may feel a little dirty at times, but it’s probably tough to get a business going in a crowded market without paying attention to white hat SEO.

                                                                                This is a common issue for healthcare sites. If you have bona fide information that’s reviewed and maintained by experts, it competes with sites selling counterfeits, outdated information, conspiracy theories, etc. These sites try every trick they can to scam people. If you don’t invest in SEO, you are wasting people’s time with bad information in most cases, but some people can be harmed. In the US this can boil down to a freedom of speech discussion, but if you work internationally you have clearer legal obligations to act.

                                                                                Search engines do want to help directly in some cases, but there is still an expectation that the good guys are following what would be considered white hat SEO practices. White hat SEO often has other benefits with accessibility, so I think it’s worth listening.

                                                                                1. 3

                                                                                  Yep, this is a bit unfortunately true. IIRC, StackOverflow had to implement SEO practices as, without it, other sites that scraped their content and rehosted it were actually getting higher rankings in Google than SO themselves.

                                                                              2. 3

                                                                                Makes sense. I wish more people (developers in particular) would start questioning these connotations. The present-day advice on how to do SEO right is a lot different from what it used to be.

                                                                                1. 8

                                                                                  As the parent said, SEO originally meant “hacking” google search rankings but over time, Google eliminated these hacks one by one, saying the whole time that their goal was to deliver search results that were relevant and useful. However, the way they define “relevant and useful” is primarily:

                                                                                  1. How closely the page content matches the search query
                                                                                  2. How many “reputable sources” link to the page
                                                                                  3. How long visitors stay on the page (usually directly related to length)
                                                                                  4. How many people click on the link

                                                                                  So SEO became less about technical trickery and is now more about human trickery. This resulted in the rise of what I call “blogspam”, i.e. blogs that crank out content with affiliate links and ads peppered throughout. This might not be a bad thing per se, except that most of the time I land on blogspam, I am inundated by pop-up dialogs, cookie warnings, ads and miles of empty content designed to make you Just Keep Scrolling or hook you with an auto-play video. Because both of these things keep you on the page longer, which increases their search rankings.

                                                                                  This isn’t always quite so bad for tech-related queries, where StackOverflow and its ilk have cornered nearly every result, but try searching for something generic like “hollandaise sauce recipe” or “how to get rid of aphids” or “brakes on a Prius” and you will drown in an unending sea of blogspam.

                                                                                  This has been another episode of “What Grinds bityard’s Gears”

                                                                                  1. 1

                                                                                    This isn’t always quite so bad for tech-related queries, where StackOverflow and its ilk have cornered nearly every result, but try searching for something generic like “hollandaise sauce recipe” or “how to get rid of aphids” or “brakes on a Prius” and you will drown in an unending sea of blogspam.

                                                                                    I feel the pain, but is this less about SEO and more about how certain people have developed business opportunities? SO has largely replaced expertsexchange in search results, but in a way this was one of the founder’s aims that has been mentioned in various places.

                                                                                    The StackExchange network of sites has been trying to expand to cover, your example of “how to get rid of aphids”, but it hasn’t yet been successful. There is inertia with getting these sites off the ground and employing people to write quality questions and answers, but this doesn’t align with the community ethos. Arguably, it would be better for the web since you’d get a better experience when you click through search results. I wish there was an easier answer.

                                                                                    I don’t see why there couldn’t be a recipe site with the quality user experience you associate with SO. There are however a lot of entrenched interests and competition. People also have a tendency of sharing copyrighted recipes they’ve copied down from friends or family. Incumbents won’t react like expertsexchange to SO.

                                                                                    1. 1
                                                                                      1. How closely the page content matches the search query

                                                                                      Since you put “relevant and useful” in quotes, I’m assuming you feel that a search query matching the page content is not a good signal of whether a search result is good. I’m curious why you think that?

                                                                                      Just Keep Scrolling or hook you with an auto-play video. Because both of these things keep you on the page longer, which increases their search rankings.

                                                                                      That’s actually not true. Google made a blog post a while ago mentioning that pop-up dialogs (or anything that reduces content accessibility) reduces search rankings.

                                                                                      In any case, while I do agree that not all SEO advice is (or has historically been) good, the blanket statement that all SEO advice is bad is also not correct (or fair). Besides, the black-hat SEO advice is slowly becoming more and more pointless as Google gets smarter at figuring things out.

                                                                                2. 3

                                                                                  SEO is like ads on the internet: in theory it’s a good thing; it helps people find relevant content and helps companies make more profit. But in reality, it’s just a pissing contest over who exploits the user most. If a company made some money by using shady SEO tricks, then we’ll do it 2x more intensively, so we’ll earn some money too. Who cares that the search engine results will be less accurate?

                                                                                  1. 1

                                                                                    To be honest, try looking up the modern SEO recommendations (black hat SEO is becoming more and more pointless as Google gets smarter at figuring things out). You’ll be pleasantly surprised.

                                                                              3. 6

                                                                                The funny part is that the only cookie used on this site (that I can see) is the cookie that stores the fact that the user accepted the use of cookies :D

                                                                                Also, the law never forced the display of the “cookie wall” for purely-technical cookies (eg: login and such), but only those aimed at tracking.

                                                                              1. 2

                                                                                This list has been shared a lot; it’s probably more interesting to ponder how many of these are still relevant in today’s environments.

                                                                                Many of these are built around the ideas of desktop software, distributed physically. Releases are thus infrequent, and it’s easy to miss or let slide things like daily builds. With web software being more the norm now, it’s a much bigger advantage and much easier to do Continuous Deployment, an even better version.

                                                                                Writing specs seems to call back to the era of Waterfall, which admittedly did fit somewhat better with shrinkwrap software. It seems rather less relevant in an Agile era, where we now look to get an MVP out as fast as possible and iterate rapidly on user feedback, based on the fact that we probably don’t know enough about the business domain to write a good spec. It certainly helps if you can push new releases and get customers using them in hours instead of months.

                                                                                In theory it’s still nice to have professional testers, but many businesses seem to have replaced that with having the customer do the test, using feature flags to expose features to a small percentage of users, etc.

                                                                                Having quiet working conditions never goes out of style, but also never got much easier to actually get in a real business.

                                                                                Getting the best tools is still true, though somewhat less relevant today with how much more slowly the hardware world seems to move.

                                                                                Source control is a big duh nowadays. But it seems weird to not mention automated testing. Nowadays, you’d expect anything to have some sort of automated test suite, even if it may not cover as much as you would like.

                                                                                1. 3

                                                                                  Writing specs seems to call back to the era of Waterfall, which admittedly did fit somewhat better with shrinkwrap software. It seems rather less relevant in an Agile era, where we now look to get a MVP out as fast as possible and iterate rapidly on user feedback

                                                                                  I used to think this, but in retrospect “we don’t need a spec” is more often used as an excuse to avoid hard thinking about the problem. Just because you can’t have all the answers up-front doesn’t mean you shouldn’t at least try to come up with a coherent design.

                                                                                  1. 1

                                                                                    This. Consider two people working on a client and a server or two micro services. Building those without a specification would be like building a bridge from two ends, hoping that the ends line up without measuring before laying the foundations.

                                                                                    1. 1

                                                                                      It might be apocryphal since I can’t find a good source, but I recall hearing about basically this happening on the St. Louis Loop Trolley project: they built the rail line from two ends heading towards each other and, upon meeting in the middle, discovered that they didn’t in fact meet.

                                                                                1. 3

                                                                                  The only way to “settle” this is to forget about passwords entirely. I can fully control a remote machine using public-key cryptography without ever having to deal with dirty passwords. Why can’t I read my webmail or buy stuff from an online shop the same way? It is ridiculous that in the age of public-key crypto we are still using passwords.
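
                                                                                  What that could look like mechanically (a hedged sketch, not any particular site’s protocol): the server stores only a public key and verifies a signature over a fresh random challenge, so no shared secret ever crosses the wire or sits in the server’s database.

                                                                                  ```go
                                                                                  package main

                                                                                  import (
                                                                                  	"crypto/ed25519"
                                                                                  	"crypto/rand"
                                                                                  	"fmt"
                                                                                  )

                                                                                  func main() {
                                                                                  	// Enrollment: the client generates a keypair and registers
                                                                                  	// only the public key with the service.
                                                                                  	pub, priv, _ := ed25519.GenerateKey(rand.Reader)

                                                                                  	// Login: the server sends a fresh random challenge...
                                                                                  	challenge := make([]byte, 32)
                                                                                  	rand.Read(challenge)

                                                                                  	// ...the client signs it with its private key...
                                                                                  	sig := ed25519.Sign(priv, challenge)

                                                                                  	// ...and the server verifies against the stored public key.
                                                                                  	fmt.Println(ed25519.Verify(pub, challenge, sig))
                                                                                  }
                                                                                  ```

                                                                                  This is essentially what SSH public-key auth already does; the hard part, as the replies below note, is key distribution and recovery for ordinary users, not the crypto.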

                                                                                  1. 3

                                                                                    Do you think that non technical users can and will use public key crypto? I mean, I guess they are every time they visit a site with an https:// in the URL.

                                                                                    Is it just that the right tools haven’t been found yet? I was on a call with HYPR a few days ago (disclaimer, we’ve done some work integrating with their solution): https://www.hypr.com/why-hypr/ and it seems pretty sweet, but then we move from securing knowledge to securing devices.

                                                                                    Something has to hold the private key, after all.

                                                                                    1. 3

                                                                                      I doubt they will be able to manage private keys well.

                                                                                      Servers indeed are doing that now with HTTPS, but we expect server admins to be a little better at these things. And they still fail more often than we would like. IIRC, HPKP was deprecated because it was too easy for sysadmins to get wrong, or to have used against them by malicious actors, rendering their domain semi-permanently inaccessible. Are we going to expect casual users to do better than them?

                                                                                      Casual users may have even messier use cases. Say you have 5 devices that you want to be able to access all of your accounts from. Now you’d have to register all 5 public keys with every service you want secure access to. And correctly manage dropping the right key from all of them if you lose or discard a device, and add one to all of them if you get a new device.

                                                                                      1. 2

                                                                                        Build the protocol into the browser, have it manage your key. Browser vendors can even store an encrypted version of your key on their servers (optionally) to allow you to regain access if you lose it/sync to multiple devices.

                                                                                        Edit: Like BitID but instead of using a bitcoin private key you use any other type of private key, and it’s in your browser instead of in another app.

                                                                                        1. 2

                                                                                          You would still have to synchronize the private key between your devices. And even if nowadays your browser can sync itself across devices, that sync is done through an online account. Secured with a password.

                                                                                          Passwords are going to last, because they are immaterial, so you can have them with you at all times “just” by remembering them. Physical private keys are too complex to manage and too easy to lose, thus locking you out. The last option we have is biometric identification, which would be easier for everyone (nothing to remember, everything with you at all times), but that is a further step backwards for privacy…

                                                                                          1. 1

                                                                                            Mozilla tried this with Persona (née BrowserId), and it did not take off.

                                                                                      1. 1

                                                                                        Thanks, this is interesting to read! I’ve only used 1 site that used email-based login, and mostly found it mildly annoying, as somebody who mostly uses a password manager and Google OAuth for logins.

                                                                                        Regarding prefetches breaking your workflow, I think there are some hosted anti-spam systems that try to access URLs in emails to see if the page looks spammy before the email even gets delivered to the user. I wonder if any of the big services have some special sauce to handle this sort of thing, since single-click email verification emails seem to be pretty common. I suppose this is also why they say that you shouldn’t change anything based on GET requests, but I’m not sure how to get around that while providing a single-click email experience.
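                                                                                        One common way to square “no state changes on GET” with a single-click experience is to have the emailed link’s GET render only a tiny auto-submitting form, and do the actual verification on the POST: prefetchers and anti-spam scanners fetch the page but don’t run scripts or submit forms. A hypothetical sketch of such an interstitial page (the endpoint name and helper are invented; the token is assumed to be URL-safe random hex, so no escaping is done here):

```javascript
// Hypothetical: render the page served for a GET on an email-verification
// link. Real browsers auto-submit the form via the script; link scanners
// see only an inert page, so the state change happens solely on the POST.
function confirmationPage(token) {
  return [
    "<!doctype html>",
    '<form id="confirm" method="post" action="/verify-email">',
    // Token is assumed URL-safe (e.g. random hex), so it needs no escaping.
    '  <input type="hidden" name="token" value="' + token + '">',
    '  <button type="submit">Confirm my email</button>',
    "</form>",
    // Auto-submit for users with JS; the button remains as a fallback.
    "<script>document.getElementById('confirm').submit();</script>",
  ].join("\n");
}
```

The visible button doubles as a fallback for clients with scripts disabled, so the flow degrades to two clicks rather than breaking.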

                                                                                        1. 1

                                                                                          Are you sure you want to be handling passwords yourself? Shouldn’t you be using a third-party authentication provider? That way, you run no risk of getting compromised and leaking (reused) passwords.

                                                                                          1. 11

                                                                                            Handling passwords is really not that complicated. There are libraries around to do it, and quite frankly, it’s not magic. Just use bcrypt or something similar.

                                                                                            1. 2

                                                                                              It’s not only the password in the database, but also the password in transit. For example, Twitter managed to log passwords:

                                                                                              Due to a bug, passwords were written to an internal log before completing the hashing process.

                                                                                              The risk remains, it’s just more subtle and in places you might not immediately think of instead.

                                                                                              1. 4

                                                                                                If anything, that’s an argument against “just let someone else do it”.

                                                                                                You can review your own systems, you can organise an audit for them.

                                                                                                How do you plan to review Twitter’s processes to ensure they do it securely, given that they already have precedent for screwing the pooch in this domain?

                                                                                                1. 1

                                                                                                  It’s easier in smaller systems.

                                                                                                  1. 1

                                                                                                    Well, there’s a risk with anything you do when dealing with secrets; you can leak tokens or whatnot when using external services too.

                                                                                                    As I mentioned in another comment, the self-host use case makes “just use an external service” a lot harder. It’s not impossible, but I went out of my way to make self-hosting as easy as possible; this is why it can use both SQLite and PostgreSQL for example, so you don’t need to set up a PostgreSQL server.

                                                                                                  2. 2

                                                                                                    I would note that it’s not so much just the handling of passwords, but getting all of the workflows for authentication and session management right too. That’s why I like libraries like Devise for Rails that add the full set of workflows and DB columns already using all best-practices to your application, with appropriate hooks for customization as needed.

                                                                                                  3. 2

                                                                                                    you run no risk of getting compromised and leaking (reused) passwords

                                                                                                    You still have to handle authentication correctly, and sometimes having an external system to reason about can expose other bugs in your system.

                                                                                                    I recall wiring up Google SSO on an app a few years ago and thinking that configuring Google to only let people on our domain through was sufficient to stop anyone else signing in with a Google account. Turns out in certain situations you could authenticate to that part of the app using a Google account that wasn’t in our domain (we also had Google SSO for anyone in the same application, albeit at a different path). Ended up having to check the domain of the user before we accepted their authentication from Google, even though Google was telling us they’d authenticated successfully as part of our domain.
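                                                                                                    A sketch of that extra check: Google ID tokens for Workspace accounts carry a hosted-domain (“hd”) claim, which the relying app should verify itself before creating a session. The domain and payload shape below are placeholders, and real code would of course first verify the token’s signature, audience, and expiry:

```javascript
// Sketch: after Google reports a successful sign-in, independently confirm
// the account belongs to the allowed Workspace domain. Consumer Gmail
// accounts carry no "hd" claim at all, so strict equality rejects them
// along with accounts from other organisations' domains.
function isAllowedAccount(payload, allowedDomain) {
  return payload.hd === allowedDomain &&
         typeof payload.email === "string" &&
         payload.email.endsWith("@" + allowedDomain);
}
```

Checking both the claim and the email suffix is belt-and-braces; the point is not to treat “Google said OK” as the whole authorization decision.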

                                                                                                    1. 1

                                                                                                      If password hashing is a hard task for your project, I’d argue that’s because your language of choice is severely lacking. In most languages or libraries (where it isn’t part of the stdlib) it should be one function call to hash a new password, or a different single function call to compare an existing hash to a provided password.

                                                                                                      This idea that password hashing is hard and thus “we should use X service for auth” has never made any sense to me, and I don’t quite understand why it persists.

                                                                                                      I have never written a line of Go in my life, but it took me longer to find out that the author’s project is written in Go than it did to find a standard Go module providing bcrypt password hashing and comparison.

                                                                                                      1. 1

                                                                                                        And salting! Conveniently, many of these libraries store the salt as part of the hash string, making comparison easy and cracking hard.

                                                                                                        1. 1

                                                                                                          I would consider it a bug for a library/function to (a) require the developer to provide the salt, or (b) not include the salt in the resulting string.

                                                                                                      2. 1

                                                                                                        Problem is what provider do you choose to use? Do you just go and “support everyone”, or do you choose one that you hope all your users use, and that you are in support of (I don’t support nor have accounts at Facebook, Twitter, and Google), which narrows it down quite a bit. And what about those potential users that aren’t using your chosen platform(s)? Are you gonna provide password-based login as an alternative?

                                                                                                      1. 5

                                                                                                        This feels like a useful counterpoint to me. The Unix-y CLI tools and their ability to be chained together can be very useful sometimes. But chaining too many together in too elaborate ways tends to run into weird issues, to the point where it becomes easier to use a programming language or all-in-one application where things are more likely to work together smoothly.

                                                                                                        1. 3

                                                                                                          Definitely. I personally find it disappointing that it’s such a struggle for programmers to accept that there is a spectrum of tools. Each one has a trade-off and I would expect this community to prefer those of Unix-y CLI tools for personal use. At the same time, a not insignificant number of people here may be using those same tools to build software at the other end of the spectrum for other sets of users.

                                                                                                          I personally find it a joy to introduce users of a monolith that’s not quite right for them to the subset of tools that solves their problem exactly. That’s a very satisfying experience, and they often go off and apply these tools to other problems and replace the inappropriate monoliths. It’s healthy if it goes both ways and people get what they need to make their lives better.

                                                                                                        1. 2

                                                                                                          I was just working on another project that needed to build a few lines’ worth of HTML in JS to add to an existing webpage. I’ve built a few similar things before, and never really liked any of the solutions. This time, I decided the least-worst option was to just write the HTML directly to a string and insert it into the document using insertAdjacentHTML. The code was agreeably short and simple, but it felt a little ugly to then need a bunch of querySelector calls to get, by ID, the DOM elements I had just created.

                                                                                                          In the past, I’ve tried creating elements using DOM JS calls, but that’s an awful lot of boilerplate per element. Using jQuery doesn’t seem to be much less ugly. The use cases I’ve worked with have been simple enough that it seems a bit much to pull in a major JS view library to make the code slightly simpler.
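                                                                                                          One possible middle ground between raw string concatenation and a full view library is a tagged template that escapes interpolated values, producing a string safe to hand to insertAdjacentHTML. The helper names here are just a sketch, not from any library:

```javascript
// Escape a value for safe interpolation into an HTML string.
function escapeHtml(value) {
  return String(value)
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;");
}

// Tagged template: literal chunks are trusted markup, interpolations are
// escaped, so user-supplied data can't inject elements or attributes.
function html(strings, ...values) {
  return strings.reduce(
    (out, chunk, i) =>
      out + chunk + (i < values.length ? escapeHtml(values[i]) : ""),
    ""
  );
}

// Usage (in a browser):
//   container.insertAdjacentHTML("beforeend",
//     html`<li class="item">${userSuppliedName}</li>`);
```

It keeps the short-and-simple property of building strings while removing the main hazard, though it does nothing about the follow-up querySelector calls.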

                                                                                                          1. 14

                                                                                                            My own story: there was a triangle of 3 people working with a new customer, each of whom thought the next was the one handling communication with the customer. Turns out, the person who was actually supposed to be handling it was me. Upshot was, the customer didn’t get any of their time-sensitive requests handled and ditched us. My first job, and I lost us a customer paying more than my salary was worth.

                                                                                                            Not computers, but still technology: I was working on an oil rig drilling a natural gas well. The client I’d worked with before, but it was a new supervisor, new rig, and my first job working as senior person. My job was “geosteering”, basically being on-site geologist and letting the drill crew know where in the target rock formation they were to keep up with the minor weaves and wobbles of the layer of rock they wanted to be in. It is as much art as anything else, and I was pretty okay at it, but I screwed up badly – the target formation dipped down in a way that didn’t show up on any of the nearby wells or seismic surveys, and I misinterpreted what was going on and didn’t notice for nearly a day. Looked dumb to my superior who’d suggested earlier that might be the case, I had to call up the head of the drill crew with 20 years of experience on me to admit I’d screwed up, etc. I was just lucky my mistake had taken us above the target area, since right below it was a very hard rock formation that would have taken days to drill out of. IIRC a day of drill rig time starts at 5 figures and goes up from there.

                                                                                                            Kinda surprised I don’t have more of these, in retrospect. Plenty of more minor ones, like pulling the wrong drive out of a RAID array and needing to restore a customer’s server from backups, but those are less dramatic. Has any data center tech of any experience not done that once?

                                                                                                            Edit: shout out to people having really bad days, like dropping a satellite on the floor: https://upload.wikimedia.org/wikipedia/commons/4/43/NOAA-N'_accident.jpg

                                                                                                            1. 5

                                                                                                              Reminded me of one of my big screwups in my oilfield days. One of the things our crews did when we got to the rig was to install a special pressure sensor on the main drilling mud line, which is pressurized to 1500-3000psi or so during normal operations. The rig manager assigned the least experienced rig worker at the site to help me, as it normally goes. He showed me where their mud line hookup was and where the rig’s pile of spare parts and adapters was, and I grabbed an adapter that looked like it would fit, and we proceeded to get it all screwed together. Note that these are 2” NPT connections that require enthusiastic action on a 4 foot pipe wrench to install or remove.

                                                                                                              Anyways, everything seemed to work, and we went on our way, drilling according to the plan. Then 3 days later, the rig manager came back to our trailer with the adapter I had used to tell me that it was an adapter meant for water pipe, and only rated to 200psi. We had been drilling with it in place, at 2500psi, the whole time, over 10 times the rated pressure. That thing could have let go at any moment, and could have easily killed somebody if they happened to be in the way at the time.

                                                                                                              The rig manager promptly found an adapter with the correct rating, and we reinstalled and got back to drilling. That was quite the reality check for me. I learned to be much more careful about verifying the ratings of adapters and slings and other kinds of parts. There’s a reason why every company in the industry has hardcore policies about things like throwing out and destroying things that don’t have easily visible ratings.

                                                                                                              Speaking of safety policies, at least whoever was working on that satellite had correctly roped off the area where it would hit if it fell. Fortunately, that thing didn’t hurt anybody when it fell, besides somebody’s pride, budget, and timeline.

                                                                                                              1. 2

                                                                                                                Oof. That sounds like an oilfield story, yeah. I’ve screwed up multiple other things now that I’ve unrepressed the memories, but not quite like that. I know a LOT more about machinery now than I did at the time, so all I really knew then was that if I ever needed to touch something mechanical, I had to get the rig crew to help. Ideally by asking someone who knew what they were doing, as your tale demonstrates.

                                                                                                                Out of curiosity, can I ask what you did in the oilfield? The first things that come to mind would be MWD or whatever the IT system that reports all the rig data is called… I’ve forgotten so much of the random-ass terminology. But there’s so many other things going on that you could have been doing something I’ve never heard of.

                                                                                                                1. 2

                                                                                                                  That was MWD all right. We needed special pressure sensors that were sensitive to pressure changes in the frequency ranges our tools operated in. Funny career path, I ended up later on working with the people who specced out and designed those sensors and the software that demodulated the digital data being sent. Data management on the rig tends to be a hodgepodge of companies all measuring slightly different things for different reasons and reporting that data in different ways.

                                                                                                                  I’ve screwed up plenty of software stuff too over the years. Alas, nothing comes to mind that makes a good story - mostly not too dramatic consequences, usually too deep in the weeds of some piece of technology to explain simply.

                                                                                                            1. 49

                                                                                                              I saw this described on the IRC channel as ‘what-color-are-your-underpants threads’ - lots of easy engagement stuff, crowding more interesting stuff off the front page. My perception is that there is now a lot less of the stuff that differentiated lobste.rs from the other hundredty-dillion tech sites - it was good at bridging computer-science-proper topics and real applications, e.g. someone’s experience report of using formal verification in a real product, or how property testing uncovered some tasty bug in some avionics, or how to synthesize gateware with Z3. That sort of thing.

                                                                                                              It doesn’t have to be the case that underwear-threads exist at the cost of quickcheck threads, but as they increasingly crowd the front page and stay there, it means the slightly more cerebral stuff has less chance to be seen, and new people get a different perception of what lobste.rs is about, and so the tone of the place gradually shifts. Some people might think that’s fine, I think it’s a shame. Differentiation is good.

                                                                                                              As for ‘if it gets upvotes then by definition it belongs’, I’ve always thought that ‘just leave it to the market’ attitude is total and utter cow-dung. Of course there should be regulation. If you applied that confusion everywhere you’d have sport be replaced by gladiatorial combat, McDonalds purchasing every farm until that was all you could eat, and other kinds of dystopia that unfortunately some americans are beginning to suffer a flavor of (choosing between insulin and death, $1000 toilet roll…). There is nothing inevitable about letting upvotes decide the tone of the site, it’s not a fundamental physical force. You’re allowed to intervene, complain, and so on. It should be encouraged, I think.

                                                                                                              1. 21

                                                                                                                crowding more interesting stuff off the front page

                                                                                                                Come on, there’s very rarely more than one of these threads on the front page, how is that crowding?

                                                                                                                1. 7

                                                                                                                  Well I counted three at one point today, which is over 10% of the front page. I’d like to nip this virus in the bud! It’s too facile to make corona references but regardless, we can go from ‘15 deaths out of 300 million people, no big deal’ to We Have A Problem in a fairly short space of time.

                                                                                                                  One of the more useful and formative talks I watched when helping to start my business was Ed Catmull [computer graphics pioneer and former Pixar president]’s talk entitled ‘Keep your crises small’, in which he makes the case that businesses are fundamentally unstable and it’s especially hard to notice the bad stuff during periods of high growth or profitability. He contends that you must always have your hand on the tiller and make steering corrections before problems get too big. I see an analogous situation on lobste.rs.

                                                                                                                  Look at my posting history here. It’s crap. I am a consumer and not a contributor. I have no right to voice my opinion really because I have not done my bit to try and steer lobsters in the direction I want.

                                                                                                                  I am a mechanical engineer with no formal CS background and I stayed here merely because I learned a great deal, and my industry is one built on MScs and PhDs committing abominations in Excel and Matlab, in which a bit of solid CS and solid industrial best-practice would reduce the friction in aerospace R&D by an order of magnitude. It took me five years to get one of my customers to switch to python. Now one of them is using Hypothesis (!) and advocating its usage more widely in a reasonably large aerospace company. I am a True Believer in the value of Advocating the fruits of Computer Science in a field where most participants think the low hanging fruit lies elsewhere.

                                                                                                                  All I’ve been doing is sharing the good stuff that lobsters introduced me to. And this is why I lament the fact that it’s being muscled out by what vim colorscheme do we all prefer, and why I therefore am moved to leave a comment like the grandparent.

                                                                                                                  Yes, I will make more effort to upvote and comment on the bits of lobsters I value from now on.

                                                                                                                2. 11

                                                                                                                  is that there is now a lot less of the stuff that differentiated lobste.rs from the other hundredty-dillion tech sites

                                                                                                                  There is and it’s due to less demand. What the audience wants has changed. I was still doing submissions like you described. They rarely hit the front page. The things getting upvoted were a mix of Lobsters-like content and stuff that gets high votes on other sites. Sometimes cross-posted from those sites. I stopped submitting as much for personal reasons (priority shift) but lack of demand/interest could’ve done it by itself.

                                                                                                                  1. 8

                                                                                                                    I stopped submitting as much for personal reasons (priority shift)…

                                                                                                                    For what it’s worth, I noticed that you have been posting less. Hope all is well.

                                                                                                                    1. 12

                                                                                                                      I’ll message you the details if you want. It’s been a trip with me back to the Lord, surviving some Gone Girl shit, and facing the COVID workload right after. Right now, I’m focused on fighting COVID and the problems it causes however I can. A city-wide shortage of toilet paper, soap, cleaners, etc. and nurses having no alcohol/masks made me source them first. Gotta block hoarders and unscrupulous resellers, though.

                                                                                                                      Gonna have to find some web developers who can build a store or subscription service. Plan to let people pick up limited quantities that I order in bulk and resell just over cost. Might also scan what’s in local stores to reduce people’s time in them. After that, maybe a non-profit version of InstaCart with advantages that may or may not be easy to code. Got an anti-microbial scanner on the way for whatever.

                                                                                                                      Once everything settles, I’ll get back to my security projects. I just go where I’m needed the most. Worse, people aren’t social distancing here: they crowd around me constantly. COVID can kill me. So, I’m tired after work from holding my breath and dodging people up to 14 hrs a day. Had no energy for doing CompSci papers either.

                                                                                                                      So, there’s a brief summary of some things I’ve been up to, for anyone wondering.

                                                                                                                      1. 4

                                                                                                                        I’m sorry to hear that. I assumed that you must be busy with other stuff or taking a break, but I wouldn’t have guessed how hard of a time you were having. I hope that things start looking up for you soon.

                                                                                                                        1. 4

                                                                                                                          I really appreciate it. Everyone supporting these comments, too, more than I thought I’d see. I’m good, though. (taps heart) Good where I need to be.

                                                                                                                          The possibilities and challenges do keep coming, though. Hope and pray those of us fighting this keep making progress both inside ourselves and outside getting things done in the real world. I’ll be fine with that result. :)

                                                                                                                    2. 2

                                                                                                                      Speaking of which, where do you find your papers?

                                                                                                                      1. 6

                                                                                                                        I applied old-school methods of using search engines to paper discovery. I pay attention to good papers that cite other work. Each sub-field develops key words that are in most of the papers. I type them into DuckDuckGo and/or Startpage with quotation marks followed by a year. Maybe “pdf” with that. This produces a batch of papers. I open all of them glancing at abstracts, summaries, and related work. I’ll go 5-10 pages deep in search results. I repeat the process changing the terms and years to get a batch for that sub-field. Then, I used to post the better ones one by one over a week or two. Then, do a different sub- or sub-sub-field next.

                                                                                                                        The Lobsters didn’t like seeing it that way. Too much on the same topic. So, I started getting the batches, saving them in a file, batch another topic when I have time/energy, and trickling out submissions over time with varying topics. So, I might have 50-100 papers across a half dozen to a dozen topics alternating between them. I just pick one, submit it, and mark it as submitted. Eventually, when there’s not much left, I would just grab some more batches.

                                                                                                                        1. 2

                                                                                                                          Wow that’s amazing! Thank you so much for doing this! I’ve seen some really nice papers here but I didn’t realize there would be this kind of work behind the posting.

                                                                                                                          1. 2

                                                                                                                            I thought people like you just happened to read an enormous amount.

                                                                                                                            Equally impressed now, just for a different reason.

                                                                                                                      2. 2

                                                                                                                        I get the idea of that. I think what makes the distinction between good and not-so-good ask threads is the length of the responses. For “share your blog,” what is there to say except a link to your blog? I didn’t bother looking at that one. On the other hand, the distro question generated a ton of long responses about various Linux distros and the pros and cons thereof. Interesting stuff. I wonder if there’s some way we could discourage short responses in ask threads, or ask-thread topics that tend to generate short responses.

                                                                                                                        1. 1

                                                                                                                          Of course there should be regulation. If you applied that confusion everywhere, you’d have sport replaced by gladiatorial combat, McDonald’s purchasing every farm until that was all you could eat, and other kinds of dystopia that, unfortunately, some Americans are beginning to suffer a flavor of (choosing between insulin and death, $1,000 toilet rolls…).

                                                                                                                          It’s not that I’m looking forward to opening a discussion about this topic, but are you sure that would be the case? Many of the pathological actions of monopolies are the result of regulating the market in a way that effectively removes competition, leaving the power with the monopolies (in fact, many megacorporations that exist today couldn’t have grown to such sizes without help from the government). I wouldn’t be so sure that a lack of regulation is the main problem.

                                                                                                                        1. 1

                                                                                                                          Well, it’s an ugly hack all right. But I can’t say I haven’t written ugly hacks myself and kept them running for far longer than anyone ever should.