1. 5

    Many objects with getters/setters are just structs with extra steps. As much as Meyer gets cited, most people seem to have skipped over the Uniform Access Principle when bringing OOP to languages.

    Getters and setters are a violation of UAP and a major failing of most languages which allow OOP, with the notable exceptions of Ruby, Objective-C and C#.

    1. 2

      Can you expand on what you mean here? I thought the whole argument for getters and setters was to make field access by clients be method calls, to follow UAP?

      The idea being that by making clients do ‘x.getFoo()’ instead of ‘x.foo’ you leave space to refactor without breaking clients later on.

      ie. in what way are getter/setters a violation of UAP?

      In my mind the thing I disagree with is not getters and setters; it’s UAP that’s the problem.

      1. 3

        If I understand correctly, “planning ahead” and using getter and setter methods are a workaround for the lack of UAP; it’s being treated as a property of a language, not a program. Ruby and Objective-C don’t allow you to access members without going through a method (they force UAP), and C# lets you replace member access with methods since it supports member access syntax for methods.

        1. 1

          C# lets you replace member access with methods since it supports member access syntax for methods.

          Python has the same feature (in fact, I’m pretty sure it had it first). You can start with a normal member, then replace it with a method via the property decorator and, if you want to, implement the full set of operations (get, set, delete) for it without breaking any previous consumers of the class.

          Of course, Python also doesn’t actually have the concept of public versus private members, aside from socially-enforced conventions, but your concern seems to be less whether someone can find the private-by-convention attributes backing the method, and more with whether x.foo continues to work before and after refactoring into a method (which it does, if the new foo() method is decorated with property).
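
          The refactor being described can be sketched like this (class names here are made up for illustration):

          ```python
          # Start with a plain attribute; clients write x.foo.
          class Before:
              def __init__(self):
                  self.foo = 42

          # Later, replace it with a property; clients still write x.foo.
          class After:
              def __init__(self):
                  self._foo = 42  # private-by-convention backing attribute

              @property
              def foo(self):
                  return self._foo

              @foo.setter
              def foo(self, value):
                  self._foo = value

          x = After()
          x.foo = 99      # still plain attribute syntax, no x.setFoo(99)
          print(x.foo)    # 99
          ```

          Delete, the third operation, works the same way via a @foo.deleter method.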

          1. 1

            Of course, Python also doesn’t actually have the concept of public versus private members, aside from socially-enforced conventions

            I’m genuinely curious, as at first glance you seem to be an advocate of this, what’s the benefit of socially-enforced conventions over compiler-enforced privacy? Also, another thing I’ve been curious about not having programmed much in a language with these semantics, how does a language like Python handle something like the following:

            There’s a class I want to extend, so I inherit it. I implement a “private member” __name. The base class also implemented __name, does my definition override it?

            I’ve been wondering about that because if that’s the case, it seems like that would require people to know a lot of implementation details about the code they’re using. But for all I know, that’s not the case at all, so I’d be happy to hear someone’s perspective on that.

            1. 3

              There’s a class I want to extend, so I inherit it. I implement a “private member” __name. The base class also implemented __name, does my definition override it?

              It doesn’t. Double underscores trigger name mangling: Python prepends _<class name> to the attribute name, which obfuscates external access. The attribute is only accessible by its original name within the class in which it is declared.

              Python docs: https://docs.python.org/3/tutorial/classes.html#private-variables
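
              A small demonstration of the mangling (class and attribute names are made up):

              ```python
              class Base:
                  def __init__(self):
                      self.__name = "base"    # stored as _Base__name

              class Child(Base):
                  def __init__(self):
                      super().__init__()
                      self.__name = "child"   # stored as _Child__name

              c = Child()
              # Both attributes coexist; neither overrides the other.
              print(c._Base__name)   # base
              print(c._Child__name)  # child
              ```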

              1. 2

                I’m genuinely curious, as at first glance you seem to be an advocate of this, what’s the benefit of socially-enforced conventions over compiler-enforced privacy?

                It was my first day at the new job. I’d been shown around the office, and now I was at my new desk, laptop turned on and corporate email set up. I’d been told my first ticket, to help me get to know the process, would be changing the user-account registration flow slightly to set a particular flag on certain accounts. Easy enough, so I grabbed a checkout of the codebase and started looking around. And… immediately asked my “onboarding buddy”, Jim, what was going on.

                “Well, that’s the source code”, he said. “Yeah, but is it supposed to look like that?” “Of course it is, it’s encrypted, silly. I thought you were supposed to be coming in with years of experience in software development!” Well, I said I’d seen some products that shipped obfuscated or encrypted code to customers, but never one that stored its own source that way. “But this way you have proper access control! When you’re authorized to work on a particular component, you reach out to Kevin, who’s the senior engineer on our team, and he’ll decrypt the appropriate sections for you, then re-encrypt when you’re done. That way you never see or use any code you’re not supposed to know about. It’s called Data Hiding, and it’s a fundamental part of object-oriented programming. Are you sure you’ve done this before?”

                I sighed. And then recognition dawned. “Hey, wait”, I said, “this isn’t really encrypted at all! It’s just ROT13! Look, here this qrs ertvfgre_nppbhag is actually just def register_account…”


                I’d been pointing at the code on my screen excitedly, and didn’t notice someone sneaking up behind me. Until he whacked me across the fingers, hard, with a steel ruler. “YOU ARE NOT AUTHORIZED TO READ THAT CODE!” he yelled, and then walked away.

                “Who was that?”

                “That was Kevin, the senior engineer I told you about. You really should be more careful, you’ll get written up to HR for an access violation. And if you accumulate three violations you get fired. Maybe they’ll let this one slide since you’re new and obviously inexperienced.”

                “But how does anyone get anything done here?”, I asked.

                “I told you – you ask Kevin to decrypt the code you’re supposed to work on.”

                “But what if I need to use code from some other part of the codebase?”

                “Then Kevin will liaise with senior engineers on other teams to determine whether you’re allowed to see their code. It’s all very correct according to object-oriented design principles!”

                I goggled a bit. Jim finally said, “Look, it’s obvious to me now that you’ve never worked somewhere that followed good practices. I’m not going to tell on you to HR for whatever lies you must have put on your résumé to get hired here, but maybe you could tell me what you used to do so I can help you get up to speed on the way professionals work.”

                So I explained that at previous jobs, you could actually see all the code when you checked it out, and there was documentation explaining what it all did, how to perform common tasks, what APIs each component provided, and so on, and you’d look things up and write the code you needed to write for your tasks and file a pull request that eventually got checked in after review.

                Now Jim was goggling at me. “But… what if someone used the code in a way the original team didn’t want it to be used? How would you protect against that?”

                “Well, there were conventions for indicating and documenting which APIs you were committing to support and maintain, and the policy was anyone could use those APIs any time. But if you needed something that wasn’t provided by any supported API, you’d talk to the team that wrote the component and work something out. Maybe they would say it was OK to use a non-supported API as long as you took responsibility to watch for changes, maybe they’d work with you to develop a new supported API for it, or come up with a better solution.”

                Jim couldn’t believe what I was telling him. “But… just knowing which team wrote some other code is a violation of the Principle of Least Knowledge! That’s a very important object-oriented principle! That’s why everything that crosses boundaries has to go through Kevin. Why, if you could just go talk to other teams like that you might end up deciding to write bad code that doesn’t follow proper object-oriented principles!”

                I tried my best to explain that at my previous jobs people trusted and respected each other enough that there wasn’t a need for fanatically-enforced controls on knowledge of the code. That we did just fine with a social-convention-based system where everybody knew which APIs were supported and which ones were “use at your own risk”. That there certainly weren’t senior engineers wandering among the desks with steel rulers – that senior engineers had seen it as their job to make their colleagues more productive, by providing tools to help people write better code more quickly, rather than being informational bottlenecks who blocked all tasks.

                After I finished explaining, Jim shook his head. “Wow, that sounds awful and I bet the code they produced was pretty bad too. I bet you’re glad to be out of those old jobs and finally working somewhere that does things right!”

                1. 1

                  So just to make sure I’m following, your argument is that if you need to use something that’s not included in the public API, compiler-enforced privacy requires you to talk to the team that developed the code if you need an extension to the API, while convention-enforced privacy requires that in order to make sure you don’t break anything you… talk to the team that developed the code so that you can work out an extension to the API?

                  1. 1

                    My argument is that in languages with enforced member-access/data-hiding, I can’t even think about using a bit of API that hasn’t been explicitly marked as available to me. If I try it, the compiler will thwack me across the hand with a steel ruler and tell me that code is off-limits. My only options are to implement the same thing myself, with appropriate access modifiers to let me use it, or somehow convince the maintainer to provide public API for my use case, but even that won’t happen until their next release.

                    In Python, the maintainers can say “go ahead and use that, just do it at your own risk because we haven’t finalized/committed to an API we’re willing to support for that”. Which really is what they’re saying when they underscore-prefix something in their modules. And Python will let me use it, and trust that I know what I’m doing and that I take on the responsibility. No steel rulers in sight.

                    A lot of this really comes down to Python being a language where the philosophy is “you can do that, but it’s on you to deal with the consequences”. And that’s a philosophy I’m OK with.

            2. 1

              I have mixed feelings about UAP, because I want to know when accessing a field is a trivial operation, and when it can run some expensive code.

              If a field is just a field, then I know for sure. If it could be a setter/getter, then I have to trust that author of the class isn’t doing something surprising, and will not make it do something surprising in the future.

              1. 1

                Yeah, IMO it’s a terrible idea for multiple reasons.

                One, like you’re saying, it gives up a super powerful tool for improving readability in client code. If you’re using a language that has “property accessor” nonsense, every access to fields provided by a library may - for all you know - throw IO exceptions or have any other arbitrary behavior. With method calls being explicitly separate, you add a huge aid in optimizing for readers by reducing the number of possible things that can happen.

                Two, it makes library authors think they can swap client field access for computation without breaking backwards compatibility, which is some sort of post-modernist academic fiction.

                1. 1

                  You can’t know that in languages like Java, which default to getX() and setX() for even the simplest field access, if only to not break their interface when they need to add a simple range guard.

                  Languages without UAP will go to such lengths to emulate it using methods that you will seldom see a bare field exposed, at which point you can no longer know if a simple getter will return a local field or fire nuclear missiles.

          1. 9

            Implementing accessors like these in C++ is a huge code smell IMO, especially if you need a setter. In these cases, most of the logic in the setter method should be in the constructor of a specialized type, something like Latitude or SphericalCoordinate, but with copy/move constructors and overloaded assignment operators—instead of operator() overloads—just like the rule of three/five tells us to. And of course the member in question should be of that type instead.

            1. 4

              Some of the examples provided in the text look like watered-down versions of real use cases which happen to benefit from the proposed refactorings. But, for instance, if one has to check more than one error condition, that will lead to either a bunch of cascading ifs or an unreasonable number of auxiliary functions. At that point, the readability benefits of keeping separate branches of the same if clause vanish quickly.

              1. 1

                Since the point of an example is to get a point across, this is probably true, but regarding the case with the multiple errors, I would be pretty happy with

                if (error1) {
                   throw exception1
                } else if (error2) {
                   throw exception2
                } else {
                   // happy path
                }

                (The rule for not conflating is only valid for “multiple conditions that are dependent on one another” )

                Otherwise, I would be happy to see some examples where the rules don’t work. After all, we know that every rule has exceptions.

                1. 1

                  So, I was thinking of scenarios such as this one, which is a very common pattern in C. Every single function call can raise an error, and their results in case of success are fed into the next call.

                  I tried to follow your recommendations and it resulted in extremely unidiomatic C.

                  In these cases, early returns—or even gotos jumping to the cleanup section—are much more readable.

                  fd = open(path, O_RDONLY);
                  if (fd < 0) {
                    status = -1;
                  } else if ((buf = malloc(sz)) == NULL) {
                    status = -2;
                  } else if ((n = read(fd, buf, sz)) < 0) {
                    status = -3;
                  } else {
                    /* TODO consume `buf` and `n` */
                    status = 0;
                  }
                  /* TODO cleanup */
                  return status;
                  1. 2

                    I am not a C programmer, but for me this looks pretty readable.

              1. 1

                Very nice article :) One thing I couldn’t help but look sideways at is the use of NoReturn as a function argument type to assert_never—I know it is not a big deal, but I went to check mypy source code for its internal representation of the bottom type.

                NoReturn is a possible representation of the bottom type (yes, that’s weird). The bottom type itself is named UninhabitedType. I will spare you the details, but that type is equivalent to an empty union, Union[()], which kind of makes sense. I was surprised Guido himself did not point it out in the respective GitHub issue thread, maybe because that’s a minor detail and he didn’t bother to.

                So, to please my inner armchair type theorist, I would probably replace that signature by

                def assert_never(value: 'Union[()]') -> NoReturn: ...

                Edit: the original snippet was not enough to get it working at runtime; the quotes around the argument type annotation are required. Please check the full explanation in the replies below.

                The emitted error message also gets slightly more intuitive, since NoReturn is replaced by <nothing>:

                ~$ mypy src/exhaustiveness_check.py
                src/exhaustiveness_check.py:18: error: Argument 1 to "assert_never" has incompatible type "Literal[OrderStatus.Scheduled]"; expected <nothing>
                Found 1 error in 1 file (checked 1 source file)
                1. 2

                  That’s a great idea! I updated the article with your suggestion


                  1. 2

                    Hi, just an update because I noticed things get more complex at runtime.

                    If we use the following annotation format:

                    def assert_never(value: Union[()]) -> NoReturn: ...

                    We get the following error at runtime:

                    $ python src/exhaustiveness_check.py
                    Traceback (most recent call last):
                      File "src/exhaustiveness_check.py", line 9, in <module>
                        def assert_never(value: Union[()]) -> NoReturn:
                      File "/usr/lib/python3.7/typing.py", line 251, in inner
                        return func(*args, **kwds)
                      File "/usr/lib/python3.7/typing.py", line 344, in __getitem__
                        raise TypeError("Cannot take a Union of no types.")
                    TypeError: Cannot take a Union of no types.

                    This is because Python actually attempts to construct a typing.Union object to compose the annotation. That can be avoided by having the argument type annotation as a string:

                    from typing import NoReturn, Union
                    def assert_never(value: 'Union[()]') -> NoReturn:
                        raise AssertionError(f'Unhandled value: {value} ({type(value).__name__})')

                    One other option is the snippet below, which is certainly much more verbose than the original solution; besides that, it introduces different function types at compile and runtime.

                    from typing import NoReturn, Union, TYPE_CHECKING
                    if TYPE_CHECKING:
                        def assert_never(value: Union[()]) -> NoReturn: ...
                    else:
                        def assert_never(value) -> NoReturn:
                            raise AssertionError(f'Unhandled value: {value} ({type(value).__name__})')

                    The advantage of these approaches is that at least they do not contradict PEP 484, as NoReturn is only used as a return annotation.

                1. 1

                  Very nice! MyPy’s flow sensitive type checking is indeed powerful.

                  I also like it for the nullable checks, which is related to this argument [1]. If the nullable type is flow sensitive, that’s basically what I want, and it’s useful in real code.

                  [1] https://lobste.rs/s/hek0ym/why_nullable_types

                  1. 1

                    It actually is. The only gotcha is that sometimes you must rewrite your code in order to allow the type checker to figure things out. For instance, consider the two functions in the following snippet:

                    from typing import Dict, Optional
                    def dynamic_option_check(d: Dict[str, Optional[str]]) -> str:
                        return d['key'] if d.get('key', None) is not None else ''
                    def static_option_check(d: Dict[str, Optional[str]]) -> str:
                        return d['key'] if 'key' in d and d['key'] is not None else ''

                    … And the following type checker output:

                    $ mypy src/optional_test.py
                    src/optional_test.py:4: error: Incompatible return value type (got "Optional[str]", expected "str")
                    Found 1 error in 1 file (checked 1 source file)

                    Type checking fails for dynamic_option_check() because mypy cannot narrow d’s type from the d.get() call. static_option_check() works fine, though, as we are explicitly testing if d['key'] is not None.

                  1. 6

                    I like it.

                    This general space (maybe “reheating cold context”?) has been interesting to me (read: my achilles’ heel?) for a while.

                    Things I already do or have done:

                    • open a terminal tab for a distinct project (sometimes more than one, for distinct sub-tasks/sub-projects)
                    • keep tabs around for ongoing but inactive projects, so that I can review what I was doing
                    • working on a project-oriented shell-history module to help sand down some rough edges for the above
                    • working on an ST3 extension to sand down some sharp edges around ST3 projects (group multiple to open/close them together; have a persistent list of these meta projects that won’t get scrolled off of the recent-project list each time I open a bunch of related work projects…)

                    I’ve also daydreamed about:

                    • some sort of editor plugin/extension that keeps a contextual log of what you’ve been touching
                    • some affordance for annotating shell history, and probably for grouping+annotating common command sequences (probably eventually part of or paired with the shell-history module) that (ideally) does things like:
                      • passively notice common patterns and prompt me to annotate them (I notice you run these 4 commands in this order a lot; would you like me to help you annotate it, create a shortcut (script? function? alias?) and remind you about it when I see you manually run the commands?)
                      • make it easy to see annotations + context (frequency, location, project, etc) by command/directory/project/etc.
                      • maybe notice when I’m fumbling around with a command (you probably don’t need two guesses :)
                      • maybe append/prepend/wrap/annotate the syntax help or manpage with my own invocations
                    1. 12

                      I am a bash history junkie somehow; I’d rather have one-liners in my history when I notice they are long but simple to come up with (e.g. find usages with several options). That means I don’t need to pollute $PATH with writable directories in order to reach these commands from the current working directory.

                      So, far from being an automated process, when I notice I will need to run my-lengthy-one-liner more than once over the next couple of hours, I annotate them like this:

                      : mnemonic ; my-lengthy-one-liner

                      Then I can search for mnemonic on my shell history anytime I want to use that command.

                      1. 2

                        Oh, wow, that’s brilliant, thanks for sharing!

                      2. 2

                        Some useful tips there. I also keep tabs around (browser and terminal).

                        Terminal: it certainly helps being able to rename the title so context shows up in the tab. There is also a way to add colour to iTerm2 tabs for almost a tag system.

                        Browser-wise I use Tree-Style Tabs, which allows me to set a project “parent” tab, say the git repo, and then collapse its children when I’m not working on it.

                        As for shell history, I often find myself doing things along the lines of

                        % command -with weird -flags i -wont remember # some additional context here about what I’m doing so all my notes get written to my eternal shell history (which is also in git)
                        1. 2

                          maybe notice when I’m fumbling around with a command (you probably don’t need two guesses :)

                          perhaps you want one or both of tldr and thefuck?

                          brew install tldr


                        1. 16

                          People like me have been saying this for quite some time. You could use traditional non-linear optimization techniques here to do even better than what the author’s simple random search does, for example gradient descent.

                          My old boss at uni used to point out that neural networks are just another form of interpolation, but far harder to reason about. People get wowed by metaphors like “neural networks” and “genetic algorithms” and waste lots of time on methods that are often outperformed by polynomial regression.

                          1. 12

                            Most of ML techniques boil down to gradient descent at some point, even neural networks.

                            Youtuber 3blue1brown has an excellent video on that: https://www.youtube.com/watch?v=IHZwWFHWa-w .
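
                              The core update is only a few lines. As a toy illustration (data and step size made up), fitting a one-parameter line y = w*x by gradient descent on squared error:

                              ```python
                              xs = [1.0, 2.0, 3.0, 4.0]
                              ys = [2.0, 4.0, 6.0, 8.0]   # generated with w = 2

                              w, lr = 0.0, 0.01
                              for _ in range(1000):
                                  # gradient of sum((w*x - y)^2) with respect to w
                                  grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys))
                                  w -= lr * grad

                              print(round(w, 3))  # 2.0
                              ```

                              Neural network training is this same loop, just with millions of parameters and gradients computed by backpropagation.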

                            1. 3

                              Yep, any decent NN training algorithm will seek a minimum. And GAs are just very terrible optimization algorithms.

                              1. 1

                                I’d say that only a few ML algorithms ultimately pan out as something like gradient descent. Scalable gradient descent is a new thing thanks to the advent of differentiable programming. Previously, you’d have to hand-write the gradients which often would involve investment into alternative methods of optimization. Cheap, fast, scalable gradients are often “good enough” to curtail some of the other effort.

                                An additional issue is that often times the gradients just aren’t available, even with autodiff. In this circumstance, you have to do something else more creative and end up with other kinds of iterative algorithms.

                                It’s all optimization somehow or another under the hood, but gradients are a real special case that just happens to have discovered a big boost in scalability lately.

                              2. 6

                                A large part of ML engineering is about evaluating model fit. Given that linear models and generalized linear models can be constructed in a few lines of code using most popular statistical frameworks [1], I see no reason for ML engineers not to reach for a few lines of a GLM, evaluate fit, and conclude that the fit is fine and move on. In practice for more complicated situations, decision trees and random forests are also quite popular. DL methods also take quite a bit of compute and engineer time to train, so in reality most folks I know reach for DL methods only after exhausting other options.

                                [1]: https://www.statsmodels.org/stable/examples/index.html#generalized-linear-models is one I tend to reach for when I’m not in the mood for a Bayesian model.

                                1. 1

                                  Didn’t know about generalized linear models, thanks for the tip

                                2. 5

                                  For a two parameter model being optimized over a pretty nonlinear space like a hand-drawn track I think random search is a great choice. It’s probably close to optimal and very trivial to implement whereas gradient descent would require at least a few more steps.
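
                                     A sketch of that random search; the objective here is a stand-in, not the article’s track simulation:

                                     ```python
                                     import random

                                     def loss(a, b):
                                         # stand-in objective with its minimum at (3, -1);
                                         # imagine "lap time" as a function of the two parameters
                                         return (a - 3.0) ** 2 + (b + 1.0) ** 2

                                     best, best_loss = None, float("inf")
                                     for _ in range(10_000):
                                         # sample uniformly from the parameter box, keep the best
                                         candidate = (random.uniform(-10, 10), random.uniform(-10, 10))
                                         l = loss(*candidate)
                                         if l < best_loss:
                                             best, best_loss = candidate, l

                                     print(best)  # close to (3.0, -1.0)
                                     ```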

                                  1. 3

                                    Hill climbing with random restart would likely outperform it. But not a bad method for this problem, no.

                                  2. 1

                                    I suppose people typically use neural networks for their huge model capacity, instead of for the efficiency of the optimization method (i.e. backward propagation). While neural networks are just another form of interpolation, they allow us to express much more detailed structures than (low-order) polynomials.

                                    1. 4

                                       There is some evidence that this overparameterisation in neural network models actually gets you something that looks like fancier optimisation methods[1], as well as acting as a form of regularisation[2].

                                      1. http://www.offconvex.org/2018/03/02/acceleration-overparameterization/
                                      2. http://www.offconvex.org/2019/10/03/NTK/
                                      1. 2

                                        The linked works are really interesting. Here is a previous article with a similar view: https://lobste.rs/s/qzbfzc/why_deep_learning_works_even_though_it

                                      2. 1

                                        neural networks […] allow us to express much more detailed structures than (low-order) polynomials

                                         Not really. A neural network and a polynomial regression using the same number of parameters should perform roughly as well. There is some “wiggle room” for NNs to be better or PR to be better depending on the problem domain. Signal compression has notably used sinusoidal regression since forever.

                                        1. 2

                                          A neural network and a polynomial regression using the same number of parameters should perform roughly as well.

                                          That’s interesting. I have rarely seen polynomial models with more than 5 parameters in the wild, but neural networks easily contain millions of parameters. Do you have any reading material and/or war stories about such high-order polynomial regressions to share?

                                          1. 3

                                            This post and the associated paper made the rounds a while ago. For a linear model of a system with 1,000 variables, you’re looking at 1,002,001 parameters. Most of these can likely be zero while still providing a decent fit. NNs can’t really do that sort of stuff.

                                    1. 13

                                      I don’t believe the logic behind the EARN IT act adds up. If we ban things because unsavory people use them then why does the US allow guns, for example?

                                       This excerpt summarizes a majority of the article, and I think it exemplifies a particularly weak line of argument. People will be more likely to be convinced a law is right or not by you elaborating on what it does and how that affects them than they would be because you’ve moralized the pretenses under which it was passed. Case in point since you mention them just after that, the NRA has been pushing the “most gun owners are good guys!” angle for a very long time and it’s done little but intensify the ire of people they might be trying to sway. Saying “we shouldn’t ban encryption because not everyone that uses encryption is a pedophile” doesn’t exactly make the strongest case.

                                      1. 8

                                        the NRA has been pushing the “most gun owners are good guys!” angle for a very long time and it’s done little but intensify the ire of people they might be trying to sway.

                                        I think part of the issue is that the NRA isn’t always trying to sway the other side with this line. They’re often trying to rally support on their side. As such, I see using the same line of argument as useful in helping people on the right who may not normally identify with a tech issue to see it in the same way they view their gun rights.

                                        1. 3

                                          While I agree with most of what you’re saying, there just has to be a better way to combat illegal sexual exploitation than this. As much as I don’t like that it is still a big thing on the internet, removing encryption is not the solution.

                                          1. 5

                                            Sexual exploitation of kids was a thing before the internet was a consumer thing. Those who partake in such despicable acts will just find another way to do what they do if online transit is no longer practical or safe. And then we’ll have no legitimate encryption, and still have sexual exploitation of kids.

                                            1. 2

                                              I’m not against encryption either. If you really want to confront that part of the issue, then instead of arguing that it’s “not all encryption,” I would personally explore how futile it is to try to curb these crimes by pursuing material only once it is already being shared, and show how much more effective things like community programs might be at fighting the issue at its source.

                                            2. 3

                                              Saying “we shouldn’t ban encryption because not everyone that uses encryption is a pedophile” doesn’t exactly make the strongest case.

                                              Maybe it does, if one poses it as “encryption is the tool that allows you to safely exchange, say, intimate pics with your partner, and financial information with your family members; if we ban it, your next-door neighbor could creep into your personal stuff”.

                                              1. 3

                                                That’s my point.

                                            1. 7

                                              VT2000 is the web (HTML+CSS), and the terminal emulator is the web browser. VT2020 could be pretty much the same thing, warts removed. I would be OK with that.

                                              … Or just give me Plan 9’s /dev/bitblt any day.

                                              1. 3

                                                Never heard of /dev/bitblt.

                                                What can you do with it?

                                                1. 6

                                                  I would refer you to 8½, the Plan 9 Window System, specifically the Graphical Output section.

                                                  1. 3

                                                    You can try it out with the devdraw in plan9port. The protocol is not really documented, but it’s concise enough to understand within one function, https://github.com/9fans/plan9port/blob/92aa0e13ad8cec37936998a66eb728bfca88d689/src/cmd/devdraw/devdraw.c#L639

                                                    1. 2

                                                      The protocol is muxed mouse, keyboard, and draw. They’re all documented here:

                                                      http://man.9front.org/8/kbdfs http://man.9front.org/3/mouse

                                                      However, you don’t get the interposability or window takeover that Plan 9 gives you with plan9port, so in the end this is just a less powerful X11.

                                                      1. 1

                                                        It does not prevent you from doing that, does it? Currently it always launches a new devdraw process, but in principle you could reuse the existing one. It would need a lot of bookkeeping, though.

                                                        By the way, this: https://bitbucket.org/yiyus/devwsys-prev/src/default/ seems to be a nice direction.

                                                  2. 1

                                                    If web is part of the progression, wtf happened with vt2000?!

                                                    1. 5

                                                      It’s on a farm upstate, together with JPG2000…

                                                  1. 1

                                                    Interesting point that SOLID itself doesn’t adhere to the Interface Segregation Principle, because its principles are supposed to be applicable to all code in all circumstances. And the goals of:

                                                    1. Code being understandable
                                                    2. Code being flexible
                                                    3. Code being maintainable

                                                    are probably three separate interfaces that require different principles.

                                                    Very interesting way to think about it, and it seems like the author is advocating for being pragmatic while still thinking deeply about what we’re trying to get out of our code.

                                                    1. 2

                                                      From my personal experience, this is mostly because you are not expected to tick all the boxes. SOLID principles are more like general rules that you’d better follow unless you have a good enough excuse not to.

                                                      In other words, the dangerous thing is that they can only do you any good when you are clueless.

                                                      Nowadays, I assume I’m working on a temporary solution whenever I notice that these principles serve me better than the actual project requirements do. YMMV.

                                                      1. 1

                                                        I don’t buy it. If the three goals addressed things that were actually different, then yes. But these goals are fundamentally interconnected: three adjectives used in combination to describe “software which is easy to make changes to”. “Maintainable” is a direct dependency of “understandable” and “flexible”; “flexible” software must itself be understood before it can be flexed; they all lead into each other. This is like saying “life, liberty and the pursuit of happiness” are unrelated, but you can ask Dr. Maslow what happens when you kick out the bottom layer of the pyramid.

                                                      1. 5

                                                        The same kind of flexibility that gives room to unexpected behaviors, as mentioned in the article, is responsible for their fitness for creative problem solving. Developers nowadays take things like pipes for granted but, in 1968, their architectural description served as a true paradigm shift to an industry that had been ailing since its early days. More than 50 years later, we still have two main approaches to software development: either produce it fast and full of failures, giving developers time to discover which those are and fix them, hence delivering somewhat reliable products; or just produce it full of failures. No amount of sanding on the rough edges of complexity could make things better.

                                                        1. 3

                                                          “Developers nowadays take things like pipes for granted but, in 1968, their architectural description served as a true paradigm shift to an industry that had been ailing since its early days.”

                                                          We had function calls that can do the same thing. Except there were languages emerging with type checks. Hoare and Dijkstra were doing pre/post-conditions and invariants. You get composition with stronger guarantees. Smalltalk and Lisp showed better pace of development with easier maintenance, too.
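                                                          The comparison between pipes and function calls can be sketched in a few lines; this is an illustrative Python sketch (the helper names grep and count, and the sample data, are made up), showing the dataflow of a shell pipeline expressed as ordinary function composition.

```python
# Illustrative sketch (names and data made up): the dataflow of a shell
# pipeline such as `cat access.log | grep ERROR | wc -l` expressed as
# ordinary function composition.
lines = ["ok", "ERROR: disk", "ok", "ERROR: net"]

def grep(pattern, rows):
    # Keep only the rows containing the pattern, like grep(1).
    return [row for row in rows if pattern in row]

def count(rows):
    # Count the rows, like wc -l.
    return len(rows)

# Each call's output feeds the next call, exactly like a pipe stage.
result = count(grep("ERROR", lines))
print(result)  # 2
```

The difference pipes added was architectural (independent processes, streaming, language-agnostic composition), not expressive power, which is the point being debated here.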

                                                          “we still have two main approaches to software development”

                                                          Fagan Inspections (pdf), Cleanroom, and OpenVMS argued otherwise back in the 1980s, with Hansen’s RC 4000 showing it in the 1960s. You can cost-effectively inject reliability into software processes in many ways. It’s even easier now with safer languages plus automated tooling for test generation, static analysis, etc. Even moving quickly, the majority of errors should be in business/domain logic or misunderstanding of the environment at this point. Far as environments, OS’s such as OpenVMS and OpenBSD also show you can eliminate a lot of that by using platforms with better design, documentation, and resilience mechanisms.

                                                          Example: FoundationDB was a startup using ultra-reliable engineering in an intensely-competitive space. They got acquired by Apple.

                                                          So the newest approach, if reliability-focused, is to prototype fast using modern tooling, document your intended behavior as contracts plus tests (if it’s hard to specify), use automated tooling to find the problems in that, make sure the components are composable, and integrate them in safe ways. You can do this very quickly. If using third-party code, prefer mature components, especially libraries and infrastructure, to reduce the number of time-wasting surprises. You will get a small slow-down up front and a possible speed-up from reduced debugging; the speed-up may or may not cancel out the slow-down. The result should be reliable software coming out at a competitive pace, though.

                                                          1. 2

                                                            Hi, Nick, I am pretty sure you have your historical facts in much better shape than I have mine, but I will give it a go and try to discuss.

                                                            We had function calls that can do the same thing. Except there were languages emerging with type checks. Hoare and Dijkstra were doing pre/post-conditions and invariants. (…) Smalltalk and Lisp showed better pace of development with easier maintenance, too.

                                                            Programming languages with support for function calls predate pipes by approximately 15 years: the first implementation of pipes dates from 1973. FORTRAN II, for example, had support for subroutines in 1958; and I am not sure, but ALGOL introduced procedures either with its first iteration, from 1958, or with ALGOL 60. So even if their use was not widespread, at least they had implementations by 1968, the year of the NATO Software Engineering Conference at which Doug McIlroy’s ideas were presented. Subroutines were an important paradigm shift, but if they did offer any substantial help to developers back in the day, that was avoiding the aggravation of the software engineering crisis, not preventing it.

                                                            Needless to say, LISP predates any of those technologies, and Smalltalk got popular by the mid-80s. And, IMHO, Unix systems were definitely a big influence on Smalltalk’s architectural decisions: at least one of them, LOCUS, is mentioned by Alan Kay as an inspiration for Smalltalk. IIRC, it was by watching that system in action that he thought that even natural numbers could be represented as processes and communicate via IPC. But the overhead of such a solution would have been too big back in the day, and he opted to represent message passing via ordinary procedure calls.

                                                            Far as environments, OS’s such as OpenVMS and OpenBSD also show you can eliminate a lot of that by using platforms with better design, documentation, and resilience mechanisms.

                                                            I chose to reply to this fragment because it was the easiest one to address with a platitude. :) OpenBSD also shows that you can keep the bulk of a Unix system and still provide above-par security.

                                                        1. 4

                                                          This is a case of improper data modeling, but the static type system is not at fault—it has simply been misused.

                                                          The static type system is never at fault; it behaves just like the programmer tells it to. But this is kind of handwaving over the very point this article attempts to address. This particular case of “improper data modeling” would never be a problem on dynamically-typed systems.

                                                          Bad analogy time: it is pretty much like advocating the use of anabolic steroids, because they make you so much stronger, but when the undesired side effects kick in, you blame the hormonal system for not keeping things in balance.

                                                          1. 9

                                                            Bad analogy time: it is pretty much like advocating the use of anabolic steroids, because they make you so much stronger, but when the undesired side effects kick in, you blame the hormonal system for not keeping things in balance.

                                                            To me it feels like that’s exactly what proponents of dynamic typing often do: “I can write all code super fast”, and then when people point out issues once it is accidentally misused (by another programmer, or the same programmer in the future), it is always “you should’ve used more hammock time to think about your problem real hard” or “you should’ve written more tests to make sure it properly handles invalid inputs”.

                                                            1. 5

                                                              You are not wrong, and this is just proof that the debate around type systems is still too immature. There is certainly a component of dynamism in every computer system that programmers crave, and it usually lives outside the bounds of the language environment, at the operating system level. Dynamically typed languages claim to offer that dynamism inside their own environment, but most programs don’t take advantage of that.

                                                              There is no known argument on either side that would definitively bury its contender. Programmers sometimes seem too afraid of some kind of Tower of Babel effect that would ruin the progress of Computer Science, and I believe that the whole debate around static and dynamic type systems is just a reflection of that.

                                                            2. 2

                                                              This particular case of “improper data modeling” would never be a problem on dynamically-typed systems.

                                                              I think this is addressed in the appendix about structural vs nominal typing. In particular, very dynamic languages like Python and Smalltalk still allow us to do such “improper data modelling”, e.g. by defining/using a bunch of classes which are inappropriate for the data. Even if we stick to dumb maps/arrays, we can still hit essentially the same issues once we get a few functions deep (e.g. if we’ve extracted something from our data into a list, and it turns out we need a map, which brings up questions about whether there’ll be duplicates and how to handle them).
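                                                              A minimal Python sketch of the list-vs-map issue described above (the record shape and field names are hypothetical):

```python
# Illustrative sketch (record shape is hypothetical): dynamic structures
# allow "improper" modelling just as easily. We pull records into a list,
# then discover downstream that we need lookup by id -- and the list->dict
# conversion silently answers the duplicate-key question for us.
records = [
    {"id": 1, "name": "ada"},
    {"id": 2, "name": "grace"},
    {"id": 1, "name": "ada lovelace"},  # duplicate id: which one wins?
]

names = [r["name"] for r in records]            # list keeps duplicates
by_id = {r["id"]: r["name"] for r in records}   # dict: last entry wins, silently

print(names)   # ['ada', 'grace', 'ada lovelace']
print(by_id)   # {1: 'ada lovelace', 2: 'grace'}
```

No compiler asked whether duplicate ids were possible; the dict comprehension just resolved the question by overwriting, which is exactly the kind of silent modelling decision at issue.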

                                                              Alternatively, given the examples referenced by the author (in the linked “parse, don’t validate” post) it’s reasonable to consider all data modelling in dynamically-typed systems to be improper. This sounds a bit inflammatory, but it’s based on a core principle of the way dynamically-typed languages frame things: they avoid type errors in principle by forcing all code to “work” for all values, and shoehorning most of those branches into a sub-set of “error” or “exceptional” values. In practice this doesn’t prevent developers having to handle type errors; they just get handled with branching like any other value (with no compiler to guide us!). Likewise all dynamic code can model all data “properly”, but lots of code will model lots of data by producing error/exceptional values; that’s arguably “proper” since, after all, everything in a dynamic system might be an error/exception at any time.

                                                              Side note: when comparing static and dynamic languages, it’s important to remember that using “exceptions” for errors is purely a convention; from a language/technology standpoint, they’re just normal values like anything else. We can assign exceptions to variables, make lists of exceptions, return exceptions from functions, etc. it’s just quite uncommon to see. Likewise “throwing” and “catching” is just a control-flow mechanism for passing around values; it doesn’t have anything to do with exception values or error handling, except by convention. I notice that running raise 42 in Python gives me TypeError: exceptions must derive from BaseException, which doesn’t seem very dynamic/Pythonic/duck-typical; yet even this “error” is just another value I can assign to a variable and carry on computing with!
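                                                              A small illustrative sketch of that side note, showing exceptions being treated as ordinary values and the raise 42 behavior mentioned above:

```python
# Exceptions are ordinary values until thrown: they can be stored in
# variables and lists, inspected, and passed around like anything else.
errs = [ValueError("bad input"), KeyError("missing")]
first = errs[0]
print(type(first).__name__, first.args[0])  # ValueError bad input

# `raise` only accepts BaseException instances/classes -- yet even the
# resulting TypeError is itself just another value we can keep computing with.
try:
    raise 42
except TypeError as e:
    caught = e
print(isinstance(caught, TypeError))  # True
```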

                                                              1. 1

                                                                The point I was trying to make is that, in the example mentioned in the article, the reason the type description was inaccurate at first is only that the programmer must provide the checker with information about UserId’s subtype. On a dynamically-typed system, as long as the JSON type supports Eq, FromJSON and ToJSON, you are fine, and having to accurately determine UserId’s subtype would never be a problem.

                                                                So I do understand the appeal of static typing for building units, but not systems, especially distributed ones, and this is why I believe the article is myopic; still, dynamic language advocates do a terrible job of defending themselves. Having to process JSON payloads is the least of your problems if you are dealing with distributed systems; how to accurately type check across independent snippets of code in a geographically-distributed network over which you have no control is a much more interesting problem.

                                                                1. 1

                                                                  On a dynamically-typed system, as long as the JSON type supports Eq, FromJSON and ToJSON, you are fine, and having to accurately determine UserId’s subtype would never be a problem.

                                                                  That’s not true. At some point your dynamically typed system will make an assumption about the type of the value (the UserId in this case) that you’re applying some function to.
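                                                                  A minimal sketch of that assumption surfacing at runtime (the label function and its inputs are hypothetical):

```python
# Minimal sketch (function and inputs are hypothetical): a dynamically
# typed function still bakes in an assumption about its argument's type;
# the assumption just surfaces at call time instead of check time.
def label(user_id):
    return "user-" + user_id.lower()  # implicitly assumes a string

print(label("ABC"))  # user-abc

try:
    label(42)  # an int has no .lower(); the hidden assumption bites
    failed = False
except AttributeError:
    failed = True
print(failed)  # True
```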

                                                                  1. 1

                                                                    For practical purposes, it is true. The system internals do need to resolve the dependency on that interface, either with fancy resolution mechanisms or by attempting to call the function in a shoot-from-the-hip fashion. But it is not common among dynamic language implementations that the programmer needs to specify the type, so it is not a problem.

                                                            1. 1

                                                              I think the Lexi Lambda post is a good one, although I found its argument kind of obvious, maybe because I lean toward static typing myself. The other post that was merged in, though, I don’t fully understand. It says:

                                                              “So, give me an example of a thing dynamically-typed languages can do that statically-typed languages can’t?”

                                                              and then gives the example of sayHello = putStrLn messageToBeDefined at a fresh REPL. Since messageToBeDefined is an undefined symbol, this is going to fail in almost all languages. In Python:

                                                              >>> print(messageToBeDefined)
                                                              Traceback (most recent call last):
                                                                File "<stdin>", line 1, in <module>
                                                              NameError: name 'messageToBeDefined' is not defined

                                                              or in Scheme:

                                                              > (print messageToBeDefined)
                                                              . . messageToBeDefined: undefined;
                                                               cannot reference an identifier before its definition

                                                              What is this supposed to be demonstrating?

                                                              1. 2

                                                                It was supposed to be demonstrating this:

                                                                Python 3.7.4 (tags/v3.7.4:e09359112e, Jul  8 2019, 20:34:20) [MSC v.1916 64 bit (AMD64)] on win32
                                                                Type "help", "copyright", "credits" or "license()" for more information.
                                                                >>> def sayHello():
                                                                ...     print(messageToBeDefined)
                                                                ...
                                                                >>> messageToBeDefined = 'hello, world'
                                                                >>> sayHello()
                                                                hello, world
                                                                >>> messageToBeDefined = 'hi, everyone'
                                                                >>> sayHello()
                                                                hi, everyone

                                                                In contrast to

                                                                GHCi, version 8.4.4: http://www.haskell.org/ghc/  :? for help
                                                                Prelude> messageToBeDefined = "hello, world"
                                                                Prelude> sayHello = putStrLn messageToBeDefined
                                                                Prelude> :t sayHello
                                                                sayHello :: IO ()
                                                                Prelude> sayHello
                                                                hello, world
                                                                Prelude> messageToBeDefined = "hi, everyone"
                                                                Prelude> sayHello
                                                                hello, world
                                                                1. 1

                                                                  Ah, I see. It’s true that Haskell doesn’t allow this (without using a global State monad or something to simulate it), but other statically typed languages like C++ do. (Even Rust would, although you have to use something like an Arc.)

                                                                  1. 2

                                                                    Actually the argument was less about feasibility and more about how the community deals with these discussions. Coming from a Unix/C background myself, I don’t have a horse in this race, though.

                                                                  2. 1

                                                                    That’s a property of Haskell (specifically GHCi in this case I think), not static typing, right? I think you can get both behaviors in e.g. Scheme with (untested)

                                                                    (define message "hello, world")
                                                                    (define (say-hello) (display message))
                                                                    ; Python mode:
                                                                    (set! message "hello, Brent")
                                                                    ; Haskell mode:
                                                                    (define message "hello, Greg")

                                                                    edit: i.e. it’s just the distinction between introducing a new variable binding and modifying an existing one.

                                                                    1. 1

                                                                      That’s a property of Haskell (specifically GHCi in this case I think), not static typing, right?

                                                                      This is a property of sound strong static type systems, which, in that case, is represented with Haskell code. But the real difference there is the “static” part.

                                                                      I think you can get both behaviors in e.g. Scheme with (untested)

                                                                      Truth is, you could either implement interpreters of statically-typed environments in a dynamically-typed language, or dynamically-typed environments with a statically-typed language. So that comes to me as no surprise. But thanks for mentioning that. My impression, though, is that getting a dynamically-typed environment to behave statically requires so much more boilerplate than its counterpart.

                                                                      1. 3

                                                                        Oh, my mistake, I missed the missing initial definition of messageToBeDefined in the Python snippet.

                                                                        Hmm, I never thought about that relationship, but I don’t know if sound static typing completely precludes dynamic variable binding (assuming that’s what this is about and I’m not completely missing the point). I just tried this in Typed Racket, which is statically typed (and I believe sound):

                                                                        > (define (say) (display msg))
                                                                        ; readline-input:1:23: Type Checker: missing type for top-level identifier;
                                                                        ;  either undefined or missing a type annotation
                                                                        ;   identifier: msg
                                                                        ;   in: msg
                                                                        ; [,bt for context]
                                                                        > (: msg String)
                                                                        > (define (say) (display msg))
                                                                        > (say)
                                                                        ; msg: undefined;
                                                                        ;  cannot reference an identifier before its definition
                                                                        ;   in module: top-level
                                                                        ; [,bt for context]
                                                                        > (define msg 6)
                                                                        ; readline-input:5:12: Type Checker: type mismatch
                                                                        ;   expected: String
                                                                        ;   given: Positive-Byte
                                                                        ;   in: 6
                                                                        ; [,bt for context]
                                                                        > (define msg "hello")
                                                                        > (say)
                                                                        > (define msg "hello again")
                                                                        > (say)
                                                                        hello again

                                                                        (Aside: the GHCi feature I was thinking of was -fdefer-out-of-scope-variables but that still errors when actually running it.)

                                                                        1. 3

                                                                          This is where things get tricky. People (e.g. me) usually conflate static/dynamic typing with early/late binding, either due to ignorance or laziness. It is very common that you find early binding on statically-typed systems and late binding on dynamically-typed systems. But binding strategies are not a binary choice: they sit in a spectrum. It is expected that static typing systems introduce the necessity for earlier binding to some degree, because type checks, at least, should be provided ahead of time.

                                                                          Typed Racket is usually classified as a gradually-typed language. I won’t risk an explanation of that, but suffice to say that you get type checks before execution without usually introducing any restrictions on runtime.

                                                                1. 2

                                                                  No value judgments here, please. But it just crossed my mind that apologias for static typing pretty much assume some level of historical materialism, in that they advocate that adaptations to the code, and hence its evolution over a timeline, are at least partially tied to the data structures one must process.

                                                                  [sarcasm] Software history repeats itself first as remote exploits, then as fuzz testing. [/sarcasm]

                                                                  1. 4

                                                                    Sadly, this reinforces my impression that, to a huge part of developers, “type safety” acts as a meme and serves no other purpose but eliciting some kind of Pavlovian response. Too bad for those who give some thought to either theoretical or practical problems and actually get things done, such as the language creators and the poor framework developer.

                                                                    1. 2

                                                                      Type safety is not a meme. How do you go from “a person misuses a piece of technology” to “some piece of technology is a meme”?

                                                                      1. 2

                                                                        I should have been clearer. Maybe what I was trying to say is that assertions on code quality derived from the fierce defense of static type systems are sometimes based not on logical grounds, but serve exactly as a meme: they carry a cultural message that is blindly replicated across part of the community.

                                                                        The unsafe keyword means potentially unsafe, right? In that case, it should never be interpreted as a token for public shaming. CVEs aren’t[1]. The community is mature enough to understand that mistakes happen and we just need to move on.

                                                                        Imagine a world where the C community shamed OpenSSL developers for sloppy pointer arithmetic and implicit type casts to the point they rage-quit. Would that be any better? Would computer users be better served overall in practice?

                                                                        [1] Usually.

                                                                    1. 3

                                                                      A personal take on Dan Ingalls’ statement and the reason why it did not materialize is that some effect akin to linguistic relativity may be a real thing in computer science, and no language covers every problem domain with exactly the same conciseness and effectiveness.

                                                                      I would pretty much like to see a Smalltalk-like environment where (1) object instances could run asynchronously by default and (2) particular classes or methods could be implemented in different languages.

                                                                      That is pretty much feasible, but not a reality yet. The next best thing to that are Unix environments.

                                                                      1. 3

                                                                        Smalltalk on graalvm should be pretty close?

                                                                        Another take would be Gemstone/S - it runs smalltalk on the (object database) server, but AFAIK it’s possible to interact via java apis too?

                                                                        They also made the maglev ruby vm - for ruby on gemstone - but I think the project is dead? http://maglev.github.io/

                                                                        1. 2

                                                                          From what I could understand, those are mostly language implementations over a common VM—please correct me if I’m wrong. What I really meant is an environment where an object could be implemented in, say, a Pascal-like language and another in a LISP, and they would still be able to talk to each other. Or, even further than that, different methods of the same object being implemented in different languages.

                                                                          1. 4

                                                                            The graalvm sort of does this with native compilation and talking across languages (calling Java and C from smalltalk, for example). Gemstone is more a single vm that allows instantiation of objects from different languages.

                                                                            Edit: graal and truffle together, from the link:

                                                                            Just like all other GraalVM languages, GraalSqueak provides a Polyglot API for evaluating code of other languages and for sharing data and objects between them. What’s interesting here is that you can even interact with something like Javascript’s Math module and invoke its min function with arguments from Smalltalk and Python. Additionally, the base language of our PolyglotWorkspace can be changed via its context menu. This means the tool can also be used in the same way for all other languages supported by the underlying GraalVM.

                                                                            1. 2

                                                                              Interesting. I wonder if integration with the system browser would be too hard, along with some form of annotation to individual methods in order to allow them to be implemented in different languages.

                                                                              1. 3

                                                                                You mean the smalltalk browser? The video displays browsing python objects in the browser.

                                                                                1. 2

                                                                                  I watched two videos: one where code was interpreted in a workspace, another of a Jupyter-like notebook with support for several languages (very cool, btw). I will check the other videos, thanks.

                                                                        2. 2

                                                                          Concern #1 is handled by io, but io is pretty dead unfortunately. It’s a shame: I found it a lot easier to pick up & use than smalltalk.

                                                                          1. 2

                                                                            I never played with io. How does it compare to Smalltalk (or even to a Unix system) interaction-wise?

                                                                            1. 4

                                                                              Like Smalltalk, it’s very much centered around passing messages between objects as its primary metaphor. It has a prototype object system, and a decent REPL. Spawning a coroutine is a matter of adding a single character to an object name in order to make the message passing async.

                                                                              Unfortunately, I ran into some weird behavior with regard to this – io tries to figure out whether or not all coroutines will terminate by analyzing branching, but some of my code had false positives & exited early.

                                                                        1. 1

                                                                          It is funny; I read the first paragraph (unaware that the essay was authored by Gilad Bracha) and all I could think was that builds are not an issue in Smalltalk environments.

                                                                          Smalltalk really is a programming environment that changes your perspective on the whole software engineering field. I recommend that every programmer give it a serious try at least once.

                                                                          1. 1

                                                                            How much is “a serious try”?

                                                                            1. 3

                                                                              It is hard to tell, but it’s not about learning the syntax, writing FizzBuzzes and the like. Having a basic understanding of a Smalltalk system at the architectural level is the important thing, in my opinion. After that, when you are solving a nasty architectural problem in another environment, think for a while about how Smalltalk would help you solve it. It is not a far cry from what is considered to be the UNIX way: if you avoid writing monoliths and you use IPC to coordinate software components, you will notice that Smalltalk is the next iteration of that approach. But the personal computer revolution kind of got in the way and OOP was vandalized.

                                                                              From a personal perspective, I would really like to see what the next iteration of that would look like, and I have been spending a considerable amount of time thinking about it. Unfortunately, I haven’t come up with an answer yet.

                                                                              1. 1

                                                                                I don’t edit the code on the production server, so how do you distribute a program in an image-based system like Smalltalk? The environment I work in is not the web. The product I work on lives on the phone network, and if it’s not running, not only are we not getting paid, but we’re the one paying our customer (in this case, the Monopolistic Phone Company). The SLAs are scary and if I stop and think about it too much, I want to throw up over how much I could cost the company I work for if my code doesn’t work. And I don’t see how Smalltalk can actually work in such a situation.

                                                                                1. 3

                                                                                  I am almost sure that I did not understand your point, but I will answer it nonetheless.

                                                                                  Generically speaking, there is no impediment to treating an image-based system as an ordinary monolithic, build-based system; the opposite is not true.

                                                                                  I know nothing about your problem domain, but in a hypothetical scenario where the state-of-the-art approach to computing is image-based systems, you would still be able to distribute specific versions of images that wouldn’t receive live updates.

                                                                              2. 3

                                                                                Sometimes I question how much I would gain from learning Smalltalk. I think it has three main lessons of value:

                                                                                • What it’s like to code in an environment where everything can be rewritten at runtime, using itself. <- This one is huge! Everyone should learn it, because it’s just so rewarding and refreshing. But you can learn it a lot more practically in Elisp than in Smalltalk.
                                                                                • How message-passing works. <- Not nearly as important as the previous one, but still pretty neat. But again, more practical to learn this thru Erlang or Elixir nowadays.
                                                                                • What it’s like to work in a “turtles-all-the-way-down” system. <- I don’t really know any modern substitute for this, except that you can get a glimpse at how great it must be by using modern systems that aren’t like that, getting frustrated, and imagining the opposite.

                                                                                But maybe there’s more, and I’m missing out? I would say that #1 above is by far the most important, and I think learning Smalltalk (which I can’t use at my day job) would just lead to more frustration vs learning Emacs, which I can and do use every day.

                                                                                1. 2

                                                                                  Yes, the first point is the most compelling, and it is so blatant to me that our current paradigm is such a haphazard attempt to achieve it that it is not even funny. It gives the impression that monolithic programs that encapsulate all necessary components and are pretty much fixed are the right way to write complex applications.

                                                                                  But those communicate with other applications using an API which is usually REST-based, pretty much oblivious to the fact that the other endpoints may receive updates over which you have no control. And you either encapsulate every single application in things such as containers or you run those on dedicated boxes because you know that hell would break loose if you didn’t do so.

                                                                                  But if you play the long game you will realize that, in production, all of those environments are expected to run with long uptimes and high availability. So what is static over a time frame of two hours is extremely dynamic if you look at a server with an uptime of months or even years.

                                                                                  So it is all a matter of scale, either time or, say, the applications/MB ratio.

                                                                                  1. 1

                                                                                    We haven’t quite figured out good composition models that scale up. Most of the infra tools are attempts to patch over this. In the medium-small we have compilers that validate, statically bind and output a binary in one shot. Larger than that we have code and config that spin components up/down and bind them dynamically. There’s a whole hodgepodge of patterns to create, verify and exercise these bindings.

                                                                            1. 2

                                                                              However, tests are not themselves part of the build, and need not rely on one - the build is just one way to obtain an updated application. In a live system, the updated application immediately reflects the source code. In such an environment, tests can be run on each update, but the developer need not wait for them.

                                                                              This is a really weird statement. In the systems I’ve worked on, the tests are the main point of even having an automated build. If the developer doesn’t wait for the tests to run, then they can’t use the results of the tests to decide what to do next; in that case why even have tests?

                                                                              1. 4

                                                                                In a “live” environment, which includes service-oriented architectures, it’s impossible to unit-test the world. You have to do integration tests, which ideally includes some testing “in production”. Systems have to be designed in a multi-tenant style, so you don’t have to have an end-to-end test that runs every dependency in a fully isolated stack for every CI run. In this world, you can deploy instantly to a test stack and roll out incrementally to users/clients.

                                                                                Consider Stripe’s “test mode”. You can’t run Stripe’s API locally, and it’s a large surface area to mock. Even if you could mock it, what are you really testing anyway? So you run tests against their test mode. Turns out that this works quite nicely. Why wouldn’t it work for your own services? You can have your own test mode, whatever that means for your app/service, and that can go “live” instantly without waiting for any extra checks you need to run prior to deploying that version to customers. A CI job can be kicked off asynchronously to run those integration checks and, if it succeeds, it can update a DNS record or a database entry or something like that.

                                                                                Multi-tenancy is the way to rationalize a world in which you can’t control everything. You don’t have an atomic system, so why would you expect to have an atomic build?

                                                                                1. 1

                                                                                  My impression is that the disciplinary approach you have just described and the technical feasibility of running tests in a live environment updated incrementally are orthogonal things. You could wait for the test results and decide what to do next, that should not be a problem.

                                                                                1. 1

                                                                                  I’ve never really understood the point in these tools. Symlinks in ~ are simple enough to justify the 3 extra characters over something like autojump.

                                                                                  1. 2

                                                                                    One thing I’ve been doing lately is keeping a directory in $HOME called l (the actual letter doesn’t matter that much; what matters is that it is as short as it can be). All directories and symlinks that I access on a frequent basis are there. So all that I usually need to do is to cd ~/l/whatever.

                                                                                    This works for me, in conjunction with pushd and popd to manage short-lived workflows when required.

                                                                                  1. 4

                                                                                      A more Unix-y approach would be to only enable this kind of warning if you run the command with a verbose option. Reading from stdin is one of the most common operations in a Unix system; there is no need to make it that explicit.

                                                                                    1. 5

                                                                                        That entirely defeats the point of providing a good UX when people use uni emoji and don’t know that it will read from stdin instead of, say, printing everything, being interactive, or any number of other reasonable behaviours.

                                                                                      1. 3

                                                                                          I understand how that could help end users who are not Unix experts, and the guys who wrote The Unix-Haters Handbook would certainly agree with your point. By no means am I trying to imply that this is a terrible solution, because I don’t know your audience. Unix users would probably be OK with using something like echo x | xargs uni identify.

                                                                                        1. 5

                                                                                          I think it could help users who are unix experts, too. No one is an expert on my little program (except me), and many programs don’t read from stdin (or read on stdin only when explicitly told to with -), or would print the help on some of the commands, etc.

                                                                                          There is no consistent interface, and it’s actually not so easy to tell if a program is reading from stdin outside of experimentation or carefully studying the manual. So what you’re left with is a program that appears to be working, but may also be reading from stdin, or it may just be slow; there is no easy way of knowing.

                                                                                          1. 3

                                                                                            many programs don’t read from stdin (or read on stdin only when explicitly told to with -)

                                                                                            No. Actually, reading from STDIN and writing to STDOUT (i.e. program can be used as a filter) is the recommended and standard Unix way. See Rule of Composition:

                                                                                            Design programs to be connected with other programs.

                                                                                            Unix tradition strongly encourages writing programs that read and write simple, textual, stream-oriented, device-independent formats. Under classic Unix, as many programs as possible are written as simple filters, which take a simple text stream on input and process it into another simple text stream on output.

                                                                                            I am not so strict about the textuality (binary streams and formats are sometimes better) but I fully support this filter approach (read from STDIN and write to STDOUT) as default behavior.

                                                                                            However, this is general advice that should apply to most (CLI) software – and your program might be exceptional and might require a different approach…

                                                                                            1. 4

                                                                                              You can throw around “rules” to no end to “win” your point. So esr wrote this book 20 years ago; so what? In spite of what esr may think of himself, he is not, in fact, the ultimate arbiter of truth regarding these things, or anything else.

                                                                                              No matter what’s in esr’s book, if you go on GitHub and download CLI tools like this, a lot of them won’t read from stdin, or will only read from stdin when very explicitly told to do so. That’s the reality of the world today. You can either ignore that reality and pretend that esr’s 20-year-old book describes reality – which it doesn’t and probably never has – or deal with reality as it is. I choose the latter.

                                                                                              1. 3

                                                                                                ESR wasn’t defining the rules, he was just noting them down. Bad practice on random Github projects is not really a suitable excuse to continue with bad practice.

                                                                                        2. 2

                                                                                          The manpage should specify that it reads its input from stdin.