1. 1

    I’m torn between thinking it’s terrible how much digital history we’ve lost already, and thankful that we’re not carrying forward even more crap than we already are.

    Sometimes, impermanence itself should be part of a thing. Think of a sand mandala, or Burning Man. It exists only to be lost.

    1. 2

      Sometimes.

      That doesn’t really make an argument about the rest of the time…

      And even when it’s true: should no trace of even a single sand mandala ever, or a single burning ever, be retained?

      The digital age shouldn’t make preservation an everything-or-nothing binary absolute.

      1. 1

        That’s hardly the point I was trying to make. But think about how many times you’ve waxed nostalgic about something from your childhood, only to re-watch / re-play it and it turns out to be shite, and now you no longer enjoy those old memories?

        1. 1

          I love that! It’s the only true opportunity for me to experience anything like what it’s actually like inside the mind of a different person. (Some people with MPD experience remembering “someone else’s memories”. I’m just a little bit envious…)

          But I’m guessing that again misses the point you were going for?

    1. 2

      I think this is due to the lack of pagination

      1. 3

        Yeah. I don’t think this was an intentional piece of design, just an oversight.

        @ap, no, there isn’t a way to retrieve all of your, or anyone else’s comments. If it’s a priority I could dump them from the database for you, but otherwise it’ll happen when someone picks up #394.

        1. 3

          No worries, I am not in a hurry and don’t require special treatment. So long as I can expect to be able to get my comments out someday, I’m fine just waiting for it to happen. Since it’s not purposefully the way it is, and in fact the stated intent is to provide what I want, then if it bugs me that much, I always have the option of putting in the effort myself. That’s good enough for me. Thank you.

          1. 2

            Thanks for your understanding and patience.

      1. 5

        The comment on how Swift makes prototyping harder by forcing developers to express correct types is spot-on, and would apply to other strongly typed languages. One could argue that you should solve the problem on paper first and sketch out the types before writing the implementation, but I find it a good example of how dynamic languages shift our expectations in terms of programming ergonomics.

        1. 19

          I’d be very curious to hear what situations you’ve encountered where you were prototyping a solution that you understood well enough to turn into code, but not precisely enough to know its types? I’ve personally found that I can’t write a single line of code – in any language, static or dynamic – without first answering basic questions for myself about what kinds of data will be flowing through it. What questions do you find the language is forcing you to answer up-front that you would otherwise be able to defer?

          1. 7

            When I have no idea where I’m going I sometimes just start writing some part of the code I can already foresee, but with no clue how anything around it (or even that part itself) will end up looking in the final analysis. I have no data structures and overall no control flow in mind, only a vague idea of what the point of the code is.

            Then with lax checking it’s much easier to get to where I can run the code – even though only a fraction of it even does anything at all. E.g. I might have some function calls where half the parameters are missing because I didn’t write the code to compute those values yet, but it doesn’t matter: either that part of the code doesn’t even run, or it does but I only care about what happens before execution gets to the point of crashing. Because I want to run the stuff I already have so I can test hypotheses.

            In several contemporary dynamic languages, I don’t have to spend any time stubbing out missing bits like that because the compiler will just let things like that fly. I don’t need the compiler telling me that that code is broken… I already know that. I mean I haven’t even written it yet, how could it be right.

            And then I discover what it is that I even wanted to do in the first place as I try to fill in the parts that I discover are missing as I try to fill in the parts that I discover are missing as I try to fill in the parts that I discover are missing… etc. Structures turn out to repeat as the code grows, or bits need to cross-connect, so I discover abstractions suggesting themselves, and I gradually learn what the code wants to look like.

            The more coherent the code has to be to compile, the more time I have to spend stubbing out dummy parts for pieces of the code I don’t even yet know will end up being part of the final structure of the code or not.

            It would of course be exceedingly helpful to be able to say “now check this whole thing for coherence please” at the end of the process. But along the way it’s a serious hindrance.

            (This is not a design process to use for everything. It’s bottom-up to the extreme. It’s great for breaking into new terrain though… at least for me. I’m terrible at top-downing my way into things I don’t already understand.)

            1. 4

              That’s very interesting! If I’ve understood you correctly, your prototyping approach seems to allow you to smoothly transform non-executable sketches into executable programs, by using the same syntax for both. So instead of sketching ideas for your program on a napkin, or on a whiteboard, or in a scratch plaintext file, you can do that exploration using a notation which is both familiar to you and easy to adapt into an actual running program. Would it be correct to say that by the time you actually run a piece of code, you have a relatively clear idea of what types of data are flowing through it, or at least the parts of it that are actually operational? And that the parts of your program whose types you’re less confident about are also the parts you aren’t quite ready to execute yet?

              If so, then I think our processes are actually quite similar. I mainly program in languages with very strict type systems, but when I first try to solve a problem I often start with a handwritten sketch or plaintext pseudocode. Now that I think about it, I realize that I often subconsciously try to keep the notation of those sketches as close as possible to what the eventual executable code might look like, so that I’ll be able to easily adapt it when the time comes. But either way, we’re both bypassing any kind of correctness checking until we actually know what it is we’re doing, and only once we reach a certain level of confidence do we actually run or (if the language supports it) typecheck our solution.

              Let me know if I’ve missed something about your process, but I think I understand the idea of using dynamic languages for prototyping much more clearly now. What always confused me is that the runtime semantics and static types (whether automatically checked or not) of a program seem so tightly coupled that it would be nearly impossible to figure one out without the other, but you seem to be suggesting that when you’re not sure about the types in a section of your program, you’re probably not sure about its exact runtime semantics either, and you’re keeping it around as more of a working outline than an actual program to be immediately run. So even in that early phase, types and semantics are still “coupled,” but only in the sense that they’re both incomplete!

              1. 3

                If I’ve understood you correctly, your prototyping approach seems to allow you to smoothly transform non-executable sketches into executable programs, by using the same syntax for both.

                Yup.

                Would it be correct to say that by the time you actually run a piece of code, you have a relatively clear idea of what types of data are flowing through it, or at least the parts of it that are actually operational?

                Well… it depends. For the parts that are most fully written out, yes. For the parts that aren’t, no. Neither of which are relevant when it comes to type checking, of course. But at the margins there is this grey area where I have some data structures but I only know half of what they look like. And at least one or two of them shift shape completely as the code solidifies and I discover the actual access patterns.

                If so, then I think our processes are actually quite similar.

                Sounds like it. I’d wonder if the different ergonomics don’t still lead to rather different focus in execution (what to flesh out first etc.) so that dynamic vs static still has a defining impact on the outcome. But it sure sounds like there is a deep equivalence, at least on one level.

                Now that I think about it, I realize that I often subconsciously try to keep the notation of those sketches as close as possible to what the eventual executable code might look like

                Seems natural, no? 😊 The code is ultimately what you’re trying to get to, so it makes sense to keep the eventual translation distance small from the get-go.

                So even in that early phase, types and semantics are still “coupled,” but only in the sense that they’re both incomplete!

                I had never thought about it this way, but that sounds right to me as well.

            2. 4

              You didn’t ask me but I’ll answer anyway because I’d like your advice! I am currently prototyping a data processing pipeline. Raw sensor data comes in at one end then is processed by a number of different functions, each of which annotates the data with its results, before emitting the final blob of data plus annotations to the rest of the system. As a concrete example, if the sensor data were an image, one of the annotations might be bounding boxes around objects detected in the image, another might be some statistics, etc.

              At this stage in the design, we don’t know what all the stages in the pipeline will need to be. We would like to be able to insert new functions at any stage in the pipeline. We would also like to be able to rearrange the stages. Maybe we will reuse some of these functions in other pipelines too.

              One way to program this is the “just use a map” style promoted by Clojure. Here every function takes a map, adds its results to the map as new fields, then passes on the map to the next function. So each function will accept data that it doesn’t recognize and just pass it on. This makes everything nicely composable and permits the easy refactoring we want.
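The style described above can be sketched with plain dicts; this is a minimal, hypothetical illustration (the stage names and fields are invented, and Python dicts stand in for Clojure maps):

```python
# Each stage takes the whole map, adds its own fields, and passes it on.
# Unknown keys flow through untouched, so stages compose and reorder freely.

def detect_objects(frame: dict) -> dict:
    # Pretend detection: annotate the frame with bounding boxes.
    return {**frame, "boxes": [(0, 0, 10, 10)]}

def compute_stats(frame: dict) -> dict:
    pixels = frame["pixels"]
    return {**frame, "mean": sum(pixels) / len(pixels)}

def run_pipeline(frame: dict, stages) -> dict:
    for stage in stages:
        frame = stage(frame)
    return frame

frame = run_pipeline({"pixels": [1, 2, 3]}, [detect_objects, compute_stats])
```

Inserting a new stage or swapping the order is just editing the stages list; no function signatures have to change.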

              How would this work in a statically typed system? If the pipeline consists of three functions A, B then C, doesn’t B have to be typed such that it only accepts the output of A and produces the input of C? What happens when we add another function between B and C? Or switch the order so A comes last?

              What would the types look like anyway? Each function needs to output its input plus a bit more: in an OOP language, this quickly becomes a mess of nested objects. Can Haskell do better?

              Since I cannot actually use Clojure for this project, I’d welcome any advice on doing this in a statically typed language!

              1. 3

                In my experience statically typed languages are generally very good at expressing these kinds of systems. Very often, you can express composable pipelines without any purpose-built framework at all, just using ordinary functions! You can write your entire pipeline as a function calling out to many sub-functions, passing just the relevant resources through each function’s arguments and return values. This approach requires you to explicitly specify your pipeline’s dependency graph, which in my experience is actually extremely valuable because it allows you to understand the structure of your program at a glance. The simplicity of this approach makes it easy to maintain and guarantees perfect type safety.
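As a rough illustration of the “just ordinary functions” approach (all types and names here are invented for the sketch, using Python type hints in place of a stricter language):

```python
# Explicit-wiring style: each stage declares exactly the resources it
# consumes and produces, so the dependency graph is visible in the
# signatures and checkable by a type checker (e.g. mypy).
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Boxes:
    coords: List[Tuple[int, int, int, int]]

@dataclass
class Stats:
    mean: float

def detect_objects(pixels: List[int]) -> Boxes:
    return Boxes(coords=[(0, 0, 10, 10)])

def compute_stats(pixels: List[int]) -> Stats:
    return Stats(mean=sum(pixels) / len(pixels))

def pipeline(pixels: List[int]) -> Tuple[Boxes, Stats]:
    # The pipeline is just a function body; reordering or inserting a
    # stage means editing this wiring, and the checker flags mismatches.
    boxes = detect_objects(pixels)
    stats = compute_stats(pixels)
    return boxes, stats
```

The trade-off against the map style is that rearranging stages touches this one wiring function, but in exchange every connection is verified.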

                That said, based on your response to @danidiaz, it sounds like you might be doing heavier data processing than a single thread running on a single machine will be able to handle? In that case, depending on the exact kind of processing you’re doing, it’s still possible that you can implement some lightweight parallelism at the function call level without departing too much from modeling your pipeline as an ordinary sequence of function calls. Ordinary (pure) functions are also highly reusable and don’t impose any strong architectural constraints on your system, so you can always scale to a more heavily multi-threaded or distributed environment later without having to re-implement your individual pipeline stages.

                If you do have to run your system across multiple processes or even multiple machines, then it is definitely harder to express a solution in a type-safe way. Most type systems don’t currently work very well across process or machine boundaries, and a large part of this difficulty stems from the fact that it is inherently challenging to statically verify the coherence of a system whose constituent components might be independently recompiled and replaced while the system is running. I’m not sure how your idiomatic Clojure solution would cope with this scenario either, though. These kinds of questions often turn out to hinge on subtle details, so I’d be curious to hear more about the exact requirements of your system and problem domain.

                1. 2

                  You can write your entire pipeline as a function calling out to many sub-functions, passing just the relevant resources through each function’s arguments and return values.

                  That’s basically what Cleanroom does in its “box structures” that decompose into more concrete boxes. Just functional decomposition. It has semi-formal specifications to go with it, plus a limited set of human-verifiable control-flow primitives. The result is that getting things right is a lot easier.

                  1. 1

                    Thank you. Just to emphasize, we are talking about prototyping here. The system I am building is being built to explore possibilities, to find out what the final system should look like. By the time we build the final system, we will have much stricter requirements.

                    I am working on an embedded system. We have limited processing capability on the device itself. We’d like to do as much processing as we can close to the sensors but, we think, we will probably need to offload some of the work to remote systems (e.g. the “cloud”). We also haven’t fixed precisely what on-board processing capability we will have. Maybe it will turn out to be more cost-effective to have a slightly more powerful on-board processor, or maybe it will be helpful to have two independent processors, or maybe lots of really cheap processors, or maybe we should offload almost everything. I work in upstream research so nothing is set in stone yet.

                    Furthermore, we don’t know precisely what processing we will need to do in order to achieve our goals. Sorry for being vague about “processing” and “goals” here but I can’t tell you exactly what we’re trying to do. I need to be able to pull apart our data processing pipeline, rearrange stages, add stages, remove stages, etc.

                    We aren’t using Clojure. I just happen to have been binge watching Rich Hickey videos recently and some of his examples struck a chord with me. We are using C++, which I am finding extremely tedious. Mind you, I’ve been finding C++ tedious for about twenty years now :)

                  2. 2

                    Here every function takes a map, adds its results to the map as new fields, then passes on the map to the next function.

                    Naive question: why should functions bother with returning the original map? Why not return only their own results? Could not the original map be kept as a reference somewhere, say, in a let binding?

                    1. 4

                      If your functions pass through information they don’t recognize - i.e. accept a map and return the same map but with additional fields - then what is to be done is completely decoupled from where it is done. You can trivially move part of the pipeline to a different thread, process or across a network to a different machine.

                      You’re absolutely right though, if everything is in a single thread then you can achieve the same thing by adding results to a local scope.

                      At the prototyping stage, I think it’s helpful not to commit too early to a particular thread/process/node design.
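The decoupling can be made concrete: because each pass-through stage’s input and output is the whole map, relocating a stage is just a matter of serializing the map across whatever boundary you choose. A small hypothetical sketch (JSON standing in for any wire format):

```python
import json

def detect(frame: dict) -> dict:
    # Pass-through stage: returns the whole map plus its own result.
    return {**frame, "boxes": [[0, 0, 4, 4]]}

frame = {"pixels": [1, 2, 3]}
wire = json.dumps(frame)           # could be a queue, socket, or pipe
frame = detect(json.loads(wire))   # the stage runs unchanged "remotely"
wire = json.dumps(frame)           # and its output travels onward the same way
```

A stage that returned only its own result would instead need a driver on each side of the boundary that knows how to merge results back together.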

                2. 6

                  I may be too far removed from my time with dynlangs but I’ve always liked just changing a type and being able to very rapidly find all places it matters when things stop compiling and get highlighted in my editor.

                  1. 5

                    The comment on how Swift makes prototyping harder by forcing developers to express correct types is spot-on, and would apply to other strongly typed languages

                    Quick terminology note: “strongly typed” is a subjective term. Generally people use it to mean “statically typed with no implicit type coercion”, but that’s not universal. People often refer to both C and Python as strongly typed, despite one having implicit coercion and the other not having static types.

                    1. 1

                      Thanks for the clarification!

                  1. 3

                    For the past few days/weeks, I’ve been piecing together a theory. We like to solve problems, but that only creates new problems. In other words, a solution is just another way to spell tomorrow’s problem. Two very nice examples for my evidence bucket here.

                      1. 1
                        1. 1

                          “We like to solve problems, but that only creates new problems.”

                          Not true even though it looks good on the surface. Maybe true but not as much as it seems. I’m not sure. The counter I have in mind is there’s at least two ways to solve problems:

                          1. Use a solution that worked for something similar whose justifications/assumptions also fit the current context pretty well. Modify it carefully introducing just enough additions to get the job done.

                          2. Use a novel idea whose potential drawbacks aren’t well-understood instead. This creates new problems at a much faster rate. The new problems can also be catastrophic.

                          The cryptocurrency people are doing No 2 when doing No 1 makes more sense. This is also true for many crowds in tech aside from cryptocurrencies. Also, No 1 always makes more sense by default.

                          1. 2

                            Some guy once said something sort of like that before: “One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies.”

                        1. 4

                          I don’t like how it claims the filesystem “lies” about its encoding. It’s like you think “this Italian file server will never unzip an archive that came from Korea”. The rules (at least on Unices) tend to be everything except NUL and slashes, and even if line-feed, ESC or BEL might be hard to generate, they are still valid parts of potential filenames, and pretending they are not will make your program worse. A filesystem that says “nopes” to writing a file not compliant with the current locale setting would be poor indeed.

                          1. 1

                            Arguably if anyone were lying it would be sys.getfilesystemencoding, which promises to tell you what the filesystem’s encoding is, even when the filesystem makes no claims to even having an encoding. But arguably it’s not lying either, per the documentation:

                            Return the name of the encoding used to convert Unicode filenames into system file names, or None if the system default encoding is used. The result value depends on the operating system:

                            Note that it only talks about “Unicode filenames into system file names” (aside: don’t miss the “filename” vs “file name” here) but says nothing about going the other way. It can’t.

                            Knowing too little Python to be sure where the mistake is, I do not trust the article on the correctness or the necessity of the solution it outlines. I have seen developers in other languages be gravely mistaken about their language’s string model, and this article feels iffy in a way that is reminiscent of such misconceptions to me; however, I’ve just as well seen languages screw up their string model, so if I knew more Python I might well be nodding along.

                          1. 2

                            I chuckled when I first read it years ago.

                            On second look now, it falls apart. ☹️The question is “How does that stack up in 2011 terms?” Well, JPEGs were bigger than program binaries back in 1992 too, as would have been a text file containing War & Peace…

                            The one bullseye is the /bin/touch comparison… that’s savage.

                            1. 6

                              Can someone explain a reason you’d want to see the descendants of a commit?

                              1. 7

                                Following history. Code archeology.

                                Many people use the VCS history as a form of documentation for the project.

                                1. 4

                                  But history is the past… you can always see the past in git…

                                  1. 16

                                    Suppose I’ve isolated an issue to this bug fix commit. In what version of gRPC did that commit release?

                                    GitHub tells you it’s on the v1.8.x branch, so if you head over to that branch, you can see it landed after v1.8.5, so it must have been released in v1.8.6. Easy enough, right?

                                    Well that’s not the whole story. That commit was also cherry-picked over to the v1.9.x branch here, because v1.9.x was branched before the bug was fixed.

                                    Besides, that was silly to begin with. Why did you go to the v1.8.x branch and then manually search for it? Why couldn’t it just tell you when it got merged? That would have been nice.

                                    Many projects maintain many release branches. Some just backport bug fixes to older releases, some have more significant changes. Sometimes a bug fix only applies to a range of older releases. Do you want to track all that with no notion of descendants? It’s not fun.

                                    Even just looking at pull requests, it would be nice to see whether a pull request eventually got merged in or not, what release it got merged into, and so on. That’s all history too.

                                    So no, you can’t always see the past in git. You can only see the direct lineage of your current branch.

                                    1. 3

                                      I used to find this hella handy at Fog Creek, especially for quickly answering which bug fixes were in which custom branch for some particular client. We actually made a little GUI out of it, it was so helpful.

                                      (Interestingly, while Kiln supports that in Git too, it at least used to do so by cheating: it looked up the Mercurial SHAs in the Harmony conversion table, asked Mercurial for the descendants, and then converted those commits back to their Git equivalents. Because Harmony is now turned off, I assume either they’ve changed how this works, or no longer ship the Electric DAG, but it was cool at the time.)

                                      1. 2

                                        Why couldn’t it just tell you when it got merged?

                                        I don’t know why GitHub doesn’t, but Git can:

                                        $ git tag --contains b15024d6a1537c69fc446601559a89dc8b84cf6f
                                        v1.8.6
                                        

                                        That doesn’t address the cherry-picking case though. I’m not aware of any built-in tooling for that. Generally Git avoids relying on metadata for things that can be inferred from the data (with file renames being the poster child of the principle), so I’m not surprised that cherry-picks like this aren’t tracked directly. Theoretically they could be inferred (i.e. it’s “just” a matter of someone building the tooling), but I’m not sure that’s doable with a practical amount of computation. (There are other operations Git elects not to try to be fast at (the poster child being blame), but many of them still end up not being impractically slow to use.)

                                        1. 1

                                          Does Fossil track cherry-picks like this though? So that they’d show up as descendants? In git the cherry-picked commit technically has nothing to do with the original, but maybe Fossil does this better. (It’s always bothered me that git doesn’t track stuff like this - Mercurial has Changeset Evolution which has always looked suuuuper nice to me.)

                                          1. 4

                                            According to the fossil merge docs, cherry pick is just a flag on merge, so I imagine it does. I was just highlighting the utility of viewing commit descendants.

                                            1. 7

                                              Mercurial also tracks grafts (what git calls a cherry-pick) in the commit metadata.

                                              1. 5

                                                This is actually illustrative of the main reason I dislike mercurial. In git there are a gajillion low level commands to manipulate commits. But that’s all it is, commits. Give me a desired end state, and I can get there one way or another. But with mercurial there’s all this different stuff, and you need python plugins and config files for python plugins in order to do what you need to do. I feel like git rewards me for understanding the system and mercurial rewards me for understanding the plugin ecosystem.

                                                Maybe I’m off base, but “you need this plugin” has always turned me away from tools. To me it sounds like “this tool isn’t flexible enough to do what you want to do.”

                                                1. 7

                                                  Huh? What did I say that needs a plugin? The graft metadata is part of the commit, in the so-called “extras” field.

                                                  I can find all the commits that are origins for grafts in the repo with the following command:

                                                  $ hg log -r "origin()"
                                                  

                                                  And all the commits that are destinations for grafts:

                                                  $ hg log -r "destination()"
                                                  

                                                  This uses a core mercurial feature called “revsets” to expose this normally hidden metadata to the user.

                                                  1. 2

                                                    Right but how much manipulation of grafts can you do without a plugin? I assume you can do all the basic things like create them, list them, but what if I wanted to restructure them in some way? Can you do arbitrary restructuring without plugins?

                                                    Like this “extras” field, how much stuff goes in that? And how much of it do I have to know about if I want to restructure my repository without breaking it? Is it enough that I need a plugin to make sure I don’t break anything?

                                                    In fairness, I haven’t looked at mercurial much since 2015. Back then the answer was either “we don’t rewrite history” or “you can do that with this plugin.”

                                                    But I want to rewrite history. I want to mix and blend stuff I have in my local repo however I want before I ultimately squash away the mess I’ve created into the commit I’ll actually push. That’s crazy useful to me. Apparently you can do it with mercurial—with an extension called queues.

                                                    I’m okay with limited behavior on the upstream server, that’s fine. I just want to treat my working copy as my working copy and not a perfect clone of the central authority. For example, I don’t mind using svn at all, because with git-svn I can do all the stuff I would normally do and push it up to svn when I’m done. No problem.

                                                    And I admit that I’m not exactly the common case. Which is why I doubt mercurial will ever support me: mercurial is a version control system, not a repository editor.

                                                    1. 12

                                                      For the past several years, as well as in the current release, you still have to enable an extension (or up to two) to edit history. To get the equivalent of Git, you would need the following two lines in ~/.hgrc or %APPDATA%\Mercurial.ini:

                                                      [extensions]
                                                      rebase=
                                                      histedit=
                                                      

                                                      These correspond to turning on rebase and rebase -i, respectively. But that’s it; nothing to install, just two features to enable. I believe this was the same back in 2015, but I’d have to double-check; certainly these two extensions are all you’ve wanted for a long time, and have shipped with Hg for a long time.

                                                      That said, that’s genuinely, truly it. Grafts aren’t something different from other commits; they’re just commits with some data. Git actually does the same thing, IIRC, and also stores them in the extra fields of a commit. I’m not near a computer, but git show --raw <commit sha> should show a field called something like Cherry-Pick for a cherry-picked commit, for example, and will also explicitly expose and show you the author versus committer in its raw form. That’s the same thing going on here in Mercurial.

                                                      And having taught people Git since 2008, oh boy am I glad those two extra settings are required. I have as recently as two months ago had to ask everyone to please let me sit in silence while I tried to undo the result of someone new to Git doing a rebase that picked up some commits twice and others that shouldn’t have gone out, and then pushing to production. In Mercurial, the default commands do not allow you to shoot your foot off; that situation couldn’t have happened. And for experienced users, who I’ve noticed tend to already have elaborate .gitconfigs anyway, asking you to add two lines to a config file before using the danger tools really oughtn’t be that onerous. (And I know you’re up for that, because you mention using git-svn later in this thread, which is definitely not something that Just Works in two seconds with your average Subversion repository.)

                                                      It’s fine if you want to rewrite history. Mercurial does and has let you do that for a very long time. It does not let you do so without adding up to three lines to one configuration file one time. You and I can disagree on whether it should require you to do that, but the idea that these three lines are somehow The Reason Not to Use Mercurial has always struck me as genuinely bizarre.
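                                                      Spelled out, the one-time configuration being described amounts to this (a minimal sketch of a user ~/.hgrc; the [extensions] section header is the standard Mercurial one):

```ini
[extensions]
rebase =
histedit =
```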

                                                      1. 5

                                                        Right but how much manipulation of grafts can you do without a plugin?

                                                        A graft isn’t a separate type of object in Mercurial. It’s a built-in command (not an extension or plugin), which creates a regular commit annotated with some metadata recording whence it came. Once created, the commit can be dealt with like any other commit.

                                                        And how much of it do I have to know about if I want to restructure my repository without breaking it?

                                                        Nothing. Mercurial isn’t Git. You don’t need to know the implementation inside-out before you’re able to use it effectively. Should you need to accomplish low-level tasks, you can use Mercurial’s API, which, like most properly designed software, hides implementation details.

                                                        But I want to rewrite history. (…) Apparently you can do it with mercurial—with an extension called queues.

                                                        The Mercurial Queues extension is for managing patches on top of a repository. For history editing you should use the histedit and rebase extensions instead.

                                                        I just want to treat my working copy as my working copy and not a perfect clone of the central authority.

                                                        Mercurial is a DVCS. It lets you do exactly that. Have you run into any issues where Mercurial prevented you from doing things to your local copy?

                                                        For example, I don’t mind using svn at all, because with git-svn I can do all the stuff I would normally do and push it up to svn when I’m done.

                                                        Mercurial also has several ways to interact with Subversion repositories.

                                                        mercurial is a version control system, not a repository editor.

                                                        Indeed it is. And the former is what most users (maybe not you) actually want. Not the latter.

                                                        1. 2

                                                          Mercurial’s “Phases” and “Changeset Evolution” may be of interest to you, then.

                                                          1. 6

                                                            It’s also worth noting that mercurial’s extension system is there for advanced, built-in features like history editing. Out of the box, git exposes rebase, which is fine, but that does expose a huge potential footgun to an inexperienced user.

                                                            The Mercurial developers decided to make advanced features like history editing opt-in. However, these features are still part of core mercurial and are developed and tested as such. This includes commands like “hg rebase” and “hg histedit” (which is similar to git’s “rebase -i”).

                                                            The expectation is that you will want to customize mercurial a bit for your needs and desires. And as a tool that manages text files, it expects you to be ok with managing text files for configuration and customization. You might find it onerous to have to customize a tool you use every day in order to get the most out of it, but the reward mercurial gets with this approach is that new and inexperienced users avoid confusion and breakage from possibly dangerous operations like history editing.

                                                            Some experimental features (like changeset evolution, narrow clones and sparse clones) are only available as externally developed extensions. Some, like changeset evolution, are pretty commonly used; however, I think the mercurial devs have done a good job recently of trying to upstream as much useful stuff from the ecosystem into core mercurial itself. Changeset evolution is being integrated right now and will be a built-in feature in a few releases (hopefully).

                                      1. 14

                                        plaintiffs failed to show that they had a reasonable expectation of privacy

                                        grrzrrffrrzrrlk

                                        1. 0

                                          I’m sorry, but what do you think happens when a person clicks a Facebook Like button embedded on a website? You really didn’t know the click is sent to Facebook for tracking purposes?

                                          1. 5

                                            What if I don’t even click it? The fact that it exists on the page means Facebook’s javascript gets to run.

                                            When a user visits a page with an embedded “like” button, the web browser sends information to both Facebook and the server where the page is located.

                                            You don’t have to interact with the button for it to track you.

                                            1. 3

                                              To wax philosophical for a bit, long ago we called browsers user agents. Which raises the question, who’s the user? If you’re the user and your agent is doing something you don’t want, you should fire it and get a better one.

                                              1. 2

                                                Surely for most pages these days 95% of the javascript is for things you either don’t care about or actively work against you (tracking), and 5% is the thing you want. Assuming the user can’t live without the 5%, it is hard to ensure the 5% is run and the 95% is not.

                                                1. 2

                                                  One of the settings in NoScript allows scripts served from the same domain as the page and disallows all others.

                                                  This fails occasionally (about once a week I have to whitelist another CDN domain) but otherwise has drastically improved my browsing.

                                                  1. 3

                                                    I disabled that because I figured I might stumble on a malicious site and realize it too late. That hasn’t happened, but simply whitelisting sites you access often and trust has a similar effect.

                                                    A friend uses throwaway VMs for most of his browsing. A bit like poisoning the data, but if he exhibits the same patterns, despite obviously not using “social media”, that still teaches the beast.

                                                    1. 2

                                                      The setting I’d like to see is per-site whitelists.

                                                      I don’t want first-party scripts to be enabled by default, but just because I whitelist a script in one location doesn’t mean I want it whitelisted everywhere. For instance, twitter.com should also load scripts from twimg.com. But I don’t want their scripts running on other webpages.

                                                      1. 1

                                                        You can get that from uMatrix.
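                                                                        For instance, the twitter.com/twimg.com case above could be expressed as a per-site rule in uMatrix’s “My rules” pane. The line below (source hostname, destination hostname, request type, action) is a sketch from memory, so double-check it against the uMatrix documentation; with uMatrix’s usual default of blocking third-party scripts, this would allow twimg.com scripts on twitter.com only:

```
twitter.com twimg.com script allow
```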

                                                        1. 1

                                                          Yes, I am an avid user of uMatrix these days.

                                                2. 2

                                                  You also don’t have to be logged in or even have a Facebook account for the button/javascript to track you. PS. there are 20 or 30 of these buttons in common use and half of them (Google analytics etc.) don’t even have a visual hint they are tracking you. Good times.

                                            1. 24

                                              MISRA (the automotive applications standard) specifically requires single-exit point functions. While refactoring some code to satisfy this requirement, I found a couple of bugs related to releasing resources before returning in some rarely taken code paths. With a single return point, we moved the resource release to just before the return. https://spin.atomicobject.com/2011/07/26/in-defence-of-misra/ provides another counterpoint though it wasn’t convincing when I read it the first time.

                                              1. 8

                                                This is probably more relevant for non-GC languages. Otherwise, using labels and goto would work even better!

                                                1. 2

                                                  Maybe even for assembly, where before returning you must manually ensure the stack pointer is in the right place and registers are restored. In this case, there are more chances to introduce bugs if there are multiple returns (and it might be harder to follow the disassembly when debugging embedded code).

                                                  1. 1

                                                    In some sense this is really just playing games with semantics. You still have multiple points of return in your function… just not multiple literal RET instructions. Semantically the upshot is that you have multiple points of return but also a convention for a user-defined function postamble. Which makes sense, of course.

                                                  2. 2

                                                    Sure, but we do still see labels and gotos working quite well under certain circumstances. :)

                                                    For me, I like single-exit-point functions because they’re a bit easier to instrument for debugging, and because I’ve had many times when missing a return caused some other code to execute that wasn’t expected–with this style, you’re already in a tracing mindset.

                                                    Maybe the biggest complaint I have is that if you properly factor these then you tend towards a bunch of nested functions checking conditions.

                                                    1. 2

                                                      Remember the big picture when focusing on a small, specific issue. The use of labels and goto might help for this problem. It also might throw off automated analysis tools looking for other problems. These mismatches between what humans and machines understand are why I wanted real, analyzable macros for systems languages. I had one for error handling a long time ago that looked clean in code but generated the tedious, boring form that machines handle well.

                                                      I’m sure there’s more to be gleaned using that method. Even the formal methodists are trying it now with “natural” theorem provers that hide the mechanical stuff a bit.

                                                      1. 2

                                                        Yes, definitely – I think in general if we were able to create abstractions from within the language directly to denote these specific patterns (in that case, early exits), we gain on all levels: clarity, efficiency and the ability to update the tools to support it. Macros and meta-programming are definitely much better options – or maybe something like an ability to easily script compiler passes and include the scripts as part of the build process, which would push the idea of meta-programming one step further.

                                                    2. 5

                                                      I have mixed feelings about this. I think in an embedded environment it makes sense because cleaning up resources is so important. But the example presented in that article is awful. The “simpler” example isn’t actually simpler (and it’s actually different).

                                                      Overall, I’ve found that forcing a single return in a function often makes the code harder to read. You end up setting and checking state all of the time. Those who say (and I don’t think you’re doing this here) that you should use a single return because MISRA C does it seem to ignore the fact that there are specific restrictions in the world MISRA is targeting.

                                                      1. 4

                                                        Golang gets around this with defer though that can incur some overhead.

                                                        1. 8

                                                          C++, Rust, etc. have destructors, which do the work for you automatically (the destructor/drop gets called when a value goes out of scope).

                                                          1. 1

                                                            Destructors tie you to using objects instead of just calling a function. They also make cleanup implicit, whereas defer is more explicit.

                                                            The golang authors could have implemented constructors and destructors but generally the philosophy is make the zero value useful, and don’t add to the runtime where you could just call a function.

                                                          2. 4

                                                            defer can be accidentally forgotten, while working around RAII / scoped resource usage in Rust or C++ is harder.

                                                          3. 2

                                                            Firstly, he doesn’t address early returns from error conditions at all.

                                                            And secondly his example of single return…

                                                            int singleRet(int a, int b, int c) {
                                                                int rt = 0;
                                                                if (a) {
                                                                    if (b && c) {
                                                                        rt = 2;
                                                                    } else {
                                                                        rt = 1;
                                                                    }
                                                                }
                                                                return rt;
                                                            }
                                                            

                                                            Should be simplified to…

                                                            a ? (b && c ? 2 : 1) : 0
                                                            
                                                            1. 1

                                                              Are you sure that wasn’t a result of having closely examined the control flow while refactoring, rather than a positive of the specific form you normalised the control flow into? Plausibly you might have spotted the same bugs if you’d been changing it all into any other specific control-flow format which involved not-quite-local changes?

                                                            1. 39

                                                              “We all know the real reason Slack has closed off their gateways. Their business model dictates that they should.”

                                                              Which is why they should’ve never been used in the first place if anyone wanted to keep anything. This isn’t a new lesson with mission-critical, proprietary software. Anyone relying on profit-hungry, 3rd parties is just asking for it. Only people I feel sympathy for are those who didn’t know the risks (esp non-technical folks) or those who did that were forced by managers/customers to use the product at work despite its disadvantages (esp resource hogging).

                                                              1. 19

                                                                I mean, I think categorizing this as a “bait and switch” is disingenuous. How many people were attracted to Slack by their gateways versus their total addressable market or indeed their total number of users? I’m going to go out on a limb and say that number is basically zero.

                                                                Too, the people who are affected by this change are overwhelmingly the people who should have known better. It’s hard for me to gather much sympathy.

                                                                ETA: I’m not a fan of Slack, particularly their godawful clients, but I think this article falls into the classic “It is what I want, therefore it is what everyone wants” fallacy. As my boss at Apple once told me, “we’d go broke if we made products for you.”

                                                                1. 27

                                                                  How many didn’t push harder against slack because they could just use a bridge?

                                                                  1. 5

                                                                    I mean, the problem is that, as Slack is paying for their product by spending Marc Andreessen’s money and not selling goods and services to their users, what leverage does a user have?

                                                                    1. 9

                                                                      I think the idea was that people didn’t push back against their own organizations and managers in their decision to go with Slack because they figured “well, I can just use a bridge and not have to care”.

                                                                  2. 7

                                                                    I mean, I think categorizing this as a “bait and switch” is disingenuous. How many people were attracted to Slack by their gateways versus their total addressable market or indeed their total number of users? I’m going to go out on a limb and say that number is basically zero.

                                                                    What evidence do you have for this? I know of at least 5 people who agreed to adopt slack for various personal projects explicitly because of its IRC gateway.

                                                                    1. 3

                                                                      Against the total universe of Slack users? OK, 5 people you know personally, against a total user population of 9MM. I’m not saying that people who use the gateways don’t exist; I’m saying that as a percentage of Slack’s total userbase, the number is insignificant; it is, to the first order of approximation, zero.

                                                                      1. 6

                                                                        I don’t think you actually know this, and I’m not sure how many such users exist now is even relevant to a bait-and-switch. The question is how many of them there were in the early days, when Slack first started to fight for mind-share.

                                                                          My guess would be a lot, since it started as a glorified web interface over IRC. However, probably like you, I don’t actually know, and can only go by the anecdotal experience of people I know, which was similar to @feoh’s.

                                                                        1. 2

                                                                          It’s a bit funny that you say “sure, 5 people, but that’s just your anecdote, you don’t have actual numbers” and then go on to confidently assert what the numbers are… apparently without having them, or at the very least without showing them.

                                                                          I also concur with @markos that there were probably disproportionately many gateway users among early adopters of Slack. I watched with concern as its use spread among libre projects, and it was the gateways that made it hard to sell the argument on general principle against it. Apparently “you’re putting yourself in a position to get burned” is not sufficient to convince anyone; people have to actually get burned before they’ll renege on a choice. (And I’m not convinced that they learn from the experience.) I must also admit “it’s where the users are” is hard to argue against; as long as everything goes well, that fact matters.

                                                                          The answer may be that we need something more mobile-device-friendly than traditional XMPP? (I know of things like XEP-0286… but a profile only helps as far as it is deployed.)

                                                                      2. 2

                                                                        I totally agree they’d be majority of those affected.

                                                                      3. 5

                                                                        non-technical folks

                                                                        I doubt there are many non-technical people left who still use IRC, but I think the general idea behind this holds true. People who don’t know the risks of putting companies in control of their stuff get screwed over when this sort of thing happens.

                                                                        1. 3

                                                                          I doubt there are many non-technical people left that still use IRC

                                                                          There are lots (for some definition of lots). At least Undernet and Snoonet are completely non-technical, and while they probably don’t have that many users in terms of absolute numbers, in relative terms they comprise a big chunk of all IRC users.

                                                                      1. 2

                                                                        This seems like a neat concept, but decays pretty rapidly if too many people try to see it. The link I get expires before I get there. Then again. Then close tab.

                                                                        1. 2

                                                                          This is the point of the project.

                                                                          As quoted by the author: “This is an experiment in introducing artificial scarcity into digital work.”

                                                                          (Source: https://twitter.com/donald_hans0n/status/949490885586075651)

                                                                          1. 1

                                                                            To be fair, the further out it gets, the less often anybody actually views the art and moves it further out. But yes, you either have to be lucky (arrive early), cheat, or burn a lot of energy.

                                                                            1. 1

                                                                              After days of trying not to cheat, I gave up and went for the screenshot posted on Twitter.

                                                                              To be fair, the further out it gets, the less often anybody actually views the art and moves it further out.

                                                                              That doesn’t seem to be how it’s going. Not any time soon. If it doesn’t distinguish bots from browsers, quite possibly never. If it does, then maybe in a month or several, or maybe years.

                                                                              1. 1

                                                                                Excluding bots/scripts written specifically for this site, what kind of bot is going to follow a single thread of links thousands of jumps deep?

                                                                                I think that sometime very soon, if not already, the depth will stop increasing and it won’t increase again unless somebody cheats again.

                                                                                1. 1

                                                                                  A month later and it has not slowed down at all. (To put that in perspective, your “if not already” prediction came just a week after it was put online.)

                                                                          1. 1

                                                                            As a developer who moved from Linux to the macOS platform, this made me think about how many non-native apps I use as replacements for the Apple version. The obvious ones I’m thinking of:

                                                                            • Alfred instead of Spotlight
                                                                            • iTerm2 instead of Terminal
                                                                            • Dropbox instead of iCloud
                                                                            • Chrome instead of Safari
                                                                            • Gmail instead of Mail
                                                                            • Google Maps instead of Maps
                                                                            • VLC instead of iMovie
                                                                            • Spotify instead of iTunes
                                                                            • Signal instead of Messages

                                                                            &c. This surely isn’t a good trend for Apple to allow to continue.

                                                                            1. 13

                                                                              That’s not what’s meant by “native” in this case. Alfred, iTerm, Dropbox, Chrome, and VLC are native. Spotify is Electron, and I’m not sure about Signal. I’m guessing it’s probably a native app that does most of its UI in a WebView.

                                                                              1. 5

                                                                                Signal for Desktops is Electron.

                                                                                1. 2

                                                                                  As it might be useful to describe what is meant by “native”: it means something on a spectrum between “using the platform-supplied libraries and UI widgets” (i.e. Cocoa) and merely “not a wrapped browser or Electron app”, so it’s not clear whether an application using the Qt framework would be considered “native”. It could be delivered through the App Store and be subject to the sandbox restrictions, so it fits the bill for a “native” app in the original post, but it would also not be using the native platform features which are presumably seen as Apple’s competitive advantage for the purposes of the same post.

                                                                                  1. 2

                                                                                    I’d call Qt native. It doesn’t use the native widgets, but then neither do most applications that are available on multiple platforms.

                                                                                    1. 2

                                                                                      It may be native, but it’s not Mac-native in the sense Gruber was talking about. You will find that all three uses of “native” in his article appear as “native Cocoa apps” or “native Mac apps”. He is talking about a quite specific sense of native: apps that integrate seamlessly with all of the MacOS UI conventions (services, system-wide text substitutions, native emoji picker, drag & drop behaviours, proxy icons, and a myriad more). Qt apps do not.

                                                                                2. 5

                                                                                  Why is it not a good trend? You are still using a Mac .. they sold you the hardware. Should they care about what apps you run?

                                                                                  1. 3

                                                                                    Apps with good experiences that aren’t available on other platforms keep users around. Third-party iOS apps do a better job of moving iPhones than anything else Apple does, because people who already have a pile of iOS apps they use generally buy new iPhones.

                                                                                    Electron is just the latest in a long series of cross-platform app toolkits, and it has the same problems that every other one has had: look & feel, perceived inefficiency, and for the OS vendor, doesn’t provide a moat.

                                                                                    1. 1

                                                                                      Counterpoint: their apps have always been limited and really for people who weren’t willing to learn and use more robust tooling. I mean, how many professionals use iMovie?

                                                                                      1. 1

                                                                                        iMovie is a good example. I’m guessing a lot of us prefer VLC.

                                                                                    2. 1

                                                                                      It’s good for the end user but not a good trend for their business model, part of which is to have best-in-class apps. Don’t get me wrong, I like having choice and I think they shouldn’t force you into their own app ecosystem.

                                                                                  1. 7

                                                                                    This is a mess.

                                                                                    • Much of the technical complexity of the web has been generated by web designers who refuse to understand and accept the constraints of the medium. Overhauling the design when the implementation becomes intolerably complex is only an option when you are the designer. This luxury is unavailable to many people who build websites.
                                                                                    • Suggesting that CSS grid is somehow the reincarnation of table-based layout is astonishingly simple-minded. Yes, both enable grid-based design. CSS grid achieves this without corrupting the semantic quality of the document. They’re both solutions to the same problem. But there are obvious and significant differences between how they solve that problem. It’s hard to fathom how the author misses that point.
                                                                                    • The fetishization of unminified code distribution is really bizarre. The notion that developers should ship uncompressed code so that other developers can read that code is bewildering. Developers should make technical choices that benefit the user. Code compression, by reducing the bandwidth and time required to load the webpage, is very easily understood as a choice for the user. The author seems to prioritize reliving a romanticized moment in his adolescence when he learned to build websites by reading the code of websites he visited. It’s hard not to feel contempt for someone who would prioritize nostalgia over the needs of someone trying to load a page from their phone over a poor connection so they can access essential information like a business address or phone number.
                                                                                    • New information always appears more complex than old information when it requires updates to a mental model. This doesn’t mean that the updated model is objectively more complex. It might be more complex. It might not be more complex. The author offers no data that quantifies an increased complexity. What he does offer is a description of the distress felt by people who resist updating their mental model in response to new information. Whether or not his conclusions are correct, I find here more bias than observation.
                                                                                    1. 8

                                                                                      CSS grid achieves this without corrupting the semantic quality of the document.

                                                                                      When was the last time you saw a page that follows semantic guidelines? Pages are so full of crap and dynamically generated tags that hope was lost a long time ago. It seems developers took “don’t use tables” so much to heart that they’ll put tabular data in floating divs. Are you kidding me?! Don’t even get me started about SPAs.

                                                                                      The fetishization of unminified code distribution is really bizarre.

                                                                                      The point is, I think, that the code should not require minifying and should only contain the bare minimum to get the required functionality. The goal is to have 1 kbyte of unminified JS instead of 800 kbytes of minified crap.
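For what it’s worth, the mechanics both comments are arguing about can be sketched in a few lines. This is a toy illustration only, not any real minifier’s algorithm, and the sample source is made up:

```python
# Toy "minifier": strips line comments and collapses whitespace. Real
# minifiers (terser, esbuild, Closure) also rename identifiers and drop
# dead code; this just illustrates why shipped bytes and readable source
# pull in opposite directions.
import re

source = """
// Greet the visitor with a timestamp.
function greet(name) {
    const now = new Date();
    return "Hello, " + name + "! It is " + now.toISOString();
}
"""

def toy_minify(js: str) -> str:
    js = re.sub(r"//[^\n]*", "", js)   # drop line comments
    js = re.sub(r"\s+", " ", js)       # collapse runs of whitespace
    return js.strip()

minified = toy_minify(source)
print(len(source), "->", len(minified))  # minified is noticeably smaller
```

Whether the savings matter more than readability is exactly the disagreement above; the mechanism itself is not in dispute.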

                                                                                      1. 4

                                                                                        New information always appears more complex than old information when it requires updates to a mental model.

                                                                                        I feel like you completely missed his point here. He isn’t just talking about how complex the new stuff is. He even said flexbox was significantly better and simpler to use than “float”. What he is resisting is the continual reinvention that goes on in webdev. A new build tool every week. A new flavor of framework every month. An entire book written about loading fonts on the web. Sometimes you legitimately need that new framework or a detailed font loading library for your site. But frankly, even if you are a large company, you probably don’t need most of the new fad-of-the-week stuff that happens in web dev. Flexbox is probably still good enough for your needs. React is a genuine improvement for the state of SPA development. But 3-4 different build pipelines? No, you probably don’t need that.

                                                                                        And while we are on the subject

                                                                                        CSS grid achieves this without corrupting the semantic quality of the document.

                                                                                        Nobody cares about the semantic quality of the document. It doesn’t really help you with anything. HTML is about presentation and it always has been. CSS allows you to modify the presentation based on what is presenting it. But you still can’t get away from the fact that how you lay things out in the html has an effect on the css you write. The semantic web has gone nowhere and it will continue to go nowhere because it’s built on a foundation that fundamentally doesn’t care about it. If we wanted semantic content we would have gone with xhtml and xslt. We didn’t because at heart html is about designing and presenting web pages not a semantic document.

                                                                                        1. 3

                                                                                          Nobody cares about the semantic quality of the document.

                                                                                          Anybody who uses assistive technology cares about its semantic quality.

                                                                                          Anybody who chooses to use styles in Word documents understands why they’d want to write documents with good semantic quality.

                                                                                          You still can’t get away from the fact that how you lay things out in the html has an effect on the css you write.

                                                                                          That’s… the opposite of the point.

                                                                                          All of the cycles in web design – first using CSS at all (instead of tables in the HTML) and then making CSS progressively more powerful – have been about the opposite:

                                                                                          How you lay things out on the screen should not determine how the HTML is written.

                                                                                          Of course the CSS depends on the HTML, as you say. The presentation code depends on the content! But the content should not depend on the presentation code. That’s the direction CSS has been headed. And with CSS Grid, we’re very close to the point where content does not have to have a certain structure in order to permit a desired presentation.

                                                                                          And that’s my main issue with the essay: it presents this forward evolution in CSS as cyclical.

                                                                                          (The other issue is that the experience that compelled the author to write the article in the first place – the frenetic wheel reinvention that has taken hold of the Javascript world – is wholly separate from the phases of CSS. As far as that is concerned, I agree with him: a lot of that reinvention is cyclical and essentially fashion-driven, is optional for anyone who isn’t planning on pushing around megabytes of Javascript, and that anyone who is planning on doing that ought to pause and reconsider their plan.)

                                                                                          If we wanted semantic content we would have gone with xhtml and xslt.

                                                                                          Uh… what? XHTML is absolutely no different from HTML in terms of semantics and XSLT is completely orthogonal. XML is syntax, not semantics. It’s an implementation detail at most.

                                                                                          1. 3

                                                                                            If you are building websites, please do more research and reconsider your attitude about semantic markup. Semantic markup is important for accessibility technologies like screen readers. RSS readers and search indexes also benefit from it. In short, there are clear and easily understood reasons to care about semantic markup. People do care about it. All the front-end developers I work with review the semantic quality of a document during code reviews, and the reason they care is that it has a real impact on the user.

                                                                                            1. 2

                                                                                              Having built and relied on a lot of semantic web (lowercase) tech, this is just untrue. Yes, many devs don’t care to use even basic semantics (h1/section instead of div/div), but that doesn’t mean there isn’t enough good stuff out there to be useful, or that you can’t convince them to fix something for a purpose.

                                                                                              1. 1

                                                                                                I don’t know what you worked on, but I’m guessing it was niche; if not, then you spent a lot of time dealing with sites that most emphatically didn’t care about the semantic web. The fact is that a few sites caring doesn’t mean the industry cares. The majority don’t care. They just need the web page to look just so on both desktop and mobile. Everything else is secondary.

                                                                                          1. 1
                                                                                            1. [Comment from banned user removed]

                                                                                              1. 3

                                                                                                The last two comments I’ve seen from this user seem like the inverse of the friendlysock experiment. If this isn’t intentional, I’d highly recommend reading the blog post and reconsidering your posting style.

                                                                                                1. 2

                                                                                                  I would like to know, why are you people down-voting stefantalpalaru for that comment?

                                                                                                  I am not a native speaker nor in the US, that remark was insightful for me - am I missing something except it (the comment) being slightly snarky?

                                                                                                  1. 32

                                                                                                    I’m sort of used to people making fun of my writing style (people complain about my use of exclamation marks on the internet every month or so, complaining about question marks is a new one :) ) but in general I find technical comments on my posts much more interesting.

                                                                                                    I’m honestly a bit disappointed by this comment – I tend to think of lobste.rs as a place where people try to have more substantive technical discussions about posts, as opposed to Hacker News, where comment threads frequently get derailed by conversations about irrelevant things and I end up not learning anything by reading the comments. To me the point of tech discussion sites like this is to discuss the technology! (for example: how could a kernel bug like this happen? have you run into other similar bugs on Mac/Linux? How did you debug them? Can you use dtrace to discover more about what’s going on inside the kernel?).

                                                                                                    There are so many interesting questions to talk about, and I think it’s kind of a shame to waste time making nitpicky comments about the use of a question mark in the title :)

                                                                                                    1. 11

                                                                                                      As a linguist who’s read enough language written without punctuation (Latin and Greek), I’d like to thank you for your use of punctuation, and to encourage it.

                                                                                                      Latin, fun fact, has two words to introduce questions, one that introduces questions where you expect an affirmative answer (“nonne”), and one that introduces questions where you expect a negative answer (“num”), and the interrobang was only invented millennia later. It’s always useful to have a metachannel conveying subtext, and punctuation is compact.

                                                                                                      “I think I found a Mac kernel bug.” sounds definitive, and immediately puts a team of kernel hackers on the defensive. “I think I found a Mac kernel bug?” sounds rather surprised at oneself, and emphasizes the incredulity that you’d posted on Twitter, that it was 4 days from kernel hacking to finding a bug, that you’d expected that people would have found it, and generally is the spirit of humility and exploration that has made your writings so interesting to read!

                                                                                                      Thank you for exploring syscalls :)

                                                                                                      1. 2

                                                                                                        So, however insignificant, this issue has, believe it or not, been (low-key) bugging me since this (sub)thread happened. I’m purely concerned with the linguistic question taken at face value, since I vaguely concur with the annoyance at the question mark (in the sense that I would feel odd writing in that style myself, though I don’t care to tell anyone else what they should prefer). The reason it’s been bugging me is that it’s obvious that “just drop the question mark” can’t work, precisely because it significantly alters the quality of what is being expressed – as you stated. So how would I say that?

                                                                                                        And I think I just realised the answer: the way to correctly express that sentiment in a more formal register is simply “Have I really found a Mac kernel bug?” D’uh, I guess.

                                                                                                        1. 1

                                                                                                          Absolutely. And there’s “I think I might have found a Mac kernel bug” in slightly more formal colloquial registers, “Discovery of potential Mac kernel bug” for a title of some Technical Letter to a journal 50 years ago. More formal titles have fewer questions.

                                                                                                          And we’ve been repurposing punctuation to convey pitch of a sentence when spoken, useful to convey one’s meaning when writing. Sometimes it’s a question mark to convey High Rising Terminal, sometimes it’s comma splices and lack of terminal period to convey a fading train of thought, it’s a fun writing constraint, you should try it

                                                                                                      2. 8

                                                                                                        Thanks for taking the time to reply. I was asking because I felt I might be missing some language slang/common use that was pointed out here.

                                                                                                        Regarding your blog posts: I love reading them, your technical content is sound, delivered in a fun way and a dive into things I rarely look at myself - I’m following all your ruby profiler posts. Keep up what you are doing, the silent majority appreciates it ;)

                                                                                                      3. 11

                                                                                                        the high rising terminal - often associated with “valleyspeak” - is stereotypically associated with shallow, unintelligent women, especially in american pop culture.

                                                                                                        If anyone else on the site had asked about this, I’d wager we would see far less common contentious voting patterns. But hell, let’s call a spade a spade: I’ve seen enough of OPs previous comments to have a pretty good guess at what he’s doing when he made that comment - and I wager the downvoters did too.

                                                                                                        1. 7

                                                                                                          As a meta-discourse thing, I don’t really like this kind of comment even from people whose good faith I’m confident of. It’s really easy for a forum to fall into a pattern where 90% of the discussion is about pretty superficial aspects of the posts, especially in a dismissive way. I wouldn’t say that kind of thing is always off-topic, but I guess I try to think: is this observation novel and non-obvious enough that someone reading the comment learns something? Usually when I’ve been tempted to post a comment complaining about superficial aspects of a post (and there are definitely things I dislike and am tempted to comment on!) it’s hard for me to argue with a straight face that the answer is “yes”.

                                                                                                    1. 6

                                                                                                      Finally, using img tags and MP4s instead of img tags and GIFs brings you into the middle of an ongoing cat and mouse game between browsers and unconscionable ad vendors, who abuse the attribute in order to get the users’ attention.

                                                                                                      Indeed. And I feel like I’m playing a cat and mouse game with the browsers. Long ago, annoying videos were delivered with Flash. I didn’t have flash and I was happy. Then browsers took all the annoying aspects of Flash and made them possible with HTML5. So I had to dig around in settings and disable autoplay and media loading, etc. although with somewhat mixed results. Now they’re taking that annoyance and making it possible with just an img tag.

                                                                                                      There’s this flattening effect that happens at the same time as broader feature support. Certain features used to be “advanced”, which meant you could cut them off. But everything has been squashed downwards. Everything is “baseline” now. You can’t readily draw a horizontal line to split the feature stack. It has to be a vertical line (on my imaginary diagram) which in practice is much harder to draw. Old man ruining the web grumps aside, I think there’s a lesson here about how we build complex systems, and how we let users control them, etc. Something about more features vs bigger features.

                                                                                                      1. 3

                                                                                                        Exactly! And do we really need this perversion of the img tag for this purpose? Why can’t we just stay with video, now that the transition from GIFs for animated content has finally been completed? Why can’t the browser vendors just optimize their browsers? And from what it seems, using img for videos is more of a hack that just slows everything down even more. There is a reason why videos are not preloaded.

                                                                                                        At suckless we believe that the web has to be reformed or at least reduced to a sane subset. The reason why we have the internet of apps is mostly because the OS-platforms (Windows, macOS, Linux) failed to provide consistent native interfaces which are naturally given on the web, or at least were developed further and further to accommodate it. One can’t just discard the web. The first step towards simplicity is to discard one’s dependencies on complex web applications. Having achieved that, it is possible to browse the web for instance with JavaScript disabled, which is already a huge factor in simplification.

                                                                                                        1. 1

                                                                                                          Now they’re taking that annoyance and making it possible with just an img tag.

                                                                                                          What are they newly making possible, though? The annoyance of decoding MP4s? Because the annoyance of distracting animations already existed… they just had to be served up as animated GIFs.

                                                                                                          I guess the fact that MP4s are much smaller than GIFs could allow soundless animations to be used a bunch more than they currently are. But I’m not sure that’s actually going to be the case.

                                                                                                        1. 2

                                                                                                          Why does the option have a -unknown-unknown suffix?

                                                                                                          1. 14

                                                                                                            Compilers often consider their target as a triplet in the form machine-vendor-os, such as x86-pc-linux, which would describe a compiler targeting x86 IBM-compatible machines running Linux. In the case of WebAssembly, the machine target is wasm32, but the vendor and OS are irrelevant and not known at compile time – hence the -unknown-unknown suffix.
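A quick sketch of the triplet structure (the splitting here is naive – real triples sometimes carry a fourth ABI component, like x86_64-pc-linux-gnu, which this sketch folds into the OS field):

```python
# A target triple names machine-vendor-os. For WebAssembly only the machine
# part is meaningful; vendor and OS are literally "unknown" at compile time.
def parse_triple(triple: str) -> dict:
    machine, vendor, os_name = triple.split("-", 2)  # ABI suffixes end up in os_name
    return {"machine": machine, "vendor": vendor, "os": os_name}

print(parse_triple("x86_64-pc-linux-gnu"))     # {'machine': 'x86_64', 'vendor': 'pc', 'os': 'linux-gnu'}
print(parse_triple("wasm32-unknown-unknown"))  # {'machine': 'wasm32', 'vendor': 'unknown', 'os': 'unknown'}
```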

                                                                                                            1. 3

                                                                                                              Thanks for the clear explanation :)

                                                                                                              1. 2

                                                                                                                The Rumsfeld of platforms?

                                                                                                            1. 3

                                                                                                              I almost hate the older scissor switches now. It’s funny because of how great I thought they felt compared to other keyboards, but I got used to the butterfly switch within a few days, and within a week or two, the scissor switches came to feel vague and mushy and unnecessarily long in stroke. I love how little force or room it takes to actuate the butterfly switch keys, how decisively and crisply they click, and how little they tilt no matter where you press. Scissor switches feel half like rubber dome switches in comparison, just cheap and low-quality. And what’s up with the spacing on the scissor-switch 15” MBP? It feels ludicrous, like an entire moat around each key.

                                                                                                              Absolutely the only thing I hate about this keyboard is the loss in reliability – which affects me too, via what appears to be a common problem, namely that hitting (mainly) the spacebar very occasionally produces two blanks instead of just one. My left/right arrow keys also seem to have this double-trigger issue, but much more rarely still.

                                                                                                              However, I can’t agree with other commentators about how reliable the scissor switch keyboards were. I had a mid-2012 MacBook Air 11” before the MBP, and it got a logic board swap, meaning I had two of these keyboards over time, and both developed a few dead or have-to-press-it-just-so modifier keys. It did take a lot longer for these problems to crop up compared to occasional finicky keys on the MBP, though.

                                                                                                              Basically I have no grounds on which to long for the glory days of scissor switches. I would rather Apple pump more R&D into figuring out how to make the butterfly switches less susceptible and better serviceable. That is the keyboard I want. Between the Air’s keyboard and this one, I’m going with this one, no question… especially when comparing them in their flawed states. All I wish is it didn’t get flawed.

                                                                                                              I do however also agree with Marco that the full-size left/right arrow keys are stupid. It’s been less of an issue than I expected it would be – it was extremely disorienting initially, but I got mostly used to it soon after that – yet I never got over it. Even if rarely an issue, it has remained one. I still find myself confused every so often by what action I triggered, only to find I was one key off along the [⌘] [⌥] [←] group and have to consciously reorient my hand. This design does rob you of haptic feedback, period. And unlike the butterfly switches, I don’t feel I’ve gained anything in exchange. It has been pure downside, even if a more minor one than anticipated. It should go.

                                                                                                              1. 1

                                                                                                                A very long time ago I had a Lifebook with very shallow keys (a Transmeta model). I don’t know if they were butterfly or scissor or what, but they were certainly different, and yet I learned to type very quickly on that computer. Easier and faster than on any other keyboard. It was quite susceptible to dirt getting under the keys, and then I’d have to turn it upside down and bang on the key until I heard a satisfying crunch noise. So it sounds a lot like the new MacBooks, but I loved it.

                                                                                                              1. 3

                                                                                                                Meh. The advantages some programming languages bring to the table are sometimes very significant. They don’t just make the problem “slightly easier”. It’s likely that Go is popular because it makes concurrent networking programs significantly easier to write, compared to most mainstream languages; similarly, using OCaml (or something similar) to write a compiler or symbolic program is a huge improvement over doing it in C.

                                                                                                                1. 3

                                                                                                                  It’s not about the language per se, but rather about how many primitives the language integrates, and how well chosen those primitives are.

                                                                                                                  C has no automatic memory management nor concurrency primitives, and Go has both.

                                                                                                                  Among languages that use async/await-style concurrency, expressiveness is largely similar.

                                                                                                                  All the “P languages” form a family based on the set of primitives they’re built on, in which they’re very close to each other, and so programs written in them tend to be structured broadly similarly, despite significant differences in some of the design choices of the languages. Sometimes those differences have practical impact too, but rarely from a zoomed-out perspective on the code structure.

                                                                                                                  So the answer to “which language should I learn?” is fairly irrelevant if it’s to be taken as “which P language should I learn?” but is rather more meaningful if it implies “should I use Go, Haskell or Prolog?”. (Although even then it’s just one topic among the many you need an understanding of, as the article says.)

                                                                                                                  1. 1

                                                                                                                    On the other hand, none of these languages have improved the ways people use their databases, write their queries, set their indices, deploy their servers, configure their networks, ….

                                                                                                                    Programming languages bring a lot to the table, but they are not the core of dealing with computers anymore. It’s a huge chunk, but not as central as people make them to be.

                                                                                                                    1. 2

                                                                                                                      Though not Go or OCaml, all the tasks you describe benefit from declarative languages, like SQL and Prolog. (Or Greenspunned versions thereof)

                                                                                                                      1. 2

                                                                                                                        Sure, if you span the net wide enough, you could also call the Elasticsearch query syntax (which is basically the AST of a simple Lucene search program) a programming language. This isn’t practical though, and not what people mean by “I’ll learn another programming language”.

                                                                                                                        SQL is a perfect example of that: it is rather worthless to know without at least having a hunch on how your specific database executes it. Plus, each product comes with its own extensions.
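
                                                                                                                        The dialect point is concrete even for trivial queries: “give me the first three rows” is already spelled differently across products. A sketch (the table `t` and column `x` are made up for illustration; only the SQLite-compatible variant is actually executed here):

```python
import sqlite3

# The same "first three rows" query in different dialects -- a well-known
# divergence. Table name `t` and column `x` are invented for this example.
queries = {
    "SQL standard / DB2": "SELECT x FROM t ORDER BY x FETCH FIRST 3 ROWS ONLY",
    "SQLite / MySQL":     "SELECT x FROM t ORDER BY x LIMIT 3",
    "SQL Server":         "SELECT TOP 3 x FROM t ORDER BY x",
}

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (x INTEGER)")
conn.executemany("INSERT INTO t VALUES (?)", [(i,) for i in (5, 1, 4, 2, 3)])
rows = conn.execute(queries["SQLite / MySQL"]).fetchall()
print([x for (x,) in rows])  # → [1, 2, 3]
```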

                                                                                                                        1. 2

                                                                                                                          it is rather worthless to know without at least having a hunch on how your specific database executes it

                                                                                                                          I feel this is deeply true of any programming language — it is mostly useless divorced from an implementation. I feel that knowing how to program in C is inseparable from knowing compiler extensions and intrinsics. And with the exception of (seemingly increasingly rare) languages defined by standards, one may not have any choice.

                                                                                                                          One difference between logic languages and imperative languages, here, is that most programmers have already deeply internalized a mental model of how imperative languages are executed (which still often fails to match the actual implementation… note how one still finds people making performance assumptions that held perfectly well on the ZX Spectrum but no longer hold on modern hardware).

                                                                                                                          Maybe we actually agree on something here: I think something the OP is successfully pointing out is that most people’s definition of “I’ll learn another programming language” is so shallow that it yields little compared to the effort they could put into learning other things. But, for example, I think learning something like Prolog (well enough to write production software: i.e., understanding at least one implementation well enough to reason accurately about performance and so on) is an exercise that yields knowledge transferable to plenty of other areas of programming; I suspect one can make this argument for any language and implementation that differs significantly from what one already knows.

                                                                                                                        2. 2

                                                                                                                          “Like SQL and Prolog” = Datalog. Seems like a good example of a new language helping with database queries.

                                                                                                                          https://en.wikipedia.org/wiki/Datalog
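
                                                                                                                          Datalog’s signature trick is recursive queries such as transitive closure, which SQL can only express via recursive CTEs. A sketch comparing the two (the `parent`/`ancestor` names are made up; the SQL variant is run against SQLite via Python’s sqlite3):

```python
import sqlite3

# In Datalog, ancestry is two rules:
#   ancestor(X, Y) :- parent(X, Y).
#   ancestor(X, Y) :- parent(X, Z), ancestor(Z, Y).
# The SQL equivalent needs a recursive CTE:
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE parent (p TEXT, c TEXT)")
conn.executemany("INSERT INTO parent VALUES (?, ?)",
                 [("alice", "bob"), ("bob", "carol"), ("carol", "dave")])
rows = conn.execute("""
    WITH RECURSIVE ancestor(a, d) AS (
        SELECT p, c FROM parent
        UNION
        SELECT parent.p, ancestor.d
        FROM parent JOIN ancestor ON parent.c = ancestor.a
    )
    SELECT d FROM ancestor WHERE a = 'alice' ORDER BY d
""").fetchall()
print([d for (d,) in rows])  # → ['bob', 'carol', 'dave']
```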

                                                                                                                    1. 1

                                                                                                                      I think this rant along with every comment on it misses the real point: mobile devices have made computers really suck.

                                                                                                                      You know why these interfaces make no use of the keyboard and its glorious F keys and so on? Well, show me where the F keys are on your iPad…

                                                                                                                      Google Maps was a lot less terrible back when it had a desktop-centric UI, but eventually they forced everyone over to the new touchscreen-centric UI, and now it’s all spontaneous pans and zooms and things popping into and out of existence… which has not only made it horrible to use, it now also eats CPU like nobody’s business, which is an impressive achievement.

                                                                                                                      It’s not like things were going great before. But the iOSification of computing made everything suck 15 times worse.

                                                                                                                      Now here’s my struggle with that thought: iOSification also put computers into the hands of everyone – even my parents, who can only barely figure out how to use their iPad now, who without the invention of the touchscreen UI would have gone to their graves without ever making independent use of a computer. So… I don’t know what to think.