1. 3

    Looks like it’s actually a Protobuf API under the hood, so one could probably do some neat things with it: https://github.com/gnachman/iTerm2/blob/master/proto/api.proto

    1. 5

      Seeing a lot of disappointed comments here. However, I can’t say I’m too surprised. Chris Granger previously worked on LightTable, which was supposed to be the next big programming editor. At some point he stopped working on it and started building Eve. Work started in 2014, and if you look at witheve.com there still isn’t a release that you can access easily from the homepage. It just says that it’s ‘Coming Soon’… after 3 years.

      As an outsider, the lack of any sort of big download button for something that’s at least an alpha version is a fairly large deterrent from taking it seriously. Is it any surprise that the VCs that funded it felt the same way?

      1. 1

        I often see the comment that we should assume others are acting in good faith, but in my experience there are people who don’t.

        It seems fair to ask that this be the starting point for interacting with other people in a community, but this letter doesn’t really offer guidelines for that, or ways to distinguish “respectful” conversations with people who are acting in bad faith from genuinely kind and productive interaction.

        1. 1

          I think a big part of this is how much of the 3MB is reusable after the first load. The main web app that I work on is about 2MB on initial load, but a combination of etagging, service workers, and other techniques means that subsequent loads are in the 2kb - 25kb range. Of course I always want initial page load to be better, but it’s certainly less of a concern when the cost is amortized over multiple uses of the site.
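
          To illustrate the revalidation half of that, here’s a minimal sketch in Go (hypothetical names, not our actual code):

          package main

          import (
            "crypto/sha256"
            "encoding/hex"
            "net/http"
          )

          // serveAsset tags the bundle with a content hash; when the browser
          // sends the same tag back in If-None-Match, we answer 304 with no
          // body, so a multi-megabyte asset costs almost nothing on repeat visits.
          func serveAsset(body []byte, contentType string) http.Handler {
            sum := sha256.Sum256(body)
            tag := `"` + hex.EncodeToString(sum[:8]) + `"`
            return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
              w.Header().Set("ETag", tag)
              w.Header().Set("Cache-Control", "no-cache") // cache, but revalidate
              if r.Header.Get("If-None-Match") == tag {
                w.WriteHeader(http.StatusNotModified)
                return
              }
              w.Header().Set("Content-Type", contentType)
              w.Write(body)
            })
          }

          func main() {
            bundle := []byte("console.log('app')") // stand-in for the real bundle
            http.Handle("/app.js", serveAsset(bundle, "application/javascript"))
            http.ListenAndServe(":8080", nil)
          }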

          1. 1

            The Chrome team has shown that on mobile devices the cost of parsing all of that JavaScript is non-trivial, and parsing has to happen even on repeat visits. If mobile doesn’t matter in your use case, then you’re absolutely right that it’s not a big deal!

            1. 1

              Laptops still have power constraints. Webpages should almost never cause any noticeable CPU load, but all too often they do. The author just had to add scroll animations or something.

          1. 2

            Re-orderable positions shouldn’t be stored as integers but as fractions (rational numbers). That way there is always room to find a number between any two existing ones.

            This seems logical enough at face value, but it has me very nervous. Are my instincts correct, or is this totally fine?

            1. 1

              From what I’ve seen, it’s usually sufficient as long as you occasionally renumber items. It takes a user intentionally resorting items extensively in short succession to get to the pathological case.

              1. 1

                See the linked article on Postgres wiki, especially:

                There are a number of possible approaches. Using integers is simple but tends to require frequent renumberings. Using floats and picking the midpoints between adjacent values also runs out of space rapidly (you only need 50-odd inserts at the wrong spot to start hitting problems). So this approach uses integer fractions, choosing the values (from the Stern–Brocot tree) such that they can be sorted using (p::float8/q) but renumbering values is only rarely required.

                and:

                -- want to renormalize both to avoid possibility of integer overflow
                -- and to ensure that distinct fraction values map to distinct float8
                -- values. Bounding to 10 million gives us reasonable headroom while
                -- not requiring frequent normalization.
                
                IF (np > 10000000) OR (nq > 10000000) THEN
                  perform cat_renormalize(cat_id);
                END IF;
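
                To make the Stern–Brocot idea concrete, here’s a minimal sketch in Go (my names, not the wiki’s; the real implementation is the plpgsql above):

                package main

                import "fmt"

                // fraction is a Stern–Brocot key; items sort by float64(p)/float64(q).
                type fraction struct{ p, q int64 }

                // between returns the mediant (a.p+b.p)/(a.q+b.q), which always lies
                // strictly between a and b, so there is always room for one more key.
                func between(a, b fraction) fraction {
                  return fraction{a.p + b.p, a.q + b.q}
                }

                func main() {
                  lo := fraction{0, 1} // virtual lower bound
                  hi := fraction{1, 0} // virtual upper bound (reads as +infinity)

                  x := between(lo, hi) // 1/1: first item
                  y := between(x, hi)  // 2/1: append after x
                  z := between(x, y)   // 3/2: insert between x and y
                  fmt.Println(x, y, z) // {1 1} {2 1} {3 2}
                }

                The numerators and denominators grow as you keep inserting at the same spot, which is exactly why the snippet renormalizes once p or q passes 10 million.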
                
                1. 1

                  Ah - reading more into that Wiki entry was illuminating. I think when I first read OP I came away thinking they were advocating storing numbers as reals, not rationals.

                  What a clever solution - thanks for helping clarify!

              1. 6

                I like that this post was written, and would like to see suggestions, but as someone who has abused the hell out of contexts in the past (with lots of success, and some regret!), I find fault with some of the arguments:

                If you use ctx.Value in my (non-existent) company, you’re fired

                1. Go’s type system sucks. So, yes, it’s not statically typed, but you can write wrappers around it to ensure safety at call sites, assuming you use the wrappers (a sketch follows this list). Blame the type system, not the vessel. Or name every other package that uses interface{}, too.
                2. This is the most important point. It’s easy to forget to load state into a context, and much better to require explicit loading (e.g. a constructor that takes a Foo), though even in that case you can pass a nil. BUT, you are still documenting based on the function signature, which is much easier to get right, and maintain.
                3. Use of ctx.Value has actually made testing super easy in some code I’ve written. I’ve since become convinced that other ways are better. But you simply inject a mock into the context, and everything magically works.
                4. This is why new types are created for keys so often. I’ve never come across this as an actual problem.
                5. Not a very good argument. I’ve had lots of success, especially around database code that uses ctx.Value to overwrite a connection object with a transaction for sub-calls… without error. It’s no more error-prone than writing addition code with signed ints that can overflow. Incidentally, the new context-friendly database/sql uses the same pattern, and we’ve since switched to it… again, without issue.
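
                The wrapper pattern from point 1, sketched (Foo is a stand-in for whatever request-scoped state you carry):

                package foo

                import "context"

                // ctxKey is an unexported type, so no other package can collide
                // with our key (this is also the fix for point 4).
                type ctxKey struct{}

                // Foo is the request-scoped state we want to carry.
                type Foo struct{ UserID string }

                // WithFoo and FooFrom are the only ways in or out, so call sites
                // are type-checked even though ctx.Value itself is untyped.
                func WithFoo(ctx context.Context, f *Foo) context.Context {
                  return context.WithValue(ctx, ctxKey{}, f)
                }

                func FooFrom(ctx context.Context) (*Foo, bool) {
                  f, ok := ctx.Value(ctxKey{}).(*Foo)
                  return f, ok
                }

                The compiler now catches misuse at every call site, though point 2 still stands: nothing forces the value to actually be in the context.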

                In summary, point 2 is correct, but the supporting points are mostly FUD.

                Context is mostly an inefficient linked list

                So… don’t use Context when a) you have thousands of values nested in it, or b) you need absolutely every nanosecond of CPU time. In short, this is not a problem 99% of the time, and if it becomes a problem, I know of some great papers that can increase the efficiency of linked lists…

                1. 1

                  I would be interested to know what papers you’re referring to.

                  1. 1

                    The one that comes to mind first is Phil Bagwell’s VLists.

                    But you’ve also got Shao’s Unrolling Lists as well.

                    And, of course, the CDR coding technique.

                1. 1

                  Anyone have any good recommendations for developing VCL configs? Both in terms of concepts (the docs pretty much show a state machine but, as far as I can see, don’t really explain what pass vs. hash etc. actually mean), and in terms of practical dev tricks. I’ve been trying to get a testbed set up locally where I’d have a varnish/ folder in my repo, run a command, and have Varnish hot-swap a new config.

                  1. 1

                    At a past employer we replaced our varnish edge app (with ~1500 lines of config) with a home-grown one based on the golang stdlib.

                    It was slightly (only slightly) more CPU- and RAM-intensive (not a problem in our case), not measurably slower on IO (the bit we cared about), and meant most of our devs could contribute changes (instead of the 1-2 who knew enough to write VCL without crashing the site).
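
                    Not our actual code, but the skeleton such a replacement starts from with the stdlib looks roughly like this (the backend address is hypothetical):

                    package main

                    import (
                      "log"
                      "net/http"
                      "net/http/httputil"
                      "net/url"
                    )

                    func main() {
                      // The edge logic that lived in VCL (routing, header rewrites,
                      // caching decisions) becomes ordinary Go around this proxy.
                      origin, err := url.Parse("http://localhost:8081")
                      if err != nil {
                        log.Fatal(err)
                      }
                      http.Handle("/", httputil.NewSingleHostReverseProxy(origin))
                      log.Fatal(http.ListenAndServe(":8080", nil))
                    }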

                  1. 4

                    This article seems to be a bit fear-monger-y to me. This is (A) documented behavior in the PostgreSQL manual, and (B) if you’re using a SQL database, it is your responsibility to determine the level of isolation your queries need.

                    I typically prefer coalescing multiple selects into one more complex query, and only very rarely need to fall back to multiple selects where higher levels of isolation would actually be necessary.
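
                    Asking for the stricter level is the easy part; a sketch in Go with database/sql (assuming the lib/pq driver):

                    package main

                    import (
                      "context"
                      "database/sql"
                      "log"

                      _ "github.com/lib/pq" // hypothetical driver choice
                    )

                    func main() {
                      db, err := sql.Open("postgres", "dbname=example sslmode=disable")
                      if err != nil {
                        log.Fatal(err)
                      }
                      defer db.Close()

                      // Run the dependent SELECTs in one SERIALIZABLE transaction so
                      // the anomalies can't slip in between them; be ready to retry,
                      // since Postgres aborts conflicting serializable transactions.
                      tx, err := db.BeginTx(context.Background(),
                        &sql.TxOptions{Isolation: sql.LevelSerializable})
                      if err != nil {
                        log.Fatal(err)
                      }
                      defer tx.Rollback()

                      // ... issue the related SELECTs on tx here ...

                      if err := tx.Commit(); err != nil {
                        log.Fatal(err)
                      }
                    }

                    Note: db.BeginTx and sql.TxOptions landed in Go 1.8, the “context friendly database/sql” mentioned elsewhere in this thread.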

                    1. 3

                      This is (A) documented behavior in the PostgreSQL manual, and (B) if you’re using a SQL database, it is your responsibility to determine the level of isolation your queries need.

                      While it is documented, I find most people aren’t familiar with it. I could see half of the examples being completely unexpected to someone who knows Postgres, but not in depth.

                      Also, I find the documentation in the Postgres manual around the different anomalies to be unclear. I believe this is due to historical issues, such as the SQL standard being designed around lock-based databases, and the standard itself being fairly unclear. See A Critique of ANSI SQL Isolation Levels.

                    1. 0

                      I’m sure I’m missing the point of this, but the Pac-Man anecdote seems to be implying that Pac-Man was an American-made game. However, it was made in Japan. Not sure what the lesson is in that.

                        1. 1

                          One thing I did not see in the existing comments is that cellular data currently consumes more power. With phones, battery life is still one of the biggest challenges that manufacturers have to deal with, so it seems unlikely to go away any time soon.

                          1. 4

                            Two quick thoughts:

                            1. There are companies that manage Kafka for you, such as Heroku. One of the points this article makes is that there’s a time/energy cost in having your org’s engineers manage your own Kafka cluster, but that doesn’t have to be true.
                            2. The post alludes to but skirts around the cost of vendor lock-in. If you want to move away from Amazon and you are using an open-source queueing system / event stream, the move requires less ops & engineering time than changing from a proprietary system to a new, less-well-understood one in the midst of the rest of the migration.
                            1. 13

                              I think the addition of the Default Contributor License is great. Contributing to an open-source project should really implicitly mean you agree to license the contribution under the same terms as the project (and are able to do so), but sadly that’s not the case.

                              1. 1

                                I’m honestly surprised that this point isn’t addressed better by popular OSS licenses. I can understand the need to modify things in some way if you’re dual-licensing or something of the sort, but it seems like the bog-standard licenses could provide an out-of-the-box contributor agreement.

                              1. 3

                                I looked at Sandstorm early on, and one of the reasons I didn’t jump into it then (and haven’t since) is that it’s all built on MongoDB. I’m not trusting my personal data to MongoDB under any circumstances. I can appreciate the goal, but it needs to be built on more robust foundations.

                                1. 6

                                  Mongo is used to store metadata, but apps store their own content in whatever format they prefer. We used Mongo for metadata primarily because we wanted to use Meteor as our reactive web framework, and at the time it only integrated well with Mongo. I’d like to move away from it in the future, but this is really an implementation detail…

                                  1. 4

                                    Apologies, but implementation details matter.

                                    If the underlying data storage for Sandstorm is trivial, then what exactly does using Sandstorm vs using the apps that sit on top of it buy me? Half of the apps in the Sandstorm “app store” are things that are also usable outside of the Sandstorm ecosystem.

                                    1. 7

                                      Sandstorm automates the process of installing, updating, and securing apps, to the point that a non-technical user can do it, whereas if you deploy them all separately you’ll need to spend a lot of time on each of those things for each app and you’ll need technical skills.

                                1. 8

                                  This reminds me a lot of an anecdote one of my professors told me from his time working at NASA.

                                  His team was working on running simulations of long-distance manned spaceflight. In particular, the goal of their simulations was to determine an algorithm that would optimally allocate food, water, and electricity to 3 crew members. They decided to try running a genetic algorithm, with the success criterion being that one or more crew members would survive for as many days as possible before resources ran out.

                                  It started off fairly predictably: 300 days, 350 days, 375 days of survival. Then, fairly abruptly, the algorithm shot up to around 900 days of survival. The team couldn’t believe it! They had been fairly pleased with the 375-day survival results as it was.

                                  As they started digging into how this new algorithm worked, they discovered a small problem. The algorithm had arrived at a solution wherein it would immediately withhold food and water from two of the crew members, causing them to die from starvation and dehydration. From there, it would simply provide the surplus resources to the surviving crew member.

                                  The team realised that the success criterion of “one or more crew members survive for as long as possible” was not actually the criterion they wanted, and once they adjusted the algorithm to require keeping all of the crew alive, the results settled back in around 350 days.
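
                                  The whole bug fits in a fitness function. A hypothetical sketch in Go (obviously not the actual NASA code):

                                  package main

                                  import "fmt"

                                  // brokenFitness rewards "one or more crew members survive as long
                                  // as possible": starving two crewmates to feed the third maximizes it.
                                  func brokenFitness(daysSurvived []int) int {
                                    best := 0
                                    for _, d := range daysSurvived {
                                      if d > best {
                                        best = d
                                      }
                                    }
                                    return best
                                  }

                                  // fixedFitness scores a run by its worst-off member, so the
                                  // algorithm only wins by keeping the whole crew alive.
                                  func fixedFitness(daysSurvived []int) int {
                                    worst := daysSurvived[0]
                                    for _, d := range daysSurvived[1:] {
                                      if d < worst {
                                        worst = d
                                      }
                                    }
                                    return worst
                                  }

                                  func main() {
                                    run := []int{900, 3, 3} // the "optimal" murderous allocation
                                    fmt.Println(brokenFitness(run), fixedFitness(run)) // 900 3
                                  }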

                                  It’s often the simple underlying assumptions that distinguish murderous spaceships from spaceships that keep their crew alive a little longer in extreme conditions.

                                  1. 6

                                    As always, it depends. I am running some Go applications inside containers, e.g. gogs, to isolate the process from the rest of the system. The process could also be isolated by restricting its permissions through a systemd service configuration, but a Docker container is nice for portability. Containerization is also required if you want to deploy your application to something like Kubernetes. In that case I’m compiling my Go code without C extensions (CGO_ENABLED=0) so I can use a lightweight musl base image like Alpine.
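
                                    For reference, the usual shape of that build as a hypothetical multi-stage Dockerfile (image tags are illustrative):

                                    # build a static binary, then copy it into a tiny musl-based image
                                    FROM golang:alpine AS build
                                    WORKDIR /go/src/app
                                    COPY . .
                                    RUN CGO_ENABLED=0 go build -o /bin/app .

                                    FROM alpine
                                    COPY --from=build /bin/app /usr/local/bin/app
                                    ENTRYPOINT ["/usr/local/bin/app"]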

                                    1. 2

                                      It seems like working towards making Kafka’s state-keeping implementation pluggable, so that it could use other strongly consistent backing stores, would be a better use of effort than rewriting Kafka.

                                      1. 2

                                        I had to toggle the checkbox off and then on again to make it work. It seems like it ought to just be a button or something that retriggers it, or to rerun on input to the text field. I wasn’t sure it was working for a while.

                                        1. 1

                                          I’m the author of the OP, if you have any questions!

                                          1. 1

                                            Why did you choose “Kafka”?

                                            1. 1

                                              It says so in the very first paragraph of the article.

                                              1. 1

                                                Right, I guess I should ask Apache.

                                          1. 38

                                            I don’t buy this analysis at all. The conclusion would be worth discussion, at the least, but the analysis itself seems weak.

                                            Also, it’s not a “shocking secret” that static types only suffice to catch some classes of errors, and that the sorts of powerful type systems that can check more than basic errors are found in non-mainstream languages. We know that.

                                            What is “bug density”? Is our denominator LoC or is it some measure of problem complexity? If we use bugs/LoC as our metric, then we’re punishing terse languages. How are we counting bugs? How do we evaluate the damage caused by bugs?

                                            A few things come to mind.

                                            • TDD is not the same thing as testing. You need to test your programs, in any language; TDD is a specific development methodology.

                                            • “Testing” is only as good as the programmers you have writing the tests (and the code). This is probably obvious. It remains true even if we allow that static typing provides a (limited, but time-saving and powerful) form of testing. Static typing is useful when you know how to use it. It doesn’t, on its own, guarantee much (and it can be subverted with, say, “stringly” typed interfaces.)

                                            • The quality of programmers matters a lot more than the language. This is something that has become clear to me over the past 15 years. C++ is ugly, but I’d trust good programmers using it before I’d trust bad programmers using any toolset that exists today.

                                            1. 25

                                              One of the most horrible things that GitHub brought to the world is this kind of automatic comparative analysis of code quality between different languages. Using GitHub data for this is completely misguided:

                                              • in our industry we don’t have a standard definition of what a bug is, nor do we have a standard taxonomy of bugs.
                                              • different languages may have different kinds of bugs, as some bugs cannot occur in some languages.
                                              • internal/proprietary repos are not published on GitHub, and this ignores non-internet languages (COBOL, assembler…) and certain kinds of software (games, microdevices, operating system drivers…)
                                              • we don’t know the development methodology used, so we cannot take it into account in root-cause analysis of a bug (is it a language fault or a software development fault?)

                                              Also, the logic of using some papers to disprove the usefulness of strong types:

                                              “While these relationships are statistically significant, the effects are quite small.” [Emphasis added.]

                                              but then ignoring those same papers’ conclusions (no effect found between strongly typed and untyped languages) when it suits promoting untyped languages, is surprising.

                                              1. 13

                                                And let’s not forget that a lot of people on GitHub just use GitHub issues as a to-do list. Just because there’s an issue in GitHub doesn’t mean there’s an associated bug.

                                                1. 7

                                                  I frequently get issues because people don’t read the README and want me to read it to them, so that’s another factor to consider.

                                              2. 3

                                                I would agree. I think, on the balance of things, the article’s conclusions might be correct. But the data shown doesn’t meaningfully support those conclusions.

                                                An organization that is aggressive about quality will win on defect count vs an org that isn’t. Regardless of stack.

                                                I think that types are very important, but the culture beats the stack any day.