Threads for joshka

  1. 4

    I’m not sure how well ActivityPub will scale. The Mastodon server I’m on has a DB of over 200GB with 25K users - and it’s only been up for a few weeks. It seems like fairly soon it will hit the limits of what can be handled on a couple of home servers. I don’t know how much of this is Mastodon specific though.

    1. 11

      Let me assure you that the ActivityPub content is an insignificant part of that.

      Mastodon’s developers had the unfortunate (IMHO) idea of caching, on each server, every piece of media that reaches it. The official explanation is that it alleviates the burden on smaller instances, so they don’t get hammered by thousands of requests for one piece of media that happens to go viral.

      However, this means that every instance in the fediverse graph ends up holding a full copy of the media of every other instance it federates with.

      This is not tenable, and in my opinion is working against the federation concept itself.

      I am working on software where serving dumb files and the ActivityPub content that generated them is a low-overhead affair, so performance won’t drop even when receiving hundreds of requests per second.

      1. 3

        Is the caching meant to alleviate network performance load or network/disk bandwidth load?

        1. 1

          I am not sure, to be honest; I was presenting what I remembered from very old discussions (circa 2019–2020) - which might have been on GitHub, or directly on Mastodon with Eugen and the other people on their dev team.

        2. 2

          Something combining CDN + media hosting + dedupe would be the obvious fix for this. There are github issues proposing IPFS, which probably hits the media hosting + dedupe part.

          1. 1

            I’m also considering using IPFS for media storage in the application I’m writing, but I’m not there yet. I’m prepared to let hyperlinking do its thing for a while. :)

        3. 2

          You should not expect a home server to handle 25K users. But the great thing about how the network is structured is that it doesn’t have to be that way. I find that the smaller instances, of around 100 people, are a lot better, both in moderation quality and in community.

          I am honestly disturbed by how much people flock to the big servers, because this is re-centralizing the network. I’ve already seen old-timers starting to block the biggest instances, because the moderation quality in them dropped very noticeably after Elon bought Twitter.

          1. 1

            I don’t think that’s a reasonable expectation, because it essentially boils down to ignoring everyone who isn’t tech-savvy enough to roll their own instance, or shifting the burden onto each and every self-hosting-capable IT guy.

            1. 2

              You don’t need to be tech savvy to run an instance, you just need a bit of cash to pay a hosting service.

              I realize not everyone has that either, but it’s a lot more people than have the time and energy to be their own host.

              1. 3

                The same goes for everything where you are currently paying someone to make it work. For example your cloud, vegetables, messenger, email or your car.

                And yes, for many people this is on the same level. Truly federated, self-hosted stuff is a pipe dream of us techies. It’s nice in theory, but in reality it means a lot of burned-out admins working in their off time if you really want to make it happen. I wouldn’t want to share random instances with folks I met one day and happened to like at that point, the same way people add and remove others on Facebook and Twitter. And there is so much more: do all these instances have enough bandwidth? Backups? Privacy and security? Things that really matter if you actually throw normal users into this.

              2. 2

                You only need one tech-savvy person per 100 people to host an instance. In my country, around 2.5% of the workforce are described as “IT specialists” - so to host enough servers for the entire country, about 80% of them would need to run their own instance. A lot, but that’s still possible.

                But you don’t need to be that tech-savvy to admin an instance - there are plenty of services offering managed hosting. Last I heard, Masto.host hosts around 10% of all Mastodon instances by count, and other providers hosting Mastodon or other fediverse software exist. Using these services requires little technical knowledge.

                1. 1

                  That is still more people than what traditional centralized social networks are offering.

                  Yes, by all means make it easy to run and invest time into having a straightforward install and configuration, but asking a dev to make server software that your grandma can run is a little unreasonable in my opinion.

            1. 19

              This is an advertorial (rantvertorial?) for a commercial proprietary product.

              Previously, on lobste.rs:

              1. 1

                Seems a bit more than that to me. The problem they describe and the solution are kinda neat.

                1. 4

                  The problem they cherry-picked is about the simplest form of what you can practically get away with to justify their pitch. The little technical detail provided is so shallow that it can barely ripple the surface. The solution has been tried plenty of times before, and without a proprietary $30M VC-backed product behind it; see also Hyper and Upterm.

                  1. 1

                    I don’t disagree - I’d definitely want to see what this could do to standards rather than being a single proprietary solution, however.

              1. 3

                This is everything, not just Linux though. I have no idea why I have to enable the IP remote setting in some obscure menu on my TV, but it fixes the issue of the TV restarting 10-30 seconds after I turn it on over HDMI. I wish I did know, but all I can find is people repeating the solution rather than how to diagnose / understand and fix the underlying bug.

                1. 2

                  Yes, but the advice limits itself to what’s realistically possible to get knowledge of. I think it holds for science and source code, for example, which is well understood by many.

                  But your TV is an anti-example: your TV might have a glaring firmware bug, but it’s hard for us plebs to set a breakpoint in a closed-source blob running on a different machine that you have to be a security researcher (or have a JTAG dongle) to log into. The bane of firmware is that, while you are stuck with the factory version, the devs probably fixed the bug as soon as someone noticed it, 5 years ago, and have since forgotten about it.

                  1. 1

                    No disagreements with that take, though it’s a Bravia from 5 years ago running Android, so ¯\_(ツ)_/¯

                1. 2

                  This is supercool!

                  The workflow parts from https://www.warp.dev/ are similar. Warp solved this by adding it to their own terminal app, but that was a non-starter for me due to unknown future pricing. runme.dev on the other hand hits a really great price point.

                  The one thing that would make this perfect (for use as a team’s operational runbook) is the ability to capture the output as a separate (markdown?) document for posterity.

                  It probably also could use some parameterization of commands (I didn’t look deeply, so if this is there already mea culpa).

                  1. 2

                    Thanks for the response! To answer your question: capturing output is not available yet, but totally possible. We also have it on Runme’s radar. There are a few different ways Runme could make it available. Would you want to capture the output for yourself or to share with others?

                    What would prompt you wanting to look at the outputs again? In other words, what would you want to use the capture output for?

                    1. 2

                      In my previous job, there was a significant amount of ops which often involved multi-step processes. This would help that. When doing problem diagnosis, it’s pretty useful to communicate your findings with other people that might be concurrently exploring issues (in an issue tracker / wiki / email / slack channel etc.). Having this correlated with the runbook rather than detached might be the killer piece on this.

                      I envisage a runbook slapped in a git repo, kept up to date always and distributed on all dev’s machines pretty regularly for situations where access to centralized systems is down. This is particularly key when the system you store your runbooks on depends on the system that you’re managing :D

                      1. 3

                        Unironically, Google Wave would have been perfect for this.

                        1. 2

                          Unfortunately that would have put the data somewhere other than under the company’s direct ownership, which was a non-starter for this sort of thing. The fact that Wave folded is an indication of why you wouldn’t have wanted it to be a primary data store for anything.

                        2. 2

                          I’m really aligned with your thinking there, thanks for sharing your experience! Our CTO Sebastian is assembling a group of folks like yourself to inform the roadmap, and it would be awesome to have your input there. If you’re up for that, either jump on the stateful discord (stateful.com), or shoot him an email: sebastian@stateful.com

                    1. 7

                      Consider changing the url when regenerating the colors so that they can be permalinked.

                      1. 10

                        That Usability study really accurately captures my experience with Nix as a n00b:

                        10 sessions with (absolute or relative) Nix beginners of different software development backgrounds quickly produced some observations:

                        • People love control and reproducibility.
                        • Developers just want to get things done and do not care how it works.
                        • Engineers usually care most about one specific programming language or framework.
                        • People do not read, only skim.
                        • nixos.org navigation often does not meet user needs.
                        • Information about the Nix ecosystem is perceived as being highly dispersed and disorganized. Confusion and disorientation quickly kicks in and often results in “tab explosion”.
                        • The learning curve is perceived as extremely steep.
                        • The Nix language is easy for Haskell users, and obscure to almost everyone else without appropriate instructions.

                        The links on doc writing are really good too:

                        How Learning Works

                        Practical advice on effective teaching and learning, backed by broad and deep evidence. The best-written and probably most important book I have ever read.

                        Diátaxis

                        A framework for structuring software documentation around user needs.

                        Plain language guidelines

                        A set of guidelines to write clearly in English.

                        1. 3

                          Ditto. It’s weird how it keeps happening, but basically every time I have to do something new in Nix the right incantation takes a crapton of time to find, but once it’s there it’s really simple to read. I wonder if there’s a big difference between how Nix maintainers and other programmers think?

                        1. 10

                          I was pleased to see that this article wasn’t yet another rant on the evils of cloud based password management, and instead laid out some very common sense goals (one of which happened to be “no third parties” which I can totally respect) and detailed how they built a solution.

                          I’m far too lazy for this approach. I’m more than willing to trade off having my passwords stored at a third party for the ultra convenience something like 1Password offers, and I’m grateful that the folks at AgileBits really care about things like cross platform support including first class Linux clients.

                          1. 5

                            Me too on all the points. The only part I do sometimes get concerned about is what happens in the event of a 1password bug that causes data loss. This fear is less about trusting 1password, and more about the impact of losing access to everything. But I generally answer that with the same logic that I’m glad I can pay someone to have to deal with that issue rather than taking it upon myself.

                            1. 3

                              Thank you for prompting me to explore 1Password’s data export functions :)

                              Seems like you can export your entire password vault to a local file in several different formats.

                              1. 2

                                Thank you for the nightmare about which I can do nothing for the next 10h (in a bus) =P

                            1. 12

                              I’d say the main win from docker is that configuration is generally stored near the containerized service. Environment vars, docker compose files etc. rather than changing many files in various places in /etc. That’s not an insurmountable barrier to the suggestions, but it is something that helps ensure that software released as a docker image is mostly the same in most deployments. That’s the part of this that’s the human answer, not just the technical one.
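
                              For example, a hypothetical compose file (the service name and variables here are made up) keeps that configuration right next to the service definition:

                              # docker-compose.yml (sketch)
                              services:
                                app:
                                  image: example/app:latest
                                  environment:
                                    - APP_PORT=8080
                                    - APP_DB_HOST=db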

                              I also wonder whether there’s a nix alternative approach to https://hub.docker.com/r/containrrr/watchtower (automatic container updates).

                              Thought provoking article though. And an interesting contrast to the approach one dev is working on for kubernetes (removing systemd). https://medium.com/@kris-nova/why-fix-kubernetes-and-systemd-782840e50104 via https://news.ycombinator.com/item?id=32888538 A lot of the same arguments there could be made of swapping to systemd managing everything.

                              1. 4

                                Environment vars, docker compose files etc. rather than changing many files in various places in /etc.

                                While docker definitely made it more popular, all those apps still take the environment variables anyway. That means you can configure them either directly through systemd overrides / /etc/default files, or through the NixOS config if you use it.
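
                                For instance, a minimal systemd drop-in (the unit name and variables here are hypothetical) can pass the same environment the container image would otherwise get from compose:

                                # /etc/systemd/system/myapp.service.d/override.conf
                                [Service]
                                Environment="APP_PORT=8080"
                                Environment="APP_DB_HOST=localhost"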

                                This unfortunately won’t work with things like MySQL, which handles all its variables in the container entrypoint instead.

                                1. 2

                                  I also wonder whether there’s a nix alternative approach to https://hub.docker.com/r/containrrr/watchtower (automatic container updates).

                                  Just run a rebuild in a cron job. The effect will be identical thanks to the idempotency of building Nix derivations.
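
                                  A minimal sketch of that, assuming NixOS and root’s crontab (the schedule is arbitrary):

                                  # rebuild nightly against the latest channel; unchanged derivations are no-ops
                                  0 4 * * * nixos-rebuild switch --upgrade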

                                1. 25

                                  I’m glad the post essentially says “no” after trying most contenders. I hope people make better porcelain for git, but moving to another data model makes little sense to me, and I hope the plumbing remains.

                                  I did kind of raise my eyebrows at the first sentence:

                                  Most software development is not like the Linux kernel’s development; as such, Git is not designed for most software development.

                                  It’s been a long time since git was developed only for the needs of Linux! For example github has a series of detailed blog posts on recent improvements to git: 1 2 3 4 5 6.

                                  1. 8

                                    The problem with Git is not the “better porcelain”; the current one is fine. The problem is that the fundamental data model doesn’t reflect how people actually work: commits are snapshots, yet all UIs present them as diffs, because that’s how people reason about work. The result of my work on code has never been an entire new version of a repository; in all cases I can remember, I’ve only ever made changes to existing (or empty) repos.

                                    This is the cause of bad merges, git rerere, poorly-handled conflicts etc. which waste millions of man-hours globally every year.

                                    I don’t see any reason to be “glad” that the author of that post didn’t properly evaluate alternatives (Darcs dismissed over HTTP and Pijul over WSL being broken: easier to blame the author than Microsoft).

                                    In my experience, the more pain and suffering one has spent learning Git, the more fiercely one defends it.

                                    1. 4

                                      Snapshots can be trivially converted to diffs and vice-versa, so I don’t see how this would impact merges. Whatever you can store as patches you can store as a sequence of snapshots that differ by the patch you want. Internally git stores snapshots as diffs in pack files anyway. Is there some clever merge algorithm that can’t be ported to git?

                                      What git is missing is the ability to preserve “grafts” across the network, to ensure that rebase and other branch rewrites don’t break old commit references.
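
                                      For what it’s worth, git replace gives you local grafts today; the catch is exactly that the replacement refs live under refs/replace/ and are not shared by default (the commit names below are placeholders):

                                      # locally pretend <original-commit> is <rewritten-commit>; not fetched by default
                                      git replace <original-commit> <rewritten-commit>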

                                      1. 5

                                        I actually thought about the problem a bit (like, for a few years) before writing that comment.

                                        Your comment sounds almost reasonable, but its truth depends heavily on how you define things. As I’m sure you’re aware, isomorphisms between data structures are only relevant if you define the set of operations you’re interested in.

                                        For a good DVCS, my personal favourite set of operations includes:

                                        • In Git terms: merge, rebase and cherry-pick.
                                        • In Pijul terms: all three are called the same: “apply a patch”. Also, I want these operations to work even on conflicts.

                                        If you try to convert a Pijul repo to a Git repo, you will lose information about which patch solved which conflict. You’ll only see snapshots. If you try to cherry pick and merge you’ll get odd conflicts and might even need to use git rerere.

                                        The other direction works better: you can convert a Git repo to a Pijul repo without losing anything meaningful for these operations. If you do it naïvely you might lose information about branches.

                                      2. 4

                                        Not my experience. I was there at the transition phase, teaching developers git. Some hadn’t used SCM at all; most knew SVN.

                                        The overall experience was: They could do just as much of their daily work as with SVN or CVS very quickly, and there were a few edge cases. But if you had someone knowledgeable it was SO much easier to fix mistakes or recover lost updates. Also if people put in a little work on top of their “checkout, commit, branch, tag” workflow they were very happy to be able to adapt it to their workflow.

                                        I’m not saying none of the others would do that or that they wouldn’t be better - all I’m saying is that IMHO git doesn’t need fierce defense. It mostly works.

                                        (I tried fossil very briefly and it didn’t click, also it ate our main repo and we had to get the maintainer to help :P and I couldn’t really make sense of darcs. I never had any beef with Mercurial and if it had won over git I would probably be just as happy, although it was a little slow back then… I’ve not used it in a decade I guess)

                                        1. 5

                                          The underlying data model problem is something that I’ve run across with experienced devs multiple times. It manifests as soon as your VCS has anything other than a single branch with a linear history:

                                          If I create a branch and then add a commit on top, that commit doesn’t have an identity, only the new branch head does. If I cherry-pick that commit onto another branch and then try to merge the two, there’s a good chance that they’ll conflict because they’re both changing the same file. This can also happen after merging in anything beyond the trivial cases (try maintaining three branches with frequent merges between pairs of them and you’ll hit situations where a commit causes merge conflicts with itself).

                                          Every large project where I’ve used git has workflows designed to work around this flaw in the underlying data model. I believe Pijul is built to prevent this by design (tracking patches, rather than trees, as the things that have identity) but I’ve never tried it.

                                          1. 2

                                            I don’t understand, is it “not your experience” that:

                                            • Snapshots are always shown as diffs, even though they aren’t diffs
                                            • Bad merges, git rerere and conflicts happen. A lot.
                                            • The author of the original post didn’t evaluate things correctly?

                                            In any case, none of these things is contradicted by your explanation that Git came after SVN (I was there too). That said, SVN, CVS, Git, Fossil and Mercurial have the same underlying model: snapshots + 3-way merge. Git and Mercurial are smarter by doing it in a distributed way, but the fundamentals are the same.

                                            Darcs and Pijul do things differently, using actual algorithms instead of hacks. This is never even hinted at in the article.

                                            1. 1

                                              The problem is that the fundamental data model doesn’t reflect how people actually work:

                                              I simply think it matches very well. Yes, we could now spend time arguing if it’s just the same as SVN, but that was not my real point.

                                              1. 1

                                                Not the OP, but I’ll respond with my experiences:

                                                Snapshots are always shown as diffs, even though they aren’t diffs

                                                It’s more meaningful (to me at least) to show what changed between two trees as the primary representation of a commit, rather than the tree itself, but

                                                Bad merges, git rerere and conflicts happen. A lot.

                                                I don’t tend to use workflows that rely on merges as their primary integration method. Work on a feature, rebase on mainline, run tests, merge clean.

                                                The author of the original post didn’t evaluate things correctly?

                                                The author’s use cases are contradicted by the vast majority that use git successfully regardless of the problems cited. I’d say the only points that I do agree with the author on are:

                                                Git is fundamentally a content-addressable filesystem with a VCS user interface written on top of it.

                                                Not the author’s quote, but there’s a missing next step there, which is to examine what happens if we actually do that part better. Fossil kinda has the right approach there. As do things like BeOS’s BFS filesystem, or WinFS, which both built database-like concepts into a filesystem. Some of the larger Git systems build a backend using databases rather than files, so that part is already being worked on.

                                                approach history as not just a linear sequence of facts but a story

                                                The one thing I’d like git to have is the idea of correcting / annotating history. Let’s say a junior dev makes 3 commits with such messages as ‘commit’, ‘fixed’, ‘actual fix’. Being able to group and reword those commits into a single commit, say ‘implemented foobar feature’, sometime after the fact, without breaking everything else, would be a godsend. In effect, git history is the first derivative of your code (dCode/dTime), but there’s a missing second derivative.
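
                                                For what it’s worth, the closest thing today is an interactive rebase, which does the grouping and rewording but rewrites the hashes - which is exactly the “breaking everything else” part (the commit count here is just illustrative):

                                                # squash the last three commits into one and give the result a new message
                                                git rebase -i HEAD~3   # mark the fix-up commits as "squash" (or "fixup"), then reword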

                                        1. 2

                                          Regarding TFA,

                                          The for loop is far more evil; it’s much harder to follow and thus impacts maintenance. And it’s also slower. And it doesn’t scale as well. Optimization in this case turns out to be not only faster but also good. It’s the opposite of wasting programming time! You may assert that this example is ridiculous! I admit it helps that I don’t need to maintain Python’s dictionary implementation. But that’s the point. Today’s powerful computers can run dependencies like Python with ease. The context was definitely different in 1974. So since this optimization is not evil, this optimization is not premature. It’s in fact never premature – even if this dictionary never grows beyond 10 items and the cost of the for loop version was negligible, it’s still better code.

                                          I agree that the code is better, but I think that misses the point a little. It’s not better because it runs faster, it’s better because it clearly does what it needs to do.
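
                                          For context, the article’s comparison is presumably along these lines (my reconstruction with made-up names, not its exact code):

                                          # dictionary lookup: states the intent directly
                                          value = prices[item]

                                          # hand-rolled loop: slower, and the reader has to infer the intent
                                          value = None
                                          for key, candidate in prices.items():
                                              if key == item:
                                                  value = candidate
                                                  break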

                                          1. 3

                                            In particular, the claim is sometimes made that code which has better performance will also be easier to understand. I find this to be nonsense. It might happen, on some occasion, that the two are coaligned, but that does not indicate any kind of general correlation.

                                            In particular, both machines and humans are weakly inclined to brevity. But neither tendency is absolute, and the two are not necessarily inclined towards the same sort of brevity.

                                          1. 4

                                            This is not the first time I’ve read something on that Knuth quote, and I’m sure it won’t be the last. One of my favorite anachronisms from Knuth’s article is:

                                            Most programs are probably only run once; and I suppose in such cases we needn’t be too fussy about even the structure, much less the efficiency, as long as we are happy with the answers.

                                              In the infosec field, we have some lists of principles to apply to make programs secure: CWE (Common Weakness Enumeration), the OWASP Top 10, etc. There are also the SonarSource rules for writing and linting software. I wonder if there’s anything out there at a higher level of abstraction. Perhaps it’s time for a restatement of computer science to be developed (in the fashion of the Restatements of the Law). There are a few things that the Restatements of the Law do well (particularly the volumes regarding contracts and torts):

                                            1. Accurately states the precedent and trend of laws and how to interpret them.
                                            2. Adds identity to each statement of the law allowing for easy and accurate citation.
                                            3. Consolidates multiple primary sources into a single document

                                            There are a few really good conceptual statements that can be gleaned from Knuth’s article. My restatements of those are:

                                            1. Strive to write well-structured programs that are easy to understand and almost sure to work. Writing code for performance tends to have a negative impact on debugging and maintenance.
                                            2. Only optimize critical parts of a program rather than the entire program. A minuscule number of lines has a disproportionate effect on the performance of a program.
                                            3. Measure (profile) rather than applying intuition when searching for critical parts. Experience shows that most of the time guesses about where to optimize are wrong.
                                            4. Focus on efficiency only after correctness and maintainability have been attained. Premature emphasis on efficiency introduces complexity and grief.
                                            5. Introducing transformations of critical parts of a well-structured program allows us to more easily understand the performance-optimized code against the backdrop of the rest of the program.
                                            6. Build and use tooling that tends to highlight inefficiencies as part of compilation.

                                            Knuth suggests that we should build performance checks into our compilers, but we have the technology these days to do this earlier in the editing phase of our software development (using linters, and pattern matching refactoring tools).

                                            1. 2

                                              That’s a really neat way of solving this.

                                              1. 26

                                                I’ll go the other way: it is absurd to me that developers repeatedly refuse to attempt to make better estimates, seemingly as a cultural thing.

                                                Devs seem to want this magical fantasy land where they get to tinker with software and get paid six figures and not have to commit to any particular deadline and not have to sully themselves with even trying to make informed guesses about how long something should take. It is absurd.

                                                Here’s a good starting point: “is your current ticket going to take more or less than a year”? As the old joke goes, “….if you answer, we already established that you’re an estimator–now we are just haggling over price.”

                                                1. 22

                                                  it is absurd to me that developers repeatedly refuse to attempt to make better estimates, seemingly as a cultural thing.

                                                  It is equally absurd to me that people refuse to attempt to spontaneously levitate better, seemingly as a cultural thing.

                                                    Estimation requires that the future be like the past. In manufacturing this is described as the process being ‘in control’ so that the normal sources of variation are known and control plots can be drawn without them being a work of fiction. For many things in software we can get certain tasks in control. It is possible to get a lot of the work in software under control, but if you don’t do that work you can’t expect to be able to estimate it. And there is always a big pile of stuff that isn’t under control because it wasn’t foreseen or because it is a rare task.

                                                  1. 14

                                                      Estimation requires that the future be like the past. In manufacturing this is described as the process being ‘in control’ so that the normal sources of variation are known and control plots can be drawn without them being a work of fiction. For many things in software we can get certain tasks in control. It is possible to get a lot of the work in software under control, but if you don’t do that work you can’t expect to be able to estimate it. And there is always a big pile of stuff that isn’t under control because it wasn’t foreseen or because it is a rare task.

                                                      I dunno, there’s a lot of stuff in construction and traditional engineering that’s similarly unforeseen, but they seem to be (somewhat) better at estimating than we are. I wonder if it’s more that we don’t have good techniques or practices. My analogy is formal methods: lots of people told me that up-front design was literally impossible, and I thought that too until I discovered the techniques that make it both possible and economical.

                                                    1. 10

                                                      My experience in a “boring” branch of EE (instrumentation engineering, specifically, that’s as close to bean counting as you can get…) suggests that poor practices are very much a cultural thing. However, I think poor practices are neatly divvied up between development and management. Software engineering management practices need a major revamp at least as much as software development practices do.

                                                      Software engineering managers routinely eschew the management of requirement gathering, synthesis and dissemination, engineering skill building, or process analysis. Much to the embarrassment (and grief) of their organisations, a lot of them do it because they lack the knowledge and skills required for them in the first place. Many eschew these things entirely.

                                                      And when they’re eschewed, they’re often not eschewed through sheer incompetence, but by company policy. For example, the software industry is certainly not the only one that has to deal with changing requirements. However, it is, to my knowledge, the only one that’s developed a whole institutional framework out of not providing requirements, and trickling them, one at a time, over a lengthy period of time. “Other” (as in, EE) engineering managers get that even a provisory understanding of provisory requirements is necessary before making any estimates, and that what you choose early on limits what you can do later on to some degree. Of course the estimates they get are better – with all the early stage trade-offs that involves.

                                                      Edit: I had a cool story about it, but I don’t think I can obfuscate the details enough to make it untraceable so I decided against posting it. But the core of it is that “lack of requirements” and “shifting requirements” just scratch the surface of how poorly requirements are treated by the software industry. “Requirements gathering” is just the most basic aspect: in many fields of engineering, it’s engineering management 101 to ensure not just that a list of requirements exist, but also that they’re analysed, that everyone on the team understands them and what implications they have, that, if any requirements are missing and are replaced with best guesses, everyone up the chain understands what they’re committing to and so on. Software engineering management barely recognizes that these things exist.

                                                      Same goes for e.g. engineering skills development. Top-tier semiconductor engineering companies have pretty detailed plans for how to ensure that, when some technology that’s critical for their strategy will become available some months (or years) from now, their engineers will receive the proper training for it, that it will be integrated in their existing processes and tooling, that knowledge and skills will continue to be disseminated as that technology evolves and so on. Meanwhile, the pinnacle of growing software engineers is time management corporate trainings and workshops whose only considerable advantage over a series of Youtube lectures is a company-paid lunch.

                                                      1. 7

                                                        I feel like construction is probably not a great counterexample since everybody has anecdotes about a massive public-works infrastructure project going way over schedule/budget, often by years and billions of dollars, or about small private projects – remodeling a kitchen/bathroom, building a new patio – going through similarly frustrating overages of time and budget. In fact, contractors who do that sort of work are sort of notorious for being overcommitted, which suggests poor planning/estimation.

                                                        My own experience in software is that it’s most useful to break things down into knowns and unknowns – there are things I’ve done enough times over my career that I can give you very accurate estimates of how long they’ll take to build, but there are also things that I’ve either not done enough, or that involve new variations/twists I’ll have to research, that it’s much harder to be precise up-front about the amount of time they’ll need. This piece says it better than I can.

                                                        1. 2

                                                          In fact, contractors who do that sort of work are sort of notorious for being overcommitted, which suggests poor planning/estimation.

                                                          This gets at one of the messy things about estimates: how others see them. In the case of construction, it is usually in a situation where there is a competition and the low bid is the most appealling. That’s a slightly different beast than an estimate someone doesn’t want to hear.

                                                      2. 4

                                                        Estimation requires that the future be like the past.

                                                        This is false. Good, repeatable, rigorous estimation is certainly aided in such cases, but the act of estimating does not require that–Crazy Alex up on the mount always says that tasks take less than a calendar year, and has a 99.999% success rate for any task put before them. Many software engineers I know will refuse to provide even a bad, arbitrary estimate unless pressed.

                                                        And there is always a big pile of stuff that isn’t under control because it wasn’t foreseen or because it is a rare task.

                                                      This hasn’t been my experience, because usually the engineers have always foreseen it (perhaps the one upside of kvetching about technical debt) or there are few genuinely rare tasks. Integrating with an API isn’t a rare task, handling copy changes isn’t a rare task, making CI faster by looking for redundant steps isn’t a rare task–these are all things that are normal and expected in the course of one’s career.

                                                        I think we collectively harm our profession by pretending that there is no significant difference in frequency between the occurrences of boring glue/plumbing work and genuinely hard engineering.

                                                        1. 4

                                                          Minor addendum:

                                                          The real issue with this approach isn’t the grizzled veteran of a dozen projects who refuses to give an estimate because of their observations of how management at their company treats those estimates or who has seen many projects derailed by acts of Vendor–that person’s reluctance is understandable, and I trust that if pressed and managed properly they’d at least come up with something.

                                                          The problem is all of the brand-new folks to the profession who are hearing “you never give estimates”, “estimates are impossible”, etc. They blindly follow that culture and shortchange themselves, their teams, and their customers.

                                                          1. 1

                                                            the grizzled veteran of a dozen projects who refuses to give an estimate because of their observations of how management at their company treats those estimates

                                                            I’d say even that veteran is being silly. They could always take their actual estimate and double it or whatever they see fit to turn it into an upper bound estimation if management is going to take the estimation and turn it into a deadline.

                                                          2. 3

                                                            Many software engineers I know will refuse to provide even a bad, arbitrary estimate unless pressed.

                                                        There’s usually an underlying dysfunction hiding here. When a developer is held to bad estimates, they lose the psychological safety and trust that come from such things and are wary of trying this again.

                                                            1. 3

                                                              Agreed, but the weird thing is seeing this in engineers too early in their careers (bluntly: fresh out of college, first/second job types) to be reasonably exhibiting this pathology. The apocryphal monkeys, firehose, and banana come to mind.

                                                          3. 3

                                                            Estimation requires that the future be like the past.

                                                        Going to second the dissent on this. I think this statement is incorrect.

                                                            I’ve been able to estimate some large tasks with reasonable accuracy over the years, despite many unknowns. In one case I estimated a task (porting a compiler to a new backend where I didn’t know the codebase) at eight months and was almost dead-on accurate.

                                                            It isn’t magic. You can spend a couple of hours breaking down a task, analyzing it, possibly reverse-engineering it at a high level, probably reading some documentation, and there’s a good chance you will find patterns and situations that have come up before. You can judge your experience with them and go from there.

                                                        Estimates don’t have to be precise. The main complaint many people have is being held to them despite the fudge factor involved. In my estimates I always qualify them with the confidence I have, based on my knowledge of the elements involved, and explain my reasoning. Even at a high level, being able to say “this will take weeks” versus “this will take months” is useful.

                                                            1. 1

                                                          I wanted to add that the people who have done this also know that each time they did the thing, it was with a new version of the framework/library/whatever was being used at the time, and each time they ran into different walls, so they know that they can’t estimate correctly. And while “8 months” is probably a good-enough estimate sometimes, it is not always so; usually you have to give an estimate on something that takes between 2 days and 3 weeks. The article stated something like that: most of this stuff comes from “agile” projects. And in my experience, the estimation problem is not with rough estimations and milestones that are measured in months and years (which are imprecise, but can work, in my experience), but the gritty everyday of scrum refinement meetings.

                                                              1. 2

                                                                And in my experience, the estimation problem is not with rough estimations and milestones that are measured in months and years (which are imprecise, but can work, in my experience), but the gritty everyday of scrum refinement meetings.

                                                                I expect this is true. The problem, then, sounds like “Agile” and Scrum.

                                                                I am lucky not to have ever had to do either one of them in any “serious” way, so perhaps that’s why I don’t think doing estimates are a bad thing.

                                                          4. 13

                                                    I don’t think it’s a cultural thing. Given the abysmal track record of estimates I have observed in 15 years of career, whether by me or others, I would say it’s more of a resignation and a cry for the madness to stop. All I can use to generate an estimate is a belief, a feeling, based on very little concrete, that the date makes sense.

                                                    I have seen that belief be wrong so many, many times.

                                                    Even with your example of one year, I have seen it be wrong. Yes, a project that the dev thought could be done in one month can balloon to a few years once we hit something unknown. I have seen many one-year projects not be done after ten years - yes, ten times the estimate.

                                                    Your comment says that programmers are late because of a moral failing - that if they would just « man up », they would make good estimates.

                                                    Well, no. Just… no.

                                                    After the mountain of evidence we have accumulated over half a century, we cannot say that anymore. Devs are just being asked for the impossible. Their ability to enunciate a number is completely orthogonal to their ability to make that number accurate.

                                                    Also, dev salaries come from their competence in figuring out how complex systems should be put together. Those salaries were never paid for a talent at estimating. If they were, employers could have refused and easily replaced devs with some homeless person, who would have been just as accurate. But they did not, because they could not.

                                                    I don’t know of any employer who has found those accurate estimators yet.

                                                            Do you?

                                                            1. 11

                                                      I think what’s absurd is asking people to give an answer to how long it would take to build a building.

                                                              It’s a three-bedroom house in X city, but no plans are required, nor is the street it’s on. Is this on a hill? What does the drainage look like?

                                                              Even if an exact lat/long is given, you don’t automatically know what utilities are available without research. You don’t know what permits are required without research. You don’t know the level of the water table or the type of soil without research.

                                                              And you haven’t even started building.

                                                              Writing software is an exercise in building new things under new constraints. Estimates are basically throwing a dart at a wall. The minimum scope is generally known. The maximum is vague, if it’s knowable at all.

                                                              1. 7

                                                                Here are some architects discussing how to estimate project time. So I don’t think it’s only developers.

                                                        Processes that take an easily predictable time in software development (compiling, deploying) take up a minority of most developers’ time. We can’t use them to figure out the overall time the way, for example, building construction can: the time for concrete to set, or the time to place 100 bricks, can be estimated a lot more easily, because exactly the same task has already been done millions of times and has been well characterized.

                                                                And even then, infrastructure projects encounter delays, because sometimes not everything gets accounted for, not every environment variable has been measured beforehand, and some variables have changed since.

                                                        Maybe programming is a bit more unique: because of the massive possibility for re-usability of our work, every task is unique, which makes characterization even harder.

                                                                1. 5

                                                                  Basically this. I don’t enjoy estimating or claim to have any secret knowledge that makes me better at it. However, I also recognise the need to plan work, and communicate dates to stakeholders.

                                                                  Where this becomes a problem is when people don’t recognise that:

                                                                  a) estimates are not sacred, and/or

                                                                  b) plans are necessary, but rarely end up perfectly matching reality

                                                                  If we can all accept that we’re working with imperfect information - and communicate when that breaks down - then it’s an inconvenience at worst, and a useful tool at best.

                                                                  1. 4

                                                                    Bingo. The old saw “Plans are useless, planning is indispensable” holds true.

                                                                    Communicating when the plans break down, or when significant new information has become known, is super important–and I’ll freely admit many orgs screw up that part really badly in how they react.

                                                                  2. 2

                                                                    I don’t disagree with the point that there’s a cultural aspect to this, but the real problem starts upstream. I think a huge percentage of the ills of modern software development are actually the ills of modern requirements gathering.

                                                                    Agile has made a badge of honor out of not pressing stakeholders to think rigorously.

                                                                    I love building software but if there’s one thing that’s going to burn me out on working in the industry, it’s having to repeatedly debug requirements and point out situations the product owners didn’t consider. I don’t think it should take a degree in CS to notice that if you, say, make a given input field optional, you can’t just assume it is always available for a mandatory calculation later on.

                                                                  1. 3

                                                                    Such a requirement may look difficult until inspiration hits. Then one day you may realise that it’d be as simple as a list of pairs (two-tuples). In Haskell, it could be as simple as this:

                                                                    newtype EvenList a = EvenList [(a,a)] deriving (Eq, Show)

                                                                    With such a constructive data model, lists of uneven length are unrepresentable. This is a simple example of the kind of creative thinking you may need to engage in with constructive data modelling.

                                                                    Ah, but that’s not quite the same as a List! How do you get the seventh element? For a list it’s xs !! 6, for an EvenList it’s fst(xs !! 3). That’s one of the big disadvantages of constructive data over predicative data: it’s not interchangeable with the general type.

                                                                    1. 1

                                                                      It wouldn’t be a particularly difficult stretch to implement the list interface over this storage mechanism however. E.g.

                                                                -- index into the parent's EvenList: pick the pair, then the side
                                                                get :: EvenList a -> Int -> a
                                                                get (EvenList xs) i = (if even i then fst else snd) (xs !! (i `div` 2))

                                                                I’ve been reading Mark’s stuff for years. It’s likely intentional that the blog article elides some information, so as not to expand beyond what’s necessary.

                                                                    1. 1

                                                                      Just stop storing and editing code as a fixed hierarchy of files and folders.

                                                                      1. 2

                                                                    This isn’t necessarily a new viewpoint, and so it has many pros and cons. Chief among the cons is that all (most?) existing tooling assumes that files and folders are the primary mechanism for organization.

                                                                    Do you have any concrete suggestions that would take this from concept to implementation?

                                                                        1. 2

                                                                          It is indeed a very big zero-to-one problem :-/

                                                                          1. 1

                                                                            Ya, http://datalisp.is, you still treat files and folders as the API of the compiler (or whatever) that you are targeting, but you are free to rearrange the access to sections of text (like materialized views or whatever). There’s lots you can do but the key for these tools to exist is some standard way for them to work together.. which is datalisp.

                                                                            1. 1

                                                                              In my casual perusal of datalisp, I wasn’t able to really get a sense of what it is. It seems rather opaque right now. It seems like it’s very much a research project. Is there something more concrete that I’m missing on this?

                                                                              1. 1

                                                                                It’s stupid simple really, it’s an attempt at a decentralized name system, but before that it is just a data interchange format and some associated tools (templating, version control, etc). The problem of co-locating stuff will be tackled with logic programming over the template data that you use to export a codebase, which is painfully manual, but you can share stuff and work with others.

                                                                                The frontier is what I’ve been staring at but really all I’m trying to do is find solid assumptions to build on.

                                                                          2. 1

                                                                            I don’t think that’s necessarily a solution - the way your tools would display the “units of code” still has to be organised somehow. This would probably still require the programmer to make a taxonomy of code units, since automated relations are typically not that great and may miss more “implicit” relationships between code. Perhaps that can be done by tagging them? This reminds me of Doxygen which can IIRC have clickable “see also” links in descriptions so you can help the reader find related functionality.

                                                                            I mean, trivially you could do away with a fixed hierarchy of files and folders by simply storing all of your code in a single file. Most languages would allow you to do that already, today. The reason we don’t typically do that is because it’s such a mess to find things (although there are some notable single-file projects, and people typically praise them for the ease of dropping them into an existing codebase).

                                                                            Besides, an IDE should technically be able to show existing code from multiple viewpoints already, even if the actual storage is in files and directories. For example, showing classes by hierarchy with “jump to definition” is something that existing IDEs can already do, I think.

                                                                          1. 7

                                                                            The more I see this sort of thing in code bases, the more I learn that for some people this thing doesn’t matter at all, and for others it matters significantly. (I’m one of the latter, but have worked with the former a bunch). Neither way is right per se.

                                                                            The arguments against this are:

                                                                            1. filenames and class names are generally unique enough that the folder structure is irrelevant once the file is open
                                                                            2. component types have many similarities, which makes conventions easier to see.
                                                                            3. the person responsible for models / controllers may be different than for the view.

                                                                            There are probably others, and each of those has counterpoints:

                                                                            1. filenames are often a thing that makes it easier to make decisions and changes on (e.g. looking at a feature’s git history, checking whether the feature is complete, working out whether we can move the feature into another system, …). Grouping by
                                                                            2. often the same person is responsible for M/V/C (a rough sketch of the two layouts in question follows this list)
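
                                                                            To make the two layouts concrete, here’s a rough sketch with hypothetical package and class names (my illustration, not anything from the original discussion):

                                                                              // Grouping by component type (the layout the arguments above favor):
                                                                              //   com.example.app.models.Order
                                                                              //   com.example.app.views.OrderView
                                                                              //   com.example.app.controllers.OrderController
                                                                              //
                                                                              // Grouping by feature, so closely related pieces live next to each other:
                                                                              //   com.example.app.order.Order
                                                                              //   com.example.app.order.OrderView
                                                                              //   com.example.app.order.OrderController
                                                                              package com.example.app.order;

                                                                              class Order { /* fields elided */ }
                                                                              class OrderView { /* renders the order feature */ }
                                                                              class OrderController { /* handles requests for the order feature */ }

                                                                            Either way compiles and runs the same; the difference is only in how quickly a reader can find everything that belongs to one feature.
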
                                                                            1. 7

                                                                              I’m usually of the same opinion and don’t really care what the code looks like - sometimes it matters slightly more for opinionated languages or frameworks (especially when your IDE is suggesting things that may not be relevant)

                                                                              But one aspect I think is overlooked a lot is onboarding new developers: juniors or contractors being the most important. A junior wants to be able to navigate easily and not be dropped into a huge mess that’s only well understood in some team veteran’s head, and with contractors you don’t want them spending days getting up to speed with the codebase.

                                                                              1. 6

                                                                                Yep. I think the most succinct way of stating this view is:

                                                                                To reduce the cognitive burden inherent in software development, things which are closely related should be located close to each other. This concept applies to every level of a software stack - statements, functions, classes, files, folders, modules, services, and systems.

                                                                                1. 2

                                                                                  One thing we’ve been striving for is consistency. We have a decently large code base, maybe 600 files, 20-30 engineers at a time (50+ engineers over time), but the whole code base looks exactly the same. Doesn’t matter what area of the code you’re in, you can hop to a different team in an entirely different area of the code and it’s still familiar. That’s been very empowering.

                                                                                  It’s kind of like the idea of not wanting to introduce additional languages at your org taken to the extreme.

                                                                              1. 7

                                                                                There’s a missing piece in this. Any retry after the first retry is often going to be useless (>90% of the time). Hence a second retry only ever makes things worse. This is something that you’d measure by looking at histograms and based on your actual failure modes of course (YMMV, but I’ve seen very large studies of this behavior at scale).

                                                                                Secondly, when there is one retry at the top and more retries further down the stack, the effect compounds: every extra layer of retries multiplies the amplification exponentially.

                                                                                So the real answer is 1 retry (top level only), token bucketed / with a circuit breaker on the first tries, not on the first retry.

                                                                                With well behaved clients that do know how to back off / token bucket / circuit break, and a server-side fail-fast-on-overload approach, you’re not really in a world of hurt with this. But if you don’t have all those, you introduce things like brownouts, where a single failing server that fails fast receives a larger share of traffic due to the speed with which it responds to failures.
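
                                                                                As a minimal sketch of that client-side shape (my own illustration, not the commenter’s code), with a plain semaphore standing in for a refilling token bucket:

                                                                                  import java.util.concurrent.Callable;
                                                                                  import java.util.concurrent.Semaphore;

                                                                                  class RetryOnceClient {
                                                                                      // Hypothetical retry budget: at most 10 retries in flight across the whole client.
                                                                                      private final Semaphore retryBudget = new Semaphore(10);

                                                                                      <T> T call(Callable<T> op) throws Exception {
                                                                                          try {
                                                                                              return op.call();                   // first try
                                                                                          } catch (Exception firstFailure) {
                                                                                              if (!retryBudget.tryAcquire()) {
                                                                                                  throw firstFailure;             // budget exhausted: fail fast rather than pile on
                                                                                              }
                                                                                              try {
                                                                                                  return op.call();               // the single retry; never a second one
                                                                                              } finally {
                                                                                                  retryBudget.release();
                                                                                              }
                                                                                          }
                                                                                      }
                                                                                  }

                                                                                A real client would also add jittered backoff before the retry and a circuit breaker around the first tries, as described above.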

                                                                                1. 2

                                                                                  Yes, this is what I’m trying to get at, only it’s more clearly written :)

                                                                                  1. 1

                                                                                    Any retry after the first retry is often going to be useless (>90% of the time).

                                                                                    Where does the 90% figure come from?

                                                                                    1. 2

                                                                                      Really good question - this was an internal metric across a very large number of systems at a place I used to work (80K+ systems, 3M+ servers). I don’t recall the exact metric, but it started with a 9. YMMV - use statistical data to work out your own numbers for this. Think of this as more of a rule of thumb than a specific number.

                                                                                      Put another way, if you think about how failures in distributed systems happen, it’s usually things like overload, misconfiguration, bad deployment, disconnection, hardware failure, etc. Large-scale failures were more common based on these factors than intermittent ones. Failing twice is a good indicator of large-scale failure rather than intermittent failure. Large-scale failure is where you want backoff the most, hence the retry-once advice.

                                                                                      You can look at this mathematically. Say we only retry when we fail, and we have two possible failure modes: a large-scale failure where 99% of calls fail, or an intermittent failure where 1% of calls fail.

                                                                                      • Large scale failure (99% of your calls will fail):
                                                                                        • success over two calls = 0.01+0.99*0.01 = 0.0199
                                                                                        • 1.99% success / 98.01% failure
                                                                                      • Intermittent failure (1% of your calls fail):
                                                                                        • success over two calls = 0.99+0.01*0.99 = 0.9999
                                                                                        • 99.99% success / 0.01% failure

                                                                                      In this example, it’s 9801 times more likely that two failures indicate a large scale failure than an intermittent failure. Maybe your numbers are different and you might consider going to a second retry, but capture some stats around your retry logic for first and second retries and see what they say.
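
                                                                                      If you want to check those numbers yourself, here’s a quick sketch (my own, with the 99%/1% rates as assumed inputs rather than measured data):

                                                                                        public class RetryOdds {
                                                                                            public static void main(String[] args) {
                                                                                                double largeScaleFailRate = 0.99;   // assumed: 99% of calls fail during a large-scale event
                                                                                                double intermittentFailRate = 0.01; // assumed: 1% of calls fail intermittently

                                                                                                // Success within two calls: succeed on the first try, or fail once and succeed on the retry.
                                                                                                System.out.println(0.01 + 0.99 * 0.01); // ~0.0199 success under large-scale failure
                                                                                                System.out.println(0.99 + 0.01 * 0.99); // ~0.9999 success under intermittent failure

                                                                                                // How much more likely two consecutive failures are under a large-scale failure.
                                                                                                double twoFailsLarge = largeScaleFailRate * largeScaleFailRate;            // ~0.9801
                                                                                                double twoFailsIntermittent = intermittentFailRate * intermittentFailRate; // ~0.0001
                                                                                                System.out.println(twoFailsLarge / twoFailsIntermittent);                  // ~9801
                                                                                            }
                                                                                        }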

                                                                                  1. 1

                                                                                    A blog post from Oracle telling you to use composition instead of inheritance in Java, with examples showing massive amounts of boilerplate needed to do the thing you’re supposed to do - in comparison to inheritance, which is a single keyword.

                                                                                    They’re talking the talk, but not walking the walk.

                                                                                    Kotlin is the only language I’ve seen that embraces OOP and also makes composition easy, using delegation:

                                                                                    interface Base {
                                                                                        fun print()
                                                                                    }
                                                                                    
                                                                                    class BaseImpl(val x: Int) : Base {
                                                                                        override fun print() { print(x) }
                                                                                    }
                                                                                    
                                                                                    class Derived(b: Base) : Base by b
                                                                                    
                                                                                    fun main() {
                                                                                        val b = BaseImpl(10)
                                                                                        Derived(b).print()
                                                                                    }
                                                                                    
                                                                                    1. 2

                                                                                      It’s worth taking note of who the author of the post is rather than where it’s posted. Joshua Bloch pretty much wrote the bible of how to write good idiomatic Java, as well as writing much of the underlying code that we’re talking about here (the collections stuff). Sure, Kotlin is cool, but it’s not like this isn’t easy to do in Java with Lombok or some other annotation processor.

                                                                                      E.g. taking the example in the post:

                                                                                      // see https://projectlombok.org/features/experimental/Delegate
                                                                                      import lombok.experimental.Delegate;
                                                                                      import java.util.Collection;
                                                                                      import java.util.Set;
                                                                                      
                                                                                      class InstrumentedSet<E> implements Set<E> {
                                                                                        @Delegate(excludes=InstrumentedSet.class)
                                                                                        private final Set<E> s;
                                                                                        private int addCount = 0;
                                                                                      
                                                                                        public boolean add(E e) {
                                                                                          addCount++;
                                                                                          return s.add(e);
                                                                                        }
                                                                                      
                                                                                        public boolean addAll(Collection<? extends E> c) {
                                                                                          addCount += c.size();
                                                                                          return s.addAll(c);
                                                                                        }
                                                                                      
                                                                                        public int getAddCount() {
                                                                                           return addCount;
                                                                                        }
                                                                                      }
                                                                                      

                                                                                      You could argue the problems of Lombok (I’m not going to play that game today though). I think it would be difficult to argue, however, that switching to Kotlin is easier than adding the two lines above.

                                                                                      1. 2

                                                                                        Assuming familiarity with and willingness to use Lombok in every Java project sounds like a big stretch to me though.

                                                                                        1. 1

                                                                                          Compared to switching to Kotlin?

                                                                                          1. 2

                                                                                            No, that was a standalone opinion. “Not in Java” - “but Lombok”. I personally don’t care about Kotlin, or the comparison. I’m siding with whoever said “Not in Java”, and my point was mostly re: “not like this isn’t easy to do in Java with Lombok or some other annotation processor.”

                                                                                            I don’t have a lot of experience with Lombok, but what little I had was more bad than good and so pulling it in for this is meh.

                                                                                            1. 1

                                                                                              Got it - your perspective makes sense to me. I put that last point alluding to the problems of Lombok for similar reasons. Overall I lean positive on it rather than negative though.

                                                                                        2. 2

                                                                                          If the most popular OOP language needs a third-party library to make one of the most necessary OOP patterns reasonable, while having built-in support for an antipattern, then the language is fundamentally broken.

                                                                                          1. 1

                                                                                            I agree, the Kotlin approach is nicer. This doesn’t make Java bad, as the approach involved isn’t hard and can easily be avoided or automated.

                                                                                            The language doesn’t require Lombok to achieve this. It only requires it to reduce this to a single line (like the equivalent Kotlin). The article presents a one-line-per-public-method approach to the problem. That’s fine in my book, and isn’t going to scare away any developer worth their salt. The most used Java IDE even provides a shortcut to generate delegation methods for you.
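
                                                                                            For concreteness, that one-line-per-public-method shape looks roughly like this (my abridged sketch of a forwarding class; a complete one covers every Set method the same way):

                                                                                              import java.util.Collection;
                                                                                              import java.util.Iterator;
                                                                                              import java.util.Set;

                                                                                              abstract class ForwardingSet<E> implements Set<E> {
                                                                                                  private final Set<E> s;

                                                                                                  protected ForwardingSet(Set<E> s) { this.s = s; }

                                                                                                  public boolean add(E e)                          { return s.add(e); }
                                                                                                  public boolean addAll(Collection<? extends E> c) { return s.addAll(c); }
                                                                                                  public boolean remove(Object o)                  { return s.remove(o); }
                                                                                                  public boolean contains(Object o)                { return s.contains(o); }
                                                                                                  public int size()                                { return s.size(); }
                                                                                                  public boolean isEmpty()                         { return s.isEmpty(); }
                                                                                                  public Iterator<E> iterator()                    { return s.iterator(); }
                                                                                                  // ... remaining Set methods forwarded the same way (hence abstract here)
                                                                                              }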

                                                                                            1. 2

                                                                                              We’ll have to agree to disagree. I believe that boilerplate is just more surface area for bugs to hide in. I believe that putting code generation in IDEs as a workaround is putting what should be in the compiler into an IDE.

                                                                                              1. 1

                                                                                                I actually agree with the following in theory, and with the underlying rationale that less code is generally better.

                                                                                                I believe that boilerplate is just more surface area for bugs to hide in

                                                                                                But in practice, boilerplate is rarely a place where I have found bugs.

                                                                                                If generated code has issues, those are found more often in the generator than in modified generated code. C# as a language leverages this idea by introducing partial classes to allow generated code and hand-written code to co-exist in a way where the two parts are obvious (caveat: it’s been >5 years since I’ve written any C#, so my perspective on that is tinted by my recollection).

                                                                                                The overall thing I come back to here though is that boilerplate isn’t necessary. It’s only necessary if you want to avoid a language feature that you don’t like to use (annotation processing). That feature is just as good once understood as the corresponding Kotlin feature.

                                                                                                Perhaps there’s a bit of a middle-ground suggestion that would work here. Lombok focuses very much on the nuts and bolts needed to implement OOP patterns (getters, setters, builders, nullability, etc.). Perhaps the higher-level set of OOP annotations needs to be extracted from Lombok in a way that would bring easy feature parity with the Kotlin OOP concepts that are missing in Java? This would be a fairly neat way to model what a language change would look like that makes this part of the language instead of bolting it on.

                                                                                      1. 2

                                                                                        I have mixed feelings on the “keep all changes under 400 lines total” rule. I don’t like thousand-line diff bombs any more than anyone else, and I try not to lob them at others. But it really depends on what’s in all those lines. It’s pretty easy to blow past that number with low-review-effort changes such as automated refactors on a code base of any meaningful size. Tests can also sometimes have a high code-size-to-review-effort ratio, but a comprehensive set of tests can make a change easier to review, not harder.

                                                                                        Or the degenerate case: replace 1000 lines of custom code with a small implementation that uses a new third-party library. With a hard-and-fast change size limit, you would have to whittle away the unused code a chunk at a time. That seems clearly ridiculous to me.

                                                                                        When I’ve worked on projects where people took a hard line on this rule, it never got as bad as my previous paragraph, but it was annoying about as often as it was helpful. You’d frequently see people shatter a semantically atomic change into arbitrarily-bounded pieces each of which was under the limit but was impossible to meaningfully review on anything beyond a simple syntactic level without flipping back and forth between the pieces.

                                                                                        I kind of follow Einstein’s philosophy on this: PRs should be as small as possible, but no smaller. If there is a good way to decompose a change into a sequence of self-contained smaller changes, I do that. If there isn’t, I don’t try to force the issue.

                                                                                        One way I try to answer the “Could this be a separate PR?” question is to ask myself whether I could assign each piece of my change to a different reviewer and get useful feedback.

                                                                                        1. 3

                                                                                          I understand that this is a hypothetical, but “replace 1000 lines with another implementation” is 3 distinct things:

                                                                                          1. Implement replacement
                                                                                          2. Use replacement everywhere initial implementation was used
                                                                                          3. Cleanup

                                                                                          1 is where the meat of the review is necessary, 2 is fairly easy to review, and 3 should mostly just be deleting code and should effectively just be a noop.

                                                                                          I personally (for Java) like to reduce this even further to 100-150 LoC in a single review. This doesn’t include tests though, which are usually 3-10x the size of the code, which puts this squarely in the 400-1000 total lines :)

                                                                                          In the end, the best agile way to handle this is to have a rule. Ignore it when you need to. Notice when this ignoring becomes a burden (many comments on a single review, over many rounds). A good number might be 10 comments or going past 3 rounds of review.

                                                                                          1. 2

                                                                                            Yes, it’s truly rare that an arbitrary rule can’t be followed when it pertains to code that you as the programmer add, modify and remove. Things can always be broken down into small chunks, and according to the book, smaller chunks will result in better review feedback from your reviewers.

                                                                                            I didn’t mention it in the blog post, but changes outside of your direct control (say those 20k changes to package-lock.json because you upgraded a single npm package) don’t need to count towards such a limit. Such files are generated automatically and can thus safely be ignored most of the time by human reviewers.

                                                                                            1. 1

                                                                                              Also, the part that makes those three things work is the tests:

                                                                                              1. tests only cover the new code
                                                                                              2. tests only cover the code that uses the new code
                                                                                              3. tests covering the old code are deleted

                                                                                              Even better is that if the tests for 2 don’t exist, then you get that as a step 0 (cover usage of the existing code with characterization tests to ensure things don’t break). If you see any other pattern, then there’s something screwy going on…

                                                                                          1. 1

                                                                                            When does JetBrains Fleet arrive?