1. 2

    If you learn how to use a UNIX shell well, you have way more power than any IDE will give you, and you’ll be able to use that power on any machine you’re on.

    And like, yeah, you can get git integration and things like that in your Vim if that’s your style, but you don’t really need that when you’re already in a shell window. You can just run git commands ¯\_(ツ)_/¯ . That’s the thing with using a text editor that runs inside the terminal: you use command-line tools for a lot of your tasks, and use the text editor just to edit text. And as a text editor, Vim is actually pretty great.

    Besides, Vim configuration with modern plugin systems like Vundle/vim-plug is not even hard anymore.

    1. 2

      I’d add: don’t use Rails if you’re not developing a REST-based API. Rails’ GraphQL support is a Frankenstein.

      1. 1

        Aren’t REST and GraphQL mutually exclusive?

        1. 2

          You can have GraphQL as an endpoint in an application without using it for everything. On Rails you usually add GraphQL as another controller, one that serves a GraphQL schema.
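
          For anyone curious, the usual shape of that (assuming the graphql-ruby gem; MyAppSchema and current_user are placeholders, not from any particular app) is a single route pointing at an otherwise ordinary controller:

          # config/routes.rb
          post "/graphql", to: "graphql#execute"

          # app/controllers/graphql_controller.rb
          class GraphqlController < ApplicationController
            # Every query and mutation comes through this one action;
            # the rest of the app keeps its normal REST controllers.
            def execute
              result = MyAppSchema.execute(
                params[:query],
                variables: params[:variables],
                context: { current_user: current_user }
              )
              render json: result
            end
          end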

          That said — I really don’t like how GraphQL works in Rails, so in a well designed system it probably should be mutually exclusive.

      1. 7

        Having worked on a lot of Rails codebases and teams, big and small, I think this article is pretty spot on. Once your codebase or team is big enough that you find yourself getting surprised by things like

        # inside a class or module body (say, a model or concern); this defines
        # do_one_things .. do_five_things at load time
        counters = %i(one two three four five)
        counters.each do |counter|
          define_method("do_#{counter}_things") do |*args|
            # whatever horrible things happen here
          end
        end
        

        etc… you’ve outgrown rails.

        1. 7

          This is my litmus test for “has this person had to maintain code they wrote years ago”.

          I don’t think I’ve yet worked with anyone who can answer yes but also wants me to maintain code that can’t be found via grep.

          1. 3

            What unholy beast is that. I mean. Seriously. Wtf is that?

            1. 4

              It’s gruesome, and I’ve seen a version of it (using define_method in a loop interpolating symbols into strings for completely ungreppable results) in at least 3 different large old codebases, where “large” is “50KLOC+” and “old” is “5+ years production”

              There are a lot of ways to write unmaintainable code in a lot of languages/frameworks, but if I ever were to take a Rails job again, I would specifically ask to grep the codebase for define_method and look for this prior to taking the job. It’s such a smell.
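
              For contrast, a greppable version of the snippet above is just plain method definitions (a sketch, with the bodies elided as in the original): longer and more boring, but a search for “def do_one_things” actually finds it.

              # Explicit equivalent of the define_method loop above.
              def do_one_things(*args)
                # whatever horrible things happen here
              end

              def do_two_things(*args)
                # ...
              end

              # ...and so on for three, four, five.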

              1. 2

                I don’t understand why it’s so normalized in Rails to define methods on the fly. You could do that monstrosity easily in most interpreted languages, but there’s a reason people don’t! In Rails, it’s just normal.

                1. 4

                  It’s been a long time since I’ve read the ActiveRecord source so it may no longer be this way, but there was a lot of this style of metaprogramming in the early versions (Rails 1.x and 2.x) of the Rails source, ActiveRecord in particular, and I think it influenced a lot of the early adopters and sort of became idiomatic.

            2. 1

              Who the fuck writes code like this?

              shudders

              1. 1

                The time between “just discovered ruby lets you do that” and “just realized why I shouldn’t” varies from person to person; I’ve seen it last anywhere between a week and a year.

            1. 1

              I have seen it used in governmental databases: per-table/view ACLs. Haven’t seen row-based ACLs yet.

              1. 3

                This hasn’t really convinced me that Redux is more than a glorified getter/setter. The state machine example doesn’t use any interesting Redux features; it’s clever, but not really because of Redux. AFAIK Redux doesn’t have intrinsic features that let you use it as a state machine. I’m no Redux expert, but a state machine library is something like https://github.com/aasm/aasm; Redux to me just seems like a common pub/sub design pattern glorified into a library.
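
                For reference, a minimal aasm state machine (adapted from that gem’s README) looks roughly like this; the states and allowed transitions are declared up front, which is the kind of intrinsic state-machine support I mean:

                class Job
                  include AASM

                  aasm do
                    state :sleeping, initial: true
                    state :running
                    state :cleaning

                    event :run do
                      transitions from: :sleeping, to: :running
                    end

                    event :clean do
                      transitions from: :running, to: :cleaning
                    end
                  end
                end

                An invalid transition (calling clean while still sleeping, say) is rejected by the library itself, whereas with Redux that kind of rule only exists if you hand-write it into a reducer.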

                1. 1

                  This is really cool. Besides being useful and a good idea, I’ve been using it to see what comes up when I search non-dev-related things (such as band names or celebrities) and getting hilarious results.

                  1. 1

                    I’m also slightly terrified of entering stuff like “how to kill children processes” into Google.

                  1. 4

                    Reminds me of a rant I wrote a while ago about an article titled “How and Why We Switched from Erlang to Python”.

                    1. 4

                      🤦‍♀️ omg they switched to PYTHON because they needed faster CPU speed? Like, I love Python, it paid my bills for years, but it’s a terrible choice for CPU-bound speed improvements.

                    1. 6

                      I feel like a lot of what is being discussed here has already been talked about at length in various other posts. Is it not odd that there seems to be a collection of C/C++ users who are misrepresenting Rust’s capabilities? Steve already talked about this in his post You can’t “turn off the borrow checker” in Rust which is mentioned in this article. I’ve seen many false statements across Reddit, HN, Discord, etc, that could easily be resolved by reading the documentation. What is causing this? It’s not like Rust’s documentation doesn’t spell out what it restricts.

                      All Rust checks are turned off inside unsafe blocks; it doesn’t check anything within those blocks and totally relies on you having written correct code.

                      This is objectively false! Granted, the original video is in Russian, but if you’re giving a talk about Rust it seems like it would make sense to learn what unsafe actually does before presenting your idea of it as fact.

                      My greater question is: why does this happen this much? Am I disproportionately seeing more false comments about Rust than most people, or is there a real issue here? In contrast, the opinions people voice about Go are founded on Go’s actual flaws. Lack of generics, error handling, versioning, et al. are mentioned, but when it comes to Rust, the argument shifts. Rust has flaws, and they are discussed, but there is quite a lot of misrepresentation, IMO.

                      1. 14

                        It seems like a fairly normal human reaction, I think. People have invested large portions of their life towards C++ and becoming important people in C++ spaces. In that group of people, most are deeply sensible geeks that have reasonable reactions to Rust. But there will be some that have their own egos tightly coupled with C++ and their place in the C++ community, that see the claims made by Rust people as some form of aggression - attacking the underpinning of their social status.

                        And.. when that happens, our brains are garbage. Suddenly the most rational person will say the most senseless things. We all do this, I think.. most of us anyway. Some are better than others at calming down before they find themselves with all the lizard brain anger organized on a slide deck, clicking through it on stage.

                        1. 2

                          While I love this explanation, I do want to point out the complexity and length of the list of actions one must do to build a misleading slide deck and speak on stage about it with absurd confidence.

                          1. 1

                            Hm, that might be true. I think this also happens to a lot of people attacking GraphQL; they do not want to accept an alternative to REST.

                          2. 6

                            I think these are different crowds: people who use Go instead of X vs. C/C++ people looking into Rust. Based on my very limited experience talking to C/C++ developers, they have this sort of Stockholm syndrome when it comes to programming languages and always try to defend the shortcomings of their favorite language. UB is fine because… Overflows are fine because… They do not see any value in Rust because their favorite language already has it all. I do not know that many Go developers, but the ones I know are familiar with the shortcomings of Go and do not try to downplay them. All of this is anecdotal and might not represent reality, but it’s one potential explanation of what you observed.

                          1. 16

                            Finding my will to live. Not in a serious sense, more as in it’s boring not finding interest in anything you used to do anymore.

                            1. 10

                              This might be a sign of developing depression… You should consider seeking help.

                              1. 7

                                Seconded – heck, even if you think things are going pretty good, if you can afford it, a check in with a therapist is great preventative maintenance. I know that since I started regularly seeing a therapist I have seen marked improvement in my experience of life. I’m happier, more productive, and more resilient to shit going haywire.

                              2. 5

                                Might be an opportunity to revisit really old hobbies that you let go before. Got back into some PC gaming the past couple of months after a couple of years away, and it feels good to be back.

                                1. 2

                                  I feel the same way between personal projects. For me, it always helps to pick up a good fiction book and just relax for a couple of days.

                                  1. 2

                                    dude rocx! go talk to someone buddy, believe me, it’s better to do it now rather than later on.

                                  1. 5

                                    This is really cool! I’m not sure the flag here is fair; this isn’t just “grep but for code”. I use grep on code every day and it works fine for the majority of use cases, and I wouldn’t use semgrep for those.

                                    It’s a semantic grep, and it’s very useful for security; the examples show exactly that. For instance: requests.get(..., verify=False, ...)

                                    Will flag every place in your codebase where someone is using SSL without verifying the CA. This would help an organization locate which data might be leaking externally, and also which internal services haven’t been properly signed or are missing certificates…

                                    With grep alone, it would take ages to cover all the forms they show: https://semgrep.live/jqn

                                    For the security use case, I’d even deal with the weird Docker image distribution.

                                    1. 8

                                      True, but the real argument is about unnecessary complexity. I’m reminded of the syscall comparison graph between Apache and IIS https://www.schrankmonster.de/2006/06/12/iis6-and-apache-syscall-graph/

                                      1. 3

                                        I’ll stick to my point: Accidental (unnecessary) complexity is just essential complexity that shows its age.

                                        It would be a mistake to consider the software independently from the people who have to write and maintain it. It doesn’t exist in a vacuum. The challenges in understanding the complexity and in people working around it (leaving behind messy artifacts and bits of code) are not really avoidable except in hindsight, once people tried things, shipped some, found mistakes, and course-corrected.

                                        At the time these things were thought to be necessary to accomplish a task within the constraints imposed on developers and their experience. It would be misguided to think that this can be avoided or prevented ahead of time. They leave traces and it’s only because of that work and experience gained over time that we can eventually consider the complexity to be unnecessary.

                                        1. 7

                                          are not really avoidable except in hindsight, once people tried things, shipped some, found mistakes, and course-corrected.

                                          This is either trivially true, or empirically false. Trivially true: If you replay history with the exact same set of starting conditions. Empirically false: Why can some teams produce software of greater simplicity and higher quality than others? Why can the same team, after training, or other changes, produce software of greater simplicity than they previously had?

                                          The argument you’re making feels both defeatist and responsibility-shirking.

                                          1. 2

                                            It’s actually a call for more responsibility. We need to care for it, train people, and see complexity as something to be managed, adapted to, and not just something we can hope to prevent or push outside of our purview.

                                            1. 7

                                              You explicitly argue that seeking simplicity is misguided.

                                              Another quote:

                                              A common trap we have in software design comes from focusing on how “simple” we find it to read and interpret a given piece of code. Focusing on simplicity is fraught with peril because complexity can’t be removed: it can just be shifted around.

                                              This simply isn’t true. It happens all the time that someone realizes that a piece of code – their own or someone else’s – can be rewritten in a drastically simpler way. A shift in point of view, an insight – they can change everything. The complexity you thought was there can just vanish.

                                              What I take issue with is that you seem to be explicitly arguing against this as a goal.

                                              1. 3

                                                The shift in point of view often is just people learning things. It’s knowledge moving from the world and into your head. To understand the code that uses that shift in perspective, you have to be able to teach that perspective. You shifted the complexity around. Picasso’s bulls get simpler and simpler, but only retain their meaning as long as you have the ability to mentally connect the minimal outlines with the more complete realization inside your head.

                                                It would be a mistake to think that because the code is simple, the complexity vanished. Look at books like “Pearls of Functional Algorithm Design”, or at most papers. The simplicity of some implementations (or even their use) is done by having people who are able to understand, implement, and ultimately handle the complexity inherent to problems and their solutions. People can pick up the book, see the short code, and never get why it works. But it is “simple”.

                                                You can’t take the software in isolation from the people working on it. Without them, no software gets written or maintained at this point.

                                                1. 8

                                                  You can’t take the software in isolation from the people working on it. Without them, no software gets written or maintained at this point.

                                                  You are saying this as if it’s a rebuttal to something I’ve said, when in fact I agree with it. Yet I still find the way you are framing the rest of the argument to be either counterproductive or factually wrong.

                                                  The shift in point of view often is just people learning things. It’s knowledge moving from the world and into your head.

                                                  Sure, I guess you can say it like that. I don’t know what to make of it until I see what you plan to do with this framing…

                                                  To understand the code that uses that shift in perspective, you have to be able to teach that perspective.

                                                  Sometimes. Sometimes not: Often you can refactor code in a way that is simpler and that the original author immediately sees, with no teaching. In fact, they may exclaim something like: “Ah, that’s what I meant! I knew I was making it too complex!” or “Ah, I’d forgotten about that trick!”

                                                  All of these are common possibilities for what happens when you make such a change:

                                                  1. The change has no cost. It’s simply free and better.
                                                  2. The code is simpler if you understand some new concept first. So there is a teaching cost. And perhaps a documentation cost, etc.
                                                  3. The code gets shorter and superficially simpler, but hasn’t actually improved. You’ve moved stuff around. 6 of one, half a dozen of the other.
                                                  4. The code gets shorter and superficially simpler, but has actually gotten more complex. What you thought was a win is in fact a loss.

                                                  It seems to me that your argument assumes we are always in case 2. Or rather: That case 1 isn’t possible.

                                                  Maybe my real question is: What do you want people to do with that argument?

                                                  What I want people to do with my argument is to learn more, press harder for simpler solutions, have humility about their own designs, realize that there’s very often a better solution than the one they’ve found, and that this remains true almost no matter how experienced you are. I think if you understand that and take it seriously, and are on a team where everyone else does too, good things will happen.

                                                  1. 0

                                                    It seems to me that your argument assumes we are always in case 2. Or rather: That case 1 isn’t possible.

                                                    I mean, there are surely cases where 1 is possible and the change has no cost, but that would usually happen when there is already enough complexity absorbed in program rules to deal with it (i.e. a linter that changes things for you), or by the people who do the work. I.e. knowing that 1+1+1+1 can be replaced by 4 is almost always a better thing and it’s so trivial even most compilers will do it without fear.

                                                    Case 2 is what I hint at with the assumption that you need to understand new concepts or new perspectives. I.e. if you go and gather statistics and find out that 95% of cases hit in a certain way, you can afford to re-structure things differently. It may not have changed much to the algorithm, but there’s a need for new awareness.

                                                    Cases 3 and 4 are more particularly interesting, and far more likely when you start working on systems at large. Changing an ID from an integer to a UUID should be self-contained (and help do things like shard data sets). But if an external service uses integer IDs to sort on them, you’ve potentially broken an unspecified contract (was the ordering ever intended, or only used accidentally?)

                                                    You can make assumptions that “you must be this tall to ride”, in which case all of these ‘complexity’ cases are assumed to be self-evident and to need little validation or maintenance. But that’s an assumption you make based on what? Probably personal experience, hiring standards, people self-selecting? That’s a tricky classification to make, and it can be very subjective. That’s why I’m thinking of defaulting to embracing the concept of complexity.

                                                    Sure you can make the case that everyone should be fine with this change, it is equivalent, you can demonstrate it to be the same, get someone to approve it, and move on. Or do it through an automated tool that has accepted these transformations as safe and worthwhile.

                                                    But an interesting aspect there is to always consider that there could be fallibility in what we change, in what we create, and that communication and sharing of information is a necessary part of managing complexity. If I assume a thing is category 1 as a change, but it turns out that in communicating it I show a new perspective that hits category 2 for a coworker (maybe I don’t know them that well!), aren’t we winning and improving? Knowledge management is part of coping with complexity, and that’s why I make such a point of tying people and what they know, and why I’m so hesitant to clearly delineate essential and accidental knowledge.

                                                    1. 3

                                                      Case 2 is what I hint at with the assumption that you need to understand new concepts or new perspectives. I.e. if you go and gather statistics and find out that 95% of cases hit in a certain way, you can afford to re-structure things differently. It may not have changed much to the algorithm, but there’s a need for new awareness.

                                                      Or you can realize that a problem has a better way of being posed. For example, a common interview problem is validating a binary search tree. You can do this bottom up, propagating the validity of the binary tree, but the book-keeping needed to make this approach work is hard and fragile, and you’re looking at probably about 50 lines of code to get it right.

                                                      Or you can think of placing the nodes of a tree on a number line, and the solution becomes 5 lines of code.

                                                      Neither conceptualization is complex, and the one that solves the problem better is probably simpler to most people; it just takes a bit of thought to find the right isomorphism.
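
                                                      A rough sketch of that second formulation (in Ruby, assuming numeric keys and no duplicates): every node has to fall strictly inside an interval inherited from its ancestors.

                                                      Node = Struct.new(:value, :left, :right)

                                                      # A node is valid if it lies between the bounds set by its ancestors,
                                                      # and both subtrees are valid under the correspondingly narrowed bounds.
                                                      def valid_bst?(node, low = -Float::INFINITY, high = Float::INFINITY)
                                                        return true if node.nil?
                                                        return false unless node.value > low && node.value < high
                                                        valid_bst?(node.left, low, node.value) && valid_bst?(node.right, node.value, high)
                                                      end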

                                                      I find that similar effects happen in large systems, where entire subsystems become unnecessary because of a good engineering choice at the right time. An example of this may be standardizing on one data format, instead of having different systems connected using protobuf, XML, JSON, CSV, thrift, and so on. This eliminates a huge amount of glue code, as well as the bugs that come when you need to deal with the representational mismatches between the various formats chosen.

                                                      I don’t buy your nihilism.

                                                      1. 0

                                                        So, once you’ve made that choice that saves a lot of complexity, how do you make sure the other engineers that follow after you work in line with it? How do you deal with the folks for whom JSON no longer works well because they want to transfer bytes and the overhead of base64ing them is too much?

                                                        You have to communicate and maintain that knowledge, and it needs to be actively kept alive for the benefits to keep going (or to be revisited when they don’t make sense anymore)

                                                        1. 3

                                                          So, once you’ve made that choice that saves a lot of complexity, how do you make sure the other engineers that follow after you work in line with it?

                                                          Are you seriously arguing that software can’t be maintained?

                                                          Honestly, the way you’re writing makes me worry that you need to talk to someone about burnout.

                                                          1. 1

                                                            No, it’s rhetorical. Again, the whole point is that complexity is fundamental and part of the maintenance and creation of software. The whole thing relates to the concepts behind complex adaptive systems, and again, most of the point is that the complexity doesn’t go away and remains essential to manage, which is more useful than ignoring it.

                                                            1. 2

                                                              If it’s fundamental and can’t be reduced, then why bother managing it? It’s going to be there no matter what you do.

                                          2. 3

                                            Accidental (unnecessary) complexity is just essential complexity that shows its age.

                                            Your description is true if you assume that the developer’s intention is 100% aligned with developing the software as well and as correctly as possible - which might be closer to the case on a project like the development of rebar3.

                                            But on a project like developing a boring CRUD system and being paid for it in a 9-5 - this is farther from reality.

                                            Developers also work within an economic framework, where certain skills are more valuable than others, and they try to include those on their résumés to obtain higher-paying, higher-status, or simply more interesting roles. They might also simply be bored, resent their managers and product managers, and not be entirely motivated to look for the simplest solution possible, when many times that solution works to the detriment of their own career.

                                            In these situations, engineers will bring unnecessary complexity into the system. That is not the same as the unavoidable fact that some code has mistakes and has to be adjusted, course-corrected, and fixed; I agree that is essential complexity.

                                            But there’s also complexity that goes beyond the software/problem at hand. It is still human, but it comes from the economic and social interactions of the people who develop the system. And I find that calling this essential is not right, because this type of economic relationship, this type of interaction, is not essential to software development!

                                        1. 58

                                          Hallelujah, suffering through this at work at the moment.

                                          To dispense with the obvious: for any richly interactive client application, like a movie editor or a videogame or a text editor or something, you probably should write that section as a thick client backed by APIs. Possibly a wrapping HTML page with embedded data populated from the session or the controller or whatever.

                                          But, 99.99% of sites aren’t doing that. They’re glorified CRUD apps, with maybe dozens of simultaneous users. Hell, @pushcx, what’s the max simultaneous users we get on Lobsters? Is your site bigger than Lobsters? HN?

                                          The advantages to rendering stuff server-side is:

                                          • You put humans first, and that’s always what ends up mattering.
                                          • You simplify (vastly) the distributed-systems problems inherent to web development. If it’s all server-side, the browser looks more and more like a document reader than a peered node with autonomy and hopes and dreams and the likelihood of fucking up.
                                          • You make caching orders of magnitude simpler, because again it’s just documents.
                                          • You make the asset pipeline orders of magnitude simpler, because again it’s just documents.
                                          • You make bugs easier to track down because, again, it’s just documents (sensing a pattern yet?).
                                          • It makes it slightly easier to search, get indexed (at least at one time, SPAs getting crawled properly was A Whole Thing compared to server-side rendering), and to become emails someday (though that’s also a disaster because of the miserable state of CSS in email).
                                          • Lots of client-side JS takes forever to compile compared with server-evaluated stuff, for whatever reason. Phoenix reliably updates near instantly while the React stuff with bells and whistles and TypeScript and all the other stuff chugs on my machine.
                                          • It encourages your engineers to do full-stack work and be able to debug things anywhere in the pipeline, because there’s not an arbitrary (and often fictional) separation in responsibilities.

                                          There are some downsides, sure:

                                          • There will be a tipping point on some pages as your designers opt for things like typeahead and sorting and whatnot where you’ll probably benefit from graduating to a React app or something.
                                          • There will occasionally be pathological bugs that really would be easier to hack around in JS.
                                          • You don’t get to blog about it as much.
                                          • If you really are roflscale (which usually you aren’t, let’s be honest with ourselves), you need to sort out your caching. Cloudflare and other CDNs do make this a lot easier, but even basic tweaking of nginx or haproxy or whatever your favorite solution is can get you remarkably far.
                                          • If you have a lot of archived content, some chucklefuck is going to crawl it, and that’s gonna make your server sad again unless you have caching.

                                          Overall, though, I think that starting with server-side rendering just makes so much more sense.

                                          1. 21

                                            Hell, @pushcx, what’s the max simultaneous users we get on Lobsters?

                                            We don’t really track this, but a quick cat production.log | grep " Request " | grep -o "2020-04-28T[0-9][0-9]:[0-9][0-9]" | uniq -c | sort -rn | head says

                                                472 2020-04-28T14:00
                                                459 2020-04-28T10:00
                                                388 2020-04-28T07:00
                                                382 2020-04-28T15:30
                                                372 2020-04-28T18:00
                                                367 2020-04-28T17:30
                                                367 2020-04-28T13:00
                                                358 2020-04-28T16:00
                                                355 2020-04-28T17:00
                                                348 2020-04-28T00:00
                                            

                                            So it looks like when bots fire and hit us on the hour and half-hour we peak at ~7.9 app requests per second (this does not include js/css/avatars/404s that nginx handles). Interesting finding. We’ve been briefly knocked offline by a careless bot that repeatedly requested an expensive page as fast as it could (IIRC ~19 rps). I assumed there were more bots that were slightly better behaved, and this analysis makes them quickly conspicuous: 77% of the top 100 minutes today end in 0 or 5, and they strongly dominate the top of the list, rather than the 20% and even distribution you’d see if traffic were randomly distributed, as humans likely are.

                                            As usual, happy to run variations if folks have them.

                                            1. 18

                                              It’s not just about the number of users though, it’s also about the type of users. The people who use Hacker News or lobste.rs are VERY different from the average user. For instance: this thing I’m using right now has Markdown and expects me to be able to write it in plain text. It has no WYSIWYG editor, it doesn’t keep drafts of my comment, if I reload this page this comment is gone, I need to refresh the page to see new updates, and I get no notification if someone answers me here.

                                              Yes, we can live without these things, but a huge part of the public actually wants functionality like this. And you keep adding it, and even simple “CRUD” apps become complicated systems with modern UIs.

                                              The bar for UI is way higher nowadays, and your competitor will have all these features you don’t have.

                                              Like… I use Vim, but if I tell my parents to use it instead of Word they will laugh at me; it’s the same reason why UI is getting so complicated.

                                              We can’t just look at this from the performance/back-end-focused point of view.

                                              1. 25

                                                The bar for UI is way higher nowadays, and your competitor will have all these features you don’t have.

                                                My competitor is just as likely as not to be sputtering around while their devs slapfight over whether to use React hooks or contexts or redux. You can move fast with a server-side rendered app.

                                                Some sites that have famously been gutted by their competitors for having boring server-side pages:

                                                • Craigslist
                                                • Hacker News
                                                • Stackoverflow
                                                • Wikipedia
                                                • 4chan

                                                In my experience if people get useful value from your site they won’t be turned off by rough design.

                                                1. 7

                                                  In my experience if people get useful value from your site they won’t be turned off by rough design.

                                                  On the contrary; “rough design” is actually a very positive feature.

                                                  1. 5

                                                    I think lots of users appreciate the simplicity of Wikipedia, Craigslist, etc. I know my wife preferred Craigslist to Facebook marketplace or Zillow when we were searching for an apartment earlier this year. She’s only 29.

                                                2. 12

                                                  These are all really trivial things to add on top of a static or server rendered page, though. I mean, we’re literally talking about 100 lines of mithril (ok, maybe slightly more), hitting the backend.

                                                  I recently noticed that Reddit has added a “continue this thread” link to many threads. Because they’re doing it through their frontend app, not only does it not just load the comment, it redirects you, and then when you try to go back to where you were, the basic back functionality of the browser is broken. For a comment that is multiple levels deep, you’re talking about what looks to the user like 3 or 4 new page reloads, but is really just whatever their frontend is doing. It’s baffling to me, especially considering that they were so ahead of the game in 2005 that they were doing stuff in lisp, and Huffman is the CEO.

                                                  So this frontend, which ideally is better for the user, is to me almost unusable. I checked out what kind of requests they are making, and it’s over 200, and the page isn’t even loaded after 30 seconds on a good desktop. The requests don’t actually ever stop; there is a “beacon” POST request every x seconds that looks like it is hitting ads. I can only assume they’re forcing all these redirects on the user because it allows more ad impressions.

                                                  Just feels like we’ve really lost our way. I don’t understand how Huffman is even okay with that.

                                              1. 15

                                                I respectfully disagree.

                                                You will need a backend anyway. It’ll need to expose data to users. Exposing it as HTML is no harder than doing it via JSON or GraphQL. The data to HTML transformation is still required with client-side rendering, so client-side saves no work, but imposes its own costs.

                                                Doing it this way means that you don’t have an API, meaning your service is neither discoverable nor scriptable. For smaller projects that may not matter, but chances are eventually you’re going to want to provide remote machine-accessible access to your service and then you’re going to need to build an API. Why not build the API first?

                                                Client-side apps are wads of Javascript that must be loaded and parsed before they can load and parse JSON, which they can then transform into HTML. Precisely nothing about this is optimal for browsers and networks. Many applications are used infrequently and browser caches are not magic (and I disagree with you about how important your app is), therefore arguing ‘each user only has to load it once’ is bullshit.

                                                Unless your app is something used frequently and for a decent period, with lots of low-latency interaction (forms do not count: think editing, iterating), it will certainly be faster server-side rendered.

                                                Client-side rendering offloads a lot of the work to the client, assuming caches are held well. An entire HTML page must be cached as a single unit. If you re-render the page every time something changes, caches become useless. If you send the rendering machinery (the templates, the JavaScript libraries, etc) once, you only need to send the changed data from there on out because the static assets are cached. Browser caches aren’t magic, sure, but they’re not useless either. They’re a well-understood technology.

                                                It is harmful for read-only, or read-mostly applications. Harmful to the implementors as it imposes unnecessary cost, harmful to users as it’s likely slower, less likely to use the web platform correctly and less accessible. Inappropriate use of client-side rendering is why to find out my electricity bill I have to wait ~10 seconds while a big React app loads and then hits up 5 REST endpoints.

                                                Those 5 REST endpoints are now machine-accessible, which facilitates testing and scriptable access. It’s not less likely to use the web platform correctly unless we get very pedantic, because without JavaScript frameworks on the client side we’re going to have to deal with rendering engine quirks on the server side. That big React app downloads once, and then you’re only grabbing the parts that change, and smooths over whether the client is IE or Edge or Firefox or Opera or Chrome or whatever.

                                                For very small, very infrequently used, very simple applications, server side rendering makes sense. In just about all other cases, having the web browser just be another client that hits a standard API is strongly preferable, IMHO.

                                                1. 27

                                                  Why not build the API first?

                                                  To me, this raises another question. Are you serving humans or computers? If you’re writing a website for humans to consume, I think it makes more sense to write their interface first and worry about the machine interface second.

                                                  1. 5

                                                    I would argue that you’re serving clients, which may be human or machine. There’s a translator for humans (the browser). It’s easier, IMHO, to build an API first and adapt it to browsers than to discover later that you need an API and retrofit it in.

                                                    1. 12

                                                      A more interesting question: what is the likelihood of that being needed? I’d hazard that 99.99% of companies out there have little or no need to expose a public API.

                                                      1. 1

                                                        I actually agree with you and @friendlysock on this. Security, reliability, maintainability, accessibility… many benefits from defaulting to HTML documents and server-side rendering.

                                                        That said, there’s one example worth considering that backed @lorddimwit’s point in an epic way.

                                                    2. 3

                                                      You are serving computers. Those computers will run a web browser, which is serving humans. Having a clean separation between the presentation layer and the business logic layer makes it easy to support multiple presentation layers, such as mobile apps, adapted interfaces for braille readers, and so on. Having this separation makes it easier to change the human interface quickly, which is essential when you’re developing something usable for humans.

                                                      The question is whether you put all of the presentation logic in the web browser or split it between the browser and a back-end pass. In both cases, you’re providing some tree-structured data to the web browser, which is then running a load of code to generate a UI. The only difference is whether you are providing tree-structured data to the web browser that is specifically tied to that specific UI or not. Where it makes the sense to put the split is a software engineering decision that varies between system.

                                                    3. 20

                                                      Doing it this way means that you don’t have an API, meaning your service is neither discoverable nor scriptable. For smaller projects that may not matter, but chances are eventually you’re going to want to provide remote machine-accessible access to your service and then you’re going to need to build an API. Why not build the API first?

                                                      This is a very good argument for splitting your app into “API” and “rendering” halves. If the whole app runs entirely through the JSON API, then you know that the JSON API is usable, because it’s being used.

                                                      This is not really an argument in favour of browser-side JavaScript; it’s an argument in favour of a microservice architecture where HTML rendering and business logic are split into two services, which I’m pretty sure is what GitHub is doing.

                                                      Client-side rendering offloads a lot of the work to the client, assuming caches are held well. An entire HTML page must be cached as a single unit. If you re-render the page every time something changes, caches become useless. If you send the rendering machinery (the templates, the JavaScript libraries, etc) once, you only need to send the changed data from there on out because the static assets are cached. Browser caches aren’t magic, sure, but they’re not useless either. They’re a well-understood technology.

                                                      It’s also a very hard performance gain to actually realize. If your API is made up of orthogonal resources that all go into a single page, then rendering a single page is going to result in multiple round-trips, which will cancel out all of your efficiency gains just from the latency. (this isn’t as big of a problem in microservice case, since the two services are probably in the same data center, and even if they’re not, they’re certainly not on LTE or anything)

                                                      There are workarounds, of course. Smart people, like the discourse.org team, will ship the JSON payload bundled into the initial HTML payload, so the interface isn’t so chatty any more. But now you’re not just serving a static HTML file any more; you’ve built a thin server-side “rendering” tier whose job it is to predict what the JavaScript is going to need before it asks for it. That logic is now duplicated into two spots: once in the app itself, and once in the “predictor” renderer.

                                                      You can also try to design your API to reduce round-trips, chipping into its orthogonality, or you can use a generic query system (like GraphQL), making the API harder to cache and requiring complicated policy tooling to avoid becoming a DoS vector. But both of these options still require two round trips at least: one to load the JavaScript app, and one to make the API call.

                                                      As for caching server-side rendered pages in pieces: there are “edge logic” solutions like CloudFlare Workers or Server-Side Includes that can solve your Big HTML Blob caching problem. The CloudFlare worker makes multiple HTTP requests on behalf of the user, caches the pieces, but serves a single HTML blob at the other end.

                                                      1. 5

                                                        The thing is, the two round trips you’re describing for GraphQL stay the only two round trips for a huge amount of functionality that you can now easily build into the front-end with decent modern tools, whereas you would need to split it across multiple pages in a server-side-rendered system.

                                                        The level of UX you can get with a modern framework like React doesn’t compare to what you can do easily with the old-school templating languages of the past. And the GitHub example you’ve used can serve as a great counter-argument, because they have 1) TERRIBLE uptime (their status page looks like a Pez dispenser) and 2) a low level of interactivity by modern standards.

                                                        1. 9

                                                          The thing is, the two round trips you’re describing for GraphQL stay the only two round trips for a huge amount of functionality that you can now easily build into the front-end with decent modern tools, whereas you would need to split it across multiple pages in a server-side-rendered system.

                                                          Initial load time is your first impression, and first impressions matter.

                                                          But let’s instead talk about Discourse, my favourite SPA by far. That app is complicated. They use a setup with multiple bundles to be able to render the page without downloading all of the app’s JavaScript (look for all the -bundle.js files). They hack around with OS-specific workarounds to avoid spending minutes at a time on rendering or actual OS-specific bugs. They reimplement parts of their underlying framework (Ember) for performance reasons. Because that’s what you have to do if you want a client-side JavaScript app to run well on a potato phone.

                                                          I’m not even saying that they made a bad choice. Infinite scrolling is kind of an unavoidable part of their app’s design goal of letting people just read with as few impediments as physically possible. But it’s hacky, complicated, and it the initial load time is still a lot worse than a site like Lobsters.

                                                          The level of UX you can get with a modern framework like React doesn’t compare to what you can do easily with the old-school templating languages of the past.

                                                          What you’re saying is true. It also doesn’t really matter most of the time; it’s not that hard to add React components to an otherwise server-side-rendered app. It’s what Medium, for example, does.

                                                          And the GitHub example you’ve used can serve as a great counter-argument, because they have 1) TERRIBLE uptime (their status page looks like a Pez dispenser) and 2) a low level of interactivity by modern standards.

                                                          So? It’s not their UI layer that’s having uptime problems. It’s GitHub Actions, Webhooks, and their MySQL cluster that dominate their downtime retrospectives. Distributed databases are an unsolved problem.

                                                          GitLab uses client-side rendering, and their problems are almost identical. Maintenance work on their database (they use PostgreSQL), downtime in GitLab CI, and trouble delivering webhooks.

                                                        2. 4

                                                          This is not really an argument in favour of browser-side JavaScript; it’s an argument in favour of a microservice architecture

                                                          You don’t need microservices if you have a proper module system.

                                                          1. 1

                                                            I’ve heard advice to use modules instead of microservices before, but never seen an example of it. So here’s my attempt at working out an example, which might also help anyone who isn’t clear on what that advice is supposed to mean.

                                                            Given an API defined like this (in JavaScript-like pseudocode):

                                                            api/products.js
                                                            function getProduct(id) {
                                                                // talk to database, return JSON
                                                                // might throw error if database fails or there is a bug in this code
                                                            }
                                                            
                                                            defineEndpoint('products/:id', getProduct)
                                                            

                                                            The suggestion is to turn this HTML rendering code, which uses a call over the local network to load the API response:

                                                            networking/index.js
                                                            function loadApiEndpoint(endpoint) {
                                                                return fetch('http://localhost:5678/' + endpoint).then(response => response.json())
                                                            }
                                                            
                                                            rendering/products.js
                                                            import {loadApiEndpoint} from '../networking'
                                                            
                                                            async function renderProductPage(id) {
                                                                try {
                                                                    const productInfo = await loadApiEndpoint(`products/${id}`)
                                                                    return `<h1>${productInfo.name}</h1>`
                                                                } catch {
                                                                    console.error('network error, database failure, or bug in API code')
                                                                }
                                                            }
                                                            

                                                            Into this code, which uses the language module system to find the relevant API code and get its reponse:

                                                            rendering/products.js
                                                            import {getProduct} from '../api/products'
                                                            
                                                            async function renderProductPage(id) {
                                                                try {
                                                                    const productInfo = await getProduct(id)
                                                                    return `<h1>${productInfo.name}</h1>`
                                                                } catch {
                                                                    console.error('database failure, or bug in API code')
                                                                }
                                                            }
                                                            

                                                            Is that what you meant by your advice?

                                                            1. 3

                                                              A proper module system is one that allows you to trust that there is isolation between components. This lets each team write their own module without breaking each other’s.

                                                              I don’t think one exists yet; it would need to segregate / apply quotas to memory allocations, network traffic, and CPU use, so that teams could not break each other’s stuff.

                                                        3. 15

                                                          Most apps don’t need an API, or if they do, it’s easy enough to just add a REST API in most cases. It’s not like you can use the REST API for your frontend app anyway if you want decent performance. GraphQL is kinda designed to solve these problems, but that comes with its own set of trade-offs and problems.

                                                          For very small, very infrequently used, very simple applications, server side rendering makes sense. In just about all other cases, having the web browser just be another client that hits a standard API is strongly preferable, IMHO.

                                                          The thing is that the browser isn’t really “just another client”; programming a script to count my contributions on GitHub is something very different than building a functional and snappy UI, and has rather different requirements.

                                                          “Write an API once, use it everywhere” is one of those things that sounds fantastic, but I’ve never seen it work well anywhere. You can often smell apps that are built on these kind of APIs, because you’re waiting forever for data to load (Stripe, SendGrid).

                                                          It’s not like I don’t see the value in frontend apps btw, but there are pros and cons to both approaches, and when done well both work well. One of the downsides of frontend apps is that it’s quite hard to do them well, which tends to be easier on the backend.

                                                          Relegating all backend apps to just “very small, very infrequently used, very simple applications” seems rather lacking in nuance. Clearly there are many backend apps that work quite well.

                                                          tl;dr: there is no silver bullet.


                                                          Some other tidbits:

                                                          If you re-render the page every time something changes, caches become useless

                                                          Most “server side apps” have some dynamic content. E.g. if you click a button somewhere it loads the (server-generated) partial and replaces/inserts the content. An example is Lobsters where if you click “post” it just sends an XHR request and inserts your comment, instead of doing a full page refresh. “Pure” server-side apps are pretty rare these days.
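
                                                          For illustration, here is a minimal sketch of that pattern in plain JavaScript; the /comments endpoint and the #comments element are hypothetical, not Lobsters’ actual markup:

                                                          // Post the comment form in the background and insert the
                                                          // server-rendered HTML fragment, instead of reloading the page.
                                                          async function postComment(form) {
                                                              const response = await fetch('/comments', {
                                                                  method: 'POST',
                                                                  body: new FormData(form),
                                                              })
                                                              // The server answers with an HTML fragment, not JSON.
                                                              const partial = await response.text()
                                                              document.querySelector('#comments').insertAdjacentHTML('beforeend', partial)
                                                          }

                                                          The server keeps rendering the HTML; the client only decides where to put it.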

                                                          You can also use ESI to cache parts of a page; e.g. Varnish can do that.

                                                          Inappropriate use of client-side rendering is why to find out my electricity bill I have to wait ~10 seconds while a big React app loads and then hits up 5 REST endpoints. Those 5 REST endpoints are now machine-accessible, which facilitates testing and scriptable access.

                                                          I still have to wait 10 seconds for a simple operation. It’s not good UX, no matter the reasons. I think this is a classic example of an engineer pointing out there are good technical reasons for something, while ignoring that users are gnashing their teeth and dreading your app because it’s so horribly, frustratingly slow to use. That’s not solving problems, it’s creating them.

                                                          I’ll bet that these endpoints are undocumented and not intended to be used directly by you, so the usefulness of them as an “API” is rather limited.

                                                          It’s not like you can’t test server-side apps; if anything it’s easier.

                                                          without JavaScript frameworks on the client side we’re going to have to deal with rendering engine quirks on the server side.

                                                          It’s pretty rare that I run into this; and if I do, they’re small visual issues. “The browser” is a huge unknown platform influenced by many factors (OS, settings, extensions, etc.) which are hard to reason about. A weird bug in your backend can be hard to solve; a weird bug on the frontend can be almost impossible to solve if you’re having trouble reproducing it on your machine (which can be hard!)

                                                          1. 2

                                                            GraphQL is kinda designed to solve these problems, but that comes with its own set of trade-offs and problems.

                                                            Can you say a few words about the problems/trade-offs you see with GraphQL? When would you use it over a RESTful API?

                                                            1. 2

                                                              The advantage of GraphQL is that it’s essentially an interface to your database, allowing people to get a lot of data in one go. The disadvantage of GraphQL is that it’s essentially an interface to your database :-)

                                                              You really don’t want to allow people to query unlimited amounts of data, so you need to be careful in what you allow and disallow, leading to all sorts of complexity. GraphQL is pretty neat for certain cases, especially large APIs (e.g. GitHub), but for a lot of cases just adding a few JSON endpoints which combine data (e.g. “get customer and tickets”) works just as well.
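
                                                              To make the comparison concrete, a rough sketch of the two approaches; the endpoint paths and field names are invented for illustration, not taken from any real API:

                                                              // Assumes an async context or ES module (for top-level await).

                                                              // Option 1: a purpose-built JSON endpoint that combines the data server-side.
                                                              const combined = await fetch('/api/customers/42?include=tickets')
                                                                  .then(res => res.json())

                                                              // Option 2: a GraphQL query, where the client decides what to fetch and
                                                              // the server has to guard against overly broad or deep queries.
                                                              const query = `{
                                                                  customer(id: 42) {
                                                                      name
                                                                      tickets { id subject status }
                                                                  }
                                                              }`
                                                              const viaGraphQL = await fetch('/graphql', {
                                                                  method: 'POST',
                                                                  headers: { 'Content-Type': 'application/json' },
                                                                  body: JSON.stringify({ query }),
                                                              }).then(res => res.json())

                                                              The first option fixes the shape of the response on the server; the second pushes that decision to the client, which is exactly where the extra complexity comes from.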

                                                              Disclaimer: I’m not an expert on GraphQL; I only used it once. There is probably much more to be said about this.

                                                          2. 1

                                                            If you’re using a good server-side framework/toolkit/library/box-of-code/whatever-name-you-like, a lot of the same routes and logic that present an HTML-formatted view can return a JSON-formatted view, or an XML-formatted view, or whatever other serialisation tech you prefer, using literally the same business logic and maybe a little bit of per-serialiser code.
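
                                                            As a rough sketch of that idea, this is what it could look like with Express’s content negotiation; getCustomer is a hypothetical stand-in for whatever shared business logic the route uses:

                                                            const express = require('express')
                                                            const app = express()

                                                            // getCustomer is a hypothetical stand-in for the shared business logic.
                                                            const getCustomer = async (id) => ({ id, name: 'Example Customer' })

                                                            app.get('/customers/:id', async (req, res) => {
                                                                // Same business logic regardless of the requested format.
                                                                const customer = await getCustomer(req.params.id)

                                                                // A little per-serialiser code picks the representation,
                                                                // based on the client's Accept header.
                                                                res.format({
                                                                    'text/html': () => res.send(`<h1>${customer.name}</h1>`),
                                                                    'application/json': () => res.json(customer),
                                                                    default: () => res.status(406).send('Not Acceptable'),
                                                                })
                                                            })

                                                            app.listen(3000)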

                                                          1. 4

                                                            I like having more space for discussion, and wouldn’t like to see these discouraged entirely; we could have some recurring posts, similar to “what you’re doing this week”, with some of these questions. For instance: “Describe a tool you use and love”, “share something you wrote this week”.

                                                            “what’s your distro” was a very open question that could have resulted in people just answering “linux”, but it resulted in deep and interesting conversations, so it’s nice to have this type of space to foster dialogue. The low bar for entry might make people feel more comfortable engaging with lobste.rs as a whole.

                                                            1. 9

                                                              One of the issues I’ve seen with this is that people that are in favor of better work/life balance may not also be in favor of using their work time effectively. A team may accumulate a work debt and then have to trash work/life balance in order to pay it off–but efforts to just work more efficiently and aggressively before things get to that point are often dismissed because of things like this manifesto.

                                                              1. 7

                                                                I think our industry is notorious for overworking young people and causing burnout. I always thought one reason to insist on sane working hours is that you can then expect more features per engineer per week. I think consistently delivering and maintaining good boundaries is, if anything, a mark of professionalism.

                                                                1. 5

                                                                  The pressure always comes from business needs. In any given scenario, who is to decide that the problem is workers being too slow, rather than managers setting unrealistic deadlines and applying too much pressure?

                                                                  1. 21

                                                                    As an executive in a publicly listed company, I can tell you that if one of my managers tells me the reason they missed a target is that their workers are “too slow”, I will fire the manager.

                                                                    That is literally the only reason I hired a manager in the first place, and if they can’t take responsibility for the decisions they made to whip/trash work/life balance or set “unrealistic deadlines”, then they are not doing the job I hired them for and I will find someone else.

                                                                    Reading HN, Reddit, and even Lobsters, it seems like working for some real shitty managers is common. I have no idea why people work for shitty managers. Maybe they have low self-esteem, or maybe they have so little confidence in their employability that they feel “lucky” to have a job where they’re treated like shit. Maybe they think this is normal and/or necessary. It’s not necessary.

                                                                    1. 7

                                                                      A lot of this is normalized by the cultural spaces in which these jobs exist. “Startup culture” and “corporate culture” are dogwhistles for “ways to convince workers to work more for less”. It’s easier to pay a couple of HR people to whip the workers and maintain the ranks than to grow technical teams, so endless flows of bullshit are employed, at both a company level and a systemic level, to keep workers docile and obedient until they burn out and leave for their next company, believing they are free and independent professionals sticking it to the man by resigning and finding a new job, while in fact they are just improving a company’s ability to operate with high turnover.

                                                                      1. 2

                                                                        A lot of this is normalized by the cultural spaces in which these jobs exist.

                                                                        Reminds me of this article by Dan Luu on normalization of deviance:

                                                                        https://danluu.com/wat/

                                                                      2. 6

                                                                        I have no idea why people work for shitty managers.

                                                                        A lot of areas have mostly jobs with shitty managers. It’s hard to get good jobs. For many, it’s hard to get jobs period. It also seems like it’s easy for shitty managers to get their jobs in such situations.

                                                                        1. 7

                                                                          Also, statistics: Good managers keep their happy team while shitty managers have more churn (leaving, burnout, fired, etc). Thus, even if there are more good than bad managers, open positions are more likely under shitty managers.

                                                                          1. 2

                                                                            Strictly speaking, it depends on the parameters you choose: what proportion of managers are good vs. bad, and what level of hiring there is for each class. The opposite conclusion is quite possible while keeping the inequality “more good than bad managers”.

                                                                        2. 2

                                                                          It’s almost impossible to keep having only good managers. Even when I was lucky enough to have one, they eventually resigned or moved to another position, and I got a shitty manager instead.

                                                                          I can’t stand abusive managers, but incompetent ones are almost impossible to avoid in this business where people get promoted into management because they know a programming language well.

                                                                      3. 2

                                                                        One of the issues I’ve seen with this is that people that are in favor of better work/life balance may not also be in favor of using their work time effectively.

                                                                        They may not be, but then again they may be. The flip side is that overworking leads to poor efficiency, leading to overworking, leading to all kinds of complications.

                                                                        A team will always accumulate technical debt; I haven’t seen one that doesn’t. Working less doesn’t mean working less aggressively. The manifesto doesn’t even hint at that. It’s advocating for better work/life balance, not passive work. It even says, “to us it is just a job, but we still do it well”.

                                                                        Sorry, but I see this remark as a very casual counterpoint that is polarizing.

                                                                        1. 4

                                                                          I mentioned work debt, not technical debt–the key difference (which I failed to articulate at the time) being that while technical debt matters to the engineers, work debt matters to the business. Not having a DRY admin page codebase is technical debt; not having an admin page at all is work debt. Lots of engineers in my experience are woefully oblivious to work debt.

                                                                          Working aggressively is about getting the same amount of work done in a shorter amount of time, an analogy being the mechanical definition of power (work over time taken). The problem is that people elect to work less (thus decreasing their work power), and then the work debt comes due and suddenly it’s time to gun the engineering engine, and either the work doesn’t get done (and everybody gets canned and replaced with engineers who are better about how they spend their time) or the work gets done at the cost of additional stress for the devs.

                                                                          Basically, the failure mode I’ve seen is:

                                                                          • Team commits to “work/life balance”.
                                                                          • Everything is fine, team is trundling along.
                                                                          • Some business demand comes up which increases the work debt.
                                                                          • At this point, there is usually a window where working (for analogy) an extra 30 minutes a day (not even outside work hours…just doing 30 more minutes of paying down work debt instead of browsing the net or refactoring things to make them match dependabot’s complaints or whatever) would pay off the work debt and give slack back to the team.
                                                                          • However, some or all of the team members (for a variety of reasons) refuse to do this.
                                                                          • Deadline for the work debt draws nearer.
                                                                          • Team must rush to meet work debt (or heads roll).
                                                                          • Team is unhappy, blames it on work/life balance–when
                                                                          • Repeat.

                                                                          For all the light and ink that has been spent on the problem of burnout in tech, we definitely (due to cultural and historical reasons) seem oddly unable to address the (real) problem of malingering.

                                                                          1. 2

                                                                            I still feel like your argument is unclear. You are saying choosing better work/life balance could lead to ineffective work (browsing the net or refactoring as opposed to making that admin page). I find it a bit hard to believe that this happens because people believe in the 501-manifesto (I could be wrong). It’s more of an example of mis-prioritization of tasks.

                                                                            I’d rather have this scenario, where someone or a group gets burned because they chose to prioritize their tasks badly, than one where everybody overworks blindly.

                                                                            1. 2

                                                                              My arguments/positions:

                                                                              • Work/life balance is sometimes used as a cover for malingerers.
                                                                              • Work/life balance is sometimes used to resist efforts to work more efficiently.
                                                                              • We as an industry focus on tech debt and work/life balance and ignore the very real issue of not clearing work debt, which in turn makes life harder for us.

                                                                              EDIT: Clarified “real” to be “very real”, so as to not sound dismissive towards tech debt and balance.

                                                                            2. 2

                                                                              Isn’t the idea of Agile / Scrum / whatever to be able to account for new business needs (“work debt”)?

                                                                              In any case, it’s not the team’s responsibility to make sure work debt doesn’t accumulate; it’s management’s. Committing to “501” works both ways: you work your 40 hours a week, but you give all that time to the company.

                                                                              1. 1

                                                                                Work debt has an inflow (management) and an outflow (engineering). If engineering isn’t clearing out work, the normal pace of inflow will cause accumulation.

                                                                                We can certainly complain about management dumping a gigantic pile of work and jamming the pipes, but I see almost no attention being paid to “hey, engineering, y’all need to work faster.”

                                                                        1. 12

                                                                          Thanks for this link, it reminds me how entrenched into work I can be sometimes.

                                                                          That being said, I think the mention about open-source contributions is a bit different. I feel that when I’m contributing, I’m doing something like helping a (my?) community. Exactly like I would help my neighbor fix his TV, or help him cut down a tree in his garden, etc… pushing code to a project is helping communities.

                                                                          1. 1

                                                                            that’s true, but I think that’s why the manifesto makes it clear they respect people who contribute to open source/write technical blogs/etc.

                                                                            1. 19

                                                                              It’s the followup paragraph which kind of reverses all that:

                                                                              We recognize that your willingness to allow your employment to penetrate deeply into your personal life means that you will inevitably become our supervisor. We’re cool with this.

                                                                              Doesn’t sound like respect to me and conflates open source contributions with employment somehow. I think it goes a bit off the rails there. It starts nicely enough but then it just seems to get lost in feeling bitter about people who like to do certain things with their free time, despite their claiming ‘we’re cool with that’ – then why bring it up at all? This part just doesn’t seem relevant to the overall point to me.

                                                                              1. 4

                                                                                fair, it does get unnecessarily mean to people who might have coding as their hobby.

                                                                                1. 1

                                                                                  I agree. The vast majority of them are victims of the employability dogma (aka the rat race), not perpetrators.

                                                                                2. 2

                                                                                  This didn’t sound bitter to me, but the opposite. People who do more coding are just naturally going to be better at coding. Rather than complaining that it’s unfair to the people who did less coding outside work time (and I have certainly heard that argument), the manifesto says yes, we get it, and we’re OK with that trade-off.

                                                                                  1. 1

                                                                                    I agree that’s what it said. I don’t know that it’s true, though. It depends on what kind and quality of code they’re writing in that extra time. Writing more bad or mediocre code at home that’s totally unrelated to what’s done at work won’t make them better than coworkers. Also, if they start out good but keep getting tired, the quality might go down, and however they manage that tends to become consistent at other times too.

                                                                                  2. 1

                                                                                    I don’t think that this line:

                                                                                    We recognize that your willingness to allow your employment to penetrate deeply into your personal life means that you will inevitably become our supervisor. We’re cool with this.

                                                                                    Is related to paragraph and bullet point list above. I think this line says that if you work overtime, you will probably get promoted faster and become supervisor of a 501 dev.

                                                                                    This is my interpretation.

                                                                              1. 22

                                                                                Animal crossing is our only hope

                                                                                1. 9

                                                                                  But here is where the problem lies, it’s surprisingly rare that I find myself editing only one file at a time.

                                                                                  And that’s why Emacs exists. It even has a best-in-class set of vim keybindings. And it has a wonderful extension language. It’s a Lisp Machine for the 21st century!

                                                                                  1. 15

                                                                                    That just means he needs to learn more Vim. It does indeed support tabs, and splits too, and IMO does it better than most IDEs. And you can get a file tree pretty easily too with NERDTree. I have no issues with editing a bunch of files at once in Vim. Features like being able to yank into multiple registers are much more of a help there.

                                                                                    1. 8

                                                                                      I suspect one problem people have with vim and editing multiple files is that they only know about buffers, which can be a little tricky to work with, but I don’t think many people realise it does actually have tabs too.

                                                                                      I frequently open multiple files with vim -p file1 file2 which opens each file in a tab, and you can use gt, gT, or <number>gt to navigate them. There’s also :tabe[dit] in existing sessions to open a file in a new tab.

                                                                                      I generally find this pretty easy to work with and not much harder than doing something like Cmd-[ / Cmd-] as in some IDEs.

                                                                                      1. 3

                                                                                        There is some debate on whether tabs should be used this way in Vim. I used to do it this way and then I installed LustyJuggler and edited all my files without tabs.

                                                                                        But if it works for you, more power to you!

                                                                                        1. 3

                                                                                          As a sort-of an IDE power user, I would argue that editor tabs are a counter-productive feature: https://hadihariri.com/2014/06/24/no-tabs-in-intellij-idea/.

                                                                                          1. 2

                                                                                            That said, while you can work with tabs like this, that’s not entirely the idea of them. Tabs in Vim are more like “window-split workspaces” where you can keep your windows in some order that you like. With one buffer in one window per tab you do get a pretty similar workflow to tabs in other editors, but then again you could get a similar workflow with multiple buffers in one window even before Vim got tabs.

                                                                                            Guess tabs fall in the tradition of vim naming features differently than one would imagine: buffers are what people usually understand as files, windows are more like what other editors call splits, and tabs are more like workspaces.

                                                                                            1. 4

                                                                                              Oh, to be clear I don’t have one buffer per tab. I do tend to use them more like workspaces as you say. Typically each of my tabs has multiple splits or vsplits and I’ll go back and forth between buffers - but having tabs to flip back and forth between semi-related things can be useful on a smaller screen too.

                                                                                          2. 3

                                                                                            One of the reasons why I love vim is that I find it a lot easier to edit multiple files at once. I can open them with :vsp and :sp and shift-ctrl-* my way around them very fast; with NERDTree I can open a directory in any of these windows, locate the file, and there you go, I have them arranged in whatever way I want. It makes it super easy to read multiple files and copy things around. I like my auto-complete simple, I find autocomplete distracting, so I just use ctrl-n, but I’m aware this is a fringe opinion; if you want a more feature-rich autocomplete, YouCompleteMe works pretty well for people who like that. Also, I can open a terminal with :terminal… My vim usually looks more like this https://i.redd.it/890v8sr4kz211.png, nothing to do with one file at a time.

                                                                                            Does vim make me much faster than everyone else? Probably not, it’s just a text editor, but it’s very malleable, and I’ve been coding for many years now and have never seen the need to stop using it. When I need to work on a new language I just install a bunch of new plugins and things get solved.

                                                                                            Nothing against VS Code, but it’s also only super powerful with the right plugins and configurations, there’s more of it out of the box, but without them it would also just be a simple text editor.

                                                                                            1. 2

                                                                                              What’s a good resource to start learning emacs?

                                                                                                1. 1

                                                                                                  I gave emacs a try through mg(1), a small editor that resembles emacs a lot (no Lisp though) and is part of the default OpenBSD system. It comes with a tutorial file that takes you on a tour of the basics of emacs, starting with navigation. This is like vimtutor, and it’s great!

                                                                                                  It also lets you discover emacs as a text editor, AND NOTHING MORE! Which is refreshing and helped me reconsider its usefulness for editing text 😉

                                                                                              1. 9

                                                                                                Ask yourself “is this my opinion?” – This is so important. Engineers who keep making comments based on unproven opinions can slow down the whole project and bring no benefit.

                                                                                                1. 2

                                                                                                  Yes and no. Sometimes something is an opinion but ultimately you should still pick A or B and not a mixture of the two. Is there any actual evidence that brace position affects correctness or readability? No. But inconsistent brace position probably does, and while it might be based on someone’s opinion, picking one or the other way of putting in curly braces needs to be done.

                                                                                                  What really frustrates me is that most code review systems (including the GitHub pull request model) have poor support for ‘this looks good I’ll just clean up the style issues with it before I merge it’. The effort to just go and fix the issues immediately is so much lower than marking the style issues, asking for them to be fixed, the person fixing them, them pushing again, me pointing out one they missed because I didn’t spend half an hour going through and marking every little instance, them doing it (probably annoyed themselves by now), them pushing, me finally approving and merging.

                                                                                                  1. 1

                                                                                                    But things like brace position consistency should be part of the agreed-upon style guide. And it’s fine to point out that code is breaking the style guide.

                                                                                                    And while the rules of the style guide are arbitrary, they are there simply so everyone agrees on a common pattern.

                                                                                                    And this is covered in the article: linters and code style guides are super important to avoid this type of conversation. But when someone keeps arguing that a class is more readable because they like organizing methods in some other way that the team has not agreed on as a standard, or that they think some pattern is preferable to another, without concrete arguments beyond “I find it simpler”, that’s when it boils down to personal style. Trying to deny a PR simply because that’s not how the reviewer would have written the code is very problematic and self-centered. Which is not what you’re referring to, I think; preferring consistency is necessary, and it should be agreed with everyone in a living document.

                                                                                                1. -1

                                                                                                  Discord is also full of white supremacists, and a lot of people would rather not use it because of that.

                                                                                                  1. 9

                                                                                                    Do you really believe that? Do you really buy into the neo-nazi hysteria that much?

                                                                                                    The same could be said about most platforms out there.

                                                                                                    1. 4

                                                                                                      There is SOME extremist activity on most platforms, but some, like Reddit and Discord, have more of it and have been used for these groups’ organizing efforts. Security analysts and OSINT people keep an eye on Discord, because there is a higher amount of relevant extremist activity there than on most open platforms.

                                                                                                      1. 4

                                                                                                        Is that more than their size would indicate? I’d assume large platforms like reddit and discord to have a lot of… well, everything. I certainly haven’t noticed it being worse than facebook or twitter.

                                                                                                        1. 1

                                                                                                          Is Discord centrally moderated like Reddit is?

                                                                                                          1. 3

                                                                                                            Reddit is the closest analogy, yes. There are local moderators but the platform is the ultimate arbiter.