1.  

I think low level. If you are just the slightest bit serious about becoming a programmer, there is no possible way to skip the fundamentals, which are very much embodied in lower level languages. There’s no point in starting from the end, only to be required repeatedly to dig deeper in an unstructured and random manner.

    Finding an interesting project is important, and if higher level languages are better at this, then maybe learn them in parallel. Or go through the fundamentals as quickly as possible so you can jump on the engaging project.

    1. 15

      Packaging is usually people’s #1 problem with the Python ecosystem. I do empathise with people struggling, but when I read the specifics of the problems people have, they often seem to be doing what I would consider “ambitious” things, like mixing package managers (conda and pip in one example in TFA), trying to do data science in a musl libc docker container, or trying to take their app to prod without packaging it up.

      In Python’s case I think the packaging system is solid overall and does work, though there are beartraps that unfortunately aren’t labelled well. That said, I think poetry does provide the right UX “affordances” and should be the default choice for many.

      1. 14

        My latest problem with Python packaging was with setting up JupyterHub at $JOB. Here is what happened:

        • building a notebook docker image FROM jupyter/scipy-notebook
        • RUN conda update
        • environment completely broken
        • change conda update to only update the specific package we need a bug fix for
        • environment completely broken
        • ADD fix_a_specific_bug_in_that_specific_package_myself.diff /tmp/patch.diff
        • WORKDIR /opt/conda
        • RUN patch -p3 < /tmp/patch.diff
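
        Condensed into one place, the workaround looked roughly like this (a sketch; the patch filename is the placeholder from the list above, not a real artifact):

        FROM jupyter/scipy-notebook
        # conda update kept breaking the environment, so patch the package in place instead
        ADD fix_a_specific_bug_in_that_specific_package_myself.diff /tmp/patch.diff
        WORKDIR /opt/conda
        RUN patch -p3 < /tmp/patch.diff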

        I did nothing ambitious, unless updating packages is “ambitious” in Python. The entire packaging ecosystem seems like a huge dumpster fire. Why are there so many different package managers? I can only imagine because the authors of each package manager thought all the other ones suck too much to bother using, which seems completely true to me.

        Ruby just has gem. Rust just has cargo. They both just work. Python has really dropped the ball here.

        1. 6

          Have you tried just using pip? pip install jupyterlab just works for me.

          To be honest I see a lot of people complain about conda, which makes me suspect it doesn’t work. I have gotten a lot of mileage out of pip and python -m venv (and even then, if you’re in docker you don’t need to mess with virtual envs). (Though maybe you’re on Windows?)
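
          Concretely, the boring workflow I mean is just this (a minimal sketch):

          python -m venv .venv        # create an isolated environment
          . .venv/bin/activate
          pip install jupyterlab      # just works, in my experience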

          1. 2

            The official Docker image we based ours on uses Conda, and mixing Conda and pip is asking for even more pain. And I was under the impression Conda has packages that pip doesn’t, but maybe that’s not true anymore.

          2. 4

            At $WORK, for the past decade+ our policy was: whatever our OS packages for dependencies, only. If we need more than what our OS (typically Ubuntu LTS) packages for Python dependencies, then the only option is to import the package into the tree as a direct dependency, and we now OWN that code. This is the only sane way to do it, as the Python packaging stuff after many decades is still mostly broken and not worth trying to fight with.

            We’ve since started moving to playing with Nix and Docker containers, which mostly solve the problem a different way, but it’s still 90% saner than whatever Python packaging people keep spouting. Note, the technical issues are basically 100% solved; it’s all community buy-in and miserable work, nothing anyone wants to do, which is why we are stuck with Python packaging continually being a near complete disaster.

            Maybe the PSF will decide it’s a real problem eventually, hire a few community organizers full time for a decade and a developer (maybe 2, but come on, it’s not really a technical problem anymore) and solve it for real. I’m not holding my breath, nor am I volunteering for the job.

            1. 2

              My solution has been the opposite: do not rely on brew or apt for anything. Their release strategies (like “oh we’ll just upgrade python from 3.8 to 3.9 for you”) just don’t work, and cause pain. This has solved so much for me (and means I’m not breaking the system Python when doing “weird” stuff). Python.org has installers, after all!

              Granted, I’m not on Windows but everything just works for me and I think it’s partly cuz package maintainers use the system in a similar way.

              I kinda think Python needs a “don’t use anything from dist-packages and fully-qualify the Python version everywhere” mode, to prevent OS updates from breaking everything else.
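
              You can get part of the way there today with existing knobs, though it isn’t a real mode; a sketch (the script name is hypothetical):

              # -I ignores PYTHONPATH and the user site directory; -S skips the site module
              # entirely, so dist-packages never lands on sys.path
              python3.9 -I -S myscript.py

              # and a venv pinned to an explicit interpreter, not whatever /usr/bin/python3 is today
              python3.9 -m venv --copies .venv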

              1. 1

                There is def. work involved to “port” our software to the next LTS branch, but it’s usually not miserable work. Plus we only have to do it every few years. The good thing is, by the time the python package makes it into a .deb in -stable branch the python code is also pretty stable, so upgrades and bugfixes are usually pretty painless.

            2. 1

              Yes I’m afraid I would consider blithely updating lots of software to the latest version very ambitious. Only a loon would update a number of different pieces of software at once with the serious expectation that everything would continue working, surely you can’t mean that?

              I can’t speak to other package managers in general except to contradict you on rubygems. Last time I was running ruby in prod we had to increase the VM size just to run rubygems in order to run puppet. Our rubygems runs were unreliable (transient failures à la npm) and this was considered common. Perhaps this is an out of date view; it was some years ago.

              1. 15

                That’s insane. apt-get upgrade works fine. gem update works fine. cargo update works fine. Each updates the packages while maintaining dependency version invariants declared by those packages. Conda seemed to be trying to do that, but couldn’t do it correctly, and then trashed the system. This is not a failure mode I consider acceptable.

                Sure, sometimes upgrading to newer software causes issues with backwards compatibility, hence stable and LTS release channels. But that’s not what happened here. After conda update completed, it was impossible to do anything else with conda at all. It was utterly broken, incapable of proceeding in any direction, no conda commands could successfully change anything. And the packages were all in completely inconsistent states. Stuff didn’t stop working because the newer packages had bugs or incompatibilities, they stopped working because package A required B version 3+, but version 2 was still installed because conda self-destructed before it finished installing B version 3. Has that ever happened to you with gem?

                If updating my packages in the Python ecosystem makes me a loon, what is normal? Does anyone care about security patches? Bug fixes?

                I can’t comment on gem’s memory usage; I’ve never run it on a machine small enough to have that problem. But I haven’t had transient failures for any reason other than networking issues. I have used gem to update all my --user installed gems many times. Conda literally self-destructed the first time I tried. Literally no issue with gem comes remotely close to the absolute absurdity of Python packaging.

                1. 4

                  apt-get upgrade does work pretty reliably (but not always, as you imply) because it is a closed universe, as described in TFA, in which the totality of packages is tested together by a single party - more or less, anyway. Cargo I have not used seriously.

                  Upgrading packages does not make you a loon, but blindly updating everything and expecting it to work arguably does - at least not without the expectation of debugging stuff, though I wouldn’t expect to have to debug the package manager itself. It sounds like you ran into a conda bug on your first (and apparently sole?) experience with Python packaging. I can’t help you there, except to say that it is not the indicative experience you are extrapolating it to be.

                  I don’t want to get deep into a rubygems tit for tat except to repeat that it has been a huge problem for me professionally, yes, including broken environments. Rubygems/bundler performance and reliability were among the reasons that team abandoned Puppet. Ansible used only the system python on the target machine, a design I’m sure was motivated by the problems of having to bootstrap ruby and bundler for puppet and chef. I’m sure there were other ways to surmount that, but this was a JVM team and eyes were rolling each time DevOps was blocked on stuff arising from Ruby packaging.

                  1. 2

                    your first (and apparently sole?) experience with python packaging

                    Latest experience. Previous experiences have not been so egregiously bad, but I’ve always found the tooling lacking.

                    having to bootstrap ruby and bundler for puppet

                    That’s the issue. As a JVM team, would you run mvn install on production machines, or copy the jars? PuppetLabs provides deb and rpm packages for a reason.

                    Regardless, even though gem has problems, I can still run gem update on an environment and resolve any problems. What about Python? Apparently that’s so inadvisable with Conda that I was a loony for trying. It’s certainly not better with pip, which doesn’t even try to provide an update subcommand. There’s pip install -U, which doesn’t seem to provide any way to ensure you actually end up with a coherent version set after upgrading. Conda, though it blew up spectacularly, at least tried.

                    Seriously, how are you supposed to keep your app dependencies updated in Python?

                    1. 2

                      how are you supposed to keep your app dependencies updated in Python?

                      In the old days, there were scripts that told you what’s outdated and even generated new requirements.txt. Eventually pip gained the ability to list outdated deps by itself: https://superuser.com/questions/259474/find-outdated-updatable-pip-packages#588422

                      State of the art is pyproject.toml projects: your manager (e.g. pdm or poetry or whatever else, because There’s One Way To Do It™) just provides all the things you want.
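
                      Concretely, that looks something like this (poetry shown; pdm has equivalents):

                      pip list --outdated      # what pip can now do natively
                      poetry show --outdated   # or let the pyproject.toml manager report it
                      poetry update            # resolve and upgrade within your declared constraints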

                2. 8

                  Yes I’m afraid I would consider blithely updating lots of software to the latest version very ambitious. Only a loon would update a number of different pieces of software at once with the serious expectation that everything would continue working, surely you can’t mean that?

                  You have been Stockholm syndromed. Yes, in most languages, you just blithely do upgrades and only run into problems when there are major version bumps in the frameworks you are using and you want to move up.

              2. 5

                I’ve been in the Python ecosystem now for about six months after not touching it for 15+ years. Poetry is a great tool but has its warts. I worked around a bug today where two sources with similar URLs can cause one to get dropped when Poetry exports its data to a requirements.txt, something we need to do to install dependencies inside of the Docker container in which we’re running our app.

                As Poetry matures, it’ll be great.

                1. 2

                  Poetry is already three years old.

                  1. 2

                    A young pup!

                2. 2

                  I agree with your post, but this is a stretch:

                  In Python’s case I think the packaging system is solid overall and does work

                  You define your dependencies, but they will update their own dependencies down the tree, to versions incompatible with each other or with your code. This problem is left unsolved by the official packaging systems.

                  But in all frankness… this practice of pulling in dependencies for something that would otherwise take 5 minutes to implement, and being OK with having dozens of moving targets as dependencies, is something that just can’t be solved. I’d rather avoid it: use fewer, well defined dependencies.

                  1. 7

                    If you ship an app, this can be solved by “pip freeze” which saves the deep dependencies as well. If you ship a library, limit the dependency versions.

                    This is not really a Python problem - you have to do the same thing in all languages. Similar languages like Ruby and JS suffer the same issue.
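
                    A minimal sketch of the app case:

                    pip freeze > requirements.txt     # pin the full transitive set, deep deps included
                    pip install -r requirements.txt   # reproduce exactly that set elsewhere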

                    1. 2

                      Pip freeze helps, but you’re still screwed if you rely on some C wheel (you probably do) or Python breaks something between versions (3.7 making async a keyword in particular was brutal for the breakage it caused).

                      1. 1

                        I’m not sure what you mean - wheels are precompiled, so get frozen like everything else. What do you think breaks in that situation?

                        1. 1

                          If I knew, it wouldn’t be broken, would it? All I can tell you is that wheels routinely work on one machine but not another or stop working when something on a machine gets upgraded, whether deliberately or accidentally. Why? I have no idea.

                1. 8

                  I had no idea they were different! I always thought SFTP was just a fancy name for scp. Turns out SFTP is an SSH protocol standard.

                  1. 10

                    Yes they are pretty different, I wrote about it here https://rain-1.github.io/use-sftp-not-scp.html

                    1. 3

                      I see you are also against rsync. Is there an alternative that uses a similar protocol for incremental updates but has a better implementation?

                      1. 2

                        Maybe reclone

                      2. 3

                        Thanks, looking at its interface is all I need to know I don’t ever want to use the sftp tool. That interface is horrible.

                      3. 3

                        I thought scp was just a command line tool to transfer files over sftp. Looks like it is that now. What did it use before if not sftp?

                        1. 6

                          scp used SCP

                        2. 2

                          An additional learning that blew my mind is that SFTP is actually very much used in big corporations!

                          It is used widely in Finance and Healthcare afaik. There is a wish to move away from file-based protocols, but it will take some time!

                          1. 3

                            An additional learning that blew my mind is that SFTP is actually very much used in big corporations!

                            I recently bought a Brother printer / scanner. The scanner has an option to upload results via sftp, with a web-based GUI for providing both the private key for it to use and the server’s public key. It was very easy to set up to scan things to my NAS, where I wrote a tiny script that uses fswatch to watch for new files and then tesseract to OCR them.
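
                            For the curious, the script is essentially this (a sketch; the paths are made up, and it assumes bash, fswatch and tesseract):

                            #!/usr/bin/env bash
                            SCAN_DIR=/volume/scans
                            # fswatch emits NUL-separated paths for new files;
                            # tesseract OCRs each one into a searchable PDF next to the scan
                            fswatch -0 --event Created "$SCAN_DIR" | while IFS= read -r -d '' f; do
                              tesseract "$f" "${f%.*}" pdf
                            done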

                            I was very happy to see that it supported SFTP. The last printer / scanner combo thingy I bought could talk FTP or SMB, but a weird version of SMB that didn’t seem to want to talk to Samba.

                            1. 2

                              The product made by the company I work for handles a lot of data being transferred in flat files. Many customers have “security checklists” that identified FTP as an insecure protocol and recommended SFTP instead.

                              I used to mock file based data transfer but compared to stuff like getting data via JSON APIs they have a lot of life in them still…

                              1. 2

                                You mention JSON APIs; but you can have JSON APIs over SFTP, so I guess you meant REST APIs instead.

                                As far as I understand, the main issue with file based data transfer over SFTP is that there’s no support for signalling upload completion in any way.

                                E.g.: if client 1 uploads a file to the server for processing, how does the server know the file upload is complete?

                                This is often worked around by changing the name of the file (using the SFTP rename command), or uploading a hash too, or making the file name the hash, etc… all of this is pretty clumsy compared to how HTTP handles it.
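
                                The rename convention, for example, looks something like this (a sketch; host and paths are hypothetical):

                                sftp -b - upload@example.com <<'EOF'
                                put report.csv incoming/report.csv.part
                                rename incoming/report.csv.part incoming/report.csv
                                EOF

                                The server then only treats files without the .part suffix as complete.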

                                1. 2

                                  Correct, I meant REST APIs (often returning JSON, but can return XML too).

                                  There are a lot of issues with file based transfer, including stuff like completeness (which can be mitigated by including a defined footer/end-of-file marker), file names, unannounced changes of format, and so on.

                                  But you can shuffle a lot of data in a short time by zipping files, the transfers can be batched, and the endpoint generally doesn’t need a ton of authentication infra to ensure that unauthorized access is prevented etc. Push vs. Pull.

                                  In the long run, returning data over an API endpoint is The Future, but SFTP is basically a small upgrade to FTP which enables transport security without a ton of other changes.

                                  1. 1

                                    It’s a bit unclear here if you’re talking about SFTP or FTPS…

                                    1. 2

                                      SFTP.

                                      I don’t mean it’s a drop-in replacement, but as a part of a system where you have 2 systems communicating using files, updating the transport mechanism from FTP to SFTP is a small step compared to converting the entire chain to an API-based solution.

                                2. 1

                                  What bothers me about SFTP versus FTPS (as a replacement for FTP) is that you need to allow ssh traffic from your client to your server. It also means providing a real account for the client on the machine, while FTPS is just as secure and can make use of virtual accounts and a different port than SSH by default.

                                  1. 2

                                    There’s nothing about the SFTP protocol that doesn’t allow for virtual users or other port numbers.

                                    1. 1

                                      Sure, the protocol allows it, but as far as I know, openssh doesn’t support virtual users. So you’d need to install another server (say vsftpd), and at that point, why would you run sftp over ftps?

                                3. 1

                                  Yes, I work in the data space, and sftp connectors usually come up right after cloud stores. A lot of companies use it; it is even supported by Hadoop. It seems to have replaced ftp/nfs in a lot of corporations.

                                4. 2

                                  I think scp was basically rcp over ssh rather than rsh/rlogin.

                                1. 12

                                  Docker has some suboptimal design choices (which clearly haven’t stopped its popularity). The build process is one of them, but that part is fixable: the line-continuation-ridden Dockerfile format that makes it impossible to comment on why each thing is there, the implicit transactions that bury temporary files in the layers, and the layers themselves, which behave nothing like an ideal dependency tree. What makes me sad are the fundamental design choices that can’t be satisfyingly fixed by adding stuff on top, such as being a security hole by design, and containers being stateful and writable, and therefore inefficient to share between processes and something you have to delete afterwards.

                                  What is a more ideal way to build an image? For a start, run a shellscript in a container and save the result. The best part is that you don’t need to copy the resources into the container, because you can mount them as a readonly volume. You do need to implement rebuilding logic yourself, but you can, and it will be better. Need layers? Just build one upon another. Even better, use a proper build system, one that treats dependencies as a tree, and then make an image out of it.
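
                                  With plain docker, that approach looks roughly like this (a sketch; the image, script, and tag names are made up):

                                  cid=$(docker run -d -v "$PWD:/src:ro" debian:stable /src/build.sh)  # resources mounted readonly
                                  docker wait "$cid"                     # let the build script finish
                                  docker commit "$cid" myimage:latest    # save the result as an image
                                  docker rm "$cid"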

                                  As for reimplementing Docker the right way from the ground up, there is fortunately no lack of alternatives these days. My attempt, selfdock, is just one.

                                  1. 8

                                    As for reimplementing Docker the right way from the ground up, there is fortunately no lack of alternatives these days.

                                    What about nixery?

                                    Especially their idea of “think graphs, not layers” is quite an improvement over previous projects.

                                    1. 4

                                      I spent some time talking about the optimisations Nixery does for layers in my talk about it (bit about layers starts at around 13:30).

                                      An interesting constraint we had to work with was the restriction on the maximum number of layers permitted in an OCI image (which, as I understand it, is an implementation artefact from before) and there’s a public version of the design doc we wrote for this on my blog.

                                      In theory an optimal implementation is possible without that layer restriction.

                                      1. 2

                                        Hey! Thanks for sharing and also thank you for your work, true source of inspiration :)

                                    2. 2

                                      My attempt, selfdock, is just one.

                                      This looks neat. But your README left me craving examples.

                                      Say I want to run or distribute my python app on top of this. Could you provide an example of the equivalent to a docker file?

                                      1. 4

                                        Thanks for asking! The idea is that instead of building, distributing and using an image, you build, distribute and use a root filesystem, or part of it (it can of course run out of the host’s root filesystem), and you do this however you want (this isn’t the big point, however).

                                        To start with something like a base image, you can undocker a docker image:

                                        docker pull python:3.9.7-slim
                                        sudo mkdir /opt/os/myroot
                                        docker save python:3.9.7-slim | sudo undocker -i -o /opt/os/myroot
                                        

                                        Now, you have a root filesystem. To run a shell in it:

                                        selfdock --rootfs /opt/os/myroot run bash
                                        

                                        Now, you are in a container. If you try to modify the root filesystem from a container, it’s readonly – that’s a feature!

                                        I have no name!@host:/$ touch /touchme
                                        touch: cannot touch '/touchme': Read-only file system
                                        

                                        When you exit this process, the reason for this feature starts to show itself: The process was the container, so when you exit it, it’s gone – there is no cleanup. Zero bytes written to disk. Writing is what volumes are for.

                                        To build something into this filesystem, replace run with build, which gives you write access. The idea is as outlined above: mount your resources readonly and run whatever:

                                        selfdock --rootfs /opt/os/myroot --map $PWD /mnt build pip install -r /mnt/requirements.txt
                                        

                                        … except that if it modifies files owned by root, you need to be root. As the name implies, selfdock doesn’t just give you root.

                                        Then, you can run your thing:

                                        selfdock --rootfs /opt/os/myroot --vol $PWD/data /home python app.py
                                        

                                        Note that we didn’t specify a user and group ID to run as: it just uses yours (anything else would be a security hole). This is important for file permissions, especially when giving write access to a volume as above. But since the root filesystem is readonly, you can run thousands of instances out of it, and the overhead isn’t much more than spawning a process. The big point here is not the features, but doing things correctly.

                                        1. 2

                                          That sounds very similar to what systemd-nspawn offers. Once you deal with unpacked root filesystems it may be another solution to look at.

                                          1. 1

                                            So, it has even more resemblance to chroot but with more focus on isolation and control of resource usage, IIUIC.

                                            A bit of feedback, if I may. The whole requirement of carrying files around will put people off, myself included. I refrain from using docker because of the gigantic storage footprint any simple thing requires. But the reason it is so popular is that it abstracts away the binary blobs. People run docker commands and let it do its thing; they don’t need to fiddle with, or even know about, the images stored on their hard drive. It was distributed with dockerhub connectivity by default, so people only worry about their docker files and refer to images by a URL, or even just a slug if the image is on docker hub.

                                            Similarly, back in the day, many chroot power users had a script to copy a basic file structure to a folder and run chroot. I think most people would want this, even if unconsciously: a command that does the complicated parts behind a simple porcelain.

                                      1. 4

                                        I came to this conclusion too. But of course, this doesn’t exhaust the debate, because it assumes all dependencies have a benefit, when in reality many of them, sometimes most of them, have a cost.

                                        1. 6

                                          Is Lobsters a document or an app?

                                          1. 4

                                            Presentation-wise, I would lean more towards a set of documents.

                                            1. 2

                                              The article talks about client side and server side processing, but its logic seems very client focused, and Lobste.rs uses a very lean client.

                                              On the server, there’s always going to be some code that translates an incoming HTTP request into data obtained from a static file. It doesn’t make a huge amount of difference if that code is getting the data from a database or a zip file. There’s going to be code running either way; the difference is the maturity of that code.

                                              1. 5

                                                It does make a difference, because HTTP is designed with full support for downloading files, with headers reserved for filename hints and even for things like whether a file should be displayed in the browser or downloaded to the local filesystem. The first webservers did this and still do to this day. It’s first-class functionality, and if you keep things limited to that, it is trivial to move the site. Heck, the site is even browsable directly, without HTTP involved at all.
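
                                                For instance, the header in question is Content-Disposition: “inline” means display in the browser, “attachment” means download (the URL here is hypothetical):

                                                curl -sI https://example.com/report.pdf | grep -i '^content-disposition'
                                                # e.g.: Content-Disposition: attachment; filename="report.pdf"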

                                                I agree with the author, but @carlmjohnson’s question is not to be disregarded so quickly. From the point where you expose a database view, you introduce expectations about the dynamic nature of the content. And from there arises demand for interactivity. And at that point you are fiddling with the DOM in the client. Then you ask yourself… OK, does it really make sense to assemble HTML both on the server and in the browser? What about the server having a well defined HTTP API that spits out whatever data I need? We have arrived at SPAs.

                                                By all means, I’m all for sites like lobsters with full page form submissions, simple forms and buttons. But the pressure for shiny looking websites from the general public is just enormous.

                                                1. 3

                                                  But the pressure for shiny looking websites from the general public is just enormous.

                                                    I wonder if that pressure really comes from the general public?

                                                  1. 4

                                                    It comes from the boss who saw something shiny on a competitor’s site.

                                                    1. 3

                                                      I think it does. While there are many non-tech-savvy people that would instinctively prefer their tried and true software that works… I believe they are still the minority. Sexy screenshots trump everything. If you are in the industry as a worker, you get left behind if you don’t embrace it. Engineers that put together shiny things with flashy colors and lots of padding will be promoted.

                                                      1. 1

                                                        You can have “sexy” “documents”: https://lobste.rs/ is pretty sexy design-wise, but it still acts and looks like a “document”. The content is niche, but I don’t believe its design would be rejected by the general public.

                                                        1. 2

                                                          I don’t think so at all. People would reject it in the blink of an eye given an alternative with huge title text, lots of padding, large round avatars, and all links looking like buttons with large rounded corners and flat design.

                                                          Look at how discord completely took over all existing forum software. What other reason is there besides flashy looks?

                                                          1. 1

                                                            Did Discord take over all forum software? I recall the old web forum model becoming unpopular well before Discord became a thing; it seems like Facebook replaced it as much as anything. Since Discord is a chat program, it doesn’t seem to me to be comparing like with like.

                                                            As for why these proprietary platforms won, I see there as being two reasons. The first is that these platforms realised they could use graph data (or in Discord’s case, multiple “servers”) to create a platform which scales to an infinitely large number of people and infinitely large number of communities, enabling a network effect which leads to a network effect monopoly. In short on Facebook you’re “bubbled” according to your position in the graph (the people you’ve friended). Compare this with a web forum in which everyone sees the same thing (and in which new subforums can only be created by administrators). This model naturally scales only so far and a traditional forum will always have some specific subject of focus for this reason. Moreover, if you were involved in web forums, you might recall that smaller forums (in which everyone knew each other) had a very different feeling to larger ones; and as smaller ones grew to be larger ones, their feeling changed in this way. By using graph data the modern social network can allow one to have a more “local” community while also being able to communicate with a much larger global network of people. Of course, this requires people to provide this graph data to them (which they do by adding people); the value of this graph data to commercial and state surveillance interests is a very convenient coincidental benefit to these platforms.

                                                            A second likely reason might easily be “dopamine engineering”. That’s not quite the same thing as “people want flashy UI”.

                                                            1. 2

                                                              I meant Discourse. Sorry.

                                                              It is essentially the same functionality as phpBB and the like, with a flashier design.

                                                              1. 4

                                                                I’d argue Discourse is a lot less flashy than phpBB; phpBB style forums have a lot of extraneous chrome (unless it’s a buy/sell forum, why do I care about the poster’s location?) that Discourse ditches in favour of content and widgets focused on navigating content. (Of course, Discourse isn’t the first; it feels like a spiritual successor to Vanilla for me.)

                                                                1. 1

                                                                  https://try.discourse.org/ doesn’t look like an app to me.

                                                                  Edit: But it is.

                                                                  It could very well be a progressively enhanced SSR web site. The design would be mostly the same.

                                                      2. 1

                                                        I think @carlmjohnson’s question is interesting because lobste.rs is a document; it’s just not the same kind of document as a static web page, and that serves to highlight the underlying problem: web browsers have evolved from a mechanism for displaying a document into a framework for providing document viewers. This isn’t a new development. Netscape 2.0 was the first web browser to support a mechanism for providing custom viewers for other kinds of documents (Mosaic / Netscape 1 provided a mechanism for opening other kinds of documents in a different application).

                                                        Perhaps the real questions that need asking are:

                                                        • To what degree is this document different from a static HTML page?
                                                        • What is the smallest possible viewer for a document of this kind?
                                                  1. 26

                                                     Let’s look at an example: I make software for hair salons. Our main product is based on Electron and runs in many hair salons in The Netherlands. Over 95% of our users have PCs that run Windows. So, we have a choice: use Electron to support both macOS and Windows, or don’t support macOS at all.

                                                    […]

                                                    I’ve never gotten the feedback during the last ten years: “It’s a nice app, but it would be better if it were a native app”. Not once. Users only care that it works, not how it works.

                                                    Sounds like you’re looking for the wrong feedback. If your clientele isn’t technical, they don’t know there’s a difference between electron and native apps. Even if they can tell there’s a difference, they don’t know how to express it. Maybe they would be happier with a native app. Maybe they’d be happier if the app was more responsive, and the app would be more responsive if it used less memory. You can’t take what the customer says as gospel, because they don’t know how to communicate at your level.

                                                    1. 11

                                                      Right, the salon doesn’t care about the stack, but they do care about its effects; if it’s slow, they’ll feel it (even if it’s not enough to make it worth communicating). If it doesn’t take the same shortcuts as the other apps, they’ll complain, etc.

                                                      1. 6

                                                        Honestly a salon app can probably be done entirely with a local web app and a shortcut on the desktop which opens the default browser to the salon app location. I’ve seen lots of apps for kiosks and such distributed this way. Usually written in Java but not always. When I’ve done non-profit work in the past, it’s what I’ve done.

                                                        1. 3

                                                          Why is this better than an Electron app? It’s using potentially more RAM — a full browser and a JVM, rather than just a stripped-down browser — with a more confusing UI (why is there an address bar?). It’s just as “slow” and most likely actually slower, since every interaction needs the entire UI and any data updates serialized to and from strings and sent over a local socket, rather than happening entirely in-process with shared memory. Electron for consumer apps has some known downsides (e.g. RAM, since most users will be running a browser in addition to the app at most times), but for point-of-sale or registration management apps, it’s not like someone’s trying to browse Reddit and stream Netflix on the same machine at the same time. A local webserver with a desktop shortcut to open a browser seems… significantly worse than a single Electron app.

                                                          1. 3

                                                            It’s using potentially more RAM

                                                            This is a kiosk app. There’s nothing running on this box other than the operating system and the application. I struggle to think of a box made in the last 10 years that couldn’t run a single browser tab and a JVM to run a CRUD web app. Some of these boxes are even airgapped. Why prematurely optimize?

                                                            with a more confusing UI (why is there an address bar?)

                                                            Kiosk mode apps usually hide the address bar. Okay that’s not exactly what I said when I said “shortcut on the desktop which opens the default browser to the salon app location” but please use good faith and permit me to say I am capable of hiding the address bar.

                                                            since every interaction needs the entire UI and any data updates serialized to and from strings and sent over a local socket

                                                            And a multiprocessing environment is running background processes and handling devices that I couldn’t care less about in a kiosk environment. Why even go with an Electron or native app here if you could grab a microcontroller and a VGA display and implement the entire thing from scratch in C or Forth or something? The answer is, it’s easy and quick for me to write, it’s easy and quick to deploy, it’s easy for the non-profit to find other people who can work on the app, and it’s cheap for the non-profit to hire somebody to fix the app if they need to. A non-profit isn’t looking for the leanest/meanest application, they just need to have something to quickly run (e.g. fast development) and make a difference to their organization, be easy/cheap to maintain, and easy/cheap to fix. That’s the whole calculus here. I presume @Sophistifunk’s work on airline kiosks probably had very similar requirements.

                                                            1. 6

                                                              Right, but in a thread dumping on Electron for this exact use case — a salon kiosk app is exactly the app the blog post author was talking about — I’m confused why Electron is viewed as bad but a locally-running JVM webserver + custom webview for a browser is viewed as good.

                                                               I agree wholeheartedly that nobody should care about less-than-gigabyte RAM overhead for a kiosk app: nothing else is running on the machine, so it doesn’t matter. And I agree that “it works” and “it has the features I need” are the most important parts. Just… why the Electron hate? It has generally similar upsides in terms of being easy to maintain + easy to hire for + easy cross platform support, and the downsides are as meaningless as you mention for the local webserver + browser version. It’s also probably easier for the average team to deploy: you don’t need a custom webview, you don’t need multiple build targets, you need to support one fewer language’s tooling, and it’s easier to manage since there aren’t hidden-to-the-end-user processes running separately from the application and binding ports, that can crash or hang or get into buggy states that simply closing the app window won’t fix… I’m not saying “don’t do a local webserver + browser/custom webview” (if your company has mostly JVM people already, then it could make sense); I’m just confused why people would think Electron is inferior or poorly suited for kiosk apps by comparison.

                                                              Edit: maybe I misunderstood, and you agree with the blog post author that Electron is fine for this use case?

                                                              1. 1

                                                                 Quick to write and deploy is definitely not my experience with webapps! I have found that desktop apps have a far simpler architecture, in general.

                                                              2. 3

                                                                 It’s not better on those metrics you point out. It is that exact mindset that led to Electron’s success and, before that, its existence.

                                                                 It is a possibility: a web developer can develop a desktop app. They already could before Electron, as pointed out by GP; Electron just streamlined that way of working. It works, and that is enough for many people, but not always. I think the position in the article is silly because it essentially justifies everything with “it works”. I’m pretty sure the author doesn’t have that mindset when he buys a car, a computer or a pair of jeans. It is not so difficult to grasp that people want quality products that work well.

                                                                 Two aspects that don’t usually get discussed: 1. UI libraries are usually horrible to work with, poorly documented, and the communities are generally full of entitled people who are not too interested in sharing their knowledge. The same doesn’t apply to web technologies, which have been more open and welcoming from day one. 2. People used to value a UI that had the same style as the rest of their OS chrome; nowadays they value individually designed aesthetics. Pretty much the opposite. Webtech, being a document formatting solution, excels here.

                                                                 Chromium apps don’t get enough love, and I think half of the people would use them instead of Electron. Slack is one such case. I cannot understand how people reason about having a desktop app that is literally just showing a webpage you could open.

                                                                1. 1

                                                                  I think the position on the article is silly because it essentially justify everything by “it works”. I’m pretty sure the author doesn’t have that mindset when he buys a car, a computer or a pair of jeans. It is not so difficult to grasp that people want quality products that work well.

                                                                   That is what most people want out of a car, a computer, or a pair of jeans. Almost everyone in my life that drives has opted for a simple Japanese car with high reliability, often slightly used, not because they love their CVTs (lol) or find the styling of the Corolla beautiful, but because it gets them from point A to point B with minimal fuss and seldom breaks down. In another life I used to do cabinetry. The number of people willing to pay for solid woodwork when they can get a rickety Ikea table is vanishingly few. I, and most other folks with an eye for woodwork, would love to sit on a Morton Chair, but most people have never heard of it, nor do they care. They want a chair that’s decently aesthetic and comfortable enough. That’s all. This doesn’t excuse slow apps or apps that are unresponsive, but as TFA says, that’s a matter of how you write an app, not an indictment of Electron itself. A slow app is slow and disrespectful of its users regardless of the technology underneath, much like people won’t sit in a chair that wobbles excessively.

                                                                  It’s not better in those metrics you point out. It is that exact mindset that led to electron success and before that, it’s existence.

                                                                  At one end of a spectrum is a bespoke solution from a bespoke FPGA and a bespoke display. On the other is some plug-and-play Electron monstrosity. Given the requirements you have at hand which point along this spectrum would you pick and why?

                                                                  1. 3

                                                                     From an engineering perspective, the Corolla is the native app in the analogy. Compared to European and American cars, it has fewer superfluous resources thrown at it: fewer unnecessary features, no luxury factor, better fuel economy, great durability, etc. In essence, it makes better use of resources.

                                                                     But I agree with you that from the user perspective, things might be switched around. The fact is that making an Electron app is cheaper because there is a much larger pool of [cheap] developers available. At that point, even if the user would prefer a native app, the Electron app is good enough, or to be fair, often does exactly what the user wants, resource usage being unimportant in many cases.

                                                                    At one end of a spectrum is a bespoke solution from a bespoke FPGA and a bespoke display. On the other is some plug-and-play Electron monstrosity. Given the requirements you have at hand which point along this spectrum would you pick and why?

                                                                     95-99% of the time, an Electron app. The 1-5% is left for more critical missions, or for cases where resources are limited or scarce for whatever reason.

                                                              3. 1

                                                                Yes, many years ago now I was behind the curtain at “airline you’ve heard of” and all their airport kiosks were a Java web app with a web front-end.

                                                                1. 3

                                                                  Surprised it wasn’t a 3270/5250 application.

                                                                  1. 1

                                                                    These were public-facing touch screens.

                                                                    1. 1

                                                                      United States based carrier? With a popular bag check system that exhibits many 9s of user facing uptime?

                                                              4. 1

                                                                You assume working feedback chains between users and owners, and then developers.

                                                                Most of the time you need to be a coworker to know what people hate.

                                                              5. 6

                                                                Also: these are computers that do one thing and one thing only. In a more typical desktop example you will have several apps open, including perhaps multiple Electron apps, and I bet the general expectation is also a bit higher than a point of sale system.

                                                                Anecdotally, I’ve certainly noticed quite a few Electron apps have issues in terms of performance and UX, which I usually noticed before I knew it was Electron. This is a biased sample of course since I never noticed apps using Electron when it worked well, but all of this seems a bit too hand-wavy to me.

                                                                1. 5

                                                                  There are plenty of apps that are fine delivered via Electron. In every case other than VSCode (which is basically a miracle), they’re things that should just be web-apps in the first place.

                                                                  1. 1

                                                                    Maybe they’d be happier if the app was more responsive, and the app would be more responsive if it used less memory.

                                                                    Though this raises the question, what if the author of the app used Electron and they made sure to optimize it to make it as responsive as possible? VSCode is an example but VSCode actually does a lot. A salon app, like the author of this site is talking about, is probably a lot easier to optimize than something like VSCode.

                                                                    (FWIW I’m not saying the author did or did not make their app responsive/low-latency. I’m just raising the point that there’s a middle ground that can be acceptable.)

                                                                    1. 4

                                                                       VSCode often gets touted as an example, but realize that there is a huge team at a well funded company making this app, and they clearly put a lot of work into it! I’d say VSCode is more an exception than anything, and an example of how much work it takes to make an Electron app that performs reasonably.

                                                                       Slack is a counter-example: a large team at a well funded company that ships a poorly performing Electron app.

                                                                      1. 4

                                                                        If you want an example of a poorly-performing Electron app from the same company that made VSCode, let me point you at MS Teams.

                                                                        1. 1

                                                                          I think there’s a lot more variables here than team size and funding and as such using these as predictive features is misguided. Just like trying to use those two features to determine the success of any software project.

                                                                          1. 2

                                                                            Nah, not sure how you got that take.

                                                                             I’m more saying that even with lots of advantages, Electron apps still end up like crap most of the time. VSCode should be considered an anomaly, or at best an example of what is possible as an unlikely/amazing result, not a good predictor of what end users most often end up with.

                                                                            1. 4

                                                                               I’m more saying that even with lots of advantages, Electron apps still end up like crap most of the time. VSCode should be considered an anomaly, or at best an example of what is possible as an unlikely/amazing result, not a good predictor of what end users most often end up with.

                                                                              I mean I disagree but there’s no substance to disagree over. It’s just your word against mine. That’s literally what TFA is about. The author of TFA thinks good and bad apps are independent of Electron and I agree. You don’t. Without anything more to bring to this we’re just voicing our unfounded opinions.

                                                                    1. 2

                                                                      A long time ago, all the developers had a common dream. The dream was about interactivity, liveness, evaluation…

                                                                      And after a while, we even forgot that we ever had this dream.

                                                                      This assertion is unfounded, and I definitely don’t agree. There are some developers who love those things, and they definitely haven’t forgotten about it. There’s a reason many developers fawn over REPLs. On the other hand, many developers opt away from REPLs, because the sacrifices necessary to enable them aren’t always worth it.

                                                                       On the topic of the article: while it’s a cool demo, I don’t personally find it revolutionary, or all that useful. The Tour of Go, many programmers’ introduction to the language, is similarly interactive Go programming in a webpage (albeit with compilation done on the backend). And honestly, when reading programming blogs, I’ve never once found myself wishing the snippets were interactive. If I want to tinker with a snippet, I can edit and compile it locally, in a familiar and comfortable environment.

                                                                      1. 4

                                                                        To build on this, I’ve found that blogs/sites that try to do clever interactive things with the examples completely fail to display any of the code snippet if javascript is turned off in the browser. So it’s beyond useless. Not everyone is browsing with a fast computer.

                                                                        1. 1

                                                                           The fact that the word “dream” is used is telling; it is not a coincidence. The author used the only word he could. A dream: something vague, not rationally proven to make sense.

                                                                           Over the last decade or two, every now and then some website or programming language does this: Pyret, the Crystal programming language, the Go programming language, the Eloquent JavaScript book. They all have these silly text boxes where one can paste code and click execute. I fail to understand why, and to whom, this would be useful. You are advocating for the usage of a language, for interactivity, and for being able to use it “on your computer”, and you do so by encouraging the user NOT to run it on their computer? Seems paradoxical to me. Also, if we are talking about JavaScript, there is no dream to chase. You don’t need toy REPLs; your browser already comes with a proper one. To counter Kay’s example: such a dream is already reality for JavaScript. Just open the browser JavaScript console while on Wikipedia.

                                                                          1. 1

                                                                            So you are advocating for the usage of a language, and for interactivity, and all for being able to use it “on your computer”, and you do so by encouraging the user NOT to run it on their computer?

                                                                            I think I’ve missed the point you’re trying to make here. The code in the blog post runs on whatever computer the user is using to browse the internet. How is that NOT “their computer”, while opening the JavaScript console on the very same browser while visiting Wikipedia is “their computer”?

                                                                            1. 1

                                                                              No, that is exactly my point. If you provide a text box on a webpage, the code is either executed remotely, which is the case for most examples I gave, or it relies on (or is) JavaScript, for which we already have a superior experience in the developer tools.

                                                                        1. 3

                                                                          I am a Linux user and I generally steer clear of software which uses Docker as its main means of distribution/installation.

                                                                          I don’t quite get this rant. For starters, all the software he is trying to use, for me, falls in the category of bloated, borderline useless stuff that could be replaced with something built in an afternoon or so.

                                                                          But judgements aside… surely the authors of those projects made assumptions that fit their target audience well. It’s not like the author is their boss.

                                                                          Clone the repos and tweak them to your needs if they’re worth your time? What is the point of ranting if the code is available?

                                                                          1. 24

                                                                            When your configuration has become complicated enough that you need a DSL it starts to look a lot less like configuration and much more like a scripting interface to your application.

                                                                            For scripting your app, I agree no static format or generic special-purpose configuration language with logic will really fit your needs. You’ll need to provide some sort of scriptable DSL. However, the vast majority of configuration is not and should not be this kind of configuration. For those cases a static format is perfect, since it limits the impulse to grow into a DSL. Languages like Dhall, CUE, Jsonnet, and others are a good fit when used to generate those static config files.
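                                                                            A minimal sketch of that split, assuming a Node.js build step and invented names; the application itself only ever reads plain JSON:

                                                                                // build-time script: all the logic lives here, not in the config format
                                                                                const fs = require("fs");

                                                                                const environments = ["staging", "production"];
                                                                                const config = Object.fromEntries(
                                                                                  environments.map((env) => [env, { host: env + ".example.com", port: 8080 }])
                                                                                );

                                                                                // emit a static file; the app never needs to evaluate a DSL at runtime
                                                                                fs.writeFileSync("config.json", JSON.stringify(config, null, 2));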

                                                                            1. 5

                                                                              When your configuration has become complicated enough that you need a DSL it starts to look a lot less like configuration and much more like a scripting interface to your application.

                                                                              Extremely this.

                                                                              Configuration isn’t precisely defined. But in the general context, any definition that can’t be satisfied by flat key-value pairs, expressed by e.g. command-line flags, isn’t a useful model.

                                                                              1. 1

                                                                                I agree. It’s part of the reason why I made command-line flags one of the output formats of my personal experiment in this space, https://github.com/zaphar/ucg. I think it was actually the first output format I implemented.

                                                                                1. 1

                                                                                  Funny that we are in 2021 and the world still hasn’t gotten this right. What more evidence do we need? The article provides the classical example that disproves its point.

                                                                                  Webserver virtual host configuration. Of the two most used webservers in the world, one became the go-to example of how a poorly designed configuration system becomes an inescapable nightmare (Apache), and the other has a configuration system that ‘degraded’ into its own platform, with multinational companies running their whole products on top of it (nginx).

                                                                                  Now that I think about it, is the article sarcastic?

                                                                                  1. 3

                                                                                    nginx is sort of in support of his point. It’s a custom format for configuring a host. Its syntax is better than Apache’s, but it is still very much a custom config format. So I guess it’s one for one?

                                                                                    1. 3

                                                                                      My point is that, by being a proper configuration format by his standards, it ends up being not a configuration but an application platform itself.

                                                                                      It has support for Lua scripting. Some of the biggest websites in the world are, by his classification, not webapps, but rather a webserver with its configuration. You wouldn’t call Alibaba “just a well configured static website”; it is very much a web application. Objectively, if you support scripting, you are not building a configuration system but a programming language/application platform.

                                                                                      JavaScript started sort of like a solution for configuring the dynamic behaviour of an HTML document. It was only a matter of time before it became widely used as what it is: a programmable environment on which you can build applications.

                                                                                    2. 2

                                                                                      one became the go-to example of how a poorly designed configuration system becomes an inescapable nightmare (Apache)

                                                                                      Is this generally-accepted wisdom? Apache’s configuration takes a little getting used to, but I don’t really have a problem with it.

                                                                                      1. 6

                                                                                        There were whole websites dedicated to Apache configuration back in the day. The official documentation had essentially a book on it, with the introductory articles presenting themselves as your first baby steps on a long journey of learning Apache configuration. A skill comparable to, say, learning a programming language.

                                                                                        Then lighttpd, nginx and a couple of others came, and Apache’s reign was over in two years or so. I don’t have the data to support this, just anecdotes. From what I remember, pretty much everyone thought nginx configuration was easier. I myself do find it easier than Apache’s.

                                                                                        1. 4

                                                                                          I used Apache for maybe 10 years. I was a heavy user; I even interviewed to do some Apache training in maybe 2002/3 (I particularly remember that all the stuff I swotted up on about how the weird-ass SSL config options interacted with VirtualHost came up in the call and I was silently high-fiving myself), but something else came up, and in retrospect I’m very, very glad. At the time I thought the config was OK; it was a bit random and all over the shop, and it had plenty of rough edges, but it was free, what the hell. I wasn’t going to be using IIS so, whatever.

                                                                                          Then I discovered nginx back in maybe 2007 and it was night and day. The performance and scale difference on the same hardware was from a different planet, but the configuration was like a breath of fresh air when you hadn’t realized you were suffocating. To say Apache’s was “patchy” was an understatement. There was only enough semblance of consistency and coherence that you thought you could make sense of it when you needed to work something out or learn some new part of it; in reality you just had to learn all the differently-shaped bits or keep going back to the docs the whole time. It was immediately obvious on reading nginx config for even 0.6 that they were worlds apart. The fall-through stuff with named locations was just … “oh, ok, yeah”.

                                                                                          Even though nginx has its own peculiarities, I’ve literally never looked back, apart from when I occasionally have to dig into an Apache config file on some ancient box somewhere, and I kind of shiver and enjoy it in a weird, grave-digging kind of way. Haha, you had to do this … and that … and … oh, wow. Wow, I’m happy I don’t have to deal with this any more.

                                                                                          1. 3

                                                                                            Compared to Caddy or Traefik, it looks awfully complicated and ugly.

                                                                                      1. 9

                                                                                        I actually really like “considered harmful” essays. For most topics it’s a lot easier to find material in favor of it than against it, so searching “$x considered harmful” is a good way to find cold showers.

                                                                                        (Usually they’ll be pretty bad articles, but some are good like this one, and it’s a good launch point for doing more research.)

                                                                                        1. 5

                                                                                          Bring back “Against X”. It worked for Cicero, it will work for us.

                                                                                          1. 3

                                                                                            Is “Against considered harmful” “‘against’ considered harmful” or “against ‘considered harmful’”?

                                                                                          2. 4

                                                                                            I find them really annoying, for the reasons stated in the GP. I wish the author of this submission had chosen a different title and left out a couple of inflammatory sentences. The article is very informative and well written otherwise. “Considered harmful” in the title will probably end up being counterproductive.

                                                                                        1. 67

                                                                                          I love rebase.

                                                                                          There are a couple things rebase enables that are really powerful which are, unfortunately, not possible in fossil.

                                                                                          The first is a clean history.

                                                                                          My commit history as I create it has no value to anybody else. I “finally got this bit working”, I go “This is close but I’m going to try a totally different approach now,” and I leave my computer for the day. All of these are valuable to me, but have no place in the long lived history of my source code. Why?

                                                                                          A simple misconception. Commit history is not supposed to record how I think, but how the committed software evolved.

                                                                                          I commit, then run tests. Should I be committing the failed results and then committing the successful ones? Should I be cluttering my history with commits like “fix tests”, since I commit constantly? Or should I be producing nice, small, specific commits for specific features or specific points in my software’s progression towards its current form?

                                                                                          Bisect means nothing if I have many small commits where I repeatedly broke and unbroke a feature. Bisect means a lot when I have a specific commit that makes a set of changes, or when I have a specific commit that fixes a different bug. It means nothing when I have to try “Hey, does this pass our CI as it stands?” (welcome to big-corp coding).

                                                                                          So point by point:

                                                                                          1. Yes! Rebase is dangerous! Don’t blindly use this command, know what it is you want at the end.
                                                                                          2. Cleaning history so it becomes about the software and not about your brain is a new and useful feature.

                                                                                          2.1) Nope, history all still there, just because you don’t know where the work started doesn’t mean you don’t know where the software gained the work.

                                                                                          2.2) You can merge this way in git too. You can diff two commits in git too. And then you can rebase because again, it’s not about my brain but about the software.

                                                                                          1. Siloed development? “Hey, can you check this branch? It’s my branch, I might clobber it later” is very different from “It’s my code and you can’t see it until it’s all done.” Master/trunk can’t be rebased. Everything else is fair game.
                                                                                          2. So what? Do you really want my commit to show when the work was done at 2 PM instead of 10 AM?
                                                                                          3. Who cares how a line of code came together, so long as the reason for it to exist (commit message) and a clean story for how it fit into the previously-existing project both exist?
                                                                                          4. How your brain works is not so valuable it must be imprinted on your commit history.

                                                                                          6.1) They were thinking “blargh.” Obviously. That’s why it’s an intermediate commit.

                                                                                          6.2) Nothing wrong with small check-ins in a linear progression, rebased into complete commits that add things in small and appropriate ways.

                                                                                          6.3) “blargh” “aargh” “fix the thing” “wtf is with java” “dude. stop” “I AM A ZYGON.” I’d rather a nice commit shaped by a rebase into being a useful object because….

                                                                                          6.4) Cherry picks also work better with single commits that are useful, instead of five commits all that need to go together to bring a single feature across branches. Also, notably, commits with terrible useless messages. See 6.3.

                                                                                          6.5) You want to back out just the testing fix? Or the whole feature while reconsidering how it fits in the existing code. Again, rebased commits for that clean history make this easier.

                                                                                          1. Sure. Rebasing for a merge, maybe a cherry-pick in fossil’s model is actually better. Won’t argue with the local SCM semantics for performing a nice linear merge.
                                                                                          2. Dishonest only if you think SCM is about the developer’s brain, and not the software.

                                                                                          Really I worry the author has had too much enterprise coding experience, where all your work must become a single commit fitting the Jira formatting rule, a multi-dozen-file commit, because that way the pre-CI checks can pass. I understand being in such a system and thinking rebase is to blame. Maybe your org should trust developers a little more, and spend more time saying “don’t say blargh” instead of “all one commit.”
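                                                                                          To make 6.2’s “rebased into complete commits” concrete, a minimal sketch (branch name and hashes invented for illustration):

                                                                                              # squash the noisy WIP commits on a feature branch before merging;
                                                                                              # main is assumed to be the target branch
                                                                                              git rebase -i main
                                                                                              # git opens a todo list; "fixup" melds a commit into the "pick" above it:
                                                                                              #   pick  1a2b3c  uploader: add retry with backoff
                                                                                              #   fixup 4d5e6f  fix tests
                                                                                              #   fixup 7a8b9c  blargh
                                                                                              # the result: one complete commit, which is what bisect and cherry-pick want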

                                                                                          1. 27

                                                                                            The best analogy of this I’ve come up with is your private work is like a lab book (meticulous, forensic) and the public/merged branches are the thesis (edited, diversions gone, to the point).

                                                                                            1. 4

                                                                                              As I started reading your comment I was convinced you had not read the article, but I see you did, as you addressed the points individually. The author does negate your first claim and shows the evidence. I think you mean that some things are not achievable the same way they are in git. Cleaner history is achievable in Fossil in a way that is a superset of git, as the article explains thoroughly.

                                                                                              I am a git user and have never used Fossil. Git works fine for me and has proven to be a reliable and snappy VCS from day one. I don’t have any interest in moving to Fossil, nor am I a member of the group of people that advocate for changes in git, especially not to git’s principles. It works for me; the parts of it I dislike or would have built differently are acceptable choices by people who offered an immensely useful tool to the world. That isn’t to say that valid criticism doesn’t exist or that there aren’t things that could be solved better. I think this article very strongly argues that rebase is just a hacky workflow whose results could be achieved by resorting to better-designed functionality. The author did this masterfully, but on the other hand there is nothing wrong with having a workflow in muscle memory and using it, even if said workflow relies on glitches or rough shortcuts.

                                                                                              1. Do you not mean the opposite? The way I see it, it doesn’t make sense to call a tool ‘dishonest’; one could call it potentially confusing. But it does what it does; how is that possibly dishonest?

                                                                                              Regardless of personal opinions, the article was so clear, and explained things so well and to such detail, that it was a joy to read. This is the mind of a great engineer at work, in a way we don’t see so often these days.

                                                                                              1. 3

                                                                                                rebase is just a hacky workflow whose results could be achieved by resorting to better-designed functionality

                                                                                                This was argued with examples and suggestions which do not share the same assumptions. There may be a case for rebase being a hacky way to go about making changes to past/private commits, but it was not made in this post. Rather, the case was made that any manipulation of past commits is technically and socially wrong.

                                                                                                I understand Fossil allows overlaying new information on past commits; however, there comes a time for messing with actual commits, and not much is lost when you change a parent commit.

                                                                                              2. 3

                                                                                                Maybe I’m missing something, but it seems like the author is specifically talking about git rebase and not git rebase --interactive (at least for the majority of the article). Many of their points are valid for the former, but this response seems to be speaking almost exclusively to the latter.

                                                                                                That being said, I don’t think Fossil supports rewriting history in any form, so quite a few of your responses are critiques of Fossil, but not really of the article. Similar to you, I’ll try to go through all the points and show what I think the original author was getting at. On a side note, I personally don’t think that rebasing all commits so the master branch is flat is very helpful, but some people seem to like it. In any case, that specific use is what I’ll be speaking to, because it’s what the article seems to be talking about.

                                                                                                1. Everyone seems to agree on this, no sense speaking more about it.
                                                                                                2. Raw git rebase is more an alternative to merging in prod than it is cleaning up the commits. Commit cleanup is often useful, while blindly rebasing on prod rather than merging it in isn’t always the best option.
                                                                                                  1. Your argument is saying “some data was lost, but everything is still there”. I have to agree with the original author on this one - a rebase drops the parent commit where the branch first came from, so all the history is not still there (or is purposefully misrepresented). Also see my response to #4.
                                                                                                  2. I think the point they were making is that the claimed benefit from rebasing (“rebasing provides better feature branch diffs”) can be easily achieved by other means - in this case, merging the parent branch back in to the feature branch. On a related note, there are very subtle, but potentially fairly dangerous, differences when you look at the diff from the HEAD to the feature branch without merging in prod, so either rebasing or merging in prod are 2 ways to solve this. That is what the graphics and table show.
                                                                                                3. While I tend to view personal branches as potentially rewritten at any time, have you ever tried to base your branch on someone else’s when they’re using a rebase-based workflow? It’s a nightmare. Trying to get your changes to re-apply on top of their rebased changes often causes conflicts which are very hard to recover from.
                                                                                                4. The issue is not “when was work done”, but “what was the order the work was done in”. Using a rebase workflow, you could easily end up with commits later in the history which were much earlier chronologically. This is extremely confusing if you’re trying to track down what actually happened.
                                                                                                5. I’m not sure what you’re getting at here - Fossil seems to allow amending commit messages to fix information or a mistake at a later date; you can’t do that in git without rebasing… and once something is in prod, that really shouldn’t happen. There have been many times when I’ve wanted to go back and add more information to a commit message (or fix a typo) after it was merged in.
                                                                                                6. For these, I tend to agree with your response - this seems to be one of the only places where the Fossil article is speaking about an interactive rebase and I think they really miss the point.
                                                                                                7. Not much to respond to here.
                                                                                                8. From the original article, “Rebasing is an anti-pattern. It is dishonest. It deliberately omits historical information. It causes problems for collaboration. And it has no offsetting benefits.” I agree that rebasing is often an anti-pattern, but I’m purposefully excluding the modification of local commits to get a more useful history. Rewriting local history can definitely have benefits though, so I don’t think they’re completely right.

                                                                                                I often wish Git’s UI was clearer - the multiple uses of “rebase” seem similar to the many things “checkout” can do. To make the distinction clearer in my head, I personally view git rebase --interactive as a git rewrite-history command. While it may share some of the internals of rebase, it has quite a different goal from the plain “rebase” action.

                                                                                                I hope this helps shed some light on their opinions, even if you may not agree with all of it.

                                                                                                TL;DR: there should be a distinction made between rebasing to keep a flat merge history and rewriting feature-branch commits to make them more useful. The first can cause quite a bit of confusion and lead to a more misleading history, while the second can be a very valuable tool.

                                                                                                1. 1

                                                                                                  Interactive rebase should give you only those abilities available from the commandline, just with a nicer interface.

                                                                                                  I’m OK with fossil commits being append-only. That doesn’t bother me, I love the OpenCrux database which offers an immutable base. A similar thing for commits is an excellent idea.

                                                                                                  But so is modifying the stream of commits to match when commits hit mainline. And so is merging or splitting commits. And so is ordering the work not in how a spread-out team might complete it, with multiple parallel useless checkins a day, but with what matters long term: In what order did this work introduce regressions to the codebase.

                                                                                                  1. 1

                                                                                                    In general I agree with you - I like modifying the stream of commits, but I really only like doing it before they hit main… and I don’t like forcing main to be a straight line without merges. I was primarily trying to point out that most of their arguments focus on rebasing commits to maintain a straight line on main, and not on rewriting history for the sake of a clearer commit log. I think there is very little value to the former (most times), and plenty of value for the latter.

                                                                                                    Again, I really dislike how “rebase” has often been taken to mean “rewriting history” in git, because in a DAG, a rebase is a specific operation. It’s unclear which people are talking about during this conversation and I think some of the wires may have been crossed.

                                                                                                2. 2

                                                                                                  I generally agree that git rebase is fine as long as one knows exactly what one is doing and uses the tool to make deliberate changes and improve the state of the project, but

                                                                                                  So what? Do you really want my commit to show when the work was done at 2 PM instead of 10 AM?

                                                                                                  2PM vs. 10AM is unlikely to matter, but it often matters whether it was yesterday or Thursday 2 weeks ago, which is before we had the meeting about X,Y,Z. I don’t go about memorizing the commit timestamps in my repositories, but I still find them useful occasionally. I wish we’d all be more careful about avoiding argument from lack of imagination in our debates.

                                                                                                  1. 5

                                                                                                    but it often matters whether it was yesterday or Thursday 2 weeks ago, which is before we had the meeting about X,Y,Z.

                                                                                                    Not long term, which is where SCM exists.

                                                                                                    Long term those distinctions turn into a very thin slice of time, and people forget about the discussions that happened outside the commit history. Thus all that remains is a commit message in a line.

                                                                                                1. 20

                                                                                                  I’ve been using JS since the late 90s and I haven’t even seen alert/confirm/prompt outside of toy tutorials or horrendous code since roughly 2005. I don’t find the arguments in this blog very convincing.

                                                                                                  It’s also incredibly clickbait/alarmist, which is an immediate eye roll.

                                                                                                  1. 37

                                                                                                    I use confirm for actions that cannot be undone and are potentially dangerous. Why wouldn’t I? The alternative is to write a lot of code to throw up a modal div that does the same thing. Might as well do it natively.
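                                                                                                      For illustration (element and function names are mine, not from any real app):

                                                                                                          // guard a destructive action with the built-in blocking dialog;
                                                                                                          // no modal framework, overlay or z-index juggling required
                                                                                                          deleteButton.addEventListener("click", () => {
                                                                                                            if (confirm("Delete this record permanently? This cannot be undone.")) {
                                                                                                              deleteRecord(); // assumed application function
                                                                                                            }
                                                                                                          });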

                                                                                                    1. 32

                                                                                                      Wait until you meet enterprise software!

                                                                                                      1. 28

                                                                                                        In the relevant bug tracker discussion someone says this broke ERP software with hundreds of thousands of users.

                                                                                                        1. 18

                                                                                                          Half the argument of the blog post though is that “toy tutorials” are important and valuable in a way that isn’t captured by how often the feature is used in production. And most of the rest is about how actually, it’s valuable that code from 2005 still works. I think you are missing the forest for the trees.

                                                                                                          1. 8

                                                                                                            The article considerably overstates what the Chrome team is actually intending to ship: it’s disabling cross-origin alert inside iframes, not alert entirely. Most of the article seems to be an extremely uncharitable reading of Dominic hoping that “one day”, “in the far future”, “maybe” (literally these are direct quotes!) they can remove blocking APIs like alert — not that they have any plans to do so now or any time soon.

                                                                                                            I don’t think the GP is missing the forest for the trees; I think the author is making a mountain out of a molehill.

                                                                                                            1. 7

                                                                                                              Few things:

                                                                                                              • “Some day” tends to come a lot sooner than we’d expect.
                                                                                                              • This is the sort of thing people use to justify further encroachment down the line (“Well we already disable it for iframe stuff…”).
                                                                                                              • This directly reduces the utility of using iframes–and some folks still use those on occasion, and it is exceedingly tacky to unilaterally decide to break their workflows.
                                                                                                            2. 1

                                                                                                              You can still do your little toy tutorials with alert/confirm/prompt, just don’t do them in an iframe?

                                                                                                              1. 2

                                                                                                                If you’re making a codepad-like site, you kind of have to put all the user-submitted JS in an iframe so it’s not on your own domain.

                                                                                                                1. 1

                                                                                                                  If you’re making a codepad-like site, you can also inject a polyfill for alert() etc in the user-controlled iframe to keep things working. Until you’re done locking down the codepad for arbitrary user scripts to run without problems, this is probably one of the smaller tasks.
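                                                                                                                  Something along these lines, presumably; a rough sketch, with the caveat that it cannot reproduce alert()’s blocking semantics, it only keeps existing calls working visually:

                                                                                                                      // injected into the sandboxed iframe: show the message in an overlay
                                                                                                                      // instead of a native dialog; note this does NOT block like real alert()
                                                                                                                      window.alert = function (message) {
                                                                                                                        const overlay = document.createElement("div");
                                                                                                                        overlay.textContent = String(message);
                                                                                                                        overlay.style.cssText =
                                                                                                                          "position:fixed;top:10%;left:50%;transform:translateX(-50%);" +
                                                                                                                          "padding:1em;background:#fff;border:1px solid #888;z-index:9999;";
                                                                                                                        const ok = document.createElement("button");
                                                                                                                        ok.textContent = "OK";
                                                                                                                        ok.onclick = () => overlay.remove();
                                                                                                                        overlay.appendChild(ok);
                                                                                                                        document.body.appendChild(overlay);
                                                                                                                      };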

                                                                                                                  1. 2

                                                                                                                    Can you make the polyfill block?

                                                                                                            3. 11

                                                                                                              I use alert() and confirm(). It’s easy, simple, works, and doesn’t even look so bad since Firefox 89. I don’t think my code is “horrendous”; it’s just the obvious solution without throwing a bunch of JS at it.

                                                                                                              I agree this blog post isn’t especially great though.

                                                                                                              1. 1

                                                                                                                Do you use it in a cross-origin iframe?

                                                                                                                1. 3

                                                                                                                  No, but your comment made no mention of that:

                                                                                                                  I haven’t even seen alert/confirm/prompt outside of toy tutorials or horrendous code since roughly 2005.

                                                                                                                  1. 1

                                                                                                                    Sorry, I read your reply in the context of the blog post (i.e. your code is going to break).

                                                                                                                    My line about horrendous code is hyperbolic, but the fact is that alert/confirm/prompt don’t offer the customizability needed for a consistent, well-made UX. Maybe it’s not a problem for certain audiences (internal tools for devs usually end up having them), but most customer-facing solutions demand more from the experience.

                                                                                                                    I’m not saying they should remove them right now, but a day in the future where they go away (presumably deprecated due to a better option) is not something we should be dreading. Who knows if that day will even come.

                                                                                                              2. 8

                                                                                                                At $JOB we have used prompt for some simple scenarios where it solved the problem of getting user input in some sync-y code, with no fuss.

                                                                                                                We integrate with Salesforce through an iframe. This change forced us to redo a whole tiny thing to get stuff working again (using a much heavier modal component instead of, well, a call to prompt). It wasn’t the end of the world, but it was annoying and a real unforced error.

                                                                                                                We would love it if browsers offered richer input in a clean way (modals have been in basically every native GUI since the beginning of time!). I’m sure people would be way less frustrated if Chrome offered easy alternatives that don’t rely (for example) on z-index overlays (which can break for a billion reasons) or stuff like that.

                                                                                                                Sometimes you just want input from somebody in a prompt-y way.
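                                                                                                                i.e. the kind of thing we had; names invented here, the real code is Salesforce-specific:

                                                                                                                    // synchronous, zero-dependency input: execution pauses until dismissed
                                                                                                                    function renameSelected(currentName) {
                                                                                                                      const name = prompt("New name:", currentName);
                                                                                                                      if (name !== null && name.trim() !== "") {
                                                                                                                        renameItem(name.trim()); // assumed application function
                                                                                                                      }
                                                                                                                    }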

                                                                                                                1. 5

                                                                                                                  You haven’t seen a lot of business to business software then.

                                                                                                                  1. 1

                                                                                                                    That is still no reason to remove a perfectly functional feature that has worked reliably for decades and requires orders of magnitude fewer resources than the alternative. Both human and computational resources.

                                                                                                                    I use it all the time on simple UIs I write for my own usage or for restricted groups of users.

                                                                                                                    The amount of resources that could be saved if we favoured well-known, tried-and-true technology rather than the new, aesthetically shiny thing is astonishing.

                                                                                                                    1. 2

                                                                                                                      It’s not about “shiny things” but about user experience. Linux has suffered for decades due to the approach you’re talking about.

                                                                                                                      1. 2

                                                                                                                        No, Linux has suffered precisely because it does not offer a native GUI, or UI at all, forcing everyone to reinvent basic functionality like on the web.

                                                                                                                  1. 2

                                                                                                                    I’m very happy to see this discussion popping up. We are so far gone on resource wastefulness, beyond any reasonable limit. I see Kubernetes clusters with dozens of nodes and combined terabytes of RAM running software that does essentially the same as a PHP script on a shared hosting account 15 years ago.

                                                                                                                    I look at Chrome, which has essentially the same core functionality it had at its inception but requires an order of magnitude more resources to be usable.

                                                                                                                    We have to stop this. If not because it’s nonsense, then perhaps to spare the planet a gigantic abuse of resources.

                                                                                                                    Lobsters: what can we do? How do we fight this tendency? Should we start a club/culture/meme/religion/whatever for resource-friendly software? How/where do we start?

                                                                                                                    1. 2

                                                                                                                      Programmer time is more expensive than CPU cycles. Whining about it isn’t going to change anything, and spending more of the expensive thing to buy the cheap thing is silly.

                                                                                                                      1. 15

                                                                                                                        The article makes a good counterpoint:

                                                                                                                        People migrate to faster programs because faster programs allow users to do more. Look at examples from the past: the original Python-based BitTorrent client was quickly overtaken by the much faster uTorrent; Subversion lost its status as the premier VCS to Git in large part because every operation was so much faster in Git; the improved grep utility, ack, is written in Perl and waning in popularity to the faster Silver Searcher and ripgrep; the Electron-based editor Atom has been all but replaced by VSCode, also Electron-based, but which is faster; Chrome became the king of browsers largely because it was much faster than Firefox and Internet Explorer. The fastest option eventually wins. Would your project survive if a competitor came along and was ten times faster?

                                                                                                                        1. 7

                                                                                                                          That fragment is not great, in my opinion. The svn-to-git change is about the whole architecture, not about implementation speed. A lot of the speedup in that case comes from not going to the server for information. Early git was mainly shell and Perl too, so it doesn’t quite mesh with the Python example before. Calling out Python for BitTorrent is not a great example either: it’s an IO-heavy app rather than a processing-heavy one.

                                                                                                                          VSCode has way more improvements over Atom, and more available man-hours. If it were about performance, Sublime or some other graphical editor would have taken over from them.

                                                                                                                          I get the idea and I see what the author is aiming for, but those examples don’t support the post.

                                                                                                                          1. 3

                                                                                                                            I was an enthusiastic user of BitTorrent when it was released. uTorrent was absolutely snappier and lighter than other clients, specifically the official Python GUI. It blew the competition out of the water because it was superior in its pragmatism. Perhaps Python vs. C is an oversimplification; the point would still hold even in the presence of two programs written in the same language.

                                                                                                                            The same applies to git. It feels snappy and reliable. Subversion and CVS, besides being slow and clunky, would gift you a corrupted repo every other Friday afternoon. Git pulverised this nonsense brutally quickly.

                                                                                                                            The point is about higher-quality software built with better focus, making reasonable use of resources, resulting in a superior experience for the user. Not so much about one language being better than others.

                                                                                                                            1. 2

                                                                                                                              BitTorrent might seem IO-heavy these days (ironically, because it has been optimised to death), but you are revising history if you think it’s not CPU/memory-intensive, and doing it in Python would be crushingly slow.

                                                                                                                              The point at the end is a good one though, you must agree:

                                                                                                                              Would your project survive if a competitor came along and was ten times faster?

                                                                                                                              1. 1

                                                                                                                                I was talking about the actual process, not the specific implementation. You can make BitTorrent CPU-bound in any language with an inefficient implementation. But the problem itself is IO-bound, so any runtime should be able to get there (modulo the runtime overhead).

                                                                                                                            2. 2

                                                                                                                              This paragraph popped out at me as historically biased and lacking in citations or evidence. With a bit more context, the examples are hollow:

                                                                                                                              • The fastest torrent clients are built on libtorrent (the one powering rtorrent), but rtorrent is not a very common tool
                                                                                                                              • Fossil is faster than git
                                                                                                                              • grep itself is more popular than any of its newer competitors; it’s the only one shipped as a standard utility
                                                                                                                              • Atom? VSCode? vim and emacs are still quite popular! Moreover, the neovim fork is not more popular than classic vim, despite speed improvements
                                                                                                                              • There was a period of time when WebKit was fastest, and browsers like uzbl were faster than either Chrome or Firefox at rendering, but never got popular

                                                                                                                              I understand the author’s feelings, but they failed to substantiate their argument at this spot.

                                                                                                                              1. 2

                                                                                                                                This is true, but most programming is done for other employees, either of your company or another if you’re in commercial business software. These employees can’t shop around or (in most cases) switch, and your application only needs to be significantly better than whatever they’re doing now, in the eyes of the person writing the cheques.

                                                                                                                                I don’t like it, but I can’t see it changing much until all our tools and processes get shaken up.

                                                                                                                              2. 11

                                                                                                                                But we shouldn’t ignore the users’ time. If the web app they use all day long takes 2-3 seconds to load every page, that piles up quickly: at, say, 200 page loads a day, 3 seconds each is 10 minutes per user, every day.

                                                                                                                                1. 7

                                                                                                                                  While this is obviously a nuanced issue, personally I think this is the key insight in all of it, but the whole “optimise for developer happiness/productivity, RAM is cheap, buy more RAM (etc)” line totally ignores it. Let alone the “rockstar developer” spiel. Serving users’ purposes is what software is for. A very large number of developers lose track of this because of an understandable focus on their own frustrations, and tools that make them more productive are obviously valuable, as well as meaning they have a less shitty time, which is meaningful and valuable. But building a development ideology around that doesn’t make the users’ interests go away. It just makes software worse for users.

                                                                                                                                  1. 7

                                                                                                                                    Occasionally I ask end-users in stores, doctor’s offices, etc what they think of the software they’re using, and 99% of the time they say “it’s too slow and crashes too much.”

                                                                                                                                    1. 2

                                                                                                                                      Yes, and they’re right to do so. But spending more programming time using our current toolset is unlikely to change that, as the pressures that selected for features and delivery time over artefact quality haven’t gone anywhere. We need to fix our tools.

                                                                                                                                    2. 5

                                                                                                                                      In an early draft, I cut out a paragraph about what I am starting to call “trickle-down devenomics”; this idea that if we optimize for the developers, users will have better software. Just like trickle-down economics, it’s just snake oil.

                                                                                                                                      1. 1

                                                                                                                                        Alternately, you could make it not political.

                                                                                                                                        Developers use tools and see beauty differently from normal people. Musicians see music differently, architects see buildings differently, and interior designers see rooms differently. That’s OK, but it means you need software people to talk to non-software people to figure out what they actually need.

                                                                                                                                  2. 3

                                                                                                                                    Removed because I forgot to reload and multiple others gave the same argument I did in the meantime already.

                                                                                                                                    1. 3

                                                                                                                                      I don’t buy this argument. In some (many?) cases, sure. But once you’re operating at any reasonable scale you’re spending a lot of money on compute resources. At that stage even a modest performance increase can save a lot of money. But if you closed the door on those improvements at the beginning by not thinking about performance at all, then you’re kinda out of luck.

                                                                                                                                      Not to mention the environmental cost of excessive computing resources.

                                                                                                                                      It’s not fair to characterize the author as “whining about” performance issues. They made a reasonable and nuanced argument.

                                                                                                                                      1. 3

                                                                                                                                        Yes. This is true so long as you are the only option. Once there is a faster option, the faster option wins.

                                                                                                                                        Why?

Not for victories in CPU time. The only thing more scarce and expensive than programmer time is… user time. Minimize user time, pin CPU usage at 100%, and nobody will care until it causes user discomfort or loss of user time elsewhere.

                                                                                                                                        Companies with slow intranets cause employees to become annoyed, and cause people to leave at some rate greater than zero.

A server costs a few thousand dollars on the high end. A small program costs a few tens of thousands to build, maintain, and operate. Over its life, using that program can cost hundreds of thousands in management, engineering, sales, marketing, HR, quality, training, and compliance salaries.

                                                                                                                                      1. 6

No, thank you. After watching three companies shoot themselves in the foot by buying into this kind of snake oil and chasing the mythical man-month in Kubernetes land, my experience has been the opposite of what this article describes.

The claims in the link are not backed by evidence. Spinning up as many new environments as you want, apps being portable, etc. are pipe dreams which absolutely do not materialize in practice. And writing dozens or even hundreds of repetitive, deeply nested YAML files isn’t a small effort by any measure.

What you end up with is an amalgamation of certificates, YAML files that need to be tracked and that are not handled in a stateless manner, references to external services, service dependencies, opacity, complicated logging setups, and a whole lot of nonsense that will, for sure, keep your job security rock solid.

Kubernetes might make sense for massive deployments (thousands of physical machines); otherwise I wouldn’t touch it with a ten-foot pole. I am not 100% sure these articles aren’t written with the intent of making the competition fall into this enormous pitfall.

                                                                                                                                        1. 1

Keep it up. I’ve been using this for many years and it’s great.

                                                                                                                                          1. 2

I have used this for many years and it is one of my favourite pieces of software. So useful. Removing the clutter in a filesystem becomes a breeze.
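
For anyone who hasn’t tried it, a typical session is a single command plus interactive navigation (the path is just an example):

    # scan a directory and browse its disk usage interactively;
    # -x stays on one filesystem
    ncdu -x /home

Inside the browser, entries are sorted by size, and files or directories can be deleted in place with the d key.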

                                                                                                                                            From ncdu’s website:

                                                                                                                                            Project status: Maintenance mode: I consider ncdu to be mostly complete. I’m still here to keep it alive and to fix issues as they come along, but I don’t actively work on adding new features.

This is how it’s done. We need more such pragmatism. Write software with a well-defined scope and purpose, implement it cleanly and well, and use it.

                                                                                                                                            1. 6

What percentage of users use another default search engine? I would guess less than 0.1%. Such things, while susceptible to valid criticism, barely matter at this point.

The big advantage of Google is being in a position of advantage and power: being able to essentially force website owners to follow its rules, under the threat of removing them from its index.

Google started by crawling webpages. If someone did to Google what Google did to the web, not only would they be blocked immediately, they would potentially face legal consequences.

In essence, building an index of the web was up for grabs, and Google is legally allowed to prevent other players from crawling Google.

                                                                                                                                              1. 17

                                                                                                                                                It’s around 7%, but it’s also 100% of users of 100% of Google’s competitors.

Google has a track record of upholding its dominance through “oops, but who cares” behaviour: its sites serving slower or inferior “fallback” code to other browser engines, buggy or discriminatory user-agent sniffing, browser “upgrade” prompts, AMP that happens to have a better implementation for Google’s ads, etc.

                                                                                                                                                1. 1

                                                                                                                                                  I wonder if this preferential preconnect behavior also benefits AMP. Don’t forget that the idea of “AMP’s performance comes from the restrictions it imposes” is mostly a lie: the real performance gain is from AMP pages being served from Google’s CDN and not the real origin.

                                                                                                                                              1. 3

                                                                                                                                                Does anyone still parse access logs? Seems like a good option for a small site with limited aims. It adds no page weight, can’t be blocked, and doesn’t even require much back-end infrastructure.
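
At its most minimal it’s a one-liner; a rough sketch, assuming the standard nginx/Apache “combined” log format, where the request path is the seventh whitespace-separated field:

    # top requested paths by hit count (log path is just an example)
    awk '{print $7}' /var/log/nginx/access.log | sort | uniq -c | sort -rn | head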

                                                                                                                                                1. 2

Yes, but there are a couple of niggles that I may or may not end up building something to try and ‘solve’:

                                                                                                                                                  (a) all the log analysers pretty much assume you have just one web server.

(b) they’re either quite full-featured but look like they were designed in the 90s, or they look nice but have some glaring gaps in functionality.

I’ve toyed with the idea (only to the point of some PoC stuff to test it out so far) of a “simpler” analyser that would work for the use cases I’ve seen: a really simplistic ‘parsing’ of the log entries (probably just shell, for the initial version), then relying on Redis’ increment functionality to bump up counters for the various metrics, using a few date-related keys; see the sketch below.
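
A minimal sketch of what I mean; the key scheme, log path, and field position are all placeholder assumptions:

    # follow the access log and bump a per-day, per-path counter for every hit
    tail -F /var/log/nginx/access.log | while read -r line; do
      path=$(printf '%s\n' "$line" | awk '{print $7}')
      day=$(date +%Y-%m-%d)
      redis-cli INCR "hits:$day:$path" > /dev/null
    done

Invoking redis-cli once per hit is obviously slow; a real version would batch or pipeline, but the counter-per-metric shape is the point.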

                                                                                                                                                  1. 1

Let me know if you ever get around to building such a thing. I would be happy to test it. All I really want is a graph of visitors over time broken down by page. I had been using Google Analytics, which was overkill, and I was feeling guilty about supplying traffic data to Google. Now I just run less on the access file occasionally, which is nearly enough at this traffic volume (can I call less an MVP for web traffic analysis?).
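
To illustrate how little I actually need, here is a rough sketch that gets most of the way there from the raw log, again assuming the standard combined format:

    # hits per day per page: crude, but it is "traffic over time broken down by page"
    awk '{split($4, t, ":"); day = substr(t[1], 2); print day, $7}' access.log \
      | sort | uniq -c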

                                                                                                                                                    1. 3

                                                                                                                                                      Thanks for the offer. I’ll be sure to post something here if I get something working.

You may want to also look at GoAccess (https://goaccess.io); it does static log analysis and might well be enough for what you need.

The issue for us has been that (a) it’s a PITA to make it work across multiple servers and (b) it has no built-in ability to filter to a given date range. On the CLI it’s possible (although not necessarily simple) to just filter the input for an ad-hoc run, but from the ‘HTML’ frontend (i.e. what business people want to use) it’s just not possible.

                                                                                                                                                      1. 2

                                                                                                                                                        To gather logs from multiple web servers, I am using:

# the four web servers; zcat -f also passes through uncompressed logs
for h in web01 web02 web03 web04; do
  # stream each server's logs over ssh, dropping feed requests
  ssh $h zcat -f /var/log/nginx/vincent.bernat.ch.log\* | grep -Fv atom.xml
done | goaccess --output=goaccess.html ...
                                                                                                                                                        
                                                                                                                                                        1. 1

                                                                                                                                                          Thanks for the suggestion. I’ll be sure to check it out.

                                                                                                                                                          1. 1

I love goaccess and use it all the time. I try to keep things on one server, but I have used it with multi-server setups.

Could you be specific about what is a PITA when handling multi-server setups? How is it any more complicated (or simpler) than with any other tool? You always need to aggregate the data, whatever solution you use. What’s specific about goaccess?

                                                                                                                                                            1. 1

So the problem is that we want the analytics frontend to be served from multiple servers too, and we want it to work in real-time HTML mode.

As much as analytics isn’t really business critical, the goal here is that nothing we control in prod is a SPOF.

So the kind-of-working setup now relies on rsyslog taking varnishncsa access logs, sending/receiving them to/from peer syslog servers, and also writing them to disk locally, where goaccess consumes them. This isn’t what I’d call a robust setup.

The plan in my head/on some notes/kind of in a PoC is to have the storage layer (Redis is my idea for now; it might end up being something else, or adaptable to a couple of options) be the point of aggregation. Each log-producing service (in our case Varnish, but elsewhere it might be Nginx or Apache or whatever else produces access logs) runs something locally that does really basic handling of each log entry and then just increments a series of counters in the shared storage layer, based on the metrics from that entry.
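
The read side of that scheme would be equally small; a hypothetical sketch, using the same placeholder hits:DAY:PATH keys as in my earlier sketch:

    # per-day totals for one page, straight out of Redis
    redis-cli --scan --pattern 'hits:*:/index.html' | sort | while read -r key; do
      printf '%s %s\n' "$key" "$(redis-cli GET "$key")"
    done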

                                                                                                                                                              1. 1

Rsyslog, fluentd, or just watch the logs with tail or what have you and append them to a remote server via a socket; the simplest form is sketched below.
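
A one-liner version (the hostname and port are made up):

    # ship new log lines to a remote collector over a plain TCP socket
    tail -F /var/log/nginx/access.log | nc logs.example.com 5140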

                                                                                                                                                                I don’t really see the use case of serving the UI from several servers. They are behind a proxy anyways.

Personally I would just fetch the files via SSH, like @vbernat suggests.

                                                                                                                                                                1. 1

They are not behind a single proxy; that’s the point.

                                                                                                                                                                  Copying files via ssh means you lose any capability for real-time logs too.

                                                                                                                                                                2. 1

GoatCounter supports log parsing as well; the way it works is a bit different from e.g. goaccess: you still have your main goatcounter instance running as usual, and you run goatcounter import [..], which parses the logfiles and uses the API to send the data to goatcounter. The upshot of this is that it should solve problems like this one, and it’s generally a bit more flexible.

                                                                                                                                                                  (disclaimer: I am the GoatCounter author, not trying to advertise it or anything, just seems like a useful thing to mention here)

                                                                                                                                                                  1. 1

That’s interesting, thanks.

                                                                                                                                                                    1. 1

Hey, I don’t want to turn this into a GoatCounter FAQ, but there’s no way to have the computed metrics be shared somehow, is there (i.e. so the analytics aren’t reliant on a single machine being up to record/view)?

                                                                                                                                                                      1. 1

                                                                                                                                                                        I would solve that by running two instances with the same PostgreSQL database.

Other than that, you can send the data to two instances, and you can export/import as CSV; but in general there isn’t really a built-in failover solution. I think using the same (possibly redundant) PostgreSQL database should work fairly well, but it’s not a setup I’ve tried, so there may be some issues I’m not thinking of at the moment (though if there are, I expect them to be solvable without too many problems).

                                                                                                                                                                        1. 1

                                                                                                                                                                          The shared DB solution sounds most like what I had in mind, thanks - I wasn’t even aware it supports Postgres. I guess it’s a deliberate decision to leave the self-hosting info on GH and have the main site be more about the hosted version?

                                                                                                                                                                          1. 1

                                                                                                                                                                            I guess it’s a deliberate decision to leave the self-hosting info on GH and have the main site be more about the hosted version?

                                                                                                                                                                            Yeah, that’s pretty much the general idea; a lot of people looking for the self-hosted option aren’t really interested in details of the SaaS stuff, and vice versa. Maybe I should make that a bit clearer actually 🤔