Threads for enn

  1. 5

    I think, as the intro implies, this can be extended to machines and tools, and maybe even further.

    I think in the context of computers in particular there’s a bit of a political problem where we force people to use them, sometimes by law, sometimes through society. They have to use computers, smartphones, and even certain apps.

    At the same time we see a rise in scams, and are surprised when they catch people who might not even need or want these devices and only have them because they are forced to fill out some form online.

    Some decades ago it was relatively easy to get by without almost any particular tool one can think of. You might be considered odd for it, but it still allowed you to make use of your rights, etc.

    Today you need apps to log in to your bank, websites to do your taxes, sometimes even the web to apply for elderly homes. And smartphones are pretty complex: they force you, for example, to have or create an email address, require passwords, etc. You need to know how to use software, understand what the internet is, have some concept of pop-ups, online ads, spam, and updates, understand that there is no other person sitting on the other end right now, and so on.

    I think a lot of ruthlessness comes from this. Even if you know about all of the above, you end up like in Kafka’s The Trial: even if you know what things mean, the processes behind the scenes will, for the vast majority of use cases, remain completely opaque to you.

    In a non-automated/digitalized world it is easy to ask quick questions, and people can ask other people to handle exceptions. In the digital world one has to hope the developer thought of the exception and handled it accordingly. If you are lucky there’s a support hotline, but these seem to be going away, especially at bigger, and thus often more important, companies.

    I see tools more on the morally neutral side, but I don’t think that’s the issue really. I don’t think computers are oppressive, but there’s an unintentional direction we are moving toward, where things are forced upon people, often with the thinking that it’s a good thing when that is at least debatable.

    As a side note, there are certainly cases where things were done in the name of digitalization, progress, and efficiency, and things just became harder, slower, less cost effective, and less secure, and required more real people to be involved.

    Of course these are the bad examples, but the adjective here is oppressive, after all. Usually, even in (working/stable) oppressive societies, things work for most people most of the time. They start to shift when things don’t work for many, or when there’s war. Only the ones not fitting in tend to have problems, and while I would have titled it differently, I think that is true for how computers are used today.

    1. 13

      In a non-automated/digitalized world it is easy to ask quick questions, and people can ask other people to handle exceptions.

      In the land of unicorns and rainbows? ;)

      From my experience, people in positions of “HTML form actions” absolutely aren’t inclined to answer any questions and handle exceptions unless they have some real retribution to fear. Worse yet, it’s rational behavior for them: they almost certainly will be reprimanded if they break the intended logic, so it’s much safer for them to follow its letter.

      Just this past month I had to file a certain application for a somewhat uncommon case. The humans responsible for handling these rejected it as invalid because my scenario wasn’t in their “cache” of common cases, and they used the default “contact our parent organization” response instead of trying to handle it, and not even in a polite manner. I contacted the parent organization and, luckily, people there were willing to handle it and told me that my application had been valid all along and should have been accepted, and that I should file it again.

      I suppose the application form handlers received quite a “motivational speech” from the higher-ups, because they were much more polite and accepted it without questions, but it still wasted me a lot of time traveling to a different city to file it and standing in lines.

      It may be one of the more egregious examples in my practice, but it’s far from unique. I very much prefer interacting with machines because at least I can communicate with them remotely. ;)

      1. 5

        Your anecdote just demonstrates the author’s point; you had to escalate to a more-responsible human, but you successfully did so and they were able to accommodate the uncommon circumstances, even though those circumstances were not anticipated by the people who designed the process. When was the last time you pulled that off with an HTML form?

        1. 6

          They were anticipated by the people who designed the process. It’s just that their subordinates did a sloppy job executing the logic written for them by the higher-ups. If the higher-ups programmed a machine to do that, it wouldn’t fail.

          And I got very lucky with the sensible higher-ups. It could have been much worse: in that particular case it was obvious who the higher-ups were and they had publicly-accessible contact information. In many other cases you may never even find out who they are and how to reach them.

          1. 1

            Every time the form allows freedom (which forms are admittedly rarely used for, but could be), e.g. https://mro.name/2021/ocaml-stickers

            1. 2

              I love that, and I wish more of the web worked that way, but it’s worth pointing out that the only reason it can work is because ultimately the input I put into that form gets interpreted by a human at the post office. It would not be possible to create a form for inputting an email address which would be as resilient to errors or omissions.

              1. 1

                yes, and a lot of the information filled into the form doesn’t make sense to me – I just copy it onto the envelope. It makes sense in pieces as it is routed along: first country, then ZIP, then street, then name. That’s flexibility! Subsidiarity at work.

        2. 2

          Some decades ago, here in the US, we were deep in the midst of making a large proportion of physical social institutions at best undignified and at worst somewhere between unsafe and impossible to access independently, without owning and operating a dangerous, expensive motor vehicle: something unavailable to a significant proportion of the population, and something that ruthlessly grinds tens of thousands of people a year into meat just here in the US.

        1. 5

          I think this article is technically correct but in this particular case it might just not be quite the best kind of correct :-).

          There are always going to be people who romanticize “the old way” but painting all criticism of Flatpak & friends as rose-tinted glasses is one of the reasons why Flatpak is six years old and still weird – this story is, ironically enough, on the frontpage along with this article.

          (Disclaimer: I like Flathub, I think it’s a good idea, and I use it). But a half-realized idea of a better system is usually worse than a fully-realized idea of a worse system. Plenty of things break when installing stuff from Flathub and many applications there are minimally sandboxed, to the point where you might as well just install the bloody .deb if it exists. Filing all the breakage under “yeah users don’t need that” (font rendering bugs, themes etc.) or “well the next release of this particular Wayland compositor is going to support that” is the same kind of obliviousness to reality as “but Flatpak breaks the Unix philosophy”, just of a more optimistic nature.

          This leads to a curious state of affairs that’s unsatisfying for everyone.

          It’s certainly in the nature of FOSS software that things don’t happen overnight and software evolves in the open. But if you want to appeal to an audience (technical and non-technical) that’s wider than “people who contribute to FOSS desktop projects”, shipping half-finished implementations is not the way, whether it’s in the nature of FOSS or not. You can say that Linux is not a product but that won’t change the (entirely reasonable) expectation of this wider audience that it should at least work.

          Meanwhile, elitism and gatekeeping are an unpleasant aspect of romanticizing the old ways but, elitism and gatekeeping aside, I think it’s important to be realistic and acknowledge that the old way works – as in, it allows you to install, manage, update, and uninstall applications which work as intended, to a degree that Flatpak is still aspiring to. While some people may be yearning for the days when being a package maintainer granted you demigod status in cyberspace, I think it’s more realistic to assume that most people just aren’t willing to spend the extra troubleshooting hours on a system that doesn’t always deliver even the security guarantees it’s meant to deliver, and sometimes results in a functional downgrade, too.

          Edit: oh, while we’re on the topic of rose-tinted glasses, it’s also worth keeping in mind that the goalposts have shifted quite significantly since then, too. Lots of people today point out that hey, back in 2000 you’d have had to fiddle with XF86Config and maybe fry your monitor, so why are you complaining about far softer breakage today? Well, sure, but the alternative back in 2000, especially if you were on a CS student’s budget, was Windows Me (I’m being as charitable as “maybe fry your monitor” here; realistically it was probably Windows 98). You maybe fried your monitor, but in return you got things many computer users couldn’t even dream of, unless they were swimming in money to shell out on Windows 2000, Visual Studio and so on. The standard to beat is no longer Windows Me.

          1. 4

            and acknowledge that the old way works

            Especially true when you’re not interested in the desktop but in servers. I’m very happy that I know I can just apt install php apache and it’ll give me a working bundle. The same for everything built on top of this. Debian also specifies a release cycle for this. I won’t have to worry that my php 7.4 is completely outdated next month just because someone thought moving ahead to php 8 is the new flashy thing. No, it’ll certainly keep working for a long time on php 7.4, as that’s the current debian stable release. And that’s perfectly fine; I don’t have the time to upgrade all the time just because someone thought it would be neat to use one feature of php8. Those “gatekeepers” also ship most of these services with very sane defaults (config, location of configs, systemd units,…).

            Yeah, that probably won’t work for the new release of $desktopapp, but it works flawlessly for the server environment.

            No, Docker is not an answer. It’s a completely different way of operating stuff.

            1. 2

              But a half-realized idea of a better system is usually worse than a fully-realized idea of a worse system.

              Oh, wow, I could not disagree more strongly with this. Give me something that is functionally complete over something that is broken and half-baked but has some kind of vague conceptual superiority any day.

              1. 4

                I’ve read your comment 3 times now and I’m pretty sure you actually strongly agree with the comment you’re replying to.

                1. 3

                  Damn it, you’re right.

            1. 4

              I remember trying Clojure a bit, and being super interested in a lot of the ideas of the language.

              There are the universal quibbles about syntax (and honestly I do kind of agree that f(x, y) and (f x y) are not really much different, and I like the removal of commas). But trying to write some non-trivial programs in Clojure/ClojureScript made me realize that my quibble with Lisps and some functional languages is name bindings.

              The fact that name bindings require indentation really messes with readability. I understand the sort of… theoretical underpinning of this, and some people will argue that it’s better, but when you’re working with a relatively iterative process, being able to reserve indentation for loops and other blocks (instead of “OK from this point forward this value is named foo”) is nice!

              It feels silly but I think it’s important, because people already are pretty lazy about giving things good names, so any added friction is going to make written code harder to read.

              (Clojure-specific whine: something about all the Clojure tooling feels super brittle. Lots of inscrutable errors for beginners that could probably be massaged into something nicer. I of course hit these and also didn’t fix them, though…)

              EDIT: OTOH Clojure-specific stuff for data types is very very nice. Really love the readability improvements from there

              1. 5

                Interesting to hear this–indentation to indicate binding scope is one of the things I really miss when I’m using a non-Lisp. I feel like the mental overhead of trying to figure out where something is bound and where it’s not is much higher.

                (I strongly agree on the state of Clojure tooling.)

                1. 1

                  I think that racket solves this:

                  (define (f x)
                      (define y (* 10 x))
                      (printf "~a ~a\n" y x))
                  (f 42)
                  
                1. 96

                  Static or dynamic refers to whether the webserver serves requests by reading a static file off disk or running some dynamic code (whether in process or not). While the word “dynamic” can apply broadly to any change, reusing a term with a well-understood definition in this context to refer to unrelated changes like SSL cert renewal and HTTP headers is really confusing. Late in the article it refers to “the filesystem API used to host static files” so it’s clear the author knows the definition. It’s unfortunate that the article is written in this way; it’s self-fulfilling that misusing a clear and well-established term just results in confusion. Maybe a better metaphor for the points it’s trying to make would be Stewart Brand’s concept of pace layering.

                  1. 12

                    Yeah I agree, I think the article is generally good, but the title is misleading.

                    My summary is “We should try to make dynamic sites as easy to maintain as static sites”, using sqlite, nginx, whatever.

                    The distinction obviously exists – in fact the article heavily relies on the distinction to make its point.

                    I agree with the idea of moving them closer together (who wouldn’t want to make dynamic sites easier to maintain?) But I think there will be a difference no matter what.

                    Mainly that’s because the sandboxing problem (which consists of namespace isolation and resource isolation) is hard on any kernel and on any hardware. When you have a static site, you don’t need to solve that problem at all.

                    We will get better at solving that problem, but it will always be there. There are hardware issues like Spectre and Meltdown (which required patches to kernels and compilers!), but that’s arguably not even the hardest problem.


                    I also think recognizing this distinction will lead to more robust architectures. Similar to how progressive enhancement says that your website should still work without JS, your website’s static part should still work if the dynamic parts are broken (the app servers are down). That’s just good engineering.

                    1. 3

                      Funnily enough, sqlite + nginx is what I use for most of my smaller dynamic websites, usually with a server process as well.

                      EDIT: Reading further, yeah, almost all of my side projects use that setup, outside of some Phoenix stuff, and I’ve definitely noticed those projects requiring not very much maintenance at all.

                      1. 7

                        What’s also a bit funny is that sqlite and nginx are both extremely old school, state machine-heavy, plain C code.

                        Yet we reach for them when we want something reliable. I recommend everyone look at the source code for both projects.

                        This reminds me of these 2 old articles:

                        https://tratt.net/laurie/blog/entries/how_can_c_programs_be_so_reliable.html

                        http://damienkatz.net/2013/01/the_unreasonable_effectiveness_of_c.html

                        (And I am not saying this is good; I certainly wouldn’t and can’t write such C code. It’s just funny)

                        1. 1

                          SQLite, at least, partially compensates via extensive testing, and a slow/considered pace of work (or so I understand). It’s the antithesis of many web-apps in that regard. And the authors come from a tradition that allows them to think outside the box much more than many devs, and do things like auto-generate the SQLite C header, rather than trying to maintain it by hand.

                          C and C++ can be used effectively, as demonstrated by nginx, sqlite, curl, ruby, python, tcl, lua and others, but it’s definitely a different headspace, as I understand it from dipping into such things just a bit.

                        2. 6

                          I did not know that nginx can talk to sqlite by itself. Can you share your setup?

                          1. 1

                            For me, I don’t use nginx talking directly to SQLite, I just use it as a reverse proxy. It’s just that it makes it easy to set up a lot of websites behind one server, and using SQLite makes it easy to manage those from a data storage standpoint.

                            1. 1

                              I see, yes that makes sense. I use it that way too.

                      2. 11

                        You articulated that without using expressions that would be inappropriate in the average office setting. I admire you for that.

                        The whole act of reusing a common, well-understood content-related term to instead refer to TLS certs and HTTP headers left me ready to respond with coarse language and possibly question whether OP was trolling.

                        The idea that maybe we’re comparing a fast layer to a slow layer is somewhat appealing, but I don’t think it quite fits either. I think OP is muddling content and presentation. Different presentations require differing levels of maintenance even for the same content. So if I publish a book, I might need to reprint it every few hundred years as natural conditions cause paper to degrade, etc. Whereas if I publish the same content on a website, I might need to alter the computer that hosts that content every X days as browsers’ expectations change.

                        That content doesn’t change. And that’s what we commonly mean when we say “a static website.” The fact that the thing presenting the content needs to change in order to adequately serve the readers doesn’t, in my view, make the content dynamic. And I don’t think it moves it from a slow layer to a faster one either.

                        1. 5

                          This is a reasonable criticism, but I think it’s slightly more complicated than that — a collection of files in a directory isn’t enough to unambiguously know how to correctly serve a static site. For instance, different servers disagree on the file extension → mimetype mapping. So I think you need to accept that you can’t just “read a static file off disk”, in order to serve it, you also need other information, which is encoded in the webserver configuration. But nginx/apache/etc let you do surprisingly dynamic things (changing routing depending on cookies/auth status/etc, for instance). So what parts of the webserver configuration are you allowed to use while still classifying something as “static”?

                          That’s what I’m trying to get at — a directory of files can’t be served as a static site without a configuration system of some sort, and actual http server software in order to serve a static site. But once you’re doing that sort of thing, how do you draw a principled line about what’s “static” and what isn’t?

                          1. 7

                              Putting a finer point on the mimetype thing, since I understand that it could be seen as a purely academic issue: python2 -m SimpleHTTPServer and python3 -m http.server will serve foo.wasm with different mimetypes (application/octet-stream and application/wasm, respectively). Only the wasm bundle served by the python3 version will be executed by browsers, due to security constraints. Thus, what the website does, in a very concrete way, will be dependent not on the files, but on the server software. That sounds like a property of a “dynamic” system to me — why isn’t it?
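
                              To make that concrete, here is a minimal sketch of pinning the mapping yourself on the python3 side, so the behavior no longer depends on the stdlib’s version-dependent defaults (WasmAwareHandler is just my name for it; extensions_map is a documented attribute of SimpleHTTPRequestHandler):

```python
import http.server

# Sketch: pin the .wasm mapping explicitly instead of relying on the
# stdlib's defaults (python2's SimpleHTTPServer had no entry for .wasm
# at all, so it fell back to application/octet-stream).
class WasmAwareHandler(http.server.SimpleHTTPRequestHandler):
    extensions_map = {
        **http.server.SimpleHTTPRequestHandler.extensions_map,
        ".wasm": "application/wasm",
    }

print(WasmAwareHandler.extensions_map[".wasm"])  # application/wasm
```

                              Serving is then the usual one-liner, just with this handler class passed to the server instead of the default one.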

                            You could say, ok, so a static website needs a filesystem to serve from and a mapping of extensions to content types. But there are also other things you need — information about routing, for instance. What domain and port is the content supposed to be served on, and at what path? If you don’t get that correct, links on the site likely won’t work. This is typically configured out of band — on GitHub pages, for instance, this is configured with the name of the repo.

                              So you need an extension to mimetype mapping, and routing information, and a filesystem. But you can have a static JavaScript file that then goes and talks to the server it was served from, and arbitrarily changes its behavior based on the HTTP headers that were returned. So really, if you want a robust definition of what a “static” website is, you need to pretty completely describe the mapping between HTTP requests and HTTP responses. But isn’t “a mapping between HTTP requests and HTTP responses” just a FP sort of way of describing a dynamic webserver?

                            If you disagree with some part of this chain of logic, I’m curious which part.

                            1. 6

                              All the configuration parts and “dynamic” nature of serving files in a static site are about that: serving them, how the file gets on my computer. But at the end of the day, with a static site the content of the document I get is the same as the content on the filesystem on the server. And with a dynamic site it is not. That is the difference. It’s about what is served.

                              All this talk about mime types and routing just confuses things. One can do the same kinds of tricks with a file system and local applications. For instance: changing the extension, setting default applications, etc. can all change the behavior you observe by opening a file. Does that mean my file system is dynamic too? Isn’t everything dynamic if you look at it that way?

                              1. 6

                                It seems very odd to be talking about whether or not WASM gets executed to make a point about static websites.

                                When the average person talks about a static site, they are talking about a document-like site with some HTML, some CSS, maybe some images. Yes, there may be some scripting, but it’s likely to be non-essential to the functionality of the site. For these kinds of sites, in practice MIME types are basically never something you as the site author will have to worry about. Every reasonable server will serve HTML, CSS, etc. with reasonable MIME types.

                                  Sure, you can come up with some contrived example of an application-like site that is reliant on WASM to function and call it a static site. But that is not what the phrase means in common usage, so what point do you think you are proving by doing so?

                                1. 4

                                    You can also misconfigure nginx to send html files as text/plain, if that is your point. python2 predates wasm; it’s simply a wrong default today.

                                  1. 3

                                    What about that is “misconfigured”? It’s just configuration, in some cases you might want all files to be served with a particular content type, regardless of path.

                                    My point is that just having a set of files doesn’t properly encode the information you need to serve that website. That, to me, seems to indicate that defining a static site as one that responds to requests by “reading static files off disk” is at the very least, incomplete.

                                    1. 3

                                      I think this discussion is kind of pointless then.

                                      Ask 10 web developers and I bet 9 would tell you that they assume a “normal” or “randomly picked” non-shit webserver will serve html/png/jpeg/css files with the correct header so that clients can meaningfully interpret them. It’s not really a web standard, but it’s common knowledge/best practice/whatever you wanna call it. I simply think it’s disingenuous to call this proper configuration then, and not “just assuming any webserver that works”.

                                  2. 2

                                    I found your point (about the false division of static and dynamic websites) intuitive, from when you talked about isolation primitives in your post. (Is a webserver which serves a FUSE filesystem static or dynamic, for example? What if that filesystem is archivemount?)

                                    But this point about MIME headers is also quite persuasive and interesting, perhaps more so than the isolation point, you should include it in your post.

                                    Given this WASM mimetype requirement, what happens when you distribute WASM as part of a filesystem trees of HTML files and open it with file://? Is there an exception, or… Is this just relying on the browser’s internal mimetype detection heuristics to be correct?

                                    1. 2

                                      Yeah, I probably should have included it in the post — I might write a follow up post, or add a postscript.

                                      Loading WASM actually doesn’t work from file:// URLs at all! In general, file:// URLs are pretty special and there’s a bunch of stuff that doesn’t work with them. (Similarly, there are a handful of browser features that don’t work on non-https origins). If you’re doing local development with wasm files, you have to use a HTTP server of some sort.

                                      1. 3

                                        Loading WASM actually doesn’t work from file:// URLs at all!

                                        Fascinating! That’s also good support for your post! It disproves the “static means you can distribute it as a tarball and open it and all the content is there” counter-argument.

                                        1. 1

                                          This is for a good reason. Originally HTML pages were self-contained. Images were added, then styles and scripts. Systems were built that assumed pages wouldn’t be able to just request any old file, so when JavaScript gained the ability to load any file, it was limited to only loading files from the same Origin (protocol + hostname + port group) so as not to break the assumptions of existing services. But file:// URLs are special: they’re treated as unique origins so random HTML pages on disk can’t exfiltrate all the data on your drive. People still wanted to load data from other origins, so they figured out JSONP (basically letting 3rd-party servers run arbitrary JS on your site to tell you things, because JS files are special) and then browsers added CORS. CORS allowed servers to send headers to opt in to access from other origins.

                                          WebAssembly isn’t special like scripts are, you have to fetch it yourself and it’s subject to CORS and the same origin policy so loading it from a file:// URL isn’t possible without disabling security restrictions (there are flags for this, using them is a bad idea) but you could inline the WebAssembly file as a data: URL. (You can basically always fetch those.)

                                    2. 2

                                      What domain and port is the content supposed to be served on, and at what path? If you don’t get that correct, links on the site likely won’t work.

                                      These days when getting a subdomain is a non-issue, I can’t see why anyone would want to use absolute URLs inside pages, other than in a few very special cases like sub-sites generated by different tools (e.g. example.com/docs produced by a documentation generator).

                                      I also haven’t seen MIME type mapping become a serious problem in practice. If a client expects JS or WASM, it doesn’t look at the MIME type at all normally because the behavior for it is hardcoded and doesn’t depend on the MIME type reported by the server. Otherwise, for loading non-HTML files, whether the user agent displays it or offers to open it with an external program by default isn’t a big issue.

                                      1. 3

                                        MIME bites you where you least expect it. Especially when serving files to external apps or stuff that understands both xml and json and wants to know which one it got. My last surprise was app manifests for windows click-once updates which have to have their weird content-type which the app expects.

                                        1. 2

                                          If a client expects JS or WASM, it doesn’t look at the MIME type at all normally because the behavior for it is hardcoded and doesn’t depend on the MIME type reported by the server.

                                          This is incorrect. Browsers will not execute WASM from locations that do not have a correct mimetype. This is mandated by the spec: https://www.w3.org/TR/wasm-web-api-1/

                                          You might not have seen this be a problem in practice, but it does exist, and I and many other people have run into it.

                                          1. 1

                                            Thanks for the pointer, I didn’t know that the standard requires clients to reject WASM if the MIME type is not correct.

                                             However, I think the original point still stands. If the standard didn’t require rejecting WASM with different MIME types but some clients did it on their own initiative, then I’d agree that web servers with different but equally acceptable behavior could make or break the website. But if it’s mandated, then a web server that doesn’t have a correct mapping is incorrectly implemented or misconfigured.

                                            Since WASM is relatively new, it’s a more subtle issue of course—some servers/config used to be valid, but no longer are. But they are still expected to conform with the standard now.

                                        2. 1

                                          You don’t need any specific mimetype for WASM, you can load bytes however you want and pass them to WebAssembly.instantiate as an ArrayBuffer.

                                      2. 3

                                        The other replies have explained this particular case in detail, but I think it’s worth isolating the logical fallacy you’re espousing. Suppose we believe that there are two distinct types of X, say PX and QX. But there exist X that are simultaneously PX and QX. Then those existing X are counterexamples, and we should question our assumption that PX and QX were distinct. If PX and QX are only defined in opposition to each other, then we should also question whether P and Q are meaningful.

                                      1. 1

                                        The abilities to dump and restore a running image, and to easily change everything at runtime, are the two biggest things I miss about Common Lisp. It feels barbaric now when I have to restart a JVM to pick up classpath changes, or when I have to wait a minute on startup for everything to get evaluated instead of just resuming a saved image instantly.

                                        1. 3

                                          2021:

                                          • I finally got a long-sought promotion, but it was a bit of a Pyrrhic victory as it came with only a 3% raise–much less than I’d been led to expect.
                                          • Had to move unexpectedly after our lease wasn’t renewed, and rent went up
                                          • my wife had a really rough year of job hunting
                                          • got the freedom to run with a crazy idea at work for a while. I don’t know if we’ll end up shipping it or not, but it’s been a great experience that has developed our team’s capacity and if nothing else it will provide a point of comparison to other potential solutions. This work was a refreshing change from the feature-factory treadmill I’d gotten stuck on for the past year or so.
                                          • Rode my bike a lot, camped a lot.

                                          2022:

                                          • I need to look for a new job. I’ve been procrastinating here because I enjoy my team and my work, but I’ve been getting jerked around on comp for two years now and it’s clear that this place is never going to value me.
                                          • Some exposure to the “modern” front-end development world this year at work (I am primarily a backend programmer) has made me want to play with alternatives like HTMX, Hyperscript, etc. React/Apollo/GraphQL can’t be the best we can do.
                                          • Ride my bike more, camp more.
                                          1. 17

                                            SQLite is my go-to for small to medium size webapps that could reasonably run on a single server. It is zero effort to set up. If you need a higher performance DB, you probably need to scale past a single server anyway, and then you have a whole bunch of other scaling issues, where you need a web cache and other stuff anyway.

                                            1. 5

                                              Reasons not to do that: handling backups somewhere other than the application, good inspection tools while your app runs, performance optimizations (also “shared” memory usage with one big DBMS instance) you can’t do in SQLite, and an easier path to migrating to a multi-machine setup. Lastly, you also get separation of concerns, allowing you to split parts of your app into different permission levels.

                                              1. 5

                                                Regarding backups: what’s wrong with the .backup command?

                                                1. 1

                                                  If I’m reading that right, you’d have to build that into your application. Postgres/MariaDB can be backed up (and restored) without any application interaction. Thus it can also be performed by a specialized backup user (making it also a little bit more secure).

                                                  1. 12

                                                    As far as I know, you can use the sqlite3 CLI tool to run .backup while your application is still running. I think it’s fine if you have multiple readers while one process is writing to the DB.
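For what it’s worth, the same online-backup API the CLI uses is also exposed to application code, e.g. in Python’s stdlib sqlite3 (a sketch; the filenames are invented):

```python
import sqlite3

# Online backup of a live database: the equivalent of the CLI's
# .backup, runnable while other connections keep using the file.
src = sqlite3.connect("app.db")
src.execute("CREATE TABLE IF NOT EXISTS t (x)")
src.execute("INSERT INTO t VALUES (1)")
src.commit()

dst = sqlite3.connect("backup.db")
src.backup(dst)   # copies page by page; src stays usable throughout
dst.close()
src.close()
```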

                                                    1. 5

                                                      Yes, provided you use WAL mode, which you should probably do anyway.

                                                    2. 8

                                                      You could use litestream to stream your SQLite changes to local and offsite backups. Works pretty well.

                                                      1. 7

                                                        Ok, but instead of adding another dependency that papers over the shortcomings of not using a DBMS (and that I’ll also have to care about), I could instead use a DBMS.

                                                        1. 7

                                                          OK, but then you need to administer a DBMS server, with security, performance, testing, and other implications. The point is that there are tradeoffs and that SQLite offers a simple one for many applications.

                                                          1. 3

                                                            Not just that, but what exactly are the problems that make someone need a DBMS server? Sqlite3 is thread safe and for remote replication you can just use something like https://www.symmetricds.org/, right? Even then, you can safely store data up to a couple of terabytes in a single Sqlite3 server, too, and it’s pretty fault tolerant by itself. Am I missing something here?

                                                            1. 2

                                                              What does a “single sqlite3 server” mean in the context of an embedded database?

                                                              How do you run N copies of your application for HA/operational purposes when the database is “glued with only one instance of the application”?

                                                              It’s far from easy in my experience.

                                                              1. 2

                                                                My experience has been that managing Postgres replication is also far from easy (though to be fair, Amazon will now do this for you if you’re willing to pay for it).

                                                              2. 1

                                                                SymmetricDS supports many databases and can replicate across different databases, including Oracle, MySQL, MariaDB, PostgreSQL, MS SQL Server (including Azure), IBM DB2 (UDB, iSeries, and zSeries), H2, HSQLDB, Derby, Firebird, Interbase, Informix, Greenplum, SQLite, Sybase ASE, Sybase ASA (SQL Anywhere), Amazon Redshift, MongoDB, and VoltDB databases.

                                                                This seems quite remarkable - any experience with it?

                                                            2. 3

                                                              Where do you see the difference between litestream and a tool to back up Postgres/MariaDB? Last time I checked, my self-hosted Postgres instance didn’t back itself up.

                                                              1. 1

                                                                You have a point, but nearly every DBMS hosting provider has automatic backups, and I know many backup solutions that automate this. I’m only running stuff myself though (no SaaS).

                                                          2. 6

                                                            No, it’s fine to open a SQLite database in another process, such as the CLI. And as long as you use WAL mode, a writer doesn’t interrupt a reader, and a reader can use a RO transaction to operate on a consistent snapshot of the database.
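That snapshot behavior is easy to demonstrate; a minimal sketch in Python (the file name is invented):

```python
import sqlite3

# In WAL mode a writer can commit while a reader holds a consistent
# snapshot inside an open read transaction.
writer = sqlite3.connect("wal-demo.db")
writer.execute("PRAGMA journal_mode=WAL")
writer.execute("CREATE TABLE IF NOT EXISTS t (x)")
writer.execute("DELETE FROM t")
writer.execute("INSERT INTO t VALUES (1)")
writer.commit()

reader = sqlite3.connect("wal-demo.db", isolation_level=None)
reader.execute("BEGIN")  # the snapshot is taken at the first read
before = reader.execute("SELECT COUNT(*) FROM t").fetchone()[0]

writer.execute("INSERT INTO t VALUES (2)")
writer.commit()          # not blocked by the open read transaction

after = reader.execute("SELECT COUNT(*) FROM t").fetchone()[0]
assert before == after == 1   # the reader still sees its snapshot
reader.execute("COMMIT")
reader.close()
writer.close()
```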

                                                    1. 3

                                                      (I wonder how many good companies that Accelerate book is going to kill before engineering managers move on to the next shiny object.)

                                                      More on-topic … this article seems to set up a false dichotomy between E2E tests and unit (or component) tests. Integration tests which exercise the whole system can be fast and not flaky if you replace external service dependencies like queues, HTTP transport, etc. with synchronous, in-process components.
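A minimal sketch of that substitution (all names here are invented for illustration):

```python
# Production wires in a real broker/HTTP client; the integration
# test wires in a synchronous in-process fake instead.

class InProcessQueue:
    """Stands in for a real message broker in integration tests."""
    def __init__(self):
        self.messages = []

    def publish(self, msg):
        # Delivered synchronously: no network, no polling, no flakiness.
        self.messages.append(msg)

class OrderService:
    def __init__(self, queue):
        self.queue = queue

    def place_order(self, item):
        self.queue.publish({"event": "order_placed", "item": item})

# The test still exercises the whole service, just without the wire:
queue = InProcessQueue()
OrderService(queue).place_order("book")
assert queue.messages == [{"event": "order_placed", "item": "book"}]
```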

                                                      1. 11

                                                        Did you know that it’s possible today to create something for your browser that works like a native app on your device?

                                                        This is categorically false. It is a marketing fiction spread by those who want to develop their apps on the cheap, and, ok, fine. But PWAs do not work anything like native apps from the perspective of the end user, and acting otherwise is just gaslighting those users.

                                                        1. 7

                                                          This is insane…

                                          I assume most people here who use Ubiquiti have disabled remote access to their devices, if they hadn’t already.

                                                          Legal overrode the repeated requests to force rotation of all customer credentials, and to revert any device access permission changes within the relevant period

                                                          I’m struggling to see how this is good advice. Was it really to protect the stock value (rotating would reveal something bad happened and open it up to questions)? Even that is short sighted.

                                                          1. 24

                                                            A comment from a former employee lifted from the HN thread:

                                                            While I was there, the CEO loved to just fly between offices (randomly) on his private jet. You never knew where he’d pop up, and that put everybody on edge, because when he was unhappy he tended to fire people in large chunks (and shut down entire offices).

                                                            This seems consistent with some Glassdoor reviews; for example:

                                                            No one is safe here. you expendable just like the trashbag in your garbage can. owner gives unreasonable goals and when not met, he fires. upper management/cfo like money and rjp [Robert J. Pera, the CEO] clout over the product. over the consumer experience. the company morale is everyone tries to fly under RJP’s radar due to random firings. Upper Management is number people, worried about the stock more than employees and the product. Very muddy project mangement and very foggy leadership. No one really knows where the ship is sailing. Everyone is on the same ride trying to avoid a wreck at the same time avoiding RJP.


                                                            The company is a one-man show who completely ignores people value.

                                                            You are being questioned, demoralized and you even don’t believe your skills in the end.

                                                            No feedback, no HR, no planning.


                                                            • Incredibly toxic culture where most people would rather not have to deal with the CEO at all (“be invisible”) due to his behaviour and complete lack of respect towards his employees. I have witnessed or experimented a lot of what you can see in the other negative reviews on this site.

                                                            • This may vary from office to office, but there doesn’t seem to be a general HR department. If the CEO is being disrespectful or abusive, who can you complain to, really?

                                                            And a bunch more.

                                            Seems like the owner/CEO is just a twat that everyone is afraid of, and for good reasons too. This kind of company culture incentivizes the wrong kind of decision-making, from a business, ethical, and legal perspective. It’s no surprise that whistleblower “Adam” wants to remain anonymous.

                                                            It’s all a classic story repeated untold times over history innit? People will go to great lengths to avoid strong negative consequences to themselves, whether that’s a child lying about things to avoid a spanking, a prisoner giving a false confession under torture, or an employee making bad decisions to avoid being fired. We only have several thousand years of experience with this so it’s all very new… Some people never learn.

                                                            1. 5

                                                              holy shit.

                                              This kind of company culture incentivizes the wrong kind of decision-making, from a business, ethical, and legal perspective.

                                                              Indeed, and it makes its way right into the product too; you can tell when release feature quantity is prized over quality. This honestly explains more than I thought it could about my experience with their products so far — they feel so clearly half-baked, in a persistent, ongoing sense.

                                                              1. 3

                                                                I never even heard of Ubiquiti until a few days ago when there was a story on HN that their management interface started displaying huge banner ads for their products – I just use standard/cheap/whatever’s available kind of hardware most of the time so I’m not really up to speed with these kind of things. Anyway, the response from that customer support agent is something else. The best possible interpretation is that it’s a non-native speaker on a particularly bad day: the wife left him yesterday, the dog died this morning, and this afternoon he stepped on a Lego brick. But much more likely is that it’s just another symptom of the horrible work environment and/or bad decision making, just like your meh experience with their products.

                                                                1. 2

                                                                  Yeah, I had similar experiences with Ubiquiti stuff–I bought it because I liked the idea of separating routing and access point functionality, but it never stopped being flaky. After the last time throughput slowed to a crawl for no reason I got a cheap TP-Link consumer router instead and I haven’t had to think about it once.

                                                              2. 1

                                                                I assume most people here that use Ubiquiti have disabled remote access to devices if they haven’t already.

                                                                Ironically, I can’t. The UniFi Protect phone apps require it, so I have to choose between security of my network and physical security of my house.

                                                              1. 5

                                                                Great write-up, I had no idea the REPL of lisp/smalltalk was so powerful. I need to get around to learning clojure.

                                                                I think the elixir* REPL fits the bill for the most part - if I start up one iex instance and connect to it from another node I can define modules/functions and they show up everywhere. And for hot-fixing in production one can connect to a running erlang/elixir node and fix modules/functions on the REPL live, and as long as the node doesn’t get restarted the fix will be there.

                                                * erlang doesn’t quite fit the bill, since one can’t define modules/functions in the REPL; you have to compile them from the REPL.

                                                                1. 3

                                                                  Does Clojure actually have these breakloops though? I think I’ve seen some libraries that allow doing parts of it (restarts), but isn’t the default a stacktrace and “back to the prompt”?

                                                                  1. 2

                                                    Well, prompt being the Clojure repl, but you’re correct that the breakloop isn’t implemented, as far as I got in the language. You must implement the new function and re-execute, so you lose all of the context prior to the break. I think, with all of the customizability of what happens when a stack trace happens, it’s probably possible.

                                                                    I THINK the expected use with Clojure is to try to keep functions so small and side effect free that they are easy to iterate on in a vacuum. Smalltalk and CL have not doubled down on functional and software transactional memory like Clojure has. That makes this a little more nuanced than “has/doesn’t have a feature”.

                                                                    1. 1

                                                                      You’re correct. Interactivity and REPL affordances are areas where Clojure–otherwise an advancement over earlier Lisps–really suffers compared to, for instance, Common Lisp. You don’t have restarts, there is a lot you can’t do from the REPL, and it’s easy to get a REPL into a broken state that can’t be fixed without either a full process restart or using something like Stuart Sierra’s Component to force a full reload of your project (unless you know a ton about both the JVM and the internals of the Clojure compiler). You also can’t take a snapshot of a running image and start it back up later, as you can with other Lisps (and I believe Smalltalk). (This can be useful for creating significant applications that start up very quickly; not coincidentally, Clojure apps start up notoriously slowly.)

                                                                  1. 3

                                                                    If you notice one difference, it would probably be that commands that act on the symbol at point (e.g. cider-doc) will no longer prompt you to confirm the symbol. The old default was a mistake and I wanted to adjust this for this grand release.

                                                                    Best news I’ve heard all day! It’ll be nice to remove the override of this setting from my Emacs config.

                                                                    1. 4

                                                                      At least we have a single connector now…

                                                                      1. 18

                                                                        Arguably having one connector that is not actually interoperable is worse than having multiple connectors whose lack of interoperability is apparent at a glance.

                                                                        1. 5

                                                          Yep, not knowing if the cable you want will work is like having USB-C, Heisenberg edition. It’s arguably worse because of the not knowing.

                                                                          1. 4

                                                            It works for me with everything. A variety of phones, laptops, Nintendo Switches, chargers, batteries, and cables. Maybe sometimes I’m not getting optimal charging speed, but it’s always better than the situation was before.

                                                                            I don’t have a MacBook though. I hear they have more issues than anything else.

                                                                            1. 3

                                                                              Good for you. But the article shows your experience isn’t shared by everyone. Not knowing if a USB-C cable and charger will charge your device unless you try it is mildly infuriating.

                                                                        1. 4

                                                                          I use fish and have been happy with it. I don’t use any of the plugins for it, mostly because it’s good enough out of the box for the simple shell stuff I do.

                                                                          On occasion I have used bass to work with bash scripts that were required by projects I had to work on. It runs bash scripts and exports environment changes back to your fish session.

                                                                          1. 3

                                                                            Damn, I’ve been wanting something like bass for years, thank you!

                                                                          1. 6

                                                                            I switched away from Fish (after using it for several years) because I couldn’t get used to its non-POSIX syntax…

                                                                            1. 13

                                                                              That’s the reason I picked it; for day to day usage, it’s a lot less warty, and if I need a bunch of POSIXy stuff, I can just exec bash.

                                                                              1. 1

                                                                                You are correct in that exec bash is a useful way to run POSIXy stuff. Unfortunately (to my knowledge), shell functions, aliases, etc. must be translated into fish. I depend on being able to source POSIX shell scripts in order to bring my functions and aliases to remote servers and containers.

                                                                                1. 2

                                                                                  From my experience, 90% of shell functions and aliases can (and should) be rewritten as standalone scripts. Unless you indeed modify the current shell’s environment or even the argument handling (like noglob in zsh), there is no reason to hoard functions.

                                                                                  1. 1

                                                                                    Funny; I did just that mere hours before you posted your comment. Took three commits to my dotfile repo (1, 2, 3).

                                                                                    There is one other benefit to using shell functions and aliases: it’s easy to view them all at once by running functions && alias; the result can be written to a file, giving you a single file with all your custom commands.

                                                                                  2. 1

                                                                                    That’s a different use case, yeah. For me, I have two computers I ever interact with, so.

                                                                                    1. 1

                                                                                      Yeah, that’s unfortunately why I have to use bash at work even though I use fish everywhere else. Sad; fish is way nicer for shell scripting.

                                                                                  3. 5

                                                                                    I switched away from Fish, despite much preferring its obviously better syntax and other UI improvements, because I still had to work with old bash and posix shell scripts day-to-day anyway.

                                                                                    1. 4

                                                                                      For the folks in this thread who found Fish was not compatible with their POSIX scripts: do you have a lot of functions that modify your shell’s state/environment, then? For me that’s limited to a few scripts related to virtualenv, changing directories, and setting the prompt. The other 95% does just fine as standalone scripts with #!/bin/sh as the hashbang.

                                                                                      To avoid misunderstanding, I’m not really asking “why do/don’t you use X”. It is me wanting to see all the things folks do with their shell that could not be done as a script, so please Share All The Things That Come Under That Heading And That You Feel Like Sharing :D

                                                                                      1. 3

                                                                                        The biggest offenders are the version-switchers for various languages–virtualenv, chruby/rvm/rbenv, nvm, etc. For some of these things, there are fish equivalents, but it remains a point of friction.

                                                                                        The other annoyance with fish is that any time I have any problem with a shared script at work that works for everybody else, everyone assumes fish is the problem (which is rarely-to-never the case).

                                                                                        But it’s still a great shell and I continue to use it.

                                                                                      2. 4

                                                                                        I’m just writing scripts for zsh or a non-shell language when I want to do more than one thing.

                                                                                        fish has made 99% of my interactions at a shell prompt ‘just work’ with practically zero config, learning curve, or slowness.

                                                                                      1. 4

                                                                                        The decision to not try to entirely abstract the host away as some other languages do (Java for example, with respect to operating systems) has not in any way hindered the ability of different Clojure implementations to share code between them.

                                                                                        This seems … patently false? For several years there was no way to conditionally execute code based on which runtime it was running under.

                                                                                        1. 10

                                                                                          Yeah, there is a lot of hand-waving in this piece. All implementations of Clojure other than the first-party JVM implementation are distinctly second-class citizens. The latest version of Clojure does not work with Graal–you have to use an older version and many of the most common Clojure libraries are not compatible. Clojurescript works OK in the browser, but most Clojurescript tooling was not designed with Node.js in mind. Code reuse between Clojure and Clojurescript kind of works unless your code needs to do any IO, at which point your Clojurescript becomes a nest of callbacks (and you don’t have access to the async/await stuff that makes async IO tolerable in Javascript).

                                                                                          I like Clojure and have been using it professionally since 2014, but its strength is definitely in the core language concepts, not the implementation–and certainly not in having multiple compatible implementations, where it lags behind many, many other languages.

                                                                                          1. 1

                                                                                            FWIW the Graal issue is being worked on at the moment, slated for release in 1.11.

                                                                                          2. 4

                                                                                            I’m sure in those earlier years it was more of a struggle, but I’m speaking from the perspective of someone who started using Clojure in 2017. To me it’s been smooth sailing with .cljc files from the very beginning.

                                                                                          1. 10

                                                                                            The maker of the Iowa Caucus app was given $60,000 and 2 months. They had four engineers. $60k doesn’t cover salary and benefits for four engineers for two months, especially on top of any business expenses. Money cannot be traded for time. There is little or no outside help.

                                                                                            This is the important bit. “Cheap, fast, good: pick two.” If the caucus wanted a better app, they’d have needed to invest a lot more time and money.

                                                                                            1. 4

                                                                                              I don’t think so: Even if you assume 10% bench time, that’s still a mean $81k salary which is above market rates (avg. software engineer salary in Des Moines is $75k). That still leaves revenue for a $80k salesperson with a $80k OTE and the company makes money.

                                                                                              It is more than possible that two months’ time isn’t enough to build a $360k mobile app for some people, but I certainly could’ve done it, and I probably would have for $360k.

                                                                                              My guess: Either the persons who made the RFP didn’t anticipate their real requirements (uptime/availability, accuracy, etc), or The Maker really doesn’t know how to make software. Without looking at the RFP I wouldn’t venture to say, and my own experience doing government projects could have me believe either equally.

                                                                                              1. 1

                                                                                                Did you read that number right? It’s 60k not 600k.

                                                                                                1. 4

                                                                                                  $60k for 2 months’ work is $30k/pcm. If I do 6 projects of that size a year, that’s $360k/pa revenue. If I discount that by 10% (for running my business: my salesperson, overhead, etc.), that’s still $324k/pa I could budget for four people, or an average salary of $81k.

                                                                                                  I could certainly do this project in two months, and then do six more just like it in a year. That means if I went and did the RFP myself I could make $360k/pa doing projects like this.

                                                                                                  I’ve also done events though, so I know that even if the RFP says there’s going to be wifi available, it’s going to be shit. Someone who hasn’t might trust the RFP, and the project will fail: Whose fault is that? It seems a bit unfair to blame the developer who writes to a spec when the spec is just ambitious/wrong, but it also seems a bit unfair to blame the RFP writer, since they might have never done event software before either. I suppose I could blame the DNC for hiring idiots in the first place, but that’s a bit higher up the food chain than what we’re talking about: We have no idea what they’re paying their PM…

                                                                                                2. 1

                                                                                                  None of the engineers at Shadow was based in Des Moines. They seem to have been based in Seattle, Denver, and NYC. $81k is not above market rates in those cities.

                                                                                                  1. 1

                                                                                                    Do you have a link for that?

                                                                                                    I searched their jobs page and saw offers for “remote”, which is usually a sign they pay Midwest rates.

                                                                                                    I haven’t seen any news articles saying where the developers actually were.

                                                                                                    1. 1

                                                                                                      I was going by the developers associated with them on Linkedin.

                                                                                              1. 2

                                                                                                I used Google Reader and when it shut down, I exported my subscription list (in a standard format), imported that list into several other competitors until I found one I liked (Feedbin), and continued reading essentially without interruption. Isn’t that how interoperable standards are supposed to work? I would be thrilled if any newer social media stuff were as interoperable as RSS.
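The “standard format” in question is OPML, which is what Reader’s export produced. A minimal subscription list looks roughly like this (feed names and URLs here are illustrative):

```
<?xml version="1.0" encoding="UTF-8"?>
<opml version="1.0">
  <head>
    <title>My subscriptions</title>
  </head>
  <body>
    <!-- Each outline element is one feed; any reader that speaks OPML can import it. -->
    <outline type="rss" text="Example Blog"
             xmlUrl="https://example.com/feed.xml"
             htmlUrl="https://example.com/"/>
  </body>
</opml>
```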

                                                                                                1. 3

                                                                                                  In retrospect it’s kind of amazing how quickly we moved from an Internet with no “like” counts (the golden age of blogging) to an Internet where it’s very difficult to find any community where “like” counts or upvotes are not a core part of the system. Even indie sites like Lobste.rs or Metafilter that eschew a lot of the apparatus of the modern Internet incorporate this very quantitative approach to community and social interaction.

                                                                                                  1. 2

                                                                                                    Yes. The quieter, less-evaluative Internet gave way to one of addictive narcissism.

                                                                                                    1. 2

                                                                                                      After writing my earlier comment I realized that there is one type of online community I participate in that is completely free of likes/voting/ranking/quantitative anything: mailing lists.

                                                                                                      It’s probably not a coincidence that I love mailing lists, while people whose Internet experience started even a few years later than mine did seem to really, really hate them. I wonder if there is a real generational (or internet-generational) divide here, or if I’m just an outlier.

                                                                                                      1. 2

                                                                                                        It’s probably not a coincidence that I love mailing lists, while people whose Internet experience started even a few years later than mine did seem to really, really hate them. I wonder if there is a real generational (or internet-generational) divide here, or if I’m just an outlier.

                                                                                                        As a guy who first got an ISP account in 1993, I can honestly say that I generally dislike mailing lists (like most people, I guess). I always think of them as a poor man’s Usenet; I would much rather just hop on tin(1) and read the latest posts in my subscribed groups.

                                                                                                        Having said that, I am a member of some mailing lists that I genuinely enjoy. Though they are the exception, not the rule…

                                                                                                      2. 1

                                                                                                        It would be interesting to see an implementation of an upvote button that didn’t display the count to the users. You still get the “community” aspect of it, without the narcissistic side.

                                                                                                        1. 1

                                                                                                          HN does this.

                                                                                                          1. 2

                                                                                                            Right! For the comments. They still show the points for each story, which I think makes sense (or does it…?)

                                                                                                        2. 1

                                                                                                          Back then we had guestbooks and hit counters to provide the tingle of popularity that is oh so addictive.

                                                                                                          I remember when I first added commenting to my blog; getting ten or so meaningful comments within the first week of publishing a new post was a thrill to see. Those were different from likes, though, because they were actual meaningful interactions that often spawned discussion.

                                                                                                        1. 5

                                                                                                          I feel lucky to work in the Clojure ecosystem where edn is ubiquitous. It behaves predictably and supports the features I need.

                                                                                                          In a previous job I found myself sometimes hand-editing YAML CloudFormation templates. What a nightmare that was.