Threads for jitl

  1. 8

    We see this at Notion too - the SQLite queries to load a page from our local cache are slower than fetching the page from the network on low-end devices. The problem with racing network & SQLite is that it can use up to 2x the IO & memory on the device and is still slower than always using the network for many devices. Well, at least we still have the cache in case the network is not available.
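
    A minimal sketch of that race in TypeScript, assuming hypothetical loadFromSqlite and loadFromNetwork helpers:

    type Page = { id: string; blocks: unknown[] };

    declare function loadFromSqlite(id: string): Promise<Page>;
    declare function loadFromNetwork(id: string): Promise<Page>;

    async function loadPage(id: string): Promise<Page> {
      // Whichever source settles first wins, but the loser keeps running,
      // so the device pays for both the disk IO and the network request.
      return Promise.race([loadFromSqlite(id), loadFromNetwork(id)]);
    }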

    1. 2

      That’s interesting, and I find myself surprised to read it. I don’t know much about low-level database, IO, or kernel API tuning, but do you think this is mainly an IO bottleneck, or is there room for optimization?

      1. 2

        There’s still plenty of room for optimization, but I’m not working closely with the iOS & Android teams anymore so I’m not sure what their outlook is on this problem. Our general local cache stores records in our standard normal form. A common solution we’ve used in the past is to denormalize the data needed for specific UI components into a more specialized cache. This is harder to do for page rendering since the page editor is our most general component family. We could do like a tiered boot? Or try to cut down on scans but otherwise keep stuff normalized? Tricky.

        Tuning anything like this takes a bunch of time: we either need to develop and benchmark against a slow device (which is itself slow, since installing a build and attaching a debugger takes longer), or, when we can’t repro because p95 situations are often corner cases, ship improvement experiments to users and wait for the data to come back.

        Being a database admin for a fleet of millions of hopefully-identical databases running on low end hardware is an interesting challenge.

      1. 1

        How do you expect this to play out in the long term? Tools like these can start out well when you have a set of users that spans a whole company. Eventually cost, or just a more trendy tool, causes folks to want to move. This is not a great experience for anyone, but it seems to frustrate eng teams more than others. I like Notion’s user experience, so I’m more interested in whether you think it can be sticky, especially for eng teams.

        1. 2

          Disclaimer: I work at Notion.

          I’d like to build a two-way sync tool between Notion and source-controlled documentation, so that it’s easy to author in Notion but pipe that documentation goodness into your IDE, website, or public GitHub wiki; have runbooks cloned locally and greppable in-repo with code, etc. Let the Vim and Emacs fans write content that finds its way into the Notion wiki. That would lessen the lock-in anxiety, too.

          I think the biggest problem with Notion as a documentation tool for code, aside from that, is search. Search has improved a lot over the last year in general, but Notion’s index is still quite simple. I don’t think we do any specific analysis or indexing for source code snippets, or really understand the semantic layout of your wiki. We have a long way to go there.

          Where Notion is better than other similar tools is the UX, and wrangling projects - especially projects with an unusual shape where you might want a more specialized workflow. We get a lot of positive feedback from teams who say “wow, we came for the wiki but really like the flexibility to manage this or that process”, as well as teams who say “the editor UX in Notion is so much better than XYZ that we actually write (and organize) our docs”.

          1. 1

            If that roundtripping were fully supported for third parties, I could see how it might light up scenarios beyond developers. It seems like the Notion API is fairly fleshed out. Is it complete enough to attempt something like this?

            1. 1

              I think you could do a pretty good job with the API. The only limitation is that the API is kinda slow / rate limited.
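
              For the sync idea upthread, the pull half might look roughly like this - a sketch assuming the official @notionhq/client SDK, with the page ID, output path, and Markdown conversion all hand-waved:

              import { Client } from "@notionhq/client";
              import { writeFile } from "node:fs/promises";

              const notion = new Client({ auth: process.env.NOTION_TOKEN });

              async function pullPage(pageId: string, outPath: string) {
                // List the page's top-level blocks; a real tool would paginate,
                // recurse into child blocks, and render each block type to Markdown.
                const blocks = await notion.blocks.children.list({ block_id: pageId });
                await writeFile(outPath, JSON.stringify(blocks.results, null, 2));
              }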

      1. 6

        The linked video presentation from Josh Triplett is worth a listen; he gives a clear explanation.

        The most interesting thing he said was in the Q&A after, paraphrasing: “I’d like to see the kernel have exactly 2 syscalls, io_uring_setup and io_uring_submit.” It seems like io_uring is a model of user-space/kernel interaction that is sufficiently powerful and general: it simplifies the interface and implementation of a lot of existing syscalls (like fork/exec, vfork, etc. mentioned in the talk), and it affords optimization and performance opportunities (by eliminating the need for context switches and user-space memory & code in many cases) that would be difficult or impossible to get otherwise.

        1. 6

          io_thing is similar to modern graphics APIs in a way, because the API explicitly builds and submits batches of commands - and the batching lives in user space. Previously, all the batching was in library code or (more often) an in-kernel optimization.

          1. 1

            more thing_uring (if not io)

            1. 1

              iOS autocorrect HATES Linux syscalls

        1. 4

          We’ve been working on some updates that will allow Deno to easily import npm packages and make the vast majority of npm packages work in Deno within the next three months.

          On the one hand, good on them for recognizing a major limitation and doing something about it. On the other hand…

          import express from "npm:express@5";
          

          This syntax introduces yet another module resolution algorithm in addition to and incompatible with the ones that already exist in:

          • The browser spec
          • Node
          • webpack, Vite, and other bundlers and build tools
          • The TypeScript language server

          I’m sure they’d like to avoid reinventing package.json, but it seems like there ought to be someplace outside of the source where package installations can be managed instead of hacking npm into the module name.

          1. 5

            I don’t see it as THAT bad… it builds on the same logic as the node:xxx modules in Node.js. That is the closest thing to a standard in backend JavaScript.

            1. 3

              but it seems like there ought to be someplace outside of the source where package installations can be managed

              I agree 100%. But that’s sort of a fundamental issue with Deno’s whole approach. In reality, it’s extremely useful to use abstract package names in the source and provide the mapping from abstract package name to concrete package implementation externally. Deno’s (and Go’s) rejection of that idea is unfortunate imo. Go has mostly reversed direction: import URLs are now abstract package identifiers which are resolved using go.mod; maybe Deno should do the same.

              1. 2

                Deno does have a go.mod equivalent: https://deno.land/manual/linking_to_external_code/import_maps

                (This is a standard and was not invented by Deno. https://wicg.github.io/import-maps/)
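
                For example, a map like this (the esm.sh URL is just one possible target) lets source files keep abstract specifiers:

                {
                  "imports": {
                    "express": "https://esm.sh/express@5"
                  }
                }

                With that map in place, a plain import express from "express"; resolves through the mapping instead of hard-coding a registry in the specifier.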

                As well as a lock file for integrity checking: https://deno.land/manual/linking_to_external_code/integrity_checking

                (This is Deno-specific)

              2. 3

                Yeah, like: one thing that really confused me was why they didn’t do something like import express from npm("express@5"), which, although I suspect it isn’t technically valid import syntax, has the benefit that it could simply expand out to e.g. https://esm.sh/express@5, and therefore keep the existing, clean import system Deno already has.

                I feel for Deno. They’re between a rock and a hard place on innovating v. breaking all backwards compatibility. But I feel as if this is a small step in the wrong direction that’ll be very, very hard to unwind from.

              1. 5

                The drawing/canvas features in this app are built on top of TLDraw - https://www.tldraw.com/ - which is open source and well-architected for easy extension. TLDraw is showing up in a lot of wiki/notes products, another example is Logseq: https://twitter.com/pengx17/status/1552172906146398208

                I think TLDraw will do for collaborative 2D editors what ProseMirror did for collaborative rich text editors.

                (Disclaimer: I work at Notion)

                1. 16

                  It’s really hard to take an article like this seriously.

                  The opening paragraph:

                  This article deals with the use of bool in C++. Should we use it or not? That is the question we will try to answer here. However, this is more of an open discussion than a coding rule.

                  So we are having an “open discussion” in an article whose title is definitive, denying the very idea of a discussion. Gotcha.

                  Then, if you continue, the “discussion” is laughably simplistic and narrow. It trades Booleans for types with a design that is most generously described as questionable. The problem at the core is understanding the problem domain. If a function like shouldBuyHouse is taking parameters to describe all the variations, then trading Booleans for types isn’t going to solve much unless your domain is rudimentary, in which case Booleans are probably just fine.

                  1. 2

                    It’s really a question on the use of booleans in function signatures, not in general.

                    If a function like shouldBuyHouse is taking parameters to describe all the variations, then trading Booleans for types isn’t going to solve much unless your domain is rudimentary, in which case Booleans are probably just fine.

                    It’s definitely better if calling a function like shouldBuyHouse requires identifiers named hasPool/noPool versus positional true/false params. That is, shouldBuyHouse(hasPool, badLights) is definitely better than shouldBuyHouse(true, false). Right? No?

                    1. 3

                      It’s definitely better if the caller is passing a constant, as in your example.

                      It gets awkward otherwise:

                      shouldBuyHouse(buyerCanSwim == canSwim ? hasPool : doesntHavePool,
                          buyerIsBlind ? badLights : goodLights)
                      

                      I love C++ enum classes, but the drawback is they have no implicit conversions to anything…

                      Another way to look at the problem described here is that it’s a lack of parameter naming at the call site. Booleans are much clearer in shouldBuyHouse(hasPool: true, badLights: false). Right?

                      1. 2

                        I agree 100% - the lack of named arguments in C++ is mostly why we resort to using enum classes to represent boolean values. Unfortunately this trick only works for booleans and not so much for strings or other types, which is why we often resort to the builder pattern or similar idioms to simulate named arguments.
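
                        For comparison, in a language with object literals (TypeScript here, purely as an illustration), an options object simulates named arguments for booleans and strings alike:

                        type HouseQuery = { hasPool: boolean; badLights: boolean };

                        function shouldBuyHouse({ hasPool, badLights }: HouseQuery): boolean {
                          // Toy decision logic, just to keep the sketch self-contained.
                          return hasPool && !badLights;
                        }

                        // Every call site names its fields, so shouldBuyHouse(true, false) is a type error.
                        shouldBuyHouse({ hasPool: true, badLights: false });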

                        1. 1

                          Well, you probably wouldn’t express the mapping of buyerCanSwim to hasPool, or buyerIsBlind to whichLights, inline with the calling expression.

                          poolRequirement = buyerCanSwim ? hasPool : doesntHavePool
                          lightRequirement = buyerIsBlind ? badLights : goodLights
                          purchaseDecision = shouldBuyHouse(poolRequirement, lightRequirement)
                          

                          Named parameters have appeal, but they tend to go hand-in-hand with optionality, which is a huge source of risk, IME far more costly than the benefits delivered by the feature. If you could have named parameters without the option of leaving any out, then I’d agree that’s a good solution, except that it still allows callers to write shouldBuyHouse(true, false), which is exactly the thing we’re trying to prevent here by construction. So I dunno. I wouldn’t put them in my language.

                          1. 2

                            I think Swift’s approach is quite interesting: https://docs.swift.org/swift-book/LanguageGuide/Functions.html

                            Parameter names are encouraged by the syntax and non-optional for the caller, and must be provided in definition order. There’s a syntax to say “caller doesn’t need to provide a name here” but the caller needs to provide the argument names in basic looking signatures like shouldBuyHouse(hasPool: bool, badLights: bool) -> bool.

                            Swift does have default parameters though, which are as bad as or worse than optional parameters - but at least they need to be trailing, and order is guaranteed.

                        2. 2

                          When it’s a function with two parameters backing an uncompelling example, I argue it still mostly doesn’t matter; and if you want, those identifiers could be globals or even macros if you’re using C.

                          I’m not advocating for Booleans as arguments in general, I’m just asking for better writing. This article is not worth anyone’s time.

                          1. 3

                            If the two parameters you mention are both booleans, then for readability it definitely matters whether you use a bool or an enum. The tradeoff is some extra work for the enum, and it’s reasonable to argue whether the extra work is worthwhile.

                            One bonus the enum gives you is you can more easily replace it with a policy object.

                      1. 2

                        It says something that Windows 11 UX is enough of a regression that it’s believable that Windows 12 is the same as KDE.

                        1. 4

                          When pattern matching dropped I was surprised at how limited it was trying to be, so I did something similar to get it to do regexp matching/capturing. Pro tip: attribute access can also have side effects :)

                          In general I have mixed feelings about this feature. Some languages just don’t have customizable destructuring at all, and that seems entirely respectable. But Python clearly isn’t trying to be one of them, and yet this feature is still weirdly limited. It feels like the language designers are determined to impose some random opinions on every new thing they add.

                          1. 5

                            I appreciate where you’re coming from and shared your mixed feelings for a long time.

                            However now I feel like the easy answer is: don’t do stunt coding with your pattern matching, keep it simple and readable, and we all get a nice alternative to the endless waves of if/else :)

                            1. 1

                              Yeah, reading the PEP felt discouraging for this reason. But overall, Python is about having one flavor of crazy punch. Unlike Ruby, where literally anything going on could be a flavor of crazy punch.

                            1. 7

                              It’s especially fine if the network has client separation/isolation, which is quite common on most “Guest” networks nowadays.

                              1. 8

                                Which is why I broadcast my own copy of the network with a much stronger AP and flood my target with client disassociate packets. Don’t have to fight attacking client isolation when I’m the AP ;)

                                1. 3

                                  macOS (on M1) these days seems to totally ignore disassociate. I gave up on “mesh” at home because of this, and $OFFICE has a slack channel for griping about it since the mac will stick with an AP on the other side of the floor. Security or shitty programming?

                              1. 4

                                I laughed when I saw this:

                                // eslint-disable-next-line fp/no-loops
                                for (…) {
                                
                                1. 1

                                  I wasn’t sure where you were seeing code like that, as it’s not in the README. I found three implementation files in src/ that disable fp/no-loops.

                                1. 8

                                  Running JS on WASM is already pretty easy with quickjs, but it’s cool to see new entrants especially one in Rust.

                                  My suggestion to the author: try to encapsulate your interpreter’s state completely in a data structure instead of relying on Rust’s call stack. That way your users can easily implement async, suspend/resume, coroutines, etc. The annoying bit of QuickJS is that it uses the C stack, so there’s no way to pause the VM and do something else in your C and then resume later, without abusing the ASYNCIFY compiler pass or something.
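
                                  The idea, sketched in TypeScript rather than Rust for brevity: when every bit of VM state lives in a plain value, the host can stop and resume at will (the bytecode format here is made up):

                                  type Op =
                                    | { kind: "push"; value: number }
                                    | { kind: "add" }
                                    | { kind: "yield" };

                                  interface VmState {
                                    pc: number;      // program counter
                                    stack: number[]; // operand stack
                                    code: Op[];      // bytecode
                                  }

                                  // Run at most `budget` ops, then hand control back to the host.
                                  // Resuming is just calling step(vm, ...) again: nothing lives on
                                  // the host call stack between calls.
                                  function step(vm: VmState, budget: number): "paused" | "done" {
                                    while (vm.pc < vm.code.length) {
                                      if (budget-- <= 0) return "paused";
                                      const op = vm.code[vm.pc++];
                                      if (op.kind === "push") vm.stack.push(op.value);
                                      else if (op.kind === "add") {
                                        const b = vm.stack.pop()!;
                                        const a = vm.stack.pop()!;
                                        vm.stack.push(a + b);
                                      } else if (op.kind === "yield") return "paused";
                                    }
                                    return "done";
                                  }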

                                  1. 4

                                    try to encapsulate your interpreter’s state completely in a data structure

                                    JSRefs on the Rust stack are exactly what prevents me from having a sane GC right now (I already had a draft of a GC when I discovered this problem). I’m going to overhaul the design and hide the implementation details in the crate, and either have all external references accounted for, or make it impossible to leak them outside a limited scope (using the Rust borrow checker).

                                    1. 1

                                      I thought the Rust bit was the most interesting, but the page I linked to was “in wasm!”, so I figured I should include it.

                                    1. 9

                                      Mozilla stopped working on Mentat in 2018: https://mail.mozilla.org/pipermail/firefox-dev/2018-September/006780.html

                                      There’s a fork that has continued work here: https://github.com/qpdb/mentat

                                      I think we should change the URL to the forked repo, since the README does a much better job explaining than these random auto-generated API docs from 2018.

                                      1. 3

                                        I agree completely! If the mods are alright with me doing that, I’m totally down with this idea.

                                      1. 2

                                        @lettuce are you still working on this thing? Other than asymmetric fields, is there a compelling advantage to choice over Thrift’s union type? Asking as someone who’s never used Thrift but is considering serialization libraries & schema languages for a Typescript/ADT-first company.

                                        1. 3

                                          are you still working on this thing?

                                          Absolutely, in the sense that Typical is a member of my portfolio of projects that I actively maintain. In terms of feature development, Typical has reached a stable point where things are not changing (e.g., you can count on the binary format not changing in breaking ways).

                                          Other than asymmetric fields, is there a compelling advantage to choice over Thrift’s union type?

                                          Typical’s choice types are roughly equivalent to Thrift’s “strict unions” (in Credit Karma’s Thrift to TypeScript code generator)—both support exhaustive pattern matching, which is the proper elimination principle for coproducts. Thrift’s default unions are quite weak in terms of what guarantees you get from the type checker, leaving the critical invariant (that exactly one field is set) up to a runtime check.

                                          However, you wouldn’t want to use Thrift’s strict unions with exhaustive pattern matching for RPCs, because there is no way to safely add/remove cases as your code evolves over time. I know you said “other than asymmetric fields”, but asymmetric fields are the key feature that allows schema changes to be made safely.
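
                                          In TypeScript terms, the exhaustiveness guarantee looks like this (a sketch with a made-up two-case union, not Typical’s generated code):

                                          type Shape =
                                            | { kind: "circle"; radius: number }
                                            | { kind: "square"; side: number };

                                          function area(s: Shape): number {
                                            switch (s.kind) {
                                              case "circle":
                                                return Math.PI * s.radius ** 2;
                                              case "square":
                                                return s.side ** 2;
                                              default: {
                                                // If a new case is added to Shape, this assignment stops
                                                // compiling: exactly why adding cases to an RPC type is a
                                                // breaking change under exhaustive matching.
                                                const unreachable: never = s;
                                                return unreachable;
                                              }
                                            }
                                          }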

                                        1. 5

                                          I really like this project, but I really don’t like the curl | sh pattern of installing things. We should make an effort to make packaging a more universal and easy process for projects like this.

                                          I even went to do my due diligence and read the shell script, but it was in a minified format that made it difficult to look at. I know I can trivially load it into my editor and replace the semicolons with newlines and read it that way, but I’d rather have an install that works with my package manager. I understand that code installed by package managers isn’t foolproof and has its own issues, but there has to be something better than the curl | sh pattern.

                                          I guess this gets to the bigger problem of properly packaging things for multiple systems without manually creating the packaging for each system. I recently attempted to package an application for Mac and Windows (leaving the Linux users to figure out how to run a binary for themselves) and found it very difficult, requiring more knowledge than I think should be necessary - Windows particularly. Is anyone aware of a system where I can just drop my Windows, Mac, and Linux binaries (for each architecture supported by each) in a folder and have the packages generated automatically?

                                          1. 5

                                            I’d rather have a shell script that I can curl > install.sh and then less, than add a new package repository to my system-wide settings. I don’t think a system package is any better than curl | sh over HTTP from a security standpoint. A hobby or poorly maintained system package repo is much more complex than a simple 14-line shell script.

                                            1. 4

                                              You can list and uninstall system packages.

                                              1. 4

                                                Until they’re compromised by malware, and it rewrites the list.

                                                1. 4

                                                True, but a system package can also add a zillion dependencies that somehow put the system into a weird state. I learned my lesson with third-party packaging on Debian and Red Hat already - for something simple like Bun, it’s much better to pop it into ~/prefix/bun than to somehow end up with a conflict about which version of OpenSSL should be installed system-wide.

                                                2. 2

                                                Problem is, you’re the one-fifth of people using the program who will take a cursory look at the script, as opposed to the other four-fifths who will simply run curl | sh and not notice that their local library has a fake “Free WiFi” MITM set up by some skid.

                                                  1. 9

                                                    How is a random bash script any different from a random .deb that contains a bash script?

                                                    1. 1

                                                    Are .debs not signed? Or is this a .deb from a random website vs the main Debian repos?

                                                      1. 4

                                                        .debs can be signed, but are not in general, so for the most part they’re trusted to exactly the same extent as the repository is. That means that curl | sh over HTTPS has basically the exact same threat model as installing a .deb does, and it always makes me wonder if people who lament the security failings of the former process are happily making use of the latter one. The same doesn’t hold for RPMs, though.

                                                        1. 1

                                                        In what sense are RPMs different? (It has been a very long time since I dealt with anything other than initial Linux setup - my wife is the one installing terrible bioinformatics software and complaining about the code quality there :))

                                                          1. 5

                                                            RPMs are much more likely to be signed than DEBs (where only the repo is usually signed).

                                                            But both points are moot anyways. If I were to ship malware to you via curl | bash, I might as well do it via a malicious .DEB or .RPM which I have signed with my private key and told you to add the corresponding public key to your configuration.

                                                          Only, the curl’ed shell script is easily audited, whereas the same isn’t true for a .DEB or .RPM package. Yes, they can be extracted, but while I know the tools needed to inspect a file downloaded by curl, I would have to look up the commands to unpack a .DEB, and I’d also need an understanding of the files inside a .DEB to know what gets executed at install time.

                                                    2. 3

                                                      I think much less than 1/5th of people will examine a script before installing it. That also goes for language dependencies, like NPM, PyPI, Bundler, Cargo, Go modules, etc.

                                                  2. 4

                                                    Is your concern about the security implications of running untrusted code? If so, wouldn’t you have the same concern when you actually run the installed program as well?

                                                    1. 2

                                                    On macOS, binaries are by default required to be code signed, which means the default behaviour requires some real identity from the authors (they have to pay Apple for the signing cert). And if the authors historically signed the package and a fake update comes out that isn’t signed, in principle you could notice. The signing requirement can be bypassed, but again that requires extra steps that one would hope protect lay folk.

                                                      Interestingly (for hilarious reasons) you can codesign a shell script on macOS, but the signature isn’t checked - presumably because the code running is the bash/zsh/whev shell which is signed.

                                                      1. 2

                                                        So the solution is to centralize software distribution and make it impossible for people to independently publish software?

                                                        1. 1

                                                          No, though that does come with very large security benefits.

                                                        But a lot of malware relies on users simply double-clicking something, and that path is broken by the default (and bypassable) Mac setup.

                                                    2. 2

                                                      Packaging a Mac app has to be done locally on your own Mac because it involves code-signing using your developer credentials.

                                                      If it’s a developer/geek oriented app you might get away without signing it, since your users will probably have enabled running unsigned apps, but here in a thread complaining about insecure installation that doesn’t seem like a good suggestion!

                                                      1. 2

                                                        I really hope they don’t disable code signing requirements, and I hate with a passion these sites that say “just disable this core malware protection to run our app, making you vulnerable to binaries from other sites, not just ours”.

                                                      You can run unsigned apps under the default signing rules: it just requires that you know to context-menu click and choose Open, in which case it asks if you’re sure you want to run the app. It really is that simple, and it means a site can’t make a binary with an image or zip file icon that silently installs malware when a user “opens” it.

                                                        1. 1

                                                          You can run unsigned apps with the default signing rules:

                                                          I think that’s changed recently…as far as I can tell, recent macOS now says something like “this app is damaged and can’t be run”, with no option to run anyways if it isn’t signed (and further shows a warning if it’s only signed, but not notarized; quite a pain)

                                                          1. 1

                                                            I believe an incorrect signature isn’t bypassable (though obviously you could simply remove the signature if you were malicious?)

                                                    1. 10

                                                      I have some quick notes on other query languages here, if you’re into that sort of thing: https://jitl.notion.site/Databases-9b6be2d6d2ea48689b13ef3e8da1db47

                                                      1. 4

                                                        I agree, PRQL doesn’t improve the semantics of SQL in any way. But its syntax is better, and its support for functions (which I didn’t test, but at least it exists) already puts it miles ahead of SQL.

                                                        I still feel it’s behind https://github.com/erezsh/Preql in many ways :)

                                                        1. 2

                                                          You should submit your notes!

                                                        1. 4

                                                          Building amd64 images on M1 is SO slow. Running is generally fine though in my experience. I suggest offloading build tasks to a cloud VM if all you have locally is Apple Silicon.

                                                          1. 7

                                                            This will be better soon, since macOS 13 will support Rosetta 2 in Linux VMs:

                                                            https://developer.apple.com/documentation/virtualization/running_intel_binaries_in_linux_vms_with_rosetta?language=objc

                                                            1. 1

                                                              Anecdotal, but building amd64 images on my M1 is at least faster than building arm64 images on GitHub Actions.

                                                              1. 1

                                                                I’ve switched to using ARM images. There’s some hassle to set up and sometimes fix build systems or cross compile, but then it’s fast.

                                                                Also, Docker has an experimental option for faster file system access. It helps a lot for me.

                                                              1. 5

                                                                Honestly, graphql is perfectly reasonable. What type of simplicity do you think it violates?

                                                                Avro has a JSON representation and is intended to define RPC interfaces. It’s also a type-based IDL.

                                                                1. 1

                                                                  For my case, I think graphql solves too much — I think I want a straightforward request/response RPC; I don’t need graphql’s capabilities for fetching data according to a specific query.

                                                                  This is kinda what protobuf does but, impl-wise, protobuf usually codegens a huge amount of ugly code which is not pleasant to work with (in addition to not being JSON). Not sure what the implementation complexity is like for graphql — I didn’t get a chance to use it yet.

                                                                  1. 2

                                                                    Protobuf has a 1:1 JSON encoding. You could write your schema in Protobuf and then use the JSON encoding with POST requests or something, to avoid all the shenanigans?
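
                                                                    As a sketch of that shape (hypothetical endpoint and message; proto3’s canonical JSON mapping makes the wire format plain JSON):

                                                                    // Mirrors a hypothetical proto3 message:
                                                                    //   message AddUserRequest { string name = 1; uint32 age = 2; }
                                                                    interface AddUserRequest {
                                                                      name: string;
                                                                      age: number;
                                                                    }

                                                                    async function addUser(req: AddUserRequest): Promise<unknown> {
                                                                      const res = await fetch("/rpc/UserService/AddUser", {
                                                                        method: "POST",
                                                                        headers: { "content-type": "application/json" },
                                                                        body: JSON.stringify(req),
                                                                      });
                                                                      return res.json();
                                                                    }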

                                                                  2. 1

                                                                    Honestly, graphql is perfectly reasonable.

                                                                    Doesn’t it still violate HTTP, badly? IIRC, it returns error responses with status code 200. I thought that it sent mutations in GET requests too, but from a quick look it looks like I misremembered that (shame on me!).

                                                                    Regardless, REST is best.

                                                                    1. 1

                                                                      No, it returns errors inline in the body. There’s nothing non-RESTful about graphql. Indeed, the introspection and explicit schema, together with the ability to use it over a GET request, make it more RESTful than most fake-REST RPC endpoints.

                                                                  1. 4

                                                                     See if there’s a converter from types in your language of choice directly to a machine-readable schema language. If you think OpenAPI/JSONSchema is too wild to write by hand, see if you can generate it from your internal types. For example, Zod is a nice validator library for Typescript, and there’s a zod-to-openapi thing. I haven’t tried it, but that kind of pair could be what you’re looking for?
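
                                                                     The Zod half of that pair looks roughly like this (a sketch; the zod-to-openapi step is the part I haven’t verified):

                                                                     import { z } from "zod";

                                                                     // The schema is the single source of truth: it validates at runtime...
                                                                     const CreateUser = z.object({
                                                                       name: z.string().min(1),
                                                                       age: z.number().int().nonnegative(),
                                                                     });

                                                                     // ...and the static type is inferred from it, so the two can't drift apart.
                                                                     type CreateUser = z.infer<typeof CreateUser>;

                                                                     const parsed: CreateUser = CreateUser.parse(
                                                                       JSON.parse('{"name":"Ada","age":36}'),
                                                                     );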

                                                                    I haven’t used gRPC/Protobuf much but it seems like the clear winner for internal distributed systems programming. There are tons of converters from Proto3 to X, so you can write a Proto3 and be reasonably sure you can target kinda whatever output. I think it’s super ugly though and has a bunch of peculiarities.

                                                                     Avro seems quite nice. It uses a nice JSON format for the IDL, has straightforward union and array types and a fast JS implementation, but unfortunately not much ecosystem compared to Protobuf or OpenAPI.

                                                                     I agree that Typescript’s interface syntax is great. My idle-time project at work is fiddling with a Typescript -> Protobuf converter.

                                                                    1. 3

                                                                      Amazing project.

                                                                       It seems like container images are more useful as a deployable artifact for network services. I can pay to host a container image at tons of hosting services (e.g. ECS), but no hosting service will support a bare executable. The bare executable is great for running something locally, though - no end user is going to install Docker just so they can run an image.

                                                                      How do I update it? Can I copy 2.1 over the 2.0 binary without losing my data?

                                                                       I also didn’t see any notes on SQLite and how to manage its persistent data. Can I back it up or replicate it? Is SQLite meant to be completely read-only with a pre-loaded SQL database?

                                                                      1. 3

                                                                        Are virtual machines in the cloud not a hosting service? The value of redbean is you can just scp it onto any host and run it. Any program that has access to the UNIX system() function can run your redbean. SQLite is supported both for read-only and read-write. For example, its WAL mode is particularly good. You do need a second file for the SQLite database. However, redbean also has a StoreAsset function that lets you programmatically use the zip executable structure itself as a self-modifying object store.

                                                                        1. 1

                                                                           A VM still needs to be administered; containers don’t. I guess you are targeting technical users, whereas I’m wondering whether it’s possible to have truly non-technical users who might want to spin up their own instance of some redbean app.

                                                                          My thought is “could this power a truly decentralized app like Mastodon where everybody runs their own microinstance?”

                                                                          1. 1

                                                                            If we provided a service for hosting your redbean containers and editing them in the browser, would you use it?

                                                                            1. 1

                                                                              I’m just thinking aloud currently. I can imagine this providing the backend for phone apps that run completely decentralized. Your service would just provision redbean instances using some well-defined container image. Imagine Twitter where your messages are fanned out to your subscribers. This could make an amazing tightly contained blogging platform. These are all really conventional ideas but there’s so much potential in mono-user services.

                                                                              The only drawback I’ve seen so far is lack of ARM64 support but that’s not critical if you choose the hosting hardware.

                                                                        2. 3

                                                                           I haven’t tried it, but I think this would be a valid Dockerfile for redbean:

                                                                          FROM scratch
                                                                          ENV PATH=/bin
                                                                          ADD ape.com /bin/ape
                                                                          ADD redbean.com /bin/redbean
                                                                          

                                                                           It’s no different from any other static binary, like a Go executable; if you want to wrap it in a Web Scale Computing Primitive, I think it should work fine.

                                                                           If you’re worried about updating redbean from upstream as a developer, use a build process instead of editing the zip manually. Just like the go command compiles Go code, the zip command compiles redbean code:

                                                                          build/app:
                                                                            cp deps/redbean.com build/app
                                                                            zip build/app src/*
                                                                          

                                                                          As for files and end-user upgrading, nothing forces redbean to store user data inside the zip. Like SQLite, you could write all data adjacent to the executable, or to %USERDATA% or ~/.myapp.

                                                                          1. 2

                                                                            Great stuff! This was also shared by Loam on our Discord server:

                                                                            FROM alpine AS builder
                                                                            RUN wget https://redbean.dev/redbean-tiny-2.0.1.com -O /redbean.com \
                                                                                && chmod +x /redbean.com \
                                                                                && /redbean.com --assimilate
                                                                            
                                                                            FROM scratch
                                                                            COPY --from=builder /redbean.com /
                                                                            EXPOSE 8080/tcp
                                                                            VOLUME /src
                                                                            ENTRYPOINT ["/redbean.com", "-D", "/src"]
                                                                            

                                                                            You should come join us! https://discord.gg/EZwQUAcx

                                                                            1. 1

                                                                               Why do you use Discord?

                                                                        1. 1

                                                                          This is the second tracing-related memory leak post I’ve seen recently (I don’t have the other one handy but it was due to holding spans over await points IIRC). Makes me a bit nervous to dig into tracing when regular logging seems to “just work”.

                                                                          1. 2

                                                                             I’d say look up how to do tracing with your framework of choice, ask some people on IRC or Discord, and you’ll be fine. Just don’t ship a custom tracer unless you’re willing to deal with that.

                                                                            1. 1

                                                                               I am a big fan of tracing, and I’m not too spooked by the possibility of leaks. As long as you have a graph of your memory usage, and you can overlay a deploy event bar on that graph, you can spot a big leak right away. At least in NodeJS land, upgrading to a new anything may introduce a leak, and dd-trace specifically (Datadog’s library) is fraught with this issue. So we stare at the graph once in a while and track down the source. Not too bad.