1. 1

    Starting to work on Sasquach again after a bit of a break. I’m trying to figure out how to resolve cross-file dependencies. I’m using Java’s fork/join tasks both to parallelize the process and to simplify building the graph and waiting for resolution of other modules to complete. I need to ensure that there aren’t deadlocks due to cyclic dependencies.
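    One cheap way to rule those deadlocks out is to verify that the module dependency graph is acyclic before scheduling any tasks. A rough sketch of that check (the module names and graph shape here are invented for illustration):

```python
def find_cycle(graph):
    """Return a list of modules forming a cycle, or None if the graph is a DAG.

    `graph` maps each module name to the list of modules it depends on.
    """
    WHITE, GRAY, BLACK = 0, 1, 2   # unvisited / on the current DFS path / finished
    color = {m: WHITE for m in graph}
    path = []

    def dfs(m):
        color[m] = GRAY
        path.append(m)
        for dep in graph.get(m, ()):
            if color.get(dep, WHITE) == GRAY:          # back edge: found a cycle
                return path[path.index(dep):] + [dep]
            if color.get(dep, WHITE) == WHITE:
                cycle = dfs(dep)
                if cycle:
                    return cycle
        path.pop()
        color[m] = BLACK
        return None

    for m in list(graph):
        if color[m] == WHITE:
            cycle = dfs(m)
            if cycle:
                return cycle
    return None
```

    If this returns a cycle like ['a', 'b', 'c', 'a'], it can be reported as a compile error up front instead of letting the fork/join tasks wait on each other forever.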

    I’m also set to move into my new apartment this weekend and I’m gonna help my new roommate pack up on Thursday.

    1. 5

      A very similar article was just added within the last week. Is it just B-tree Building Time? This one I found particularly naive; you can find more and better info on Wikipedia or in online articles like Modern B-Tree Techniques, and also the book Database Internals [O’Reilly], both of which are solid gold.

      I’ve said this before, but: people get hung up on the idea of a b-tree needing a fixed branching factor. That’s not necessary, and it totally gets in the way of supporting variable length keys and values. Just keep shoving entries into a node until it overflows, then split it.
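      As a toy illustration of that, here’s an in-memory leaf node (nothing disk-backed; the names and the page-size budget are made up) that treats “full” as a byte budget rather than a fixed entry count, and splits by bytes so variable-length keys and values still end up in two roughly half-full nodes:

```python
import bisect

PAGE_SIZE = 4096   # invented page-size budget for this toy

def entry_size(key: bytes, value: bytes) -> int:
    # 4-byte length prefixes for key and value, plus the payloads themselves
    return 8 + len(key) + len(value)

class Leaf:
    def __init__(self):
        self.entries = []   # sorted list of (key, value) pairs
        self.used = 0       # bytes this node would occupy on disk

    def insert(self, key: bytes, value: bytes):
        """Insert, then split on overflow. Returns (self, None) normally,
        or (left, right) when the node had to split."""
        i = bisect.bisect_left([k for k, _ in self.entries], key)
        self.entries.insert(i, (key, value))
        self.used += entry_size(key, value)
        if self.used <= PAGE_SIZE:
            return self, None
        # Overflow: split at the midpoint *by bytes*, not by entry count,
        # so both halves end up roughly half full even with skewed sizes.
        left, right = Leaf(), Leaf()
        for k, v in self.entries:
            target = left if left.used < self.used // 2 else right
            target.entries.append((k, v))
            target.used += entry_size(k, v)
        return left, right
```

      A real b-tree would serialize these nodes to fixed-size pages and push the split key up into the parent; the point is just that nothing requires a fixed branching factor.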

      1. 1

        I am on the lookout for a good introduction to writing disk-backed btrees since I have never done that before. This is maybe the closest or exactly what I was looking for.

        Most books that cover it don’t actually give you working, minimal code you could run today. Or maybe I missed them. I do own Database Internals and have been meaning to read it, but I feared it would still be higher level than actual working code samples.

        Wikipedia for example does not give working code samples.

        1. 2

          This tutorial for writing a sqlite clone is pretty good and has actual C code: https://cstack.github.io/db_tutorial/

          1. 2

            True! But it kind of leaves off right in the middle, and it has a lot of distracting detail about databases that I’d rather not have to look at (in an ideal tutorial). So I discount that.

          2. 2

            Yeah, I couldn’t find good code either, so I just kind of dove in and started coding a few months ago. It’s been challenging but lots of fun. Unfortunately this is a work project, so I can’t just open source it.

            Another good resource I found is the documentation of the SQLite file format. It’s not actual code, but it tells you all about the data structures down to the bit level. This is a page on the SQLite.org site somewhere.

        1. 9

          Trying to lock down a new apartment as I’m currently living with my now ex. I’m feeling a bit anxious about it so hopefully it’ll be settled soon.

          1. 2

            That does sound stressful, I hope it all works out soon.

            1. 1

              Thanks, I really appreciate it

            2. 1

              Not easy. Lived at my ex’s place for a while also; we didn’t live together, tho. After we broke up, she just let me stay there.

            1. 6

              This sounds similar to how Pony does things

              1. 3

                Yup, though Pony takes it a bit further by introducing many different reference types/capabilities.

              1. 12

                I appreciate that a columnar data store confers many advantages to high-cardinality time-oriented telemetry data, and this article was a really nice overview of the mechanics, but

                You pretty much need to use a distributed column store in order to have observability.

                feels like an over-reach that isn’t really supported by the facts brought to the table.

                Also,

                The result is blazing-fast query performance

                Which petition do I sign to put a moratorium on the word “blazing” in any technical context?

                1. 4

                  feels like an over-reach

                  They failed to set the scene. I guess their implied context is “you have many thousands of machines continuously spamming you with rich data points you need to query ad-hoc with various aggregation styles” which is not everyone’s experience, so the “need” is different.

                  1. 3

                    Yeah, it would be nice to see a low-level comparison of how their system handles high-cardinality metrics vs Prometheus

                    1. 1

                      I guess their implied context is “you have many thousands of machines continuously spamming you with rich data points you need to query ad-hoc with various aggregation styles” which is not everyone’s experience, so the “need” is different.

                      Even then, though.

                  1. 36

                    Pip pip! Time to rev our coding engines to a high RPM! We’ll be sure to have a lot of snacks (yum!) to keep our spirits up. Of course, we’ll be apt to get some people who nix our great ideas, but I’m sure that’s just because of a desire to avoid cargo-cult programming.

                    1. 24

                      This seems like a tangled ball of yarn from the go get. It’s possible there will be some gems, but that’s assuming that nobody placed a hex on the conference. Regardless, I’m sure there will be a cabal of mavens in attendance ready to talk about their stack.

                      1. 4

                        Take it easy everyone. Go brew some coffee and watch some asdf videos.

                        1. 3

                          Nah, me and the Guix were hungry–and not for drinks or something chocolatey–so we rock-paper-scissor’d, but I threw papers in luarocks so this little coding cabal’ll be getting some spago at the place with the foreign name, Leiningen’s.

                      2. 8

                        banned

                        1. 3

                          Or let’s just play some pac-man.

                        1. 5

                          If you want to be even closer to the metal, there’s Ceph’s BlueStore backend, which eschews the file system and writes blocks directly. I don’t have experience with Ceph, but that’s how S3 actually writes to disk.

                          1. 1

                            Those results are interesting! Has anyone gone a level deeper, though? I’d love to see someone write a driver for an SSD that bypasses not only the file system but also the Flash Translation Layer, and exposes a low-level interface that deals in immutable NAND flash pages. The FTL is, after all, only there to serve the mutability needed by the file system sitting a layer above it.

                            I think many people realize at this point how much mutability complicates everything, so by ditching it we could not only speed things up but also simplify them. For instance, LMDB is based on copy-on-write (i.e. immutable) memory pages because it’s the most reliable way. If it could deal in NAND pages directly instead, that would shed two layers of abstraction that slow down and complicate things, with no loss of functionality. The downside is of course being tied to a specific SSD firmware, but the interface could be generalized to other SSD manufacturers.
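                            The copy-on-write discipline LMDB relies on is easy to sketch (a toy model invented for illustration, nothing like LMDB’s actual code): pages are append-only, and a “commit” is just publishing a new root, which is exactly the shape immutable NAND pages would want.

```python
# Toy copy-on-write page store: pages are append-only and never mutated,
# much like NAND flash pages. A "commit" just publishes a new root page.

class PageStore:
    def __init__(self):
        self.pages = []                    # stand-in for immutable flash pages

    def write_page(self, data) -> int:
        self.pages.append(data)
        return len(self.pages) - 1         # new page number

class Tree:
    """One-level 'tree': the root page maps keys to leaf page numbers."""
    def __init__(self, store):
        self.store = store
        self.root = store.write_page({})   # empty root

    def put(self, key, value):
        root = dict(self.store.pages[self.root])   # copy, never mutate in place
        root[key] = self.store.write_page(value)   # new leaf page
        self.root = self.store.write_page(root)    # new root page = the commit

    def get(self, key, root=None):
        root = self.root if root is None else root
        return self.store.pages[self.store.pages[root][key]]
```

                            A reader that captured the old root keeps seeing a consistent snapshot for free, which is where the reliability comes from.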

                          1. 1

                            Mostly hanging out with friends but hopefully also working on Sasquach. I got out of the groove of working on it daily after visiting my family. I think part of it is that moving the name resolution from the parsing process into its own step is just a PITA. I’m hoping I’ll pick up steam again after I get it working.

                            1. 2

                              I would like to write a blog post that I’ve wanted to write but haven’t gotten around to in the last few weeks. Partly I’m unsure about hosting.

                              1. 2

                                You could put it anywhere that allows your own domain - if you don’t like the service, just move the content later. But at least the post is out there already.

                                1. 1

                                  I use bearblog.dev, it’s pretty simple and fast.

                                1. 7

                                  It’s a shame the game is closed source, I wonder if he’s embarrassed at some part of the 700k lines?

                                  1. 19

                                    Oh I’m sure at 700k everyone would have a disappointing and embarrassing bit. I mean, on my solo projects I often times hit them before 1k :-)

                                    1. 18

                                      The canonical answer to why Toady keeps it closed source is that he wants to make his vision, not manage a software project. I personally think it’s a little silly since by now he has a small army of talented programmers, mod makers, writers, artists etc who would happily devote their time to helping make his vision, but honestly, that’s fine. The guy is the poster-child of a mad genius: He’s spent nearly 20 years working on an incredibly vast and insane project, and has generally made it work pretty successfully. He’ll probably be doing it the rest of his life; I hope he makes the code open-source in his will.

                                      1. 8

                                        I hope he makes the code open-source in his will.

                                        Unless I’m mistaken, it is indeed a part of Toady’s will that the game becomes open source after he passes away.

                                        1. 6

                                          I mean he could adopt the SQLite style of open source, open to read and fork but not to contribute to.

                                          1. 3

                                            Or rather “source available” but not open source. If he makes it open source and doesn’t accept any contributions, there’s a good chance he’ll lose control of it, because someone else will create a fork that accepts them.

                                          2. 2

                                            I’m quite sympathetic to him not wanting to make the source available. My only gripe is that it means no way to try and build it for OpenBSD. 😔

                                        1. 1

                                          Something I’ve wondered about when building an HTTP server from scratch, or even a reverse proxy using some HTTP lib, is how to handle denial of service. I feel like handling DoS could take up a decent chunk of code and runtime performance, but I’m not really sure. I don’t even know what the potential attacks are, I just know they exist.

                                          1. 2

                                            So, I’ve done a bit of this.

                                            For an HTTP server you are primarily interested in mitigating application-level attacks, with some interest in the protocol level.

                                            DoS attacks attempt to exhaust some resource - typically RAM, CPU, or I/O - without exhausting the attacker’s resources. For instance, when I last used Apache HTTPD it consumed relatively high RAM per connection, allowing attacker software to open connections until it ran out.

                                            The key idea in DoS protection is to ensure that your design requires an attacker to consume comparable resources (e.g. the same big-O complexity) as you do.
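                                            As a concrete sketch of bounding per-connection cost (names and limits invented; a real server would layer more on top), the header-reading loop is a typical place to enforce both a size cap and a timeout:

```python
import socket

MAX_HEAD_BYTES = 8192   # bound the RAM one connection can pin (invented limit)
RECV_TIMEOUT = 5.0      # bound how long a slow client can hold the socket

def read_request_head(conn: socket.socket) -> bytes:
    """Read HTTP request headers with hard caps on size and time."""
    conn.settimeout(RECV_TIMEOUT)
    buf = b""
    while b"\r\n\r\n" not in buf:
        if len(buf) >= MAX_HEAD_BYTES:
            raise ValueError("request head too large")
        chunk = conn.recv(4096)
        if not chunk:                       # client closed before finishing
            raise ValueError("connection closed mid-request")
        buf += chunk
    return buf
```

                                            A slowloris-style client that dribbles bytes hits the timeout; one that streams an endless header hits the size cap. Either way, one cheap connection can’t pin unbounded RAM or time on your side.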

                                          1. 1

                                            Anyone know if uBlock blocks tracking pixels in email?

                                            1. 4

                                              As far as I know, uBlock is not installable on any email client.

                                              Now, if you’re talking about tracking pixels in webmail: it depends on your webmail. Most webmails nowadays (Gmail started this) download and cache the images in emails. So they can’t track you (since the webmail shields your IP and your cookies by proxying the image download), but they can track if and when you opened the email, which I find creepy.

                                              uBlock can’t do much about this, since the image in your webmail is embedded as https://cdn.webmail.example.com/image-proxy/...., so there is no way to differentiate between a legitimate image and a tracking pixel.

                                              1. 5

                                                Most non-web email clients now refuse to download any images unless you press a button. I generally don’t press it, so I never see the contents of the GOG.com marketing emails because they don’t put any plain text in that motivates me to click. If I can’t read your marketing email without your tracking me, I don’t read your marketing email. Your loss.

                                                1. 2

                                                  Yeah, I imagine the huge image-only emails are horrible for accessibility as well.

                                                  I’ve also noticed that the urls in emails that come from recruiters are usually some url shortener with tracking info, even the ones in the footer. Now I just search for whatever company they’re trying to sell instead. This way they don’t send 5 followup emails for their dumb blockchain startup.

                                                  1. 1

                                                    I use FairEmail [on Android], which does not display images by default but also attempts to stop tracking images if you do want to see images; it kinda works for GOG, for example. I have a Pi-hole on my home network and it also prevents some other tracking beacons.

                                                    Nothing is 100% foolproof at this point, except plain text, but depending on the sender you also get click tracking (and often stupidly long links with that).

                                                    On a side topic, I’m curious as to why you are (still ?) subscribed to marketing emails that you don’t want to read?

                                                    1. 1

                                                      On a side topic, I’m curious as to why you are (still ?) subscribed to marketing emails that you don’t want to read?

                                                      I do want to read them, but not enough to load trackers. When they send emails with actual text in them, I sometimes click on them (and I do still buy games from them, though since work gives me a free XBox Game Pass account, I don’t buy games as much as I used to).

                                              1. 2

                                                Should have gone to scout camp this week, which took a lot of preparation. I’d really hoped to go, to take a week off from regular things. But sadly, due to COVID, I’m the only one who has to stay home at the last minute. :(

                                                So, I’ll probably be spending the week learning a new programming language, library or technique. Does anybody have a cool suggestion?

                                                1. 1

                                                  What languages do you normally use and have played with?

                                                1. 3

                                                  I have always thought “I should get into NixOS”, but people seem to have gripes with the Nix configuration language and I am really comfortable running Alpine on the small boxes I have.

                                                  Do you think the tools that are made available are worth the learning curve? Alpine is a good tool.

                                                  1. 20

                                                    I used NixOS for a while on my laptop. It’s certainly worth trying, and not very difficult to install.

                                                    Setting up services, tinkering with the main config, is easy enough.

                                                    But if you want to go deeper than that, you’ll spend hours searching other people’s configurations, because the documentation is poor.

                                                    1. 5

                                                      Ugh, yes, this is my single #1 complaint with the infrastructure by far. The poor documentation. I need to start taking notes and contributing back to the wiki.

                                                      1. 2

                                                        Seems like Guix might be an option. At least they didn’t create a brand new configuration language..

                                                        1. 15

                                                          At least they didn’t create a brand new configuration language..

                                                          Note that although Guix didn’t create new syntax (they use Lisp), you’d still need to learn the “language” defined by the Guix libraries. In the end, most of your time is spent figuring out Nix/Guix libraries, and very little time is spent on programming-language things like branching and arithmetic.

                                                          1. 5

                                                            The biggest annoyances I’ve run into with Nix-as-a-language are the lack of static types and the fact that it doesn’t really have good support for managing state. The latter doesn’t usually present a problem, but occasionally if you want to generate a config file in a certain way it can be annoying.

                                                            But I think it helps that I already knew Haskell, so all the stuff like laziness and the syntax are super familiar.

                                                            1. 1

                                                              There really isn’t much of a “language” to learn. Guix configurations use Scheme records for about 90% of any configuration a user will do, and the rest is in G-expressions, which are something like a new syntax that takes the place of embedded shell scripts in Nix.

                                                        2. 8

                                                          On one hand, Nix is terrible. On the other hand, isn’t everything else worse? Guix is the only decent comparison, and personally I think Nix’s technical decisions are better. (So do they, given that they borrow the Nix store’s design wholesale!)

                                                          1. 2

                                                            How can they be better, if they are the same?

                                                          2. 6

                                                            NixOS is amazingly good for writing appliances. It also can be made to barf out docker images, which is nice.

                                                            1. 6

                                                                Coming from Void Linux, NixOS on a desktop machine… it’s a lot of work for a functioning desktop, I think. But on the server NixOS is killer and fun, and makes configuration suuuuper simple. I only need my /etc/nixos and /src folders to migrate my server anywhere (though I’d have to bring along /data to keep my state).

                                                              1. 1

                                                                This is basically what I do. When I got my new laptop I considered Nix, but decided to stick with Arch because it was easier. I use NixOS for my Digitalocean nodes and am very glad I did.

                                                              2. 1

                                                                tl;dr: No, I went back to Void on the desktop and Alpine/Ubuntu on servers in almost all contexts

                                                                Purely anecdotal: I was all-in on Nix, both at home and at work, and drank copious amounts of the kool-aid.

                                                                As of today, it still runs on a few machines I’m too lazy to reformat, but it’s gone from all my interactive machines, and from all functions (be it NixOS on EC2, or Nix shells for developer workstations) at work.

                                                                My takeaway was basically: Nix(OS) makes 90% of things absolutely trivial, the next 8% significantly more difficult, and the remaining 2% anywhere from nightmarish to downright impossible. That latter 10% made it “cost” significantly more than, say, investing in other version locking tooling for developer workstations at work. At home, that remaining 10% just meant that I didn’t do some things (like play many Steam games) that I would otherwise have enjoyed doing, because I didn’t have the energy to dive in.

                                                              1. 2

                                                                Many of your decisions landed at the same place as my (abandoned) wibble project from a few years ago. I managed to avoid the tuple problem matklad mentions above by not having tuples at all, only structs. Function parameters are structs too. I think this can simplify some of the redundancy, and the awkwardness of “what do you call the unnamed 2nd field?” (Answer: It must have a name.)

                                                                1. 1

                                                                  Ah cool I’ll have to take a look at it. I was considering just making functions take a struct, though I wasn’t sure about the performance impact. I suppose I could just remove the extra struct creation at compile time if possible.

                                                                1. 2

                                                                  Interesting!

                                                                  Random bits of feedback:

                                                                  Tuples are like a structs with unnamed fields. In fact, at runtime they are actually structs with fields named 1, 2, 3, etc.

                                                                  From experience with Rust, I’d suggest naming fields as _1, _2 (that is, matching traditional definition of identifier), etc instead. Numbers make the implementation more complicated (which might be an ok tradeoff, but evolution-wise, it makes sense to start simple). In the IDE layer, “what is an identifier” is important question; if numbers are sometimes identifiers, you need contextual info to figure this out, and more special cases in completion and such. In the lexer, there’s ambiguity in foo.1.1 which can be lexed as two field accesses or a float literal. Rust now has a hack where it splits a float token into three in the parser, to make the syntax work.
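                                                                  The foo.1.1 ambiguity is easy to reproduce with a naive longest-match lexer (a throwaway sketch, not Rust’s actual lexer):

```python
import re

# Naive longest-match lexer: try identifiers, then floats, then ints, then '.'.
TOKEN = re.compile(
    r"(?P<ident>[A-Za-z_]\w*)|(?P<float>\d+\.\d+)|(?P<int>\d+)|(?P<dot>\.)"
)

def lex(src: str):
    """Return a list of (token_kind, text) pairs."""
    return [(m.lastgroup, m.group()) for m in TOKEN.finditer(src)]
```

                                                                  Here lex("foo.1.1") comes back as [('ident', 'foo'), ('dot', '.'), ('float', '1.1')], i.e. a single field access whose “field” is the float literal 1.1, while lex("foo._1._1") tokenizes unambiguously into identifiers and dots with no contextual tricks.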

                                                                  When a module imports another module, it only has to look at the defined type signature. This makes type checking embarrassingly parallel at the module level.

                                                                  Yay! To fully reap the benefits here, you also need to be careful to make compilation embarrassingly parallel at the file level within a single “library” (this mostly boils down to having fully qualified names in things, and avoiding fixed points in name resolution), and to make dependencies between libraries explicit (so that there’s explicit DAG of libraries, rather than a global, flat search path).
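                                                                  The shape of that parallelism is roughly this (a toy sketch in Python: check_module stands in for real type checking, and the module and signature names are invented):

```python
from concurrent.futures import ThreadPoolExecutor

def check_all(modules, deps, signatures, check_module):
    """Type-check modules in dependency order, in parallel waves.

    `deps` maps a module to the set of modules it imports. Because
    checking a module only needs its dependencies' *signatures*, every
    module whose dependencies are finished can be checked concurrently.
    """
    done, results = set(), {}
    with ThreadPoolExecutor() as pool:
        while len(done) < len(modules):
            ready = [m for m in modules if m not in done and deps[m] <= done]
            if not ready:
                raise ValueError("cyclic imports among remaining modules")
            futures = {
                m: pool.submit(check_module, m, {d: signatures[d] for d in deps[m]})
                for m in ready
            }
            for m, f in futures.items():
                results[m] = f.result()   # propagate any type errors
            done.update(ready)
    return results
```

                                                                  Each wave only waits on signatures computed in earlier waves, so modules within a wave never block on each other.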

                                                                  PS: syntax highlighting & line wrapping of code examples seem broken, at least on iPad.

                                                                  1. 1

                                                                    From experience with Rust, I’d suggest naming fields as _1, _2 …

                                                                    Ah, I wondered why some languages did this; that makes total sense. You just saved me a potential headache later down the line!

                                                                    Yay! To fully reap the benefits here, you also need to be careful to make compilation embarrassingly parallel at the file level within a single “library” (this mostly boils down to having fully qualified names in things, …

                                                                    If I understand this correctly, then that is my plan. Sasquach’s imports map roughly to Java’s. All runnable code must exist within a module, which corresponds to a classfile. Modules are declared in files, and each file can have many modules. Files map to packages, which resemble Rust’s modules:

                                                                    file: src/foo/bar/other.sasq
                                                                    // package here is foo/bar/other
                                                                    Other { ... }
                                                                    
                                                                    file: src/foo/bar/bar.sasq
                                                                    // When the file has the same name as the folder, it elides the second qualification. It acts like mod.rs in Rust, however
                                                                    // I prefer to use the same name as the folder so then you don't have to disambiguate between different mod.sasq
                                                                    // tabs in your editor
                                                                    Foo { 
                                                                        doWork = (): void -> { ... },
                                                                    }
                                                                    
                                                                    Fooz {
                                                                        // Might elide the "foo/bar" part or follow Rust's model and do self/Foo. Likely the latter
                                                                        use foo/bar/Foo,
                                                                        // Filename is included in the import path unless the file has the same name as the parent folder
                                                                        use foo/bar/other/Other,
                                                                        ...
                                                                    }
                                                                    

                                                                    avoiding fixed points in name resolution

                                                                    Not quite sure what this means.

                                                                    make dependencies between libraries explicit (so that there’s explicit DAG of libraries, rather than a global, flat search path)

                                                                    Absolutely, I plan on taking advantage of the Java module system + jlink for this instead of just using the classpath.

                                                                    PS: syntax highlighting & line wrapping of code examples seem broken, at least on iPad.

                                                                    Ah yeah, I see that on mobile. I’m using a hosted platform for the site, but I’ll see if I can fix it via CSS.

                                                                  1. 14

                                                                      This was an absolutely brilliant article! It was fantastically well researched and written by someone with expert knowledge of the domain. I’m learning so much from reading it.

                                                                      The argument about representing JSON objects in SQL was not persuasive to me. I do not really understand why this would be desirable. I see the SQL approach as a more statically typed one, where you would process JSON objects and ensure they fit a predefined structure before inserting them into SQL. For a more dynamic approach where you just throw JSON objects into a database, you have MongoDB. On that note, I think the lack of union types in SQL is a feature more than a limitation, isn’t it?

                                                                    Excellent point about JOIN syntax being verbose, and the lack of sugar or any way to metaprogram and define new syntax. The query language could be so much more expressive and easy to use.

                                                                    It totals ~16kloc and was mostly written by a single person. Materialize adds support for SQL and various data sources. To date, that has taken ~128kloc (not including dependencies) and I estimate ~15-20 engineer-years

                                                                      I think these line counts say a lot! The extra work of trying to fulfill all the criteria of the SQL standard isn’t necessary for implementing a database system. A more compact language specification would let implementations be shorter and make the language much easier for people to learn.

                                                                    The overall vibe of the NoSQL years was “relations bad, objects good”.

                                                                      The whole attitude of the NoSQL movement put me off it a lot. Lacking types and structure never sounded like an improvement to me - more like people just wanted to skip the boring work of declaring tables and such. But that work is a foundation for things to work smoothly, so I think the more dynamic approach will often bite you in the end. But then the author explains more about GraphQL, and honestly it sold me on GraphQL; I would be very open to using that in future rather than SQL after reading this.

                                                                    Strategies for actually getting people to use the thing are much harder.

                                                                      This is a frustrating part of innovation in programming, but honestly I believe the ideas he has presented are too significant an improvement for people not to start using them.

                                                                    1. 7

                                                                      If you have data encoded in a JSON format, it often falls naturally into sets of values with named fields (that’s the beauty of the relational model) so you can convert it into a SQL database more or less painlessly.

                                                                      On the other hand, if you want to store actual JSON in a SQL database, perhaps to run analytical queries on things like “how often is ‘breakfast’ used as a key rather than as a value”, it’s much more difficult, because “a JSON value” is not a thing with a fixed representation. A JSON number might be stored as eight bytes, but a JSON string could be any length, never mind objects or lists. You could create a bunch of SQL tables for each possible kind of JSON value (numbers, strings, booleans, objects, lists) but if a particular object’s key’s value can be a number or a string, how do you write that foreign key constraint?

                                                                      Sure, most applications don’t need to query JSON in those ways, but since the relational model is supposed to be able to represent any kind of data, the fact that SQL falls flat on its face when you try to represent one of the most common data formats of the 21st century is a little embarrassing.

                                                                      That’s what the post means by “union types”. Not in the C/C++ sense of type-punning, but in the sense of “a single data type with a fixed number of variants”.
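A minimal TypeScript sketch of that idea (the tag names here are invented for illustration): each kind of JSON value becomes one arm of a discriminated union, and a query like the “breakfast as a key” one above becomes a recursive walk over the variants.

```typescript
// One tag per JSON variant — "a single data type with a fixed number of variants".
type Json =
  | { kind: "null" }
  | { kind: "bool"; value: boolean }
  | { kind: "number"; value: number }
  | { kind: "string"; value: string }
  | { kind: "array"; items: Json[] }
  | { kind: "object"; entries: Record<string, Json> };

// Count how often `key` is used as an object key anywhere in the value.
function countKey(j: Json, key: string): number {
  switch (j.kind) {
    case "array":
      return j.items.reduce((n, item) => n + countKey(item, key), 0);
    case "object":
      return Object.entries(j.entries).reduce(
        (n, [k, v]) => n + (k === key ? 1 : 0) + countKey(v, key),
        0
      );
    default:
      return 0; // scalars contain no keys
  }
}
```

Encoding this same union relationally is exactly the pain point the parent describes: you’d need a tag column plus one table per variant, and the “foreign key into one of five tables” problem appears immediately.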

                                                                      1. 4

                                                                        A JSON number might be stored as eight bytes

                                                                        Sorry to nitpick, but a JSON number can be of any length. I think what you were thinking of was JavaScript, in which numbers are represented as 64-bit values.

                                                                        1. 1

No, the JSON standard provides for a maximum number of digits in numbers. Yes, I know this because of a bug that came from assuming JSON numbers could be any length.

Edit: I stand corrected - I was certain I saw something in the standard about a limit (I was surprised), but it seems there isn’t. That said, implementations are allowed to limit the length of numbers they process: https://datatracker.ietf.org/doc/html/rfc7159#section-6

                                                                          1. 5

                                                                            Which standard? ECMA-404 doesn’t appear to have a length limitation on numbers. RFC 8259 says something much more specific:

                                                                            This specification allows implementations to set limits on the range and precision of numbers accepted. Since software that implements IEEE 754 binary64 (double precision) numbers [IEEE754] is generally available and widely used, good interoperability can be achieved by implementations that expect no more precision or range than these provide, in the sense that implementations will approximate JSON numbers within the expected precision. A JSON number such as 1E400 or 3.141592653589793238462643383279 may indicate potential interoperability problems, since it suggests that the software that created it expects receiving software to have greater capabilities for numeric magnitude and precision than is widely available.

                                                                            In fewer words, long numbers are syntactically legal but might be incorrectly interpreted depending on which implementation is decoding.
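To make that concrete, here is what a double-based parser such as Node’s `JSON.parse` does with the RFC’s own two examples. Both strings are syntactically valid JSON numbers, but neither survives the round trip intact:

```typescript
// 1E400 exceeds the range of an IEEE 754 double, so it overflows.
const big = JSON.parse("1E400");
console.log(big); // Infinity

// 30 digits of pi are silently rounded to the nearest representable double.
const pi = JSON.parse("3.141592653589793238462643383279");
console.log(pi); // 3.141592653589793 (i.e. Math.PI)
```

A parser backed by arbitrary-precision numbers (Python’s `int`, for instance) would accept the same input and keep integer digits exact, which is exactly the interoperability gap the RFC is warning about.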

                                                                            1. 1

The ECMA-404 standard doesn’t talk about any numerical limits at all, and RFC 7159 talks about implementation-specific limitations, which a) is kind of obvious, because RAM isn’t unlimited in the real world, and b) doesn’t buy you anything if you are implementing a library that needs to deal with JSON as it exists in the wild.

So yes, JSON numbers can be of unlimited magnitude and precision, and any correct parsing library had better deal with this.

                                                                        2. 5

                                                                          Lacking types and structure never sounded like an improvement to me - more like people just wanted to skip the boring work of declaring tables and such.

                                                                          To some degree it’s the same as the arguments in favor of dynamically-typed languages. Just s/tables/variable types/, etc.

Also, remember the recent post that included corbin’s (?) quote about “you can’t extend your type system across the network” — that was about RPC, but it applies to distributed systems as well, and the big win of NoSQL originally was horizontal scaling, i.e. distributing the database across servers.

                                                                          [imaginary “has worked at Couchbase for ten years doing document-db stuff” hat]

                                                                          1. 3

                                                                            The whole attitude of the NoSQL movement put me off it a lot. Lacking types and structure never sounded like an improvement to me - more like people just wanted to skip the boring work of declaring tables and such.

I always thought that NoSQL came about because people didn’t feel like dealing with schema migrations. I’ve certainly dreaded any schema migration that did more than just add or remove columns. But I’ve never actually tried NoSQL “databases”, so I can’t speak to whether they actually help.

                                                                            1. 13

In practice you still need to do migrations, in the form of deploying code that writes the new column in a backwards-compatible way and then later removing that compatibility layer. The intermediate deployments that let new and old code live side by side, and that allow a safe rollback, are required whether you use SQL or not. The only difference is that you don’t have to actually run a schema migration. A downside is that it’s much easier to miss what actually turns out to be a schema change in code review, since there are no explicit “migration” files to look for.
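A minimal sketch of that backwards-compatible layer (the document shape and field names here are invented for illustration): a new field is added, but old documents were written without it, so every read path has to tolerate its absence until a backfill or the compatibility window ends.

```typescript
interface UserDoc {
  email: string;
  displayName?: string; // added later; documents written by old code lack it
}

// Defensive read: fall back for documents the old code path wrote.
function displayNameOf(doc: UserDoc): string {
  return doc.displayName ?? doc.email.split("@")[0];
}
```

This is the “migration” living in application code — the schema change is real, it just isn’t written down anywhere a reviewer can grep for.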

                                                                              1. 10

This! You’re basically sweeping dirt under the carpet; one day you’re going to have to deal with it.

                                                                              2. 11

                                                                                In my experience this leads to data inconsistencies and the need to code defensively or otherwise maintain additional application code.

                                                                                1. 9

                                                                                  Not if you’re hopping jobs every 1-2 years. If you’re out the door quickly enough, you can honestly claim you’ve never run into any long-term maintainability issues with your choice of technologies.

                                                                                2. 3

                                                                                  I always thought that NoSQL came about because people didn’t feel like dealing with schema migrations.

I think that’s unlikely; most NoSQL people probably have no idea what schema migrations are.

                                                                              1. 2

I’m working on adding parametric polymorphism to Sasquach, and I’m pretty close to getting it working. After that I’ll either write a blog post about the type system or start working on the stdlib.

                                                                                1. 2

                                                                                  Based on my experiences with Typescript, I’m guessing compiling it is NP-extrahard.

                                                                                  1. 1

                                                                                    It’s undecidable (and its type system is actually unsound).
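One classic demonstration of the unsoundness (a generic sketch, not tied to any real codebase): TypeScript treats arrays as covariant, so a program can type-check cleanly while putting the wrong kind of value into an array.

```typescript
class Animal { name = "animal"; }
class Dog extends Animal { bark() { return "woof"; } }

const dogs: Dog[] = [];
const animals: Animal[] = dogs; // accepted: Dog[] is assignable to Animal[]
animals.push(new Animal());     // type-checks, but puts a non-Dog into dogs

const d: Dog = dogs[0];         // statically a Dog...
// d.bark();                    // ...but this would throw at runtime
```

The compiler accepts every line here, yet the static type of `dogs[0]` is a lie — which is what “unsound” means in practice.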

                                                                                    1. 1

Yep, it was a bad joke; I saw your very useful link. Anecdotally, I see this occur most frequently when a function has one or more type parameters and is overloaded. I found a case yesterday where I couldn’t get it to select the right overload, even with explicit type parameters and argument types.

                                                                                      1. 1

                                                                                        and its type system is actually unsound

Doesn’t this make it very decidable, like O(1) decidable?

                                                                                        1. 1

                                                                                          In what way?

                                                                                          1. 1

                                                                                            If it’s unsound, you should be able to prove anything, so just always return “this doesn’t typecheck” or always return “this is an int” and you’re not wrong.

                                                                                            (I may be thinking of types as a proof system slightly more than they actually are)

                                                                                    1. 13

                                                                                      Somewhat hidden in there is a draft of A Tour of the Oil Language, one of the most important docs! It has a few TODOs but is probably digestible to those casually following the project. Feedback welcome as always.

                                                                                      1. 2

                                                                                        The Tour looks great! I’ve been looking forward to a more concise overview of the language for a while.