1. 1

    I have been using Fastmail in parallel with Gmail for a few months now. It’s great, but I really miss Gmail’s smart tabs (Social, Promotions, Updates, Forums). They really help with quickly sorting my email. Because of this, Gmail remains my main workhorse…

    1. 2

      Great post about a complex topic!

      I think Swift “witness tables” could be a great fit for generics in Go (making it possible to implement generic data structures without boxing and without monomorphization, by passing the items’ type and size as parameters). That’s more or less how the built-in generic data structures (slices and maps) are already implemented.
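
      To illustrate (a rough sketch of my own, not anything from the actual proposals): the “witness table” is just a bundle of type metadata passed alongside an untyped pointer, so the container needs neither boxing nor per-type copies of the code. All names below are hypothetical.

      package main

      import (
          "fmt"
          "unsafe"
      )

      // witness describes a type to generic code: its size in bytes plus
      // whatever operations the container needs (here, just equality).
      type witness struct {
          size  uintptr
          equal func(a, b unsafe.Pointer) bool
      }

      var intWitness = witness{
          size: unsafe.Sizeof(int(0)),
          equal: func(a, b unsafe.Pointer) bool {
              return *(*int)(a) == *(*int)(b)
          },
      }

      // contains scans n contiguous elements described by w: no boxing,
      // and no per-type specialization of this function.
      func contains(w witness, base unsafe.Pointer, n int, x unsafe.Pointer) bool {
          for i := 0; i < n; i++ {
              elem := unsafe.Pointer(uintptr(base) + uintptr(i)*w.size)
              if w.equal(elem, x) {
                  return true
              }
          }
          return false
      }

      func main() {
          xs := []int{1, 2, 3}
          x := 2
          fmt.Println(contains(intWitness, unsafe.Pointer(&xs[0]), len(xs), unsafe.Pointer(&x))) // true
      }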

      1. 9

        Disclaimer: I work at Google (on an unrelated product team)

        It was not a false flag. I feel upset reading this. Articles like this don’t help me talk to others as a developer working at a corporation; they just make me want to shut up so I don’t get quoted like Russ.

        1. 4

          I’m sorry that the article came across this way. It was not my intention to malign the good work of Russ or the rest of the Go team. I think they do good work and I’m a big fan of Go (I even wrote a book about it). In fact I think they usually make better decisions than the community. Vgo was just better than dep, which has only become more apparent the more I’ve used it. I think it’s great that he was able to make the tough call and do what he did, despite the social ramifications. Go would’ve been worse off if he hadn’t.

          I quoted Russ because it was illustrative of an issue in the Go community about trust and ownership. It’s not any individual’s fault, and I actually think the issue is a bit overblown. The Go team has nearly impossible demands placed upon them and I think they do a good job of managing – far better than I could in a similar circumstance. Nevertheless, even if not wholly deserved, the issue is a real one, and this latest issue with the try proposal really strained the community. Had it gone through I suspect there may have been long-time developers who would’ve given up on the language, or at least pulled back from future involvement, distraught from a project that seemed to have taken a very different turn from what they signed up for. Is that fair? Probably not. But the tension was real regardless.

          The article was meant in jest. Of course the proposal wasn’t a false flag. I suppose the last comment may not have made that clear.

          I think the objective outcome here is great. I don’t think try would’ve been a good addition to the language. Like a lot of Go developers, I didn’t find it particularly Go-like, and it left a lot of us scratching our heads. So for things to pan out the way they did was a surprisingly good outcome, so good that you wonder if it wasn’t planned.

          Happy accidents do exist, but I wanted to just be a bit playful with an absurd conspiracy theory. I see now how this could be misinterpreted due to sloppy writing. The fault is entirely my own and I apologize.

          1. 5

            Oh, I’m sorry for my emotional reaction :/ Thanks so much for the explanation, I get where you’re coming from now.

            I think I’m too attached emotionally and shouldn’t comment on these discussions. They’re 100% important conversations to have. Lobsters has made me think a lot about corporate teams working on public technology, their relationship with the community and the risks that dissonance between company goals and community goals can pose – I absolutely want to read more and think more about it.

            1. 1

              I’m afraid the final disclaimer in particular may be much too small, both in font size and in length/depth. Even after seeing it, I still absolutely didn’t grasp what you really meant by it until I read your comment above. In other words, I’d say it doesn’t counter-balance the initial part of the article well enough; to me it even seems half-hearted enough that I took it more as an “I don’t really mean it, hee hee, right? wink, wink, nudge, nudge, or do I? ;D”

              Please remember the old truth that on the Internet it’s not possible for the reader to know whether the writer is being sarcastic or really means it, while the reader is often convinced they know the answer. It’s unfortunate, but I’ve seen it introduce problems far too many times. To the extent that I’ve even seen it used as an explicit and effective trolling tactic, to seed conflict in a community.

              1. 0

                Vgo was just better than dep, which has only become more apparent the more I’ve used it. I think it’s great that he was able to make the tough call and do what he did, despite the social ramifications.

                So true. I’ve been thinking this during the whole drama. Happy to read it here.

                Had it gone through I suspect there may have been long-time developers who would’ve given up on the language

                This goes both ways. I agree that maybe try was not the right choice, but I could consider giving up on the language if nothing is done about error-handling verbosity.

                PS: Nice and funny article by the way ;-)

              2. 1

                If this ever happens to you, then hopefully the takeaway is to apply the feedback of your peers so that they can help you think about your actions, instead of being inspired to never communicate again. The initial problem that spawned this now-paranoid group of Go fans was a lack of communication from Russ, so not communicating is the very behavior that caused the issue in the first place.

                There’s also always the option of not working for big evil corporations. :)

              1. 13

                I think it was a big mistake not to accept the try proposal. I was actually thinking to myself, man, Go is catching up to Zig’s error handling abilities, this is closing the gap. But now I’m raising an eyebrow and wondering what they’re thinking. The reasoning for closing the proposal seems to be “we didn’t explain it well enough”. That’s kinda odd.

                1. 4

                  I was also surprised, and from the practical point of view of a professional Go developer I also find this decision kind of a pity.

                  That said, I do kinda see some arguments that could be what made them pause.

                  Or at least personally, the thing that made me the most uneasy about this proposal was that it basically encouraged a significantly different approach to error handling than what was “officially” considered idiomatic in Go beforehand. Notably, if your code was written in the “idiomatic way”, you wouldn’t really be able to change it to anything better with try. What I mean by “idiomatic” here is adding some extra info/context to the error message, and maybe also doing some extra logic in the “if err” block, as opposed to a trivial if err != nil { return err }, which is not considered officially idiomatic in Go, though it’s quite commonly found in real-life code (esp. with helper libs like log15.v2 etc.).
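
                  To make the contrast concrete, a quick sketch (function and helper names are mine, purely illustrative):

                  package main

                  import (
                      "fmt"
                      "os"
                  )

                  type Config struct{} // fields elided

                  func parse(f *os.File) (*Config, error) { return &Config{}, nil } // assumed stub

                  // The "idiomatic" style: each failure point adds its own context.
                  func loadConfig(path string) (*Config, error) {
                      f, err := os.Open(path)
                      if err != nil {
                          return nil, fmt.Errorf("loading config %s: %v", path, err)
                      }
                      defer f.Close()
                      return parse(f)
                  }

                  func main() {
                      _, err := loadConfig("/no/such/file")
                      fmt.Println(err)
                      // loading config /no/such/file: open /no/such/file: no such file or directory
                  }

                  Under try, the Open line would collapse to f := try(os.Open(path)), which returns the error untouched; adding per-call-site context like the above would instead require one deferred handler shared by the whole function.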

                  This is a somewhat subtle thing, and I’m not sure I managed to explain it well enough (I can try giving more examples if you’d like, maybe in private if you prefer). I’m not even 100% sure that’s their actual main reason, but personally it was the one thing that made me feel not 100% good about this proposal from the beginning. And it kinda helps me rationalize the rejection, although from a short-sighted, immediate perspective I’d much prefer to have try.

                  1. 3

                    Thanks for this explanation! This made a lot more sense to me than the official explanation on the issue tracker.

                    As a counter-argument (not to you, but for the proposal in general): try allows the language to automatically add context to an error as it bubbles up the stack. For example, it could add the source file, line, column, and function name where the try occurred. Once the error reaches a point where the application reports it or decides to panic because of it, the data attached to the error explains exactly what the developer needs to know about what happened.

                    In Zig this is called error return tracing and I’m getting really positive feedback about it.
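
                    In today’s Go you can approximate that by hand; a minimal sketch (the Wrap helper is hypothetical; the actual try proposal specified nothing like it):

                    package main

                    import (
                        "errors"
                        "fmt"
                        "runtime"
                    )

                    // Wrap records the function, file, and line of its caller alongside
                    // the error: roughly the context an automatic mechanism could capture
                    // at each propagation point.
                    func Wrap(err error) error {
                        if err == nil {
                            return nil
                        }
                        pc, file, line, _ := runtime.Caller(1)
                        fn := runtime.FuncForPC(pc).Name()
                        return fmt.Errorf("%s (%s:%d): %v", fn, file, line, err)
                    }

                    func readConfig() error {
                        return Wrap(errors.New("file not found"))
                    }

                    func main() {
                        fmt.Println(readConfig()) // e.g.: main.readConfig (main.go:23): file not found
                    }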

                    1. 3

                      Yep, you’re technically not wrong (regarding the counter-argument). The thing is, stack traces are somewhat controversial in the Go community.

                      I mean, it’s technically easy to attach a stack trace to an error in Go when it’s created, if that’s what you want (my last employer’s codebase works that way). If you take a mental step back, however, there are some interesting issues with stack traces, especially if you try to use only stack traces when dumping error messages:

                      • stack traces are tightly coupled to a specific version of a codebase; the very next commit may make the line numbers in your stack trace invalid/misleading; thus, you must track the codebase of a binary very carefully; as a corollary, stack traces mean little when analyzed in isolation, without source code.
                      • pure stack traces will still lack context that may be important at intermediate steps/levels of the stack (e.g. values of some local variables that may help when debugging, or may otherwise shed more light on what was the meaning of the call in a particular frame).
                      • also, stack traces tend to be noisy, making it somewhat tedious to find actually important information in them.
                      • stack traces are arguably developer friendly, but not very end-user friendly.

                      In contrast, an error message “officially” seen as “idiomatic” by the Go “fathers” could look kinda like the one below when emitted from a program (with no formal stack trace):

                      error: backup of /home/akavel to /mnt/pendrive failed: cannot write /mnt/pendrive/foo/bar: device not ready
                      

                      With some care, such an error message can be short, informative, give some potentially important extra context (e.g. the /home/akavel path as the source of the backup), be time-proof, self-contained, arguably more end-user friendly, and still usually make it possible to trace a concrete call stack in the codebase that emitted the error (though with some extra work compared to raw stack traces).
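
                      In code, such a message is typically built by wrapping at each level of the call chain; a minimal sketch that reproduces the example above (helper names are mine):

                      package main

                      import "fmt"

                      func writeFile(path string) error {
                          return fmt.Errorf("cannot write %s: device not ready", path)
                      }

                      func backup(src, dst string) error {
                          // each layer prepends its own context before returning
                          if err := writeFile(dst + "/foo/bar"); err != nil {
                              return fmt.Errorf("backup of %s to %s failed: %v", src, dst, err)
                          }
                          return nil
                      }

                      func main() {
                          if err := backup("/home/akavel", "/mnt/pendrive"); err != nil {
                              fmt.Println("error:", err) // prints the message shown above
                          }
                      }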

                      I don’t claim they are perfect, or that they are strictly better than stack traces. But I do understand their advantages and find this dilemma quite interesting and challenging (no clear winner to me). Also, it’s worth noting that this area is now being further explored by the Go 2 proposals, esp. the “error values” one, with regard to how the ergonomics here could maybe be further improved, both for error-message writers and readers.

                      1. 2

                        That’s a fair point about different versions of code with regards to stack traces.

                        And I do like your example quite a bit. That really is an amazingly helpful error message, both to end users and to developers. If Zig had the ability to do hidden allocations like Go does I would be all over this.

                        Even without that though, maybe there is a way… perhaps with something like this proposal.

                        1. 3

                          If you haven’t yet, please try and take a look at what is explored in the relevant Go 2 design draft (https://go.googlesource.com/proposal/+/master/design/go2draft-error-values-overview.md). Even if you don’t understand the whole context, I think there are some potentially interesting thoughts and considerations there. Please note also that this is a very early stage “thought experiment”/exploration that is not even at a proposal/RFC stage yet (or rather, it is something that could match the phrase “request for comments” if that phrase were treated literally and without historical baggage).

                          As to the Zig proposal you linked, the one thing sorely missing for me there, in order to fully understand it, is what kind of error messages/traces this could enable. I don’t see any example output there. Would it allow printing extra information as part of Zig’s “error return tracing”? Or let the programmer build error messages like what I’ve shown? I don’t know Zig well enough to understand the unstated consequences. So we’re now in a reversed situation: previously I explained to you the unstated Go context that you didn’t have, and now it’s me who doesn’t have the Zig context ;)

                        2. 2

                          I actually don’t find the ‘idiomatic Go error message’ example very helpful. I see why the error happened, but what can I do to fix it, and where? Those are really the kinds of questions stack traces answer. I also don’t find line numbers shifting over time to be a very compelling argument against them. Stack traces are meant to be used to jump to the exact lines the error travelled through, and typically when you’re debugging you already know the specific commit you’re debugging (some commit SHA of a deployment), so you would already have that commit checked out and could trace the error with accurate line numbers.

                          1. 1

                            In this particular example, the answer to your questions (what, where) would be something like: “Insert the backup pendrive back into the USB port.” That’s not something one could fix in code in any way, so stack traces are actually of no use at all here! (OK, one could maybe make the code ignore the error and carry on, but the message would still have to land in the logs.)

                            Other than that, as I said, the “idiomatic” errors are not perfect and have disadvantages vs. stack traces, the main (only?) and most important one being how easy and powerful it is to jump through code when you do have the call stack with line numbers and do know the commit SHA. And please note that the Go 2 draft designs do try to explore whether and how the advantages of both approaches could be fused together.

                        3. 2

                          try allows the language to automatically add context to an error as it bubbles up the stack

                          Nothing in the proposal mentioned anything like this, and if you mean that users could combine try with deferred functions that annotated all errors returned in the function scope the same way, well, (a) that was already possible, and (b) it’s significantly worse than doing individual in-situ annotations, because (i) it physically separates error generating statements from the code that handles them, and (ii) it forces all errors that escape a function to be annotated the same way.
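
                          For reference, a sketch of that defer-based pattern (function names are mine; the pattern itself has long been possible in Go):

                          package main

                          import (
                              "errors"
                              "fmt"
                          )

                          func checkSrc(src string) error     { return nil }                     // assumed helper
                          func copyAll(src, dst string) error { return errors.New("disk full") } // assumed helper

                          // Every error escaping copyTree gets the same annotation,
                          // regardless of which statement produced it: the drawback
                          // described in (ii) above.
                          func copyTree(src, dst string) (err error) {
                              defer func() {
                                  if err != nil {
                                      err = fmt.Errorf("copyTree %s -> %s: %v", src, dst, err)
                                  }
                              }()
                              if err := checkSrc(src); err != nil {
                                  return err
                              }
                              return copyAll(src, dst)
                          }

                          func main() {
                              fmt.Println(copyTree("/src", "/dst")) // copyTree /src -> /dst: disk full
                          }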

                        4. 1

                          Like you, I’d prefer to have try immediately, but I agree it introduces a style which is not idiomatic.

                        5. 1

                          I also feel like they abandoned the proposal mostly to keep the peace in the community. That said, catch and try fit better in Zig because you have error traces. Go doesn’t have error traces, which is why people insist on decorating errors, and why they dislike a try that only lets you return “naked” errors.

                        1. 13

                          If Go 2.0 gets generics, as this piece asserts, why not add sum types as well, and then use said generics to implement a monadic error handling scheme?
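
                          For instance, something like the sketch below (hypothetical: Go has no generics today, so this is just one imaginable surface syntax for the idea):

                          package main

                          import (
                              "fmt"
                              "strconv"
                          )

                          // Result is a poor man's sum type: exactly one of val or err is meaningful.
                          type Result[T any] struct {
                              val T
                              err error
                          }

                          func Ok[T any](v T) Result[T]         { return Result[T]{val: v} }
                          func Fail[T any](err error) Result[T] { return Result[T]{err: err} }

                          // Then chains a computation, short-circuiting on error: the monadic
                          // bind that hides the if err != nil boilerplate.
                          func Then[T, U any](r Result[T], f func(T) Result[U]) Result[U] {
                              if r.err != nil {
                                  return Fail[U](r.err)
                              }
                              return f(r.val)
                          }

                          func parse(s string) Result[int] {
                              n, err := strconv.Atoi(s)
                              if err != nil {
                                  return Fail[int](err)
                              }
                              return Ok(n)
                          }

                          func main() {
                              doubled := Then(parse("21"), func(n int) Result[int] { return Ok(n * 2) })
                              fmt.Println(doubled.val, doubled.err) // 42 <nil>
                          }

                          Without real sum types, though, nothing stops a caller from reading val while err is set, which is part of why the question asks for both features together.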

                          1. 3

                            Rust has generics and sum types, and error handling was still verbose until the introduction of the try! macro and, eventually, the ? operator. The issue try was trying to solve is about control flow, and that can only be solved with macros, a new built-in function, or a new keyword.

                            1. 2

                              Sum types have been proposed before. Here is a Reddit thread that discusses some of the difficulties involved in adding them.

                            1. 6

                              After reading this article, I read a bit more about third-party cookie blocking, and I was reminded that cookies are not the only way to track internet users: localStorage and cache tracking with HTTP ETag also enable tracking.
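
                              To make the ETag trick concrete, a minimal sketch of how a server could abuse it (my own illustration, not any real tracker’s code):

                              package main

                              import (
                                  "crypto/rand"
                                  "encoding/hex"
                                  "log"
                                  "net/http"
                              )

                              // The server hands each new browser a unique ETag; the browser
                              // echoes it back in If-None-Match on every revalidation, so the
                              // ETag behaves exactly like a cookie.
                              func pixel(w http.ResponseWriter, r *http.Request) {
                                  if tag := r.Header.Get("If-None-Match"); tag != "" {
                                      log.Printf("returning visitor: %s", tag) // the tag is the tracking ID
                                      w.WriteHeader(http.StatusNotModified)
                                      return
                                  }
                                  buf := make([]byte, 8)
                                  rand.Read(buf)
                                  w.Header().Set("ETag", `"`+hex.EncodeToString(buf)+`"`)
                                  w.Header().Set("Cache-Control", "max-age=31536000")
                                  w.Write([]byte{0x47}) // tiny body; the content is irrelevant
                              }

                              func main() {
                                  http.HandleFunc("/pixel", pixel)
                                  log.Fatal(http.ListenAndServe(":8080", nil))
                              }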

                              This led me to Safari, which partitions cookies, cache, and HTML5 storage for all third-party domains. As far as I know, Firefox (and Chrome, of course) don’t do that.

                              It’s easy to check with the browser developer tools open:

                              • Empty your cache.
                              • Visit a website that uses a given font hosted on fonts.google.com.
                              • You should see the HTTP request for the font in the network tab.
                              • Then visit another website that uses the same font on fonts.google.com.
                              • In Safari, you should see another HTTP request, because the cache is not shared.
                              • In Chrome and Firefox, you’ll see the font is retrieved from the cache.

                              Kudos to Apple for their work on the privacy features of Safari! I’d be happy to see Firefox put the same emphasis on this :-)

                              1. 6

                                Enable first-party isolation in about:config (or download the web extension with the same name).

                                1. 2

                                  To be honest, I do want resources such as fonts and JavaScript libraries to be cached across domains. Cookies, no.

                                  1. 1

                                    You don’t. Because those libraries and fonts basically act as cookies.

                                    1. 2

                                      How so? If they are accessed at a common URL from a CDN, how could they be used to track a user across domains? Serious question, I am trying to understand the threat model.

                                  2. 2

                                    Caching of reused assets across domains is useful when accessing the internet over a metered satellite link, where every byte counts and latency is often measured in seconds.

                                  1. 1

                                    The future is exciting.

                                    Yes, I admit I enjoyed the article :-) But maybe the future should be about using machine learning to fight climate change and other civilizational challenges, instead of this? But yeah, this is impressive :-)

                                    1. 1

                                      I’m glad to have a non-negative application!

                                    1. 2

                                      I’m particularly entertained by the results of the survey on the topic: https://www.reddit.com/r/rust/comments/bju8di/asyncawait_syntax_survey_results/

                                      There’s no syntax that people actually like; all of them are widely disliked. Bit of a comment on human nature, there. So, this seems a perfectly reasonable choice.

                                      We COULD all just use S-exprs and have parentheses do all the hard work for us, but noooo, people don’t like that for some reason. (Honestly I find they get rather clunky for simple things like array and struct references, as well as type annotations, but for anything more complicated/situational they make all these questions just magically evaporate.)

                                      1. 0

                                        IMO Lisp was written for computers, not people. People read left to right, not inside to outside. After about 4 layers, Lisp becomes unreadable unless you resort to formatting gymnastics. This is one of the reasons for the rise of Python and YAML: people want something that is readable. Method chaining à la Ruby or JavaScript is dead simple to read.

                                        1. 7

                                          People read left to right

                                          Reminder that this isn’t even remotely universally true; you’re over-generalizing from one cultural viewpoint.

                                          Point being: if you’re going to make an argument from naturalness, make sure you don’t pick something that’s actually arbitrary. People are actually incredibly flexible about reading order. You’ll quickly find when learning Arabic that reading right-to-left presents no challenges, and similarly when learning East Asian languages that top to bottom, right to left is not discomforting either. Hell, Boustrophedon order (flipping between lines) isn’t hard either.

                                          The notion that Lisp is some unnatural, unmasterable order seems implausible. It’s largely left-to-right, and the right-to-left precedence hierarchy from thing-operated-on out to operations-on-thing is no different from the way English itself structures modifiers and objects.

                                          1. 5

                                            People read left to right, not inside to outside

                                            @cup compared reading “left to right” with reading “inside to outside”. He/she could also have written “people read right to left, not inside to outside”, with the same conclusion. I’m not aware of any human language where sentences are read from “inside to outside”.

                                            1. 4

                                              “Reads inside to outside” is a poor description of the way Lisp reads: it goes left to right, it’s just not strictly left-associative. No human language is either. Certainly not English.

                                              Consider “gather the tall roses in the brown basket, quickly”. Does English read “inside out” because the things being operated on by the verb “gather” are specified right-to-left (subject: roses. What kind of roses? roses which are tall) in the middle of the sentence? Does it read right to left because the adverb “quickly” applies to the clause that comes immediately to its left?

                                              Obviously not.

                                              “Left to right” is about how we lay words out on a page, not how we construct meaning. As readers we parse the complicated structure of English in a very complex manner that involves relationships between words flowing in a variety of directions.

                                              We lay Lisp out left to right, similarly. Its operands might fall “in the middle” and be composed of complex clauses themselves, but again that’s hardly unique — most languages aren’t SVO, so at least one of Subject and the Object will be clause-internal. Even a classically SVO language like English has a variety of non-SVO forms, as above. In a language like Japanese, which I also speak, clauses are typically SOV, with clauses able to serve as subject or object in larger constructions (as in English), so you frequently see sentences of the form S(SOV)V.

                                              No sane person would claim Japanese reads “inside out”.

                                              (gather (roses :tall) (basket :brown) :quickly) is no more “inside out” than English, or Japanese. It’s just a VSO-style ordering, which again is permissible even in English. It’s also the default ordering in Arabic, Welsh, Filipino, …

                                              1. 2

                                                Interesting. I agree that Lisp is mostly read left-to-right, top-to-bottom. I just have to keep in mind some kind of stack to know where I am in the nested parentheses, but that’s a matter of habit, and no different from parsing an English sentence, as explained in your comment. Thanks for the thorough comment :-)

                                          2. 1

                                            Lucky for me, then, that my next-favorite language is Forth. :D

                                        1. 4

                                          If you haven’t already read “What Color is Your Function?”, I highly recommend it: https://journal.stuffwithstuff.com/2015/02/01/what-color-is-your-function/

                                          I wonder why so many languages opted for async/await rather than threads. I understand that granting untrusted code the power to create threads is a risk, so at least in JavaScript’s case it makes some sense. But I find it curious that languages like Go are the exception, not the norm. (My own language also uses threads.)
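
                                          For contrast, a minimal sketch of what the thread-based model makes trivial in Go: tens of thousands of concurrent tasks with no async/await annotations, because goroutines are multiplexed onto a few OS threads.

                                          package main

                                          import (
                                              "fmt"
                                              "sync"
                                          )

                                          func main() {
                                              var wg sync.WaitGroup
                                              results := make(chan int, 10000) // buffered so senders never block
                                              for i := 0; i < 10000; i++ {
                                                  wg.Add(1)
                                                  go func(n int) { // a goroutine, not an OS thread
                                                      defer wg.Done()
                                                      results <- n * n // blocking calls would be fine here too
                                                  }(i)
                                              }
                                              wg.Wait()
                                              close(results)
                                              sum := 0
                                              for r := range results {
                                                  sum += r
                                              }
                                              fmt.Println(sum)
                                          }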

                                          1. 18

                                            Rust has threads. The standard library API is right here.

                                            • Threads as they currently exist in Rust are just a wrapper on top of POSIX or Win32 threads, which are themselves implemented in the OS kernel. This means spawning a thread is a syscall, parking a thread is a syscall, jumping from one thread to another is a syscall*, and did I mention that the call stack in Rust can’t be resized, so every thread has a couple of pages of overhead? This isn’t a deal breaker if you wrap them in a thread pool library like rayon, but it means you can’t just use OS threads as a replacement for Erlang processes or Goroutines.

                                             • Green threads introduce overhead when you want to call C code. For one thing, if you make a blocking call in C land, green threads mean you’re blocking a whole scheduler, not just one little task. For another thing, that stack size thing bites you again, since it means your green threads all have to have enough stack space to run normal C code, or, alternatively, you switch stacks every time you make an FFI call. Rust used to have green threads, but the FFI overhead convinced them to drop it.

                                             So, since green threads aren’t happening, and you can’t spawn enough OS threads to use them as the main task abstraction in a C10K server, Rust went with the annoying leaky zero-overhead abstraction. I don’t really like it, but of the three options, async/await seems like the least bad.

                                            * Win32 user mode threads can allow you to avoid task switching overhead, but the rest of the downsides, especially the stack size one, are still problems.

                                            1. 3

                                              Great comment! Just want to nitpick about this:

                                              Green threads introduce overhead when you want to call C code. For one thing, if you make a blocking call in C land, green threads mean you’re blocking a whole scheduler, not just one little task.

                                               Regarding “blocking calls in C land”, using async/await with an event loop is no better than green threads: both will be blocked until C land yields control.

                                            2. 11

                                              I wonder why so many languages opted for async/await rather than threads

                                              I think you have to understand that this isn’t an either-or question. Rust, of course, has had threads for ages – the async/await RFC revolves around providing an ergonomic means of interacting with Futures, which are of course just another abstraction built on top of the basic thread API.

                                              The better question would be, “What are the ergonomic issues and engineering trade-offs involved in designing threading APIs, and why might abstractions like Futures, and an async/await API, be more appealing for some sorts of use-cases?”

                                              1. 2

                                                I’m much more of a fan of algebraic effects for this stuff. Multicore OCaml seems to be moving in the right direction here, in a way that can reduce the splitting of the language in two. I would have loved to have seen something like this in Rust, but I can understand that the more pragmatic choice is async/await + futures. We still need to figure out how to make algebraic effects zero cost.

                                                1. 1

                                                  Yeah. The problem is the language needs some sort of programmable sequencing operators built in that async primitives can make use of, while users can write code that is agnostic to them.

                                                  1. 1

                                                    OCaml has let+ syntax now (in 4.08) which addresses this.

                                                    1. 1

                                                      One example is how you can write:

                                                      map : ('a ~> 'b) ->> 'a list ~> 'b list
                                                      

                                                      which is sugar for:

                                                      map : ('a -[e]-> 'b) ->> 'a list -[e]-> 'b list
                                                      

                                                      where e is a type level set (a row) of effects. That way you can have a map function that works in pure situations, or for any other combination of effects. Super handy.

                                                      1. 1

                                                        That’s really, really nice!

                                                  2. 1

                                                    With the proper functions / operators (bind / lift) this is not much of an issue in practice.

                                                    1. 1

                                                      There’s certainly value in the green-thread solution, as evidenced by the success of Go, but Rust’s approach makes much more control over the execution context possible, and therefore higher performance. To achieve the absolute highest performance you have to minimize synchronization overhead, which means you need to distinguish between synchronous and asynchronous code. “What Color is Your Function?” provides an important observation, but we shouldn’t read it as “async functions are fundamentally worse”. It’s a trade-off.

                                                      Of course, prior to Rust, async functions didn’t give much (if any) control over the execution context, and so the advantages of async functions over greenthreads were less clear or present.

                                                      1. 1

                                                        I’m not 100% sure it’s a good intuition, but I kinda think that in the case of Go, it’s more like every line is/can be async/await, because of the “green threads” a.k.a. goroutines model (goroutines are not really your OS’s threads: they’re multiplexed onto them, as will happen with async/await functions, IIUC).

                                                      1. 2

                                                        Configuration is still largely an unsolved problem.

                                                        I didn’t realize there were other realities of the software world where configuration was an unsolved problem.

                                                        1. 1

                                                          Me too. This is why I shared this :-)

                                                        1. 1

                                                          Interesting thoughts about threading implementation in chat software like Slack, Telegram, Discord, etc.

                                                          1. 22

                                                             I feel like it comes down to this:

                                                             1. Is your web app more like a newspaper, brochure, pamphlet, flyer, or something else along those lines? If so, an SPA is not the right architecture for you. Maybe a small component of the site is, like a comments box or something, but that should be a drop-in widget, not dominate the engine of the site.
                                                             2. Is your app more interactive, something that would have been a desktop app the user downloaded and installed manually before the web took over? If so, then an SPA is probably the way to go.

                                                            The first one treats the web like text documents: a URL is a way to retrieve a single view of a document. The second one treats a URL like a document that describes how to load a full application.

                                                            1. 8

                                                               What I’m reading in your comment is that it boils down to data consumption vs. manipulation, which makes sense to me.

                                                              1. 9

                                                                 Even manipulation can be done very effectively via traditional HTML forms, and maybe some minor enhancements. I have seen people use Asana as an example of an SPA… but it is a todo-list site, and that is really easy to do as a traditional HTML site (and it would load like 10x faster). Even a traditional form + AJAX saving, basically the same as the SPA user experience, is pretty easy to do.

                                                                I just feel like these easy old techniques are becoming a lost art :(

                                                                1. 9

                                                                   Even a traditional form + AJAX saving, basically the same as the SPA user experience, is pretty easy to do.

                                                                   Maybe up front, but it’s very, very hard not to make a dog’s breakfast of untestable, tightly coupled code. As much as SPAs are maligned, I’d take maintaining an Angular app over a jQuery app of similar UI/UX complexity any day.

                                                                  1. 1

                                                                     Sure, but that’s a strawman: if I’m reading @adam_d_ruppe’s suggestion right, we’re talking about a server-rendered page with a normal input form, with just enough AJAX to highlight validation errors and otherwise move to the next page. That’s hella easy to maintain.

                                                                    If you need every little widget and greeble to jiggle and dance when a user does something, sure, pick a framework that does that–but the overwhelming majority of web pages are really just documents and not applications.

                                                                    1. 1

                                                                       I’d rather have an untestable tiny project than a massively overkill SPA that only needs about five interactions for users and twenty for admins.

                                                                    2. 2

                                                                      Fully agreed. I was more interested in the original comment looking at it from the other end: which application definitely does not need an SPA? The answer to that, and it makes sense to me, is seemingly applications where the main usage is going to be data consumption.

                                                                      1. 1

                                                                         What do you mean by traditional form + AJAX saving? That the form data is submitted by an XHR request, and if there is an error it is rendered on the same page, but if the request succeeds the browser navigates to a new page/URL?

                                                                        1. 2

                                                                          Basically progressive enhancement.

                                                                           If the UI already represents what is saved, it can just save, same as the SPA way. If not, go ahead and reload the page. Let’s say you have the todo-list app. If you haven’t used Asana, its layout is basically a left navigation column, a middle task-list column, and a right details column. In the middle column, you can add, delete, and reorder the tasks. In the right column, you can comment, change the description, etc.

                                                                           I’d make the left column just standard links. The right column would be a standard form. The middle column is a list of links and forms together. The links let you view details. The forms handle move up, move down, delete, and check off as completed. Those are the ones where you can do the progressive enhancement: make the checkbox, for example, AJAX-submit the form, and on success leave it alone; the server now reflects what you see (the box is checked). On error, render that. Ditto for move up/down (which you might even do via drag and drop): the AJAX there is just bringing the server up to date on the client UI state.

                                                                           Notice that each individual form here would correspond to a single API call the SPA would use too.
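
                                                                           A minimal sketch of such a dual-mode endpoint (all names are mine, purely illustrative):

                                                                           package main

                                                                           import (
                                                                               "log"
                                                                               "net/http"
                                                                           )

                                                                           func markDone(id string) error { return nil } // assumed storage helper

                                                                           // A plain form POST gets a redirect back to the list; the
                                                                           // progressively-enhanced AJAX submission of the same form gets
                                                                           // a bare 204, since the checkbox already shows the new state.
                                                                           func toggleDone(w http.ResponseWriter, r *http.Request) {
                                                                               if err := markDone(r.FormValue("id")); err != nil {
                                                                                   http.Error(w, err.Error(), http.StatusInternalServerError)
                                                                                   return
                                                                               }
                                                                               if r.Header.Get("X-Requested-With") == "XMLHttpRequest" {
                                                                                   w.WriteHeader(http.StatusNoContent)
                                                                                   return
                                                                               }
                                                                               http.Redirect(w, r, "/tasks", http.StatusSeeOther)
                                                                           }

                                                                           func main() {
                                                                               http.HandleFunc("/toggle", toggleDone)
                                                                               log.Fatal(http.ListenAndServe(":8080", nil))
                                                                           }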

                                                                          1. 1

                                                                            Got it! Thanks for clarifying :-)

                                                                    3. 1

                                                                      I think a better “decision tree” for SPA is this:

                                                                      Are you trying to pad your own resume with useless technology buzzwords that appeal to other buzzword obsessed Kool kids and technologically clueless recruiters? Then maybe an SPA is what you want.

                                                                      Are you trying to build a tool that solves a business need, is reliable and maintainable? An SPA is probably not what you want.

                                                                      1. 2

                                                                         Okay, now, this isn’t entirely fair, and I posted the submission a few days ago that originally talked about this.

                                                                         There are certain types of applications that aren’t just documents and that require a heavy degree of interactivity. Things in this category include spreadsheets, document editing or generation (think WYSIWYG or similar), certain types of advanced user interactions (click on realtime data, see a bunch of buttons pop up, pick one to change the rendering, add/remove elements, or change styling), and arguably even some types of streaming events.

                                                                        If you find yourself building one of these applications, you simply aren’t going to get the responsiveness (or sanity) you want by continually banging up to the server and back.

                                                                         However, I’d say that 97% of web pages are not in this problem domain. Those pages would be better served as boring server-side pages, and would actually benefit from some of the things we take for granted when developing that way.

                                                                        1. 2

                                                                          I’d say the genuine need for SPA is so rare it should be treated as an exception to the rule.

                                                                           Like: should you use eval() (or whatever your language’s equivalent is)?

                                                                          The answer is “no, and if you have a genuine need for this you are the exception to the rule and should be able to talk at length about why you’re breaking the rule and why your approach is valid”

                                                                          1. 1

                                                                             And many interactive sites don’t need an SPA anyway. The traditional todo-list tracker is just simple interactivity, readily doable with a quick page reload.

                                                                      1. 1

                                                                        Regarding backup, how do you ensure a hacker getting control of your production server (Remilia) is not able to erase or overwrite the backups on your off-site backup server (Konpaku)?

                                                                        How do you manage outages? Are you on-call with some monitoring system texting/notifying you if the service becomes unavailable?

                                                                        1. 2

                                                                          Off-site backups are append-only.

                                                                          Outages are not well-managed yet, but addressing this is part of the HA workstream. Will write a blog post on it when the time comes.

                                                                          1. 1

                                                                            Great. Thanks!

                                                                        1. 20

                                                                          I do agree with the theme of this post: at scale software is complex. Whether you use a monorepo or polyrepos, you’ll need a lot of complicated tooling to make things manageable for your developers.

                                                                          But I want to draw attention to sfink’s excellent rebuttal (full disclosure, he is a colleague of mine).

                                                                           Additionally, I’d like to address the VCS Scalability downside. The author’s monorepo experience seems to be with Git. Companies like Google and Facebook (who have two of the largest monorepos in the world), and to a lesser extent Mozilla, all use Mercurial for a reason: it scales much better. While I’m not suggesting the path to get there was easy, the work is largely finished and has been contributed back upstream. So when the author points to Twitter’s perf issues or Microsoft’s need for a VFS, I think it is more a problem of using the wrong tool for the job than something inherently wrong with monorepos.

                                                                          1. 5

                                                                             I was under the impression (possibly mistaken) that Google still used Perforce predominantly (or some Piper wrapper thing), with a few teams using Mercurial or Git for various externally visible codebases (Android, Chrome, etc.).

                                                                            1. 10

                                                                               Perforce has been gone for quite a while. Internal devs predominantly use Piper, though an increasing group is using Mercurial to interact with Piper instead of the native Piper tooling. The Mercurial install is a few minor internal things (e.g. custom auth), plus evolve and core Mercurial. We’ve been very wary of using things outside of that set, and are working hard to keep our workflow in line with the OSS Mercurial workflow. An example of something we’ve worked to send upstream is hg fix, which helps you run a source code formatter (gofmt or clang-format) as you go; another is the narrow extension, which lets you clone only part of a repo instead of the whole thing.

                                                                               Non-internal devs (Chrome, Android, Kubernetes, etc.) that work outside of Piper are almost exclusively on Git, but in a variety of workflows. Chrome, AIUI, is one giant git repo of doom (it’s huge), Android is some hundreds (over 700, last I knew?) of git repos, and most other teams are doing more orthodox polyrepo setups, some with Gerrit for review, some with GH pull requests, etc.

                                                                              1. 3

                                                                                 Thanks for the clarification; sounds like Piper is (and will continue to be) the source of truth, while the “rollout” Greg mentioned refers to client-side tooling. To my original point, Google still seems to have ended up with the right tool for the job in Piper (given the timeline and alternatives when they needed it).

                                                                                1. 2

                                                                                  But how does Mercurial interact with Piper? Is Mercurial a “layer” above Piper? Do you have a Mercurial extension that integrates with Piper?

                                                                                  1. 3

                                                                                    We have a custom server that speaks hg’s wire protocol. Pushing to piper exports to the code review system (to an approximation), pulling brings down the new changes that are relevant to your client.

                                                                                    (Handwaving because I’m assuming you don’t want gory narrow-hg details.)

                                                                                    1. 2

                                                                                      It’s a layer, yeah. My understanding is that when you send out a change, it makes Piper clients for you. It’s just a UX thing on top of Piper, not a technical thing built into it.

                                                                                  2. 2

                                                                                     I’m fuzzy on the details, but my understanding is that they’re in the middle of some sort of phased Mercurial rollout. So it’s possible only a sample population of their developers is using the Mercurial backend. What I do know is that they are still actively contributing to Mercurial and seem to be moving in that direction for the future.

                                                                                    1. 1

                                                                                       I wonder if they are using some custom Mercurial backend to their internal thing (basically a VFS layer, as the author outlined)? It would be interesting to get some first- or second-hand information on what is actually being used, as people tend to specifically call out Google and Facebook as paragons of monorepos.

                                                                                       My feeling is that Google/Facebook are both huge organizations with lots of custom tooling and systems. /Most/ companies are not Google/Facebook, nor do they have Google/Facebook problems.

                                                                                      1. 6

                                                                                        This is largely my source (in addition to offline conversations): https://groups.google.com/forum/#!topic/mozilla.dev.version-control/hh8-l0I2b-0

                                                                                        The relevant part is:

                                                                                         Speaking of Google, their Mercurial rollout on the massive Google monorepo continues. Apparently their users are very pleased with Mercurial - so much so that they initially thought their user sentiment numbers were wrong because they were so high! Google’s contribution approach with Mercurial is to upstream as many of their modifications and custom extensions as possible: they seem to want to run a Mercurial that’s as vanilla and out-of-the-box as possible. Their feature contributions so far have been very well received upstream and they’ve been contributing a number of performance improvements as well. Their contributions should translate to a better Mercurial experience for all.

                                                                                        So at the very least it seems they endeavour to avoid as much custom tooling on top of Mercurial as possible. But like you said, they have Google problems so I imagine they will have at least some.

                                                                                        1. 6

                                                                                          Whoa. This could be the point where Mercurial comes back after falling behind git for years.

                                                                                             Monorepos sound sexy because Facebook and Google use them. If both use Mercurial and open-source their modifications, then Mercurial suddenly becomes very attractive.

                                                                                             In git, neither submodules nor LFS are well integrated, and both generate pain for lots of developers. If Mercurial promises to fix that, many will consider switching.

                                                                                          Sprinkling some Rust into the code base probably helps to seduce some developers as well.

                                                                                          1. 10

                                                                                            Narrow cloning (authored by Google) has been OSS from the very start, and now ships in the hg tarball. If you’ve got need of it, it’s still maturing (and formats can change etc) but it’s already in use by at least 3 companies. I’d be happy to talk to anyone that might want to deploy hg at their company, and can offer at least some help on narrow functionality if that’s needed.

                                                                                          2. 1

                                                                                            Thanks for digging!
                                                                                            Pretty interesting for sure.

                                                                                      2. 0

                                                                                        I’m getting verification from someone at Google, but the quick version as I understood it:

                                                                                        Google hasn’t actually used Perforce for a long time. What they had was a Perforce workalike that was largely their own thing. They are now using normal Mercurial.

                                                                                        1. 12

                                                                                          This isn’t true, Google uses Piper (their perforce clone) internally. Devs have the option of using mercurial or git for their personal coding environments, but commits get converted to piper before they land in the canonical monorepo.

                                                                                          1. 2

                                                                                            I’ll ping @durin42; I don’t think I’m misremembering the discussion, but I may have misunderstood either the current state or implementation details.

                                                                                      3. 3

                                                                                        What is it about git that makes it a poor choice for very large repos?

                                                                                        What does Mercurial and Perforce do differently?

                                                                                        1. 2

                                                                                          In addition to the article @arp242 linked, this post goes into a bit more technical detail. Tl;dr, it’s largely due to how data is stored in each. Ease of contribution is another reason (scaling Git shouldn’t be impossible, but for one reason or another no one has attempted it yet).

                                                                                          1. 1

                                                                                            Microsoft has a 300GB git repo. They built a virtual file system to make it work.

                                                                                            1. 1

                                                                                              True, but in the scalability section of the article the author argues that the need for a VFS is proof that monorepos don’t scale. So I think most of this thread is centered around proving that monorepos can scale without the need for a VFS.

                                                                                              I agree that a VFS is a perfectly valid solution if at the end of the day the developers using the system can’t tell the difference.

                                                                                          2. 2

                                                                                            Facebook wrote about Scaling Mercurial at Facebook back in 2014:

                                                                                            After much deliberation, we concluded that Git’s internals would be difficult to work with for an ambitious scaling project. [..] Importantly, it [mercurial] is written mostly in clean, modular Python (with some native code for hot paths), making it deeply extensible.

                                                                                            It’s a great example of how applications in a slower language can be made better performing than applications in a faster language, just because it’s so much easier to understand and optimize.

                                                                                        1. 8

                                                                                          yet in many respects, it is the most modern database management system there is

                                                                                          It’s not though. No disrespect to PostgreSQL, but it just isn’t. In the world of free and open source databases it’s quite advanced, but commercial databases blow it out of the water.

                                                                                          PostgreSQL shines by providing high quality implementations of relatively modest features, not highly advanced state of the art database tech. And it really does have loads of useful features, the author has only touched on a small fraction of them. Almost all those features exist in some other system. But not necessarily one single neatly integrated system.

                                                                                          PostgreSQL isn’t great because it’s the most advanced database, it’s great because if you don’t need anything state of the art or extremely specialized, you can just use PostgreSQL for everything and it’ll do a solid job.

                                                                                          1. 13

                                                                                            but commercial databases blow it out of the water

                                                                                            Can you provide some specific examples?

                                                                                            1. 16

                                                                                              Oracle has RAC, which is a basic install step for any Oracle DBA. Most Postgres users can’t implement something similar, and those that can appreciate that it’s a significant undertaking which locks you into a specific workflow, so you had better get it right.

                                                                                              Oracle and MS-SQL also have clustered indexes. Not the one-shot CLUSTER command Postgres has: theirs keep rows physically ordered as updates happen. Getting Pg to perform sensibly in this situation is so painful that it’s worth spending a few grand to simply not worry about it.
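
                                                                                              A rough sketch of the Postgres side of this, assuming a throwaway local database and the psycopg2 driver: CLUSTER physically sorts the table once, under an exclusive lock, and is not maintained as rows change, which is exactly the gap being described.

                                                                                                  # Sketch only: Postgres's CLUSTER is a one-time physical sort, not a
                                                                                                  # maintained clustered index like Oracle's or MS-SQL's.
                                                                                                  import psycopg2

                                                                                                  conn = psycopg2.connect("dbname=scratch")  # hypothetical local database
                                                                                                  conn.autocommit = True
                                                                                                  cur = conn.cursor()

                                                                                                  cur.execute("CREATE TABLE events (id bigserial, account int, body text)")
                                                                                                  cur.execute("CREATE INDEX events_by_account ON events (account)")

                                                                                                  # Rewrites the whole table in index order; later inserts and updates
                                                                                                  # will NOT keep this order, so it has to be re-run periodically.
                                                                                                  cur.execute("CLUSTER events USING events_by_account")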

                                                                                              Ever run Postgres on a machine with over 100 cores? It’s not much faster than 2 cores without a lot of planning and partitioning, and even then it’s got nothing on Oracle and MS-SQL. “Open a checkbook and it’s faster” might sound like a loss, but programmers and sysadmins cost money too! Having them research how to get your “free” database to perform like a proper database isn’t cost-effective for a lot of people.

                                                                                              How about big tables? Try to update just one column, and Postgres still copies the whole row. Madness. This turns what ought to be 100GB of IO into tens of TBs of IO. Restructuring the table into separate partitions would’ve been the smart thing to do if you’d remembered a few months ago, but this comes as a surprise to anyone arriving from the commercial databases, which haven’t had this problem for twenty years. Seriously! And don’t even try to VACUUM anything.
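
                                                                                              To make the row-copy point concrete, here’s a small sketch (same assumptions: psycopg2 and a scratch database) that updates only a boolean column and watches the relation roughly double:

                                                                                                  # Sketch: a single-column UPDATE still writes a whole new row version
                                                                                                  # under Postgres's MVCC, so the table grows by full rows until VACUUM.
                                                                                                  import psycopg2

                                                                                                  conn = psycopg2.connect("dbname=scratch")  # hypothetical local database
                                                                                                  conn.autocommit = True
                                                                                                  cur = conn.cursor()

                                                                                                  cur.execute("CREATE TABLE wide_rows (id int PRIMARY KEY, flag boolean, payload text)")
                                                                                                  cur.execute("INSERT INTO wide_rows SELECT g, false, repeat('x', 500) FROM generate_series(1, 100000) g")

                                                                                                  cur.execute("SELECT pg_relation_size('wide_rows')")
                                                                                                  before = cur.fetchone()[0]

                                                                                                  cur.execute("UPDATE wide_rows SET flag = true")  # touches one tiny column...

                                                                                                  cur.execute("SELECT pg_relation_size('wide_rows')")
                                                                                                  after = cur.fetchone()[0]
                                                                                                  print(before, after)  # ...yet the relation roughly doubles in size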

                                                                                              MS-SQL also has some really great tools. Visual Studio actually understands the database, and its role in development and release. You can point it at two tables and it can build ALTER statements for you and help script up migrations that you can package up. Your autocomplete can recognise what version you’re pointing at. And so on.

                                                                                              …and so on, and so on…

                                                                                              1. 3

                                                                                                Thanks for the detailed response. Not everyone has money to throw at a “real” enterprise DB solution, but (having never worked with Oracle and having only administered small MSSQL setups) I did wonder what some of the specific benefits that make a DBA’s life easier were.

                                                                                                Of course, lots of the open source tools used for web development and such these days seem to prefer Postgres (and sometimes MySQL), and developers like Postgres’ APIs. With Postgres-compatible databases like EnterpriseDB and Redshift out there, my guess is we’ll see a Postgres-compatible Oracle offering at some point.

                                                                                                1. 7

                                                                                                  Not everyone has money to throw at a “real” enterprise DB solution

                                                                                                  I work for a commercial database company, so I expect I see a lot more companies’ databases than you and most other crustaceans: most companies strongly prefer to rely on an expert who will give them a fixed cost (even if it’s “money”) to implement their database, rather than trying to hire and build a team to do it with open source. Because it’s cheaper. Usually a lot cheaper.

                                                                                                  Part of the reason why: an expert can give them an SLA and has PI insurance, and the solution generally includes all costs. Building an engineering+sysadmin team is a big unknown for every company, and they usually need some kind of business analyst too (often a contractor anyway; more £££) to get the right schemas figured out.

                                                                                                  Professional opinion: Business logic may actually be some of the least logical stuff in the world.

                                                                                                  lots of the open source tools used for web development and such these days seem to prefer Postgres

                                                                                                  This is true, and if you’re building an application, I’d say Postgres wins big. Optimising dbmail’s Postgres queries was hands down easier than doing the same on any other database (including commercial ones!).

                                                                                                  But databases are used for a lot more than just applications, and companies who use databases don’t always (or even often) build all (or even much) of the software that interacts with the database. This should not be surprising.

                                                                                                  With postgres-compatible databases like EnterpriseDB and redshift out there, my guess is we’ll see a Postgres-compatible Oracle offering at some point.

                                                                                                  I’m not sure I disagree, but I don’t think this is a good thing. EnterpriseDB isn’t Postgres. Neither is Redshift. Queries that work fine in a local Pg installation run like shit in Redshift, and queries built for EnterpriseDB won’t work at all if you ever try to leave. These kinds of “hybrid open source” offerings are anathema: often sold below a sustainable price (and for much less than a proper expert would charge), they leave uncertainty in the SLA, with none of the benefits of owning your own stack that doing it on plain Postgres would give you. I just don’t see the point.

                                                                                                  1. 3

                                                                                                    Professional opinion: Business logic may actually be some of the least logical stuff in the world.

                                                                                                    No kidding. Nice summary also.

                                                                                                    1. 0

                                                                                                      Queries that work fine in a local Pg installation run like shit in redshift

                                                                                                      Not necessarily true: when building your Redshift schema you optimize for certain queries (like your old Pg queries).

                                                                                                  2. 4

                                                                                                    And yet the cost of putting your data into a proprietary database format is enough to make people find other solutions when limitations are reached.

                                                                                                      Don’t forget great database conversion stories like the WI Circuit Courts system or Yandex, where converting from proprietary databases to Postgres saved millions of dollars and improved performance…

                                                                                                    1. 2

                                                                                                      Links to those stories?

                                                                                                      1. 1

                                                                                                        That Yandex can implement clickhouse doesn’t mean everyone else can (or should). How many $100k developers do they employ to save a few $10k database cores?

                                                                                                        1. 2

                                                                                                          ClickHouse has nothing to do with Postgres, it’s a custom column oriented database for analytics. Yandex Mail actually migrated to Postgres. Just Postgres.

                                                                                                      2. 2

                                                                                                          You’re right about RAC, but over the last couple of major releases Postgres has gotten a lot better about using multiple cores and modifying big tables. Maybe not at Oracle’s level yet, but it’s catching up quickly in my opinion.

                                                                                                        1. 3

                                                                                                          Not Oracle-related, but a friend of mine tried to replace a disk-based kdb+ with Postgres, and it was something like 1000x slower. This isn’t even a RAC situation, this is one kdb+ core, versus a 32-core server with Postgresql on it (no failover even!).

                                                                                                          Postgres is getting better. It may even be closing the gap. But gosh, what a gap…

                                                                                                          1. 1

                                                                                                              Not to be that guy, but when tossing around claims of 1000x, please back that up with actual data, a blog post, or something…

                                                                                                            1. 6

                                                                                                              You remember Mark’s benchmarks.

                                                                                                                kdb did in 0.051 sec what Postgres took 152 sec to complete.

                                                                                                                1000x is nothing; 152 / 0.051 is nearly a 3000x gap.

                                                                                                              Nobody should be surprised by that. It just means you’re asking the computer to do the wrong thing.

                                                                                                              Btw, starting a sentence with “not to be that guy” means you’re that guy. There’s a completely normal way to express curiosity in what my friend was doing (he’s also on lobsters), or to start a conversation about why it was so much easier to get right in kdb+. Both could be interesting, but I don’t owe you anything, and you owe me an apology.

                                                                                                              1. 2

                                                                                                                Thanks for sharing the source, that helps in understanding.

                                                                                                                  That’s a benchmark comparing a server-grade setup against essentially laptop-grade hardware (a quad-core i5), running the default configuration straight out of the sample file in the Git repo, with a query that reads a single small column from a very wide dataset without using an index. I don’t doubt these numbers, but they aren’t terribly exciting or relevant to compare.

                                                                                                                  Also, there was no disrespect intended; not being a native English speaker, I may have come across as clumsy.

                                                                                                                1. 1

                                                                                                                  kdb doing 0.051sec what postgres was taking 152sec to complete.

                                                                                                                    That benchmark summary points to https://tech.marksblogg.com/billion-nyc-taxi-rides-postgresql.html, which was testing first a pre-9.6 master and then PG 9.5 with cstore_fdw. Neither seems fair to me; I’d like to run it myself, but I don’t have the resources.

                                                                                                                  1. 1

                                                                                                                    If you think a substantially different disk layout of Pg, and/or substantially different queries would be more appropriate, I think I’d find that interesting.

                                                                                                                      I wouldn’t like to see a tuning exercise that includes a post-hoc hunt for the best indexes to install for these queries, though: the real world rarely has the opportunity to do that outside of applications (i.e. Enterprise).

                                                                                                              2. 1

                                                                                                                  Isn’t kdb+ really good at stuff that Postgres (and other RDBMSes) are bad at? So, not that surprising.

                                                                                                                1. 1

                                                                                                                  Sort of? Kdb+ isn’t a big program, and most of what it does is the sort of thing you’d do in C anyway (if you liked writing databases in C): Got some tall skinny table? Try mmaping as much as possible. That’s basically what kdb does.
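
                                                                                                                    As a rough illustration of the “mmap a tall skinny column” idea (this is not kdb+ itself; the file name and layout are invented):

                                                                                                                        # Each column lives in its own append-only binary file; a "query" is a
                                                                                                                        # vectorized scan over the mapped file, with the OS page cache doing
                                                                                                                        # the buffering that a row store would reimplement itself.
                                                                                                                        import numpy as np

                                                                                                                        prices = np.memmap("trades_price.f64", dtype=np.float64, mode="r")
                                                                                                                        print(prices.mean(), prices.max())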

                                                                                                                    What was surprising was just how difficult it was to get that in Pg. I think we expected that, with more cores and more disks, it’d be fast enough? But this was pretty demoralising! I think the fantasy was that by switching the application to Postgres it’d be possible to get access to the Pg tooling (which is much bigger than kdb’s!), and we massively underestimated how expensive Pg is/can be.

                                                                                                                  1. 3

                                                                                                                    Kdb+ isn’t a big program, and most of what it does is the sort of thing you’d do in C anyway (if you liked writing databases in C)

                                                                                                                      Well, kdb+ is columnar, which is pretty different from how most people approach a naive database implementation. That makes it very good for some things, but really rough for others. Notably, columnar storage doesn’t deal with UPDATE statements very well at all (to the degree that some columnar DBs simply don’t allow them).

                                                                                                                      Even on reads, though, I’ve definitely seen Postgres beat it on queries that work better on a row-based system.

                                                                                                                    But, yes, if your primary use cases favor a columnar approach, kdb+ will outperform vanilla postgres (as will monetdb, clickhouse, and wrappers around parquet files).

                                                                                                                      You can get decent chunks of both worlds by using either the cstore_fdw or imcs extensions to Postgres.
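
                                                                                                                      For reference, trying cstore_fdw looks roughly like this (the extension has to be built and installed server-side; syntax recalled from the project’s README, so treat it as a sketch):

                                                                                                                          # Sketch: declare a columnar table via the cstore_fdw foreign data wrapper.
                                                                                                                          import psycopg2

                                                                                                                          conn = psycopg2.connect("dbname=scratch")  # hypothetical local database
                                                                                                                          conn.autocommit = True
                                                                                                                          cur = conn.cursor()

                                                                                                                          cur.execute("CREATE EXTENSION IF NOT EXISTS cstore_fdw")
                                                                                                                          cur.execute("CREATE SERVER cstore_server FOREIGN DATA WRAPPER cstore_fdw")
                                                                                                                          cur.execute("""
                                                                                                                              CREATE FOREIGN TABLE ticks (
                                                                                                                                  symbol text,
                                                                                                                                  price  double precision,
                                                                                                                                  ts     timestamptz
                                                                                                                              ) SERVER cstore_server OPTIONS (compression 'pglz')
                                                                                                                          """)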

                                                                                                                    1. 1

                                                                                                                        which is pretty different from how most people approach a naive database implementation.

                                                                                                                      I blame foolish CS professors emphasising linked lists and binary trees.

                                                                                                                      If you simply count cycles, it’s exactly how you should approach database implementation.

                                                                                                                      Notably, columnar storage is doesn’t deal with update statements very well at all (to the degree that some columnar DBs simply don’t allow them).

                                                                                                                        So I haven’t done that kind of UPDATE in any production work, but I also don’t need it: every customer always wants an audit trail, which means my database builds are INSERT + some materialised view, and that’s exactly what kdb+ does. If you can build the view fast enough, you don’t need UPDATE.
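
                                                                                                                        A minimal sketch of that pattern, using the stdlib’s sqlite3 so it runs anywhere (the table and column names are made up):

                                                                                                                            # Append-only events plus a view: the "current state" is derived,
                                                                                                                            # never UPDATEd, and the full audit trail comes for free.
                                                                                                                            import sqlite3

                                                                                                                            db = sqlite3.connect(":memory:")
                                                                                                                            db.executescript("""
                                                                                                                            CREATE TABLE balance_events (
                                                                                                                                account  TEXT NOT NULL,
                                                                                                                                amount   REAL NOT NULL,
                                                                                                                                recorded TEXT NOT NULL DEFAULT (datetime('now'))
                                                                                                                            );
                                                                                                                            CREATE VIEW balances AS
                                                                                                                                SELECT account, SUM(amount) AS balance
                                                                                                                                FROM balance_events
                                                                                                                                GROUP BY account;
                                                                                                                            """)
                                                                                                                            db.execute("INSERT INTO balance_events (account, amount) VALUES ('alice', 100.0)")
                                                                                                                            db.execute("INSERT INTO balance_events (account, amount) VALUES ('alice', -25.0)")
                                                                                                                            print(db.execute("SELECT * FROM balances").fetchall())  # [('alice', 75.0)]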

                                                                                                                      Even on reads, though, I’ve definitely seen postgres beat it on a queries that work better on a row-based system.

                                                                                                                        If I have data that I need horizontal grabs from, I arrange it that way in memory. I don’t make my life harder by putting it on the disk in the wrong shape, and if I do run into an application like that, I don’t think “gosh, using Postgres would really speed this part up”.

                                                                                                          2. 3

                                                                                                            Spanner provides globally consistent transactions even across multiple data centers.

                                                                                                            Disclosure: I work for Google. I am speaking only for myself in this matter and my views do not represent the views of Google. I have tried my best to make this description factually accurate. It’s a short description because doing that is hard. The disclosure is long because disclaimers are easier to write than useful information is. ;)

                                                                                                            1. 2

                                                                                                              @geocar covered most of what I wanted to say. I also have worked for a commercial database company, and same as @geocar I expect I have seen a lot more database use cases deployed at various companies.

                                                                                                              The opinions stated here are my own, not those of my former or current company.

                                                                                                              To put it bluntly, if you’re building a Rails app, PostgreSQL is a solid choice. But if you’ve just bought a petabyte of PCIe SSDs for your 2000-core rack of servers, you might want to buy a commercial database that’s a bit more heavy-duty.

                                                                                                              I worked at MemSQL, and nearly every deployment I worked with would have murdered PostgreSQL on performance requirements alone. Compared to PostgreSQL, MemSQL has more advanced query planning, query execution, replication, data storage, and so on and so forth. It has state of the art features like Pipelines. It has crucial-at-scale features like Workload Profiling. MemSQL’s competitors obviously have their own distinguishing features and qualities that make them worth money. @geocar mentioned some.

                                                                                                              PostgreSQL works great at smaller scale. It has loads of useful features for small-scale application development. The original post talks about how Arcentry uses NOTIFY to great effect, facilitating their realtime collaboration functionality. This already tells us something about their scale: PostgreSQL uses a fairly heavyweight process-per-connection model, meaning they can’t have a huge number of concurrent connections participating in this notification layer. We can conclude that Arcentry deployments using this strategy probably don’t have a massive number of concurrent users, and thus probably don’t need a state-of-the-art commercial database.
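
                                                                                                              For readers who haven’t used it, the LISTEN/NOTIFY pattern looks something like this with psycopg2 (database and channel names invented); note that each listener ties up a dedicated connection, which is exactly where the process-per-connection model starts to hurt:

                                                                                                                  # Minimal LISTEN/NOTIFY loop: one dedicated connection per listener.
                                                                                                                  import select
                                                                                                                  import psycopg2

                                                                                                                  conn = psycopg2.connect("dbname=arcentry_demo")  # hypothetical database
                                                                                                                  conn.autocommit = True
                                                                                                                  cur = conn.cursor()
                                                                                                                  cur.execute("LISTEN board_updates;")  # hypothetical channel

                                                                                                                  while True:
                                                                                                                      # Block (up to 5s) until the server pushes something on this socket.
                                                                                                                      if select.select([conn], [], [], 5) == ([], [], []):
                                                                                                                          continue
                                                                                                                      conn.poll()
                                                                                                                      while conn.notifies:
                                                                                                                          note = conn.notifies.pop(0)
                                                                                                                          print(f"channel={note.channel} payload={note.payload}")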

                                                                                                              There are great counterexamples where specific applications need to scale in a very particular way, and some clever engineers made a free database work for them. One of my favorites is Expensify running 4 million queries per second on SQLite. SQLite can only perform nested-loop joins using one index per table, making it a non-starter for applications that require sophisticated queries. But if you think about Expensify, its workload is mostly point lookups and simple joins on single indexes. Perfect for SQLite!
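
                                                                                                              You can see those planner limits directly with EXPLAIN QUERY PLAN; a quick stdlib-only sketch (the schema is invented to echo the expense-report shape):

                                                                                                                  # SQLite joins are nested loops, each table probed via at most one
                                                                                                                  # index; for point lookups that is exactly what you want.
                                                                                                                  import sqlite3

                                                                                                                  db = sqlite3.connect(":memory:")
                                                                                                                  db.executescript("""
                                                                                                                  CREATE TABLE reports  (id INTEGER PRIMARY KEY, owner INTEGER);
                                                                                                                  CREATE TABLE expenses (id INTEGER PRIMARY KEY, report_id INTEGER);
                                                                                                                  CREATE INDEX expenses_by_report ON expenses(report_id);
                                                                                                                  """)
                                                                                                                  plan = db.execute("""
                                                                                                                      EXPLAIN QUERY PLAN
                                                                                                                      SELECT * FROM reports r JOIN expenses e ON e.report_id = r.id
                                                                                                                      WHERE r.id = 42
                                                                                                                  """).fetchall()
                                                                                                                  for row in plan:
                                                                                                                      print(row)  # two SEARCH steps, one index apiece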

                                                                                                              1. 1

                                                                                                                But MemSQL is a distributed in-memory database? Aren’t you comparing apples and oranges?

                                                                                                                I also highly recommend reading the post about Expensify usage of SQLite: it’s a great example of thinking out of the box.

                                                                                                                1. 1

                                                                                                                  No. The author claims “Postgres might just be the most advanced database yet.” MemSQL is a database. If you think they’re apples-and-oranges different, might that be because MemSQL is substantially more advanced? And I used MemSQL as one example of a commercial database. For a more apples-to-apples comparison, I also think MS-SQL is more advanced than PostgreSQL, which geocar covered.

                                                                                                                  And MemSQL’s in-memory rowstore serves the same purpose as PostgreSQL’s native storage format. It stores rows. It’s persistent. It’s transactional. It’s indexed. It does all the same things PostgreSQL does.

                                                                                                                  And MemSQL isn’t only in-memory, it also has an advanced on-disk column store.

                                                                                                          1. 17

                                                                                                            I’m one of the maintainers of Conjure, happy to answer questions about this project!

                                                                                                            1. 12
                                                                                                              1. How does this compare to gRPC and friends (e.g. Thrift), especially now that gRPC-Web is in GA? When would I pick Conjure over them?
                                                                                                              2. Are there plans for additional language support? I’m interested in Go in particular.
                                                                                                              1. 9
                                                                                                                1. We’re big fans of gRPC! One downside is that it requires HTTP/2 trailers, which means that if you want to make requests from browsers or curl, you need to deploy a proxy like Envoy to rewrite the traffic. I think Conjure makes sense if you’re already relying on HTTP/JSON in production or care about browsers. It’s very much designed with simplicity in mind, and doesn’t actually prescribe any particular client or server, so it lets you get the upside of strong types for your JSON without requiring big changes to your existing stack or API.

                                                                                                                2. Definitely! Internally, we use Go extensively, so I think conjure-go is next in the open-sourcing pipeline. One interesting feature of Conjure is that, since the IR is a stable format, you can develop new language generators independently, without needing any involvement from the core maintainers!

                                                                                                                1. 4

                                                                                                                  I have the same question as 1. but with OpenAPI.

                                                                                                                  1. 6

                                                                                                                    They’re conceptually pretty similar, but we found the Java code that Swagger generates pretty hard to read. While Swagger has to add many annotations (https://github.com/swagger-api/swagger-samples/blob/master/java/java-jersey-jaxrs/src/main/java/io/swagger/sample/resource/PetResource.java#L43) to deal with any conceivable API you might define, Conjure is intentionally more restrictive in terms of what you can define and tries to focus on doing a small number of things very well.

                                                                                                                    This results in code that is as readable as what a human would write (https://github.com/palantir/conjure-java/blob/1.3.0/conjure-java-core/src/integrationInput/java/com/palantir/product/EteService.java), with all the benefits of modern best practices like immutability, NonNull everywhere, etc.

                                                                                                                2. 60

                                                                                                                  How do you feel about your work being used to enable human rights violations?

                                                                                                                  1. 14

                                                                                                                    This is actually an interesting question.

                                                                                                                    1. 6

                                                                                                                      Probably terrible, but also aware of how unlikely escaping it is. Sometimes an action has good and bad consequences, and the good consequences could be given up, but the bad ones can’t be avoided. In that specific scenario it’s not wise to give up the good consequences just so you aren’t “getting your hands dirty”. Sure, if you can find some way to escape the evils then you should try all available options, but sometimes things are just bad.

                                                                                                                      Oh I’m realizing this is specifically Palantir’s stack lmao. Nevermind yeah don’t work on that on your free time y’all, no hard feelings intended towards @iamdanfox .

                                                                                                                      1. 2

                                                                                                                        As it seems I’m living under a rock, could you paste some link to provide the context for your question?

                                                                                                                        Edit: never mind, found it; “Palantir provides the engine for Donald Trump’s deportation machine”

                                                                                                                        1. 1

                                                                                                                          Which is the same machine as previously used, right?

                                                                                                                      2. 3

                                                                                                                        How does it compare to Twirp? (https://github.com/twitchtv/twirp)

                                                                                                                        Update after a quick search: Conjure uses JSON whereas Twirp uses Protocol Buffers.

                                                                                                                        1. 2

                                                                                                                          I think Twirp has a lot of similar motivations to Conjure: the blog post even mentions an emphasis on simplicity, and it works over HTTP/1.1 (unlike gRPC).

                                                                                                                          One key difference I see is that many API definition tools are essentially monoliths, pretty much completely controlled by the originating company. We’ve gone for a different approach with Conjure and tried to decouple things so that significant additions (e.g. adding a new language) happen without blocking on the core maintainers at all.

                                                                                                                          For example, if you wanted to add Swift support, you’d make a brand new conjure-swift repo and write a CLI that turns the stable IR format into Swift source code. We have a standardised test harness (written in Rust) that you can use to prove that your new conjure-swift code will be compatible with other Conjure clients/servers. Once your CLI works, you can package it up in a standard way and it should slot nicely into existing build tools.
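
                                                                                                                          To make that concrete, the generator contract is conceptually just “read the IR JSON, emit source”. A toy sketch of such a CLI (the IR field names here are guesses for illustration, not the real schema):

                                                                                                                              # Toy "conjure-swift"-style generator: stable JSON IR in, source out.
                                                                                                                              import json
                                                                                                                              import sys

                                                                                                                              def main(ir_path: str) -> None:
                                                                                                                                  with open(ir_path) as f:
                                                                                                                                      ir = json.load(f)
                                                                                                                                  # Field names below are illustrative; consult the real IR spec.
                                                                                                                                  for defn in ir.get("types", []):
                                                                                                                                      name = defn.get("object", {}).get("typeName", {}).get("name", "?")
                                                                                                                                      print(f"// would emit a Swift struct for {name} here")

                                                                                                                              if __name__ == "__main__":
                                                                                                                                  main(sys.argv[1])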

                                                                                                                        2. 3

                                                                                                                            How is unionization at Palantir? Is there a reason why you’re still working there? Do you need support? I’m not from the USA, but I can point you to people willing to support workers like yourself.

                                                                                                                          1. 2

                                                                                                                              Hey, I am really interested in the project, but I’m getting errors when going through the tutorial on the GitHub repo; specifically, ./gradlew tasks was giving me an error fetching the com.palantir repos. I copy-pasted the exact instructions and I’m unsure where to ask for help on this. I figured I’d at least let you know, to see if you experience the same issues.

                                                                                                                            1. 2

                                                                                                                                Hey! Glad to hear you’re interested in the project, and sorry about the issue with the getting-started guide. I’ve gone ahead and updated the guide to fix the problem you encountered. Thanks for pointing out the issue. Hope you enjoy using the project, and feel free to reach out with any questions, comments or concerns.

                                                                                                                          1. 1

                                                                                                                              Rust is cool, and it’s faster, but it has fewer correctness guarantees than I need.

                                                                                                                            1. 1

                                                                                                                              What do you use instead?

                                                                                                                              1. 2

                                                                                                                                  I use F# (.NET Core) personally, and I’m not a systems programmer, so that’s probably part of my motivation. It looks like some of my preconceptions were wrong as I looked into it more, but it also has some edge cases that can still bite you in the abstract.

                                                                                                                                  I guess I should also qualify that part of why I don’t use Rust is that getting started seems wowzers complex. I often think, “Wow, Rust looks cool, but using it as anything more than a toy looks very complicated.”

                                                                                                                                1. 1

                                                                                                                                  I’ve never used F#, but I read a lot about it and I always seen it as a very interesting and well designed language.

                                                                                                                                  1. 1

                                                                                                                                      Yeah, it’s spiritually similar to Rust in several ways, both having come out of OCaml. They’re like brothers who went down very, very different paths. I’ve long considered Rust; maybe I’ll try it out to mess with making a game. Is there anything you recommend?

                                                                                                                                    1. 1

                                                                                                                                      Sorry, I can’t recommend anything. I’ve no significant experience with Rust yet :-)

                                                                                                                            1. 1

                                                                                                                              Would it be difficult to fix this natively in PostgreSQL?

                                                                                                                              1. 8

                                                                                                                                Glad I wasn’t the only person who read that article and thought that!

                                                                                                                                Generally if deployment involves more steps than Heroku, I don’t want to do it. Keeping packages up to date on my production server? Ugh. Given that my personal time is 10% of what it used to be, the Heroku premium still has me coming out way ahead. (Lambda would be a good possibility as well!)

                                                                                                                                1. 1

                                                                                                                                    That’s why I like dokku. It takes a bit more setup initially, but deployment is exactly as easy as Heroku’s. There’s a continuum between “dev” and “ops” in devops, with Heroku at the far “dev” end, and dokku is just one step down from that.

                                                                                                                                  1. 1

                                                                                                                                    Given that my personal time is 10% of what it used to be

                                                                                                                                    Children? ;-)

                                                                                                                                    1. 3

                                                                                                                                      Depending on the ages, that can be too generous by an order or two of magnitude. Ask me how I know!

                                                                                                                                      1. 1

                                                                                                                                        Looks like I’m lucky eventually :-)

                                                                                                                                  1. 2

                                                                                                                                      I’m still a bit worried about the difficulty of “downscaling” Kubernetes to a single node, or to a simple 3-node cluster of low-end machines.

                                                                                                                                      GKE documentation says 25% of the first 4 GB is reserved for GKE: https://cloud.google.com/kubernetes-engine/docs/concepts/cluster-architecture#memory_cpu

                                                                                                                                    What is your experience regarding this?

                                                                                                                                    Edit: Sorry Caleb, I discovered your post here just after having emailed you with the same question :-)
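
                                                                                                                                      For a back-of-envelope feel, the tiered reservation the GKE docs describe works out roughly like this (my reading of the docs; the exact tiers may change between versions):

                                                                                                                                          # Approximate GKE memory reservation: 25% of the first 4 GiB, 20% of
                                                                                                                                          # the next 4, 10% of the next 8, 6% of the next 112, 2% above 128.
                                                                                                                                          def gke_reserved_gib(node_gib: float) -> float:
                                                                                                                                              tiers = [(4, 0.25), (4, 0.20), (8, 0.10), (112, 0.06), (float("inf"), 0.02)]
                                                                                                                                              reserved, remaining = 0.0, node_gib
                                                                                                                                              for size, rate in tiers:
                                                                                                                                                  chunk = min(remaining, size)
                                                                                                                                                  reserved += chunk * rate
                                                                                                                                                  remaining -= chunk
                                                                                                                                                  if remaining <= 0:
                                                                                                                                                      break
                                                                                                                                              return reserved

                                                                                                                                          print(gke_reserved_gib(4))   # 1.0 GiB reserved on a 4 GiB node
                                                                                                                                          print(gke_reserved_gib(16))  # 2.6 GiB on a 16 GiB node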

                                                                                                                                    1. 4

                                                                                                                                      Just started reading introductory docs and encountered this

                                                                                                                                      2 GB or more of RAM per machine. Any less leaves little room for your apps.

                                                                                                                                        So, Kubernetes software uses 2 GB for its own needs? That’s a huge amount of memory. I remember php+mysql wikis and forums running on VMs with 128 MB without problems, including the OS and database.

                                                                                                                                      1. 3

                                                                                                                                        I haven’t tested it myself, but I remember having read this in Kubernetes docs. This is what gave me cold feet…

                                                                                                                                          I’d be confused if a regular node always had this memory requirement. I mean, how would people build k8s Raspberry Pi clusters then?

                                                                                                                                        1. 1

                                                                                                                                            I’m confused too. I’m wondering whether the requirement in the GKE docs isn’t really about optional features like Stackdriver and the Kubernetes dashboard. I haven’t had the time to test it myself. Curious if someone here knows more about this?

                                                                                                                                        2. 3

                                                                                                                                          This would only be for the master nodes, which are provided for free on GKE.

                                                                                                                                            On several machines that I have, it’s more like 400 MB, which includes kube-proxy (reverse-proxy management for containers), flannel (network fabric), and the kubelet (container management). That can seem huge, but it offers guarantees and primitives that a php+mysql wiki could use to be easily deployable and, hopefully, more resilient to underlying failures.

                                                                                                                                          2. 1

                                                                                                                                            This would only be for the master nodes, which are provided for free on GKE.

                                                                                                                                            1. 2

                                                                                                                                              Are you sure? The part of the doc I linked is specifically about the cluster nodes and not the master.

                                                                                                                                              1. 1

                                                                                                                                                Sorry, wrong thread… It does reserve 25%, which is a safe bet to ensure that the kube-system pods run correctly.