Threads for sluongng

    1. 4

      I’ve worked quite closely with FastCDC for the past year. The problem with FastCDC is that the paper is not specific about how it should be implemented. That has led to different implementations in the wild using different parameters, causing the same big blob to be chunked differently by different implementations. So let’s say we have a xit-zig implementation and a xit-rs implementation: it’s likely that each will chunk a tarball in a different way, reducing the effectiveness of deduplication between chunks.

      Secondly, git-lfs is quite open about how to extend it. https://github.com/git-lfs/git-lfs/blob/main/docs/extensions.md You can implement your own client-side storage, your own transfer protocol as well as server-side storage on top of the existing implementation. So it’s not hard to apply a FastCDC layer on top of this.

      Finally, having administered a handful of git servers for large enterprises over the last few years, supporting x00-x000s of users, I think only 2 of the 5 listed points are relevant to end users today (numbers 1 and 4: git compatibility and large blob support). Would love to be proven wrong though.

      1. 2

        I always enable net/http/pprof in my servers.

        1. 2

          Not every Go program is a server, though.

          1. 1

            True, but in the context of the article if your program is hanging and it isn’t the pprof http handler that’s hanging then it may still be useful.

        2. 4

          I think the language would be a lot nicer without the make() init. In my last 8 years of using Go, every time I took a break from Go and came back, nil maps, nil channels, and zero-value initialization tripped me up quite consistently. With generics available now, I think there are plenty of ways to clean up these older APIs and make the language friendlier to new users.
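          The nil-map trap in particular is easy to reproduce; a minimal sketch:

```go
package main

import "fmt"

func main() {
	var m map[string]int // declared but never make()'d: a nil map

	// Reads from a nil map are fine and return the zero value...
	fmt.Println(m["missing"]) // prints 0

	// ...but writes panic at runtime with
	// "assignment to entry in nil map".
	defer func() { fmt.Println("recovered:", recover()) }()
	m["k"] = 1
}
```

          The asymmetry (reads silently succeed, writes panic) is exactly why it trips people up after a break from the language.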

          1. 3

            I was going to write a comment about why you really want nil channels, but carlana already did so I’ll just bring it to your attention in case you’re only watching replies.

            Nil maps aren’t so directly useful, but the difference between an empty map and no map at all is 48 bytes, which is non-negligible if you have some kind of data structure with millions of maps that might or might not exist.

            1. 4

              I was going to write a comment about why you really want nil channels, but carlana already did so I’ll just bring it to your attention in case you’re only watching replies.

              The issue is not the functionality, it’s the implicitness. It’s that if you forget to make() your channel you get a behaviour which is very likely to screw you over when you don’t expect it.

              1. 2

                If it gave you a channel by default, it would be implicitly open or closed, buffered (with some capacity) or unbuffered. But it declines to do that and has you construct what you want, which makes the code more explicit.

                At the same time, there’s a need for one more thing, a (non)channel that is never available for communication. Go types must always have a zero value, and “a channel that’s never ready” is a lot more zero-ish than anything else you could come up with.

                Yes, there’s a guideline that zero values should be useful when possible (a nil slice is ready for append, a zero bytes.Buffer is empty, a zero sync.Mutex is unlocked), but that takes a backseat to the requirement for a zero value to be uniquely zero. Complaining about how

                var ch chan int
                ch <- 42
                

                fails feels the same as complaining about how

                var ptr *int
                *ptr = 42
                

                fails.

                1. 4

                  If it gave you a channel by default

                  How about it doesn’t do that either.

                  But it declines to do that

                  Would that it did. It does give me a channel by default, one that is pretty much never what I want.

                  which makes the code more explicit.

                  Ah yes, the explicitness of implicitly doing something stupid.

                  Go types must always have a zero value

                  Have you considered that that’s a mistake?

                  Complaining about how […] fails.

                  It does, does it not? In both cases the language does something which is at best useless and at worst harmful, and which, with a little more effort put into its design, it could simply not do.

                  1. 1

                    one that is pretty much never what I want.

                    Like I said elsewhere, for pretty much every non-trivial use of channels, you will want a nil channel at some point, so you can deactivate one branch of a select. It’s pretty much always one of the things I want.
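                    For anyone who hasn’t seen the pattern, here’s a minimal sketch of using a nil channel to deactivate a select branch once its source is exhausted:

```go
package main

import "fmt"

// merge sums values from two channels until both are closed. Setting a
// closed channel to nil disables its select case: receiving from a nil
// channel blocks forever, so select simply never picks that branch again.
func merge(a, b <-chan int) int {
	sum := 0
	for a != nil || b != nil {
		select {
		case v, ok := <-a:
			if !ok {
				a = nil // a is closed: deactivate this branch
				continue
			}
			sum += v
		case v, ok := <-b:
			if !ok {
				b = nil // b is closed: deactivate this branch
				continue
			}
			sum += v
		}
	}
	return sum
}

func main() {
	a := make(chan int, 2)
	a <- 1
	a <- 2
	close(a)
	b := make(chan int, 1)
	b <- 3
	close(b)
	fmt.Println(merge(a, b)) // prints 6
}
```

                    Without the nil trick, a closed channel’s case would fire in a tight loop (a receive on a closed channel never blocks), so this is the idiomatic way to retire one branch of a select.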

                    1. 1

                      I think I disagree? Or at least I’ve never made good use of a nil channel. Maybe now that I’ve learned about its uses for select{...} I’ll have a different opinion, but there have been plenty of times I don’t want a nil channel. And this problem isn’t limited to channels either–Go also gives nil pointers and nil maps by default, even though a nil pointer or map is frequently a bug. Defaulting to a zero value is certainly an improvement on C’s default (“whatever was in that particular memory region”), but I think it would be a lot better if it just forced us to initialize the memory.

                      1. 3

                        I do wish that map was a value type by default and you would need to write *m to actually use it. That would be much more convenient. The Go team said they did that in early versions of Go, but they got sick of the pointer, so they made it internal to the map, but I think that was a mistake.

                        1. 3

                          Defaulting to a zero value is certainly an improvement on C’s default (“whatever was in that particular memory region”)

                          Technically it’s UB, which is even worse. You may get whatever was at that location, or you might get the compiler deleting the entire thing and / or going off the rails completely.

                          1. 2

                            Good point. Even keeping track of what is/isn’t UB is a big headache.

                          2. 1

                            I think it would be a lot better if it just forced us to initialize the memory.

                            That would have implications across the whole language design that wouldn’t, in my opinion, be overall good. Zero-initialization is quite fundamental.

                            This is also my answer to masklinn’s “how about it doesn’t do that either” in a comment I couldn’t bring myself to respond directly to.

                            1. 2

                              Yeah, it’s not going to happen, but I’m convinced that would have been the choice to make in 2012 (or earlier). I can live with it, and Go is still the most productive tool in my bucket, but that particular decision is pretty disappointing, especially because we can’t back out of it the way we could have done if we had mandatory initialization (you can relax that requirement without breaking compatibility).

                    2. 1

                      To add to that point, there’s another issue which is that many channels oughtn’t be nil, but Go doesn’t give us a very good way to express that. In fact, it goes even further and makes nil a default even when a nil channel would be a bug. I really, really wish Go had (reasonably implemented) sum types.

                  2. 3

                    I haven’t done Go seriously in a while, but when I did, I was continually annoyed at this sort of thing because there’s no way to encapsulate these patterns to make it easy to get them right. I remember reading a Go team post about how to use goroutines properly and ranting about how Go’s only solution for reusing high-level code patterns is blog posts.

                    But now that it has generics, is it possible to solve this? Has someone made a package of things like worker pools and goroutine combinators (e.g., split/merge) that get this stuff right so you don’t have to rediscover the mistakes?

                    1. 2

                      As an example of what annoyed me: things like this blog post on pipelines and cancellation, which should have just been a library, not a blog post.

                        1. 1

                          Yes! Thank you, I will check this out, as it looks like I may have a Go project coming up in the near future.

                          1. 3

                            Conc has a truly awful API. It really shows the power of writing a good blog post to make your package popular. I made my own concurrency package, and there were no ideas in conc worth copying. Honestly though, my recommendation for most people is to just use https://pkg.go.dev/golang.org/x/sync/errgroup since it’s semi-standard.

                    2. 2

                      I wrote something similar not so long ago: https://github.com/sluongng/dotfiles/blob/master/prepare-commit-msg-hook. Edit: I actually used an LLM to spit out the initial versions, then manually fixed the issues I encountered along the way.

                      1. 1

                        Nice one! Thanks for sharing your approach—I’ll check it out!

                      2. 16

                        Initially, the WASM file was around 32 MB. By applying Brotli compression, we were able to bring it down to around 4.6 MB

                        Isn’t that still a lot for frontend?

                        I think modern SPA frameworks have been getting a lot better at lazy loading things, right?

                        go-app looks sick though. It’s gonna be on my to-try list now.

                        1. 6

                          It’s definitely a lot to download, but comparing WASM and JS bundle sizes isn’t entirely apples-to-apples. Byte for byte, JS is one of the slower assets you can put on your page due to its relatively complex parsing requirements.

                          I’d be interested to know where the balance point is between parsing speed and download speed.

                          1. 4

                            The tricky thing is that the balance point would depend on the connection speed of your user. Some people still have really slow 3G as their primary internet connection.

                            1. 2

                              And some may still prefer less data usage to a minor speed-up.

                            2. 2

                              But a JS bundle can be cached, no? And I’d be really surprised if browsers aren’t JIT-compiling the cached JS internally into some kind of executable bytecode representation.

                            3. 5

                              Spoke a bit too soon. It seems like https://go-app.dev/reference crashed for me on iOS + Safari. One big downside of using wasm.

                              1. 1

                                Yeah, it’s a bit chunky for sure but not a dealbreaker. It’s deployed as a PWA, which helps; it’s heavily cached until the user clicks an “update” button.

                              2. 47

                                The tone in the discussion here is fairly negative so far (people don’t like an implementation they found online, which was not written by the authors of the paper). I skimmed the paper, and from a distance, as a non-expert, it seems reasonable: it is co-authored with people who are knowledgeable about this research field, there are detailed proof arguments, and the introduction shows a good understanding of the context and previous attempts in this area. Unlike the sensationalist Quanta title, the overall claim is (impressive but) fairly reasonable: they disprove a conjectured lower bound by doing better with a clever trick. They point out that this trick is in fact already used in some more advanced parts of the hashtable literature, but maybe people had not done the work of proving complexity bounds, and therefore not realized that it worked much better than expected – no pun intended.

                                Is this correct? I don’t know – I timed out without reading it in any level of detail. But it has the appearance of serious work, so I would be inclined to assume that it is unless proven otherwise.

                                1. 7

                                  The PoC was written by a random person AFAIK, and it shouldn’t reflect in any way on the paper or its authors. That could’ve been made more explicit in the other thread though :)

                                  And I agree the paper seems serious, though I don’t have the background to understand everything.

                                  1. 1

                                    I think the paper might help in crafting some novel implementations with specific worst-case tradeoffs. But it would not impact the generic implementations such as Abseil (Google) or Folly F14 (Meta).

                                  2. 9

                                    Shame about the title, because the post is great. I particularly appreciated the section that compared governance models across multiple other programming languages.

                                    1. 1

                                      I wonder which bucket Go falls into.

                                      1. 4

                                        Go was mostly steered by a core team of Plan 9 refugees until recently. I hope the culture is strong enough to survive Russ Cox stepping down as lead, but time will tell. Rob Pike probably already thinks we ruined it by adding generic iteration.

                                    2. 18

                                      I am one of the people who began to wonder “Is my phone listening to me?” because of Instagram ads. I never really believed that they were, but it felt like they could be given how targeted the ads were. Here’s a dilemma.

                                      1. Instagram/Meta/Whoever was listening and sending me microphone-based targeted ads. That’s definitely bad.
                                      2. Instagram/Meta/Whoever was not listening, but they still had a method to send me (and people I talked with) ads targeted well enough that they felt as if they could have been based on microphone data. (E.g., I talk to my wife about needing sweaters. Within minutes, both my wife and I have buckets of ads for men’s sweaters on Instagram.) That’s also definitely bad.

                                      Either way, I’m glad I quit Instagram (and all social media) as a 2022 New Year’s resolution.

                                      1. 8

                                        There are plenty of ways to track users nowadays: cookies, pixels, TLS hello handshakes,… Each comes with a different “resolution”, allowing advertisers to send you more relevant ads. Chances are that Meta, Google, Adobe, Alibaba, and ByteDance are just really good at building these data pipelines and segmenting them with different clustering algorithms and ML-powered recommendations. It’s next to impossible to disable these completely, given that many of these companies also design and sell the underlying compute platforms you are using: Android, Chrome, search, email, ISPs, etc…

                                        I think all these fear mongers created a really good selling pitch for Apple’s private compute. However, I doubt that it’s gonna last long, because Apple could start selling ads themselves.

                                          1. 1

                                            “Ads that are delivered by Apple’s advertising platform may appear on the App Store, Apple News, Stocks, and Apple TV app. Apple’s advertising platform does not track you, meaning that it does not link user or device data collected from our apps with user or device data collected from third parties for targeted advertising or advertising measurement purposes, and does not share user or device data with data brokers.”

                                              1. 8

                                                Yes. Why does nobody believe anything a company says any more?

                                                When companies are caught lying even a tiny bit it’s headline news. And yet a lot of people seem convinced you can’t believe anything any company says about anything.

                                                I guess the big problem here is probably around telling the difference between marketing and material statements. If a company says “our product is the best solution on the market” that’s a different category of statement from “our products do not listen to your microphone to target ads” or “we don’t train our LLMs on your inputs”.

                                                1. 5

                                                  Why does nobody believe anything a company says any more?

                                                  Because they’re incentivized to lie by the only factor that they care about, money. If they can make more money by lying, they will, then pay a fine or a PR agency or fire some token employee if it comes out. Doing otherwise would be failing the great god of “maximizing shareholder value”. I mean, look who the richest man in the world is right now; what’s his history with materially false statements?

                                                  1. 4

                                                    None of the companies I have ever worked for have seemed like that as an insider.

                                                1. 1

                                                  If there’s no user or device data shared with data brokers, how are those brokers targeting ads?

                                                  1. 2

                                                    People buying ads from Apple can target them at “segments” based on personal info, as long as each segment contains at least 5,000 people. It’s in the link above. https://www.apple.com/legal/privacy/data/en/apple-advertising/

                                          2. 8

                                            I also believe that “they are listening”. I’m a software engineer who’s worked in ad-tech and has developed mobile apps in the past.

                                            I’m aware that, for example, the Facebook app may not be able to literally use my microphone to listen at that exact second. But I am also aware at a high level that lots of data is collected and exchanged between companies for advertising.

                                            So whether or not Google’s app is listening to me, or my “smart TV” is listening and sending that info with identifying IP or other identity resolution methods, or someone else’s phone is listening and sharing text plus ip and geo, the result is the same. I have many times said incredibly specific things and immediately gone from seeing zero ads about that product to seeing an ad for that product.

                                            It’s kind of like solving a crime or debugging a production issue. The advertisers possess the motive. I believe that they do possess the means (other devices or maybe other more unscrupulous apps on your phone).

                                            More often than not the xkcd observation is true “Correlation doesn’t imply causation, but it does waggle its eyebrows suggestively and gesture furtively while mouthing ‘look over there’”.

                                            1. 1

                                              You make a good point. While I am comfortable with Apple and their promise of protecting users, my home network has several IoT devices of questionable origin (like the 4K projector that lets me log in to Netflix and plays sound), and I cannot be sure that they aren’t listening in.

                                              As an example, random Chinese projector brands offer their $300+ projectors for peanuts (under $50) with coupon codes. I won’t be surprised if these are actually CCP surveillance devices. I cannot prove it either way, but I am inclined to believe the no-name cheap Chinese projector is doing something nasty.

                                          3. 3

                                            Dealing with a small conflu after 38C3, and also realizing I have to do work for my FOSDEM talk. Turns out submitting things for conferences and having them accepted is fun, preparing the talk and the project is less fun!

                                            So: chill planning of the above work, and re-watching Halt and Catch Fire, which I recommend everyone watch.

                                            1. 1

                                              Haha, I also have an accepted talk, and I can’t stop thinking about the slide composition.

                                              1. 1

                                                I’ll be thinking about slide compositions until one hour before I hold the talk :) Such is the curse of public speaking!

                                                1. 1

                                                  One trick I have started using is to voice-chat with ChatGPT while out and about. I can bounce different ideas off it, however incoherent, and let it ask questions and summarize the conversation. It’s like an interactive, voice-driven Google Docs scratch pad.

                                            2. 4

                                              Billions of people around the world use Google’s products every day, and they count on those products to work reliably

                                              Really? The only thing that still feels reliable to me is search. Maps was kind of reliable, but recently shops have disappeared randomly and wayfinding behaves very strangely (well, I’m in Japan, but it’s still Google). For every other product, I’ve long since given up on relying on them. Heck, for most products I would even assume they will cease to exist within the next couple of years.

                                              1. 6

                                                Think of what a mom in India or a dad in Indonesia uses: Search, YouTube, Gmail, Photos, Drive. These Google products have worked wonders for the vast majority of cases.

                                                1. 6

                                                  Thanks to the GenAI search results, I wouldn’t even call Search reliable.

                                                  1. 4

                                                    shops have disappeared randomly

                                                    I suspect data integrity and systems uptime are treated separately at google.

                                                    1. 3

                                                      Gmail is extremely commonly used I think. Interestingly, I never use Google for search. I find it to be pretty bad.

                                                      1. 1

                                                        Right. I don’t use Gmail so can’t judge that.

                                                      2. 1

                                                        Maps navigation is fine in Toronto, but I don’t drive.

                                                      3. 1

                                                        I tried to use jj with my existing GitHub PR workflow, and most options seem to yield a worse experience than using the git + gh CLIs. Even my shell integration seems to struggle quite a bit with a jj repo.

                                                        It would be nice to see more improvements in these areas, but I also understand that it’s not a priority for the team at Google.

                                                        1. 4

                                                          But in his eyes, I was probably just the sysadmin being a pain and judging the “quality” of his work.

                                                          It’s kind of ironic that’s exactly what the author is. The whole story isn’t even relevant to Sentry, it’s just the story of a server running out of disk space which has happened for as long as servers have existed.

                                                          1. 17

                                                            To me it’s another example of the old operating model: separation of dev, ops, and billing. The dev has no clue how the Sentry server is set up. The ops has no say or control over the code, and just says that it should be written properly. Neither one has any idea of the financial constraints involved in making the right decision about buy vs. build.

                                                            1. 12

                                                              It’s the story of a server running out of space, and the prod server relying on that other server never to run out of space.

                                                              1. 4

                                                                The actual moral of the story here is that when this was moved to the cloud, someone should have set up spending alerts. Those would have tripped immediately and shrunk the problem dramatically.

                                                              2. 3

                                                                I’ve been enjoying lua-for-configuration in Neovim and Wezterm recently while setting up a new computer. Especially in Neovim, it’s so great to go from the very quirky and weird Vimscript config to Lua where a setting looks like a setting, and a function call looks like a function call. More or less.

                                                                Neovim and many modern written-for-nvim plugins also sport great type documentation coverage, so you can get great completion and in-editor type checking… once you get your config running and install the lua checker.

                                                                For general programming I find the situation with array/list/sequence “nil termination” unacceptable. It’s bad enough in C strings but to have the same kind of wonky issue for the basic “ordered compound type” in a dynamic language is just… ugh. It will hide nil pointer mistakes and instead turn them into silent data loss, horrendous. I guess people deal with it because large amounts of production lua do exist, and there is the type checker stuff. I just don’t want to have to think about such things when programming.

                                                                1. 3

                                                                  I have used both, and I can’t say I enjoy using Lua at all. Instead, I would prefer something simpler, preferably with typing support for better IDE/LSP integration. In my experience, Buck2’s Starlark got it just right.

                                                                2. 2

                                                                  TBH, I don’t care for it. I have been dealing with “big code” problems for the past few years and have long ditched relying on visual cues/anchors to navigate around. My workflow is either:

                                                                  • I have a stack trace with the exact line of code in a function. I can navigate there directly when opening a file.

                                                                  • Or I have some clue of what I am looking for and use some type of search to find references to that clue. Then I iterate through the references until I get to where I need to be. This could be regex search, or code intel (LSP, tree-sitter) searches, or ctags, etc..

                                                                  This helps me narrow down the range of code I want to see much faster and better than relying on a folded function context. Historically, I have also seen folded functions be misleading, because their names can drift out of date versus the actual implementation due to scope creep over time.

                                                                  1. 6

                                                                    This made me want to check how to do “Go to definition” in a separate split in Neovim and its built-in LSP client:

                                                                    • Ctrl-W ] will open up the definition in a new split and place your cursor in the new split.
                                                                    • Ctrl-W } will open up the definition in a preview window, keeping your cursor where it is.
                                                                    1. 2

                                                                      In case anyone’s wondering for Emacs with xref via Eglot (since I just checked):

                                                                      • M-. is the usual xref-find-definitions, i.e., in-place. (This one I use all the time.)
                                                                      • C-x 4 . is xref-find-definitions-other-window which will make a split if needed or reuse an existing one, then move the cursor to the new split. (This one was new to me.)
                                                                    2. 1

                                                                      I posted this on the orange site, but there is a similar project from Datadog: https://github.com/DataDog/orchestrion

                                                                      The problem with these “codegen” projects is that the API for codegen is not standardized and is therefore not supported by the majority of Go tooling. Datadog’s project uses comments for this reason.

                                                                      Another approach is to use an advanced build tool like Bazel or Buck and integrate the codegen step before the compile step. That can work well, but your code editor will have a hard time navigating the generated code if it does not support your advanced build tool well.

                                                                      1. 2

                                                                        txtar is actually a really good way to concatenate files to feed into an LLM context.

                                                                        1. 1

                                                                          I use tail -n +1 for this.
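                                                                          For anyone wondering why that works: when given more than one file, tail prints a ==> name <== header before each file’s contents, so tail -n +1 (print from line 1) becomes a crude concat-with-filenames:

```shell
# Create two small files, then concatenate them with filename headers.
printf 'alpha\n' > a.txt
printf 'beta\n' > b.txt
tail -n +1 a.txt b.txt
# ==> a.txt <==
# alpha
#
# ==> b.txt <==
# beta
```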

                                                                        2. 36

                                                                          Q: why not MySQL?

                                                                          A: Postgres has a bunch of feature advantages over MySQL. My favorite is PG’s much more flexible indexing. Postgres supports expression indexes and conditional indexes. To get the same results in MySQL you need to use triggers and multiple tables, it’s super painful.

                                                                          For example, we have a table at Notion called block with a zillion rows. Any index that contains an entry for every row is huge. With these two features we can make small and effective indexes like create index image_src on block (json_get(block.json_data, “$.image_src”)) where block.type = ‘image’. Since only like 1% of blocks are images (not the real number but sense of scale is right), we get a small efficient index for image block queries on a specific json property.
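                                                                          (json_get above is shorthand; in stock Postgres the same partial expression index could be written roughly like this, assuming json_data is a jsonb column — a sketch, not Notion’s actual DDL:)

```sql
-- Hedged sketch: index only image blocks, keyed on one JSON property.
-- ->> extracts a jsonb field as text; the WHERE clause makes it partial.
CREATE INDEX image_src ON block ((json_data ->> 'image_src'))
WHERE type = 'image';
```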

                                                                          1. 1

This sounds cursed to me, tbh. Your pitch made it sound like the flexibility of the DB also enables anti-patterns.

                                                                            1. 8

                                                                              Treating every kind of “thing” as a block that can be inserted into a page is Notion’s main thing. Images, kanban boards, paragraphs, to-do lists, etc.

                                                                              1. 2

                                                                                Right, I think Notion is a fine product. But the user experience does not need to map directly to the database schema.

What I am reading here is a “zillion rows” SQL table used as a non-SQL JSON blob store. And you are paying the index cost on every insert/update even though only 1% of the table uses it.

I get that the flexibility is good and saves you precious time building up your business, sure. But it also leaves behind tech debt, which does look cursed from a distance.

                                                                            2. 1

We use something similar at work, also for a very large table. These partial indexes do seem to do a full table scan to create the index itself, and this takes hours. Our solution requires us to create multiple partial indexes, and a new one needs to be added every so often.

                                                                              Do you all have this problem too? Did you figure out a bypass?

If it helps, we are on v13. Maybe a more recent version doesn’t have this full-table-scan problem? But information on this is sorely lacking on the Interwebs, or maybe my Google-fu is just not strong.

                                                                              1. 6

There’s no way around scanning the whole table to check whether each row needs to be in the index or not. It certainly burns IOPS, but it doesn’t cause us too much trouble: we use CREATE INDEX CONCURRENTLY, plus a few admin queries to avoid interrupting VACUUM, and then cancel ongoing locks on the table once we queue the index DDL. CONCURRENTLY does need a brief table lock at the start and will stall all other queries while it waits for that lock, so it’s important to cancel ongoing reads/locks when you start the index build.

                                                                                EDIT: we partition our block table into discrete schemas so we can apply indexes and other DDL more incrementally. We have 480 schemas split across 96 Postgres instances.

                                                                              2. 1

                                                                                I can’t say I’ve actually tried it, but wouldn’t a generated column (and an index on it) let you achieve the same thing in mysql?

                                                                                1. 4

                                                                                  I think with a generated column in MySQL you end up with an index on the JSON expression, but it would still contain an entry for every row in the table, in other words size(index) = O(size(table)). But with Postgres, size(index) = O(size(rows matching where clause)), which is vastly smaller than size(table) for us.

Another place this came up for me recently was investigating building an Entity-Attribute-Value indexing table. We studied Tao (fb’s MySQL+memcached EAV+graph store) and talked to some people who worked on Asana’s discount copy of Tao. When building on MySQL, those teams needed a different table per indexed Value data type and hugely complicated index-maintenance logic, whereas in Postgres we just needed one EAV table with a column per possible Value data type, because sparse indexes in Postgres are a thing, and our update queries were very simple.

                                                                              3. 1

I don’t understand structs.HostLayout. Could somebody give me an example of how it’s used?

                                                                                1. 7

In practical terms, this forces the Go compiler to lay out a struct in memory the way C would on the host system the code will run on. This in turn lets you pass the struct to C code as if it were a native C struct, which is useful for calling libraries written in C (typically via cgo) from Go code.

My understanding is that Go already does this by default, but they’d like to stop doing that and only do it when actually required (i.e., for structs marked with HostLayout). This enables layout optimizations.

                                                                                  More details and nuance in the proposal.

                                                                                2. 3

I still don’t get why one should invest their time into this. The article seems to imply some performance improvements, but no benchmarks or numbers were provided. Google themselves have yet to implement HTTP/3 support in the Go stdlib. They often claim to be the biggest Go and gRPC adopters, and they often prioritize performance improvements with clear percentage wins.

I also don’t understand the rationale behind using ConnectRPC for gRPC. The entire point of gRPC is to define handlers as native functions, using structs as request/response objects. So why would one want to go back to HTTP handlers for gRPC? I think the selling point of ConnectRPC is being able to use protobuf to define HTTP APIs rather than gRPC services?

                                                                                  1. 1

The article seems to imply some performance improvements, but no benchmarks or numbers were provided.

                                                                                    Yeah, agreed that this should go further. I do need to do some benchmarks, but quite literally the first step is making it work. My goals for this article were to provide an example of doing this and talking about the general ideas about why you’d want to use HTTP/3. Most of the things that make HTTP/3 “more performant” are related to the number of round trips required, which I feel like I explained decently.

HTTP/3 support in the Go stdlib is on its way, in time. Maybe you shouldn’t spend time on this if you don’t care about the benefits? I just thought that this stuff is interesting and others might be interested in the topic too.

The entire point of gRPC is to define handlers as native functions, using structs as request/response objects. So why would one want to go back to HTTP handlers for gRPC?

Yep, ConnectRPC works similarly to this. I think you’re confusing something returning an http.Handler with requiring the user to implement an http.Handler. With ConnectRPC, users implement RPC methods with typed input/output, the same as grpc-go. The difference is that ConnectRPC converts this to an http.Handler that you can mount using your favorite mux/router library. This allows you to use the same tooling as the standard library instead of gRPC-specific tooling. For example, you can use “normal” http middleware with ConnectRPC.

In this article, I used quic-go’s HTTP/3 server and client along with ConnectRPC. This was trivial because ConnectRPC works nicely with net/http, so I was easily able to work with quic-go’s http3.Server and http3.RoundTripper. This is absolutely not possible without a lot of effort with grpc-go.

Further, you can mount other handlers alongside it without making a new http.Server instance on a different port. grpc-go does support this, but it’s experimental and much slower. Also, ConnectRPC exposes gRPC-Web without the need for an additional load-balancer deployment and network hop.

I think the selling point of ConnectRPC is being able to use protobuf to define HTTP APIs rather than gRPC services?

                                                                                    I think you should re-evaluate what ConnectRPC actually is. It’s a complete replacement for gRPC. “Connect” is the protocol that ConnectRPC exposes alongside gRPC/gRPC-Web that’s more compatible and looks like a normal HTTP+JSON or HTTP+protobuf API for unary calls. Streaming calls still require some special framing. These three protocols (connect, grpc, grpc-web) all come “built in” with ConnectRPC so it gives you access to tooling for all three.