1. 21

  2. 32

    I thought I’d read this before, and after a quick search, I have: https://lobste.rs/s/34sse3/why_discord_is_switching_from_go_rust

    1. 1

      Thanks for letting me know!

    2. 12

      This article is a year old. It would be interesting to read whether the transition was successful, but there are no follow-up posts on their blog.

      1. 61

        Probably still waiting for the Rust project to finish compiling…

        1. 6

          It would probably be fine if they skipped serde. I’m wondering whether they’re using nanoserde or not; switching to nanoserde can take a build from 50 seconds down to 7.

          1. 3

            This is the winning joke.

        2. 3

          The first time I saw this, I found it quite sensationalist. I wonder if they’ve spent the past year actively ripping out Go and replacing it with Rust.

          1. 1

            I’m really confused as to why partitioning the data didn’t work.

            “There is one Read State per User per Channel.”

            OK, so we’ve outlined the partitioning boundaries …

            “There are millions of Users in each cache.”

            … and the fact that we’re apparently not partitioning the data.

            “we figured a smaller LRU cache would be faster because the garbage collector would have less to scan”

            Sounds like we’re on the right path …

            “if the cache is smaller it’s less likely for a user’s Read State to be in the cache.”

            Sorry, what? Why? It’s not at all clear why they can’t partition the data by user in such a way that leaves the hit rate unaffected.
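            For what it’s worth, “partition by user” could look like routing every user to a fixed shard by hashing the user ID, so that splitting one big cache into N per-shard caches doesn’t change which entries are hot. This is only a sketch of that idea; the shard count and key shape are my assumptions, not Discord’s actual design:

            ```rust
            use std::collections::hash_map::DefaultHasher;
            use std::hash::{Hash, Hasher};

            // Hypothetical sketch: route each user's Read States to a fixed shard,
            // so every lookup for that user lands on the same node's LRU cache.
            fn shard_for_user(user_id: u64, num_shards: u64) -> u64 {
                let mut h = DefaultHasher::new();
                user_id.hash(&mut h);
                h.finish() % num_shards
            }

            fn main() {
                // The same user always maps to the same shard, so the aggregate
                // cache contents (and hit rate) are unchanged by the split.
                let a = shard_for_user(42, 16);
                let b = shard_for_user(42, 16);
                assert_eq!(a, b);
                println!("user 42 -> shard {}", a);
            }
            ```

            Under that scheme, shrinking each shard’s cache while adding shards keeps the total cache size constant, which is presumably what the parent comment has in mind.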

            1. 1

              I don’t work at Discord, but I could imagine a scenario in which you’re faced with a difficult read state conundrum:

              • You want updating read state to be a single write, so you partition it by channel (otherwise you need to fan out writes to multiple consumers, and there could be a lot of consumers)
              • But if you partition it by channel and shrink the cache, you miss cache on read more often (because large Discord servers will have very active channels, and each message needs its own read state).

              The right solution probably depends heavily on Discord’s existing architecture, but preferring cache misses on reads to multiplying writes by N (where N could be the size of an entire Discord server, assuming there’s a general channel everyone joins) could be the right call. In turn, that could mean a language without garbage collection, and with a large library of high-quality generic containers, is the better choice compared to Go.
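              To make the hit-rate side of that trade-off concrete, here is a toy LRU keyed by (user, channel), the granularity the article describes for Read States. The capacity, key shape, and eviction policy are assumptions for illustration, not Discord’s actual code:

              ```rust
              use std::collections::HashMap;

              // Minimal LRU sketch: with a smaller capacity, entries are evicted
              // sooner, so reads miss the cache more often.
              struct ReadStateCache {
                  capacity: usize,
                  map: HashMap<(u64, u64), u64>, // (user, channel) -> last-read message id
                  order: Vec<(u64, u64)>,        // least recently used first
              }

              impl ReadStateCache {
                  fn new(capacity: usize) -> Self {
                      Self { capacity, map: HashMap::new(), order: Vec::new() }
                  }

                  fn touch(&mut self, key: (u64, u64)) {
                      self.order.retain(|k| *k != key);
                      self.order.push(key);
                  }

                  fn put(&mut self, user: u64, channel: u64, last_read: u64) {
                      let key = (user, channel);
                      if !self.map.contains_key(&key) && self.map.len() == self.capacity {
                          let evicted = self.order.remove(0); // drop least recently used
                          self.map.remove(&evicted);
                      }
                      self.map.insert(key, last_read);
                      self.touch(key);
                  }

                  fn get(&mut self, user: u64, channel: u64) -> Option<u64> {
                      let key = (user, channel);
                      let hit = self.map.get(&key).copied();
                      if hit.is_some() {
                          self.touch(key);
                      }
                      hit
                  }
              }

              fn main() {
                  // With capacity 2, inserting a third user's state evicts the oldest.
                  let mut cache = ReadStateCache::new(2);
                  cache.put(1, 100, 555);
                  cache.put(2, 100, 556);
                  cache.put(3, 100, 557); // evicts user 1's entry
                  assert_eq!(cache.get(1, 100), None);
                  assert_eq!(cache.get(3, 100), Some(557));
              }
              ```

              The same structure also shows the write side: if state is partitioned by user instead, one message in a busy channel can mean touching an entry for every member, which is the fan-out the first bullet is worried about.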