Threads for rsaarelm

    1. 19

      This seems to be the general consensus in the gamedev circles I run in: Rust is just too restrictive.

      It’s a fine language, but let’s say you are making something for a game jam, so you have 48h. Rust expecting immediate perfection really eats your time away there, when you’re trying to figure out whether some basic gameplay concept even feels good.

      Gamedev is a lot of hacks, for better or worse. Rust is a very rigid language. It’s not surprising the two don’t match.

      I am sure that with enough time and effort games can be made. And I hope Rust gets there, even though I don’t use it (since in my free time I mostly work on games). But I do want to use it for games. There are just too many papercuts currently, like the above.

      1. 11

        I think Rust is better at being a game engine programming language than a game programming language.

        1. 9

          On the topic of productivity, part of the problem with Rustaceans and Rust’s partisans is that they attribute to productivity what other ecosystems would often attribute to attrition.

          Rust’s restrictiveness is a feature - and a feature that you subscribe to when you adopt Rust. Therefore, if the language disciplines you into a pit of correctness and if you believe that correctness trumps everything else, all of these complaints go out the window: compilation time, it won’t matter; restrictiveness, it won’t matter; and the other alleged downsides won’t matter, because in the end you get something that is, more often than not, correct.

          So when you are running a high dollar system (like a nuclear plant), being able to sleep well at night knowing it won’t botch a valve or something - Rust is really the only show in town. It will take ages to get it done, but when it is finally done, you know the software won’t be to blame in the next TMI incident.

          Games are exactly the opposite of that. Games are all about being fun and entertaining. More often than not, it’s art and not engineering. All of Rust’s attributes run against that.

          So the real question is: why do so many gamedevs give Rust a try? They are not dumb. They know what Rust is beforehand. The reason they end up with Rust is that a lot of them are emigrants from C++. They think Rust would be a less messy C++. A lot in software is about trade-offs: yes, these gamedevs get a less messy C++, but they also have to accept Rust’s shackles to benefit from that.


          On a different note, this parody is still a bullseye regarding Rust and gamedev.

          https://youtu.be/TGfQu0bQTKc?feature=shared&t=164

          1. 16

            So the real question is: why do so many gamedevs give Rust a try?

            Because when I mention that I work on a game in a non-Rust language, someone will bring up Rust. See example: https://lobste.rs/s/noerku/moving_my_game_project_from_c_odin#c_0jkxqv

            1. 2

              I doubt lots and lots of people are trying Rust just because of people on forums bringing it up. I mean, they’ve got to all have heard of the various negative experiences other people have had, right?

            2. 14

              So when you are running a high dollar system (like a nuclear plant), being able to sleep well at night knowing it won’t botch a valve or something - Rust is really the only show in town.

              Weird that most nuclear power plants and their control systems were built long before Rust was invented ;)

              1. 3

                Why don’t you rebuild your Thermonuclear plant with Rust? /jk :)

              2. 14

                Rust is really the only show in town.

                This isn’t true. Ada has been in critical systems for a long time.

                1. 3

                  I stand corrected. You’re right.

                2. 11

                  I don’t care about being good at game jams, I want to make a game system that can be worked on over a decade and where the type system will keep parts of it manageable and partitioned so things won’t break even if I forget how the exact details work. I want to make the program actually reflect the structure of the game in types rather than just being amorphous code messily hammered into the right shape but having very few formal constraints that match the ideas. I want to discover how to make a clean, concise and comprehensible large-scale architecture for a complex game instead of yet another pile of spaghetti that painfully implements one design and that’s that. This isn’t good for actually getting games finished at any sort of reasonable speed, but it’s what keeps me going.
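
                  A toy sketch of the kind of thing I mean by “structure in types” (all names made up): distinct ID types and an explicit phase enum instead of bare integers and ad hoc flags.

                      // A hypothetical sketch: the game's phases and identities are spelled out in
                      // types, so the compiler forces every phase to be handled explicitly.
                      struct EntityId(u32);

                      enum Phase {
                          Exploring { player: EntityId },
                          Inventory { player: EntityId, cursor: usize },
                          GameOver,
                      }

                      fn update(phase: &mut Phase) {
                          match phase {
                              Phase::Exploring { .. } => { /* movement, combat */ }
                              Phase::Inventory { .. } => { /* item juggling */ }
                              Phase::GameOver => { /* restart prompt */ }
                          }
                      }

                      fn main() {
                          let mut phase = Phase::Exploring { player: EntityId(0) };
                          update(&mut phase);
                      }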

                  1. 10

                    So the real question is: why do so many gamedevs give Rust a try?

                    Same reason as everyone else: it’s immensely hyped and you don’t want to be the last one learning it. That’s the quasi-universal answer I heard in the gamedev space in the last year. The “less messy” part isn’t a substantial part of it. It helps that most of the footguns result in Perl-like compiler messages instead of CVEs, but Rust is at least as complex as C++, so writing good, clean Rust code is about as hard and as frustrating.

                    I learned it hoping it would be our (embedded development, at the time) industry’s second chance after Ada and maybe we wouldn’t miss it. I don’t know if we’ll miss it after all but I’m not entirely sure it’s our industry’s second chance in the end :-).

                    FWIW, I stuck with Rust, both for personal projects and professionally (for embedded and various “systems” projects, for lack of a better term). But I basically don’t know anyone in the gamedev space who adopted Rust or advocated for its adoption by their team after learning it, except for a very narrow space (crypto/NFT gamification space, where Rust has… a lot of visibility). Lots of gamedev projects require a lot of model refinement and, thus, a lot of changes and refactorings at the data end. The kind of productivity Rust gives you with that is out there on par with ASM. The defect rate is often close to zero, but that pace of iteration doesn’t fly in the gamedev space.

                    1. 2

                      It will take ages to get it done, but when it is finally done, you know the software won’t be to blame in the next TMI incident.

                      I don’t think a software bug was to blame for the last TMI incident?

                      There was alert fatigue but that’s not a problem that Rust can solve.

                      To be fair, Rust would probably have prevented the Therac-25 incident.

                    2. 5

                      I just wish they would write their network components with fewer CVEs.

                    3. 4

                      Good stuff. I think this is the first time I’ve seen someone actually take a detailed look at Ada.

                      Maybe you could have “fearlessness” instead of “dread” as the name of the last item so you could have a more intuitive “bigger number means more of the thing” meaning for the score?

                      1. 4

                        You’re the second person to suggest that, so you’re probably right!

                        1. 1

                          Wait, best practices (tm) say you need one more call for it before refactoring it into a generalized solution :)

                          Seriously, this is very interesting and inspiring, and it confirms to me that I don’t wanna drag with rust, and that I do want to start playing with zig, and perhaps get back to learning more ocaml.

                      2. 3

                          I really like Helix, but I’ve also managed to get hooked on outline notetaking that relies heavily on indentation-based text folding, and Helix has no folding support yet, so I’m stuck on Neovim for the time being.

                        1. 3

                          Folding, spell checking, minimap, context indicator. 4 features I’m waiting for. But still a delight to use.

                        2. 2

                            I sometimes ask perplexity.ai in the browser to write snippets where the entire solution is around ten lines and where I’m too lazy to look up the API. Often it’s for things like reading a file and splitting it into lines, which I do rarely enough and which has enough API cruft that I can’t rattle the whole thing off from memory.
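
                            In Rust, for instance, that particular snippet is only a handful of lines, but I still end up looking it up every time (file name made up):

                                use std::fs;

                                fn main() -> std::io::Result<()> {
                                    // Read the whole (made-up) file into one String, then iterate over its lines.
                                    let text = fs::read_to_string("notes.txt")?;
                                    for line in text.lines() {
                                        println!("{line}");
                                    }
                                    Ok(())
                                }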

                          I’m not a JavaScript coder and also had good success giving it my so-so JavaScript functions for my home page and telling it to rewrite them into idiomatic JS.

                          I’ve used perplexity and ChatGPT for rubber duck brainstorming dialogues about some technical ideas. They’re not really good at criticizing stuff and are immediately very enthusiastic about whatever I propose, but they still feel more useful than just journaling into an empty file on my own. It’s nice that the AI reflects a summary of the ideas back at you. It’s also somewhat useful for coming up with technical terms to use with the ideas.

                            I’m somewhat curious about the editor AI autocompleters, but not enough to pay for one. I’m not really bound by code-cranking speed with what I’m currently doing, so they might not be of that much use.

                          1. 1

                              Why do people use “just”? A simple bash script will suffice and will always be more flexible.

                            1. 18

                              I don’t want flexibility, I want to have short names for complex commands and/or a sequence of commands with doc strings and simple argument handling.
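
                                For example, a justfile recipe (names and commands made up) gets a doc string from the comment above it and can take arguments with defaults:

                                    # Build the site and push it to the given target
                                    deploy target="staging":
                                        ./scripts/build.sh
                                        ./scripts/push.sh {{target}}

                                just --list shows the comment next to the recipe, and just deploy prod overrides the default target.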

                              1. 1

                                  Makes sense, but then I would still use a Makefile or https://taskfile.dev/

                                1. 11

                                    taskfile looks like it’s also a command runner, so switching to competing software isn’t compelling for those who already use and enjoy just. In my opinion, just is much better than Make as a command runner (as opposed to a build automation tool), and its syntax doesn’t rely on YAML (unlike taskfile’s). It’s simple, fast, and easy to use.

                                  1. 3

                                    “just” is harder to type than “make”. For these kinds of gathered maintenance tasks I use a shell script called “do” as the command. It can do things directly, or call through into “make” (or “just” if I used that – it does appear to have some useful properties).

                                    1. 2

                                        As a guy who really quite enjoys using YAML for keeping and organizing data, I find its use in taskfile horrendous… it just feels like you’re trying to jam a square peg into a round hole.

                                        Yeah, I actually prefer the made-up justfile language to it.

                                  2. 7

                                    It’s a shared convention. When I see a Justfile in an unfamiliar project, I know that that’s the project’s runbook that collects all the tasks the maintainer considers relevant and has brief documentation for each, and that they’re specifically meant to be run manually and not part of the source code.

                                    1. 2

                                      same, the idea is that teammates can now just change directory into a repo and run just to see what can be accomplished via the console…

                                        It’s about discoverability… this is so much better than having to dig through documentation to find relevant snippets. I combine its use with devbox.

                                    2. 6

                                        I use Make, because there is a very high chance that make is installed. I have no idea whether just is installed or not, so it’s hard to rely on it as a first-level dependency.

                                      1. 4

                                        It’s 2024, is it really that hard to <tool> install just? It’s not like someone needs to receive it on a floppy.

                                        1. 4

                                            To be honest, I get it when you’re working on several machines/VMs, where some are pretty bare environments, maybe not even configured by you. There are a lot of cases on shared computers or in enterprises where <tool> install just is indeed not that easy.

                                            It’s pretty niche, but I got interested in bootstrapping, and I realised how convenient it can be to use tools that come pretty early in the bootstrapping chain. I wouldn’t be surprised if there’s a plethora of situations like that, and if you can only use one tool somewhere, I get why you wouldn’t use something else anywhere. The tool has to be pretty bad for another to be worth mastering in addition to the bad one (but note I’m not saying gnumake is not one of those :P).

                                          1. 2

                                            It’s not hard, but then you have to document it. I’d argue just isn’t a big enough leap from make that it’s worth the hassle. Obviously other people disagree, and that’s totally reasonable. I’m not trying to argue my way is the only way, or even the best way.

                                          2. 1

                                            However, just is installable in a location not requiring privilege escalation via cargo install… and presumably, if you’re building a Rust project, you’ll either need to have Cargo installed or will get it by default when installing rustc.

                                            It’s also something that you might want to install on a Windows system… either with a changed shell for non-portable projects or because you might have GitBash but not Make.

                                          3. 2
                                            • You can type just or just taskname from any directory within your project and it’ll walk up the directory tree like git does.
                                            • It autogenerates a listing of available commands that can be viewed with just -l
                                            • It’s easy to install using cargo install, which is nice for Windows users if they’re either writing non-portable software (you can change the shell it uses) or they’ve installed something like GitBash but don’t want the hassle of getting Make installed too.
                                            • Its pseudo-Makefile syntax is easier for me to skim-read than reinventing it in shell script.
                                            1. 1

                                              Often I end up using npm for this, with something like npm run build, but only in JavaScript projects where I’m already assuming everyone has npm. Otherwise, I have a tools directory and put individual bash scripts in there.

                                              1. 1

                                                I use just as a per-project alias config. It’s really just a command runner, and only a command runner. No implicit rules about whether a rule is about a file that needs to be rebuilt or whatever. It’s dead simple for me to use, and writing a justfile is fast enough that it’s actually worth it for commands that I type 3 times a minute (insert relevant xkcd). I don’t want flexibility, I just want a command runner that flows the same way my brain was wired. Tbh that’s the kind of thing one might even hand-craft for oneself and it would still be a cool tool.

                                              2. 2

                                                  It’s less broad, but I’ve always liked Conal Elliott’s definition of having an implementation but no denotation.

                                                1. 1

                                                  Instead of evaluating any of the existing ones, I’m making my own dynamic code notebook implementation. I think I can get a polyglot one working without that much support machinery, and I’ve got an idea to use strace to do automatic dependency analysis between the embedded script files.

                                                  1. 44

                                                    Name popular OSS software, written in Haskell, not used for Haskell management (e.g. Cabal).

                                                      AFAICT, there are only two: pandoc and XMonad.

                                                    This does not strike me as being an unreasonably effective language. There are tons of tools written in Rust you can name, and Rust is a significantly younger language.

                                                    People say there is a ton of good Haskell locked up in fintech, and that may be true, but a) fintech is weird because it has infinite money and b) there are plenty of other languages used in fintech which are also popular outside of it, eg Python, so it doesn’t strike me as being a good counterexample, even if we grant that it is true.

                                                    1. 28

                                                      Here’s a Github search: https://github.com/search?l=&o=desc&q=stars%3A%3E500+language%3AHaskell&s=stars&type=Repositories

                                                      I missed a couple of good ones:

                                                      • Shellcheck
                                                      • Hasura
                                                      • Postgrest (which I think is a dumb idea, lol, but hey, it’s popular)
                                                      • Elm
                                                      • Idris, although I think this arguably goes against the not used for Haskell management rule, sort of

                                                      Still, compare this to any similarly old and popular language, and it’s no contest.

                                                        1. 9

                                                            I think postgrest is a great idea, but it can be applied in very wrong situations. Unless you’re familiar with Postgres, you might be surprised by how much application logic can be modelled purely in the database without turning it into spaghetti. At that point, you can make the strategic choice of modelling a part of your domain purely in the DB and letting the clients work directly with it.

                                                          To put it differently, postgrest is an architectural tool, it can be useful for giving front-end teams a fast path to maintaining their own CRUD stores and endpoints. You can still have other parts of the database behind your API.

                                                          1. 6

                                                              I don’t understand Postgrest. IMO, the entire point of an API is to provide an interface to the database and explicitly decouple the internals of the database from the rest of the world. If you change the schema, all of your Postgrest users break. An API is an abstraction layer serving exactly what the application needs and nothing more. It provides a way to maintain backwards compatibility if you need it. You might as well just send a SQL query to a POST endpoint and eliminate the need for Postgrest - not condoning it, but saying how silly the idea of Postgrest is.

                                                            1. 11

                                                                Sometimes you just don’t want to make a backend application at all; you only want a web frontend to talk to a database. There are whole “as-a-Service” products like Firebase that offer this as part of their functionality. Postgrest is the self-hosted version of that. It’s far more convenient than sending bare SQL directly.

                                                              1. 6

                                                                  With views, one can largely get around the “change the schema, break the API” problem. Even so, as long as the consumers of the API are internal, you control both ends, so it’s pretty easy to just schedule your cutovers.
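
                                                                  For example (hypothetical schema, table and column names), if a column gets split, a view can keep serving the old shape to clients:

                                                                      -- Hypothetical schema: users.name was split into first_name/last_name,
                                                                      -- but the exposed view keeps presenting the old "name" field to clients.
                                                                      CREATE OR REPLACE VIEW api.users AS
                                                                      SELECT
                                                                          id,
                                                                          first_name || ' ' || last_name AS name,
                                                                          created_at
                                                                      FROM public.users;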

                                                                  But I think the best use case for Postgrest is old, stable databases that aren’t really changing much anymore but need to add a fancy web UI.

                                                                The database people spend 10 minutes turning up Postgrest and leave the UI people to do their thing and otherwise ignore them.

                                                                1. 1

                                                                    Hah, I don’t get views either. My philosophy is that the database is there to store the data. It is the last thing that scales. Don’t put logic and abstraction layers in the database. There is plenty of compute available outside of it, and APIs can do the precise data abstraction needed for the apps. Materialized views, maybe, but it still feels wrong. SQL is a pain to write tests for.

                                                                  1. 11

                                                                    Your perspective is certainly a reasonable one, but not one I or many people necessarily agree with.

                                                                      The more data you have to mess with, the closer you want the messing to be to the data, i.e. in the same process if possible :) Hence PL/pgSQL and all the other languages that can get embedded into SQL databases.

                                                                    We use views mostly for 2 reasons:

                                                                    • Reporting
                                                                    • Access control.
                                                                    1. 2

                                                                        Have you checked out row-level security? I think it creates a good default, and then you can use security definer views when you need to override that default.
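
                                                                        Roughly (hypothetical table and column names):

                                                                            -- Hypothetical table: orders(owner text, total numeric).
                                                                            -- Default: a role can only see its own rows.
                                                                            ALTER TABLE orders ENABLE ROW LEVEL SECURITY;
                                                                            CREATE POLICY orders_owner ON orders
                                                                                USING (owner = current_user);

                                                                            -- Views run with the view owner's privileges by default, so a view owned
                                                                            -- by a privileged role acts as the "security definer" escape hatch,
                                                                            -- e.g. for an aggregate that spans everyone's rows.
                                                                            CREATE VIEW order_totals AS
                                                                                SELECT owner, sum(total) AS total FROM orders GROUP BY owner;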

                                                                      1. 5

                                                                          Yes, that’s exactly how we use access control views! I’m a huge fan of RLS, so much so that all of our users get their own role in PG, and our app(s) auth directly to PG. We happily encourage direct SQL access for our users, since all of our apps use RLS for their security.

                                                                          Our biggest complaint with RLS: none(?) of the reporting front ends out there have any concept of RLS, or really of DB security in general; at best they offer some minimal app-level security that’s usually pretty annoying. I’ve never been upset enough to write one… yet, but I hope someone someday does.

                                                                        1. 2

                                                                          That’s exactly how we use access control views! I’m a huge fan of RLS, so much so that all of our users get their own role in PG

                                                                            When each user has its own role, that usually means ‘role explosion’ [1]. But perhaps you have other methods/systems that let you avoid that.

                                                                            How do you do, for example: user ‘X’, when operating at location “Poland”, is not allowed to access report data ‘ABC’ before 8am or after 4pm UTC-2 - in Postgres?

                                                                          [1] https://blog.plainid.com/role-explosion-unintended-consequence-rbac

                                                                          1. 3

                                                                              Well, in PG a role IS a user; there is no difference. But I agree that RBAC is not ideal when your user count gets high, as management can be complicated. Luckily our database includes all the HR data, so we know that this person is employed in this job on these dates, etc. We utilize that information in our mostly automated user controls and accounts. When one is a supervisor, they have the permission(s) given to them, and they can hand them out like candy to their employees, all within our UI.

                                                                              We try to model the UI around “capabilities”, although it’s implemented through RBAC, obviously, and is not a capability-based system.

                                                                            So each supervisor is responsible for their employees permissions, and we largely try to stay out of it. They can’t define the “capabilities”, that’s on us.

                                                                              How do you do, for example: user ‘X’, when operating at location “Poland”, is not allowed to access report data ‘ABC’ before 8am or after 4pm UTC-2, in Postgres?

                                                                            Unfortunately PG’s RBAC doesn’t really allow us to do that easily, and we luckily haven’t yet had a need to do something that detailed. It is possible, albeit non-trivial. We try to limit our access rules to more basic stuff: supervisor(s) can see/update data within their sphere but not outside of it, etc.

                                                                              We do limit users based on their work location, but not their logged-in location. We do log all activity in an audit log, which is just another DB table, and it’s in the UI for everyone with the right permissions (so a supervisor can see all of their employees’ activity whenever they want).

                                                                            Certainly different authorization system(s) exist, and they all have their pros and cons, but we’ve so far been pretty happy with PG’s system. If you can write a query to generate the data needed to make a decision, then you can make the system authorize with it.

                                                                    2. 4

                                                                          My philosophy is “don’t write half-baked abstractions again and again”. PostgREST & friends (like PostGraphile) provide selection of specific columns, joins, sorting, filtering, pagination and more. I’m tired of writing that again and again for each endpoint, except each endpoint is slightly different, as it supports sorting on different fields, or different styles of filtering. PostgREST does all of that once and for all.
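
                                                                          For example (hypothetical films table), a single request like

                                                                              GET /films?select=title,year&year=gte.2000&order=year.desc&limit=20

                                                                          already covers column selection, filtering, ordering and paging.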

                                                                      Also, there are ways to test SQL, and databases supporting transaction isolation actually simplify running your tests. Just wrap your test in a BEGIN; ROLLBACK; block.
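
                                                                          That is, roughly (hypothetical table, purely illustrative):

                                                                              BEGIN;
                                                                              -- Hypothetical orders table; exercise the code under test...
                                                                              INSERT INTO orders (owner, total) VALUES ('alice', 10);
                                                                              SELECT count(*) FROM orders WHERE owner = 'alice';
                                                                              -- ...and then throw every change away.
                                                                              ROLLBACK;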

                                                                      1. 2

                                                                            Idk, I’ve been bitten by this. Probably ok in a small project, but this is a dangerously tight coupling of the entire system. The next time a new requirement comes in that requires changing the schema, RIP - you wouldn’t even know which services would break and how many things would go wrong. Write fully-baked, well-tested, requirements-contested, exceptionally vetted, and excellently thought-out abstractions.

                                                                        1. 6

                                                                          Or just use views to maintain backwards compatibility and generate typings from the introspection endpoint to typecheck clients.

                                                                  2. 1

                                                                    I’m a fan of tools that support incremental refactoring and decomposition of a program’s architecture w/o major API breakage. PostgREST feels to me like a useful tool in that toolbox, especially when coupled with procedural logic in the database. Plus there’s the added bonus of exposing the existing domain model “natively” as JSON over HTTP, which is one of the rare integration models better supported than even the native PG wire protocol.

                                                                        With embedded subresources and full SQL view support you can quickly get to something that’s as straightforward for a FE project to talk to as a bespoke REST or GraphQL backend. Keeping the schema definitions in one place (i.e., the database itself) means less mirroring of the same structures and serialization approaches in multiple tiers of my application.

                                                                    I’m building a project right now where PostgREST fills the same architectural slot that a Django or Laravel application might, but without having to build and maintain that service at all. Will I eventually need to split the API so I can add logic that doesn’t map to tuples and functions on them? Sure, maybe, if the app gets traction at all. Does it help me keep my tiers separate for now while I’m working solo on a project that might naturally decompose into a handful of backend services and an integration layer? Yep, also working out thus far.

                                                                    There are some things that strike me as awkward and/or likely to cause problems down the road, like pushing JWT handling down into the DB itself. I also think it’s a weird oversight to not expose LISTEN/NOTIFY over websockets or SSE, given that PostgREST already uses notification channels to handle its schema cache refresh trigger.

                                                                    Again, though, being able to wire a hybrid SPA/SSG framework like SvelteKit into a “native” database backend without having to deploy a custom API layer has been a nice option for rapid prototyping and even “real” CRUD applications. As a bonus, my backend code can just talk to Postgres directly, which means I can use my preferred stack there (Rust + SQLx + Warp) without doing yet another intermediate JSON (un)wrap step. Eventually – again, modulo actually needing the app to work for more than a few months – more and more will migrate into that service, but in the meantime I can keep using fetch in my frontend and move on.

                                                                2. 2

                                                                      I would add Shake:

                                                                      https://shakebuild.com

                                                                      Not exactly a tool, but a great DSL.

                                                                3. 21

                                                                    I think it’s true that, historically, Haskell hasn’t been used as much for open source work as you might expect given the quality of the language. I think there are a few factors that are in play here, but the dominant one is simply that the open source projects that take off tend to be ones that a lot of people are interested in and/or contribute to. Haskell has, historically, struggled with a steep on-ramp, and that means that the people who persevered and learned the language well enough to build things with it were self-selected to be the sorts of people who were highly motivated to work on Haskell and its ecosystem, but it was less appealing if your goals were to do something else and get that done quickly. It’s rare for Haskell to be the only language that someone knows, so even among Haskell developers I think it’s been common to pick a different language if the goal is to get a lot of community involvement in a project.

                                                                  All that said, I think things are shifting. The Haskell community is starting to think earnestly about broadening adoption and making the language more appealing to a wider variety of developers. There are a lot of problems where Haskell makes a lot of sense, and we just need to see the friction for picking it reduced in order for the adoption to pick up. In that sense, the fact that many other languages are starting to add some things that are heavily inspired by Haskell makes Haskell itself more appealing, because more of the language is going to look familiar and that’s going to make it more accessible to people.

                                                                  1. 15

                                                                    There are tons of tools written in Rust you can name

                                                                    I can’t think of anything off the dome except ripgrep. I’m sure I could do some research and find a few, but I’m sure that’s also the case for Haskell.

                                                                    1. 1

                                                                      You’ve probably heard of Firefox and maybe also Deno. When you look through the GitHub Rust repos by stars, there are a bunch of ls clones weirdly, lol.

                                                                    2. 9

                                                                      Agree … and finance and functional languages seem to have a connection empirically:

                                                                      • OCaml and Jane St (they strongly advocate it, mostly rejecting polyglot approaches, doing almost everything within OCaml)
                                                                      • the South American bank that bought the company behind Clojure

                                                                      I think it’s obviously the domain … there is simply a lot of “purely functional” logic in finance.

                                                                      Implementing languages and particularly compilers is another place where that’s true, which the blog post mentions. But I’d say that isn’t true for most domains.

                                                                      BTW git annex appears to be written in Haskell. However my experience with it is mixed. It feels like git itself is more reliable and it’s written in C/Perl/Shell. I think the dominating factor is just the number and skill of developers, not the language.

                                                                      1. 5

                                                                        OCaml also has a range of more or less (or once) popular non-fintech, non-compiler tools written in it. LiquidSoap, MLDonkey, Unison file synchronizer, 0install, the original PGP key server…

                                                                          1. 4

                                                                            The MirageOS project always seemed super cool. Unikernels are very interesting.

                                                                            1. 3

                                                                              Well, the tools for it, rather than the hypervisor itself. But yeah, I forgot about that one.

                                                                          2. 5

                                                                            I think the connection with finance is that making mistakes in automated finance is actually very costly on expectation, whereas making mistakes in a social network or something is typically not very expensive.

                                                                            1. 5

                                                                              Not being popular is not the same as being “ineffective”. Likewise, something can be “effective”, but not popular.

                                                                              Is JavaScript a super effective language? Is C?

                                                                              Without going too far down the language holy war rabbit hole, my overall feeling after so many years is that programming language popularity, in general, fits a “worse is better” characterization where the languages that I, personally, feel are the most bug-prone, poorly designed, etc, are the most popular. Nobody has to agree with me, but for the sake of transparency, I’m thinking of PHP, C, JavaScript, Python, and Java when I write that. Languages that are probably pretty good/powerful/good-at-preventing-bugs are things like Haskell, Rust, Clojure, Elixir.

                                                                              1. 4

                                                                                In the past, a lot of the reason I’ve seen people turned away from using Haskell-based tools has been the perceived pain of installing GHC, which admittedly is quite large, and it can sometimes be a pain to figure out which version you need. ghcup has improved that situation quite a lot by making the process of installing and managing old compilers significantly easier. There’s still an argument that GHC is massive, which it is, but storage is pretty cheap these days. For some reason I’ve never seen people make similar complaints about needing to install multiple versions of Python (though this is less of an issue these days).

                                                                                The other place where large Haskell codebases are locked up is Facebook - Sigma processes every single post, comment and message for spam, at 2,000,000 req/sec, and is all written in Haskell. Luckily the underlying tech, Haxl, is open source - though few people seem to have found a particularly good use for it; you really need to be working at quite a large scale to benefit from it.

                                                                                  1. 2

                                                                                    Cardano is a great example.

                                                                                    Or Standard Chartered, which is a very prominent British bank, and runs all their backend on Haskell. They even have their own strict dialect.

                                                                                      1. 1

                                                                                        https://pandoc.org/

                                                                                        I used pandoc for a long time before even realizing it was Haskell. Ended up learning just enough to make a change I needed.

                                                                                      2. 2

                                                                                        I’ve been writing a game programming framework in Rust for the past couple months, trying to learn it and see how it measures up for game programming. Now I’m trying to hammer out a minimal roguelike game on top of it for the Seven-day Roguelike Challenge this week.

                                                                                        So far I’ve been having fun with Rust. I don’t particularly need its style of avoiding garbage collection whenever possible and the slight extra complexity that follows from that, but it is quite interesting how it is positioning itself as both a serious contender to C++ and something where the type system is robust enough to eliminate almost all runtime crashes.