Threads for freddyb

  1. 13

    I want to like helix, but my brain keeps thinking this is vim and there are too many vim shortcuts I apparently use all the time that don’t work in helix - or are just different. Is anyone else experiencing this?

    1. 8

      I’d say it’s painful for a day or two then you start adapting. I’m biased though of course :)

      1. 4

        I’ve been a Vim user for 15+ years. It took me a couple of days to get used to Helix shortcuts, but it was so worth it…

        I threw away years of Vim hacks accumulated in my ~/.vim dir and replaced them with a half-page config. I’m not going back!

        1. 1

          Can you share your config?

          1. 3

            Don’t expect much :)

            theme = "monokai_pro"
            
            [editor.lsp]
            display-messages = true
            
            [keys.normal]
            C-s = ":write-all"
            A-a = "save_selection"
            space = {t = [":write-all", ":sh tmux send-keys -t 0 last_test c-m"]}
            
            [keys.insert]
            C-s = [":write-all", "normal_mode"]
            

            Basically two key bindings, the most important one being my test runner for the tmux pane next to hx. Helix does the rest. Works like a charm for Go and TypeScript (every day at $JOB) and the occasional Rust project.

      2. 6

        Same. Muscle memory can be really strong. How cool would it be if there was a built-in tutorial where a text file is instructing you how to interact with it (maybe changing a simple block of text? Maybe a program in the coming levels), showcasing typical strengths of the highlighting-by-default flow, some refactoring, some search and replace etc.

        1. 14

          How cool would it be if there was a built-in tutorial where a text file is instructing you how to interact with it

          We do have this! It’s available via hx --tutor

          1. 4

            Wow that’s just lovely. Thanks!

        2. 2

          I had that problem too - until I remapped a few keys to avoid the pain points. See djacuen’s link in this comment thread for some example rebindings.

          1. 1

            I have been trying helix more and more. The subtle changes from vim keybindings are truly painful, but the undo is responsive. I need to come up with a better shortcut list, because typing the vim keybinding is one thing; not knowing how to do it at all in hx is another.

          1. 2

             These kinds of bugs are terrifying. You’re one click away in a browser from getting your home directory curled to a malicious third party.

             It’d be great to have browsers use syscalls like pledge and limit filesystem access to basically zero so an RCE is not as disastrous. Unless something similar is already being done.
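
             (For a flavour of what pledge gives you, here is a minimal OpenBSD-only sketch; it calls the libc function through Python's ctypes purely to keep the example short.)

             import ctypes

             # pledge(2) sketch: after pledge("stdio", NULL) the process keeps basic
             # stdio but loses the right to open files; a violation makes the kernel
             # kill the process rather than return an error.
             libc = ctypes.CDLL(None, use_errno=True)
             if libc.pledge(b"stdio", None) != 0:
                 raise OSError(ctypes.get_errno(), "pledge failed")

             print("plain stdio still works")
             open("/etc/passwd")  # pledge violation: the kernel aborts the process here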

            1. 1

              Wouldn’t running the browser in something like bubblewrap prevent that?

              1. 1

                 It depends! Most browsers already use sandbox primitives provided by the operating system. Layering this with another sandbox might provide a second layer of defense, but it’s also a known source of crashes. I know that Firefox is unstable and more crashy when using, e.g., Sandboxie under Windows.

              2. 1

                I don’t think that’s true. This is a memory-safety bug in the JavaScript component, which is usually within the sandbox. For home directory access, an attacker would have to find another bug, a sandbox bypass. These are generally harder to find and exploit.

              1. 3

                 I work on a trust store project at my job and it is complicated, annoying, easy to break, and offers no room for big successes. X509 is just broken beyond repair.

                1. 4

                  fractally broken, even. WebPKI is broken because PKCS#11 is broken, because X509 is broken, because DER is broken because ASN1 is broken.

                  1. 5

                    DER and ASN.1 aren’t actually too bad, as formats; in modern terms, ASN.1 is basically protobuf plus a few extraneous types. (Both ASN.1 and protobuf are tag-length-value encodings with an external schema, with a variable-length encoding for the length and other integers.)
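
                    To make the tag-length-value parallel concrete, here is a hand-rolled toy example (illustration only; real code would use an ASN.1 or protobuf library):

                    # The string "hi" as a DER UTF8String and as protobuf field #1
                    # (wire type 2, length-delimited): tag/key, then length, then value.
                    der = bytes([0x0C, 0x02]) + b"hi"   # 0x0C = UTF8String tag, 0x02 = length
                    pb  = bytes([0x0A, 0x02]) + b"hi"   # 0x0A = field 1, wire type 2; 0x02 = length

                    print(der.hex())  # 0c026869
                    print(pb.hex())   # 0a026869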

                    The issue with ASN.1 is that most ASN.1-consuming software was written and effectively abandoned before a lot of modern practices for secure software development took hold.

                    The issue with X.509 is not really ASN.1, but that X.509’s semantics are very complicated, inconsistently implemented, and not always the best fit for the actual problem being solved. Encoding X.509 in protobuf or XML wouldn’t really solve the problem.

                    1. 2

                      Both ASN.1 and protobuf are tag-length-value encodings with an external schema

                      … with the major difference that SEQUENCEs are not delimited with a length. Both formats support extensions, but ASN.1 requires you to explicitly make space for the extensions in the SEQUENCE, while protobuf wants you to declare space in the tag ranges but will accept unknown fields anywhere.

                      The complex semantics of X.509 are exacerbated by the schema-mandatory parsing of ASN.1.

                1. 4

                  I think the tldr-solution is that a good root store does not work with just a bundle of “good” certs. There are more rules that unfortunately need to be in code.

                  1. 1

                    What would you propose? x509 does have a concept of what a certificate is capable of being used for though I’m not sure how often those capabilities are leveraged in practice.
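
                    (Concretely, those capabilities live in the keyUsage and extendedKeyUsage extensions. A rough sketch of reading them with the third-party Python cryptography package, given the path to a PEM cert on the command line:)

                    import sys
                    from cryptography import x509
                    from cryptography.x509.oid import ExtendedKeyUsageOID

                    cert = x509.load_pem_x509_certificate(open(sys.argv[1], "rb").read())

                    # Both lookups raise ExtensionNotFound if the cert omits the extension.
                    key_usage = cert.extensions.get_extension_for_class(x509.KeyUsage).value
                    eku = cert.extensions.get_extension_for_class(x509.ExtendedKeyUsage).value

                    print("digital signature allowed:", key_usage.digital_signature)
                    print("valid for TLS server auth:", ExtendedKeyUsageOID.SERVER_AUTH in eku)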

                    1. 1

                      Those are very coarse though

                  1. 2

                    Paging @jart for ape & fs considerations.

                    1. 1

                      I didn’t think anyone deployed pgadmin for anything other than localhost. But this article made me go check and make sure I hadn’t accidentally done so.

                      Fortunately, it looks like pgadmin would be painful enough to host in a public way that this seems relatively unlikely to be a common trap.

                      1. 1

                        Who said it? Anything that can be connected to the internet, will eventually be connected to the internet.

                        1. 1

                           People publish systems to the internet that control power plants and other types of industrial facilities. It wouldn’t surprise me if someone exposed pgAdmin to the internet.

                          1. 1

                            I feel like most of the time when they do what you’re saying, it’s either a side effect of doing something else they consider useful, or a deliberate choice to do something risky because they feel it’s worth it overall.

                            Whereas last time I wanted to intentionally expose pgadmin over my VPN instead of just running it locally, I had to jump through hoops to even get it to listen on a non-loopback interface.

                            So while it wouldn’t surprise me that someone would do so, I’m not nearly as worried that it’ll be this mass, wormable problem that impacts nearly all installs of pgadmin, as I was when I read the headline initially…

                        1. 3

                          I didn’t grok the Janet bits (and also mostly didn’t try to), but the effects are stunning. Well done! :)

                          1. 1

                            Thank you :)

                          1. 6

                            I bought an old Mac Mini for very cheap and put a big disk in it. I let Photos sync everything there and I back up my Photos library (and the rest of my iCloud stuff, which is getting close to 2TB now) to Backblaze.

                            I also let Google Photos take copies from the phone and occasionally check in on it to see if it’s working okay.

                            1. 1

                              via cable?

                              1. 1

                                No, all of this is without physically connecting anything.

                                1. 1

                                  How can the photos app sync with the mac mini then?

                                  1. 1

                                    iCloud syncs it

                                    1. 1

                                      Do you have to pay for an iCloud plan then? And if so, what happens when you run out of capacity in iCloud? Or is this syncing feature independent of the iCloud storage plans?

                                      1. 1

                                        Yes I pay for iCloud, for a family pack that gives us 2TB. By far I’m the biggest user and the capacity is almost all photos and videos from my various cameras. I’m happy paying for this because it means I get to provide others with a copy of their photos that won’t easily get lost.

                                        I’m not quite near the 2TB limit yet but eventually I’ll have to come up with a way to choose what I don’t want to be in my primary Photos library and that I will just leave in one place to keep it backed up.

                                        I probably have quite a lot of video I could move out of my primary Photos library and just keep around - but I like that I keep accidentally coming across old photos and videos, so I hope they make a bigger plan soon.

                                        I used to use Dropbox Photos but they killed it. That was pretty good. Apple Photos is great for always having all my photos available on my phone, laptops, etc. - all indexed and catalogued automatically in some useful ways. Just like Google Photos. I don’t trust Apple or Google alone, but I like the iOS and MacOS Photos stuff enough to use it as my primary way to handle photos.

                                        Syncing is because of iCloud. I don’t have a 2TB phone to keep all my photos on and sync to elsewhere! Maybe one day.

                                        You can of course try the ways around using the system Apple provide but it’s always going to be clunky. The closest to the same experience is Google Photos, as far as I know. I think you can still sync Google Photos to Linux so that might be a way to do similar to iCloud + Apple Photos though not necessarily cheaper.

                            1. 1

                              https://bsides.berlin/ - I’m giving a talk.

                              1. 2

                                 It’s an imperfect solution but I use Nextcloud’s Auto Upload functionality.

                                 It does occasionally need some babysitting, as the app does not seem to always background correctly. Fortunately it backs up well. If you want to upload it to S3 you can mount it as a local filesystem using s3fs.

                                1. 1

                                   Same. Nextcloud sync + babysitting. Works better than the “get a MacBook” solution, which isn’t wireless, and it’s cheaper than iCloud.

                                  1. 1

                                    The added advantage of this is that it’s cross platform. You can sync iOS and Android photos to the same place and share them between family members and so on with the same tool.

                                    Nextcloud uses APIs that are available to any cloud storage provider. If you have a family Microsoft 365 subscription, you get 6 users with 1 TB of OneDrive space and the OneDrive app can do the same kind of photo syncing. I believe the Google Drive app does the same thing (not sure about their pricing and capacities).

                                  1. 3

                                     Surprising that their 1.0 proposed feature list has Mastodon API support. Why not use the ActivityPub protocol?

                                    1. 1

                                      Because ActivityPub is probably implicit. However, it’s not really a general client protocol AFAIK, and there are plenty of clients that support Mastodon’s protocol.

                                      1. 3

                                        it’s not really a general client protocol AFAIK

                                         It is though. ActivityPub has two sections, the client-to-server (C2S) one and the server-to-server (S2S) one. Historically, services have mostly implemented the latter but avoided the former for various reasons: either claiming that it’s a little underspecified (which it is) or that they had their own client APIs by the time ActivityPub was usable (as is the case for Mastodon, for example).

                                        I am working on a suite of libraries (for Go) and a generic ActivityPub server that implements both.

                                        1. 2

                                          Since Mastodon uses ActivityPub for server-to-server communication only, nearly all the clients created use the Mastodon API for client-to-server communications. A very small minority support both AP C2S and Mastodon’s API but it’s nearly a lost cause at this point; Mastodon’s API is the de-facto standard. If you want good client support, it’s the only way.
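
                                           To see why client authors go that way, here is a rough sketch of posting a status through the Mastodon API with Python requests; the instance URL and OAuth token are placeholders:

                                           import requests

                                           INSTANCE = "https://mastodon.example"  # placeholder instance
                                           TOKEN = "..."                          # placeholder OAuth access token

                                           # One authenticated HTTP call to POST /api/v1/statuses publishes a post.
                                           resp = requests.post(
                                               f"{INSTANCE}/api/v1/statuses",
                                               headers={"Authorization": f"Bearer {TOKEN}"},
                                               data={"status": "Hello from a Mastodon API client"},
                                           )
                                           resp.raise_for_status()
                                           print(resp.json()["url"])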

                                          1. 1

                                            it’s the only way.

                                            I disagree. It just requires more work.

                                            1. 2

                                              “Become an expert in iOS, Android, Electron, native Windows apps, etc so you can add C2S support to the existing apps” isn’t really feasible for most people. Technically it is “just more work” but it’s unrealistic.

                                    1. 3

                                       This is super interesting, both as a look into ActivityPub implementations that are less resource-intensive and as a look at the single-user setup. I’d be keen to see versions of this guide for some of the hosters that let you directly run a Docker image in the cloud :)

                                      1. 2

                                        This is why any service that cares even slightly about security should:

                                        • Use DNSSEC
                                         • Set CAA records.

                                         I’d love to see web browsers start warning if these conditions are not both met. Setting CAA records means that someone who compromises e-Turga or any of the other probably insecure CAs can sign a cert for your domain, but clients know not to trust it (assuming that they do CAA checks). Without DNSSEC, an attacker needs to be able to also compromise DNS responses, which is easier than compromising a CA, but requires a more targeted attack on the client. With both, this kind of problem affects only customers of e-Turga.
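
                                         (For reference, the CAA policy is just data published in DNS; a quick way to inspect a domain’s records, sketched with the third-party dnspython package:)

                                         import dns.resolver

                                         # Raises dns.resolver.NoAnswer if the domain publishes no CAA records.
                                         for rdata in dns.resolver.resolve("example.com", "CAA"):
                                             print(rdata.flags, rdata.tag.decode(), rdata.value.decode())
                                         # e.g. `0 issue letsencrypt.org` would mean only Let's Encrypt may issue.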

                                        Of course, once you have both of these, it’s not really clear why you need a CA at all. If you have DNSSEC then you can just publish a TLSA record containing your public key and avoid the need for a cert chain at all. You only need your TLS certificate to be signed by a third party if you want some claim stronger than ‘the owner of this domain name is also the owner of the computer that you are talking to’ and, now that EV certs are not given special treatment by browsers (because CAs were terrible at validating the claims that they signed), there isn’t really any expectation of this in most use (and the places that do this typically use private signing certs and do their own validation).

                                         With CAA and a CA-signed key, anyone who compromises your DNS server can change the CAA record and replace your cert with one signed by another CA (which can be Let’s Encrypt, because ACME2 allows anyone who can update DNS records for your domain to issue certs). It also allows anyone who compromises your CA to issue certs for your domain. They might be caught if you’re actively monitoring certificate transparency logs. With TLSA, anyone who compromises your DNS can replace your public key and create a fake certificate for you, and it will show up only in CT logs recorded by clients (CAs are supposed to publish to CT logs, but if you’re a malicious actor who has compromised a CA, then you probably won’t), so CA + CAA records introduce more points of failure than TLSA but leave you vulnerable to the same attacks.

                                        Unfortunately, I don’t think any of the major browsers support trusting unsigned certs via TLSA.

                                        1. 2

                                           CAA records cannot be checked in the browser. They are a method to indicate whether a CA is allowed to issue certs for your domain - at the point in time when the certificate is issued. A browser cannot retroactively check what CAA record was set at the time a cert was issued.

                                           If a browser checked that a CAA record matches the issuer of a cert, then a situation like this could occur: a site owner decides to switch CAs for the next cert renewal. The old cert is from CA X, the new one from CA Y. The site owner sets the domain’s CAA record to allow CA Y. However, they still use the cert from CA X for a transition period, as long as it’s still valid. That’s legit and not forbidden by anything in CAA, but your browser would reject it.

                                           There’s another, more subtle reason why browsers can’t check CAA records: what do you do if you can’t fetch the CAA record? (e.g. you get a SERVFAIL or REFUSED, or the DNS server does not answer.) Do you assume an attack? Or do you assume it’s an old resolver that has no knowledge of the CAA record type and does not forward queries with unknown DNS types? As a CA controlling your own DNS resolver, you can make sure that it understands CAA records, but as a browser you cannot do that, as you have to assume all kinds of DNS resolvers in cheap routers etc. that you need to stay compatible with.

                                          1. 1

                                             I thought CAA records are only checked by the CAs themselves, upon certificate creation?

                                            1. 1

                                              They definitely can be checked in the client. I believe Chrome does. They provide no value over ACME2 if not.

                                              1. 4

                                                CAA records are explicitly not allowed to be checked in the client (since they don’t affect existing certificates retroactively). They’re a layer of swiss cheese for buggy CAs, that’s all.

                                                The way to delegate trust strongly via DNS would indeed be DANE, but it seems unlikely Chrome will support it; they have offered various reasons, but trying to read between the lines, I think they place a lot of value on their ability to strongarm CAs into meeting new requirements they come up with. I don’t like this at all, but given there are still TLDs that don’t support DNSSEC, I think relying on Google to bully CAs is probably slightly safer than relying on ICANN to bully NICs.

                                                1. 1

                                                  I don’t like this at all, but given there are still TLDs that don’t support DNSSEC, I think relying on Google to bully CAs is probably slightly safer than relying on ICANN to bully NICs.

                                                  Presumably Google could also bully NICs into accepting whatever security standards they wanted (like DNSSEC, stronger cryptography for DNSSEC, …), lest the domain names under their administration stop working (or show some fat ugly warnings) in Chrome? Which would lead to domain owners complaining to their registrars and NICs.

                                          1. 3

                                            The “can’t place cursor with mouse” bit isn’t exactly true. Some allow it via Alt/Cmd-click

                                            1. 1

                                              I’m surprised their outcome is to introduce a pooling proxy.

                                              Especially with the early reference to synchronous rendering, I was totally expecting them to solve this by using asynchronous, streaming primitives. Indeed their own link to ReactDOMServer.renderToString shows a page that explicitly warns about using this on the server.

                                              I’ve seen their argument, that sticking with synchronous rendering makes it simpler to reason about garbage collection. But I don’t really buy it. Do you?

                                              1. 12

                                                Once upon a time, we invented regular expressions as a comfortable midpoint between expressive power and efficiency on 1970s computers. We invented tools to search for regexes (grep), tools to automatically edit streams based on regexes (sed), programming languages designed to run code when a regex was matched in the input stream (awk) and even interactive editors with regexes as first-class UI primitives (vi).

                                                Here in the 21st century, text processing has advanced a long way. In particular, parsing context-free grammars no longer requires days of swearing and wrestling with shift/reduce errors; you can write grammars nearly as succinctly as regexes, and more readably. Using packrat parsing or Earley parsing or some other modern algorithm, I’d love to have a suite of text-processing tools like grep, sed, awk and vi based on grammars rather than regexes.
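
                                                 For a taste of what “nearly as succinct as regexes” can look like in practice, here is a sketch using the third-party Lark library for Python (an Earley parser by default), with the usual arithmetic-expression grammar:

                                                 from lark import Lark

                                                 grammar = r"""
                                                     ?expr: expr "+" term   -> add
                                                          | expr "-" term   -> sub
                                                          | term
                                                     ?term: term "*" atom   -> mul
                                                          | atom
                                                     ?atom: NUMBER          -> number
                                                          | "(" expr ")"
                                                     %import common.NUMBER
                                                     %import common.WS
                                                     %ignore WS
                                                 """

                                                 parser = Lark(grammar, start="expr")  # Earley parsing, handles left recursion fine
                                                 print(parser.parse("1 + 2 * (3 - 4)").pretty())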

                                                1. 4

                                                  you can write grammars nearly as succinctly as regexes, and more readably

                                                  Do you have any links or examples? I struggle at this!

                                                  1. 3

                                                    In most cases, I end up doing text processing using (neo)vim macros rather than regexp. It can run surprisingly fast even on large datasets (in headless mode or with lazyredraw).

                                                    Feels like a very modern/ergonomic/incremental/less abstract approach compared to regular expressions.

                                                    I do like the premise of clearly defined grammars, however! Could compound nicely with macros, too.

                                                    Made me think of this recent Structural Search & Replace plugin that uses treesitter (basically grammars under the hood).

                                                    Now that I think of it, treesitter is essentially a database of grammars that can be used for data annotating and processing. 🙃 I guess the next step is to have a more on-the-fly way to describe and use these things.

                                                    1. 2

                                                      semgrep?

                                                      1. 2

                                                        I kind of got that feeling back when I played with turtle:

                                                        You can see some examples in the comment here: https://github.com/Gabriella439/turtle/blob/main/src/Turtle/Pattern.hs

                                                        hackage seems down right now, but there is a tutorial there: https://hackage.haskell.org/package/turtle/docs/Turtle-Tutorial.html

                                                        1. 1

                                                          That’s interesting. I think you’d be looking at matching/querying an AST-like structure?

                                                          For matching elements of specific kinds of tree-like structure, we have jq and xpath.

                                                          Is that the kind of thing you mean (but perhaps for general grammars?)

                                                          If not, how do these differ from your thoughts/vision?

                                                        1. 7

                                                           Most of the rant is about the ABI of dynamic libraries, how the ABI is OS/arch dependent, and how the ABI is difficult to get right. Why OP is confusing ABI with the actual standard C is unclear. If one is going to go around any language X and interface directly with the binary generated by any language X compiler, one needs to understand the actual binary.

                                                          1. 34

                                                             I don’t think OP “is confusing ABI with the actual standard C”. I think OP understands quite well the difference between C the standardized language and C the ad-hoc ABIs of the popular toolchains. I think OP is ranting because the latter is effectively the only interoperability mechanism available for non-C languages to speak to each other, and thus brings a lot of C-oriented baggage into a situation where it’s neither needed nor wanted, coupling languages which aren’t C to particular concepts from C.

                                                            1. 8

                                                              Windows has a well defined language interop layer: the widely derided COM ABI :D

                                                              1. 8

                                                                It goes further than that. Win32 evolved from win16, which was created at a time when it was unclear whether C or Pascal would win as an application programming language and so all of the types for the APIs are defined as fixed-width things that can be mapped to an IDL or, at least, be defined in multiple languages. These types differentiate things like buffers and null-terminated strings, for example. More recently, SAL annotations add length and ownership information for pointers that allow them to be extracted.

                                                                Apple also has a thing called BridgeKit that generates property lists for all of its system libraries that include more metadata than a standard C function.

                                                                In FreeBSD, the syscall ABI is actually defined in a C-like IDL with SAL annotations and then the C wrappers and userspace assembly stubs are generated from this. I’d love to see more libraries follow a similar approach and generate C headers from a more language-agnostic IDL.

                                                                1. 4

                                                                  One problem I found while working with a somewhat similar IDL which was derived from the implementation in C (the XML files used to define the X11 protocol) was that it still carried a lot of C baggage, and a lot of information that would be useful for generating bindings for higher-level languages was informally specified and could usually be derived with some heuristics, but required special casing in some situations.

                                                                  For example variable-sized strings and buffers usually had a corresponding length field which is usually marked as an attribute of the string field, sometimes using an arithmetic expression which has to be manually inverted if you don’t want to manually specify the length of the string in the API, and there’s this concept of “switches” which sometimes are informally-specified discriminated unions whose discriminant might be derived from multiple other fields in the containing struct, or even a parent struct, and other times define the presence of optional fields through a bitmask.

                                                                     Basically, the homegrown language-agnostic IDL is still very much tainted by C and there’s a significant amount of work that needs to be done on top of the IDL to make it palatable to higher-level languages.

                                                                  I think it’d be extremely hard to define an IDL that allows enough expressiveness to work around the edge cases of some of the more spiky APIs while also providing enough information to allow generating somewhat idiomatic code in different languages that use different paradigms.

                                                                  Even if idiomaticity of the generated bindings is not the priority you still need enough expressiveness to encode all of C’s type system and all the quirks that the OP describes in their post in a format that’s still general enough to be used to generate headers for a lot of C libraries. Every C project defines its own soup of aliases for basic types, custom attributes, some even define their own type system on top of C’s like GObject, and most of them use compiler-specific directives and define piles of macros. And what about inline functions? Header-only libraries? Even if such a perfect IDL did exist, I doubt you’d be able to convince many people to use it for existing libraries and APIs.

                                                                  Sorry if I’m being too negative. Perfect is the enemy of good and maybe there is an 80% solution to be reached, I just don’t think it’d be easy.

                                                                  1. 5

                                                                    I agree it’s very hard - if it were easy, someone would have done it already. I think it’s helped with things like the COM IDL that they were designed from the start to support non-C languages. I also think that aiming to support all of C is the wrong approach: you should aim to support enough that C libraries can define efficient public interfaces in terms of it (and so can other languages). As the article says, nothing short of a full C compiler can give full C interop (for both C and C++, the code I’ve written for Verona’s interop layer uses all of clang to generate LLVM IR functions with a simple calling convention that call functions / methods and set / get struct fields, which can then be inlined into the Verona code, picking up all of the excitement of the C/C++ type system). That works for interop with C libraries, but what I (and the author of the blog) want is interop with non-C libraries without going via C as the lowest common denominator interface.

                                                                  2. 2

                                                                    In other words it sounds like a lot of the complaining here is about how linux describes its ABI, rather than every other platform :D

                                                                  3. 3

                                                                    Great idea. Maybe someone should do a cross-platform variation. They could call it XPCOM (more here)

                                                                    1. 4

                                                                      XPCOM always struck me as an odd name. An XPCOM component isn’t cross-platform (it’s compiled for a single platform) and is no more cross-platform than COM (which has been implemented for multiple platforms and in multiple languages).

                                                                      1. 2

                                                                        Not to be confused with XCOM :D

                                                                    2. 2

                                                                       There is no “C the ad-hoc ABI”. An ABI is an ABI, which is, by construction, unrelated to any programming language and depends on OS and arch. Yes, C toolchains are the most ubiquitous. But there are still Fortran, Pascal, and a plethora of Windows conventions. It’s a mess. But so is making a syscall on different OSes.

                                                                      1. 13

                                                                         There is no “C the ad-hoc ABI”. An ABI is an ABI, which is, by construction, unrelated to any programming language,

                                                                        The argument being made, in its purest form, appears to me to be:

                                                                        When binaries of Language 1 and Language 2, neither of which are C, need to communicate with each other, one of if not just the simplest and most reliable way to accomplish this is to hook both languages into an existing C compiler’s toolchain to take advantage of that compiler toolchain’s ABI, or otherwise to emulate the ABI of an existing popular C toolchain.

                                                                        This is the case because there is no portable cross-language interface, other than “every language sooner or later has a use case for C FFI support, which also gets you FFI to every other language that has C FFI”.

                                                                        And I believe the author wants to register their distaste for this and for the consequences it wreaks on languages, on toolchains, and on the resulting executables.

                                                                        Nitpicking about the definition of “C” or whether there is or is not a formally-specified ABI is fundamentally not relevant to this argument.

                                                                    3. 2

                                                                      On top of that OP seems to be looking at “non-standard” C, specifically __int128 and how intmax_t isn’t 128 bit. I guess the OP got hurt by this somehow in a real program.

                                                                      1. 4

                                                                        I wonder what the ratio is of C programs that really only use standard conforming C without UB to the ones that don’t. Furthermore, how many of those programs are unwittingly relying on compiler-specific behavior that would never be revealed until another compiler without that behavior is used? Would make for interesting data.

                                                                        1. 2

                                                                           If I understand correctly, declaring two distinct externally visible identifiers that share the same first 6 characters is undefined behavior. So probably not many.

                                                                          1. 2

                                                                            C89 had the six character limit for external names. C99 raised that limit to 31.

                                                                            1. 2

                                                                              Wow, what a generous limit! </s>

                                                                              I wonder why they decided to stick with a ridiculous limit like that. Are there any non-toy C compilers that are that constrained in their identifier lengths in practice, anyway?

                                                                              1. 1

                                                                                 A lot of the decisions in C89 were an attempt not to exclude any C compiler at the time (the late 80s) from implementing the standard. Given there were sign-magnitude and 1s-complement machines that could support standard C, it wouldn’t surprise me if there were some older computers in the late 80s that only supported 6 significant characters in external identifiers.

                                                                                1. 1

                                                                                  I get that, but I was wondering more about C99; anything that was modern at the time and willing to implement C99 over the years that followed wouldn’t be so constrained, I would guess.

                                                                                  1. 1

                                                                                    Well, the 31 character limit (and even the earlier 6 character limit) was the minimum a compiler had to support. Compilers could (and often did) exceed said limits, but if you wanted maximum portability, you had to be aware of it.

                                                                                    In my 30 years of C programming, I had only one job where management took the limit seriously (in the mid-90s, so it was the 6 character limit of C89), and even then, it was a silly thing to do due to the platforms we ran on (MS-DOS, Windows and several Unix variants excluding Linux [1]).

                                                                                    [1] Funny, because the Unix development was primarily done on Linux, but management knew they couldn’t charge the 4 figures for the software for Linux, but they could for the other Unix variants.

                                                                        2. 4

                                                                          Correct - a bug was reported to rustc, which treats u128 as a fundamental type that needs to work on all platforms.

                                                                      1. 3

                                                                         I think these questions are best suited for the GitHub issues. As an aside, I added a suggestion to remove the programming tag and add the meta tag.

                                                                        1. 5

                                                                          It has indeed been reported.

                                                                          1. 1

                                                                            It would be great to have a link to that on the front page somewhere. The pending messages notification has been broken for months and this is the first time I’ve seen instructions for how to write a bug report.

                                                                            1. 1

                                                                              Please do file an issue for this, it’s the first I’m hearing of a bug around pending messages.

                                                                          1. 2

                                                                            I’m super interested in this for, erm, reasons :-D. Has anyone here tried it? How well does it deal with large codebases?

                                                                            Background story: ’bout four years ago I tried something that looks similar at $work, but not via VS Code, motivated by the fact that a) it took hours to build our thing on our devel laptops so I really needed a beefy build server but b) developing by ssh -X or VNC-ing into the datacentre was equally infuriating, and I felt strongly about writing code in text-mode emacs in a tmux session in the 21st bloody century. I was still left doing the latter, because indexing several hundred thousand LoC over a puny ssh-backed remote filesystem, and especially accessing said index, was so slow as to be useless.

                                                                            From what I can tell in the article, some extensions can run remotely – presumably, that would include e.g. source code indexing? Can I, say, develop a large Rust program remotely without that involving my local VS Code instance trying to index and lookup content in remotely-hosted files?

                                                                            1. 4

                                                                              I find the implementation of this pretty terrifying. It basically runs a headless VS Code remotely, which it does by grabbing binaries of node.js and so on from an MS server and installing them on the remote machine. This means that the dev machine must be able to make arbitrary outbound connections and execute code downloaded from remote sites. It also means that it must be a supported OS and you have no good way of auditing the code that is run.

                                                                              This does give great compatibility because all of your extensions then run on the remote machine. If you have an LSP server then it will index there. The down side is that the remote machine must be powerful enough to index the codebase, so if you’re deploying to a small embedded device and connecting to it from a powerful dev workstation then this will be sub-optimal.

                                                                              The main use case is for dev containers, where you have a local VS Code install and connect to some container that has the development environment for a particular project set up, including code-indexing tools preconfigured.

                                                                              1. 1

                                                                                I find the implementation of this pretty terrifying.

                                                                                It does sound like somewhat of an afterthought, or maybe rather a bit of a compromise. Some things – like indexing files – have to happen on the computer that also hosts the source code, otherwise it’s either terrifyingly slow, or terrifyingly expensive, depending on how fast the connection from the local to the remote machine is. It seems to me that it would’ve made a lot of sense for this to happen by having an instance of a local VS Code extension connect to a remote instance of the LSP server, rather than running the whole extension remotely. But I’m not sure how well this works for other extensions, which presumably have to handle lots of other use cases, not just offering code completion on a machine that doesn’t host the indexed files.

                                                                                If I understand it right, I’m not very bothered by this implementation, even though it’s a giant hack. If I were to do all my development locally, on my laptop, then my laptop would have to be able make arbitrary outbound connections and execute code downloaded from remote sites, and while I could presumably audit all the code running on it, that will likely never happen. If I were to smash its display and access it via SSH from another laptop, I’d get the exact same thing: a local machine that I use to write things, and another machine on the same network that will have to make arbitrary outbound connections and execute arbitrary code.

                                                                                I understand how that might be problematic for organisations that already have beefy build servers, but have structured their networks based on completely different assumptions about what build servers can access and run. Fortunately that’s not a problem I currently have, my beefy build server hasn’t even been powered on yet, let alone have a network cable plugged in :-).

                                                                              2. 3

                                                                                Can I, say, develop a large Rust program remotely without that involving my local VS Code instance trying to index and lookup content in remotely-hosted files?

                                                                                 Yes, rust-analyzer just works; there are no differences from local dev (except that stuff works faster, of course).

                                                                                1. 1

                                                                                  Yay, that’s awesome! Thank you!

                                                                                2. 2

                                                                                   At my previous job, the legacy code was a huge Perl monorepo (millions of LoC), and because the code had expectations about its environment, we couldn’t run it locally. So instead we worked on remote machines that were compatible with the code base. VSCode Remote SSH was a blessing; it worked amazingly well. It was like working locally most of the time.

                                                                                  Before that we used sshfs but it was a bit clunky.

                                                                                  1. 1

                                                                                     I’m super happy this exists – I no longer need it for that codebase (thank God, that was the worst steaming pile of shi most cumbersome project I ever worked on), but I still want it for my own projects now.

                                                                                    That was a very frustrating experience for me because I just knew this was how things were supposed to be done but there was no tooling for it. A while later I actually managed to get something similar working in Emacs via a hacked up xcscope – files & co. were still mounted over the network but my hacked up xcscope interfaced with a remote cscope binary which had local access to the network-mounted files. But it was just a teeny-tiny bit of what I needed and it was clunky as hell.

                                                                                  2. 1

                                                                                     My large codebase is Firefox and it works really, really well! I used to do all the editing in vscode and start the compilation from a terminal via ssh. I never used any of the debugging features though, but clang/clangd support is great.

                                                                                     My setup was like this: a ThinkPad X series for mobility and convenience is my main driver. A many-more-cores workstation under my desk does the compilation, because it’s much faster. If I compiled on the X-series ThinkPad, it would take over an hour. On the workstation, less than 5 minutes.

                                                                                     (My workstation CPU broke this summer, so I’ve switched laptops to a MacBook with an M1, which can compile in under 10 minutes, but I also changed my role at Mozilla a tiny bit and need to compile Firefox a lot less. I still miss being able to use rr every once in a while though.)

                                                                                    1. 1

                                                                                      Most extensions do in fact run remotely - VSCode already has a solid separation between frontend and backend code, and VSCode Remote just runs the backend code on the remote host. So little is done in the frontend code, in fact, that they were able to build GitHub Codespaces such that the frontend code is all run in the browser, while keeping nearly all extensions working exactly the same in Codespaces as they do locally.

                                                                                      1. 1

                                                                                        Yes, I use it all the time. All the extensions and tools run on the remote system and it works well.