1. 6

    Besides Firefox and Servo, SpiderMonkey is also used by GNOME and MongoDB.

    1.  

      Also polkit.

      1.  

        CouchDB also uses SpiderMonkey.

      1. 13

        I really appreciate the mentality shown here. I have had to do something similar with a Perl monolith that everyone wanted to ignore and re-write, but 6 years after the “re-write” started and 2 teams later, the Perl is still running the show. Just buckling up and making things better is underrated but very satisfying.

        1. 17

          For the code part, yes. Though the endnote seems to undermine the whole piece:

          The biggest fuss that happened early on was that I dared change it from “INCIDENT” (as in the Jira ticket/project prefix, INCIDENT-1234) to “SEV”. It was amazing. People came out of the woodwork and used all of their best rationalization techniques to try to explain what was a completely senseless reaction on their part.

          I tried to look up “SEV”, as I’ve never heard the term before. All I found was this Atlassian page. It doesn’t even appear on the Wikipedia disambiguation page. Opposing change for change’s sake doesn’t seem like a rationalization; change has cost. Am I missing something? Is this standard or common in some circles?

          1. 12

            I wouldn’t say it undermines the entire piece; I’d say that Rachel, like literally every other dev I’ve ever met, gets some things right, and some things wrong. SEV, as noted elsewhere, is common amongst FAANG-derived companies, and not elsewhere. And, yes, the most consistent (and IMVHO correct) path would’ve been to leave the terminology, too. But I don’t think the entire piece goes out the window because she did one thing inconsistent with the rest of what she did, and the overall point is spot-on.

            1. 3

              I believe that the terminology comes from BBN’s NOC. Specifically, every operations ticket was assigned a severity, with the numbers running from 5 (somebody said this would be nice) through 1 (customer down and/or multiple sites impaired) to 0 (multiple customers down). Everybody lived in fear of a “sev zero”.

              That terminology was in use by 1997, and was probably there by 1992 or so. I have asked on the ex-BBN list if anyone can illuminate it further.

              People familiar with that NOC (or who worked there or around it) populated an awful lot of other organizations over the years.

            2. 6

              ‘Sev’ is short for ‘severity’ and refers to an incident with a particular severity. In a large enterprise using something like ITIL you would hear ‘there is a sev’ and know there is an incident that requires attention soon.

              A sev 3 might require a team to look at it in working hours. 2 might mean an out of hours on-call team. 1 is the worst and is likely a complete outage of whatever you’re running.

              Atlassian have an easy-to-understand interpretation written up here: https://www.atlassian.com/incident-management/kpis/severity-levels

              1. 3

                It’s short for “Site EVent”, not SEVerity. Rachel discusses that in the post (and years ago I worked for FB where they also used the term).

              2. 4

                https://response.pagerduty.com/before/severity_levels/

                It’s terminology I often encounter in FAANG circles. I believe FB, Google, and Amazon use it. We use it at Square.

                1. 1

                  Ah, interesting. Making that change makes more sense if the company is based in the bay.

                2. 2

                  The hilarious thing is that she felt she had to explain what a ‘SEV’ was in this post but she didn’t need to explain what an ‘INCIDENT’ was.

                  1. 2

                    It’s probably short for “severe”

                    1. 3

                      Or “site event”?

                      SEVs (you know, outages, site events, whatever?)

                  2. 5

                    Just buckling up and making things better is underrated but very satisfying.

                    I have seen many, many ground-up, big-bang rewrites over the course of 21 years in software development. And very few of them produced a better outcome than would have been obtained by incremental improvement or replacement of the older systems.

                    1. 2

                      Rewriting just introduces new, unknown bugs, no matter how good the team(s) writing the new software is.

                      I’ve worked as a software tester for 7 years in a few different settings (good and bad teams and organisations) and I can remember one rewrite that improved things. It was a C++ middleware that was rewritten in Java, which made it accessible for more developers in that team. The middleware was multi-threaded, talking to hardware devices & a network backend (both partially async and sync in nature…) and was important to get right (it was handling physical money).

                      Eventually it was refactored to also work with a PoC Android based terminal, so that the common bits were put in a common code base. It worked great, and when doing this PoC the number of unknown bugs was most likely smaller than if we’d rewritten it in Kotlin (or what have you) again.

                  1. 4

                    Visual Studio 2022 will be a 64-bit application, no longer limited to ~4gb of memory in the main devenv.exe process.

                    I am not sure if this is a good thing or a bad one. Why would an editor require 4 GB of RAM?

                    1. 8

                      If you want your editor to do semantic code analysis, the amount of derived data (types, use-def chains, etc.) you use turns out to be substantial. This is because you need to process much larger inputs than an editor does (an editor can show a single file; semantic code analysis requires some knowledge about all files in the project and its dependencies) and the data is complex (types are trees or graphs, scopes are hash maps, etc.).

                      It’s possible to reduce RAM consumption significantly by spilling rarely used data to disk and being smart about lazily realizing only the absolutely required bits of info. But if you just naively code IDE stuff without optimizing for memory usage specifically, you’ll end up in the gigabytes range.
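
                      As a toy illustration of that lazy realization (a sketch of mine, not anything from a real IDE): derived data is computed per file only on first request, so files you never query never cost memory.

                        package main

                        import (
                            "fmt"
                            "sync"
                        )

                        // fileAnalysis holds the derived data for one file, computed at most once.
                        type fileAnalysis struct {
                            once    sync.Once
                            symbols []string // stand-in for types, use-def chains, etc.
                        }

                        type project struct {
                            mu    sync.Mutex
                            files map[string]*fileAnalysis
                        }

                        // analyze lazily realizes the derived data for path.
                        func (p *project) analyze(path string) []string {
                            p.mu.Lock()
                            fa, ok := p.files[path]
                            if !ok {
                                fa = &fileAnalysis{}
                                p.files[path] = fa
                            }
                            p.mu.Unlock()
                            fa.once.Do(func() {
                                // A real analyzer would parse and type-check here.
                                fa.symbols = []string{path + ":main"}
                            })
                            return fa.symbols
                        }

                        func main() {
                            p := &project{files: map[string]*fileAnalysis{}}
                            fmt.Println(p.analyze("a.go")) // cost is paid only for files actually queried
                        }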

                      I did some interesting ultra-high-level benchmarks here: https://github.com/rust-analyzer/rust-analyzer/issues/7330#issuecomment-823209678

                      1. 1

                        The point you are missing is that VS has been heavily extension based, so “semantic code analysis” probably shouldn’t be a part of the main process to start with.

                        1. 1

                          I don’t know the current state, but at least n years ago extensions were run in-process. I think (but don’t know exactly) that that was the story with JetBrains Rider: ReSharper really suffered from being in the same process as Studio itself, so they came up with the idea of moving the brains to a separate process, and, hey, if the brains are a separate CLR app, why not bridge them to the existing Java UI of IntelliJ?

                          These docs seem to imply that everything is still in the same process?

                          When your Visual Studio solution grows large, two code analysis engines (ReSharper and Roslyn) working simultaneously can reach the memory limit of the 32-bit process that they share.

                          https://www.jetbrains.com/help/resharper/Speeding_Up_ReSharper.html

                          Not sure how up to date they are.

                          1. 2

                            It’s fuzzy. Plugins do technically run in-process, but even all but the most trivial of Microsoft’s own plugins are narrow shims that then communicate with the actual plugin core that’s off running in a COM apartment or an equivalent IPC mechanism. I’m not entirely sure how much the cart is pulling the horse there (i.e., whether VS being 32-bits has caused that, or whether their desire for the increased reliability of having out-of-process plugins has enabled VS to stay 32-bit), but that’s where you’re seeing that disconnect.

                      2. 2

                        AFAIK this 32-bit limit was a huge problem if you used static analysis tools like ReSharper.

                      1. 26

                        Very similar story from a few weeks ago: SQLite is not a toy database – I won’t repeat my full comment from there.

                        SQLite is very fast. [..] The only time you need to consider a client-server setup is: [..] If you’re working with very big datasets, like in the terabytes size. A client-server approach is better suited for large datasets because the database will split files up into smaller files whereas SQLite only works with a single file.

                        SQLite is pretty fast compared to fopen(), sure, but PostgreSQL (and presumably also MariaDB) will beat it in performance in most cases once you get beyond the select * from tbl where [..], sometimes by a considerable margin. This is not only an issue with “terabytes” of data. See e.g. these benchmarks.

                        Is it fast enough for quite a few cases? Sure. But I wouldn’t want to run Lobsters on it, to name an example, and it’s not like Lobsters is a huge site.

                        Well, first of all, all database administration tasks becomes much easier. You don’t need any database account administration, the database is just a single file.

                        Except if you want to change anything about your database schema. And PostgreSQL also comes with a great deal of useful administrative tools that SQLite lacks AFAIK, like the pg_stats tables, tracking of slow queries, etc.

                        And sure, I like SQLite. I think it’s fantastic. But we need to be a tad realistic about what it is and isn’t. I also like my aeropress but I can’t boil an egg with it.

                        1. 9

                          SQLite is pretty fast compared to fopen(), sure, but […] MariaDB will beat it in performance

                          I would actually be interested in knowing whether SQLite handles that query that broke Lobste.rs’ “Replies” feature better than MySQL/MariaDB.

                          But I wouldn’t want to run Lobsters on it, to name an example, and it’s not like Lobsters is a huge site.

                          I think Lobste.rs would run fine. It would probably be more of an issue with the limited subset of SQL that SQLite supports.

                          1. 7

                            The replies query broke because the hosted MySQL Lobste.rs relies on doesn’t do predicate push down. SQLite does do predicate push down, so it wouldn’t have the same problem.

                            However SQLite doesn’t have as many execution strategies as MySQL, so it may be missing a key strategy for that query.
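
                            To make the pushdown concrete, here’s a hedged sketch (my example, not Lobste.rs code) using an in-memory SQLite database via the github.com/mattn/go-sqlite3 driver: the filter is written against the outer query, but SQLite flattens the subselect or pushes the predicate into it, so the plan shows an index search rather than a scan over a materialized inner result.

                              package main

                              import (
                                  "database/sql"
                                  "fmt"
                                  "log"

                                  _ "github.com/mattn/go-sqlite3" // assumed driver; any SQLite driver would do
                              )

                              func main() {
                                  db, err := sql.Open("sqlite3", ":memory:")
                                  if err != nil {
                                      log.Fatal(err)
                                  }
                                  defer db.Close()

                                  if _, err := db.Exec(`CREATE TABLE comments (id INTEGER PRIMARY KEY, story_id INTEGER)`); err != nil {
                                      log.Fatal(err)
                                  }
                                  if _, err := db.Exec(`CREATE INDEX comments_story ON comments (story_id)`); err != nil {
                                      log.Fatal(err)
                                  }

                                  // The WHERE targets the outer query, but the planner applies it
                                  // inside the subselect.
                                  rows, err := db.Query(`EXPLAIN QUERY PLAN
                                      SELECT * FROM (SELECT * FROM comments) WHERE story_id = 42`)
                                  if err != nil {
                                      log.Fatal(err)
                                  }
                                  defer rows.Close()
                                  for rows.Next() {
                                      var id, parent, notused int
                                      var detail string
                                      if err := rows.Scan(&id, &parent, &notused, &detail); err != nil {
                                          log.Fatal(err)
                                      }
                                      fmt.Println(detail) // expect a SEARCH ... USING INDEX line, not a full SCAN
                                  }
                              }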

                            1. 5

                              SQLite’s query planner is honestly a bit smarter than MySQL’s in certain ways. For example, MySQL, as recently as 2017, did temporary on-disk tables for subselects. SQLite instead usually managed to convert them to joins. Maybe that’s been fixed in the last four years, but I wouldn’t assume that MySQL would be faster/that SQLite would be slower.

                            2. 1

                              Lobsters uses some fairly complex queries; usually those kinds of things tend to do less well on SQLite, although I didn’t run any benchmarks or anything. I found that SQL support in SQLite is actually pretty good and don’t necessarily expect that to be a major issue.

                              From what I understand, the biggest problem with the Lobsters hosting is that it’s running MySQL rather than MariaDB. While MySQL is still being developed, from what I can see it’s not developed very actively, and MariaDB is leaps ahead of it. At this point we should probably stop grouping them together as “MySQL/MariaDB”.

                              1. 1

                                Aside from the operations perspective of migrating data, converting things that are not 1:1 between MySQL and MariaDB, etc., are there any features in lobste.rs that prevent the use of MariaDB?

                                1. 1

                                  It used to run on MariaDB until there was a handover of the servers. AFAIK it runs well on both (but not PostgreSQL, and probably also not SQLite).

                                  1. 1

                                    I guess their current host only provides MySQL (for unknown reasons).

                                    I asked about offering hosting, but never got a reply.

                              2. 12

                                I also like my aeropress but I can’t boil an egg with it.

                                I bet you could poach an egg with it, with some inventiveness and a slightly severe risk of getting scalded. ;)

                                1. 3

                                  When I posted that comment I was thinking to myself “I bet some smartarse is going to comment on that” 🙃

                                  1. 2

                                    Joking aside, I think a better analogy would be comparing the Aeropress to an espresso machine: the Aeropress is going to get you really good coffee that you’re going to use every day, costs very little, is easy to maintain, and you can bring with you everywhere, but it’s never going to give you an espresso. But then again, it’s not really trying to.

                                    (The analogy falls apart a bit, as one of the original claims was that it could produce espresso. I think they stopped claiming that though.)

                                  2. 1

                                    LOL

                                    …and audible laughter was emitted. Thanks for that.

                                    1. 1

                                      On the other hand if you had to set up and supply your password to obtain admin rights every time you just wanted to make coffee….

                                      …because some nutjob might want to use it for boiling eggs and the company wanted to stop that….

                                        …the device that just lets you get on with making coffee (or boiling eggs) is a helluva lot faster for many jobs!

                                    2. 5

                                      Except if you want to change anything about your database schema.

                                      SQLite has supported ALTER TABLE ADD COLUMN for years, and recently added support for dropping columns. So I’d amend your statement to “…make complex changes to your db schema.”

                                      SQLite has stats tables, mostly for the query optimizer’s own use; I haven’t looked into them so I don’t know how useful they are for human inspection.

                                      1. 2

                                        SQLite has supported ALTER TABLE ADD COLUMN for years, and recently added support for dropping columns. So I’d amend your statement to “…make complex changes to your db schema.”

                                        Yeah, the drop column is a nice addition, but it’s still a pain even for some fairly simple/common changes like renaming a column, changing a check constraint, etc. I wouldn’t really call these complex changes. It’s less of a pain than it was before, but still rather painful.

                                        SQLite has stats tables, mostly for the query optimizer’s own use; I haven’t looked into them so I don’t know how useful they are for human inspection.

                                        As far as I could find a while ago, there’s nothing like PostgreSQL’s internal statistics, for example keeping track of things like the number of seq scans vs. index scans. You can use explain query plan of course, but query plans can differ based on which parameters are used, table size, etc., and the query planner may surprise you. It’s good to keep a bit of an eye on these kinds of things for non-trivial cases. Things like logging slow queries are similarly useful, and AFAIK not really something you can do in SQLite (although you can write a wrapper in your application).
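
                                        For instance, a minimal sketch of such an application-side wrapper (hypothetical names, nothing SQLite-specific): time every query and log the ones over a threshold.

                                          package dbutil

                                          import (
                                              "database/sql"
                                              "log"
                                              "time"
                                          )

                                          const slowThreshold = 100 * time.Millisecond

                                          // QueryLogged runs db.Query and logs anything slower than the
                                          // threshold, since there is no server process doing that
                                          // bookkeeping for you.
                                          func QueryLogged(db *sql.DB, query string, args ...interface{}) (*sql.Rows, error) {
                                              start := time.Now()
                                              rows, err := db.Query(query, args...)
                                              if d := time.Since(start); d > slowThreshold {
                                                  log.Printf("slow query (%s): %s", d, query)
                                              }
                                              return rows, err
                                          }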

                                        None of these are insurmountable problems or show-stoppers, but as I mentioned in my other comment from a few weeks ago, overall I find the PostgreSQL experience much smoother, at the small expense of having to run a server.

                                        1. 6

                                          it’s still a pain even for some fairly simple/common changes like renaming a column

                                          https://sqlite.org/lang_altertable.html :

                                          ALTER TABLE … RENAME COLUMN: The RENAME COLUMN TO syntax changes the column-name of table table-name into new-column-name. The column name is changed both within the table definition itself and also within all indexes, triggers, and views that reference the column.

                                    1. 25

                                      So basically we finally arrived at the “make your app a web page” that Apple demanded when launching the iPhone

                                      1. 30

                                        Yes, and the latest trend in web development is to render content on the server. Everything old is new again!

                                        1. 7

                                          I think it’s better this time, because phones and network are fast enough that doing everything in the browser isn’t limited by UMTS speeds.

                                          1. 3

                                            The original iPhone didn’t even support UMTS (3G); it was EDGE (2.5G). A load of mobile providers who had already rolled out large UMTS networks had to go and deploy older hardware to support the iPhone without it falling back to GPRS. The latency on GPRS was awful (500ms RTTs were common, making it unusable for anything interactive).

                                          2. 2

                                            I have noticed this and had the very same reaction a few weeks ago.

                                          3. 13

                                            To be fair: when Apple announced this, React did not exist, Vue did not exist, precursors like Backbone didn’t even exist, and most critically, most of the technologies and tools we use in 2021 to do SPAs, let alone offline webapps, did not exist. Hell, I think the dominant offline storage solution was WebSQL, which was never standardized and is not (AFAIK) supported in any contemporary browser, and no equivalent of web workers existed unless you had Google Gears installed. You also had nothing like WebGL, or web sockets, or even widespread contemporary CSS that would make reasonable, cross-platform styling feasible. So what Apple was offering at the time was morally equivalent to having bookmark links on the home screen.

                                            (Yeah, I’m very aware of the pile of old meta tags you could use to make the experience be better than that in a literal sense, but that doesn’t resolve anything else I highlighted.)

                                            Speaking purely for myself, I found the initial announcement infuriating, not because I didn’t believe in the web (I did! Firefox was growing! Safari was proving the viability of KHTML! IE was on the decline!), but because Apple’s proposal was just so damn far from what doing that seriously would’ve actually looked like that it felt condescending. The Palm Pre, which notably came out two years later, was dramatically closer to what I’d have expected if Apple were being sincere in their offer. (And even there, webOS, much as I love it, is more an OS that happens to have JavaScript-powered apps than a genuine web app platform in the 2021 sense.)

                                            1. 5

                                              Even at the time, Apple’s stance felt to me like, “We aren’t finished with our native SDK yet and it’s so far from ready for public consumption that we’re going to just pretend it doesn’t exist at all.” I remember talking about the iPhone with my coworkers when it first came out and everyone just assumed native apps would be coming at some point.

                                              Even webOS (which I also loved) ended up supporting native apps eventually, despite having a much more feature-rich platform for JavaScript code.

                                              Games seem to be the killer app category that pushes mobile OS vendors to support native code. They’re one of the few categories of application where a lack of native code support can make an app impossible to implement, rather than just making it a bit slower or clunkier but still basically workable.

                                            2. 3

                                              Even Firefox OS was too early in the game for that (besides other problems of FFOS).

                                              1. 6

                                                If it was timed right, Mozilla would have found another way to run it into the ground. ;-)

                                                1. 1

                                                  Could not agree more!

                                            1. 3

                                              Is there a working strace equivalent for Windows? It’s the tool I always miss when I have to debug anything there.

                                              1. 5

                                                Procmon?

                                                1. 2

                                                    Procmon is the closest equivalent, but Portmon and ProcDump, alongside the tightly-related-but-different Spy++, can also be very useful in this context (some of those being closer to e.g. ltrace than strace, specifically, but the division of responsibilities on Windows is a bit different, so there’s not a one-to-one mapping).

                                                  1. 1

                                                    procmon seems to work well, thanks for the suggestion!

                                                1. 11

                                                  Most lists of “weird programming languages” get bogged down in brainfuck and brainfuck skins. I like that this one doesn’t!

                                                  1. 11

                                                    I agree. Although I feel that APL and especially Lisp don’t really fit with the rest of the list - those are languages that (some) people really do want to program in.

                                                    1. 7

                                                      I think a listicle like this about unusual languages people actually use would be really interesting. Probably something like

                                                      • Forth
                                                      • APL/J/K
                                                      • Inform7
                                                      • Orca
                                                      • Golfscript (stretching it, I know)

                                                      Damn I’ve heard of so many bizarre languages

                                                      1. 10

                                                        PostScript.

                                                        1. 2

                                                          Any good resources on PS? I’ve heard… rumors, but never investigated myself.

                                                          1. 7

                                                            I’m dead tired and can’t find the docs before sleep, but PostScript is an awesome concatenative language and sincerely my favorite in the genre other than Factor. It’s not hard. I’ll find links to the guides in the morning. You can literally code in the GhostScript REPL meanwhile if you want to play.

                                                            1. 3

                                                              I really like what I’ve read of Bill Casselman’s Mathematical Illustrations which covers PostScript and some fun geometry.

                                                              1. 1

                                                                Unfortunately no, like so many of my opinions I’ve gotten it from The Internet.

                                                                I believe my primary memory of PostScript being used for programming is from this comment by JWZ: http://regex.info/blog/2006-09-15/247#comment-3085

                                                                1. 1

                                                                  Back when I had to use PostScript for work, the language reference was the best document I was able to find.

                                                              2. 3

                                                                And there’s INRAC (used for at least two, possibly three, commercial products that I know of) where flow control is non-deterministic.

                                                                1. 2

                                                                  I saw you mention INRAC on the alien languages ask, which to my eternal shame I didn’t notice until two weeks later. What are some resources for learning about it as an outsider? Sounds really interesting!

                                                                  1. 4

                                                                      Unfortunately, there isn’t much available and most of the references I’ve come across just mention INRAC. Aside from the original creator of INRAC (William Chamberlain), I think I’ve inadvertently become an INRAC expert:

                                                                    Deconstruction Racter

                                                                    The Psychotherapy of Racter, or The Descent Into Madness of Sean Conner

                                                                    The Psychotherapy of Racter, or The Further Descent Into Madness of Sean Conner

                                                                    INRAC, the mind bending implementation language of Racter

                                                                    WTF INRAC?

                                                                    So how do you determine undefined behavior in a language you are reverse engineering?

                                                                2. 2

                                                                  Hey, if the software historian / archeologist hasn’t heard of it…

                                                                  For that hypothetical listicle, I’d consider adding one or two of your modelling languages - like, TLA+ looks pretty magical to people who are not you ;-). Also, I’d consider - LaTeX is not actually that uncommon, but very different from other languages in both appearance and semantics. (Maybe TikZ, but I’m not sure that counts as a programming language.)

                                                                  Something like Haskell is probably too common, but Prolog might make the list?

                                                                  [Quick EDIT: also, maybe assembly for the original MIPS CPUs, where you could apparently read the old value of a register if you manage to execute the instruction before the previous instruction has actually written the new value? It doesn’t look too evil, but…]

                                                                  … do people use Orca?

                                                                  1. 3

                                                                    … do people use Orca?

                                                                    @rwhaling introduced me to it and was using it for his synth music, so at least one person uses it :P

                                                                    1. 2

                                                                      Re MIPS, you may be thinking of https://en.m.wikipedia.org/wiki/Delay_slots. For some reason this is still being taught in introductory computing classes at university.

                                                                      1. 2

                                                                        [Quick EDIT: also, maybe assembly for the original MIPS CPUs, where you could apparently read the old value of a register if you manage to execute the instruction before the previous instruction has actually written the new value? It doesn’t look too evil, but…]

                                                                        Were you thinking of the divide and multiply instructions? Some instruction sequences give unpredictable results.

                                                                        1. 2

                                                                          I was thinking of https://retrocomputing.stackexchange.com/questions/17598/did-any-cpu-ever-expose-load-delays. (kameliya’s Wikipedia page is a little less informative; note that sufficiently-embedded processors may be able to ensure that an interrupt doesn’t happen. Which would allow one to write rather mind-bending code.)

                                                                      2. 2

                                                                          SQL is based around relations (in the mathematical sense) and is the most popular goofy programming language no one thinks about.

                                                                          Lex/Yacc let you write half your program as a CFG and the rest in C, a language/tool chain that again no one thinks of in these lists.

                                                                        Wolfram is based on term rewriting and is somewhat popular and extensively used in physics.

                                                                          Erlang is based around a distributed model that is again something few other languages support natively.

                                                                          Most of the ‘esoteric’ language lists are lists of ‘languages that do the same thing as C but poorly’.

                                                                        1. 1

                                                                          Yes, I was also just about to suggest Inform 7. It’s fantastic.

                                                                          1. 1

                                                                            Golfscript (stretching it, I know)

                                                                            No you’re not. I want to write an implementation that is not Ruby

                                                                            1. 1

                                                                              Mumps, RPG…

                                                                              1. 1

                                                                                Factor is a really nice forth dialect.

                                                                                1. 1

                                                                                  Prolog, MUMPS

                                                                                  1. 1

                                                                                    MiniZinc is also worth an include on that list.

                                                                                  2. 1

                                                                                    TBH I interpreted the inclusion of CL on this list as a trolling attempt toward lispers.

                                                                                1. 3

                                                                                  It’s truly fascinating that all the early smartphone attempts focused on making it easy to run desktop applications on your smartphone. As it turned out, the interaction model with touch and small screens was just too different and everything had to be rewritten from scratch, but that wasn’t obvious at the time.

                                                                                  1. 2

                                                                                    I honestly fully agree. I didn’t own the 900, only the 800, and I gave that thing a hell of a lot of use. It was an amazing device, and it traveled the world with me in a very literal sense.

                                                                                    But I knew even back then, before the N900 shipped, that it was a dead end. To me, the N800/900 was always a bridge: it was Nokia experimenting with hardware design in an ecosystem where they knew that hobbyists would show them what the form was capable of. Unlike contemporary Ubuntu and similar desktop Linuxes, you really needed to be comfy using things like apt and so on in order to do useful things with the N800/N900. Maybe not the terminal literally, but a lot of Debian-specific (not even Linux-specific) details.

                                                                                    And the fact is that I don’t think the form factor was all that, either. Hardware keyboards were necessary in 2007, but not in 2021. The UI required a stylus to operate properly, and while it’s possible in a literal sense to engineer your way out of that and still keep Gtk as your toolkit, that’d have been a massively uphill battle. (Hell, the damn D-pad wasn’t reliably supported in a useful way!) And so on.

                                                                                    As much as I hate to say it, because they were every bit as proprietary as their adversaries, I think that Windows Phone and (my personal favorite) WebOS were much better also-rans in the phone space. Nokia would have had to choose one of those, or Android, eventually. I have a very hard time, except in retrospect and from a very specific point of view, saying Nokia made the wrong call trying to go with Windows.

                                                                                  1. 8

                                                                                      This was an interesting article: it breaks down the issues with net.IP well, and describes the path to the current solution well.

                                                                                    But.

                                                                                    This isn’t a difficult problem. Don’t waste a ton of space, don’t allocate everywhere, make it possible to make the type a key in the language’s standard map implementation. In C++, this would’ve been easy. In Rust, this would’ve been easy. In C, this would’ve been easy (assuming you’re using some kind of halfway decent map abstraction). It doesn’t speak well of Go’s aspiration to be a systems programming language that doing this easy task in Go requires a bunch of ugly hacks and a separate package to make a string deduplicator which uses uintptrs to fool the garbage collector and relies on finalizers to clean up. I can’t help but think that this would’ve been a very straightforward problem to solve in Rust with traits or C++ with operator overloading or even Java with its Comparable generic interface.

                                                                                    That’s not to say that the resulting netaddr.IP type is bad, it seems like basically the best possible implementation in Go. But there are clearly some severe limitations in the Go language to make it necessary.

                                                                                    1. 11

                                                                                      Almost all of the complexity that happened here is related to the ipv6 zone string combined with fitting the value in 24 bytes. Given that a pointer is 8 bytes and an ipv6 address is 16 bytes, you must use only a single pointer for the zone. Then, having amortized zero allocations with no space leaks for the zone portion, some form of interning with automatic cleanup is required.
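
                                                                                        A quick sketch of that size arithmetic in Go (hypothetical names, not the real netaddr layout): 16 address bytes plus a single pointer-sized field is exactly the 24-byte budget on a 64-bit platform.

                                                                                          package main

                                                                                          import (
                                                                                              "fmt"
                                                                                              "unsafe"
                                                                                          )

                                                                                          type internedZone struct{ name string }

                                                                                          // ipSketch mirrors the constraint: a 16-byte address leaves room for
                                                                                          // exactly one pointer if the whole value must fit in 24 bytes.
                                                                                          type ipSketch struct {
                                                                                              addr [16]byte      // IPv6 (or IPv4-mapped) address
                                                                                              zone *internedZone // the single pointer-sized slot left for the zone
                                                                                          }

                                                                                          func main() {
                                                                                              fmt.Println(unsafe.Sizeof(ipSketch{})) // prints 24 on 64-bit platforms
                                                                                          }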

                                                                                      If this is as easy as you claim in C/C++/Rust/whatever real systems language you want, can you provide a code snippet implementing it? I’d be happy to audit to see if it does meet the same (or better!) constraints.

                                                                                      1. 6

                                                                                        Here’s a C++ version: https://godbolt.org/z/E3WGPb - see the bottom for a usage example.

                                                                                        Now, C++ is a terrible language in many ways. It makes everything look super complicated, and there’s a lot of seemingly unnecessary code there, but almost all of that stems from having to make my own RAII type, which includes writing the default constructor, the move constructor, the copy constructor, the destructor, the move operator= and the copy operator=. That complexity is just par for the course in C++.

                                                                                        One advantage of the netaddr.IP type is that it doesn’t allocate for every zone, just for every new zone, thanks to the “intern” system. My code will allocate space for the zone for every IPv6 address with a zone. One could definitely implement a “zone cache” system for my IPZone type though, maybe using a shared_ptr instead of a raw pointer for refcounting. One would have to look at usage patterns to see whether the extra complexity and potential memory/CPU overhead would be worth it or if zones are so infrequently used that it doesn’t matter. At least you have the choice in C++ though (and it wouldn’t rely on finalizers and fooling the GC).

                                                                                        1. 7

                                                                                            They also had the choice to just make a copy of every string when parsing and avoid all of the “ugly hacks”. Additionally, a shared_ptr is 16 bytes, so you’d have to figure out some other way to pack that into the IPAddress without allocations. So far, I don’t think you’ve created an equivalent type without any “ugly hacks”. Would you like to try again?

                                                                                          1. 6

                                                                                            I don’t think they had the choice to just copy the zone strings? My reading of the article was that the intern system was 100% a result of the constraint that A) IP addresses with no zone should be no bigger than 24 bytes and B) it should be possible to use IP addresses as keys. I didn’t see concern over the memory usage of an IP address’s zone string. Whether that’s important or not depends on whether zones are used frequently or almost never.

                                                                                            It’s obviously hard to write a type when the requirements are hypothetical and there’s no data. But here’s a version with a zone string cache: https://godbolt.org/z/P9MWvf. Here, the zone is a uint64_t on the IP address, where 0 represents an IPv4 address, 1 represents an IPv6 address with no zone, and any other number refers to some refcounted zone kept in that IPZoneCache class. This is the “zone mapping table” solution mentioned in the article, but it works properly because the IPAddress class’s destructor decrements the reference count.

                                                                                            1. 7

                                                                                              I don’t think they had the choice to just copy the zone strings? My reading of the article was that the intern system was 100% a result of the constraint that A) IP addresses with no zone should be no bigger than 24 bytes and B) it should be possible to use IP addresses as keys.

                                                                                                Indeed, interning is required by the 24 byte limit. That interning avoids copies seems to be a secondary benefit, meeting the “allocation free” goal. It was a mistake to imply that copying would allow a 24 byte representation and that interning was only to reduce allocations.

                                                                                              That said, your first solution gets away with avoiding interning because it uses C style (null terminated) strings so the reference only takes up a single pointer. Somehow, I don’t think that people would be happier if Go allowed or used C style strings, though, and some might consider using them an “ugly hack”.

                                                                                              I didn’t see concern over the memory usage of an IP address’s zone string. Whether that’s important or not depends on whether zones are used frequently or almost never.

                                                                                              One of the design criteria in the article was “allocation free”.

                                                                                              It’s obviously hard to write a type when the requirements are hypothetical and there’s no data. But here’s a version with a zone string cache: https://godbolt.org/z/P9MWvf.

                                                                                              Great! From what I can tell, this does indeed solve the problem. I appreciate you taking the time to write these samples up.


                                                                                              I have a couple of points to make about your C++ version and some hypothetical C or Rust versions as compared to the Go version, though.

                                                                                              1. It took your C++ code approximately 60 lines to create the ref-counted cache for interning. Similarly, stripping comments and reducing the intern package they wrote to a similar feature set also brings it to around 60 lines. Since it’s not more code, I assume the objection is to the kind of code that is written? If so, I can see that the C++ code you provided looks very much like straightforward C++ code whereas the Go intern package is very much not. That said, the authors of the intern package often work on the Go runtime where these sorts of tricks are more common.

                                                                                                2. In a hypothetical C solution that mirrors your C++ solution, it would need a hash-map library (as you stated). Would you not consider it an ugly hack to have to write one of those every time? Would that push the bar for implementing it in C from “easy” towards “difficult”? Why should the Go solution not be afforded the same courtesy under the (now valid) assumption that an intern library exists?

                                                                                              3. I’ll note that when other languages gain a library that increases the capabilities, even if that library does unsafe hacks, it’s often viewed as a positive sign that the language is powerful enough to express the concept. Why not in this case?

                                                                                              4. In a hypothetical Rust solution, the internal representation (I think. Please correct me if I’m wrong) can’t use the enum feature because the tag would push the size limits past 24 bytes. Assuming that’s true, would you consider it an ugly hack to hand-roll your own union type, perhaps using unsafe, to get the same data size layout?

                                                                                                5. All of these languages would solve the problem easily and idiomatically if the size was allowed to be 32 bytes and allocations were allowed (this is take 2 in the blog post). Similarly, I think they all have to overcome significant and non-obvious challenges to hit 24 bytes with no allocations as they did.


                                                                                              Anyway, I want to thank you for engaging and writing some code to demonstrate the type in C++. That’s effort you don’t usually get on the internet. This conversation has caused me to update my beliefs to agree more with adding interning or weak references to the language/standard library. Hopefully my arguments have been as useful to you.

                                                                                      2. 4

                                                                                        I agree—if Go is a systems language. But I don’t think it ever was supposed to be. Or if it was, it’s (in my opinion) really bad at it. Definitely worse than even something like C#, for exactly the reasons you’re highlighting.

                                                                                          I think Go was originally designed more to be a much faster language than Python (or perhaps Java), specifically for Google’s needs, and thus designed to compete with those for high-performance servers. And it’s fine at that. And I’ve thought about solving this kind of issue in those languages, too, using things like array in Python for example.

                                                                                          So I agree Go isn’t a good systems language, but I think that was a bit of a retcon. It’s a compiled high-level language that could replace Python usage at Google. It’s not competing with Rust, C, Zig, etc.

                                                                                        1. 3

                                                                                          Ok, I can buy that. IIRC, it was originally promoted as a systems language, but it seems like they’ve gone away from that branding as well. There’s a lot of value to something like “a really fast, natively compiled Python”.

                                                                                            But even then, this article seems to demonstrate a pretty big limitation. Something as simple as using a custom IP address type as the key in a map, ignoring everything performance-related, seems extremely difficult. How would you write an IP address struct which stores an IPv4 address or an IPv6 address with an optional zone, and which can be used as the key in a map, even ignoring memory usage and performance? Because that would be easy in Python too; just implement __hash__ and __eq__.

                                                                                            This is a problem which isn’t just related to Go’s positioning, be it a “systems language” or a “faster Python”. Near the bottom we have C, where an IP address -> whatever map is about as difficult as any other kind of map. Slightly above, we have C++ and Rust, where the built-in types let you use your IP address class/struct as a key with no performance penalty, since you stamp out a purpose-built “IP address to whatever” map using templates. Above that again, we have Java and C#, which also make it easy, though at a performance cost due to virtual calls (because generics aren’t templates), though maybe the JIT optimises out the virtual call, who knows. Near the top, we have Python, which makes it arguably even more straightforward than Java thanks to duck typing.

                                                                                          Basically, unless you put Go at the very bottom of the stack alongside C, this should be an easy task regardless of where you consider Go to fit in.
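
                                                                                            For what it’s worth, a Go struct whose fields are all comparable does work as a map key out of the box; the catch, and the reason the article needs interning, is that pointer fields compare by identity rather than by value. A naive sketch of mine (it interns forever and never frees anything, which is exactly the hard part the article solves):

                                                                                              package main

                                                                                              import "fmt"

                                                                                              var interned = map[string]*string{} // leaks: entries are never released

                                                                                              // intern returns one canonical pointer per distinct string, so
                                                                                              // pointer identity coincides with string equality.
                                                                                              func intern(s string) *string {
                                                                                                  if p, ok := interned[s]; ok {
                                                                                                      return p
                                                                                                  }
                                                                                                  interned[s] = &s
                                                                                                  return &s
                                                                                              }

                                                                                              type ip struct {
                                                                                                  addr [16]byte
                                                                                                  zone *string // interned, so key comparison behaves like value comparison
                                                                                              }

                                                                                              func main() {
                                                                                                  m := map[ip]string{}
                                                                                                  a := ip{zone: intern("eth0")}
                                                                                                  b := ip{zone: intern("eth0")} // same canonical pointer as a
                                                                                                  m[a] = "found"
                                                                                                  fmt.Println(m[b]) // "found"
                                                                                              }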

                                                                                          1. 3

                                                                                            IIRC, it was originally promoted as a systems language, but it seems like they’ve gone away from that branding as well.

                                                                                              I believe you’re correct about how Google promoted it. I just remember looking at it, thinking “this is absolutely not a systems language; it’s Limbo (https://en.wikipedia.org/wiki/Limbo_(programming_language)), but honestly kind of worse, and without the interesting runtime,” and continuing to not use it. So I’m not sure the team itself actually thought they were doing a systems language.

                                                                                            But even then, this article seems to demonstrate a pretty big limitation. Something as simple as using a custom IP address type as the key in a map, ignoring everything performance-related, seems extremely difficult.

                                                                                            I completely agree, but that’s changing the discussion to whether Go is a good language, period. And since I mostly see that devolving into a flame war, I’m just going to just say that I think you have a lot of company, and also that clearly lots of people love the language despite any warts it has.

                                                                                            1. 2

                                                                                              I completely agree, but that’s changing the discussion to whether Go is a good language, period. And since I mostly see that devolving into a flame war, I’m just going to just say that I think you have a lot of company, and also that clearly lots of people love the language despite any warts it has.

                                                                                                My relationship with the language is… Complicated. I often enjoy it, I use it for work, and when I just want to write a small tool (such as when I wrote a process tree viewer) it’s generally my go-to “scripting” language these days. But I hate how the module system puts URLs to random git hosting websites in my source code, there’s a lot of things I dislike about the tooling, and the inability to write a data structure which acts like the built-in data structures and the inability to write a type which works with the built-in data structures are both super annoying issues which none of the other languages I use have. I’m hoping Go 2 will fix some of the bigger problems, and I’m always worried about which directions the corporate management at Google will take the language or its tooling/infrastructure.

                                                                                              But you’re right, this is tantamount to flamewar bait so I’ll stop now.

                                                                                      1. 12

                                                                                              I once wasted an entire month trying to resolve some cryptic C# compile errors where Visual Studio simply wouldn’t recognize some of my source files. In the end, the reason was that the compiler silently failed to recognize files with paths longer than 255 characters, even though you can technically create such files on Windows. A prefix like “C:\Users\Benjamin\Documents\ProjectName\src” combined with C#’s very verbose naming conventions meant that a few of my files were just over the path size limit.

                                                                                        1. 8

                                                                                                I feel like Windows is drowning in technical debt even more than Linux is.

                                                                                                The APIs to work with long paths have existed for ages now, so most modern software lets you easily create deep hierarchies, but Windows Explorer still isn’t updated to work with those APIs, so if you create a file with a long path, you can’t interact with that file through Explorer.

                                                                                                There have been solid widgets for things like text entry fields in various Microsoft UI frameworks/libraries for ages now, but core apps like Notepad and, again, Windows Explorer still aren’t updated to take advantage of them, so hotkeys like ctrl+backspace will just insert a square instead of doing the action which the rest of the system has taught you to expect (i.e. deleting a word).

                                                                                                CMD.EXE is an absolutely horrible terminal application, but it hasn’t been touched in ages presumably due to backwards compatibility, and Microsoft is just writing multiple new terminal applications, not as replacements because CMD.EXE will always exist, but as additional terminal emulators which you have to use in addition to CMD.EXE.

                                                                                                The Control Panel lets you get to all your settings, but it’s old and crusty, so Microsoft is writing multiple generations of separate, holistic Control Panel replacements, but with limitations which make it necessary to use both the new and the old settings editors at the same time, and sometimes the Control Panel and some new settings program don’t even agree on the same setting.

                                                                                                Windows is useful as a gaming OS, but any time I actually try to use it, I just get sad.

                                                                                          1. 6

                                                                                            CMD.EXE is an absolutely horrible terminal application, but it hasn’t been touched in ages presumably due to backwards compatibility, and Microsoft is just writing multiple new terminal applications, not as replacements because CMD.EXE

                                                                                            What you think of as cmd.exe is actually a bunch of things, most of which are in the Windows Console Host. The shell-equivalent part is stable because a load of .bat files are written for it, but PowerShell is now the thing that’s recommended for interactive use. The console host (which includes a mixture of things that are PTY-subsystem and terminal emulator features on a *NIX system) is now developed by the Windows Terminal team and is seeing a lot of development. Both cmd.exe and powershell.exe run happily in the new terminal with the new console host and in the old terminal and the old console host. At the moment, if you run them from a non-console environment (e.g. from the windows-R box), the default console host that’s started is the one that Windows ships with and so you don’t get the new terminal.

                                                                                            1. 1

                                                                                              Windows Terminal is great when I can use it, but it does not seem to work well with administrator privileges.

                                                                                              1. 1

                                                                                                You can use the sudo package from scoop. For me it’s good enough.

                                                                                                1. 1

                                                                                                  Wow, did not know about this! It looks like it still generates a UAC popup unless you configure those to not exist. Still, far better than nothing.

                                                                                                  http://blog.lukesampson.com/sudo-for-windows

                                                                                              2. 1

                                                                                                but PowerShell is now the thing that’s recommended for interactive use

                                                                                                Which one? ;-)

                                                                                                I have some code that extracts config/data/cache directories on Windows (the equivalent of “check if XDG_CONFIG_DIR is set, otherwise use .config” on Linux) and it’s just a hyperdimensional lair of horrors.

                                                                                                Basically, the best way to get such info without having to ship native code is to run powershell (version 2, because that one does not have restricted mode) with a base64 encoded powershell script that embeds a C# type declaration that embeds native interop code that finally calls the required APIs.¹

                                                                                                I’m close to simply dropping Windows support, to be honest.


                                                                                                ¹ The juicy part of the code for those interested:

                                                                                                  static final String SCRIPT_START_BASE64 = operatingSystem == 'w' ? toUTF16LEBase64("& {\n" +
                                                                                                      "[Console]::OutputEncoding = [System.Text.Encoding]::UTF8\n" +
                                                                                                      "Add-Type @\"\n" +
                                                                                                      "using System;\n" +
                                                                                                      "using System.Runtime.InteropServices;\n" +
                                                                                                      "public class Dir {\n" +
                                                                                                      "  [DllImport(\"shell32.dll\")]\n" +
                                                                                                      "  private static extern int SHGetKnownFolderPath([MarshalAs(UnmanagedType.LPStruct)] Guid rfid, uint dwFlags, IntPtr hToken, out IntPtr pszPath);\n" +
                                                                                                      "  public static string GetKnownFolderPath(string rfid) {\n" +
                                                                                                      "    IntPtr pszPath;\n" +
                                                                                                      "    if (SHGetKnownFolderPath(new Guid(rfid), 0, IntPtr.Zero, out pszPath) != 0) return \"\";\n" +
                                                                                                      "    string path = Marshal.PtrToStringUni(pszPath);\n" +
                                                                                                      "    Marshal.FreeCoTaskMem(pszPath);\n" +
                                                                                                      "    return path;\n" +
                                                                                                      "  }\n" +
                                                                                                      "}\n" +
                                                                                                      "\"@\n") : null;
                                                                                                
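                                                                                                For the curious, the launch side is comparatively boring. A simplified sketch of it (names invented for illustration; -EncodedCommand is why the payload is Base64 over UTF-16LE in the first place, and the script sets the console to UTF-8, hence the charset below):

                                                                                                    import java.io.BufferedReader;
                                                                                                    import java.io.InputStreamReader;
                                                                                                    import java.nio.charset.StandardCharsets;

                                                                                                    class KnownFolderQuery {
                                                                                                        // Runs the encoded script and returns whatever it prints
                                                                                                        // (the resolved known-folder paths).
                                                                                                        static String run(String scriptBase64) throws Exception {
                                                                                                            Process p = new ProcessBuilder(
                                                                                                                    "powershell.exe", "-NoProfile", "-Version", "2",
                                                                                                                    "-EncodedCommand", scriptBase64)
                                                                                                                .redirectErrorStream(true)
                                                                                                                .start();
                                                                                                            StringBuilder out = new StringBuilder();
                                                                                                            try (BufferedReader r = new BufferedReader(
                                                                                                                    new InputStreamReader(p.getInputStream(), StandardCharsets.UTF_8))) {
                                                                                                                String line;
                                                                                                                while ((line = r.readLine()) != null) {
                                                                                                                    out.append(line).append('\n');
                                                                                                                }
                                                                                                            }
                                                                                                            p.waitFor();
                                                                                                            return out.toString();
                                                                                                        }
                                                                                                    }
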
                                                                                                1. 1

                                                                                                  Which one? ;-)

                                                                                                  PowerShell 7 Core, of course!

                                                                                                  …for now!

                                                                                                  …unless you also need to support classic PowerShell, in which case, PowerShell 5!

                                                                                                  …and be careful not to use Windows-specific assemblies if you want to be cross-platform!

                                                                                              3. 3

                                                                                                The APIs to work with long paths have existed for ages now

                                                                                                Well, I’d agree about technical debt, but this claim is a great example of it.

                                                                                                As an application developer, you can choose one of these options:

                                                                                                1. Add a manifest to your program where you promise to support long paths throughout the entire program. If you do this, it won’t do anything unless the user has also modified a system-global setting to enable long paths, which obviously many users won’t do, and you can expect to deal with long path related support queries for a long time. This is also only supported on recent versions of Windows 10, so you can expect a few queries from users running older systems.
                                                                                                2. Change your program to use UTF-16, and escape paths with \\?\. The effect of doing this is to tell the system to suppress a lot of path conversions, which means you have to implement those yourself - things like applying a relative path to an absolute path. This logic is more convoluted on Windows than on Linux, because you have to think about drive letters and SMB shares: “D:” relative to “C:\foo” means “the current directory on drive D:”; “..\..\bar” relative to “C:\foo” means “C:\bar”; “\\server\share\..\bar” becomes “\\?\UNC\server\share\bar”; “con” means “con”.

                                                                                                I went with option #2, but the whole time I kept feeling this is yet another wheel that all application developers are asked to reinvent.
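
                                                                                                For a feel of option #2, here’s a minimal sketch of the prefixing step - illustrative only, not my actual code, and it assumes the input is already absolute and fully normalized, because with \\?\ the system no longer resolves “..”, drive-relative paths, or device names for you:

                                                                                                    static String toExtendedLengthPath(String absolute) {
                                                                                                        if (absolute.startsWith("\\\\?\\")) {
                                                                                                            return absolute;                                // already extended-length
                                                                                                        }
                                                                                                        if (absolute.startsWith("\\\\")) {
                                                                                                            // \\server\share\... becomes \\?\UNC\server\share\...
                                                                                                            return "\\\\?\\UNC\\" + absolute.substring(2);
                                                                                                        }
                                                                                                        return "\\\\?\\" + absolute;                        // C:\foo becomes \\?\C:\foo
                                                                                                    }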

                                                                                                1. 1

                                                                                                  Windows is useful as a gaming OS, but any time I actually try to use it, I just get sad.

                                                                                                  • Microsoft Office and the Adobe Suite (or replacements such as the Affinity Suite).

                                                                                                  It would be really nice if Microsoft just ported Office.

                                                                                                  1. 2

                                                                                                    They effectively have. It seems like Microsoft cares far more about the O365 version of Office than any native version — even Windows.

                                                                                                    1. 2

                                                                                                      They effectively have. It seems like Microsoft cares far more about the O365 version of Office than any native version — even Windows.

                                                                                                      Office 365 is a subscription service, most of the subscriptions include the Windows/Mac Apps. I guess that you mean Office Online, but it only contains a very small subset of the features of the native versions. I tried to use it for a while, but you quickly run into features that are missing.

                                                                                                  2. 1

                                                                                                    The separation of the control centre may actually go away soon. If the articles are to be believed, MS finished that migration in the latest version.

                                                                                                    1. 1

                                                                                                      More details? The only thing I heard was that they were finally killing the working ones.

                                                                                                1. 37

                                                                                                  I’m primarily a Windows developer, and relate to the frustration of Windows development.

                                                                                                    However, reading this article, most of the comments seemed related to initial setup: yes, you have to install git; yes, it installs its own bash; yes, vim doesn’t know what to do with the Windows clipboard out of the box; yes, PowerShell came from an era where being conspicuously different was considered a virtue, but you’re free to use any other tool; etc.

                                                                                                    The type of thing that makes me lose my mind about Windows as a platform is trying to deliver anything to a customer in an end-to-end way. On Linux, you often end up writing code…and that’s about it. Each distribution will package and update your code in their own way. They might get it wrong, but they’ll try. On Windows, updating your program is your problem. Depending on how you count, there’s either zero or a bajillion systems for updating code, but you can’t assume your users are using any of them, so you end up having to write your own. Users want to have precompiled binaries, but then they’ll be greeted with a slew of scary warnings, unless your code is signed, so you have to deal with that as a code author. Other platforms will have a standardized package install model; on Windows, the user’s running some executable you provided, so implementing every conceivable setup configuration is on you (see the git installer). On other platforms a dist-upgrade will upgrade various things as a set; on Windows, you have to assume that the entire OS can move underneath your program and your program has to work. And you can’t expect users to help - how many users really know which version of Windows 10 they have? - so your program has to run on all of them.

                                                                                                  There’s just a kind of cognitive burden with every program having to independently reinvent every wheel. The solutions are well known, but it’s just so…painful.

                                                                                                  1. 7

                                                                                                    On Linux, you often end up writing code…and that’s about it. Each distribution will package and update your code in their own way.

                                                                                                      Only if your program is both open source and popular. The overwhelming majority of programs aren’t. Case in point: I spent hundreds of hours (spread over 4 years) writing a small, easy-to-use crypto library. I have users, some of whom even wrote language bindings. The only distribution packages I know of are for Void Linux, Arch Linux, and NetBSD. No Debian, no Red Hat, no Gentoo, and most of all, no Ubuntu.

                                                                                                      Not that it really matters. This is a single-file library we’re talking about, which you can easily bundle in your own source code. But I did go out of my way to have a bog-standard, easy-to-use makefile (with $PREFIX, $DESTDIR, $CC and all that jazz). Packaging it ought to be very easy. Yet no one stepped up for any of the major distributions out there.

                                                                                                    They might get it wrong, but they’ll try.

                                                                                                      The very fact that they might get it wrong suggests, in my opinion, that packaging itself may be a bad idea to begin with. Linus Torvalds goes out of his way never to “break users”. We should be able to take advantage of that, but it would require abandoning the very concept of a distribution, or at least specialising it.

                                                                                                      A distribution is mostly a glorified curated repository of software. Ideally a coherent whole, compiled, or even designed, to work together. The people managing it, the packagers, have made themselves responsible for the quality and security of that repository. Security, by the way, is the trump card they play in dynamic vs static linking debates: upstream devs can’t all be trusted to update their software fast enough, so when there’s a vulnerability in some library, we ought to be able to swap it out and instantly fix the problem for the whole distribution. Mostly though, it’s about making the life of packagers easier.

                                                                                                      Now I have no problem with curated repositories of software. What I have a problem with is the exclusivity. In most cases, there can be only one. One does not simply use Debian and Red Hat at the same time. They don’t just distribute software, they pervade the whole system, including the kernel itself, which they somehow need to patch. This effectively turns them into fenced gardens. It’s not as bad as Apple’s App Store - you can go over the fence - but it’s inconvenient at best.

                                                                                                      So: Linux distros won’t package my software, and when they do, they might get it wrong anyway. Which means that in practice, I’ll have half a dozen systems moving under my feet, and I can only hope that it will still work despite all those updates everywhere. Just like Windows, only worse. And just like on Windows, there’s only one solution: “Find your dependencies. Track them down, and eliminate them.”

                                                                                                      Ideally, we should only depend on the kernel. Maybe statically link everything, though if we’re short on space (??) we can lock those dependencies instead, like NPM or Cargo do at the source level, and Nix (I think? I haven’t checked) can do at the binary level. On the flip side, that means you need to handle stuff like installation and updates yourself, or have a library do it for you. Just like Windows. Problem is, it’s not even possible, because of how distributions insist on standing between users and developers.

                                                                                                    On Windows, updating your program is your problem.

                                                                                                    As it should be. You wrote that program, you should be responsible for its life cycle. Distribution maintainers really got screwed when they realised that a bad program may undermine the distribution’s reputation. Though we may not like the idea of each program having its own update code, that update code can be as small as 100KB, including the cryptographic code (modern crypto libraries can be really small).

                                                                                                    Users want to have precompiled binaries, but then they’ll be greeted with a slew of scary warnings, unless your code is signed, so you have to deal with that as a code author.

                                                                                                      That, however, is something Windows is doing very, very wrong. Especially since signing your binaries is not enough: they decide whether your reputation warrants a warning anyway. This practice turns Microsoft into one giant middle man. They go as far as staking their reputation on the list of trusted authors and programs. While it does result in fewer users getting viruses, it also acts as yet another centralisation force, yet another way for huge entities and corporations to have an edge over the little folk. (An even more blatant example is how big email providers handle spam.)

                                                                                                    This is one of the few places where the solution is to tell everyone to “git gud”. That means teaching. People have to know how computers work. Not just how to use Microsoft® Word®, but the fundamentals of computing, and (among other things) what you can expect when you execute a random program from some shady web site. We don’t have to teach them programming, but at least let them try Human Resource Machine. Only then will it be safe to stop treating users like children. Heck, maybe they’ll even start to demand a better way.


                                                                                                    There is one thing for which a coherent curated repository of software is extremely useful: development environments. Developers generally need a comprehensive set of tools that work well together: at the very least a compiler, editor, version control, dependency management, and the actual dependencies of the program. It’s okay if things break a little because of version incompatibility. I can always update or fix the program I’m writing.

                                                                                                      Less technical end users however need more stability. When a program works, it’d better still work even when the system moves under its feet. The OS ought to provide a stable and sufficient API (ABI, really) upon which everyone can rely.

                                                                                                    1. 4

                                                                                                      On Windows, updating your program is your problem.

                                                                                                      As it should be. You wrote that program, you should be responsible for its life cycle.

                                                                                                      The complaint here is it sucks for everyone to be reimplementing auto updates, possibly with bugs. I believe that my gaming PC is right now running buggy and wasteful auto update checkers from a half dozen different vendors, all of whom wasted money on these things which provide negative value.

                                                                                                      Whereas, uploading a new version to an app store or apt/rpm/etc repo is much nicer in this regard: users’ machines already have the mechanism to update software from those, often automatically.

                                                                                                      1. 1

                                                                                                        There are libraries for such things. Some of them could be provided by the OS vendor. I’m just not sure they should be part of the OS itself: it would add to what the OS must keep stable.

                                                                                                        Stability at the OS level is easier to achieve if said OS is minimal: just schedule programs & talk to the hardware. If programs can access the network, there is no need to provide an update mechanism on top. A standard, recommended library however, would be very nice.

                                                                                                    2. 1

                                                                                                      The type of thing that makes me lose my mind about Windows as a platform is trying to deliver anything to a customer in an end-to-end way. On Linux, you often end up writing code…and that’s about it. Each distribution will package and update your code in their own way. They might get it wrong, but they’ll try. On Windows, updating your program is your problem. Depending on how you count, there’s either zero or a bajillion systems for updating code, but you can’t assume your users are using any of them, so you end up having to write your own. […] And you can’t expect users to help - how many users really know which version of Windows 10 they have? - so your program has to run on all of them.

                                                                                                      There’s just a kind of cognitive burden with every program having to independently reinvent every wheel. The solutions are well known, but it’s just so…painful.

                                                                                                      Linux approaches to runnable-binaries-shipped-with-dependencies (Flatpak, snap, AppImage, …) do address some of these concerns, but I wonder if (or how) the proliferation of base images (echo "which version of Windows 10 they have?" | sed s/Windows/Fedora/) will change the amount of work that application developers have to put in to create fully working {flatpaks,snaps,appimages}.

                                                                                                      1. 5

                                                                                                        It’s not that we don’t have an equivalent to that on Windows. It’s that there are just too damn many options, and Microsoft changes its mind every couple of years on what they want to do, exactly.

                                                                                                        Ever since the giant mess that was DLL hell, Windows has had something called side-by-side assemblies, which allow conflicting versions of DLLs to be installed globally. Combined with its take on app bundles - strongly allowing and encouraging application vendors to just bundle all their DLLs alongside the application in the same directory - we end up in effectively the same place as Flatpak, albeit exploded instead of single files. So that’s “solved”.

                                                                                                        But that’s only the mechanism. When it comes to actually distributing your app, Microsoft loses its attention every five seconds. The Microsoft Store has been Microsoft’s answer for a while, but it only relatively recently (last couple of years?) gained the ability to handle non-UWP binaries. We’ve also had ClickOnce, which was Microsoft’s answer to Java WebStart, and which was again .NET-only. And now we’re getting winget, which is kinda Chocolatey and kinda the Microsoft Store and kinda its own thing, and so on.

                                                                                                        So it’s not the container bit that’s so hard, but rather getting your app mechanically distributed. That contrasts more with e.g. apt or rpm or the App Store (or maybe snap, since that is centralized) than with Flatpak.

                                                                                                        1. 4

                                                                                                          I think this is not quite fair to Microsoft. On Windows, there is a blessed store for all GUI programs: the Microsoft Store. If you don’t like the Microsoft Store, you can distribute over the internet; if you sign your builds, Windows will pop up a non-scary prompt before installing, and if you don’t, Windows will pop up a scary prompt.

                                                                                                          On Linux, you can also distribute GUI programs over the internet. But there’s no trusted signing built in for programs distributed this way, so it’s somewhat less secure. What about blessed stores? Good grief: first of all, many distros maintain their own, and patch your software without your consent, and in some cases refuse to distribute updates to your software (e.g. jwz’s XScreenSaver woes). But from a user’s perspective, perhaps that is ~okay — if you don’t mind out-of-date software. But for users, it gets worse! Where do you install from: the distro? Flatpak? Snaps? Sometimes you install one package from one place, and it immediately pops up an alert telling you to uninstall it and install from a different place. But there’s no consistency: it’s not like every package prefers one place or another. And they’re cross-listed, but often with radically different versions! You’re not even guaranteed Flatpak or Snaps are the most up to date: the app developers may have abandoned that distribution method and gone back to shipping binaries in the distro’s repo. Plus if you install from Flatpak or Snaps, which certain programs more-or-less demand, they interface poorly with the rest of your system by default because they bundle their own filesystem images (and in Snap’s case they start slowly as a result). It’s… not great.

                                                                                                          On macOS, for GUI programs you have two “options”: the Mac App Store, or the internet. If you choose “the internet,” macOS will refuse to run your program unless your users click a checkbox hidden in the main system Settings app. Even if they do, it will prompt them before installing, telling them that anything from the Internet is dangerous (ignoring any signing). Also, the Mac App Store is extremely limited and many programs are impossible to run in their sandboxing. As per usual, Apple’s basic message is that if you’re trying to make programs that run on Macs, and you’re not Apple, they reserve the right to make you miserable.

                                                                                                          For installing command-line binaries: on Windows you’d either use chocolatey (the old 3rd party package manager) or scoop (the new 3rd party package manager). MS realized that people like command-line package managers, so they’re building an officially-blessed one called winget that will presumably replace those. Winget is not yet released to the general public though.

                                                                                                          On Linux, generally you’d use your distro’s package manager. But since app developers can’t easily add or update packages in the repo, sometimes your distro does not have the package! Or, as usual, it has some ancient outdated version. Then if you are on Ubuntu maybe you can add the PPA, or if you are not maybe you can go fuck yourself (cough I mean build it from source).

                                                                                                          On macOS the situation is fairly similar to Windows currently: you can use MacPorts (the old 3rd party package manager), or Homebrew (the new 3rd party package manager). As usual Apple does not care that developers like command-line package managers and is not building a blessed one.

                                                                                                          1. 2

                                                                                                            When it comes to actually distributing your app, Microsoft loses its attention every five seconds. The Microsoft Store has been Microsoft’s answer for a while, but it only relatively recently (last couple of years?) gained the ability to handle non-UWP binaries.

                                                                                                            Agree with this. It looks like at the moment if you want to sell productivity software for Windows without having to operate your own storefront, the most stable option may actually be Steam? Sure it targets the wrong market segment, but at least it works reliably.

                                                                                                      1. 4

                                                                                                        While I agree with the ideas presented here, in particular the comments on IDEs (or as I like to call them, Interactive Computing Environments, to avoid confusion with “regular” IDEs), I do wonder why these ideas keep getting forgotten. We had Smalltalk, we had Lisp Machines and we have Unix shells, but the tendency always seems to go towards a rigid, cookie-cutter style of programming. I don’t like the idea that people are “too stupid” to understand or use it, and I don’t know how much of it is just that people got used to whatever reached the market first, no matter how annoying it is and how much time people spend fighting it. One component is certainly external (often proprietary) dependencies. Or is it education that de-prioritizes this kind of thinking?

                                                                                                        1. 8

                                                                                                          It’s the insistence on doing everything via text.

                                                                                                          1. 5

                                                                                                            There are two issues, in my opinion, both shaped by my own experience using Smalltalk and trying to teach it to others.

                                                                                                            The first is that you can’t get a flow like the one in this article without learning new tooling on top of the language, and that ends up being a big issue. If I know Emacs (or Visual Studio Code, or Vim, or any of the even vaguely extensible editors), I can use that same tool to handle basically every language, focusing just on the new language. To get a good flow in Smalltalk (or, I believe, a Lisp machine, but notably not contemporary Common Lisp or Schemes), you have to learn the IDE. In Smalltalk, this is especially bad, because the traditional dev flow effectively uses a source database instead of source files, so (until recently) you couldn’t even use things like diff or git.

                                                                                                            The second thing is that this kind of dev flow, in my experience, thrives when you’re doing something novel. Nowadays, most dev work I do is “just” assembling tons of existing libraries in familiar patterns. That’s not a problem, and I don’t think it’s laziness; it’s about predictability and repeatability, and I mostly view it as a sign that the industry is maturing. It lets me do much more with much less effort and much lower risk than doing everything bespoke. But it does mean that if, for example, I want to write a Smalltalk backend for a website in 2021, I’m going to have to write a bunch of stuff (e.g., OAuth connectors, AWS APIs, possibly DB drivers, etc.) that I’d get for free in virtually any other language, which in turn are new places things can go wrong, where I won’t be able to ask or pay someone else for support, and which likely don’t have anything to do with making my software sell. This applies pretty intense brakes to using novel environments even if you believe you’d be happier in one. This is basically the same as your point on external dependencies, but I think looking at it one step back from a repeatability and reliability perspective makes it more obvious why it’s such an issue.

                                                                                                            1. 7

                                                                                                              As someone who has dabbled in Common Lisp w/ SLIME, another limitation of that development style I have noticed is keeping track of state and making sure things are reproducible from the code, and not from some unreachable state you have arrived at by mutating things in the REPL. There is a similar issue with Jupyter notebooks.

                                                                                                              1. 4

                                                                                                                In Smalltalk, this is especially bad, because the traditional dev flow effectively uses a source database instead of source files, so (until recently) you couldn’t even use things like diff or git.

                                                                                                              While I certainly agree with the lamentations on using modern VCS tools (ten years ago, I spent four months writing a tool that could split a single multi-megabyte XML source-database configuration file into multiple files, for more atomic versioning and review, and combine those files for deployments), I feel like the file paradigm is one that advanced IDE users may be OK abstracting away. I use IntelliJ and other JetBrains products, and Eclipse before them, which have “search by symbol” features, generally used to search by a class, object, trait, interface, etc. name. There are some projects I’ve worked on where the only time I really have to care about files is when I identify the need to create a new package or module necessitating a new directory. Otherwise, my IDE handles the files almost entirely as an abstraction.

                                                                                                              This was difficult to wrap my head around, but because of my experience with Smalltalk in college, I understood it more quickly than my peers and it accelerated my development productivity a little. I’ll readily admit that I’m slower in file-based IDEs or text editors without some kind of fuzzy finder (I’ve been using Elementary Code in a VM for one project and dreadfully missing CtrlP or the like), but it is my preference to treat encapsulated code as an object instead of as a file. I think if more people preferred this, and had Smalltalk been more popular for other reasons, perhaps a solid VCS for source databases might have emerged; one that didn’t rely on disaggregating the database into the filesystem paradigm.

                                                                                                                1. 1

                                                                                                                I feel like the file paradigm is one that advanced IDE users may be OK abstracting away. I use IntelliJ and other JetBrains products, and Eclipse before them, which have “search by symbol” features, generally used to search by a class, object, trait, interface, etc. name. There are some projects I’ve worked on where the only time I really have to care about files is when I identify the need to create a new package or module necessitating a new directory.

                                                                                                                While it’s true that individual users might be OK with this, there are two factors to consider. One is that you operate with the knowledge that when your tools do stop working, you can always drop down a level to the “real” files to find out what’s actually going on. The second is that you can collaborate with others who use Vim and Emacs; your choice to use IntelliJ does not force your teammates to adopt the same tools.

                                                                                                                2. 2

                                                                                                                  I’m going to have to write a bunch of stuff (e.g., OAuth connectors, AWS APIs, possibly DB drivers, etc.) that I’d get for free in virtually any other language

                                                                                                                  Those seem to be largely available in Pharo via existing libraries e.g.:

                                                                                                                  1. http://forum.world.st/Zinc-amp-IdentityServer4-td5106594.html#a5106930
                                                                                                                  2. http://forum.world.st/AWS-SDK-for-Pharo-td5096080.html
                                                                                                                  3. http://forum.world.st/Databases-td5063151.html#a5063498
                                                                                                                  1. 1

                                                                                                                    Here’s a quick reality check, using two examples that have come up in my own work:

                                                                                                                    1. Does PayPal have an official SDK for Smalltalk?

                                                                                                                    2. Is there a Smalltalk version of the AWS Encryption SDK?

                                                                                                                    Spoiler: The answer to both is no.

                                                                                                                    1. 4

                                                                                                                      I don’t think that’s the right question. The right question is whether these things have an SDK that is easy to use from Smalltalk. Unfortunately the answer is still ‘no’. Even in Smalltalks that have a way of calling other languages, the integration is usually painful because the Smalltalk image abstraction doesn’t play nicely with the idea that some state exists outside of the image.

                                                                                                                3. 4

                                                                                                                  We had Smalltalk, we had Lisp Machines and we have Unix shells

                                                                                                                  One of these is not like the others.

                                                                                                                  PowerShell is closer due to being object-based, but it’s still very clunky.

                                                                                                                  1. 3

                                                                                                                    I don’t think that being object-based is necessary – it just makes things cleaner and more efficient. Following this article, you do have a dialogue with the computer (even if it is rather simple), users can and do modify their environment (shell and PATH), and in the end, it is simple – perhaps too simple.

                                                                                                                    1. 1

                                                                                                                      I claim that being object-based is “necessary” in the sense that you’re meaningfully far away from the Smalltalk ideal if your system is built around text. Obviously, there’s a gradient, not a discrete transition, but being object-oriented is one of the major factors.

                                                                                                                      Additionally, Unix (shells) is dis-integrated, both in ideals and in implementation. Another major design decision of Lisp/Smalltalk is integration between components - something the Unix philosophy explicitly spurns.

                                                                                                                  2. 2

                                                                                                                    I think different tools are just good at different jobs. I don’t write in-the-large network services in Smalltalk just like I don’t write tax filing products in spreadsheets.

                                                                                                                    This is not to say that Smalltalk or spreadsheets are less – far from it! If I want to bang out a business projection I don’t reach for Rails or Haskell or Rust, I grab a spreadsheet. I think there are similarly many situations where something more Smalltalk-like is the ideal tool, but your day-job as a programmer in tech is not full of those situations and we haven’t given enough knowledge of what computers are capable of to those who would use computing as a tool for their own ends.

                                                                                                                  1. 21

                                                                                                                    This is something I try, over and over, to explain to people, and I’ve never, ever succeeded in doing it in print or a talk. I always get a response along the lines of, “oh yeah, I love TDD, that’s how I write [OCaml/Go/C#/whatever],” and that’s effectively the end of the conversation on their end: “neat, this guy likes Smalltalk because it has really good TDD”, is about all they hear, and the rest washes off like tears in rain.

                                                                                                                    “Experiencing Smalltalk” is a great title for an article like this because you really need to actually experience it, ideally using it yourself, to get it. Smalltalk the language is…fine. It gets a lot right, it gets a lot wrong, languages like Self and Slate have tried to improve it, but at any rate, it gets the job done with minimal fuss. People who just look at its syntax and semantics are right in 2021 that many other languages deliver the same or better.

                                                                                                                    But that misses the point. The thing that differentiates Smalltalk is its entire development flow, which is radically different from literally anything else I’ve ever used: write broken code, run it, repeatedly fix the code as you slowly walk through methods and whole classes that either didn’t work or didn’t even exist when you initiated the execution, and end up with a working spike that had its first successful run the second you’re done writing the last line of code. A very few languages, like Factor and Common Lisp, come very close, but as of 2021, Smalltalk is the only environment I’ve ever used that still delivers it.[1]

                                                                                                                    I don’t write Smalltalk anymore, and I don’t see that changing (mostly just because I’m old and have kids and spend what little time I do coding for fun on things like Factor), but the experience of developing in it remains absolutely without peer.

                                                                                                                    [1]: I’ve been told that the actual Lisp Machines of the 80s did have this same flow, but I’ve never used one–and I definitely don’t think SBCL in 2021 matches the dev flow of Pharo or Squeak Smalltalk.

                                                                                                                    1. 1

                                                                                                                      The thing that differentiates Smalltalk is its entire development flow, which is radically different from literally anything else I’ve ever used: write broken code, run it, repeatedly fix the code as you slowly walk through methods and whole classes that either didn’t work or didn’t even exist when you initiated the execution, and end up with a working spike that had its first successful run the second you’re done writing the last line of code.

                                                                                                                      This describes my experience writing Emacs pretty closely. However, I know that many people who know Emacs intimately still say that Smalltalk is different, so I have to conclude that there’s more to it, and that it’s just very difficult to describe what exactly the difference is in words. I expect it has to do with a more seamlessly integrated debugger that reifies the call stack and things. I suppose there’s only one way to really find out.

                                                                                                                    1. 3

                                                                                                                      Are there other examples of SQLite being used as a website backend database in production? What kind of scale could you reach with this approach? And what would be the limiting resource?

                                                                                                                      1. 10

                                                                                                                        Expensify was based exclusively on sqlite for a long time, then they created a whole distributed database thing on top of it.

                                                                                                                        1. 7

                                                                                                                          Clojars used SQLite for a good 10 years or so, only recently moving to Postgres for ease of redeployment and disaster recovery. The asset serving was just static files, but the website and deployments ran against SQLite pretty well.

                                                                                                                          1. 3

                                                                                                                            If I remember correctly, the trouble that Clojars ran into had more to do with the quality of the JVM-based bindings to SQLite than with SQLite itself, at least during the portion of time that I was involved with the project.

                                                                                                                            1. 2

                                                                                                                              Yeah, looking back at the issues, “pretty well” is maybe a little bit generous. There were definitely settings available later on which would have helped the issues we were facing around locking.
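
                                                                                                                              For anyone hitting this today: the settings in question were presumably things like WAL mode and a busy timeout, which let readers and a writer coexist instead of failing fast on locks. A rough sketch - assuming the xerial sqlite-jdbc driver and an illustrative database name, not a record of what Clojars actually ran:

                                                                                                                                  import java.sql.Connection;
                                                                                                                                  import java.sql.DriverManager;
                                                                                                                                  import java.sql.Statement;

                                                                                                                                  class SqlitePragmas {
                                                                                                                                      public static void main(String[] args) throws Exception {
                                                                                                                                          // "clojars.db" is illustrative; any SQLite database file works here.
                                                                                                                                          try (Connection conn = DriverManager.getConnection("jdbc:sqlite:clojars.db");
                                                                                                                                               Statement st = conn.createStatement()) {
                                                                                                                                              st.execute("PRAGMA journal_mode=WAL;");  // readers and the writer stop blocking each other
                                                                                                                                              st.execute("PRAGMA busy_timeout=5000;"); // wait up to 5s for a lock instead of erroring immediately
                                                                                                                                          }
                                                                                                                                      }
                                                                                                                                  }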

                                                                                                                          2. 4

                                                                                                                            I can’t remember which, but at least one of the well-funded DynamoDB-style distributed database products from the mid-2010s used it as the storage layer.

                                                                                                                            So all the novel stuff that was being done with data was the communication and synchronisation over the network, and then for persistence on individual nodes they used sqlite instead of reinventing the wheel.

                                                                                                                            1. 6

                                                                                                                              That was FoundationDB, purchased by Apple in 2015, then gutted, and then returned as open source in 2018. I’m a bit annoyed, because it was headed toward being CockroachDB half a decade earlier, and was taken off the market with very little warning.

                                                                                                                              1. 1

                                                                                                                                Thanks!

                                                                                                                            2. 3

                                                                                                                              You will probably get really fast performance for read-only operations. The overhead of a client/server setup and the network stack can be more than 10x that of function calls within the same address space. The only real limitation is the single server, since you can’t really scale sqlite efficiently beyond a single system. But by the time you reach that scale, you usually need much more than sqlite.
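
                                                                                                                              If you want to feel that gap, here’s a toy illustration - a sketch, not a rigorous benchmark, and the numbers vary wildly by machine - that times a plain method call against a loopback TCP round-trip, which is a floor for what any client/server database adds:

                                                                                                                                  import java.io.IOException;
                                                                                                                                  import java.io.InputStream;
                                                                                                                                  import java.io.OutputStream;
                                                                                                                                  import java.net.ServerSocket;
                                                                                                                                  import java.net.Socket;

                                                                                                                                  class CallVsRoundTrip {
                                                                                                                                      static int work(int x) { return x + 1; }

                                                                                                                                      public static void main(String[] args) throws Exception {
                                                                                                                                          // In-process "query": a direct method call, averaged over many runs.
                                                                                                                                          long t0 = System.nanoTime();
                                                                                                                                          int acc = 0;
                                                                                                                                          for (int i = 0; i < 1_000_000; i++) acc = work(acc);
                                                                                                                                          long nsPerCall = (System.nanoTime() - t0) / 1_000_000;

                                                                                                                                          // Client/server "query": one byte each way over the loopback interface.
                                                                                                                                          try (ServerSocket server = new ServerSocket(0)) {
                                                                                                                                              Thread echo = new Thread(() -> {
                                                                                                                                                  try (Socket s = server.accept()) {
                                                                                                                                                      int b;
                                                                                                                                                      while ((b = s.getInputStream().read()) != -1) s.getOutputStream().write(b);
                                                                                                                                                  } catch (IOException ignored) {}
                                                                                                                                              });
                                                                                                                                              echo.start();
                                                                                                                                              try (Socket client = new Socket("127.0.0.1", server.getLocalPort())) {
                                                                                                                                                  OutputStream out = client.getOutputStream();
                                                                                                                                                  InputStream in = client.getInputStream();
                                                                                                                                                  long t1 = System.nanoTime();
                                                                                                                                                  for (int i = 0; i < 10_000; i++) { out.write(1); in.read(); }
                                                                                                                                                  long nsPerTrip = (System.nanoTime() - t1) / 10_000;
                                                                                                                                                  System.out.printf("call: ~%d ns, round-trip: ~%d ns (acc=%d)%n",
                                                                                                                                                          nsPerCall, nsPerTrip, acc);
                                                                                                                                              }
                                                                                                                                              echo.join();
                                                                                                                                          }
                                                                                                                                      }
                                                                                                                                  }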

                                                                                                                              1. 3

                                                                                                                                The sqlite website claims to run entirely on sqlite.

                                                                                                                                They also have this page, though most of those aren’t websites: https://sqlite.com/mostdeployed.html

                                                                                                                              1. -3

                                                                                                                                It seems to be a common theme of prog-lang-started-by-child-prodigy projects that they adopt features where I simply can’t fathom how they are going to be maintained and developed in the mid-to-long term.

                                                                                                                                Am I the only one who is concerned by the complexity these party-trick features seem to involve?

                                                                                                                                (The other option is that this stuff is really that easy and all the hundreds of full-time C/C++ compiler engineers are just idiots for not doing it.)

                                                                                                                                1. 33

                                                                                                                                  There are more details on why and how this works here: zig cc: a Powerful Drop-In Replacement for GCC/Clang

                                                                                                                                  The other full time C/C++ compiler engineers are not idiots; they just have different goals since they work for companies trying to turn a profit by exploiting open source software rather than working for a non-profit just trying to make things nice for everyone.

                                                                                                                                  1. 6

                                                                                                                                    The other full time C/C++ compiler engineers are not idiots; they just have different goals since they work for companies trying to turn a profit by exploiting open source software rather than working for a non-profit just trying to make things nice for everyone.

                                                                                                                                    This feels like a big statement, and that’s fine, but would you mind elaborating? Which companies do you mean? What goals do they have that are incompatible with something like zig cc?

                                                                                                                                    1. 5

                                                                                                                                      I think the point there was just that e.g. Apple has no particular interest in making using clang to cross-compile Windows binaries easy. They wouldn’t necessarily be against it, but it’s not something that aligns with their business interests whatsoever, so they’re very unlikely to spend any money on it. (Microsoft actually does value cross-compilation very highly, and has been doing some stuff in that area with clang, and so is almost a counterexample. But even there, they focus on cross-compilation in the context of Visual Studio, in which case, improving the CLI UI of clang again does not actually do anything for them.)

                                                                                                                                  2. 40

                                                                                                                                    Am I the only one who is concerned by the complexity these party-trick features seem to involve?

                                                                                                                                    (The other option is that this stuff is really that easy and all the hundreds of full-time C/C++ compiler engineers are just idiots for not doing it.)

                                                                                                                                    This mindset is one of the major reasons why modern software sucks so much. The number of tools that could be improved is humongous, and this learned helplessness is why we keep getting +N-layer solutions to problems that would require re-thinking the existing toolchains.

                                                                                                                                    I encourage you to read the Handmade Manifesto and to dive deeper into how Zig works. Maybe you’re right, maybe this is a party trick, but the reality is that you don’t know (otherwise you would take issue with specific approaches Zig employs) and you’re just choosing the safe approach of reinforcing your understanding of the status quo.

                                                                                                                                    Yes, there are a lot of snake oil sellers out there, but software is not a solved problem and blanket statements like this one are frankly not helping anybody.

                                                                                                                                    1. 1

                                                                                                                                      I think you are wrong and the exact opposite is the case:

                                                                                                                                      We can’t have nice things because people don’t learn from their predecessors.

                                                                                                                                      Instead they go out to reinvent flashy new stuff and make grandiose claims until it turns out they ignored the inconvenient last 20% of work that would make their new code reliable and complete – oh, and their stuff takes 200% more resources for no good reason.

                                                                                                                                      So yeah, if people don’t want to be suspected of selling snake oil, then they need to be straightforward and transparent, instead of writing these self-congratulatory blog articles.

                                                                                                                                      Build trust by telling me what doesn’t work, and what will never work.

                                                                                                                                      1. 17

                                                                                                                                        Here’s what doesn’t work: https://github.com/ziglang/zig/labels/zig%20cc

                                                                                                                                    2. 7

                                                                                                                                      Clang could provide the same trivial cross compilation if it were a priority. Zig is mostly just using existing clang/llvm features and packaging them up in a way that is easier for the end user.
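
                                                                                                                                      To make that concrete, here is roughly how much the user has to assemble themselves in each case. A sketch only; the target triples are examples and the sysroot path is made up:

                                                                                                                                          # zig cc: libc, headers, and linker ship with the compiler;
                                                                                                                                          # one flag picks the target
                                                                                                                                          zig cc -target aarch64-linux-musl hello.c -o hello

                                                                                                                                          # clang: the driver can also target aarch64, but you must supply
                                                                                                                                          # your own cross libc/headers and point at them explicitly
                                                                                                                                          clang --target=aarch64-linux-musl --sysroot=/opt/aarch64-sysroot \
                                                                                                                                                -fuse-ld=lld hello.c -o hello

                                                                                                                                      Same compiler machinery underneath; the packaging is the difference.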

                                                                                                                                      1. 21

                                                                                                                                        “just”

                                                                                                                                        1. 4

                                                                                                                                          Perhaps not obvious, but I meant the “just” to be restricted to “mostly just using existing clang/llvm features”. I’m in no way denigrating Andrew’s efforts or the value of good UX.

                                                                                                                                      2. 5

                                                                                                                                        Another option is that it’s easy if you build it in at the start and much more difficult to add later. It’s like the Python 2 to 3 migration: it wasn’t worth it for some projects, but creating a new Python 3 project is easy. Path dependence is a thing.

                                                                                                                                        1. 2

                                                                                                                                          I think the hard part is adding these kinds of features after the fact. But assuming it’s already in place, I feel like this is actually not a very hard thing to maintain?

                                                                                                                                          I think a lot of the complexity in existing tools comes from “oh, we’re going to make this global/implicit” permeating everywhere, so that when you later want to parameterize it you have to play a bunch of tricks or rewrite everything in the stack to get it to work.

                                                                                                                                          But if you get it right out of the gate, so to speak, or do the legwork with some of the dependencies… then it might just become a parameter passed around at the top level (and the lower levels already had logic to handle it, so they don’t actually change that much).

                                                                                                                                          Case in point: if your translation framework relies on a global, the low-level code reads that value and does a lookup, and the high-level code never handles it. If you parameterize it instead, the high-level code now has to pass a bunch of translation state around, but the low-level code (the hard part, so to speak) stays basically the same. At least in theory.
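
                                                                                                                                          A minimal sketch of what I mean, in C, with a made-up translation API (all names here are hypothetical):

                                                                                                                                              #include <stdio.h>

                                                                                                                                              /* The "low level": a lookup keyed by locale. Stubbed out here;
                                                                                                                                               * real code would consult a table. */
                                                                                                                                              static const char *lookup(const char *locale, const char *key) {
                                                                                                                                                  (void)locale;
                                                                                                                                                  return key;
                                                                                                                                              }

                                                                                                                                              /* Global/implicit style: callers never mention the locale. */
                                                                                                                                              static const char *g_locale = "en";
                                                                                                                                              static const char *tr_global(const char *key) {
                                                                                                                                                  return lookup(g_locale, key);
                                                                                                                                              }

                                                                                                                                              /* Parameterized style: the high level threads the state through,
                                                                                                                                               * but lookup() itself is unchanged. */
                                                                                                                                              static const char *tr_param(const char *locale, const char *key) {
                                                                                                                                                  return lookup(locale, key);
                                                                                                                                              }

                                                                                                                                              int main(void) {
                                                                                                                                                  printf("%s\n", tr_global("greeting"));
                                                                                                                                                  printf("%s\n", tr_param("fr", "greeting"));
                                                                                                                                                  return 0;
                                                                                                                                              }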

                                                                                                                                          I do kinda share your skepticism about the whole “let’s rewrite LLVM” thing… but cross-compilation? Having a build system that is “just zig code” instead of some separate config language? These seem good, and almost simpler to maintain. I don’t think C compiler engineers are idiots for not doing X; they’re just… less incentivised to do it, since CMake isn’t a problem for someone who has spent years using it.

                                                                                                                                          1. 2

                                                                                                                                            I agree with you. This doesn’t make any sense for Zig to take on. Andrew shared it with me as he was working on it and I thought the same thing then: what? Why does a compiler for one language go to this much trouble to integrate a toolchain for another language? Besides being severely out of scope, the problem space is fraught with pitfalls, for example with managing sysroots and dependencies, maintaining patched forks of libcs, etc. What a huge time sink for a group who should ostensibly have their hands full with, you know, inventing an entire new programming language.

                                                                                                                                            The idea of making cross-compilation easier in C and C++ is quite meritorious. See Plan 9 for how this was done well back in the aughts. The idea that it should live in the zig build tool, however, is preposterous, and speaks rather ill of the language and its maintainers’ priorities. To invoke big corporate compiler engineers killing open source as the motivation is… what the fuck?

                                                                                                                                            Sorry Andrew. We don’t always see eye to eye, but this one is particularly egregious.

                                                                                                                                            1. 7

                                                                                                                                              No, this makes a lot of sense. Going back to the article, Go’s toolchain (like Plan 9’s) is good at cross-compilation, but “I recommend, if you need cgo, to compile natively”. This sort-of works for Go because cgo use is low. But Zig wants to encourage C interoperability. Zig’s toolchain being good at cross-compilation is then useless without solving C’s cross-compilation, because most of Zig would fail to cross-compile due to a C dependency somewhere. By the way, most of Rust fails to cross-compile for exactly that reason. This is a real problem.

                                                                                                                                              Once you’ve solved the problem, it is just good etiquette to expose it as a CLI, aka zig cc, so that others can use it. The article gives an example of Go using it, and mentions Rust using it in passing.
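
                                                                                                                                              Concretely, the Go integration from the article amounts to swapping in zig cc as the compiler that cgo invokes; something like this (the triple is just an example):

                                                                                                                                                  CC="zig cc -target aarch64-linux-musl" \
                                                                                                                                                  CGO_ENABLED=1 GOOS=linux GOARCH=arm64 \
                                                                                                                                                  go build ./...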

                                                                                                                                              I mean, yes, zig cc should be a separate project collaboratively maintained by Go, Rust, and Zig developers. Humanity is bad at coordination. Big companies are part of that problem. Do you disagree?

                                                                                                                                              1. 2

                                                                                                                                                The best way, in my opinion, to achieve good C interop is by leveraging the tools of the C ecosystem correctly. Use the system linker, identify dependencies with pkg-config, link to system libraries, and so on. Be prepared to use sysroots for cross-compiling, and unafraid to meet the system where it’s at to do so. Pulling the concerns of the system into zig - libc, the C toolchain, statically building and linking to dependencies - is pulling a lot of scope into zig which really has no right to be there. Is the state of the art for cross-compiling C programs any good? Well, no, not really. But that doesn’t mean that those problems can jump domains into Zig’s scope.
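
                                                                                                                                                For illustration, that conventional approach to a cross build looks something like this (the paths and the zlib dependency are made up for the example):

                                                                                                                                                    # Cross-compile against a prebuilt aarch64 sysroot, resolving
                                                                                                                                                    # dependencies with pkg-config instead of bundling them.
                                                                                                                                                    SYSROOT=/opt/sysroots/aarch64-linux-gnu
                                                                                                                                                    export PKG_CONFIG_SYSROOT_DIR="$SYSROOT"
                                                                                                                                                    export PKG_CONFIG_LIBDIR="$SYSROOT/usr/lib/pkgconfig"

                                                                                                                                                    clang --target=aarch64-linux-gnu --sysroot="$SYSROOT" \
                                                                                                                                                          main.c $(pkg-config --cflags --libs zlib) -o main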

                                                                                                                                                I am a believer that your dependency’s problems are your problems. But that definitely doesn’t mean that the solution should be implemented in your domain. If you don’t like the C ecosystem’s approach to cross-compiling, and you want to interoperate with the C ecosystem, the correct solution involves going to the C ecosystem and improving it there, not pulling the responsibilities of the C ecosystem into your domain.

                                                                                                                                                Yes, other languages - Go, Rust, etc. - should also be interested in this effort, and should work together. And yes, humanity is bad at cooperation, and yes, companies are part of that problem - but applying that here doesn’t make sense. It’s as if I blamed poaching for contributing to mass extinction, and climate change for also contributing to mass extinction, and large corporations for contributing to climate change, and then concluded that large corporations are responsible for poaching.

                                                                                                                                                1. 3

                                                                                                                                                  There’s another path to achieving C interop: use whatever feels more convenient while staying true to the relevant ABI boundaries. Zig achieves this in a few ways. It uses its own linker (currently LLD), which is useful when you don’t have a system linker (a pure Linux/Windows install) and still works with existing C code out there. It uses paths for dependencies, leaving it up to the user to specify how they’re found (e.g. pkg-config). It links to system libraries only if told to explicitly, but still works without them - this is also useful when building statically linked binaries that still work with existing C code.

                                                                                                                                                  For cross-compiling, the sysroot is a GCC-style concept; it isn’t required in other environments like clang (the C compiler Zig uses) or the defaults on Mac/Windows. Zig instead uses LLVM to emit machine code for any supported target (something that requires building a separate compiler per target with GCC), bundles the needed build environment (lib files on Windows, a static libc on Linux if specified, nothing if dynamically linking), and finally links everything into the appropriate output using LLD’s cross-linking ability.
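
                                                                                                                                                  For instance (a sketch; the triple is just an example), picking a musl target makes Zig compile its bundled libc for that target and statically link it, with LLD doing the cross-link, and there is no sysroot to prepare:

                                                                                                                                                      zig cc -target x86_64-linux-musl main.c -o main
                                                                                                                                                      file main   # expect: statically linked x86-64 ELF executable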

                                                                                                                                                  Having this all work seamlessly from whatever supported system is what makes it appealing. For example, Andrew (creator of Zig) has showcased cross-compiling the compiler on an x86 machine for aarch64, then using qemu to cross-compile the compiler again from the aarch64 VM back to x86, and it works. The same applies across operating systems, a feature that isn’t present in current cross-compiling tools, even clang.

                                                                                                                                                  For the issue of problem domains, this is not something you could address by trying to fix the existing C tools. Those already have a defined structure and, as Andrew noted above, have different goals and are unlikely to change. That may be why Zig takes on solving these problems locally, pulling in responsibility only for what it wishes to provide, not for the entire C ecosystem. I believe it’s partially for similar reasons that Go has its own build system while also claiming to compile for different environments.

                                                                                                                                                  I also agree that different ecosystems could pitch in on what seems to be a universally helpful tool, but given how things have gone so far, maybe they have different design goals, and another path, such as leaning on the existing C ecosystem (for various situational reasons), makes more sense to them than the idealistic one Zig has chosen to shoulder.

                                                                                                                                                  1. 1

                                                                                                                                                    It links to system libraries only if told explicitly but still works without them - this is also useful when building statically linked binaries which still work with existing C code.

                                                                                                                                                    System libraries can also be static libraries, and there are lots of reasons to link to them instead. We do build statically linked programs without the Zig tools, you know!

                                                                                                                                                    For cross-compiling, sysroot is a GCC concept. This doesn’t apply to other environments like clang

                                                                                                                                                    Clang definitely uses sysroots. Where does it find the static libs you were referring to? Or their headers? The answer is in a sysroot. Zig may manage the sysroot, but it’s a sysroot all the same.

                                                                                                                                                    There’s more to take apart here, but on the whole this is a pretty bad take which seems to come from a lack of understanding of how Linux distributions (and other Unices, save perhaps for macOS) work. That ignorance also, I think, drove the design of this tool in the first place, and imbued it with frustrating limitations that are nigh unsolvable as a consequence of that design.

                                                                                                                                                    1. 3

                                                                                                                                                      The point about explicitly provided system libraries is not about dynamic vs. static linking, it’s about linking them at all. Even if you have the option to statically link libc, you may not want to, given that you can sometimes do its job better for your use case on platforms that don’t require it (e.g. Linux). The closest alternative in C land seems to be -ffreestanding (correct me if I’m wrong)? This is also an option in Zig, which additionally lets you compile for platforms without linking any of the normal platform libraries.
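
                                                                                                                                                      For comparison, the closest C-land spelling of “no platform libraries” that I know of is something like this (flags illustrative; kernel.c is a hypothetical freestanding source file):

                                                                                                                                                          # clang: compile freestanding C, assuming no hosted libc
                                                                                                                                                          clang --target=x86_64-unknown-none -ffreestanding -c kernel.c

                                                                                                                                                          # zig: "freestanding" is part of the target triple itself
                                                                                                                                                          zig cc -target x86_64-freestanding -c kernel.c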

                                                                                                                                                      Clang has the option to use sysroots, but it doesn’t seem to be required. In Zig’s case, it uses whatever static libs you need by having you explicitly link to them, rather than assuming they exist in a given folder structure. Zig does at least provide some methods of finding where they are on the system if you don’t know where they reside, given the different configurations out there. I’d say this differs from a sysroot in that it’s more modular than a single “system library directory”.

                                                                                                                                                      Without a proper explanation, the claims that this approach “stems from lack of understanding” or has “frustrating limitations which are nigh-unsolvable” don’t make much sense. As we’re both guilty of prejudice here, I’d characterize your response as willful ignorance of unfamiliar systems, and gate-keeping.

                                                                                                                                                      1. 1

                                                                                                                                                        Clang has the option to use sysroots, but it doesn’t seem to be required.

                                                                                                                                                        Your link does not support your statement. I don’t think you understand how cross-compiling or sysroots actually work.

                                                                                                                                                        Again, it’s the same with the rest of your comments. There are basic errors throughout. You have a deep ignorance or misunderstanding of how the C toolchain, linking, and Unix distributions work in practice.

                                                                                                                                                        1. 4

                                                                                                                                                          Given you haven’t actually rebutted any of my claims yet, nor looked into how clang supports using sysroots, we probably won’t be getting anywhere with this. Hope you’re able to do more than troll in the future.

                                                                                                                                                          1. 1

                                                                                                                                                            Clang totally uses sysroot, see here. (A long time ago, I wrote the beginning of Clang’s driver code.) I don’t know where to begin, but in fact ddevault is correct about all the technical points, and you really are demonstrating your ignorance. Please think about it.

                                                                                                                                                            1. 3

                                                                                                                                                              Please re-read my post above, which literally says “clang supports using sysroots”, a claim that agrees with yours. My original point a few messages back was that clang doesn’t need a sysroot in order to cross-compile, which still stands to be disproved, as a sysroot is just shorthand for a bunch of include and library paths.

                                                                                                                                                              Once again, just like ddevault, you enjoy making claims about others without specifying why, in an attempt to prove some point or boost your ego. Either way, if this is your mindset, there’s no point in further discussion with you either.

                                                                                                                                            2. 1

                                                                                                                                              What features in this case?

                                                                                                                                            1. 11

                                                                                                                                              The bit about copyright violations is particularly bad, too.

                                                                                                                                                1. 17

                                                                                                                                                  That seems less an “other side” and more “so what?”, especially in his response to jwz’s response, but it’s indeed interesting to have more context.

                                                                                                                                                  1. 6

                                                                                                                                                    Interesting, but I’m inclined to be on jwz’s side

                                                                                                                                                    1. 6

                                                                                                                                                      “Your security arguments turned out to be incorrect. So, stop?” Did they though? Did they REALLY?

                                                                                                                                                  1. 7

                                                                                                                                                    I really want to try kakoune, but the idea of starting over with a new editor and editing paradigm just seems like so much effort and time before I’m productive.

                                                                                                                                                    1. 3

                                                                                                                                                      I feel the same way. I use NeoVim and have tried to keep the config as stock as possible, but I think I’ve already tweaked it enough to be meaningfully different. So learning Kakoune would cut against both vim-everywhere muscle memory and my customizations.

                                                                                                                                                      1. 3

                                                                                                                                                        If you like the vi/vim experience but want some features similar to Kakoune’s, then vis might be worth a shot (also see its differences from Kakoune).
                                                                                                                                                        I use it as my main editor, and structural regular expressions, multi-cursor editing, etc. are all quite intuitive while not leaving the traditional vi-like modal editing world, IMO.

                                                                                                                                                        Plugins are also written in Lua, if that’s your thing.

                                                                                                                                                      2. 2

                                                                                                                                                        YMMV of course, but it only took ~2 weeks after switching from vim for me to become reasonably productive in kakoune.

                                                                                                                                                        1. 3

                                                                                                                                                          What was the biggest hurdle for you when acclimating to Kakoune?

                                                                                                                                                          1. 3

                                                                                                                                                            Not OP, but as someone else who went from Vim to Kakoune, I think the biggest shift for me was thinking in terms of repeatedly narrowing the selection and then doing one single command on the selection, rather than doing a command sequence and e.g. assigning to a macro or the like. The better I got at selection narrowing, the easier and more natural everything felt. Learning slightly different keystrokes was comparatively very easy.
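
                                                                                                                                                            A tiny example of that narrowing flow, as I understand Kakoune’s defaults:

                                                                                                                                                                %             select the whole buffer
                                                                                                                                                                s TODO <ret>  narrow to one selection (and cursor) per match of "TODO"
                                                                                                                                                                c FIXME <esc> change every match at once

                                                                                                                                                            Each step keeps the intermediate selections visible, which is what makes the narrowing style click.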

                                                                                                                                                            1. 3

                                                                                                                                                              On day 1 it was 100% unlearning vim muscle memory. After that, my biggest challenge was adapting to Kakoune’s selection-first editing model, which is what inspired me to switch in the first place. It was very worth it, though: the incremental nature of the editing, in which intermediate results are instantly visible, makes complex tasks much more intuitive.

                                                                                                                                                            2. 1

                                                                                                                                                              I’m coming from emacs, which is probably going to be worse, but even two weeks sounds like an enormous amount of time to not be able to code. I can’t justify taking more than a day to switch at work, so I’d have to use both, too.

                                                                                                                                                              1. 2

                                                                                                                                                                It’s not that I wasn’t able to code at all, but that I was significantly slower than I was with vim. I quickly gained speed over the first week, though, and after ~2 weeks I didn’t feel like my inexperience with the editor was holding me back for basic editing tasks. More advanced editing tasks weren’t intolerably slow either; they just took a bit more thought than they do now.

                                                                                                                                                          1. 14

                                                                                                                                                            Tired of Terminal UIs based on ncurses or blessed? With Fig, you can use modern web technologies like HTML, CSS and JavaScript instead.

                                                                                                                                                            I suppose I’m not in the target audience as I really don’t see using web technologies as a feature over TUIs.

                                                                                                                                                            1. 5

                                                                                                                                                              Eh, the idea seems brilliant to me, honestly; there are a few tools I just don’t use often enough to fully remember their CLIs, so having an ad hoc, simple GUI for those would be a huge boon, letting me stick with the CLI tool, but not (necessarily) have to read the man pages each time. Having that outside the terminal so I can see the command line being built also makes sense. But I’m with you that full-blown HTML for the UI seems a bit heavy to me.

                                                                                                                                                              1. 1

                                                                                                                                                                When I saw it, I thought of a co-worker who’s wondered about how to span the gulf between scripts/utilities we can readily write and run, and utilities that non-programmers doing video production can handle.