1. 2

    looking forward to FOSDEM 2023

    1. 4

      Related, an open challenge to break an implementation of Viewstamped Replication, with bounties available for those who succeed: https://tigerbeetle.webflow.io/20k-challenge

      1. 23

        The lobste.rs title is truncated from the actual title of

        Relative paths are faster than absolute paths on Linux (and other jokes I tell myself)

        Which has a different meaning than

        Relative paths are faster than absolute paths on Linux

        But also, both titles are unrelated to the contents of the blog post, which is about the semantics of the openat flavor of syscalls and not in any way about performance.

        1. 4

          I had hoped that the last few paragraphs would explain the title, because performance was the reason I’ve been looking into this.

          1. 4

            It also has very little to do with Rust. The examples could be written in any language with the same result.

          1. 3

            Cleanin’ up the house so the new kitty cat doesn’t eat anything dangerous

            1. 4

              TIL in C++, enum class is the same thing as enum struct. Source: https://en.cppreference.com/w/cpp/language/enum

              1. 8

                I’m looking forward to getting Zig to bootstrap with both cproc and chibicc. Of course this will only work when creating a build that does not have LLVM extensions enabled, since LLVM is written in C++.

                The process will be: zig0 is source code written in C. zig0 inputs .zig code and compiles it into .c code. This will build the entire stage1 zig compiler into a .c file, which will then get compiled and linked with the same thing that compiled and linked zig0 in the first place. Next, this new binary, zig1, will recompile itself, outputting a binary instead of .c code. The resulting binary is the zig compiler, fully bootstrapped. At this point, recompiling itself again should produce an identical output.

                1. 4

                  Thanks for spending the time to make this an option.

                1. 4

                  I posted this partially because of the new Unicode version, and partially as an answer to people who ask why Zig doesn’t have a built-in Unicode string type.

                  My argument is that if you want to support Unicode, you have to do so knowingly.
                  No built-in type can exempt you from that.

                  1. 2

                    My argument is that if you want to support Unicode, you have to do so knowingly. No built-in type can exempt you from that.

                    From this comment, it sounds like Swift successfully exempts developers from thinking about Unicode – if they work on non-performance-sensitive programs. Swift’s abstraction over Unicode strings could lead to unexpectedly slow operations on certain strings, so I understand why Zig wouldn’t want that.

                    To avoid the impression that Zig doesn’t support Unicode at all, I’ll note that though the Zig language doesn’t have a Unicode type, the Zig standard library has a std.unicode struct with functions that perform Unicode operations on arrays of bytes.

                    Do you know if there are any plans to update std.unicode given the issues raised by the author of Ziglyph in this comment – that graphemes would be a better base unit than codepoints? I only just started trying to write Unicode-aware code in Zig, but after reading about the available libraries, I wish for Zigstr or something like it to replace std.unicode in the standard library. Otherwise, I worry about developers finding std.unicode in the standard library, using it to read strings a codepoint at a time, and thinking they’ve handled everything they need to.
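
                    To make the concern concrete, here is roughly what codepoint-at-a-time iteration with std.unicode looks like (a minimal sketch against the current std API, not from the linked comments; note how the combining accent comes out as its own codepoint, which is exactly the grapheme issue raised above):

                    ```zig
                    const std = @import("std");

                    pub fn main() !void {
                        // "e" followed by U+0301 COMBINING ACUTE ACCENT: two codepoints, one grapheme.
                        const s = "he\u{0301}llo";

                        const view = try std.unicode.Utf8View.init(s);
                        var it = view.iterator();
                        while (it.nextCodepoint()) |cp| {
                            // Each codepoint is reported separately, so the accent shows up on its own,
                            // which is the trap for text that should be handled per grapheme cluster.
                            std.debug.print("U+{X:0>4}\n", .{cp});
                        }
                    }
                    ```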

                    The comments I linked to were left on an issue that was closed because no Zig language changes were needed. Would it be well-received if I opened a new issue about updating the Zig standard library as I described above?

                    1. 2

                      I think that follow-up comment isn’t quite correct. Swift’s strings cannot be indexed using a plain numeric index as in other languages. Instead, they are indexed using the String.Index type, which must be constructed from the String instance in question and advanced and manipulated using String index methods. All this ceremony makes it rather obvious that it’s not an O(1) operation.

                      1. 2

                        I added a comment to one of the threads you linked just now, and I will reproduce it here:

                        @jecolon thank you for your comments. Before tagging 1.0, I will be personally auditing std.unicode (and the rest of std) while inspecting ziglyph carefully for inspiration. If you’re available during that release cycle I would love to get you involved and work with you on achieving a reasonable std lib API.

                        In fact, if you wanted to make some sweeping, breaking changes to std.unicode right now, upstream, I would be amenable to that. The only limitation is that we won’t have access to the Unicode data for the std lib. If you want to make a case that we should add that as a dependency of zig std lib, I’m willing to hear that out, but for status quo, that is a limitation because of not wanting to take on that dependency.

                        In summary, std.unicode as it exists today is mainly used to serve other APIs such as the file system on Windows. It is one of the APIs that I think is far from its final form when 1.0 is tagged, and someone who has put in the work to make ziglyph is welcome to go in and make some breaking changes in the meantime.

                    1. 19

                      Yesterday on Twitter, someone said:

                      The success of docker was always based in the fact that the most popular web technologies of that day and age sucked so bad at dependency management that “just download a tarball of a stripped down os image” seemed like a reasonable solution.

                      This is true, but it’s sort of more true that, as TFA says,

                      The reason why we can often get away with using languages like Python or JavaScript to drive resource-intensive computations, is because under the hood somebody took years to perfect a C implementation of a key procedure and shared it with the world under a permissive license.

                      And C/C++ have an ugly Makefile where an actual dependency manager should be, which makes Docker feel like a solution and not a bandaid.

                      I think TFA is correct that moving forward, it’s not going to be possible to boil the ocean and throw out all existing unsafe software, but we can at least simplify things by using simpler and more direct dependency management in C/C++.

                      1. 29

                        And C/C++ have an ugly Makefile where an actual dependency manager should be, which makes Docker feel like a solution and not a bandaid.

                        I completely disagree. Makefile/CMake/Meson/whatever are convoluted, difficult to learn, etc but they are fundamentally different from what docker gives you. They plug in to the existing ecosystem, they compose nicely with downstream packages, they’re amenable to distro packaging, they offer well-defined, stable, and standardized interfaces for consumption. They are part of an ecosystem and are great team players.

                        A docker image says “f this, here’s everything and the kitchen sink in the exact version and configuration that worked for me, don’t change anything, good luck maintaining dependencies when we don’t bother fast enough. Screw your system preferences for the behavior of dependency x, y, or z (which they rightly have no need to know about or concern themselves with - but the user very much has the right to), this is what works for me and you’re on your own if you want to diverge in the slightest.”

                        I write and maintain open source software (including things you might use). It’s hard to use system dependencies and abstract away our dependency on them behind well-defined boundaries. But it’s important because I respect that it’s not my machine the code will run under, it’s the users’.

                        Docker - like Electron but let’s not get into that here - isn’t about what’s better in principle or even in practice - it’s solely about what’s easier. At some point, it was universally accepted that things should be easy for the user even if that makes the developer’s job a living hell. It’s what we do. Then sometime in the past ten years, it all became about what’s easiest and most pain-free for developers. Software development (don’t you dare say software engineering) became lazy.

                        We can argue about the motives, but I don’t blame the developers; I think they are following a path that was paved by corporations that realized users don’t know any better and developers were their only advocates. It was cheaper to invent these alternatives that let you push software out the door faster with greener and greener developers than it was to invest in the existing ecosystem and hold the industry to a higher standard. Users have no one advocating for them and they don’t even realize it.

                        1. 4

                          Software development (don’t you dare say software engineering) became lazy.

                          This sentiment is as old as Unix: https://en.wikipedia.org/wiki/Worse_is_better

                          1. 10

                            Docker is neither simple nor correct nor consistent nor complete in either the New Jersey or MIT sense.

                            I think that if the takeaway from reading Worse Is Better is that lazy software development is acceptable, then that is the incorrect takeaway. The essay is about managing complexity in order to get a rough fit sooner than a perfect fit perhaps too late to matter. Read the essay.

                            1. 8

                              I read the essay. The essay itself codifies a position that it opposes, based on the author’s observations about the state of the New Jersey/MIT split. It’s one person’s idea of what “Worse Is Better” means, with the essay created to criticize the self-defined argument, not the definitive idea. But we can argue semantics about the essay some other time.

                              When someone says that “software development has become lazy” and adds a bunch of supporting information around that for a specific situation, what I read is “I am frustrated with the human condition”. Software developers have been lazy, are lazy, and will continue to be lazy. Much like a long-standing bug becomes an expectation of functionality. Fighting human nature results in disappointment. To ignore the human factors around your software is to be willingly ignorant. Docker isn’t popular in a vacuum and there’s no grand capitalist conspiracy to convince you that Docker is the way to make software. Docker solves real problems with software distribution. It may be a hamfisted solution, but railing against the human condition and blaming corporate interests is not the way to understand the problems that Docker solves, it’s just an ill-defined appeal to a boogeyman.

                              1. 8

                                Docker isn’t popular in a vacuum and there’s no grand capitalist conspiracy to convince you that Docker is the way to make software.

                                You, uh, you sure about that? Like, really sure?

                                1. 4

                                  Our community’s discourse is so dominated by cynicism, we need to find a way to de-escalate that, not add fuel to the fire. So the benefit of the doubt is more important now than ever. That means that whenever there’s a benign explanation for something, we should accept it.

                                  1. 11

                                    Our community is split into two groups:

                                    • Those exploiting software and human labor for financial gain at the expense of the Earth and its inhabitants.
                                    • Those engaging in craftsmanship and improving the state of technology for end users, by creating software you can love.

                                    Think carefully before choosing to defend the former group.

                                    1. 3

                                      I don’t think it’s that simple. I definitely feel the pull of the second group and its ideals, but sometimes the practices of the first group can be put to good use to, as you say, improve the state of technology for end-users. Consider: suppose there’s an unsolved problem affecting end-users, e.g. one caused by the sudden changes that happened in response to the pandemic, and the most practical way to solve that problem is to develop and deploy a web application. If I spend time crafting an elegant, efficient solution that I would be proud to show to people here, then I’ve likely done the actual users a disservice, since I could have gotten the solution out to them sooner by taking the shortcuts of the first group. That’s why I defend those practices.

                                      1. 3

                                        This fast-to-market argument only has a home because the world is run so much by the former group.

                                        Consider the case of email vs instant messaging. Email was standardized and made ubiquitous at a time before market forces had a chance to spoil it with vendor lock-in. Meanwhile, text messaging, and messaging in general, is incredibly user-hostile. But it didn’t have to be this way. If messaging were orchestrated by the second group, with the end-user experience in mind as the primary concern, we would have widely popular federated messaging with robust protocols. Further, many other technologies would exist this way, with the software of the world, in general, being more cooperative and reusable. In such a case, the total time to develop and deploy a web application would be lower than it is today, and furthermore it would have more capabilities to aid the end-user.

                                        All this “glue” code that needs to be written is not fundamentally necessary in a technical sense; it’s a direct result of the churn of venture capital.

                                        1. 8

                                          The friendliest ways of building websites, with the least amount of code, right now are things like Wix, Wordpress, cPanel, and so forth. These are all very much commercial ventures, squarely from the first camp.

                                          Your example of messaging is also questionable, because the successful messaging stuff was taken over by the first camp while the second camp was screwing around with XMPP and IRCv3 and all the rest.

                                          The killer advantage the first camp has over the craftsmen in the second camp is that they’re not worried about “quality” or “products people love”…they are worried about the more straightforward (and sustainable) goal of “fastest thing we can put out with the highest profit margin the most people want”.

                                          I wish–oh how much do I wish!–that the second group was favored, but they aren’t as competitive as they need to be and they aren’t as munificent or excellent as they think they are.

                                  2. 2
                                    1. 5

                                      In my eyes that’s proof that Docker failed to build a moat more than anything else, and in fact it has greater chances to be evidence in support of friendlysock’s theory than the opposite: companies don’t go gently into the night, VC funded ones especially, so you can be sure that those billions fueled pantagruelian marketing budgets in a desperate scramble to become the leading brand for deploying distributed systems.

                                      Unfortunately for them the open source game didn’t play out in their favor.

                                      1. 4

                                        Unfortunately for them the open source game didn’t play out in their favor.

                                        I don’t think there’s any actual disagreement here; just differences about how snarky we want to be when talking about the underlying reality. Yes, Docker is a company with VC cash that had an incentive to promote its core offering. But no, Docker can’t actually make the market accept its solutions, so e.g. Docker Swarm was killed by Kubernetes.

                                        Okay, maybe you can say, but Kubernetes was just promoted by Google, which is an even bigger capitalist nightmare, which okay, fine is true, but at the end of the day, propaganda/capitalism/whatever you want to call it can only go so far. You can get to a certain point by just being big and hyped, but then if you aren’t actually any good, you’ll eventually end up crashing against reality, like Docker Swarm or Windows Mobile or XML or the Soviet Union or whoever else tries to substitute a marketing budget for reality.

                                        1. 2

                                          but at the end of the day, propaganda/capitalism/whatever you want to call it can only go so far.

                                          I do agree that containers are a solution to a problem. An imperfect solution to a problem we should not have in the first place but, regardless, it’s true that they can be a useful tool in the modern development world. That said, I fear that it’s the truth that can only go so far, and that skilled use of a communication medium can produce much bigger impact in the short to medium term.

                                      2. 5

                                        That article suggests they raised more than a quarter of a billion dollars, and then talks about how they lost to the even more heavily propagandized (by Google) Kubernetes meme when they couldn’t figure out how to monetize all the victims. Neither of those seems a clear counter to there being a vast capitalist conspiracy.

                                        Like, devs get memed into dumb shit all the time by sales engineers.

                                        If they didn’t there wouldn’t be devrel/devangelist positions.

                                        Edit:

                                        (and just to be clear…I’m not denying that Docker has some use cases. I myself like it for wrapping up the seeping viscera of Python projects. I’m just disagreeing that it was from some spontaneous outpouring of developer affection that it got where it is today. See also, Java and React.)

                                        1. 4

                                          Like, devs get memed into dumb shit all the time by sales engineers.

                                          If they didn’t there wouldn’t be devrel/devangelist positions.

                                          Yeah, true enough based on my experience as a former dev advocate.

                                          1. 2

                                            Neither of those seems a clear counter to there being a vast capitalist conspiracy.

                                            There can’t be two vast capitalist conspiracies. If there are two, it’s not a vast conspiracy. Calling it a “capitalist conspiracy” either means that there is only one or that you like using snarky names for perfectly ordinary things.

                                            1. 2

                                              I would call a conspiracy of half the capitalists pretty vast, FWIW.

                                  3. 2

                                    Yes. But that was only the conclusion of my argument; I think it’s fair to say that the actual points I was making regarding dependencies are pretty objective/factual and specific to the docker situation.

                                  4. 2

                                    While I agree and am loath to defend docker in any way, if instead of a docker image we were talking about a Dockerfile, then that would be comparable to a build system that also declares dependencies.

                                    1. 2

                                      I completely disagree. Makefile/CMake/Meson/whatever are convoluted, difficult to learn, etc but they are fundamentally different from what docker gives you.

                                      Agreed.

                                      They plug in to the existing ecosystem, they compose nicely with downstream packages, they’re amenable to distro packaging, they offer well-defined, stable, and standardized interfaces for consumption.

                                      I disagree. The interfaces aren’t stable or standardized at all. Distros put a huge amount of effort into trying to put fingers into the leaking dam, but the core problem is that Make is a Turing complete language with extreme late binding of symbols. The late binding makes it easy to write a Makefile that works on one machine but not another. Adding more layers of autoconf and whatnot does not really solve the core problem. The thing C/C++ are trying to do is… not actually that hard at all? It’s just linking and building files and trying to cache stuff along the way. Newer languages just include this as part of their core. But because every C/C++ project has its own Turing complete bespoke solution, they are incompatible and can’t be moved to new/different platforms without a ton of human effort. It’s a huge ongoing PITA for everyone.

                                      The thing that would actually be good is to standardize a descriptive, non-Turing-complete configuration language that can just describe dependencies between files and version constraints. If you had that (big if!), then it wouldn’t be a big deal to move to new platforms, deal with platforms changing, etc. Instead we get duplication where every distro does its own work to fill in the gaps by being the package manager that C/C++ need.
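
                                      For what it’s worth, Zig’s own package manifest format (build.zig.zon) is one attempt at a purely descriptive manifest along these lines; a rough sketch with made-up names and URL (the exact fields vary by Zig version, and the content hash is elided):

                                      ```zig
                                      // build.zig.zon (sketch): declarative only, no logic, no late binding.
                                      .{
                                          .name = "example",
                                          .version = "0.1.0",
                                          .dependencies = .{
                                              .libfoo = .{
                                                  .url = "https://example.com/libfoo-1.2.3.tar.gz",
                                                  .hash = "...", // content hash pinned by the toolchain
                                              },
                                          },
                                      }
                                      ```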

                                      1. 2

                                        Sorry if I wasn’t clear: the abstracted interfaces I’m referring to aren’t provided by the Makefile or whatever. I meant standardized things like pkgconf definition files in their place, man files in their place, using the packages made available by the system package manager rather than bringing in your own deps, etc.

                                  1. 36

                                    Congrats to waddlesplash, the Haiku project, and all the users!

                                    1. 23

                                      Hey, thanks very much!

                                      1. 3

                                        Best of luck, man! This is amazing! I can’t wait to see all the cool updates!

                                      2. 1

                                        Do you know if Zig has been ported to Haiku yet? I’ve used Haiku on and off for years, and I’ve been getting into Zig recently, so porting it might be fun. Given Zig’s portability, the fact that Haiku (IIRC) uses ELF, and the BYO²S functionality in the stdlib, an initial port would probably be a fun weekend project.

                                        1. 1

                                          Yes

                                          Although I think the work should be audited; I noticed some suspect values the other day. Why would some error codes be negative and others not? I think the contributor who did this did a lot of copy pasting and guesswork.

                                          Anyway not to complain, but I do think there could be a lot of improvements made to Zig’s Haiku support. Contributors welcome :)

                                          1. 2

                                            Why would some error codes be negative and others not?

                                            The short answer that you are looking for is: indeed, those are incorrect, and all the error codes in Haiku are negative. Actually they are all defined in the same file (/system/develop/headers/os/support/Errors.h).

                                            The long answer is … there is a wrapper system which allows you to enable some feature flags and link to a static wrapper library to get positive error codes in the case of applications that really, really want error codes to be positive and cannot be easily patched. It is very rarely used these days, and HaikuPorts instead tries to upstream patches to applications that assume error codes are positive, but it does exist. Most applications should not have to think about this at all, though.

                                            Anyway, please feel free to ask us Haiku devs on IRC or GitHub to review Haiku-specific things. (I thought I looked at the Zig port before it was merged but I guess I missed that section.) It used to be the case that most projects would just tell us “I don’t really care about Haiku, keep your patches in your own tree;” but that seems to be changing in recent years and now most projects seem to be amenable to accepting patches from us. Rarely, like in the case of Zig, we even get patches done by someone who isn’t a Haiku or HaikuPorts developer! So that is itself an exciting development, even if they get things wrong some of the time.

                                      1. 26

                                        One small useful fact that not many people realize: for Linux specifically, if you make a fully static executable, it will be portable to all Linux distributions, no matter which libc they use, or if they use a nonstandard dynamic linker path. If the application needs to open a window and use the graphics driver, it gets a lot more complicated to be portable.
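
                                        As a small illustration (my sketch, not from the comment above): building the program below with `zig build-exe hello.zig -target x86_64-linux-musl` statically links musl libc, so the resulting binary has no dependence on the host distribution’s libc or dynamic linker path.

                                        ```zig
                                        // hello.zig
                                        // Build: zig build-exe hello.zig -target x86_64-linux-musl
                                        // The musl target produces a fully static executable, so the same binary
                                        // should run on any x86_64 Linux distribution regardless of its libc.
                                        const std = @import("std");

                                        pub fn main() void {
                                            std.debug.print("hello from a fully static binary\n", .{});
                                        }
                                        ```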

                                        1. 2

                                          Another approach to handling portability when targeting multiple Linux distributions is to compile the app on an old distribution (e.g. some old CentOS with an old glibc). This can be done through Docker. This covers different use cases than static linking, so it’s worth considering (e.g. usage of LGPL libs).

                                          1. 1

                                            I might be wrong here, but what about kernel version?

                                            I had a thing last year where we compiled stuff with a certain Qt version which was built on a 4.x kernel, and Qt used some ifdefs, and it wouldn’t run on a 3.x kernel on an old RHEL. This was dynamic linking, but the underlying problem is the same: the C++ code used a kernel call via libc that wasn’t there. Not sure how this would’ve worked with a purely static build.

                                            1. 3

                                              I think there’s a chance that if you’d compile it on an old RHEL with 3.x kernel, it would run on newer distros with 4.x kernels.

                                              1. 1

                                                I’d even go so far as to say that will be the case 99% of the time, because it falls under the kernel’s “don’t break APIs” mantra as far as I know.

                                              2. 1

                                                Any binary would still need to use syscalls, so that limits the backward compatibility as Linux keeps adding new syscalls.

                                            1. 8

                                              Keep going, guys. Every time I read about Zig I get a bit closer to deciding to try it. (Srsly.) It helps that my current project uses very little memory allocation, so Zig’s lack of a global allocator won’t be as big a deal. (Previously I have been turned off by the need to pass or store allocator references all over the place.)

                                              1. 5

                                                For end-product projects there’s really nothing wrong with setting a global allocator and using it everywhere! You can even use that single point of reference to swap in the testing allocator in tests, so you can check for memory leaks in all of your unit and integration tests. You might want to be more flexible, composable, and unopinionated (or maybe strategically opinionated) with allocator strategies if you’re writing a library.
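
                                                A minimal sketch of that setup (names here are made up; it assumes the std GeneralPurposeAllocator and the std testing allocator):

                                                ```zig
                                                const std = @import("std");
                                                const builtin = @import("builtin");

                                                // One global allocator for the whole program; under `zig test` the std
                                                // testing allocator is swapped in so leaked allocations fail the tests.
                                                var gpa_state = std.heap.GeneralPurposeAllocator(.{}){};

                                                pub fn allocator() std.mem.Allocator {
                                                    return if (builtin.is_test) std.testing.allocator else gpa_state.allocator();
                                                }

                                                test "leaks are caught" {
                                                    const a = allocator();
                                                    const buf = try a.alloc(u8, 32);
                                                    defer a.free(buf); // remove this line and the test run reports a leak
                                                }
                                                ```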

                                                1. 5

                                                  Good point! I tend to write libraries, though.

                                                  The scenario that worries me is where I’ve got a subsystem that doesn’t allocate memory. Then I modify something down inside it so that it does need to allocate. Now I have to plumb an allocator reference through umpteen layers of call stack (or structs) in all control flow paths that reach the affected function.

                                                  Maybe far fetched, but it gets uglier the bigger the codebase gets. I’ve had to do this before (not with allocators, but other types of state) in big layer-cake codebases like Chromium. It’s not rocket science but it’s a pain.

                                                  I guess I’m not used to thinking of “performs memory allocation” as a color of function.

                                                  1. 4

                                                    Then I modify something down inside it so that it does need to allocate.

                                                    To me this would be a smell, a hint that the design may want to be rethought so as not to have the possibility of allocation failure. Many code paths do have the possibility of allocation failure, but if you have an abstraction that eliminates that possibility, you’ve opened up more possible users of that abstraction. Adding the first possibility of allocation failure is in fact a big design decision in my opinion - one that warrants the friction of having to refactor a bunch of function prototypes.

                                                    As I’ve programmed more Zig, I’ve found that things that need to allocate tend to be grouped together, and likewise with things that don’t need to allocate. Plus there are some really nice things you can do to reserve memory and then use a code path that cannot fail. As an example, this is an extremely common pattern:

                                                    https://github.com/ziglang/zig/blob/f81b2531cb4904064446f84a06f6e09e4120e28a/src/AstGen.zig#L9745-L9784

                                                    Here we use ensureUnusedCapacity to make it so that the following code can use the “assumeCapacity” forms of appending to various data structures. This makes error handling simpler since most of the logic does not need to handle failure (note the lack of the word try after those first 3 lines). This pattern can be especially helpful when there is a resource that (unfortunately) lacks a cheap or simple way to deallocate it. If you reserve the other resources such as memory up front, and then leave that weird resource at the end, you don’t have to handle failure.
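
                                                    A much smaller sketch of the same idea (hypothetical code using a managed std.ArrayList rather than the AstGen data structures):

                                                    ```zig
                                                    const std = @import("std");

                                                    fn appendPair(list: *std.ArrayList(u32), a: u32, b: u32) !void {
                                                        // The only fallible step: reserve room for everything we are about to add.
                                                        try list.ensureUnusedCapacity(2);
                                                        // Past this point nothing can fail: no `try`, no partial-failure cleanup.
                                                        list.appendAssumeCapacity(a);
                                                        list.appendAssumeCapacity(b);
                                                    }

                                                    test "reserve up front, then append infallibly" {
                                                        var list = std.ArrayList(u32).init(std.testing.allocator);
                                                        defer list.deinit();
                                                        try appendPair(&list, 1, 2);
                                                        try std.testing.expectEqual(@as(usize, 2), list.items.len);
                                                    }
                                                    ```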

                                                    A note on safety: “assumeCapacity” functions are runtime safety-protected with the usual note that you can opt out of runtime safety checks on a per-scope basis.

                                                    1. 1

                                                      the design may want to be rethought so as not to have the possibility of allocation failure.

                                                      True, I’ve been forgetting about allocation failures because I code for big iron, like phones and Raspberry Pis 😉 … but I do want my current project to run on li’l embedded devices.

                                                      The ensureUnusedCapacity trick is neat. But doesn’t it assume that the allocator has no per-block overhead? Otherwise the heap may have A+B+C bytes free, but it’s not possible to allocate 3 blocks of sizes A, B and C because each heap block has an n-byte header. (Or there may not be a free block big enough to hold C bytes.)

                                                      1. 2

                                                        Speaking of allocators: the Zig docs say that implementations need to satisfy “the Allocator interface”, but don’t say what an “interface” is. That is in fact the only mention of the word “interface” in the docs.

                                                        I’m guessing that Zig supports Go-like interfaces, but this seems to be undocumented, which is weird for such a significant language feature…?

                                                        1. 2

                                                          The Allocator interface is actually not part of the language at all! It’s a standard library concept. Zig does not support Go-like interfaces. It in fact does not have any OOP features. There are a few patterns you can use to get OOP-like abstractions, and the Allocator interface is one of them.
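
                                                          For the curious, the shape of that pattern is roughly the following (a made-up sketch, not the actual std.mem.Allocator definition; the pointer-cast builtins have changed names across Zig versions):

                                                          ```zig
                                                          const std = @import("std");

                                                          // A type-erased pointer plus a vtable of function pointers, assembled by hand.
                                                          // std.mem.Allocator is built this same way; no language-level interfaces involved.
                                                          const Sink = struct {
                                                              ptr: *anyopaque,
                                                              writeFn: *const fn (ptr: *anyopaque, bytes: []const u8) void,

                                                              fn write(self: Sink, bytes: []const u8) void {
                                                                  self.writeFn(self.ptr, bytes);
                                                              }
                                                          };

                                                          const Counter = struct {
                                                              total: usize = 0,

                                                              fn sink(self: *Counter) Sink {
                                                                  return .{ .ptr = self, .writeFn = writeErased };
                                                              }

                                                              fn writeErased(ptr: *anyopaque, bytes: []const u8) void {
                                                                  const self: *Counter = @ptrCast(@alignCast(ptr));
                                                                  self.total += bytes.len;
                                                              }
                                                          };

                                                          test "interface built in userspace" {
                                                              var counter = Counter{};
                                                              counter.sink().write("hello");
                                                              try std.testing.expectEqual(@as(usize, 5), counter.total);
                                                          }
                                                          ```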

                                                          The ensureUnusedCapacity trick is neat. But doesn’t it assume that the allocator has no per-block overhead?

                                                          The ensureUnusedCapacity usage I showed above is using array lists and hash maps :)

                                                          1. 3

                                                            Will I get in trouble if I start calling it the allocator factory?

                                                            1. 1

                                                              Lol

                                                    2. 2

                                                      It’s not function coloring (or if it is, it doesn’t feel like it), because in your interfacing function you can fairly trivially catch the enomem and return a null value that will correspond to allocation failure. And you return that (or panic on alloc failure, if you prefer); in either case you don’t have to change your stack at all. Since allocators are factories, it’s pretty easy to set it up so that there is an overridable default allocator.

                                                      As an example, here is some sample code that mocks libc’s calloc and malloc (and free) with Zig’s stdlib “testing.allocator”, which gives you memory leak analysis in code that is designed to use the libc functions. Note that testing.allocator, of course, has Zig’s standard memory allocator signature, but wrapped behind the correct interface it doesn’t have to travel up the call stack and muck up the rest of the code, which is expecting something that looks like libc’s calloc/malloc (which of course doesn’t have error return values, but communicates failure with null):

                                                      https://gist.github.com/ityonemo/fb1f9aca32feb56ad46dd5caab76a765

                                                      1. 1

                                                        I guess I’m not used to thinking of “performs memory allocation” as a color of function.

                                                        We should, though. Although it should be something handled like generics.

                                                        1. 2

                                                          It’s not, though (see sibling reply with code example).

                                                  1. 15

                                                    Zig is such a promising language.

                                                    I hope it becomes great for deploying applications as well, and plays nicely with various package managers and Linux distros.

                                                    Deployment seems to be an afterthought for some programming language ecosystems.

                                                    1. 25

                                                      Deployment seems to be an afterthought for some programming language ecosystems.

                                                      If you check the 4th commit in ziglang/zig repository, the commit that first populated the README.md file with a set of goals for the project, this goal is listed:

                                                      • Friendly toward package maintainers.

                                                      I’ve been a Debian and Ubuntu package maintainer in the past, and I even picked up another Debian Developer from the airport in order to sign each other’s gpg keys (part of the procedures to gaining upload access).

                                                      I’ve also been involved in the Node.js ecosystem - looks like I have 76 packages under my name.

                                                      I’m familiar with what distribution maintainers want to achieve for their users, and the problems they are faced with in terms of packaging, as well as what upstream project maintainers want to achieve for their users, and I’m intimately familiar with how these goals can sometimes conflict. I fully intend to use this insight to help both parties work more effectively with each other for the end user when it comes to the Zig package manager, ecosystem, and communities.

                                                      1. 3

                                                        That sounds perfect! Thanks for creating Zig, I really like the philosophy behind it.

                                                        Now all I’m longing for is Zig++, a language just like Zig, but with added syntactic sugar. ;)

                                                    1. 33

                                                      The user interface has been redesigned. Some of you will love it, some will hate it. You’re welcome and we’re sorry.

                                                      lmao I unironically love this attitude.

                                                      1. 1

                                                        Same. Truly the most genre-savvy of patch notes.

                                                      1. 6

                                                        This misses one huge motivator: you want to move the implementation to a new underlying technology that is fundamentally unable to run your existing code. Most often this is the language: we can’t find COBOL programmers so we need to rewrite this big thing in Java or C# fast, before all the original devs retire. Sometimes it is the platform. Sometimes it used to be a desktop app and now it needs to be a web app.

                                                        Of course, this motivation doesn’t cause the rewrite to go any better than a rewrite for any other reason, except that in some cases, you are better off with a semi-broken rewritten system you can maintain than with a working system nobody can touch.

                                                        1. 15

                                                          On the other hand, with the Zig compiler I have figured out how to incrementally bootstrap the compiler, slowly moving the implementation from C++ to Zig, all the while keeping the code shippable. If you download Zig today, what you get is a hybrid compiler: part of it is implemented in C++, part of it is implemented in Zig. Over time we’ve been able to reduce the amount of C++ to fewer and fewer lines, until some day (soon!) it can disappear completely.

                                                          More details / examples:

                                                          1. 3

                                                            Andy already knows this, but for everyone else’s benefit: the screenshotted audio message at the top of the original post is Andy explaining the above to me, and the audio response from me was basically my draft for this post.

                                                            I already held the opinion that rewriting is usually the wrong approach; seeing Andy productively pull off an incremental transition for what at first glance seems like one of the most difficult things to do incrementally further convinced me.

                                                          2. 2

                                                            IME moving from one platform to another is much less risky than other kinds of rewrites when you do a 1:1 copy of the existing design.

                                                            Think “port” not “rewrite”.

                                                            If you leave yourself no decisions to make, you can write the new version at a substantial fraction of the rate at which you can physically type.

                                                          1. 5

                                                            As the author of music player software that wants to have an automated import-from-youtube feature, I have some opinions about this.

                                                            The value-add of youtube-dl is the fact that it is regularly updated by humans doing labor. The fact that it is in Python or in Rust is actually problematic for this use case. Ideally the music player software would be able to fetch a youtube-dl update without having to rebuild the music player, and also ideally without the music player having a dependency on Python. Also, ideally, since new youtube-dl code would be fetched and run automatically, it should not need to be trusted and could be run in a sandbox. Perhaps something like WASI, if it supported networking, would be a good fit. In this case Rust could be used, but I don’t see why you would do that, since the WASI program could be run in a sandbox, which provides its own memory safety, making Rust’s borrow checker an unnecessary complication.

                                                            In conclusion, my selfish desires are:

                                                            • WASI hurry up and add networking please
                                                            • youtube-dl to switch from Python to something that can compile to WASI, and include wasi binaries with releases
                                                            • yarr harr fiddle-dee-dee, being a pirate is alright with me
                                                            1. 3

                                                              Write it in Haskell! You could probably design something clever using the ‘monad as DSL’ pattern so that your plugins don’t actually have any IO, and use Safe Haskell to make sure they’re not sneaking in an unsafePerformIO. Then the main binary just ‘executes’ your DSL, using a set of predefined primitives (fetch this URL and give it to the plugin, fetch this URL and save it to disk as the video, etc) that you know are safe.

                                                              Of course, Haskell isn’t a well-known language, I don’t think Safe Haskell has full sandbox guarantees, and I don’t know if loading plugins at runtime that were compiled on a separate machine is a thing you can do. But it’s a fun thing to think about!

                                                            1. 4

                                                              Great work! All these fixes to libgccjit would be needed if anyone ever wanted to pursue this as a Zig backend too, so from this side of the fence, thank you!!

                                                              1. 6

                                                                The usual problem encountered when cross-compiling from a non-macOS system to macOS is you need the macOS headers and it’s against the licence agreement to redistribute them or even use them on non-Apple hardware:

                                                                You may not alter the Apple Software or Services in any way in such copy, e.g., You are expressly prohibited from separately using the Apple SDKs or attempting to run any part of the Apple Software on non-Apple-branded hardware.

                                                                How does Zig handle this?

                                                                Edit: having said that, this repo has existed for a long time and hasn’t been taken down yet…

                                                                1. 17

                                                                  it’s not against the license agreement. the header files are under the APSL https://spdx.org/licenses/APSL-1.1.html

                                                                  1. 3

                                                                    Even if it was, it’s probably not enforceable. Didn’t we have a ruling a while back stating that interfaces were not eligible for copyright?

                                                                    1. 2

                                                                      That was Oracle v Google, right?

                                                                      1. 2

                                                                        That’s the one. If I recall correctly, Google originally lost, then appealed, and the ruling was basically reversed to “interfaces are not subject to copyright”.

                                                                        Now that was American law. I have no idea about the rest of the world. I do believe many legislations have explicit exceptions for interoperability, though.

                                                                        1. 5

                                                                          That’s the one. If I recall correctly, Google originally lost, then appealed, and the ruling was basically reversed to “interfaces are not subject to copyright”.

                                                                          The Supreme Court judgement said ‘assume interfaces are copyrightable, in this case Oracle still loses’; it did not make a ruling on whether interfaces are copyrightable.

                                                                          1. 3

                                                                            and the ruling was basically reversed to “interfaces are not subject to copyright”

                                                                            Not exactly, the ruling didn’t want to touch the “interfaces are not subject to copyright” matter since that would open a big can of worms. What it did say, however, was that Google’s specific usage of those interfaces fell into the fair use category.

                                                                            1. 1

                                                                              Ah, so in the case of Zig, it would also be fair use, but since fair use is judged on a case by case basis, there’s still some uncertainty. Not ideal, though it looks like it should work.

                                                                              1. 1

                                                                                There’s no useful precedent. Google’s fair use was from an independent implementation of an interface for compatibility. Zig is copying header files directly and so must comply with the licenses for them. The exact licenses that apply depend on whether you got the headers from the open source code dump or by agreeing to the XCode EULA. A lot of the system headers for macOS / iOS are only available if you agree to the XCode EULA, which prohibits compilation on anything other than an Apple-branded system.

                                                                                1. 1

                                                                                  I recall that Google did copy interface files (or code) directly, same as Zig?

                                                                                  1. 2

                                                                                    Java doesn’t have any analogue of .h files; they wrote new .java files that implemented the same methods. There is a difference between creating a new .h file that contains equivalent definitions and copying a .h file that someone else wrote. If interfaces are not copyrightable, then the specific serialisation in a text file may still be, because it may contain comments and other things that are not part of the interface.

                                                                      2. 1

                                                                        Interesting. Ok so does Zig just include the headers from the most SDK then?

                                                                        1. 10

                                                                          The way zig collects macos headers is still experimental. We probably need to migrate to using an SDK at some point. For now it is this project.

                                                                          1. 1

                                                                            I’d be super nervous about using this in production. This is using code under the Apple Public Source License, which explicitly prohibits using it to circumvent EULAs of Apple products. The XCode EULA under which the SDKs are prohibited explicitly prohibits cross-compiling from a non-Apple machine. I have no idea what a judge would decide, but I am 100% sure that Apple can afford to hire better lawyers than I can.

                                                                            1. 3

                                                                              Zig has nothing to do with xcode. Zig does not depend on xcode or use xcode in any way. The macos headers have to do with interfacing with the Darwin kernel.

                                                                      3. 1

                                                                        Edit: having said that, this repo has existed for a long time and hasn’t been taken down yet…

                                                                        Apple generally doesn’t bother with small-scale infringement. They care about preventing cross compilation only insofar as it might hurt Mac sales.

                                                                      1. 15

                                                                        There are a lot of accusations in this and the subsequently linked posts against this ominous person called Andrew Lee. With all the democratic impetus behind these resignation statements, please audiatur et altera pars. Where’s the statement from Lee on the topic? What does he think about this? Does he not want to comment (that is, does he take the accusations as valid), or is it simply not linked, which I would find a dubious attitude from people who insist on democratic values? Because if you accuse anyone, you should give him the opportunity to explain himself.

                                                                        Don’t get me wrong. What I read is concerning. But freenode basically is/was the last bastion of IRC. The brand is well-known. The proposed alternative libera.chat will fight an uphill battle against non-IRC services. Dissolving/attacking the freenode brand thus does IRC as a whole a disservice and should only be done after very careful consideration, not as a spontaneous act of protest.

                                                                        1. 13

                                                                          Where’s the statement from Lee to the topic?

                                                                          You can dig through IRC logs referenced in the resignation letter linked by pushcx above and see what he has to say to the admins directly, if you assume the logs haven’t been tampered with. My personal assessment is he makes lots and lots of soothing reassuring non-confrontational noises to angry people, and then when the people who actually operate the network ask for actual information he gives them none. When they offer suggestions for how to resolve the situation he ignores them. When they press for explanations of previous actions (such as him asking for particular people to be given admin access) he deflects and tries to make it seem like the decision came from a group of people, not just himself.

                                                                          So yeah. Smooth, shiny, nicely-lacquered 100% bullshit.

                                                                          1. 16

                                                                            I’ve now skimmed through some of the IRC logs. It’s been a long time since I’ve read such heated discussions, full of swear words, insults, accusations, dirty language, and so on. This affects both sides. It’s like a transcript of children in kindergarten trying to insult each other, and it’s hard to believe that these persons are supposed to be adults. This is unworthy of a project which so many FOSS communities are relying on. Everyone should go stand in the corner in shame and come back in a few days when they have calmed down.

                                                                            I’m not going to further comment on the topic. This is not how educated persons settle a dispute.

                                                                            1. 6

                                                                              Amen.

                                                                          2. 11

                                                                            Lee has issued a statement under his own name now: https://freenode.net/news/freenode-is-foss

                                                                            1. 10

                                                                              As a rebuttal to the URL alone, freenode isn’t foss, it’s a for profit company. So before you even click through you are being bullshitted.

                                                                              1. 14

                                                                                freenode isn’t foss, it’s a for profit company.

                                                                                You can be for profit and foss. So this is a non-sequitur.

                                                                              2. 3

                                                                                Ah, thanks.

                                                                              3. 7

                                                                                Self-replying: Lee has been e-mail-interviewed by The Register, giving some information on how he sees the topic: https://www.theregister.com/2021/05/19/freenode_staff_resigns/

                                                                                1. 7

                                                                                  “freenode” is both a brand and an irc network run by a team of people. if the team did not want to work with andrew lee, but wanted to continue running the network, their only option was to walk away and establish a new network, and try to make that the “real” freenode in all but the name.

                                                                                  this is not the first brand-vs-actual-substance split in the open source world; i don’t see that they had any other choice after lee tried to assert control over freenode-the-network due to ownership of freenode-the-brand.

                                                                                  1. 6

                                                                                    who insist on democratic values?

                                                                                    Democracy isn’t about hearing both sides. It’s about majority winning.

                                                                                    Actually, getting angry over one-sided claims and forming an angry mob is very democratic and has been its tradition since the time of the ancient Greeks.

                                                                                    1. 5

                                                                                      If and when a representative of Lee’s company (apparently https://imperialfamily.com/) posts something, a member can submit it to the site.

                                                                                      As far as I know Lee or his company have made no statement whatsoever.

                                                                                      1. 2

                                                                                        Could this just be the death knell of irc? A network split is not good as people will be confused between Freenode and Libera Chat.

                                                                                        Most young people that look for a place to chat probably look at discord first. For example, the python discord server has 220000 registered users and 50000 online right now. I don’t believe that the python channel on Freenode has ever gotten close to that.

                                                                                        1. 16

                                                                                          Having multiple networks is healthy.

                                                                                          1. 11

                                                                                            I strongly believe that IRC is on a long slow decline rather than going to die out due to any one big event. Mostly because there are so many other IRC servers. It’s an ecosystem not a corporation.

                                                                                            1. 7

                                                                                              IRC has survived, and will yet survive, a lot of drama.

                                                                                              1. 3

                                                                                                Well, people were already confused between OFTC and Freenode. More the merrier.

                                                                                            1. 6

                                                                                              I tried making intel syntax the only accepted inline assembly syntax of zig but LLVM’s ability to parse it just isn’t up to par. It’s unfortunately not practical to use it.

                                                                                              I still want to get to that point, but it’s going to require zig implementing intel syntax itself, then translating it to AT&T syntax for LLVM to consume. Wild, right? But I agree, it will be worth it so that inline assembly can be in intel syntax.

                                                                                              1. 3

                                                                                                I’m not a big fan of inline assembly in general, particularly the llvm/gcc vision of it. But llvm is doing really cool stuff there. (See also.)

                                                                                                1. 1

                                                                                                  zig implementing intel syntax itself, then translating it to AT&T syntax for LLVM to consume

                                                                                                  Would it be harder to fix LLVM directly?

                                                                                                  1. 1

                                                                                                    Yes, because we need assembly parsing anyway, for the non-LLVM backends of the compiler. So we have to lower from that, to whatever LLVM expects. So naturally we would just lower to the syntax that already works in LLVM.

                                                                                                1. 22

                                                                                                  The youtube link is broken because I moved to vimeo when youtube started unconditionally showing ads on all videos. Here’s the vimeo link

                                                                                                  edit: never mind, I figured out how to get the embed code for vimeo and updated the article itself to have the new video link.

                                                                                                  edit2: this article (and the syntax highlighting, formatting, etc) is the main reason I have been dragging my feet to redo my personal web site. I want to avoid breaking this page. it’s encouraging to see that this is not in vain, and that it is worth preserving. cheers

                                                                                                  1. 3

                                                                                                    I remember seeing this article around the time it was published and I think I wrote to you. It’s still one of the most impressive write-ups I have seen.