Even Rust which has macros special cases format!, implementing it in the compiler itself. Meanwhile in Zig, the equivalent function is implemented in the standard library with no special case code in the compiler.
While this is true, these days Rust’s format! implementation actually uses the exact same mechanism that is available to users, and could easily be moved to the standard library. (It wasn’t always that way, though.) Its being implemented in the compiler is mostly historical.
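On the Zig side of the quoted claim, here’s a minimal illustration of formatting living entirely in library code: std.debug.print forwards to std.fmt, and the format string is checked at compile time by ordinary comptime logic rather than by a compiler special case (a sketch; exact std.fmt details vary a little between Zig versions):

    const std = @import("std");

    pub fn main() void {
        // std.debug.print forwards to std.fmt, which is plain library code.
        // The format string is parsed at comptime, so a mismatched specifier
        // or argument count is a compile error, not a runtime surprise.
        std.debug.print("{s} needs {d} compiler special cases\n", .{ "std.fmt", 0 });
    }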
I had a brief flirtation with Zig, but I decided it’s too low-level for me. The lack of a global allocator means every function that might allocate or free memory has to take an extra allocator parameter, or stash an allocator reference in whatever data structure it’s passed as a parameter. This feels like too much work to me; I’m sure in super low-level code the benefits are worth it, but even at the medium-low level I work at, memory allocation isn’t that special a thing.
The “every call is explicit” feature doesn’t work for me either. I think this may be a case where there are two types of programmers: some like everything explicit, some prefer abstraction. I’m in the latter camp. I don’t go as far as building my own DSLs, but I like a layer of custom infrastructure so I can describe my code at a higher level.
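For anyone who hasn’t seen the pattern, this is roughly what “stash an allocator reference in a data structure” looks like in Zig (an illustrative sketch; LineBuffer is a made-up type, and std APIs shift a bit between Zig versions):

    const std = @import("std");

    // Made-up type for illustration: it remembers the allocator it was given
    // at init time, so deinit can free with the same allocator.
    const LineBuffer = struct {
        allocator: std.mem.Allocator,
        data: []u8,

        fn init(allocator: std.mem.Allocator, capacity: usize) !LineBuffer {
            return LineBuffer{
                .allocator = allocator,
                .data = try allocator.alloc(u8, capacity),
            };
        }

        fn deinit(self: *LineBuffer) void {
            self.allocator.free(self.data);
        }
    };

    pub fn main() !void {
        var buf = try LineBuffer.init(std.heap.page_allocator, 64);
        defer buf.deinit();
        std.debug.print("capacity: {d}\n", .{buf.data.len});
    }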
Zig is cool and I’m glad it exists :) but I’ve gone with Nim as my new BFF … just don’t tell C++, ‘cause we still have to work together.
I think things as low-level as Zig work great with higher-level languages like Lua/Python/Lisp to provide the easy-to-use parts. The fact that Zig doesn’t enforce a memory policy is handy for cases like that.
Whenever anyone asks “why?”, I find the answer of “because I want to” to be plenty good enough. I might think it is silly and never use their thing, but I see it as totally valid to do it however you want simply because you want to, regardless of what else exists or what anyone else thinks about it.
I fundamentally agree with you; however, there is also a part of me that wonders if this is part of the reason we continue to flounder as an industry/profession. We “build” things, but have very little rigor, regulation, or conformity across the industry. The industry is nascent in the grand scheme of things, but the insane growth of things powered by software is frightening if you look at it from the point of view of a consumer of “because I want to” technology. Someone can just go and create their own language, good or bad, and folks will just use it, and probably sell you a product using it. That’s very cool to part of me, but also unsettling?
I think the fact that people can just go off and build something is great. There are many interesting design spaces to explore and who knows if/where they will be useful. If someone is able to build a successful product with a random piece of technology then good for them. As a consumer of that tech, it’s difficult to know if the supplier’s stack is on firm footing, but if you are worried you can always pick an established player.
I do think that it’s mostly a moot point because most rank and file programmers are going to steer clear of niche languages and ecosystems. The average person doesn’t want to implement their own networking stack or SOAP connectors or FTP library and that’s great too. They’re probably off building something directly useful for the average person.
But they are ultimately the people who are going to ask “why?” Why Zig? Or Nim? Or Elixir? If you like software for software’s sake then “why” might seem like a silly question, but if software is a means to an end then “why” is a pretty good question.
“Zig has no macros and no metaprogramming, yet still is powerful enough to express complex programs in a clear, non-repetitive way.”
Zig has metaprogramming. It’s not a Lisp or anything. It did look powerful and clean last time AndrewK showed me an example. One of his is at the end of the linked article, too.
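A tiny sketch of the kind of thing meant here: Zig’s “metaprogramming” is ordinary code that runs at compile time, e.g. a function that takes a type and returns a type, with no macro layer involved (an illustrative example, not one of Andrew’s):

    const std = @import("std");

    // A generic pair built with plain Zig: types are comptime values,
    // so Pair is just a function from a type to a type.
    fn Pair(comptime T: type) type {
        return struct {
            first: T,
            second: T,

            fn swapped(self: @This()) @This() {
                return .{ .first = self.second, .second = self.first };
            }
        };
    }

    pub fn main() void {
        const p = Pair(u32){ .first = 1, .second = 2 };
        const q = p.swapped();
        std.debug.print("{d} {d}\n", .{ q.first, q.second });
    }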
Obscurity in code and, in particular, hidden control flow as defined here, is a topic I’m currently interested in. Does anyone have any articles in their bookmarks that build compelling arguments for things like getters/setters, @properties, key value observing, and other invisible/counterintuitive side-effect-generating mechanisms?
I don’t think there’s a definitive argument either way – like all things in programming, it depends on the problem domain and use case.
For low level code (interfacing with hardware, with the kernel, etc.), I can definitely see why you want to avoid hidden control flow. For application code, you think at a higher level of abstraction, so you may not care about every last resource. It’s a tradeoff.
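A minimal sketch of the explicit end of that tradeoff, as Zig does it: every point where control flow can divert is marked at the call site with try or catch (illustrative; parsePort is a made-up helper):

    const std = @import("std");

    // Made-up helper: `try` marks exactly where an error can propagate
    // out of this function.
    fn parsePort(text: []const u8) !u16 {
        return try std.fmt.parseInt(u16, text, 10);
    }

    pub fn main() !void {
        // `catch` marks where an error is handled; nothing unwinds invisibly
        // through calls that look like plain expressions.
        const port = parsePort("8080") catch |err| {
            std.debug.print("bad port: {s}\n", .{@errorName(err)});
            return;
        };
        std.debug.print("port = {d}\n", .{port});
    }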
As a concrete example I was “trained” not to use exceptions in C++ by 2 jobs, but for my shell project I found that they were pretty much essential for recursive evaluators (and I believe faster than explicit error checks in this case). It depends on the problem domain.
Also note that C does have hidden control flow with longjmp (used in almost all shells, Lua, etc.) and “hidden” side effects, e.g. errno, which is a global / thread local.
Yes, definitely. I was just curious to read accounts of specific cases where they might offer an undeniable benefit over the more explicit alternatives, to tweak my understanding of their raison d’être.
What property gives you is pretty simple. There is a need to be able to migrate from plain field access to more complex getters and setters. So either you never use field access (you always define getters and setters and use them), or you have property, or you lose that migratability. Assuming losing migratability is not an option, having property lets you avoid writing getters and setters.
Personally I think property is the wrong tradeoff, because macros can write getters and setters for you. Java’s Project Lombok is an example of what I consider the right solution here.
I think properties were a mistake – they may be an ok hack in languages that have already shipped with incompatible syntax for fields and methods though.
Instead, by simply allowing the () to be left out for methods without parameters, the implementer gains the ability to switch from fields to methods and back without breaking calling code at all.
This design makes the whole matter a complete non-problem, without the need to introduce a weird third way of doing things that keeps sprouting new complexity every once in a while (as properties do in C#).
They’re layers of abstraction, just like everything else. Function calls are a layer of abstraction over the bare JSR/RET instructions the CPU provides. Structs are an abstraction. So are variable names. A die-hard asm programmer might argue that these are invisible or counterintuitive — “I see ‘x’ here and I don’t know what it is! I have to look up above and it says it’s a Foo, but I don’t know how big a Foo is. It might even be a typedef for a pointer! Feh! Get off my lawn!”
Every program builds its own abstractions to model the entities and operations it works with. I like these to be expressed in the syntax I write, like the ones hard-coded in the language. Otherwise I have to keep repeating their innards over and over, or expressing them in ways that are less clear.
For me it’s simple: Zig is fun. :)
I haven’t tried D, but I don’t find C++ or Rust particularly fun.
Useful, but not fun.
I understand the comments about D and Rust. But what does the C preprocessor have to do with all that?
By ‘CPP’ they mean “C++”.
I get this confused all the time because CPPFLAGS is the preprocessor and CXXFLAGS is C++. You’d be surprised how many programs are out there incorrectly setting their build time env which leads to further confusion.
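For reference, the conventional meaning of these variables in a GNU-make-style build (illustrative values, not from any particular project):

    # Conventional GNU make variables:
    CPPFLAGS = -Iinclude -DNDEBUG   # C PreProcessor flags, passed to both C and C++ compiles
    CFLAGS   = -O2 -Wall            # flags for the C compiler only
    CXXFLAGS = -O2 -std=c++17       # flags for the C++ compiler only
    # GNU make's built-in rule for .cpp files expands to roughly:
    #   $(CXX) $(CXXFLAGS) $(CPPFLAGS) -c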
I just learned something today, and confess to doing just that!
Spread the word, this is so common!
That’s how I read it, too. Probably because I learned C++ on Borland, which used .cpp rather than .cc.
That is, if you write all of your code yourself.
It’s saying that allocators are explicit, which is a useful feature. Libraries take them as parameters, rather than being able to call malloc() “behind your back”.
If you never initialize a heap allocator, then of course you cannot pass it to any library.
So libraries which use allocators and those which don’t are obvious. You can avoid initializing a heap allocator and still use libraries that don’t need allocators. You don’t have to write the code yourself.
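A small sketch of what that looks like from the caller’s side: the caller decides which allocator a function gets, so you can even hand it a fixed stack buffer and be certain nothing touches the heap (illustrative; greet is a made-up function, and std details differ slightly across Zig versions):

    const std = @import("std");

    // Made-up library function: it can only allocate through the allocator
    // it is handed.
    fn greet(allocator: std.mem.Allocator, name: []const u8) ![]u8 {
        return std.fmt.allocPrint(allocator, "hello, {s}!", .{name});
    }

    pub fn main() !void {
        // The caller picks the allocator. Here: a fixed stack buffer,
        // so this program never heap-allocates at all.
        var storage: [64]u8 = undefined;
        var fba = std.heap.FixedBufferAllocator.init(&storage);
        const allocator = fba.allocator();

        const msg = try greet(allocator, "zig");
        defer allocator.free(msg);
        std.debug.print("{s}\n", .{msg});
    }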
A pluggable allocator is provided by many well-designed C libraries like Lua and SQLite, but not all. There are also C libraries that promise not to do any allocation. So I think it’s good to put this in the language.
Is there anything preventing a library from bundling its own custom allocator and using it internally?
No, but Zig is declaring “accept an Allocator parameter to allocate” as a convention, and everything in the standard distribution is written that way. That is not a convention in the C world. Often such a convention is enough.
Fair enough. But that of course depends on the definition of “often” and “enough” :-) The article uses a much stronger wording though (“never” and “sure”).
The general case? “If copious grepping never turns up heap allocation, then you can be sure your program is never going to cause heap allocations.”
Until the 3rd maintainer of the project upgrades dependency C, which now allocates memory in the new version.
Because none of these languages have banned tabs ;)
To be clear, Zig the language allows tabs; it is only the stage-one compiler that bans them. Stage 2 will allow you to use tabs if you so choose.