While this doesn’t solve the problem, I love using the Elvish shell. It’s not statically typed, but it has enough programming language features, such as lists, maps, and builtin functions, that I can use it for the vast majority of my scripts.
These tools end up reimplementing programming language features like variables, conditionals, and function calls within the data structures supported by YAML. They support using YAML’s alias nodes feature to emulate variables or functions, but might additionally define their own variable feature that supports string interpolation.
The configuration languages CUE and Dhall seem to provide a better foundation for deployment tools than YAML does.
Wow, I’ve never heard of alias nodes. YAML is way more complicated than I thought. I use GH Actions but never really considered how configuring it is kind of like a language in itself.
The tag unions look like a really elegant solution to error handling in a functional language, where you usually either have
data AppError = FSError | HTTPError | AppSpecificError | ThisTypeIsTooBroad | ... and everything returns Either AppError,
some type class setup like class FromHTTPError a where fromHTTPError :: HTTPError -> a, or
an effect system.
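The first option above, transplanted into Rust terms (all names here are hypothetical), shows how a single broad error type forces every fallible function onto the same enum, whether or not all variants are relevant to it:

```rust
// One broad error enum that every fallible function funnels into,
// whether or not the variant is relevant to that function.
#[derive(Debug, PartialEq)]
enum AppError {
    FsError(String),
    HttpError(u16),
}

fn read_config() -> Result<String, AppError> {
    // A filesystem failure, reported through the shared type.
    Err(AppError::FsError("config.toml not found".into()))
}

fn fetch_page() -> Result<String, AppError> {
    // An HTTP failure, reported through the same shared type; callers of
    // read_config must nevertheless be prepared to match this variant too.
    Err(AppError::HttpError(404))
}
```

Tag unions avoid this by letting each function carry exactly the error cases it can actually produce.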
At first, I was a little disappointed by the lack of HKP, but I now think this is a good trade-off for better compile times and type inference. If I wanted a more complex type system, I could always go back to using Haskell.
I think this blog post gives a very good overview of the issues around unwrap. I still think it should be in std. The reason here is a structural one: if it weren’t available, people would write similar ad-hoc macros or functions for it.
Defining it as a std lib method enables quite a number of fine-grained lints around it. The most basic one is the one banning it altogether: https://rust-lang.github.io/rust-clippy/master/#unwrap_used (the search box gives more).
This is what I think as well. I agree with the blog post author that expect should be used instead of unwrap, but unwrap is useful for prototyping, and without it in the standard library, some people would just do .expect("") or, as you said, define their own function/macro.
I think that the unwrap_used rule should be enabled by default though and that there should be more documentation and guidelines strongly advising against unwrap.
I’m a bit of a Rust noob, but what do you do for a function call that should never fail? For example, I have a static site generator that parses a base URL out of a configuration file (validating that it’s correct). Later, that URL is joined with a constant string: base_url.join("index.html").unwrap(). What do I do here other than unwrap()? Should I use .expect(format!("{}/index.html", base_url)) or similar? It doesn’t make sense to return this as an error via ? because this is an exceptional case (a programmer error) if it happens.
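One way to handle this, in the spirit of the assertion-style answers later in the thread, is to put the invariant into the expect message. A stdlib-only sketch with a hypothetical last_segment helper (the url crate from the question isn’t used here):

```rust
// Sketch of unwrap-as-assertion on an invariant the surrounding code
// guarantees; `last_segment` is a hypothetical helper for illustration.
fn last_segment(path: &str) -> &str {
    // rsplit always yields at least one item (even for ""), so this
    // Option is provably Some; expect documents that claim and gives a
    // greppable message if the assumption is ever broken.
    path.rsplit('/')
        .next()
        .expect("rsplit yields at least one segment")
}
```

If the invariant ever fails, it is a programmer error, so panicking with a descriptive message is the appropriate behaviour rather than returning a Result.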
Yes. For quite a while it would be the line number of the panic call inside unwrap, though, so not super useful unless you were running with the RUST_BACKTRACE env var set. This was remedied in Rust 1.42.0.
One easy fix would be to augment cargo publish so that it scans for unwrap. Either disallow the upload or add a tag to the published package, with a link to how to fix this.
There are quite a few legitimate uses of unwrap, particularly in cases where the error case is impossible but a Result is still required, e.g. by a trait interface. Such a solution would be very heavy-handed, and I think the practical issues with unwrap are overrated - I rarely see them popping up in projects.
If an error is impossible, you should use std::convert::Infallible (which will eventually be deprecated in favour of the unstable ! (never) type) as the error type. Then you could do
fn safe_unwrap<T>(result: Result<T, Infallible>) -> T {
result.unwrap_or_else(|e| match e {})
}
Or with #![feature(unwrap_infallible)] you can use .into_ok():
fn safe_unwrap<T>(result: Result<T, !>) -> T {
result.into_ok()
}
I’m a new user (joined 5 months ago) and thank you for the great moderation and community here! This is one of my favourite parts of the Internet and it’s so refreshing to have a break from culture wars and focus solely on tech.
And in this system we can add e.g. 2m + 3s but the result is just 2m + 3s, nothing happens to it. So that is fine really.
I suppose this makes the most sense mathematically, but practically adding two units with different dimensions should be an error. There isn’t really a use for adding two incompatible units like this.
I don’t think it’s an error; you just get a complex number. Consider the cost of getting something shipped to you, which includes a price and a wait time, measured in dollars plus days. While those are distinct properties, they’re not separate, but related — lowering one drives up the other.
Whether or not it should be an error in a programming language is an economics question. Either is theoretically justifiable but I think in practice you’d have simpler, less surprising error messages if 1s+1m was an error but (1s, 0m)+(0s,1m)=(1s,1m).
Huh, I hadn’t thought about units as quotients. That’s neat.
My personal thoughts on units: any quantity is a combination of both a number representing the magnitude and a representation of the ‘exponent’ of each base unit, called their dimensions. So for example, 2.5 kg*m^2/s^2 has a magnitude of 2.5 and dimensions of [mass: 1, length: 2, time: -2]. “Raw” numbers just have all their dimensions set to 0.
You can apply a function f to an arbitrary quantity with dimensions only if it’s homogeneous: there’s some k such that for all numbers a, f(ax) = a^k*f(x). Otherwise you’re not scale-invariant: sin(1 m) vs sin(100 cm) vs sin(1 cm), that sort of thing.
This is why you can multiply quantities with units but not add them: multiplication is homogeneous, but addition isn’t (since 2x + y != 2(x + y)).
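This dimension-vector bookkeeping can be sketched with a fixed [mass, length, time] exponent array (the names and representation here are my own, not from any particular library): multiplication adds exponents, and addition is only defined when they match exactly.

```rust
// Exponents of [mass, length, time] travel with the magnitude.
#[derive(Debug, Clone, Copy, PartialEq)]
struct Quantity {
    magnitude: f64,
    dims: [i32; 3], // exponents of [mass, length, time]
}

impl Quantity {
    // Multiplication multiplies magnitudes and adds dimension exponents.
    fn mul(self, other: Quantity) -> Quantity {
        Quantity {
            magnitude: self.magnitude * other.magnitude,
            dims: [
                self.dims[0] + other.dims[0],
                self.dims[1] + other.dims[1],
                self.dims[2] + other.dims[2],
            ],
        }
    }

    // Returns None for the `2m + 3s` case instead of silently mixing units.
    fn add(self, other: Quantity) -> Option<Quantity> {
        (self.dims == other.dims).then(|| Quantity {
            magnitude: self.magnitude + other.magnitude,
            dims: self.dims,
        })
    }
}
```

So 2.5 kg*m^2/s^2 is { magnitude: 2.5, dims: [1, 2, -2] }, and a “raw” number has dims: [0, 0, 0].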
Conversion of type 'string' to type 'string[]' may be a mistake because neither type sufficiently overlaps with the other. If this was intentional, convert the expression to 'unknown' first.
is translated to
You can’t use ‘as’ to convert string into a string[] - they don’t share enough in common.
You’re correct that it has translated it that way (and accurately). If you’re used to doing the work of translating type errors into what they mean in your head from working in other typed languages, it may not be more clear… but it is, in my experience, much clearer and easier for someone reading along who is learning the language. Also: brevity is not always better for clarity (some opinionated manuals of style notwithstanding), and tone can go a long way to making the language feel more approachable to folks who are new to types—which is many, perhaps most TypeScript developers when they first start, given they’re coming from JavaScript.
Just to clarify, I was not criticizing the goal of the project. I was only suggesting that perhaps another example would be more enticing, given that’s the first thing a prospective user will see.
Great write-up! Saving this for when I’m ready to write my own programming language some day.
If you try to run 5 / "Hello", it won’t actually run the code; JS/Python will see that "Hello" has type string and will throw a runtime error instead of executing it.
I wish JS would throw a runtime error. Instead, because it’s JS, it coerces "Hello" into a number (NaN), so 5 / "Hello" evaluates to NaN.
I’m not an expert on OOP or FP, but I’ve used both and greatly prefer FP. I just find FP programs so much easier to understand than OOP programs. However, I think some concepts from OOP, such as encapsulation, have merit and can also be implemented in an FP style. I think there has been enough debate on FP vs OOP, and we should accept that ideas from both paradigms are valuable. While I personally dislike OOP and believe FP is the way forward, I will never flame someone for using OOP.
OOP and FP have opposite problems:
With OOP, methods are tied to data in such a way that it’s hard to extend an abstract data type with a new behaviour because it requires extending every class that implements that data type with the new method.
With FP, functions are specialised over data in such a way that it’s difficult to extend an abstract data type with a new concrete representation because it requires modifying all implementations of functions that operate over the data type.
Which is more of a problem depends on your problem domain.
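The two halves of this trade-off can be sketched in Rust (toy Shape types of my own, not from the discussion above):

```rust
// Data-centric (enum + match): adding a new operation is one new function,
// but adding a new variant means editing every match in the program.
enum Shape {
    Circle { radius: f64 },
    Square { side: f64 },
}

fn area(shape: &Shape) -> f64 {
    match shape {
        Shape::Circle { radius } => std::f64::consts::PI * radius * radius,
        Shape::Square { side } => side * side,
    }
}

// Behaviour-centric (trait): adding a new type is one self-contained impl,
// but adding a new method to the trait means editing every existing impl.
trait HasArea {
    fn area(&self) -> f64;
}

struct Circle { radius: f64 }

impl HasArea for Circle {
    fn area(&self) -> f64 { std::f64::consts::PI * self.radius * self.radius }
}
```

The first style mirrors FP’s algebraic data types; the second mirrors OOP’s classes and interfaces.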
Real-world code in either paradigm addresses these problems in similar ways: by defining a core set of methods / functions that are implemented by every instance of an abstract data type and building higher-level abstractions on top. My favourite example, NSString, provides a rich representation-agnostic set of methods that all depend on a concrete implementation providing at least two methods (get the length, get the character at an index) and ideally a third (copy a range of characters), with a fallback implementation of the third in terms of the first two for representations that don’t care about performance. In an FP style, these would be functions that you’d need to provide overrides for, and the other string functions would be implemented to call only these three functions with whatever generic type you provided as the string type.
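In Rust terms, this minimal-core pattern might look like a trait with two required methods and derived defaults (a hypothetical sketch, not NSString’s actual API):

```rust
// Two required methods; everything else is derived, with the derived
// range-copy overridable for representations that care about performance.
trait AbstractString {
    fn length(&self) -> usize;
    fn char_at(&self, index: usize) -> char;

    // Fallback implementation in terms of the two required methods.
    fn copy_range(&self, start: usize, end: usize) -> String {
        (start..end).map(|i| self.char_at(i)).collect()
    }

    // A higher-level operation built only on the core methods.
    fn to_uppercase(&self) -> String {
        (0..self.length())
            .flat_map(|i| self.char_at(i).to_uppercase())
            .collect()
    }
}

// A toy concrete representation: one character repeated n times.
struct Repeated { ch: char, n: usize }

impl AbstractString for Repeated {
    fn length(&self) -> usize { self.n }
    fn char_at(&self, _index: usize) -> char { self.ch }
}
```

Every new representation only has to supply the core methods; all the derived operations come for free.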
OOP does have one significant advantage over FP in terms of usability: noun-verb interfaces are more discoverable than verb-noun ones (‘I have a thing, I want to enumerate the things I can do with it?’ is a more common learning pattern than ‘I want to do a thing, what can I do the thing with?’) and OO languages are intrinsically noun-verb (you have an object, you call a method on it [or send a message, if you’re an OO purist]). This has nothing to do with the underlying semantics and a lot to do with the language syntax. Even in FP languages, the documentation tends to be organised in a noun-first style.
I very much agree with the last paragraph; in my humble opinion, that namespacing is the prime selling point of “OOP” while still being fully compatible with “FP”. It just happens not to be present in the popular OO languages of the 90s and early 2000s.
The code browser is very nice! When I first looked into Unison, I was interested to see what was included in the base library, but there seemed to be no way to do this without installing Unison.
For a human playing Wordle, I’m not sure I’d actually recommend starting with it, since it requires also knowing what it will do for second guesses. For example, here’s the start of the mapping for what it does with that second guess:
I don’t understand what’s wrong with the Nix language. Sure, the space-delimited lists can get a little annoying, but other than that it’s minimalistic enough to use as a configuration language but complex enough to do more complicated things like overriding derivations. Maybe it’s because I learnt Haskell before Nix, so the syntax is very familiar to me, whereas I can see how it could be a learning curve for other users.
What I find the most annoying is the lack of documentation for some of the nixpkgs functions. Some of the functions provided by (import <nixpkgs> {}).lib appear to be the same as the ones built in to the Nix language, and there doesn’t seem to be any clear guidance on when to use which version. I’ve also had to look at the source code to find out the difference between writeTextFile, writeText, writeTextDir, writeScript, and writeScriptBin. The docs explain writeTextFile, but the only documentation for the rest is
Many more commands wrap writeTextFile including writeText, writeTextDir, writeScript, and writeScriptBin. These are convenience functions over writeTextFile.
Additionally, it’s a bit frustrating for me how all the documentation for nixpkgs — the lib functions, how to make a derivation, specific details for building packages in certain languages, how to contribute to nixpkgs, overriding packages/overlays etc — are in one gigantic web page that’s quite slow to load and even slower to search for things in.
How much of the type checker is implemented? From the readme it looks like implementing abstract methods, calling super in a constructor, and missing properties are checked, but I wonder if more complex features like generics, mapped types, template string etc. are available.
The type checker for TypeScript is not sound. It relies on the fact that TypeScript is lowered to JavaScript to catch the cases where the type checker has to give up. This means that a TypeScript compiler really needs to be a JavaScript compiler that is optimised for cases where the type checker has sufficient information to give stronger guarantees.
Exciting to see builtin JSON support on the way, although I’m probably going to stick to xh (like HTTPie but in Rust) as it’s really easy to add JSON data, query parameters, headers, etc.
What is this? This sounds horrific.
A lot of deployment tools require you to define the deployment steps in YAML files:
Another turn around the configuration complexity clock, wheeeeeee! http://mikehadlow.blogspot.com/2012/05/configuration-complexity-clock.html
How does this compare to Logseq?
When you go to the playground and try to enable syntax highlighting, it tells you
Another Haskell implementation
You should just use unwrap(). I think of it like an assertion: if my assumption about this code is wrong, the program will crash. That’s perfectly acceptable in many situations.
Or expect(...), which is identical except for making grepping for the issue later easier if it does come up.
Doesn’t an unwrap()-induced crash always print the line number in addition to the default error message?
Interesting concept!
I believe this is how F# does it. The Rust crate uom also does this with its Dimension trait.
I’ve never used TypeScript, but am I correct in assuming that this package has translated:
to:
?
The TL;DR isn’t shorter, nor clearer. Probably a bad example for a prominent showcase.
There’s a better (in my opinion) example on the demo website:
Ah, I see – I agree that @cherryblossom’s sibling comment would indeed be a better motivating example!
Type placeholders will be so convenient!
Hm interesting, my opening has been DARES … surprised CRANE is better, though I still need to watch the video :)
From 3b1b’s comment regarding CRANE:
Nice.
I had just learned about HTTPie, but was bummed to read it’s Python based. Figured there’s a Rust alternative.