1.  

    I like that this space is being explored more! It’s something I’ve given a lot of thought to, and there is still much work to be done.

    The Rust crate clap can generate shell completions for you automatically at compile time (in Bash, Fish, Zsh, PowerShell, and Elvish).

    For an interesting real-world example, rustup uses this feature to generate completions via the CLI (so at runtime as well as at compile time). Ripgrep also uses this feature for completions (minus Zsh, whose completions are hand-written).
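
    For anyone curious what the compile-time route looks like, here is a rough sketch of the build.rs approach (clap 2.x style; `build_cli` and "myapp" are placeholder names, and the exact API depends on your clap version):

    ```rust
    // build.rs — regenerate completion scripts into OUT_DIR on every build.
    // clap must also be listed as a build-dependency; the same App definition
    // can be shared with main.rs via include! or a small module.
    use clap::{App, Arg, Shell};

    fn build_cli() -> App<'static, 'static> {
        App::new("myapp").arg(Arg::with_name("input").help("Input file"))
    }

    fn main() {
        let out_dir = match std::env::var_os("OUT_DIR") {
            Some(dir) => dir,
            None => return,
        };
        let mut app = build_cli();
        // One call per shell you want to ship completions for.
        app.gen_completions("myapp", Shell::Bash, &out_dir);
        app.gen_completions("myapp", Shell::Zsh, &out_dir);
        app.gen_completions("myapp", Shell::Fish, &out_dir);
    }
    ```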

    Full disclosure, I’m biased as I’m the clap author ;)

    1. 15

      I know it’s probably not the place for that post, but I’d like to see some more in-depth discussion of various sections, such as how Zig accomplishes some of its claims. For example, being faster than C; I see the discussion about UB allowing certain optimizations, but I’m curious about other areas that allow this performance. Or more safety discussion. I know Zig isn’t trying to be a safe language, but I believe safety is still a valid concern for all except maybe game devs, so any thoughts on ways in which Zig could inch toward being a safer C would be great.

      Also, the cross compiling appears almost magical (said in a good way!); I’d love to see more details on the implications of this. I.e., what trade-offs, if any, is Zig making to be able to support this? I assume there are trade-offs, because otherwise other similar languages would probably be doing the same.

      1. 20

        It seems like there’s no winning strategy with interviews. I hear both of these regularly:

        “Boo! This company is asking me theoretical problems that have no applicability to my actual work”

        “Boo! This company is asking me specific problems that directly relate to what I’ll be contributing”

        To me the problems are when 1) the time investment is not commensurate with the company’s legitimate interest in you (the process should fail fast), and 2) the company doesn’t respect you or your time – you waste lots of hours answering the same questions, or the total time is just excessive (> ~5 hours starts to get excessive, although it can be longer for a higher level position).

        For context, my company regularly asks candidates to design our own system (so far we’ve never incorporated ideas from this into our systems – we’ve been thinking about the problem for years, so it’s unlikely someone will come up with something groundbreaking in an hour). It’s a great problem because we understand all the corner cases and can answer questions easily, and performance is directly associated with outcomes.

        1. 3

          “Boo! This company is asking me specific problems that directly relate to what I’ll be contributing”

          I don’t believe this is what the OP is saying. The issue is when the specific problems you’re being asked to solve are unanswered at the interviewing company and answers result in revenue for said company. I enjoy being asked problems in the specific problem domain I’m interviewing for, but not when my answers are actually solving company problems without compensation.

          we’ve been thinking about the problem for years, so it’s unlikely someone will come up with something groundbreaking in an hour). It’s a great problem because we understand all the corner cases and can answer questions easily, and performance is directly associated with outcomes.

          This is better. But I’m always a bit hesitant with this style of interview because it’s easy for the interviewers to fall into knowledge bias. “This engineer isn’t a good fit because they suggested a design that is terrible, due to XYZ corner cases and performance implications [which we only know because we’ve spent years studying this problem domain. Sure we only gave them 30 minutes to come up with a solution, but how could they possibly NOT see all these ramifications that took us months to fully understand!].” It’s a dangerous road that’s only human to fall prey to.

          1. 1

            but not when my answers are actually solving company problems

            Is this just about the principle? Like in general you really have no way of knowing whether an interview is useful to the other party or not. To flip it around — sometimes candidates use an offer to negotiate on their current role (and I say good for them, as long as they went into the process in good faith).

            If the company is genuinely interested in hiring you, and you’re genuinely interested in working for that company, why does it matter if someone learns something along the way?

            1. 2

              Sorry, I may not have clearly articulated my thoughts. Yes, on principle. I’m not opposed to each side learning something along the way. I think people should always be learning. I’m only opposed to the more blatant approaches, and only when uncompensated.

              I don’t think it’s morally right for companies to take advantage of anyone, especially potential employees! Maybe there isn’t true malice, and some of these companies are simply taking advantage of a situation to “make the most of it.” I.e., “well, we have to interview anyway, let’s see if we can also get some real work thrown in at little or no cost.” I think this is wrong. To be fair, as the interviewee there is no way for me to know this provided the interviewer is crafty enough. It’s easy to take a business scenario and make it look like a generic coding exercise.

              Some of my co-workers have stories of doing a full day of pairing “as part of the interview” without compensation. To me that’s terrible! It’s passed off as, “Let’s just make sure you’re a good fit for the actual workload and team you’ll be a part of.” Which sounds noble, and better than your stereotypical whiteboard interview, but if that were truly the case, the company should be willing to pay a consultation fee.

          2. 2

            With one position I’ve interviewed people for, I’ve asked them to implement a particular functionality that was already implemented and released in our app, and I encouraged them to use it as a reference (although by no means a perfect one, they could feel free to improve on it).

            1. 1

              If you want to work for free you can do it, nobody is stopping you.

            1. 5

              I’ve been using just for a few years now and am very happy with it. Although I started using it somewhat like make, I’ve since grown to use it in areas like baseline setups and certain task-automation situations where I don’t want to use bare shell scripts. It’s been great, and constantly improving!

              1. 10

                That’s understandable. There aren’t many ways to make sustainable money from open source, and having an open core with proprietary add-ons seems like a reasonable compromise between sharing the code and having a defensible business.

                1. 3

                  I was also very optimistic about the open core model, but recent events have shown how vulnerable open core startups are against the cloud giants, who have the resources and the incentives to replicate the proprietary shell once the core is popular enough.

                  1. 1

                    That’s exactly the motivation behind Redis’ and MongoDB’s license changes, isn’t it?

                  2. 4

                    The sad thing is that the actual goals of Free Software & Open Source would be served by the AGPL: any company using AGPLed server software must make it available to end users to read, modify & redistribute, which means that the original authors would also have access to it.

                    1. 2

                      I agree, but that’s predicated on two things:

                      1. The company complying with the license (which we’ve seen not all do)
                      2. The company using the (A)GPL’ed software in the first place

                      Some companies won’t (or try not to) use GPL software because of what the license entails. So it isn’t a given that the software in question would be in the exact same predicament had the liberal license been a GPL variant. The software may not have taken off like it did.

                      1. 3

                        Which leads to this weird scenario:

                        – Here’s a permissive license, so that companies can use it!
                        < Companies take it and make a proprietary product from it >
                        – Wait, not like that!

                        We end up having unwritten rules about what permissive licenses permit. We expect the code to remain open. We expect companies to give back. But the license doesn’t actually require any of this.

                        1. 1

                          At the end of the day, companies will do what makes economic sense. There was a time when they refused to use GPLed software out of superstition, and then once a few showed that they could make a profit by opening up development and charging for services others followed suit.

                          Likewise, would Amazon really care if it had to, for example, give back its updates to Redis or PostgreSQL? AWS is still loads easier to manage than running those things manually, and (most) competitors won’t have the name recognition or integration that AWS has. There’s really no reason other than superstition for Amazon not to deliver (some) services using AGPLed software.

                          Regarding the first point, all it takes is a few pointed judgements and folks tend to fall in line.

                    1. 6

                      I agree we can do better, and custom language-specific solutions will almost always beat out a language-agnostic one. However, I didn’t see the author mention optimization techniques like inlining and Link Time Optimization (LTO), which I believe are key reasons why seemingly disparate portions actually end up intertwined and thus require rebuilding.

                      1. 4

                        Good point on LTO and optimisation. The thing is, building an optimised binary is something I rarely do and don’t particularly care about. It’s all about running the tests for me.

                        1. 4

                          That’s kind of what I figured, but I didn’t see it mentioned that this applies only to debug builds. For debug builds, I totally agree with you!

                      1. 3

                        Desktop-focused Linux distros such as Linux Mint and ElementaryOS don’t do this.

                        Genuinely curious why the author thinks teams like Elementary don’t focus on UX? I see elementary as extremely similar to Pop!_OS, just with a different shell. I’d actually argue that elementary puts more time into consistency and UX than Pop.

                        Granted, personally I prefer Pop to elementary (and am writing this from a Pop machine). However, my reasons stem from elementary having that layer of polish that is ultra wide, but only a mm deep. The cracks start to show after a few weeks of use. This might be due to Pantheon, or (IMO only) spreading themselves thin across Pantheon (and all its components), AppCenter, maintaining Vala, and all the “custom” apps (Code, Mail, Browser, etc.) for such a small team. Pantheon and the AppCenter I get, but all the custom apps (minus maybe Files) feel like a bit of a time sink when there are so many other bugs I would prefer to see tackled. But these are my observations. I’d be curious why the author feels as they do.

                        1. 2

                          owned ref / ref

                          The author mentions Rust’s Ownership and Borrowing semantics briefly, but I’m curious how the proposed solution of owned ref/ref would work without the additional type information Rust relies on to determine what’s valid.
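
                          For contrast, here is a tiny sketch of the kind of type information Rust leans on: ownership moves, shared borrows, and lifetimes are all visible in the signatures, which is what lets the compiler decide what’s valid.

                          ```rust
                          // Ownership and borrowing spelled out in the types (plain Rust,
                          // not the article's proposed owned ref/ref syntax).

                          fn consume(s: String) {
                              // Takes ownership: the caller can no longer use the String afterwards.
                              drop(s);
                          }

                          fn inspect(s: &str) -> usize {
                              // Shared borrow: read-only access, the caller keeps ownership.
                              s.len()
                          }

                          // The returned reference is tied to the lifetime of the borrow of `v`,
                          // so the compiler can reject any use of it after `v` goes away.
                          fn first<'a>(v: &'a [i32]) -> Option<&'a i32> {
                              v.first()
                          }
                          ```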

                          1. 2

                            Hi - OP here. Thanks for taking the time to detail your objections. I’ve edited the post to add some clarity (appreciate it) - I think I can also respond to some of your input thus far.

                            When really understanding Big O it’s important to think in terms of actual use-case.

                            I think the biggest bit of feedback I’ve gotten so far is along the lines of “well, in the real world Rob I’m more concerned about actual performance so Big O can take a leap…” and I understand that. Big O, unfortunately, doesn’t concern itself with use cases, nor should it. It’s just a mathematical adjective for code efficiency.

                            Depending on the system a structure with O(n) lookup can vastly outperform O(log n) or even O(1) lookup structures for certain numbers of n.

                            I think that’s your opinion, which is fine, but just saying it doesn’t make it so. It would be helpful if you added a few examples to illustrate this?

                            Something isn’t O(1) because it only requires a single operation. It’s O(1) because the operations required are constant time and don’t change as the data set grows or shrinks.

                            Good point - updated the post!

                            The biggest issue here is that several structures the author speaks of might truly have O(log n) access time; however, creating the structure and keeping it that way can require far more time and make this optimization worthless in certain circumstances. I.e., sorting a structure to ensure O(log n) lookup could be an O(n^2) operation.

                            You’re talking about the indexing operation (which is n log n) which, yes, takes time. This has nothing to do with the lookup time, which is log n and will always be faster than the O(n) fuzzy find. I do agree that indexing (and programming at large) is hard and there are things we need to consider. Understanding Big O does not preclude this.

                            1. 2

                              I appreciate your reply! To be clear, I wasn’t trying to imply that you don’t understand Big O, just that the way in which it was explained can lead the reader to false conclusions, which can have a negative impact ;)

                              I think that’s your opinion, which is fine, but just saying it doesn’t make it so. It would be helpful if you added a few examples to illustrate this?

                              For example, linear search (O(n) lookup) through a packed array will almost always blow a hashmap (O(1) lookup) out of the water on modern CPUs for “smallish” values of n (depending on the exact hashing method and implementation utilized, along with the size of element itself, “smallish” here can mean thousands of elements).

                              This is due to how memory access and the CPU cache work. It’s far more expensive to hash an element (especially a large sparse object) and then conduct an essentially random memory access to a location (which is then sometimes followed by a linear search bound by some constant, or offset access through that location). Linear search through an array simply loads m elements of n into cache (even if you can’t fit all of n into cache, the memory access isn’t random but linear/locally bound, so pre-fetching pays dividends) and performs an extremely fast iteration through them.
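
                              To make that concrete, here is a rough (untested) sketch of the kind of comparison I mean: a linear scan over a small packed Vec vs a HashMap lookup. For “smallish” n the O(n) scan often holds its own or wins, even though the map is O(1); where the crossover lands is entirely machine- and implementation-dependent.

                              ```rust
                              use std::collections::HashMap;
                              use std::time::Instant;

                              fn main() {
                                  let n: u64 = 256; // "smallish"; the crossover point is machine-dependent
                                  let vec: Vec<(u64, u64)> = (0..n).map(|k| (k, k * 2)).collect();
                                  let map: HashMap<u64, u64> = vec.iter().cloned().collect();
                                  let keys: Vec<u64> = (0..n).collect();

                                  // O(n) lookup, but contiguous memory: the cache and prefetcher do the heavy lifting.
                                  let start = Instant::now();
                                  let mut sum = 0u64;
                                  for &k in &keys {
                                      if let Some(&(_, v)) = vec.iter().find(|&&(key, _)| key == k) {
                                          sum += v;
                                      }
                                  }
                                  println!("linear scan: {:?} (sum={})", start.elapsed(), sum);

                                  // O(1) lookup, but each probe hashes the key and jumps to a fairly random bucket.
                                  let start = Instant::now();
                                  let mut sum = 0u64;
                                  for &k in &keys {
                                      if let Some(&v) = map.get(&k) {
                                          sum += v;
                                      }
                                  }
                                  println!("hashmap:     {:?} (sum={})", start.elapsed(), sum);
                              }
                              ```

                              (With a toy loop like this the optimizer can distort things, so treat it as an illustration of the memory-access argument, not a real benchmark.)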

                              Also remember Big O describes the worst case. With O(1), the actual time is the same across all cases. Since we’re talking about actual time, the O(n) search could end up finishing at n/2 in the average case.

                              You’re talking about the indexing operation (which is n log n) which, yes, takes time. This has nothing to do with the lookup time, which is log n and will always be faster than the O(n) fuzzy find.

                              My point was we can’t look at access times in a vacuum. Creating a structure that has O(log n) access time can take far greater time than creating a structure with O(n) access time, by orders of magnitude in some cases. Therefore, yes, if access time dominates the use-case and the creation/insertion times only happen once (or rarely) sure a structure with O(log n) access or better is the best option.

                              However, even if we were to consider access time in a vacuum, I also don’t subscribe to “O(log n) will always beat O(n).” The answer is: it depends. It depends on the exact structure we’re talking about, the implementation used, and the size of each element.

                              Again, going back to a simple tightly packed array vs something like a BTree: the BTree (again, depending on implementation) will do multiple random memory accesses (cache misses and branch mis-predicts), again probably with several linear searches bound by a constant, whereas the array can simply rip through elements much faster (benefits of cache locality, linear/local memory access, pre-fetching, branch prediction, etc.).

                              It’s not until n gets large enough that doing fewer comparisons outweighs having to do so many random memory accesses.

                              Never underestimate the power of cache ;)

                            1. 16

                              While this post simplifies Big O, I think it over-simplifies to the point of giving incorrect information. My biggest gripe is that it gives no consideration to anything other than lookup time complexity.

                              When really understanding Big O it’s important to think in terms of actual use-case. Does the system do like 3 lookups, but billions of insertions? Picking a structure that has O(1) lookup but O(2^n) insertion would be crazy. What about space complexity (space Big O)?

                              Depending on the system a structure with O(n) lookup can vastly outperform O(log n) or even O(1) lookup structures for certain numbers of n.

                              Here’s a few things I found off:

                              O(1)

                              Something isn’t O(1) because it only requires a single operation. It’s O(1) because the operations required are constant time and don’t change as the data set grows or shrinks.

                              If something requires 1,000 different operations, that were slow as molasses, yet it was those same slow 1,000 operations no matter how large or small n, it would still be O(1).

                              To find something, you just need to know its key. You don’t have to run a loop or do some complex find routine – it’s just right there for you.

                              This leads the reader to believe any lookup could be O(1) so long as you’re not implementing it yourself. A hashmap lookup, for example, can actually be a complex find operation, but it’s not bound by n, hence O(1).
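
                              A hand-waved sketch of what I mean: a (very naive) hashmap get does a handful of steps, none of which grow with the number of keys, so it’s still O(1) (assuming a sane hash and load factor):

                              ```rust
                              // Deliberately naive: hash the key, pick a bucket, scan the (bounded) bucket.
                              // Several operations, but none of them scale with n.
                              fn get<'a>(buckets: &[Vec<(u64, &'a str)>], key: u64) -> Option<&'a str> {
                                  let hash = key.wrapping_mul(0x9E37_79B9_7F4A_7C15); // toy hash function
                                  let index = (hash as usize) % buckets.len();
                                  for &(k, v) in &buckets[index] {
                                      if k == key {
                                          return Some(v);
                                      }
                                  }
                                  None
                              }
                              ```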

                              O(n)

                              The only issue I have is saying:

                              This is my rule of thumb: “if there’s a loop, it’s O(n)”.

                              Which isn’t terrible advice, but it should be qualified to say, “if there’s [only a single] loop [bound by n], it’s O(n)”

                              O(n^2)

                              Same as above:

                              My rule of thumb here is that if I have to use a loop within a loop, that’s O(n^2).

                              Not terrible, but it should again clarify that the inner loop is also bound by n.
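
                              A toy illustration of those qualifiers: the first function has a single loop bound by n, so it’s O(n); the second has an inner loop also bound by n, so it’s O(n^2); the third loops, but over a constant number of elements, so it stays O(1).

                              ```rust
                              // Single loop bound by n: O(n).
                              fn sum(xs: &[i64]) -> i64 {
                                  xs.iter().sum()
                              }

                              // A loop within a loop, both bound by n: O(n^2).
                              fn count_pairs_with_sum(xs: &[i64], target: i64) -> usize {
                                  let mut count = 0;
                                  for i in 0..xs.len() {
                                      for j in (i + 1)..xs.len() {
                                          if xs[i] + xs[j] == target {
                                              count += 1;
                                          }
                                      }
                                  }
                                  count
                              }

                              // A loop bound by a constant, not by n: still O(1).
                              fn checksum_of_first_16(xs: &[i64]) -> i64 {
                                  xs.iter().take(16).sum()
                              }
                              ```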

                              O(log n)

                              The biggest issue here is that several structures the author speaks of might truly have O(log n) access time; however, creating the structure and keeping it that way can require far more time and make this optimization worthless in certain circumstances. I.e., sorting a structure to ensure O(log n) lookup could be an O(n^2) operation.


                              The moral of the story is think, design, build and profile. Don’t just blindly look for O(1) lookup structures.

                              1. 72

                                Cargo is mandatory. On a similar line of thought, Rust’s compiler flags are not stable

                                This is factually false. Everything in rustc that is not behind -Z (unstable flags) is considered public interface and is stable. Cargo uses only the stable (non -Z) interface, so it can be replaced.

                                I also don’t agree with the rest of the statement; integrating cargo into other build systems is a problem that would get worse if it were solved badly, and it is terribly hard to find an interface that helps even “most” of the consumers of such a feature. Yes, it always looks like “not caring” from the side of consumers, but we have a ton of people to talk to about this, so please give that time? There’s the unstable build-plan feature which allows exporting data from cargo, so please use it and add your feedback.

                                A lot of the arguments boil down to “not yet mature enough” (which I can easily live with, given that the 4th birthday of the 1.0 release is in 1.5 months) or - and I don’t say that easily - some bad faith. For example, Rust doesn’t have a (finalized!) spec, yes, but it should also be said that lots of time is poured into formally proving the stuff that is there. And yes, we’re writing a spec. Yet again, there is almost no practical language today that had a formalized and complete spec matching the implementation out of the door!

                                I also don’t agree with the statement that Rust code from last year looks old, code churn around the 2018 edition was rather low, except that you could now kill a lot of noisy lines, which a lot of projects just do on the go.

                                I’m completely good with accepting a lot of the points in the post, and please have your rant, but I can’t help but feel like someone wanted to grind an axe and highlight their Mastodon posts instead.

                                Finally, I’d like to highlight how much effort Federico from Meson has put into exploring exactly the build system space around Rust in a much better fashion. https://people.gnome.org/~federico/blog/index.html

                                1. 3

                                  Yet again, there is almost no practical language today that had a formalized and complete spec matching the implementation out of the door!

                                  This is factually false? JavaScript has a superb spec and also has a formalized spec. Practically speaking, the formalized spec is not very useful yet, so if we restrict ourselves to complete specs, all of C, C++, Java, C#, and JavaScript have complete specs supported by multiple independent implementations. Rust’s spec as it exists is considerably less complete and useful compared to those specs.

                                  1. 16

                                    My point is: Did all of those have it out of the door?

                                    Yes, the current spec is not useful for reimplementing Rust and that has to change. My point is that it’s rare to see languages that have such a spec 3 years out of the door.

                                    1. 25

                                      Java was released in 1996 together with the Java Language Specification written by Guy Steele and co (zero delay). C# was released in 2002 and ECMA-334 was standardized in 2003 (1 year delay). Compared to Java and C#, Rust very much neglected work on a specification, primarily due to scarce resources. My point is that even after 3 years, unlike Java and C#, there is no useful spec of Rust.

                                      1. 4

                                        Why did Steele write the Java spec? Usually there is little value in writing a spec if there is only one implementation. Did they write the Java spec because Microsoft made its own Java?

                                        Also, Python has no spec although it has multiple implementations and it is certainly a useful and successful language.

                                        1. 2

                                          I believe Python does have a spec. “Don’t rely on dict ordering” was a consequence of saying “the Python spec doesn’t specify this even if CPython in fact orders it”, though this has changed. Not closing files explicitly is considered incorrect from a spec perspective even though CPython will close files on file object destruction.

                                          It’s not the C++ language spec, but there is a good number of declarations about “standard Python behavior”.

                                          1. 3

                                            By that logic, so does Rust. They both follow almost identical processes of accepting RFCs and documenting behavior.

                                            1. 1

                                              I’m agnostic to the “Rust having a spec” question. I have not thought about it more than today.

                                              Python has the reality of having multiple mature implementations (I’m not sure if this is true of Rust?) so there’s actually a good amount of space for a difference between spec and impl.

                                              I also think there’s actually an ongoing project to define a Rust spec? It feels like a “Rust spec” is pretty close to existing, at least in a diffuse form.

                                          2. 1

                                            Usually there is little value in writing a spec if there is only one implementation.

                                            There is a lot of value in writing down the conclusion of a discussion. When the conclusions are about formalization, it adds value to write it down as formally as reasonable. That enables other humans to check it for logical errors, functional problems, etc. and catch those before they are discovered while coding or even later.

                                          3. 1

                                            You’re right. Coming from a background of more dynamic languages (Ruby/Python/etc.), I’m more used to their pace of speccing.

                                            1. 0

                                              hm - i was against you until this comment

                                              that’s a good point - perhaps mozilla wants hegemony over the language and wants to prevent other rust implementations - i wonder if any other serious implementations even exist currently?

                                              1. 14

                                                I don’t think that’s the case. Spec writing is a very specific skill, and you pretty much need to hire spec writer to write specification. Mozilla didn’t invest in hiring Rust spec writer. (They did hire doc writer to produce the book.) Since Java and C# did invest in specification, it is right and proper to judge Rust on the point, but then Mozilla is not as rich as Sun and Microsoft were.

                                                1. 13

                                                  Rust is independently governed from Mozilla; while there are Mozilla employees on the teams, there was a deliberate attempt to make Rust its own project a bit before 1.0.

                                                  There are active attempts to specify parts of Rust: we have a group of people attempting to pin down the formal semantics of unsafe code so that it can be specified better (we need to figure this out before we specify the rest of it).

                                                  Specifying a language is a huge endeavor, it’s going to take time, and Rust doesn’t have that many resources.

                                                  1. 6

                                                    Equally likely that Mozilla doesn’t want hegemony over Rust, and so doesn’t put a lot of effort into the things that don’t benefit them directly as much. Java and C# were both made by large companies that needed a standard written down so that a) they could coordinate large (bureaucratic) teams of people, and b) they could keep control over what the language included.

                                                    There’s already one alternative Rust implementation: https://github.com/thepowersgang/mrustc . Afaik it’s partial, but complete enough to bootstrap rustc.

                                                    1. 19

                                                      (Yes, and…) Having worked not on, but near, the Microsoft JavaScript and C# teams, I can tell you that in both cases the push for rapid standardization was to a significant degree a result of large-corporation politics. For JavaScript, Netscape wanted to control the language and Microsoft put on a crash effort to participate so it wouldn’t be a rubber stamp of whatever Netscape had. For C#, Microsoft wanted to avoid the appearance of a proprietary language, so introduced it with an open standards process to start with. In both cases somebody had to write a spec for a standards process to happen.

                                                      BTW, the MS developers had some “hilarious” times trying to write the JavaScript spec. The only available definition was “what does Netscape do”, and pretty often when they tried to test the edge cases to refine the spec, Netscape crashed! Not helpful.

                                                    2. 3

                                                      i wonder if any other serious implementations even exist currently?

                                                      There is mrustc, although I haven’t followed development of it lately, so I’m unsure of the exact roadmap.

                                                      1. 2

                                                      mrustc doesn’t do lifetime checking at all, and it is notoriously unspecified how exactly that should work (as in: what must be accepted, and what not?).

                                                2. 14

                                                  This debate is Rust vs C. Rust had a good design imitating the strengths of various languages, with a spec to come later. C was a slightly extended variant of B and BCPL, which was the bare minimum of what compiled on an EDSAC. Unlike Wirth’s languages, it wasn’t designed for safety, fast compiles, or an easy spec. Pascal/P was also more portable, with folks putting it on 80 architectures in a few years. Even amateurs.

                                                  As far as a spec goes, we got a C semantics with undefined behavior decades after C, since the “design” was so rough. I can’t recall if it covers absolutely everything yet. People started on safety and language specs for Rust within a few years of its release. So, Rust is literally moving decades faster than C on these issues. I’m not sure it matters to most programmers, since they’ll just use the Rust compiler.

                                                  C is still ahead, though, if combined with a strict coding style and every tool we can throw at it. Most C coders don’t do that. I’m sure the author isn’t, given what he stated in the article.

                                                  EDIT: Post was a hurried one on my phone. Fixed organization a bit. Same content.

                                                  1. 2

                                                    C is still ahead, though, if combined with a strict coding style and every tool we can throw at it. Most C coders don’t do that. I’m sure the author isn’t, given what he stated in the article.

                                                    This is something I’m always quite surprised by. I can’t understand why some don’t even use the minimum of valgrind or the sanitizers that come with the compiler they use; also, cppcheck, scan-build, and a bunch of other free C++ tools work wonders with C as well.

                                              1. 3

                                                My 2019 resolution is to reduce for-learning books and increase for-fun books. Here are some of the for-fun books on my list.

                                                • Mistborn, Brandon Sanderson
                                                  • The Final Empire
                                                  • The Well of Ascension
                                                  • The Hero of Ages
                                                • Wheel of Time, Robert Jordan
                                                  • The Eye of the World
                                                • The Axe, Donald Westlake
                                                • Le Comte de Monte Cristo, Alexandre Dumas
                                                • Discworld, Terry Pratchett
                                                  • The Colour of Magic
                                                • The Pillars of the Earth, Ken Follett
                                                • Hitchhiker’s Guide to the Galaxy, Douglas Adams
                                                • The Lord of the Rings, J.R.R. Tolkien
                                                  • The Hobbit
                                                  • The Fellowship of the Ring
                                                  • The Two Towers
                                                  • The Return of the King
                                                • The Handmaid’s Tale, Margaret Atwood
                                                • Malazan, Steven Erikson

                                                In my for-learning list, I have a couple of history books and I’ll probably want to add a couple of books on satisfiability (probably Knuth’s) and database implementation.

                                                1. 2

                                                  Nice list. If you haven’t already read it I’d also like to suggest the Stormlight Archives by Brandon Sanderson. I wasn’t a huge fan of the Mistborn series but the Stormlight Archives is one of my favorite series.

                                                  1. 1

                                                    I was coming to say the same thing; the Stormlight Archives are so good! I found them prior to any other Sanderson books, so I’ll be going through Mistborn next, but I feel like these have set the bar really, really high.

                                                  2. 1

                                                    Since you seem to like a certain type of fantasy, may I suggest The Blade Itself by Joe Abercrombie? I’ve read most of the books on your list, and I thought Abercrombie’s First Law series was up there with the best of them. The audiobook was particularly well done.

                                                    1. 1

                                                      Looks like you want to try Terry Pratchett? I would not start with the first one. It is not as good as his later books.

                                                      1. 1

                                                        Mistborn is great, it was my introduction to Sanderson as well (and after reading them I binge-read the rest of his books). If you want to keep going with him after the original Mistborn trilogy, I’d recommend reading Warbreaker and then diving into the Stormlight Archives. As @qznc stated, I’d recommend starting with a different Pratchett book. What a friend got me to start with was Thud, which I absolutely loved. That being said, it looks like you’ve picked a great list! (Also, just in case you aren’t aware, Hitchhiker’s Guide is a series, as is Malazan)

                                                        1. 1

                                                          As @qznc stated, I’d recommend starting with a different Pratchett book. What a friend got me to start with was Thud, which I absolutely loved.

                                                          There is also Good Omens. Which is what you would get if you were to cross a Terry Pratchett book & a Neil Gaiman book… probably because it is a Terry Pratchett & Gaiman book.

                                                          1. 1

                                                              I heard of Sanderson because of his videos on writing on YouTube and I figured I’d read what he wrote. Really enjoying Mistborn: The Final Empire at the moment. For Discworld, it came recommended by a colleague at work; is it okay if I don’t read them in order? For HHGG, it’s a re-read, and Malazan is going to be an attempt at the series; we’ll see if I stick with it.

                                                        1. 5

                                                          The module system is a strange beast indeed! It’s one of those things that when reading base level tutorials you think, “Yep that makes perfect sense!” …then you try and do something with it and get totally lost. But like the borrow checker, once it “clicks” it clicks! And you can see where the design decisions were made, and why.

                                                          I think the combination of allowing the flexibility to use mod.rs and directories and inline mod { /* .. */ }s is great once it clicks…but can get overwhelming at first.

                                                            Add to that the rules around use and pub use or mod and pub mod, and you get an extremely flexible system, but one that’s tough to grasp up front.
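
                                                            For anyone still in the “totally lost” phase, here is a tiny sketch of how the pieces line up (module names here are made up): the same tree can live inline or in files, and pub use controls the path callers actually see.

                                                            ```rust
                                                            // src/lib.rs
                                                            //
                                                            // The equivalent on-disk layout would be:
                                                            //   src/net/mod.rs   <- pulled in by `mod net;` (or src/net.rs in the 2018 edition)
                                                            //   src/net/tcp.rs   <- pulled in by `mod tcp;` inside net

                                                            mod net {
                                                                pub mod tcp {
                                                                    pub fn connect() {
                                                                        println!("connecting…");
                                                                    }
                                                                }
                                                            }

                                                            // Re-export so callers can write `my_crate::connect()`
                                                            // instead of `my_crate::net::tcp::connect()`.
                                                            pub use crate::net::tcp::connect;
                                                            ```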

                                                          1. 13

                                                            This post is riddled with falsehoods and misapprehensions. Addressing it point by point:

                                                            1. “A cap at 21M. … no one thinks it’s a good idea”. That’s a matter of opinion. Gold and precious metals served very effectively as a currency standard for hundreds or thousands of years and they have a similar attribute of limited supply. The economists who think this is a bad idea are the same people who support centralised control of currency. Bitcoin is all about creating a decentralised currency which removes centralised control - and Bitcoin’s algorithmic control of money supply is a direct result of that. So really this is about philosophical viewpoints rather than who’s right or wrong.

                                                            2. (On why Ethereum’s blockchain is better than Bitcoin’s) Bitcoin is a moving target. It was the first of its kind and so it didn’t have some improvements that have been discovered since. Many of these will probably be added over time.

                                                            3. This is false: “…made everyone waste bandwidth on downloading blockchain from scratch”. Most users have wallets which don’t need to download the blockchain at all (SPV and web based types). Only people with special use cases need to download the whole blockchain.

                                                            4. This is also false: “He offered ‘new payment — new address’ as a rule”. It’s not a rule. You can certainly do it but it’s by no means a rule. The rest of this section is based on the false assumption that Bitcoin is anonymous. Bitcoin doesn’t claim to offer anonymity, only a weak kind of pseudonymity. You have to go to cryptocurrencies with an emphasis on privacy like zcash and monero for full anonymity.

                                                            5. “He never defined clearly threat model of Bitcoin” is not related to the original claim “Satoshi was wrong” at all so this whole point is irrelevant. It just means he didn’t fully explore an aspect of it. Which isn’t surprising since it was totally new at the time and his writings pre-date most of the current implementation.

                                                            6. Another false point: “there’s no clear strategy or way of resolution of conflicts”. Satoshi defined a scheme where multiple implementations could exist at the same time and compete for popularity where popularity is defined as accumulated hash rate. He said that the chain with the greatest proof of work was the true implementation and the technology supports exactly that mechanism for conflict resolution. What we’ve seen recently is a bunch of conflict but it is indeed being resolved by that exact mechanism.

                                                            1. 7

                                                              A cap at 21M. … no one thinks it’s a good idea”. That’s a matter of opinion. Gold and precious metals served very effectively as a currency standard for hundreds or thousands of years and they have a similar attribute of limited supply.

                                                              The limited supply is a problem because there’s not enough to go around in a growing economy. You simply cannot remint all the coin all the time to keep the currency circulation large enough once you go industrialised. You need paper money for that.

                                                              As is bitcoin is not being used as a currency, it is being used as an ‘investment’ whereby people buy it hoping the price will go up. That’s not how currency is supposed to work.

                                                              1. 7

                                                                The limited supply is a problem because there’s not enough to go around in a growing economy.

                                                                Please explain what you believe to be the problem without using vague and easily misconflated concepts like “enough” and “go around”.

                                                                In case you’re speaking literally, there are 2.1*10^15 fungible units of Bitcoin (Satoshis). This is plenty for all humans on earth to represent their wealth. If the world switches to 100% bitcoin and we end up feeling the squeeze of a single Satoshi being a bit big, we can easily add more decimal places.
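
                                                                  (For the arithmetic: the 21M cap times 10^8 satoshis per bitcoin gives 21,000,000 * 100,000,000 = 2.1*10^15 satoshis.)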

                                                                Currency isn’t “supposed to work” in any particular way unless you believe the teleological purpose of currency is to facilitate force-backed centralized control of economies. If a commodity is fungible, divisible, and easy to transport, it’s perfectly reasonable to use as a currency. People will autonomously choose media of exchange with good properties.

                                                                1. 3

                                                                  The limited supply is a problem because there’s not enough to go around in a growing economy.

                                                                    Bitcoin is divisible in an extremely easy manner. Not that I’m advocating for every human on the planet owning a whole Bitcoin, but one shouldn’t think in those terms. What should be common is people holding MilliBitcoin (mBTC, or 0.001 BTC), or even MicroBitcoin (uBTC, or 0.000001 BTC). In these cases there is plenty to “go around.”

                                                                  The primary reason 21M could be seen as bad is that it makes the currency deflationary (increases in value over time) vs inflationary (decreases in value over time) like standard fiat currency. Being deflationary in nature isn’t actually a problem so long as the deflation happens slowly and predictably. If Bitcoin were stable enough that the value only increased 1 or 2 percent a year, being deflationary wouldn’t be as big of an issue.

                                                                  1. 2

                                                                    “Not enough” is not a problem. You can simply move the comma. Do your accounting in millibitcoins or satoshis. While 1 Satoshi is currently the smallest unit, the software could be changed to split it even further if 1BTC is worth $1 million, for example.

                                                                    Bitcoin price becomes more stable over time the more people own it. When Bitcoin is as pervasive as dollars, yen, or euros it will be equally stable.

                                                                    1. 4

                                                                      “Not enough” is not a problem. You can simply move the comma.

                                                                        This is what economists call the “quantity theory of money”, and empirical data shows that demand in most circumstances is more important than the money supply, and that, especially with speculation, changes in the value of a commodity introduce feedback loops that cause instability.

                                                                      Bitcoin price becomes more stable over time the more people own it.

                                                                        Have you seen the graphs of the price? There is no evidence that Bitcoin is becoming “more stable”; if anything, it’s as unstable as it was years ago as a proportion of its total value. I’m sorry, but what you speak of is blind faith, not reality.

                                                                      1. 2

                                                                        The latest big dump of Bitcoin was $5000 down to $3000, which is 40% in 13 days. Back in 2015, you can find 40% dumps in a single day. Back in 2013, there is 40% up one day and 50% down the next.

                                                                        For a comparison, in 2014 EURUSD fell 25% in 300 days.

                                                                        1. 2

                                                                          You were telling me about how the age of bitcoin fluctuation is over?

                                                                  2. 2

                                                                    The economists who think this is a bad idea are the same people who support centralised control of currency.

                                                                    Deflation is generally considered bad. However, I was never convinced by the arguments. The basic argument is “why would you buy something now if you can buy more later?” together with the assumption that increasing consumption is good for the economy. Now, for technology we practically have deflation. Why would you buy a smartphone now, if you can buy a better one later?

                                                                    Maybe on a larger scale? Do we prefer people own government bonds, stocks, and gold instead of money? Why?

                                                                    Even larger? Will governments investments stop in a deflationary world? Will Apple stop producing because its cash reserves are good enough?

                                                                  1. 1

                                                                     It seems (NPI) the main complaint is coupling the public interface to the implementation, rather than to the actual problem abstraction. I would very much agree with the idea that the abstraction should model the problem and hide the implementation details. However, not being super familiar with seams in general, it’s difficult for me to judge whether this issue is caused by using seams or is just something that can happen with any methodology.

                                                                    1. 1

                                                                      This is exciting, I keep hearing such great things about the Rust plugin. I really need to check this out!