1. 4

    Feel like I’d rather just sit quietly with a notebook and think about code than work with this janky-at-best setup.

    To each their own though.

    1. 30

      I enjoyed the author’s previous series of articles on C++, but I found this one pretty vacuous. I think my only advice to readers of this article would be to make up your own mind about which languages to learn and use, or find some other source to help you make up your mind. You very well might wind up agreeing with the OP:

      Programmers spend a lot of time fighting the borrow checker and other language rules in order to placate the compiler that their code really is safe.

      But it is not true for a lot of people writing Rust, myself included. Don’t take the above as a fact that must be true. Cognitive overheads come in many shapes and sizes, and not all of them are equal for all people.

      A better version of this article might have gone out and collected evidence, such as examples of actual work done, experience reports, or a real comparison of something. It would have been a lot more work, but it wouldn’t have been vacuous and might have actually helped someone answer the question posed by the OP.

      Both Go and Rust decided to special case their map implementations.

      Rust did not special case its “map implementation.” Rust, the language, doesn’t have a map.

      1. 16

        Hi burntsushi - sorry you did not like it. I spent months before this article asking Rust developers about their experiences where I concentrated on people actually shipping code. I found a lot of frustration among the production programmers, less so among the people who enjoy challenging puzzles. They mostly like the constraints and in fact find it rewarding to fit their code within them. I did not write this sentence without making sure it at least reflected the experience of a lot of people.

        1. 20

          I would expect an article on the experience reports of production users to have quite a bit of nuance, but your article is mostly written in a binary style without much room for nuance at all. This does not reflect my understanding of reality at all—not just with Rust but with anything. So it’s kind of hard for me to trust that your characterizations are actually useful.

          I realize we’re probably at an impasse here and there’s nothing to be done. Personally, I think the style of article you were trying to write is incredibly hard to do so successfully. But there are some pretty glaring errors here, of which lack of nuance and actual evidence are the biggest ones. There’s a lot of certainty expressed in this article on your behalf, which makes me extremely skeptical by nature.

          (FWIW, I like Rust. I ship Rust code in production, at both my job and in open source. And I am not a huge fan of puzzles, much to the frustration of my wife, who loves them.)

          1. 4

            I just wanted to say I thought your article was excellent and well reasoned. A lot of people here seem to find your points controversial but as someone who programs C++ for food, Go for fun and Rust out of interest I thought your assessment was fair.

            Lobsters (and Hacker News) seem to be very favourable to Rust at the moment and that’s fine. Rust has a lot to offer. However my experience has been similar to yours: the Rust community can sometimes be tiresome and Rust itself can involve a lot of “wrestling with the compiler” as Jonathan Turner himself said. Rust also provides some amazing memory safety features which I think are a great contribution so there are pluses and minuses.

            Language design is all about trade-offs and I think it’s up to us all to decide what we value in a language. The “one language fits all” evangelists seem to be ignoring that every language has strong points and weak points. There’s no one true language and there never can be since each of the hundreds of language design decisions involved in designing a language sacrifices one benefit in favour of another. It’s all about the trade-offs, and that’s why each language has its place in the world.

            1. 10

              I found the article unreasonable because I disagree on two facts: that you can write safe C (and C++), and that you can’t write Rust with fun. Interpreted reasonably (so for example, excluding formally verified C in seL4, etc.), it seems to me people are demonstrably incapable of writing safe C (and C++), and people are demonstrably capable of writing Rust with fun. I am curious about your opinion of these two statements.

              1. 7

                I think you’re making a straw man argument here: he never said you can’t have fun with Rust. By changing his statement into an absolute you’ve changed the meaning. What he said was “Rust is not a particularly fun language to use (unless you like puzzles).” That’s obviously a subjective statement of his personal experience so it’s not something you can falsify. And he did say up front “I am very biased towards C++” so it’s not like he was pretending to be impartial or express anything other than his opinion here.

                Your other point, “people are demonstrably incapable of writing safe C,” is similarly plagued by absolute phrasing. People have demonstrably used unsafe constructs in Rust and created memory safety bugs, so if we’re living in a world of such absolute statements then you’d have to admit that the exact same statement applies to Rust.

                A much more moderate reality is that Rust helps somewhat with one particular class of bugs - which is great. It doesn’t entirely fix the problem because unsafe access is still needed for some things. C++ from C++11 onwards also solves quite a lot (but not all) of the same memory safety issues as long as you choose to avoid the unsafe constructs, just like in Rust.

                An alternative statement of “people can choose to write safe Rust by avoiding unsafe constructs” is probably matched these days with “people can choose to write safe C++17 by avoiding unsafe constructs”… And that’s pretty much what any decent C++ shop is doing these days.

                1. 5

                  somewhat with one particular class of bugs

                  It helps with several types of bugs that often lead to crashes or code injections in C. We call the collective result of addressing them “memory safety.” The added ability to prevent classes of temporal errors (easy-to-create, hard-to-find errors in other languages) without a GC was a major development. Saying “one class” makes it seem like Rust is knocking out one type of bug instead of piles of them that regularly hit C programs written by experienced coders.

                  An alternative statement of “people can choose to write safe Rust by avoiding unsafe constructs” is probably matched these days with “people can choose to write safe C++17 by avoiding unsafe constructs”

                  Maybe. I’m not familiar enough with C++17 to know. I do know C++ was built on top of an unsafe language, whereas Rust was designed from the ground up to be as safe as possible by default. I caution people to look very carefully for ways to do C++17 unsafely before thinking it’s equivalent to what safe Rust is doing.

          2. 13

            I agree wholeheartedly. Not sure who the target survey group was for Rust but I’d be interested to better understand the questions posed.

            Having written a pretty large amount of Rust that now runs in production on some pretty big systems, I don’t find I’m “fighting” the compiler. You might fight it a bit at the beginning in the sense that you’re learning a new language and a new way of thinking. This is much like learning to use Haskell. It isn’t a good or bad thing, it’s simply a different thing.

            For context for the author - I’ve got 10 years of professional C++ experience at a large software engineering company. Unless you have a considerable amount of legacy C++ to integrate with or an esoteric platform to support, I really don’t see a reason to start a new project in C++. The number of times Rust has saved my bacon in catching a subtle cross-thread variable sharing issue or enforcing some strong requirements around the borrow checker have saved me many hours of debugging.

            1. 0

              I really don’t see a reason to start a new project in C++.

              Here’s one: there’s simply not enough lines of Rust code running in production to convince me to write a big project in it right now. v1.0 was released 3 or 4 years ago; C++ in 1983 or something. I believe you when you tell me Rust solves most memory-safety issues, but there’s a lot more to a language than that. Rust has a lot to prove (and I truly hope that it will, one day).

              1. 2

                I got convinced when Rust in Firefox shipped. My use case is Windows GUI application, and if Firefox is okay with Rust, so is my use case. I agree I too would be uncertain if I am doing, say, embedded development.

                1. 2

                  That’s fair. To flip that, there’s more than enough lines of C++ running in production and plenty I’ve had to debug that convinces me to never write another line again.

                  People have different levels of comfort for sure. I’m just done with C++.

            1. 16

              Having spent the last 6 months working Rust, I’d disagree with his conclusion around Rust being prideful about pushing the borrow checker.

              The language is explicit in how you should use it - the rules are different than someone coming from C++ or C# might expect. It is to be expected that you fight the compiler - it’s a different way of thinking. That doesn’t mean Rust is right but it also doesn’t mean it’s wrong. It’s an opinion.

              This is similar to new users of Haskell fighting purity. Just because Haskell is pure doesn’t mean it’s right or wrong - it’s a design choice that you have to learn to work with.

              1. 1

                I think we have to admit that, for most people, the just get it to compile->runtime error->printf debugging loop is preferable. Even if only because it feels more productive.

                1. 8

                  There is quite a large class of bugs that Rust purports to fix in which you only get a runtime error if you’re lucky. At least, when compared to C or C++.

                  1. 0

                    You’re right. It’s closer to a (wait for the world to explode->printf debugging->wait again) loop. This goes along with the metaphor of building software as construction. There’s a whole group of people who just want to duct tape that leak under the sink. They aren’t building a house or even a shed, as much as they are temporary tenants. You know you won’t be living in the house forever, so it’s not worth it to actually fix the problem.

                    1. 2

                      I do think most source code is too short-lived to care, but shouldn’t systems be built to last? I think Rust is a better systems programming language than C or C++ in that sense.

                      1. 3

                        I do think most source code is too short-lived to care

                        Even then, Wirth’s languages showed you could get fast compiles, be safe by default, have a clean module system, and support concurrency built-in. C the language is still worse if one aims to quickly throw together code that doesn’t crash or get hacked as often.

                      2. 1

                        I’d contend that maybe those folks shouldn’t be building systems :). Until you have to deal with servicing a huge number of client machines, the guarantees don’t really set in as to how much they help.

                    2. 2

                      I’d disagree. That feels considerably less productive for systems programming. In fact, it’s infuriating. I mostly work in client-side software developed on a large-to-huge scale. Runtime failures are the last thing I want to deal with - it means I have to update upwards of 500k clients.

                      While it might be acceptable to deal with the compile->runtime error->printf debug loop on the server side, it’s hardly a good solution on the client side, even if it was how we dealt with things for many years.

                      1. 1

                        Yes, I agree that certain tasks require different tools. I was trying to specifically point out that generalization is for most people. E.g. quick data analysis jobs, internal web UI, etc. Obviously dynamic or interpreted languages are better for such tasks, than something like C. Personally I see the future of C being for microcontroller projects or toy ISAs, where you care about ease of implementation, and support for better defined languages take over primary systems. That may take another half-century at this rate, though.

                      2. 1

                        Well, there are those who feel that compile/type errors hold back their unbounded creativity, but that doesn’t mean those analyses are bad.

                    1. 1

                      A very cool project. I’m not sure I’d have defaulted to translating the EFS-format ISO to a TAR file, but in retrospect this is a good fit for a simple tool. She notes it’s over-engineered at the beginning, but I think it’s actually probably a better approach than fighting with dated hardware, having to compile a kernel module, or setting up an OS VM.

                      1. 2

                        This is neat. While I’m less keen on the APL-like languages, I’ve often felt Forth-like languages would also be quite useful for creating music.

                        Maybe, in my infinite (not) time, I’ll try to create an example stack-based language for music… Maybe…

                          1. 2

                            You should. I could see things like chord substitutions lending themselves really well to a stack-based language.

                          1. 3

                            For those unaware, https://startpage.com is excellent and has great privacy policy. I actually prefer it to DuckDuckGo these days because I feel its default search is of higher quality.

                            Reminds me a lot of the Google from 10-15 years ago.

                            1. 2

                              The default search quality is higher probably because they act as a Google proxy sometimes (offering privacy by being between you and Google).

                            1. 8

                              They did all this before they learned Worse is Better. Now that we know it wins, we have to sneak The Right Thing into what otherwise looks like Worse is Better. Alternatively, do Worse is Better in a way where good interface design lets us constantly improve on the worse parts inside if the project/product gets adoption. Likewise, I say put new things into products people find useful maybe without those things. Parts of it build on proven principles, with the new thing an extra differentiator that might or might not pan out. If it’s a language or environment, they can discover it when trying to modify the product.

                              One thing that should be considered for this list is the Burroughs architecture. It made low-level operations high-level, safe, and maintainable, with an OS written in ALGOL. Although it was commercialized, the hardware enforcement got taken out, if I’m remembering correctly; the market only cared about price/performance for a long time. Only a few projects applied those concepts later on. A recent one was the SAFE architecture, which started out like it in the original proposal but changed to do something more flexible. Dover Microsystems finally released it commercially as CoreGuard in late 2017. Quite a long delay for anyone to deploy a Burroughs-inspired solution, despite the fact that it was solving many of today’s problems in 1961.

                              1. 6

                                One of my favorite courses in college was an OS course where we had to build an OS inside of a VM. The VM was a simplified Burroughs Large System architecture.

                                I enjoyed that course a lot and learned so much. It was a refreshing change from x86 and MIPS assembly.

                                1. 2

                                  That’s really neat. I wouldn’t have expected people building on Burroughs VM’s unless Unisys had a deal with the college to make them some talent. ;)

                                  Did the VM have the pointer, bounds and argument checks like the B5000? And did the experience teach you anything that impacted later work?

                                  1. 3

                                    The VM was written by our professor - quirky guy but I learned a huge amount from him. My understanding is it was a simplified version of the B5000 but it did have bounds checking.

                                    As to what I learned - I’m not sure I got any insight about computer architecture because it was a Burroughs ISA. I think a lot of what I learned was more around the trade offs you make in process scheduling and building rudimentary filesystems.

                                    One big aspect of this project was he gave us an incomplete compiler for a Pascal-like language. You had to extend it to support things like arrays and loops. The compile target was the Burroughs VM. I recall thinking that the ISA was quite clean to generate for.

                                    I’m sure if I’d had to reimplement the same project on x86, I’d have seen a lot of the advantages of the B5000.

                                    A lot of what I recall specific to Burroughs ISA was that it was very easy to understand. I was a CS major so I only had 2 or 3 courses that dealt with hardware directly. For me, x86 was very frustrating to work with.

                              1. 34

                                I think you could reimplement it easily yourself with a small shell script and some calls to mount; but I haven’t bothered.

                                I don’t have the expertise to criticize the content itself, but statements like the above make me suspect that the author doesn’t know nearly as much about the problem as they think they know.

                                1. 32

                                  This reminds me of a trope in the DIY (esp. woodworking DIY) world.

                                  First, show video of a ludicrously well equipped ‘starter shop’ (it always has a SawStop, Powermatic Bandsaw, and inexplicably some kind of niche tool that never really gets used, and a CNC router).

                                  Next, show video of a complicated bit of joinery done using some of the specialized machines.

                                  Finally, audio: “I used for this, but if you don’t have one, you can do the same with hand tools.”

                                  No, asshole, no I can’t. Not in any reasonable timeframe. Usually this happens in the context of the CNC. “I CNC’d out 3 dozen parts, but you could do the same with hand tools.”

                                  I get a strong whiff of that sort of attitude from this. It may be that the author is capable of this. It may be possible to ‘do this with hand tools’ like Shell and some calls to mount. It might even be easy! However, there is a reason docker is so popular, it’s because it’s cheap, does the job, and lets me concentrate on the things I want to concentrate on.

                                  1. 9

                                    As someone who can do “docker with hand tools,” you and @joshuacc are completely correct. Linux does not have a unified “container API,” it has a bunch of little things that you can put together to make a container system. And even if you know the 7 main namespaces you need, you still have to configure the namespaces properly.

                                    For example, it isn’t sufficient to just throw a process in its own network namespace; you’ve got to create a veth pair and put one end of that into the namespace with the process, and attach the other end to a virtual bridge interface. Then you’ve got to decide if you want to allocate an IP for the container on your network (common in Kubernetes), or masquerade (NAT) on the local machine (common in single-box Docker). If you masquerade, you must add SNAT and DNAT iptables rules to port-forward to the veth interface, and enable the net.ipv4.ip_forward sysctl.
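                                    A dry-run sketch of just that masquerade path (the namespace name “ctr” and the 10.200.0.0/24 range are made up for illustration; every one of these commands needs root, so the sketch prints them instead of executing them):

```shell
# Print, rather than run, the network plumbing for one container-style namespace.
CMDS=""
run() { CMDS="$CMDS$* ; "; echo "$@"; }

run ip netns add ctr                               # the network namespace
run ip link add veth0 type veth peer name veth1    # create the veth pair
run ip link set veth1 netns ctr                    # one end goes inside
run ip addr add 10.200.0.1/24 dev veth0            # host side of the pair
run ip link set veth0 up
run ip netns exec ctr ip addr add 10.200.0.2/24 dev veth1
run ip netns exec ctr ip link set veth1 up
run sysctl -w net.ipv4.ip_forward=1                # allow forwarding
run iptables -t nat -A POSTROUTING -s 10.200.0.0/24 -j MASQUERADE
```

                                    And that is before any port-forwarding DNAT rules, the bridge variant, or the mount namespace.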

                                    So the “small shell script” is now also a management interface for a network router. The mount namespace is even more delightful.

                                    1. 8

                                      Exactly this! One of the most egregious things about the ‘… you could do it with hand tools’ is that it is dismissive of people who really can do it with hand tools and dismissive of the folks that can do it with CNC.

                                      In woodworking, CNC work is complicated, requires a particular set of skills and understanding, and is prone to a totally different, equally painful class of errors that hand tools are not.

                                      Similarly, Hand tool work is complicated, requires a particular set of skills and understanding, and is prone to a totally different, equally painful class of errors that power/CNC work is not.

                                      Both are respectable, and both are prone to be dismissive of the other, but a hand-cut, perfect half-blind dovetail drawer is amazing. Similarly, a CNC cut of 30 identical, perfect half-blind dovetail drawers is equally amazing.

                                      The moral of this story: I can use the power tool version of containers. It’s called docker. It lets me spit out dozens of identically configured and run services in a pretty easy way.

                                      You are capable of ‘doing it with hand tools’, and that’s pretty fucking awesome, but as you lay out, it’s not accomplishing the same thing. The OP seems to believe that building it artisanally is intrinsically ‘better’ somehow, but that’s not necessarily the case. I don’t know what the OP’s background is, but I’d be willing to bet it’s not at all similar to mine. I have to manage fleets of dozens or hundreds of machines. I don’t have time to build artisanal versions of my power tools.

                                    2. 2

                                      And then you have Paul Sellers. https://www.youtube.com/watch?v=Zuybp4y5uTA

                                      Sometimes, doing things by hand really is faster on a small scale.

                                      1. 2

                                        He’s exactly the guy I’m talking about though in my other post in this tree – he’s capable of doing that with hand tools and that’s legitimately amazing. One nice thing about Paul though is he is pretty much the opposite of the morality play from above. He has a ludicrously well-equipped shop, sure, but that’s because he’s been doing this for a thousand years and is also a wizard.

                                        He says, “I did this with hand tools, but you can use power tools if you like.” Which is also occasionally untrue, but the sentiment is a lot better.

                                        He also isn’t elitist. He uses the bandsaw periodically, power drillmotors, and so on. He also uses panel saws and brace-and-bit, but it’s not an affectation, he just knows both systems cold and uses whatever makes the most sense.

                                        Paul Sellers is amazing and great and – for those people in the back just watching – go watch some Paul Sellers videos, even if you’re not a woodworker (or a wannabe like me), they’re great and he’s incredible. I like the one where he makes a joiner’s mallet a lot. Also there’s some floating around of him making a cabinet to hold his planes.

                                    3. 1

                                      My reaction was “if you had to write this much to convince me that there are easier ways than Docker, then it sounds like this is why Docker has a market.”

                                      I’m late to the Docker game - my new company uses it heavily in our infrastructure. Frankly, I was impressed at how easy it was for me to get test environments up and running with Docker.

                                      I concede it likely has issues that need addressing but I’ve never encountered software that didn’t.

                                    1. 2

                                      (Preface: I didn’t know much, and still don’t, about the *Solaris ecosystem.)

                                      So it seems like the evolution of *Solaris took an approach closer to Linux? Where there’s a core chunk of the OS (kernel and core build toolchain?) that is maintained as its own project. Then there’s distributions built on top of illumos (or unleashed) that make them ready-to-use for endusers?

                                      For some reason, I had assumed it was closer to the *BSD model where illumos is largely equivalent to something like FreeBSD.

                                      If I wanted to play with a desktop-ready distribution, what’s my best bet? SmartOS appears very server oriented - unsurprising given *Solaris was really making more in-roads there in recent years. OpenIndiana?

                                      1. 3

                                        If Linux (kernel only) and BSD (whole OS) are the extremes of the scale, illumos is somewhere in the middle. It is a lot more than just a kernel, but it lacks some things to even build itself. It relies on the distros to provide those bits.

                                        Historically, since Solaris was maintained by one corporation with lots of release engineering resources and many teams working on subsets of the OS as a whole, it made sense to divide it up into different pieces. The most notable one being the “OS/Net consolidation” which is what morphed into what is now illumos.

                                        Unleashed is still split across more than one repo, but in a way it is closer to the BSD way of doing things rather than the Linux way.

                                        Hope this helps clear things up!

                                        If I wanted to play with a desktop-ready distribution, what’s my best bet? SmartOS appears very server oriented - unsurprising given *Solaris was really making more in-roads there in recent years. OpenIndiana?

                                        OI would be the easiest one to start with on a desktop. People have gotten Xorg running on OmniOS (and even SmartOS), but it’s extra work vs. just having it.

                                        1. 1

                                          Solaris is like BSD in that it includes the kernel + user space. In Linux, Linux is just the kernel and the distros define user space.

                                          1. 1

                                            So…. is there no desktop version of Illumos I can download? Why does their “get illumos” page point me at a bunch of distributions?

                                            Genuine questions - I’m just not sure where to start if I want to play with illumos.

                                            1. 3

                                              illumos itself doesn’t have an actual release. You’re expected to use one of its distributions as far as I can tell, which should arguably be called “derivatives” instead. OpenIndiana seems to be the main desktop version.

                                              1. 1

                                                I don’t know. I know there are some people who run SmartOS on their desktop, but I get the feeling it’s not targeting that use case, or at least there isn’t a lot of work going into supporting it.

                                          1. 1

                                            Despite the dopey cover, Smith’s book is IMO the best introductory text in the subject.

                                            1. 2

                                              Based on the index and introduction, that definitely looks like a good starting point. I will have a look. Thanks!

                                            1. 7

                                              I have done some audio programming, and am studying engineering, so I guess I have some knowledge about it. There are many who are better than me, though. I hope this isn’t too mathematical, but you need to have some grasp on differentiation, integration, complex numbers and linear algebra anyway. Here’s a ‘short’ overview of the basics:

                                              First of all, you need to know what happens when an analog, continuous signal is converted to digital data and back. The A->D direction is called sampling. The number of times the data can be read out per second (the sampling rate) and the accuracy (bit depth) are limited for obvious reasons, and this needs to be taken into account.
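                                              For a concrete taste of why the sampling rate matters, here is a small sketch (frequencies chosen arbitrarily) of aliasing: sampled at 8 Hz, an 11 Hz sine is indistinguishable from a 3 Hz one, because 11 Hz lies above the Nyquist limit of 4 Hz.

```python
import math

# Sample a 3 Hz and an 11 Hz sine at fs = 8 Hz (all values made up for
# illustration). The Nyquist limit is fs/2 = 4 Hz, so the 11 Hz tone folds
# back: sin(2*pi*11*n/8) = sin(2*pi*n + 2*pi*3*n/8) = sin(2*pi*3*n/8).
fs = 8
low  = [math.sin(2 * math.pi *  3 * n / fs) for n in range(16)]
high = [math.sin(2 * math.pi * 11 * n / fs) for n in range(16)]

# The two sampled sequences are identical: the sampler cannot tell them apart.
aliased = all(abs(a - b) < 1e-9 for a, b in zip(low, high))  # True
```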

                                              Secondly, analysing a signal in the time domain doesn’t yield much interesting information; it’s much more useful to analyse the frequencies in the signal instead.

                                              Fourier’s theorem states that every signal can be represented as a sum of (co)sines. Getting the amplitude of a given frequency is done through the Fourier transform (F(omega) = integrate(lambda t: f(t) * e^(-j*omega*t), 0, infinity)). It works a bit like the following:

                                              1. Draw the function on a long ribbon
                                              2. Twist the ribbon along its longest axis, with an angle proportional to the desired frequency you want the amplitude of (multiplying f(t) by e^(-j*omega*t), where omega is the pulsation of the desired frequency, i.e. omega = 2pi*f, and j is the imaginary unit. j is used more often than i in engineering.)
                                              3. Now smash it flat. In the resulting (complex) plane, take the average of all the points (i.e. complex numbers). (This is the integration step.)
                                              4. The sines will cancel themselves out, except for the one with the desired frequency. The resulting complex number’s magnitude is the amplitude of the sine, and its angle is the sine’s phase.
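                                              The four steps above can be transcribed almost literally for a sampled signal; the result is a (naive) discrete Fourier transform. The toy signal here is my own choice for illustration:

```python
import cmath
import math

def freq_component(samples, k):
    """Steps 2-3: twist each sample by e^(-j*omega*t), then average."""
    n = len(samples)
    return sum(x * cmath.exp(-2j * math.pi * k * t / n)
               for t, x in enumerate(samples)) / n

# Step 4 in action: for a pure cosine at bin 3, everything cancels
# except the analysis frequency that matches.
n = 64
signal = [math.cos(2 * math.pi * 3 * t / n) for t in range(n)]
amp3 = abs(freq_component(signal, 3))  # ~0.5
amp5 = abs(freq_component(signal, 5))  # ~0.0: the sines cancel out
```

                                              (The matching bin comes out as 0.5 rather than 1 because a real cosine is the sum of two complex exponentials at ±omega, and this average only picks up one of them.)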

                                              (Note: the Fourier transform is also known as the Laplace transform, when substituting omega*j with s (or p, or z; they’re “implicitly” complex variables), and as the Z-transform, when dealing with discrete signals. It’s still basically the same, though, and I’ll be using the terms pretty much interchangeably. The Laplace transform is also used when analyzing linear differential equations, which is, under the hood, what we’re doing here anyway. If you really want to understand most/everything, you need to grok the Laplace transform first, and how it’s used to deal with differential equations.)

                                              Now, doing a Fourier transform (and an inverse afterwards) can be costly, so it’s better to use the information gained from a Fourier transform while writing code that modifies a signal (i.e. amplifies some frequencies while attenuating others, or adding a delay, etc.), and works only (or most of the time) in the time domain. Components like these are often called filters.

                                              Filters are linear systems (they can be nonlinear as well, but that complicates things). They are best thought of as components that scale, add, or delay signals, combined like this. (A z^-1 box is a delay of one sample; the Z-transform of f(t-1) is equal to the Z-transform of f(t), divided by z.)

                                              If the system is linear, such a diagram can be ‘transformed’ into a bunch of matrix multiplications (A, B, C and D are matrices):

                                              • state[t+1] = A*state[t] + B*input[t]
                                              • output[t]  = C*state[t] + D*input[t]

                                              with state[t] a vector containing the state of the delays at t.
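                                              Those two update equations can be simulated directly. A minimal sketch, using a two-tap averaging filter y[t] = (x[t] + x[t-1]) / 2 as the example (my choice of filter, not the comment’s); the state vector holds the single delay element:

```python
# State-space simulation of state[t+1] = A*state + B*input,
# output[t] = C*state + D*input, for y[t] = (x[t] + x[t-1]) / 2.
import numpy as np

A = np.array([[0.0]])   # state update: the delay forgets its old value
B = np.array([[1.0]])   # the delay stores the current input
C = np.array([[0.5]])   # output takes half of the delayed sample...
D = np.array([[0.5]])   # ...plus half of the current input

def run_filter(x):
    state = np.zeros(1)
    out = []
    for sample in x:
        u = np.array([sample])
        out.append((C @ state + D @ u).item())  # output[t]
        state = A @ state + B @ u               # state[t+1]
    return out

print(run_filter([1.0, 1.0, 0.0, 0.0]))  # → [0.5, 1.0, 0.5, 0.0]
```

Larger filters just mean bigger A, B, C, D matrices and a longer state vector, one entry per delay element.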

                                              Analyzing them happens as follows:

                                              1. Take the Z-transform of the input signal (Z{x(t)}=X(z)) and the output signal (Z{y(t)}=Y(z)).
                                              2. The proportion between Y and X is a (rational) function in z, the transfer function H(z).
                                              3. Now find the zeros of the numerator and denominator. The zeros of the latter are called the poles; signals at (or near) those frequencies are amplified. Zeros of the numerator are (boringly) called zeros, and they attenuate signals. These poles and zeros are also related to the eigenvectors and -values of the matrix A.

                                              However, if the poles are outside of the unit circle, the system is ‘unstable’: the output will grow exponentially (i.e. “explode”). If the pole is complex or negative, the output will oscillate a little (this corresponds to complex eigenvalues, and complex solutions to the characteristic equation of the linear differential equation).
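                                              Steps 2–3 and the stability check are easy to try numerically. A sketch, with an arbitrary illustrative transfer function (the coefficients are mine):

```python
# Find the zeros and poles of H(z) = (z + 1) / (z - 0.9) and check
# stability: stable iff every pole lies strictly inside the unit circle.
import numpy as np

num = [1.0, 1.0]    # z + 1    -> one zero at z = -1
den = [1.0, -0.9]   # z - 0.9  -> one pole at z = 0.9

zeros = np.roots(num)
poles = np.roots(den)
stable = np.all(np.abs(poles) < 1)

print(zeros, poles, stable)   # → [-1.] [0.9] True
```

Move the pole to, say, z = 1.1 (den = [1.0, -1.1]) and `stable` becomes False: the impulse response grows as 1.1^t and the output explodes.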

                                              What most often is done, though, is making filters using some given poles and zeros. Then you just need to perform the steps in reverse direction.

                                              Finally, codecs simply use that knowledge to throw away uninteresting stuff. (E.g. data is stored in the frequency domain, and very soft sines, or sines outside the audible range, are discarded. With images and video, it’s the same thing but in two dimensions.) I don’t know anything specific about them, though, so you should look up some stuff about them yourself.


                                              Hopefully, this wasn’t too overwhelming :). I suggest reading Yehar’s DSP tutorial for the braindead to get some more information (but it doesn’t become too technical), and you can use the Audio EQ Cookbook if you want to implement some filters. [This is a personal mirror, as the original seems to be down - 509.]

                                              There’s also a copy of Think DSP lying on my HDD, but I never read it, so I don’t know if it’s any good.

                                              1. 3

                                                The amount of times the data can be read out per second (the sampling rate) and the accuracy (bit depth) are limited for obvious reasons

                                                Interesting post. I wanted to highlight this part where you say it’s limited for “obvious reasons.” It’s probably better to explain that since it might not be obvious to folks trained to think transistors are free, the CPU’s are doing billions of ops a second, and everything is working instantly down to nanosecond scale. “How could such machines not see and process about everything?” I thought. What I learned studying hardware design at a high-level, esp on the tools and processes, was that the digital cells appeared to be asleep a good chunk of the time. From a software guy’s view, it’s like the clock signal comes as a wave, starts lighting them up to do their thing, leaves, and then they’re doing nothing. Whereas, the analog circuits worked non-stop. If it’s a sensor, it’s like the digital circuits kept closing their eyes periodically where they’d miss stuff. The analog circuits never blinked.

                                                After that, the ADC and DAC tutorials would explain how the system would go from continuous to discrete using the choppers or whatever. My interpretation was the digital cells were grabbing a snapshot of the electrical state as bit-based input kind of like requesting a picture of what a fast-moving database contains. It might even change a bit between cycles. I’m still not sure about that part since I didn’t learn it hands on where I could experiment. So, they’d have to design it to work with whatever its sampling rate/size was. Also, the mixed-signal people told me they’d do some components in analog specifically to take advantage of full-speed, non-blinking, and/or low-energy operation. Especially non-blinking, though, for detecting things like electrical problems that can negatively impact the digital chips. Analog could respond faster, too. Some entire designs like control systems or at least checking systems in safety-critical stuck with analog since the components directly implemented mathematical functions well-understood in terms of signal processing. More stuff could go wrong in a complex, digital chip they’d say. Maybe they just understood the older stuff better, too.

                                                So, that’s some of what I learned dipping my toes into this stuff. I don’t do hardware development or anything. I did find all of that really enlightening when looking at the ways hardware might fail or be subverted. That the digital stuff was an illusion built on lego-like, analog circuits was pretty mind-blowing. The analog wasn’t dead: it just got tamed into a regular, synthesizable, and manageable form that was then deployed all over the place. Many of the SoC’s still had to have analog components for signal processing and/or power competitiveness, though.

                                                1. 3

                                                  You’re right, of course. On the other hand, I intended to make it a bit short (even though it didn’t work out as intended). I don’t know much about how CPUs work, though, I’m only in my first year.

                                                  I remember an exercise during maths class in what’s probably the equivalent of middle or early high school, where multiple people were measuring the sea level at certain intervals. To one, the level remained flat; to another, it was wildly fluctuating; and to a third, it fluctuated only slightly, and at a different frequency.

                                                  Because of the reasons you described, the ADC can’t keep up when the signal’s frequency is above half the sampling frequency (i.e. the Nyquist frequency).

                                                  (Interestingly, this causes the Fourier transform of the signal to be ‘reflected’ at the Nyquist frequency. There’s a graph that makes this clear, but I can’t find it. Here’s a replacement I quickly hacked together using Inkscape. [Welp, the text is jumping around a little. I’m too tired to fix it.])
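                                                  The reflection is easy to see numerically: a cosine at f and one at fs − f land on exactly the same samples, so after sampling they are indistinguishable. A quick check (example values mine):

```python
# Aliasing demo: a 10 Hz tone and a 90 Hz tone, both sampled at 100 Hz,
# produce identical sample sequences - the ADC cannot tell them apart.
import numpy as np

fs = 100                       # sampling rate (Nyquist frequency = 50)
n = np.arange(fs)              # one second of sample indices
f = 10
low = np.cos(2 * np.pi * f * n / fs)            # 10 Hz, below Nyquist
high = np.cos(2 * np.pi * (fs - f) * n / fs)    # 90 Hz, above Nyquist

print(np.allclose(low, high))  # → True
```

This is exactly the mirrored spectrum: energy at fs − f shows up at f, reflected around the Nyquist frequency.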

                                                  The “changing a bit between cycles” might happen because the conversion doesn’t happen instantaneously, so the value can change during the conversion as well. Or, when converting multiple values that should happen “instantaneously” (such as taking a picture), the last part will be converted a little bit later than the first part, which sounds analogous to screen tearing to me. Then again, I might be wrong.


                                                  P.S. I’ll take “interesting” as a compliment, I just finished my last exam when I wrote that, so I’m a little tired now. Some errors are very probably lurking in my replies.

                                                  1. 3

                                                    I’ll take “interesting” as a compliment

                                                    You were trying to explain some hard concepts. I enjoy reading these summaries since I’m an outsider to these fields. I learn lots of stuff by reading and comparing explanations from both students and veterans. Yeah, it was a compliment for the effort. :)

                                                2. 3

                                                  Even though I learned about the Fourier transformation in University this video gave me a new intuition: https://www.youtube.com/watch?v=spUNpyF58BY

                                                  1. 2

                                                    Thanks very much for your detailed reply :). The math doesn’t scare me, it’s just very rusty for me since a lot of what I do doesn’t have as much pure math in it.

                                                    I appreciate the time you put into it.

                                                    1. 2

                                                      Speaking specifically of Fourier transform: it behaves well for infinite signals and for whole numbers of periods of strictly periodic signals.

                                                      But in reality the period usually doesn’t divide the finite fragment we have (and also there are different components with different periods). If we ignore this, we effectively multiply the signal by a rectangle function (0… 1 in the interval… 0…) — and Fourier transform converts pointwise multiplication into convolution (an operation similar to blur). Having hard edges is bad, so the rectangle has a rather bad spectrum with large amplitudes pretty far from zero, and it is better to avoid convolution with that — this would mix rather strongly even frequencies very far from each other.

                                                      This is the reason why window functions are used: the signal is multiplied by something that goes smoothly to zero at the edges. A good window has a Fourier transform that falls very quickly as you go away from zero, but this usually requires the spectrum to have high intensity on a wide band near zero. This tradeoff means that if you want less leak between vastly different frequencies, you need to mix similar frequencies more. It is also one of the illustrations of the reason why a long recording is needed to separate close frequencies.
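                                                      The leakage tradeoff above is easy to demonstrate: take a tone whose period does not divide the record, and compare the spectrum with a rectangular window against a Hann window. (The bin choices below are mine, for illustration.)

```python
# Spectral leakage: 10.5 cycles in a 256-sample record (not a whole
# number), analyzed bare (rectangular window) vs. with a Hann window.
# Far from the tone, the Hann spectrum is orders of magnitude lower.
import numpy as np

N = 256
n = np.arange(N)
tone = np.sin(2 * np.pi * 10.5 * n / N)   # period doesn't divide N

rect = np.abs(np.fft.rfft(tone))                  # hard edges
hann = np.abs(np.fft.rfft(tone * np.hanning(N)))  # smooth edges

# Normalize each spectrum to its peak, then look far from the tone.
rect_leak = rect[60] / rect.max()
hann_leak = hann[60] / hann.max()
print(rect_leak > 10 * hann_leak)   # → True
```

The price is visible near the tone: the Hann window’s main lobe is wider, i.e. nearby frequencies are mixed more, which is the tradeoff described above.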

                                                    1. 2

                                                      I’d really like to write more D. In my particular case, I couldn’t have a GC in play (self-imposed memory constraints), but there’s a lot about it that’s attractive to me. I don’t have any desire to choose Go over it - the power of the language is considerably greater, from my limited experience.

                                                      That said, Go does have a big package community behind it, like Rust.

                                                      1. 14

                                                        Stick a @nogc on your main function and you have a compile-time guarantee that no GC allocations will happen in your program.

                                                        1. 6

                                                          Neat - I didn’t realize this. Too late now for the current project, but good to know for the future. I’m particularly interested in its C++ FFI story. There’s a couple of specialized C++ libraries I’d like to use without having to write flat-C style wrappers just to call them sanely from Rust.

                                                          Thanks for that!

                                                          1. 3

                                                            That’s exactly the kind of tip I was hoping for in the comments. Thanks!

                                                            1. 5

                                                              It’s always the same arguments with D discussions:

                                                              • I don’t like D that has a GC!
                                                              • Just use @nogc!
                                                              • But then some stuff from the standard library does not work anymore!
                                                              • How much of the standard library?
                                                              • Nobody knows and how would you measure it anyway?
                                                              1. 1

                                                                It’s at least a pattern that’s solvable. Someone just has to attempt to compile the whole standard library with no GC option. Then, list the breakage. Then, fix in order of priority for the kind of apps that would want no-GC option. Then, write this up into a web page. Then, everyone shares it in threads where pattern shows up. Finally, the pattern dies after 10-20 years of network effects.

                                                                1. 2

                                                                  People are doing that. Well, except for the “write this up into a web page” part. I guess you are thinking of web pages like http://www.arewewebyet.org/

                                                                  1. 1

                                                                    Yeah, some way for people to know that they’re doing it with what level of progress. Good to know they’re doing it. That you’re the first to tell me illustrates how a page like that would be useful in these conversations. People in D camp can just drop a link and be done with it.

                                                          2. 3

                                                            I find D has a lot of packages too. Not an explosive smörgåsbord, but sufficient for my purposes.

                                                            https://code.dlang.org/

                                                            The standard library by itself is fairly rich already.

                                                            https://dlang.org/phobos/

                                                            1. 1

                                                              I guess the question would be whether unsafe or smart pointers are about as easy to use in D as C or C++. If so, the GC might not be a problem. In some languages, GC is really hard to avoid.

                                                              Maybe @JordiGH, who uses D, can tell us.

                                                              1. 5

                                                                I write D daily. Unsafe pointers work the same as in C or C++. I wrote a GC-less C++-like smart pointer library for D. It’s basically std::unique_ptr and std::shared_ptr, but no std::weak_ptr because 1) I haven’t needed it and 2) one can, if needed, rely on the GC to break cycles (although I don’t know how easy that would be to do currently in practice).

                                                                1. 1

                                                                  D is a better C++, so pointers are easier to use than in C++. As I understand it, the main problem is that it used to be the case that the standard library used GC freely, making GC hard to avoid if you used the standard library. I understand there is an ongoing effort to clear this up, but I don’t know the current status.

                                                                  1. 3

                                                                    It depends on which part of the standard library. These days, the parts most often used have functions that don’t allocate. In any case it’s easy to avoid by using @nogc.

                                                              1. 1

                                                                While I agree with some of the “examples” listed, I don’t think it’s fair to say microservices are “hyped and volatile”. Microservice architecture isn’t some advent to the computing world. Conceptually, it’s been around for a long long time when you look at operating system kernels. I’d also argue that building software that “does one thing and does it well” is not a new idea either (i.e. UNIX design philosophy).

                                                                While the author has some good points, that one seems poorly thought out. Perhaps microservice means something different to the author?

                                                                1. 2

                                                                  Conceptually, many ideas, including NoSQL, have been around for ages. It seems clear to me that the criticism in both cases refers to the microservices-for-microservices’-sake and NoSQL-for-NoSQL’s-sake fads in web application architecture, where there is often no real rationale behind the choice other than that they are the trendy topics on the web conference circuit.

                                                                  (And yes, obviously, there are circumstances in which a microservices approach or a particular NoSQL solution make good sense – these are frequently not, unfortunately, the circumstances in which I have tended to see them used professionally).

                                                                1. 7

                                                                  Oddly - this sounds like the author has just discovered dependency injection? I would have thought that concept would translate pretty well to Go. I’ve written a lot of Go, but I cut my teeth largely on C, C++, and C# so dependency injection has always been on my radar. When I wrote Go, I learned it and largely applied my own lessons from C, C++, and C#.

                                                                  Due to compiler constraints and the language’s ethos, global state and func init feel weird in Rust (my current language). You can’t, without jumping through hoops, create objects that are partially constructed (e.g. using Option for uninitialized types). That said, even if you’ve got struct members that are of type Option, you are actually initializing it to something - sometimes it’s just None.

                                                                  I don’t have enough context in Go land to know why this author’s argument might be a novel conclusion. Does anyone have some context? I’d love to learn more.

                                                                  1. 10

                                                                    Many Go programmers seem to feel very comfortable with global state. When I join new organizations or projects, I often find myself needing to educate and socialize the problems that come from that. This post is just a formalization of the things I’ve been saying informally for a long time.

                                                                    I wish I knew why this was so relatively prevalent in the Go ecosystem. If I had to guess, I’d speculate that it’s because a lot of the example code in e.g. books and tutorials doesn’t really shy away from global state.

                                                                    1. 7

                                                                      It’s also related to the standard library itself having lots of globals. Which itself leads to bad things, like the cryptographic randomness source being trivially modifiable: https://github.com/golang/go/issues/24160

                                                                      1. 3

                                                                        The Go community has a strong culture of writing application-specific code that is “good enough”, and tends to err strongly on the side of avoiding premature abstraction. For a significant number of use cases, globals (combined with a reasonable amount of documentation, testing, and good discipline) tend to be the “good enough” solution.

                                                                        The thesis of Go’s culture is that premature abstraction often costs more than rewriting code. The argument is that you often know your domain better after writing a “good enough” first version, and that premature abstractions lock you in to specific designs that may be harder to change down the line.

                                                                        It’s definitely not a conventional position, but it’s not indefensible – it’s hard to argue that Go developers are not pragmatic (or not productive).

                                                                        1. 1

                                                                          Interesting. Good to know!

                                                                        2. 2

                                                                          Yup, this was my comment when this appeared a year ago on HN:

                                                                          In other words, use a pure dependency-injection style, and ZERO globals.

                                                                          https://news.ycombinator.com/item?id=14521894
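                                                                          The thread is about Go, but the pattern is language-agnostic. A minimal sketch (in Python, names mine) of the “pure dependency-injection, zero globals” style the parent comments advocate:

```python
# Dependency injection: collaborators arrive through the constructor
# instead of being reached for as module-level globals, so tests (and
# main) wire up real or fake dependencies explicitly.
class Server:
    def __init__(self, store, logger):
        self.store = store
        self.logger = logger

    def handle(self, key):
        self.logger.append(f"get {key}")
        return self.store.get(key, "missing")

log = []
server = Server(store={"a": 1}, logger=log)
print(server.handle("a"), log)   # → 1 ['get a']
```

Nothing here depends on import order or hidden state: every instance carries exactly the dependencies it was given, which is what makes the style test-friendly.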

                                                                        1. 6

                                                                          This was a really enjoyable read. Hadn’t heard of Digital Antiquarian before but now that I’ve seen there’s e-book formats available, I’ve got a lot of reading to do now :).

                                                                          1. 7

                                                                            Am I crazy? I thought Oracle killed off Solaris? Or did they just lay off a bunch of folks?

                                                                            1. 8

                                                                              They definitely sacked a lot of people, which presumably did nothing for the morale of the folks left behind either. We picked up some of the people jumping ship at Joyent when it happened.

                                                                              You can only fire or terrorise so many of the people in an organisation who know how something works before there won’t be a critical mass left to maintain it. Because of the long pipeline of work that was already underway at the time of the event (including the most recent line of SPARC microprocessors) they had a lot of stuff they could release just after the firing – “See, of course we’re not dead!” – but I wouldn’t hold out much hope for anything else.

                                                                              1. 1

                                                                                Oracle killed OpenSolaris, not Solaris.

                                                                              1. 5

                                                                                This speaks to me on a fundamental level. There are definitely programming languages that just click for me and ones that I’ve had to work a lot to understand them.

                                                                                For example:

                                                                                • Forth took a lot of work but was very rewarding. It took on a new meaning when I learned Factor and started to pay closer attention to combinators.
                                                                                • Lisp/Scheme were both great introductions to meta programming for me. I didn’t really get what was possible with meta programming before I used them.
                                                                                • StandardML/Ocaml/F# was when I first grokked functional programming. They also were where I first grokked types.
                                                                                • Rust has been this way, though slowly. I was turned off by Rust’s earlier incarnations of syntax, but recently I’ve been really enjoying it. I think Rust might be the first language in many years that just feels natural to me.
                                                                                1. 4

                                                                                  It is crazy and fascinating to me that VMS still has substantial enough install bases to justify roadmaps like this.

                                                                                  I really wish it was something you could actually download and play with. I’m not aware of any way to do that though.

                                                                                  EDIT: I stand corrected - if you hunt around a bit, there’s instructions: https://sourceforge.net/p/vms-ports/wiki/VMSInstallation/

                                                                                  1. 4

                                                                                    There’s the OpenVMS Hobbyist program - all you have to do is sign up as a member of your local DECUS chapter (free) and once you have a membership number, request a license. Licenses are only valid for a year but they’re renewable. They’ll also send you the (frequently rotated) login details for the ftp server so you can download the current releases for VAX, Alpha and Itanium.

                                                                                    I believe the program will be continued by VMS Software and will include the x86_64 port. That’s still some way away from GA though.

                                                                                    1. 2

                                                                                      HP killed it, as far as I can tell, because they had two competing lines: VMS clusters and NonStop. Yet they had said before that it was one of their most profitable divisions. Probably due to high prices plus them not investing much in maintenance. You’d expect a large customer base if it was a large profit center. On top of it, the customers themselves said in surveys that it was a rock-solid platform that never gave them headaches. Plenty of loyalty.

                                                                                      So they killed it for who knows what reason. This company revived it. It’s now one of the most interesting legacy porting efforts.

                                                                                    1. 6

                                                                                      An interesting set of notes. I’m dubious about how much you can rely on this sort of thing. It leaves out a vast swath of the industry - e.g. private repos. I know several companies with huge and established Ruby code bases so I’m not sure I’d say “avoid Ruby”.

                                                                                      I think it’s more interesting to consider what communities are aggressively adopting open source policies. For example, science likes python (e.g. NumPy, SciPy, Jupyter) and python is heavily represented on Github. Does that mean science has embraced uploading more content to places like Github in an open fashion? (Caveat: correlation =/= causation, etc etc).

                                                                                      1. 10

                                                                                        The incredible bias here is that Ruby is the core language of GitHub. Their first big project was Rails. For a time, the only community happening purely on GitHub was Ruby. It can only go down from there.

                                                                                        What is happening in this graph is the whole world moving to GitHub, expanding its size extremely and fixing the bias that GitHub had. You can’t read anything notable in that graph. They even admit that factor and still draw that conclusion.

                                                                                        1. 9

                                                                                          Even then it’s still not accurate just yet. For every Ruby/JavaScript/Python company there’s 10 Java/.NET ones that we never hear about because they aren’t really part of the GitHub/startup/Twitter/HN sphere.

                                                                                          1. 2

                                                                                            Yep, very much this.

                                                                                      1. 8

                                                                                        Reading more about Rust as I prepare to start a job that will largely have me writing in it.

                                                                                        The new position puts me as the only engineer on a project initially and my usual languages are C and C++. I’m worried about safety of a system I’m writing by myself. Rust should provide sufficient safety and strictness to help mitigate a large portion of problems with writing low level code, especially since this will be completely green field development.

                                                                                        I’m also excited to be using a modern toolchain for development (e.g. cargo) :).

                                                                                        1. 4

                                                                                          start a job that will largely have me writing in it.

                                                                                          That’s a really exciting statement as far as the overall Rust ecosystem goes. Are you at a point where you’d be comfortable saying who this is for, or maybe at least a bit more about the type of software you’ll be building? I’m really curious to hear more about where Rust is heading in industry.

                                                                                          1. 7

                                                                                            In the interest of transparency, the job didn’t dictate the language - I did. I spent most of the Christmas holiday prototyping and experimenting with stuff to try and find a language I felt most suited the nature of the project (one engineer, strict performance requirements, strict/safe compiler). I’m an OS-level engineer by experience, having spent the entirety of my professional career working on lower level OS stuff.

                                                                                            The project itself is under wraps but it’ll be low level OS-related code. Some OS services/daemons and some driver work. The code will need to be minimal overhead for CPU/memory and run on a variety of hardware setups. Where I can, I’m going to use Rust as much as possible.

                                                                                            If I can find the time, I’d like to blog about and/or open source some if it as we go along :).

                                                                                            1. 3

                                                                                              I know I speak for others too when I say it will be great to hear about any of your experiences using Rust in production. Good luck and I hope you find it enjoyable!