Strong agree. Shot myself in the foot too many times with closed intervals. I’ve been using ‘start’ and ‘endx’ as variable names, where x means exclusive.
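For what it’s worth, Rust bakes this convention into the language: `start..end` is half-open, so lengths and adjacent ranges work out with no off-by-one fiddling. A tiny sketch (variable names chosen to match the convention above):

```rust
fn main() {
    let start = 3;
    let endx = 7; // exclusive upper bound, the "x" naming from above

    // Half-open: the length is simply endx - start, and adjacent
    // ranges [0, 3) and [3, 7) tile with no overlap and no gap.
    let items: Vec<i32> = (start..endx).collect();
    assert_eq!(items, vec![3, 4, 5, 6]);
    assert_eq!(items.len(), (endx - start) as usize);
}
```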
Huh – slightly surprised to see it uses the higher-overhead 8b/10b instead of something denser like 128b/130b as employed in PCIe 3+. (And also that USB2 apparently has nothing of the sort at all, though I guess I don’t have much gut sense of the point at which those measures become necessary.)
Wouldn’t 8b10b provide far more frequent transitions between 1 and 0 than 128b130b? Maybe USB needs to tolerate enough clock drift that it has to guarantee transitions more often than 128b130b would?
USB1.1 and USB2 use bit stuffing. https://www.embedded.com/bit-stuffing/
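For anyone unfamiliar: the idea is that after six consecutive 1 bits the transmitter inserts an extra 0, which forces an NRZI transition so the receiver can keep its clock locked. A rough sketch of the encoder side (bits modeled as 0/1 bytes for readability; names are mine, not from the spec):

```rust
/// USB-style bit stuffing: after six consecutive 1 bits, insert a 0
/// so the NRZI line is forced to transition, letting the receiver
/// recover the clock. The receiver strips the stuffed bits back out.
fn stuff_bits(input: &[u8]) -> Vec<u8> {
    let mut out = Vec::new();
    let mut run_of_ones = 0;
    for &bit in input {
        out.push(bit);
        if bit == 1 {
            run_of_ones += 1;
            if run_of_ones == 6 {
                out.push(0); // stuffed bit
                run_of_ones = 0;
            }
        } else {
            run_of_ones = 0;
        }
    }
    out
}

fn main() {
    // Seven 1s in a row: a 0 gets stuffed after the sixth.
    assert_eq!(stuff_bits(&[1, 1, 1, 1, 1, 1, 1]),
               vec![1, 1, 1, 1, 1, 1, 0, 1]);
    // Streams that already transition often are left alone.
    assert_eq!(stuff_bits(&[0, 1, 0, 1]), vec![0, 1, 0, 1]);
}
```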
I’d assume that the tolerances are way lower in PCIe (i.e. it doesn’t have to worry about people moving and bending cables while operating).
I really like stack machines. They are becoming relevant again in the current cheap FPGA revolution. I’ve been recently exploring stack machines in a Haskell code generator setting, where a parameterized CPU core, associated Forth-based machine code, and QuickCheck based test benches can all be hosted in a single Haskell program, which then generates Verilog and SRAM images to run on target. Very much work in progress, but it isn’t as hard or crazy as it sounds when you keep things simple.
Superscalar killed stack machines. Superscalar designs rely on being able to identify dependencies between instructions so that you can execute independent ones in parallel. Dependency analysis in a register machine is quite easy: instructions that don’t use the same registers are independent. It’s really, really hard to do with stack machines. (Almost) Every instruction modifies the top of the stack.
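To make the contrast concrete, here’s a toy sketch of that disjointness test for a made-up three-address register encoding (struct and names invented for illustration):

```rust
// A made-up three-address instruction: one destination, two sources.
#[derive(Clone, Copy)]
struct Insn {
    dst: u8,
    srcs: [u8; 2],
}

// Two register instructions can issue in parallel when neither
// writes a register the other reads or writes.
fn independent(a: Insn, b: Insn) -> bool {
    let write_write = a.dst == b.dst;
    let a_feeds_b = b.srcs.contains(&a.dst);
    let b_feeds_a = a.srcs.contains(&b.dst);
    !(write_write || a_feeds_b || b_feeds_a)
}

fn main() {
    // add r0, r1, r2  and  add r3, r4, r5: disjoint registers,
    // so a superscalar core can execute them at the same time.
    let i1 = Insn { dst: 0, srcs: [1, 2] };
    let i2 = Insn { dst: 3, srcs: [4, 5] };
    assert!(independent(i1, i2));

    // add r3, r0, r4 reads r0, which i1 writes: dependent.
    let i3 = Insn { dst: 3, srcs: [0, 4] };
    assert!(!independent(i1, i3));

    // On a stack machine nearly every instruction implicitly reads
    // and writes the top of stack, so this cheap test doesn't exist.
}
```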
Since then, modern ISA designs have significantly improved encoding density, so the one area where stack machines were a big win - program size (and therefore i-cache efficiency) - is largely gone.
That said, with the security problems associated with speculative execution and the programming languages being developed to better take advantage of multicore systems, I do wonder if there’s going to be a place in 10-15 years for many-core in-order processors. Either stack machines or simple multithreaded in-order cores are more efficient than big superscalar out-of-order machines if you can write software for them that keeps the cores busy.
There is such a place: deeply embedded programming, where you would use a tiny microcontroller that sits in the middle of custom gateware instead of a complex ad-hoc controller state machine. The CPU I was talking about is such a thing; it serves a single purpose: controlling a bunch of peripherals. In an FPGA with block SRAM, it still makes sense to keep the instruction width and program size small so a program fits in, say, a single 256-byte block SRAM, and to keep the CPU simple so you need fewer gates to implement it. The Forth machine I mentioned is so stripped down it doesn’t even have a full datapath. The only computation it performs is decrement-and-branch-if-zero, so it can do loops, and in this case it made sense to go back to a single stack that can be used for both subroutine return addresses and loop counters. For the rest it can read/write registers. Surprising what you can do with so little.
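A toy interpreter sketch of that decrement-and-branch-if-zero style (this instruction set is invented here for illustration, not the actual core described above):

```rust
// Toy single-stack machine: the one stack holds loop counters (and,
// in the real design, return addresses); the only ALU operation is
// "decrement top of stack, branch if it hasn't reached zero".
#[derive(Clone, Copy)]
enum Op {
    Push(u32),    // push a literal, e.g. a loop count
    Dbnz(usize),  // decrement top; jump to target unless it hit zero
    Drop,         // pop the spent counter
    WriteReg(u8), // stand-in for "poke a peripheral register"
    Halt,
}

fn run(prog: &[Op]) -> u32 {
    let mut stack: Vec<u32> = Vec::new();
    let mut pc = 0;
    let mut writes = 0; // count peripheral writes for the demo
    loop {
        match prog[pc] {
            Op::Push(n) => { stack.push(n); pc += 1; }
            Op::Dbnz(target) => {
                let top = stack.last_mut().unwrap();
                *top -= 1;
                pc = if *top != 0 { target } else { pc + 1 };
            }
            Op::Drop => { stack.pop(); pc += 1; }
            Op::WriteReg(_) => { writes += 1; pc += 1; }
            Op::Halt => return writes,
        }
    }
}

fn main() {
    // "Write a peripheral register 5 times" as a dbnz loop.
    let prog = [
        Op::Push(5),
        Op::WriteReg(0), // loop body starts at pc = 1
        Op::Dbnz(1),
        Op::Drop,
        Op::Halt,
    ];
    assert_eq!(run(&prog), 5);
}
```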
Wonderful! I believe my Fomu just found a weekend playdate.
It’s annoying that Elixir and Erlang are only good for writing web applications. You can’t write CLI tools, since starting the BEAM doesn’t make sense for that. Unless I’m mistaken.
With Go, for example, you can write the same highly concurrent web apps but also write CLIs, and thus use one language for everything; plus the Go LSP is more mature.
I never tried out Elixir/Gleam though, perhaps its functional & expressive features are really that good.
You can write command line applications in Erlang-based languages, it’ll just be a bit slower to start and harder to distribute than a language that compiles to a binary, similar to tools written in Ruby, etc.
I wouldn’t pick Erlang for a general purpose CLI but I might for a tool for a larger Erlang project. This is a common approach: iex, mix, rebar3, rabbitmq CLI, etc.
Go does a great job of making easy-to-distribute binaries, but sadly it is lacking many of the concurrency and durability features of Erlang. Ideally there would be a language with both! I’m hoping Lumen, the LLVM compiler for Erlang, will deliver here in the future.
(On a second read of your comment, I can’t tell if you have experience in the Erlang world. If you do, my apologies if you already know what I’ve written below. I’m learning about a new topic and I’m somewhat excited.)
I have been messing around with Elixir for a few weeks and I love the pattern matching, function overloading, and expression oriented syntax. It’s a very fun, expressive language. I would recommend checking it out!
I believe that there are some frameworks to make TUIs, but it is probably not suitable for short lived command line tools. Specifically, I understand that it is not very fast when it comes to text processing.
One important thing to note about the Erlang virtual machine (BEAM) is that it is built for low and consistent latency. You could run an infinite, CPU-bound loop in one process and the rest of the processes would keep chugging along. This means that it is not necessarily built for high throughput. (The runtime interrupts each process after a certain number of “reductions”.)
So while Go might be able to build highly concurrent systems I think you would have to put a lot of work into making the latency consistent and to build fault tolerance into your application. That said, throughput is more important than responsiveness for some applications so it is a reasonable trade-off to make.
Use escript. It doesn’t start OTP, which is where the startup time is spent. Starting escript is pretty much instantaneous. If you need distribution access you can still call net_kernel:start/2 manually and send messages to other instances.
As a follow-up to my previous reply (https://lobste.rs/s/q66slp/v0_10_gleam_statically_typed_language_for#c_upesnz) I would like to say that I’ve been thinking about how Go could be a compilation target for Gleam. I keep hitting problems with not being able to replicate some Erlang features using the Go concurrency primitives (namely monitors, process isolation, and links). If you or anyone has an idea of how we could tackle this I’d love to explore this area!
Language server protocol. It’s a common API for IDE engines so that one program can be made per language and then shared with all editors, rather than each editor implementing its own language tooling.
Not an edited blog, but a messy “public notes” collection. Give me an upvote and I’ll fix the https and the update script :) It looks like that broke a year ago. This used to get some traffic but now seems to no longer end up in search results.
This is a truly great tool.
This is a great book. It’s perfect if you already know a couple of programming paradigms, but want to get a better idea of how they all relate to each other.
How is it wrt code size and RAM size? Would you be able to do anything interesting on e.g. an STM32F103C8 Cortex M3 with 64 kByte flash and 20 kByte RAM? How hard is it to get iterators back without std? Iterators seem very useful for signal processing and handling communication protocols. Would it be wise to take this into production at this point?
The Iterator traits are libcore features - just the implementation is in libstd. Have a look at iterator.rs - it is not all that long.
I don’t have any experience with such systems, but core never allocates, and from there on it should be possible to write a std replacement that fits your use case better.
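For example, slice iterators and their adapters come entirely from core, so code like this needs no allocator and compiles under `#![no_std]` (shown as a plain program here; the function and names are illustrative, not from any particular crate):

```rust
// Everything used below lives in libcore: no heap, no libstd,
// so the same code works on a 20 kByte-RAM Cortex-M3.
fn mean_square(samples: &[i32]) -> i32 {
    let n = samples.len() as i32;
    // map/sum are core::iter adapters; no allocation happens here.
    samples.iter().map(|&s| s * s).sum::<i32>() / n
}

fn main() {
    let buf = [3, -4, 3, -4]; // fixed-size, stack-allocated buffer
    // (9 + 16 + 9 + 16) / 4 = 12 with integer division
    assert_eq!(mean_square(&buf), 12);
    // Chained adapters (filter, zip, take, ...) also come from
    // core::iter, so this style costs nothing extra on bare metal.
}
```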