1.  

    I am working on an STM32-based terminal board supporting color VGA and a USB keyboard. The idea is to have a common firmware core that implements all the escape sequences needed for compatibility with DEC VTs and XTerm, then use this core with both the new STM32 board and Geoff’s PIC32-based board. Geoff’s board won’t support color or a USB keyboard, but it should otherwise benefit from a more compliant terminal implementation. This weekend is all about getting it to pass vttest, which requires a lot of digging through some amazing DEC manuals (a 75MB PDF!).

    1. 2

      Really cool article! Have you thought about making the Servo trait into a generic library? That would be really useful given the existence of the Pwm HAL!

      Speaking of RISC-V, other than the HiFive, there are now several cheap RISC-V dev boards with Rust support, all under $10:

      They all feature the GD32VF103VBT6, a RISC-V clone of the famous STM32F103. Rust support exists in the gd32vf103-pac and gd32vf103xx-hal crates.

      1. 1

        Making the Servo trait into a generic library would require enough parametrization to support the variety of servo motors out there. Something to research!
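        A sketch of what that parametrization might look like: make the pulse-width range and PWM period constructor parameters, and keep the driver generic over a PWM pin. The `PwmPin` trait below is a local stand-in modeled on embedded-hal 0.2’s `embedded_hal::PwmPin`; the `Servo` type, its method names, and the timings are illustrative assumptions, not an existing API:

        ```rust
        // Minimal stand-in for a PWM pin, modeled on embedded-hal 0.2's
        // `embedded_hal::PwmPin`; a real crate would bound on the HAL trait.
        pub trait PwmPin {
            fn get_max_duty(&self) -> u16;
            fn set_duty(&mut self, duty: u16);
        }

        /// A servo driver parametrized by its pulse-width range, since servos
        /// differ (e.g. 1.0-2.0 ms vs 0.5-2.5 ms for the full travel).
        pub struct Servo<P: PwmPin> {
            pub pwm: P,
            min_pulse_us: u32, // pulse width at 0 degrees
            max_pulse_us: u32, // pulse width at 180 degrees
            period_us: u32,    // PWM period, typically 20_000 us (50 Hz)
        }

        impl<P: PwmPin> Servo<P> {
            pub fn new(pwm: P, min_pulse_us: u32, max_pulse_us: u32, period_us: u32) -> Self {
                Servo { pwm, min_pulse_us, max_pulse_us, period_us }
            }

            /// Move to an angle in [0, 180] degrees by interpolating the pulse
            /// width between min and max, then scaling it into a duty cycle.
            pub fn set_angle(&mut self, degrees: u32) {
                let span = self.max_pulse_us - self.min_pulse_us;
                let pulse_us = self.min_pulse_us + span * degrees.min(180) / 180;
                let duty = self.pwm.get_max_duty() as u32 * pulse_us / self.period_us;
                self.pwm.set_duty(duty as u16);
            }
        }

        // A mock pin that just records the duty cycle, for testing off-hardware.
        pub struct MockPwm { pub duty: u16, pub max: u16 }
        impl PwmPin for MockPwm {
            fn get_max_duty(&self) -> u16 { self.max }
            fn set_duty(&mut self, duty: u16) { self.duty = duty; }
        }
        ```

        With the standard hobby-servo timing (50 Hz period, 1.0–2.0 ms pulse), `set_angle(90)` lands on a 1.5 ms pulse.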

        Thanks for the pointers! The GD32VF103VBT6 looks like a very nice MCU, and it is priced very competitively.

        1. 1

          Wio Lite RISC-V with an ESP8266 WiFi coprocessor

          Wouldn’t the ESP8266 chip be a lot faster than the RISC-V chip? Why not just run your application on that?

          1. 2

            The ESP8266 is clocked at 80MHz, the GD32 at 108MHz. I doubt the ESP8266 will be faster. What is more, the Xtensa ISA of the ESP8266 is not supported by mainline LLVM so getting Rust to run on it will require some tricks.

            I guess they put the ESP8266 on there since they want to market this to people who want dev boards with WiFi. Nobody has yet made a RISC-V SoC with WiFi on it, but that’s just a matter of time.

          1. 1

            Oops! Fixed.

          1. 4

            Very nice! I will use it for the next iteration of Geoff’s VT100 terminal.

            1. 1

              Heh. Did you get this from my phlog reference or is this a coincidence?

              Been eyeing building one of these for some time. I’m new to electronics, though, and don’t have a pile of components or know good sources for them. Building one of these from scratch seems overwhelming. Though it’s small and would be a good one to start with, I guess. I’d need a chip programmer, too, though.

              Someone should distribute it in kit form. :)

              1. 2

                Here is the kit. It’s currently sold out, but more PCBs are on the way. You can join the waitlist to get notified.

                1. 1

                  Sweet! Thanks. Somehow I didn’t even think to look on Tindie.

              1. 1

                I’ve used JMeter extensively to do performance and scalability tests for services I’ve helped build. Driving it through CI is almost essential in order to keep a reliable shared history of test runs. The jmeter-ec2 project has been helpful for scaling tests out economically, although it has significant bugs and limitations. I’ve usually measured the applications under test with New Relic.

                1. 1

                  Is there a limit on how many users JMeter can simulate?

                  1. 1

                    There’s a practical limit per node of somewhere between 200 and 4,000 threads, depending on how JMeter is tuned; threads are a good proxy for individual users. You can scale out horizontally across multiple nodes, though. I’ve done practical tests with the equivalent of 20,000 users using jmeter-ec2, spread across dozens of EC2 servers.
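                    For concreteness, a rough sketch of how that scale-out is driven with stock JMeter’s CLI (the host names are placeholders; jmeter-ec2 wraps a similar workflow around EC2 instances):

                    ```shell
                    # Run a plan in non-GUI mode on one node,
                    # logging results to a .jtl file for later analysis.
                    jmeter -n -t load_test.jmx -l results.jtl

                    # Drive the same plan from remote load generators (each running
                    # jmeter-server) to scale past the per-node thread limit.
                    jmeter -n -t load_test.jmx -l results.jtl \
                        -R loadgen1.example.com,loadgen2.example.com
                    ```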

                1. 3

                  How do running containers reconcile with components that rely on “owning” an entire physical machine, like Postgres or the Erlang VM? With Erlang, say, I can routinely run with half a terabyte of RAM and a dedicated 10G network, serving hundreds of thousands of users per node. Can I do this with K8s?

                  1. 2

                    I wouldn’t say PostgreSQL relies on “owning” an entire machine, but if you want that, you can create node pools with taints, and then set up your PostgreSQL pod so that it tolerates said taints. It will be the only pod allowed to be scheduled on that node. (I suspect you might still have some Kubernetes infrastructure running on that node, so I doubt you can literally remove everything, but you can certainly manage the allocation of pods to nodes in a fine-grained way.)
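                    As a rough sketch of how that looks in practice (the node name, label, and taint key here are made up for illustration): taint and label the dedicated node, then give the PostgreSQL pod a matching toleration and node selector.

                    ```yaml
                    # On the node side (kubectl, for reference):
                    #   kubectl taint nodes db-node-1 dedicated=postgres:NoSchedule
                    #   kubectl label nodes db-node-1 dedicated=postgres
                    apiVersion: v1
                    kind: Pod
                    metadata:
                      name: postgres
                    spec:
                      containers:
                        - name: postgres
                          image: postgres:12
                      # The toleration lets this pod onto the tainted node...
                      tolerations:
                        - key: "dedicated"
                          operator: "Equal"
                          value: "postgres"
                          effect: "NoSchedule"
                      # ...and the selector pins it there, so it can't be
                      # scheduled anywhere else.
                      nodeSelector:
                        dedicated: postgres
                    ```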

                    1. 1

                      Can K8s help with replication and failover? Amazon RDS, for example, maintains a DNS record for clients to use; when a master failure is detected, it promotes a slave and updates that record.

                      1. 3

                        think of k8s as ‘erlang for the datacenter, thrown roughly together by enterprises and people who like C’ and you’ll get pretty close.

                        1. 1

                          I don’t know. I’m not a k8s expert. I just know the basics. My guess is that something like that is possible. Disclaimer: that’s probably my answer for every question you might ask. K8s is very large and very complicated. I don’t even know enough to say whether it is mostly incidental or necessary complexity.

                    1. 1

                      The Cowboy web server supports a variation [1] of the Webmachine REST flow.

                      1. http://ninenines.eu/docs/en/cowboy/HEAD/guide/rest_handlers/