Great article. I recommend reading yosefk’s article with it. I’m going to draw on it as I counter some of this one.
“due to its roots in simplicity and compactness, considering that it emerged in the microcomputer era, on machines that provided (for today’s standards) severely limited capacity.”
“You step down on the level of assembly language which may sound daunting, yet gives you full control over every aspect of memory layout. There are no artificial barriers between you and the machine’s architecture”
This is only partly true. Its other root was in the personal preferences of Chuck Moore. That’s why it has an 18-bit, stack-oriented design in an era of 8/16/32/64-bit stack and register machines. An 18-bit stack on a 32-64-bit RISC machine is neither the simplest nor the most efficient way to do things. Even if aiming for low gate counts, the billions a year in 8-16-bit MCU’s sold tell me there are people getting by with less. The constructs themselves aren’t close to how we think about a lot of operations. Languages like LISP and Nim have metaprogramming that lets us write constructs close to how we think, which get turned into efficient primitives we can still understand. They have some of the benefits of Forth without the downside of putting square pegs into round holes.
So, Forth is the result of both squeezing what one can out of constrained hardware/software as author says and arbitrary decisions by Chuck Moore. It’s easier to understand some of it when you remember that one person just liked doing it that way, refused to adopt any method from others, and tried to force everything through his limited tools. He’s actually like a for-profit, industrial version of the demoscene. It’s cool seeing what he does. It’s also objectively worse than what others are doing along a lot of metrics. Especially hardware where his tools can’t handle modern, deep-sub-micron designs with their speed, size, cost, and energy benefits.
So, I default to thinking that future work in Forth should attempt to isolate and ditch the arbitrary restrictions to build a similar model of simplicity for current CPU’s or hardware design. A modern CPU is 64-bit w/ registers and stacks, several levels of cache, SIMD, multicore, maybe hardware accelerators, support for memory isolation, and support for VM’s. What does the simplest implementation of that look like in software? For hardware, it has to be FSM’s that are synthesized and resynthesized in a series of NP-hard problems, on manufacturing processes that have up to 2,500 design rules at 28nm. One problem even requires image recognition on the circuits themselves before redrawing them. What does the simplest EDA tool for that look like? Answering such questions gets us the benefits of Moore’s philosophy without the unnecessary baggage.
Hint: Microcode and PALcode. They’re like the ISA version of defining Forth functions. I want microcode I can customize with open tools in every CPU. :)
“Imagine being able to debug device drivers interactively”
This is an argument for interpreters, not Forth itself. Forth is one possibility. An interpreter for a subset of C or a superset of assembly (e.g. Hyde’s HLA) w/ live debugging and code reloading is also possible. I’ve thought about building one many times. I’d be shocked if there weren’t already a dozen examples.
“You can’t have both a language that provides direct low-level access to the machine and that at the same time gives you all advantages of a high-level environment like automatic memory management.”
These languages exist. They’re safe, systems languages that allow inline assembly. There’s also typed and macro assembly languages. Recently, there’s work on type systems to know combinations of them behave as expected. You can’t do just any arbitrary thing. You can mix approaches where you want, though.
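For a concrete (if tiny) illustration of that mixing, here’s plain C with a single line of GCC/Clang extended inline assembly; the function name is made up for the example, and the asm path assumes an x86-64 target:

```c
#include <stdint.h>

/* add_asm is a made-up example: an ordinary C function whose hot path is
 * one line of x86-64 inline assembly (GCC/Clang extended-asm syntax),
 * with a portable fallback on other architectures. */
static uint64_t add_asm(uint64_t a, uint64_t b) {
#if defined(__x86_64__)
    uint64_t out;
    /* lea computes a+b in a single instruction without touching flags */
    __asm__("lea (%1,%2), %0" : "=r"(out) : "r"(a), "r"(b));
    return out;
#else
    return a + b;  /* portable fallback */
#endif
}
```

The compiler still type-checks every call site; only the one expression drops to the machine level, which is the general shape the safe systems languages offer too.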
“It is satisfying and reassuring to be able to fully understand the code that is generated from your sources”
Wirth showed you can do this with a language built for human understanding. His languages are simple enough that students design compilers for them. An interpreter would be even easier. It’s pretty clear what object code will come out of it. And a little discipline can prevent things like cache misses. Worst case, you just look at the assembly of each module to see if you want to tinker with it. As with any other goal, it’s possible that Forth takes simplicity too far. I thought Wirth did that, too. I think there are tradeoffs to be made between simplicity and complexity. Forth philosophy, like Wirth’s, will sacrifice everything else to simplicity. I’m fine with a more complex implementation of a language if it lets me express myself faster, safer, and so on, with efficient code coming out.
“But why do so many insanely complex protocols and architectures exist?”
The author is right that a lot of complexity is unnecessary. But the author also writes as if all complexity is unnecessary. This is incorrect: much of it is intrinsic to the software requirements, as Brooks wrote. Even a simple design becomes more complex once you handle things like error handling, software updates, backups, security mitigations, and so on. Even Forth itself is more complex than a native alternative with similar primitives and macros, since it has an interpreter/compiler/updater built in. Forth people think that complexity is justified over bare metal since it gives them the benefits in the article. Well, that’s what we’re saying about a lot of this other complexity in CPU’s, OS’s, and languages.
“It means to drop the bells and whistles, the nice user interfaces”
User adoption, onboarding, and support are apparently not a thing in the Forth world. Both market share and usability studies showed GUI’s were superior to text for common applications for most users. So, we should give them what they want, do what works, or however you want to look at it. If we don’t need it, then don’t implement it, to reduce complexity. There are plenty of terminal users or services that don’t need GUI’s. That doesn’t make GUI’s inherently bad like the author argues, though.
“What you reuse of a library is usually just a tiny part, but you pay for all the unused or pointless functionality and particularly for the bugs that are not yours.”
The author massively overstates real problems that come with bringing in dependencies. Lots of libraries are simple to use and easy to swap out. The OpenBSD and Suckless camps regularly crank out simple implementations of software. Even the complex ones like Nginx often have guides that make installing and using them simple. The complexity is dealt with by somebody else. Likewise with these hosting providers that make it simple to get some services running at almost no cost.
I imagine many businesses and products wouldn’t happen at the rate and cost they do if each developer spent tons of time rebuilding compatible networking stacks, web servers, and browsers from the ground up with deep understanding of them. Nah, they just install a package, read its guide, and Get Shit Done. You could say we opponents of Do It Yourself just like to Get Shit Done in Whatever Way Works. If you do API’s and modularity right, you can always improve later on anything you had to keep shoddy to meet constraints.
Forth was the first programming language I encountered. I’ve rarely used it (debugged from the OpenBoot PROM once, wrote some minor things in Quartus on the Palm Pilot) since learning it as a child in 1978. There’s a joke that if you’ve seen one Forth, you’ve seen one Forth. This rings true because there’s nearly nothing to Forth but a couple of primitives everyone more or less agrees upon, a standard everyone ignores, a philosophy, and the problem at hand. The solution is to build up the language until it becomes a DSL for that specific problem, as understood by that particular programmer, and it becomes weird.
Yeah, me too. I really love D. Its metaprogramming alone is worth it.
For example, you can write a compile-time parser generator with it.
This is a good point. I had to edit out a part about how a language without major adoption is less suitable, since it may not get the resources it needs to stay current on all platforms. You could have the perfect language, but if it somehow failed to gain momentum, it turns into somewhat of a risk anyhow.
That’s true. If I were running a software team and were picking a language, I’d pick one that appeared to have some staying power. With all that said, though, I very much believe D has that.
In my opinion, until OCaml gets rid of its GIL, which they are working on, I don’t think it belongs in this category. A major selling point of Go, D, and Rust is their ability to easily do concurrency.
Both https://github.com/janestreet/async and https://github.com/ocsigen/lwt allow concurrent programming in OCaml. Parallelism is what you’re talking about, and I think there are plenty of domains where single process parallelism is not very important.
You are right. There is Multicore OCaml, though: https://github.com/ocamllabs/ocaml-multicore
I’ve always just written off D because of the problems with which parts of the compiler are and are not FOSS. Maybe it’s more straightforward now, but it’s not something I’m incredibly interested in investigating, and I suspect I’m not the only one.
As of last year it’s open source, joining LDC, with discussions on HackerNews and Reddit.
At home: Filco Majestouch 2 with MX Blues
At work: Filco Majestouch Ninja with MX Browns (and O-Ring dampers), because I prefer my colleagues not to hate me.
I haven’t bought anything shiny in a little while though, so I’m eyeing up something a bit different like a Pok3r or an Ergodox-alike.
I have a Filco Majestouch 2 (also with MX Blues) and after a few years of daily use it began suffering extreme keybounce. It’s unusable now due to chatter/key bounce that affects nearly all the common keys. I’ve given up on it but might disassemble it and see if a deep cleaning with distilled water helps.
Mine is coming up on 4 years old I think, and I’ve had no issues at all. That said, it sounds like a fault and I’d try contacting the reseller or manufacturer to see if they have any suggestions. Mechanical keyboards should last pretty much until the switch mechanisms give out, if not longer.
This was ordered (had to dig out the email archive) in mid-2010 and was used daily until late-2013 or 2014, based on other purchase dates. Meh. I’ll contact the original seller, elitekeyboards.com, but frankly I don’t have expectations because it’s nearly eight years old and they don’t carry that line any more.
I too have two Filco Majestouch - one at work and one at home, both Ninjas, with different switches! At home, the keyboard’s plugged into a Mac and has a problem with dropping keystrokes / being slow as I type in a browser (and sometimes elsewhere). I can’t find a fix so I’m likely to plug an Apple keyboard in instead :(
The only remappings I do are:
The post mentions ‘programming’ - I’m assuming this is referring to remapping. I did look for keyboards where I could record macros, but those that exist are very expensive.
Currently:
I use the laptop keyboard unless I know I’ll be stationary, then I drag around a KBC Poker-X with Cherry Blues.
KUL ES-87
I have a pair of ES-87 keyboards as well – one at work and one at home – but with Cherry MX Brown switches. I like the feel, and they’re just quiet enough that I don’t get evicted from the office or my family!
I use xmodmap instead of the DIP switches on the keyboard to replace Caps Lock with Control.
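For anyone curious, the xmodmap recipe for that is only a few lines (a sketch; it assumes the usual layout where Caps_Lock currently carries the Lock modifier):

```
! ~/.Xmodmap: make Caps Lock act as an extra Control key
remove Lock = Caps_Lock
keysym Caps_Lock = Control_L
add Control = Control_L
```

Load it with xmodmap ~/.Xmodmap; some session setups source that file automatically at login.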
There is apparently very little constructive to be said about a twitter thread with very little content.
Yes, I’ve flagged.
Please, consider the same.
I am making a very frowny face at the snarky hot takes about these. Contemptuous knee-jerk gripes are bad in and of themselves, rarely lead to good conversation, and erode community norms. We’ve already seen Reddit, Digg, and YC News fall down that slippery slope.
I don’t know if this is encouraged by the fact that the story link goes to twitter instead of a longer explanation in an article or talk, but it probably doesn’t help.
Those communities had more damage done to them by the garbage of Twitter posts and hot takes than by people trying to keep the weeds out.
Everybody wants a garden, nobody likes being interrupted by the gardeners.
I think @pushcx was agreeing with my review of the comments thus posted. I… think?
[Comment removed by author]
In Soviet Russia…
Shall we really go back to slashdot’s Natalie Portman naked and petrified with hot grits nonsense and whatever happened on kuro5hin? Some of it gets a smirk but it’s not a good use of my time.
I’ve flagged it too. The story here is probably interesting, but the twitter thread is just nonsense.
I’m unclear whether you need to build infrastructure to support existing software/hardware tools, or whether you need software (and maybe devices) that is tolerant of the conditions of the infrastructure you have available.
This was recently posted to lobste.rs
http://www.lowtechmagazine.com/2015/10/how-to-build-a-low-tech-internet.html
I like Secure Scuttlebutt. There is also maybe Briar.
SSB and, I think, Briar support building applications on top of the protocol.
Ham radio has rules (laws) that may or may not be an issue in places like Africa. For example, in the US, transmissions cannot be encrypted. They can be encoded in a publicly available protocol, but anyone must be able to decode it. If you have those constraints in-country, the data might be of a nature that you wouldn’t want to be transmitting to the world.
It’s existing tools and software (I’m the developer). Essentially, what I’m looking to get out of this post is a slew of ideas for what we can do when the best-case scenarios for getting data out of a remote place fall down.
Right now the core of what we’re doing is selective syncing of data, where the user has control over when they sync their data from the phone or desktop application. We’re moving to something similar to Scuttlebutt, but implemented on top of Zyre on top of 0MQ for the desktop application, and trying to sort out a similar implementation on Android where we can piggyback data through multiple nodes to the central end points.
I actually jumped into ##hamradio on freenode and asked some questions (and got what felt like a little bit of ribbing for not knowing enough about ham radios), but the gist of what I got out of the conversation was that it’s probably only worth doing if you licence a frequency for use, and even then it’s very complicated. I think it’s still worth digging into, just not something that could be implemented on a short turnaround time.
What I’m sort of doing is taking all these amazing answers and pointers I’m getting from you guys, grouping and organizing them, and implementing the ones we can implement now, so we have multiple modes per platform (iOS, Android, OS X, Windows, Linux) for getting data to where it needs to go. The criticality of the timeframes for what we’re involved in means we need that data as soon as we can for decision making.
“I actually jumped into ##hamradio on freenode and asked some questions (and got what felt like a little bit of ribbing for not knowing enough about ham radios) but the gist of what I got out of the conversation was that it’s probably only worth doing if you licence a frequency for use and even then it’s very complicated.”
I’m not one, but the hams I know are a stodgy but knowledgeable bunch who expect you to have done your reading. Anyway, radio might not be out of the question and will depend on local regulations. In the USA (not true everywhere, for example Canada) there is MURS, which requires no license, operates over the 151–154 MHz range (formerly business radio, so inexpensive transceivers), can be used for digital transmission, and can achieve several miles. At least in the USA, packet-forwarding/repeating is prohibited, which limits some interesting uses. You’ll need to ask someone who has local expertise as to what is permitted in each country.
Brainstorming/spewing…
These are all really good starting points, thank you. We dabbled with SMS sending data for a bit and I think we’re going to have to go down that route no matter what to have it as a fallback when data isn’t available. What we may have to do is subvert the telecoms a bit and set up an android app to act as a temporary SMS gateway with a sim in it so we can hit the ground running instead of trying to get a gateway provisioned in country.
the initial request is rigid and does not include a field that could be used to request that new servers modify their response without breaking compatibility with existing servers
we needed to find a side channel which existing servers would ignore but could be used to safely communicate with newer servers.
Folks: Always put version numbers in your protocols and file formats. Oh, and while you’re at it, prefer extensible records too.
In some IBM products I saw their smart design around C data structures and wire protocols: everything has, in the first few bytes, a static ASCII “eyecatcher”, a version, a header offset, a payload offset, lengths, and usually a checksum. I’ve had arguments about this since, but I believe the dozen-byte trade-off is worth it: within seconds you can identify the protocol and the payload, you know if you got the whole thing, you can detect (some) corruption, and if a future version of the structure is released the old one still works.
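As a rough sketch (the field names and eyecatcher below are invented for illustration, not IBM’s actual layout), that header style looks something like this in C; a real wire format would also pin down byte order:

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical message header: the first bytes of every message carry an
 * ASCII eyecatcher, a version, offsets, lengths, and a checksum. */
struct msg_header {
    char     eyecatcher[8];   /* e.g. "MYPROTO\0": identifies the format in a hex dump */
    uint32_t version;         /* bump when the layout changes */
    uint32_t header_len;      /* offset to the payload, so the header can grow */
    uint32_t payload_len;     /* so the receiver knows it got the whole thing */
    uint32_t checksum;        /* detects (some) corruption */
};

/* Fill in a version-1 header for a payload of the given length. */
static void msg_header_init(struct msg_header *h, uint32_t payload_len,
                            uint32_t checksum) {
    memset(h, 0, sizeof *h);
    memcpy(h->eyecatcher, "MYPROTO", 8);
    h->version = 1;
    h->header_len = (uint32_t)sizeof *h;
    h->payload_len = payload_len;
    h->checksum = checksum;
}

/* A receiver can sanity-check a buffer before trusting any of it. */
static int msg_header_valid(const struct msg_header *h, uint32_t buf_len) {
    return memcmp(h->eyecatcher, "MYPROTO", 8) == 0
        && h->version >= 1
        && h->header_len >= (uint32_t)sizeof *h
        && h->header_len + h->payload_len <= buf_len;
}
```

A reader that finds the eyecatcher at offset zero can identify the format even in a raw hex dump, and the version field lets old readers reject (or specially handle) newer layouts instead of misparsing them.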
This is done in their mainframe object code as well, specifically in the function prolog for XPLINK (and other linkage conventions, I believe). Every function is laid out with a header that starts with 0x00C300C500C500F1 (“CEE1” in EBCDIC, with null bytes) and some other bits of information that let you walk the stack very easily. It also lets you get at the name of the function, so not only can you walk the stack, you can get the function name. This means you don’t need debugging information to get a stack trace. It’s also really useful if you are searching a raw hex dump.
These were WebSphere MQ and DB2 LUW, which are strongly influenced by their mainframe heritage.
I didn’t know about the linkage convention but assumed they must because on occasion I’d uploaded core dumps and they trivially walked them and someone mentioned once that they had an internal tool that did it. Their trace output was also top notch, really fantastic and clearly designed in from the start.
I can entirely agree with you on that. The time it takes to implement a version number field is negligible compared to the amount of pain you’ll need to dedicate to supporting an old format with no version field.
Can you tell me why your blog requires JavaScript to display the content? It’s extremely frustrating: it’s all there, but after the initial blip the site just turns white with only the header remaining visible.
Works fine with FF’s reader view mode even with JavaScript blocked. He has a bit of JavaScript that loads Typekit and toggles opacity on success. As to why?
Timely. I have Nim in Action here after skimming the tutorial (as much to support the book author as anything), and my reactions to most of the language have been “Oh, that’s convenient”. I’m drawn to the easy C/C++ FFI but overall my fiddling with the language has been pleasant.
Nim in Action looks interesting. I like that it dives directly into building things (at least it looks like that from chapters 3 through 7). Most other books lose me at “here are 10 ways to declare an integer”.
This is precisely what I was going for when writing Nim in Action. Super glad to see it being appreciated :)
I strongly believe that learning by implementing (sometimes large) practical examples is the best way to learn a programming language. Learning every single detail of the language presented through small unrealistic examples gets boring quickly.
How up-to-date is the book? It was published 2 years ago, I think, but I imagine a lot has changed in that time?
Reasonably up to date for a dead-tree edition; it’s from 2017. I haven’t compared to the complementary ebook, but those are “live” and get updates. I’ve noticed a couple of changes/errata (a compiler invocation option, deprecation of the Aporia editor, a change to the ..< operator) but nothing significant. Indeed, it’s barely a year old.
I’ve actually added all the projects in the book to the Nim compiler’s test suite so the book should remain compatible with new versions. I’ve also asked our BDFL to not break these, so far so good :)
Great idea on that!
Cool. I love when the author jumps in on a comment :)