1.

I think the question shouldn’t be “are there common purely interpreted languages”, but “are there common pure interpreters for common languages”. I’m sure there have been pure interpreters for ruby, python, perl, js, all of them! The reason they get replaced is speed, and I’m not really sure why you’d accept the 100x–1000x slowdown of direct AST interpretation. Maybe for the dynamism?
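For the record, here’s a minimal sketch of what “direct AST interpretation” means (a hypothetical toy expression language, not any real implementation): every evaluation re-dispatches on each tree node, which is where much of that overhead comes from.

```python
# Tree-walking interpreter for a toy expression language (hypothetical
# example). Every call to evaluate() re-inspects the node tuple, so the
# dispatch cost is paid on every single visit.

def evaluate(node, env):
    kind = node[0]
    if kind == "num":            # ("num", 3)
        return node[1]
    if kind == "var":            # ("var", "x")
        return env[node[1]]
    if kind == "add":            # ("add", left, right)
        return evaluate(node[1], env) + evaluate(node[2], env)
    if kind == "mul":            # ("mul", left, right)
        return evaluate(node[1], env) * evaluate(node[2], env)
    raise ValueError(f"unknown node kind: {kind}")

# x * 2 + 1, with x = 20
ast = ("add", ("mul", ("var", "x"), ("num", 2)), ("num", 1))
print(evaluate(ast, {"x": 20}))  # 41
```

The upside is the dynamism: the AST is just ordinary data, so a program can build and evaluate new trees at runtime with no compilation step.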

1.

Came here to say something similar. Pure interpreters are a step in (one method of doing) language development, but there’s nothing that says they’re the last step. If a language becomes popular, and performance (runtime, memory, or otherwise) becomes a pain point, you can more or less guarantee a rewrite of the interpreter into a bytecode compiler/executor, a JIT compiler, or a straight compiler.
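The bytecode-rewrite step can be sketched too (again a hypothetical toy, using the same kind of tuple AST): walk the tree once at compile time, then execution becomes a flat loop over instructions instead of repeated tree dispatch.

```python
# Hypothetical sketch of the "bytecode compiler/executor" rewrite: the AST
# is walked exactly once, emitting a flat instruction list; running the
# program afterwards never touches the tree again.

def compile_expr(node, code):
    kind = node[0]
    if kind == "num":
        code.append(("PUSH", node[1]))
    elif kind in ("add", "mul"):
        compile_expr(node[1], code)
        compile_expr(node[2], code)
        code.append(("ADD" if kind == "add" else "MUL", None))
    else:
        raise ValueError(f"unknown node kind: {kind}")
    return code

def run(code):
    stack = []
    for op, arg in code:
        if op == "PUSH":
            stack.append(arg)
        elif op == "ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "MUL":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
    return stack.pop()

# (2 + 3) * 4: compiled once, runnable many times
bytecode = compile_expr(("mul", ("add", ("num", 2), ("num", 3)), ("num", 4)), [])
print(run(bytecode))  # 20
```

That’s roughly the path CPython, YARV, and friends took: same observable semantics, much cheaper inner loop.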

I’m also curious to know why the author wants to find a pure interpreted language. If it’s AST dynamism, that’s something that lisp macros take care of handily, while still being able to compile at least somewhat. Perhaps as a learning exercise?

1. 9

I know some devs who only code at work and are fine. But if there were two identical candidates (a myth, I know) except one had side projects, guess who I pick?

1. 7

I’m not sure it’d be an easy choice for me, though a lot depends on how you resolve that bit about the “identical candidates”. To really generalize, my interactions with people who have side projects are that they learn a bunch of stuff outside of work that could be useful for work, but often would prefer to be doing those side projects too, so might be less focused and/or prone to bikeshedding. I include myself in the 2nd category, fwiw: I learn a lot of things “on my own time” but I’m not necessarily the world’s best employee if you just want to hire someone to produce code.

If you had people with identical starting technical skill, but one had side projects, my no-other-information guess might even be that the person without side projects would be a more productive employee initially. It’s also probably true that they’d be less likely to keep up to date and/or proactively recommend new things unless there was an explicit framework in place to make that happen on work time. But I’m not sure that’s obviously worse in terms of what a company is looking for out of an employee.

1. 10

If they have identical technical skill and only one has technical side projects, the other is obviously more talented, because they picked up identical technical skills without spending out-of-work time on it.

2. 3

The one that had hobbies that improved their ability to communicate and work in a team? Maybe even to give and receive constructive criticism, and to compromise?

That can be satisfied by coding projects, sure. If, for example, they’re participating in an open source project: active on the mailing list or forum, and managing tickets or incoming patches. A solo side-project is the opposite of this, though. Anything where the candidate is spending their time being the sole person making decisions and in control won’t help them with teamwork. If they’re not going through code and architecture reviews, there’s an excellent chance it won’t help them be better coders, either.

On the other hand, board gaming, team sports, playing D&D, or any number of things will help candidates with the stuff that will make them really productive employees. The kind that isn’t just an additive part of your team, but a potential multiplicative part.

1. 1

If they’re not going through code and architecture reviews, there’s an excellent chance it won’t help them be better coders, either.

I don’t think this is true at all. Sure, it is a whole lot easier to improve when you have an experienced mentor pointing out ways that you can do better. But there are plenty of ways to advance as a programmer without someone else coaching you. Reading quality books and source code written by programmers that are better than yourself is a great way to fill that gap; and arguably something you should be doing even if you have a mentor. At the end of the day programming is no different than any other skill, the key to improving is practicing purposefully, practicing routinely, and taking plenty of time to reflect on that practice. If you’re not willing to do those things you’re not going to be very good even if you have someone telling you how to improve.

1. 3

Sure. It’s even possible to improve all on your own, without books or mentors, as long as you’re consistently pushing yourself out of your comfort zone, consistently failing, and consistently reflecting on your experiences and what weaknesses are best addressed next.

But that’s remarkably hard. Solo projects are great at getting you more familiar with a language, or more familiar with some specific libraries, but they’re just not the right tool if you want to improve your craft.

If you want to learn how to play violin, then sure you can try buying one, trying to play, and never performing. Reading an introductory text might help a bit. But it’s going to be much faster and better to learn from someone who knows how to play the violin, to perform so you’re confronted with feedback from uninterested parties, and to go back to the drawing board and repeat the process. You can improve at chess by reading books, but if you’re not playing games your progress will be slow. If you’re only playing games against people of similar and lesser skill than you, you’re unlikely to learn much at all.

Having teammates or other people who are better than you, and who are willing to thoughtfully critique your work and suggest improvements, is the most tried-and-true method of improving your skill at something.

Failure and feedback are the best tools we have. And they’re usually not provided by solo projects.

1. 1

Oh yeah, it’s a million times harder to go at it alone. And I suppose any solo project that would provide a good platform for improving, like writing an open source framework/library or building a complex application that makes it into production and has users, will eventually become collaborative. Because once you have users you’ve got to write some sort of documentation for them and they’re going to be telling you about all the issues they run into and all of the improvements they want made.

1. 22

This article is great except for No 3: learning how hardware works. C will teach you how PDP-11 hardware works with some extensions, but not modern hardware. They have different models. The article then mentions computer architecture and assembly are things they teach students. Those plus online articles with examples on specific topics will teach the hardware. So, they’re already doing the right thing even if maybe saying the wrong thing in No. 3.

Maybe one other modification. There are quite a lot of tools, esp reimplementations or clones, written in non-C languages. The trend started getting big with Java and .NET, with things like Rust and Go making some more waves lately. There’s also a tendency to write things in themselves. I bring it up because even the Python example isn’t true if you use a Python written in Python, one of the recent interpreter tutorials in Go, or something like that. You can benefit from understanding the implementation language and/or debugger of whatever you’re using in some situations. That’s not always C, though.

1. 14

Agreed. I’ll add that even C’s status as a lingua franca is largely due to the omnipresence of unix, unix-derived, and posix-influenced operating systems. That is, understanding C is still necessary to, for example, link non-ruby extensions to ruby code. That wouldn’t be the case if VMS had ended up dominant, or lisp machines.

In that way, C is important to study for historical context. Personally, I’d try to find a series of exercises to demonstrate how different current computer architecture is from what C assumes, and use that as a jumping-off point to discuss how relevant C’s semantic model is today, and what tradeoffs were made. That could spin out either to designing a language which maps to today’s hardware more completely and correctly, or to discussions of modern optimizing compilers and how far abstracted a language can become and still compile to efficient code.

A final note: no language “helps you think like a computer”. Our rich history shows that we teach computers how to think, and there’s remarkable flexibility there. Even at the low levels of memory, we’ve seen binary, ternary, binary-coded-decimal, and I’m sure other approaches, all within the first couple decades of computers’ existence. Phrasing it as the original author did implies a limited understanding of what computers can do.

1. 8

C will teach you how PDP-11 hardware works with some extensions, but not modern hardware. They have different models.

I keep hearing this meme, but pdp11 hardware is similar enough to modern hardware in every way that C exposes. Except, arguably, for NUMA and inter-processor effects.

1. 10

You just countered it yourself even with that given prevalence of multicores and multiprocessors. Then there’s cache hierarchies, SIMD, maybe alignment differences (memory is fuzzy), effects of security features, and so on.

They’d be better off just reading up on modern computer hardware and ways of using it properly.

1. 6

Given that none of these are represented directly in assembly, would you also say that the assembly model is a poor fit for modeling modern hardware?

I mean, it’s a good argument to make, but the attempts to make assembly model the hardware more closely seem to be vaporware so far.

1. 6

Hmm. They’re represented more directly than with C, given there’s no translation to be done to the ISA. Some, like SIMD and atomics, will be actual instructions on specific architectures. So, I’d say learning hardware and ASM is still better than learning C if you want to know what the resulting ASM is doing on that hardware. I’m leaning toward yes.

There is some discrepancy between assembly and hardware on highly-complex architectures. The RISCs and microcontrollers will have less, though.

2. 1

Not helped by the C/Unix paradigm switching us from “feature-rich interconnected systems” like in the 1960s to “fast, dumb, and cheap” CPUs of today.

3. 2

I really don’t see how C is supposed to teach me how PDP-11 hardware works. C is my primary programming language and I have nearly no knowledge about PDP-11, so I don’t see what you mean. The way I see it is that the C standard is just a contract between language implementors and language users; it has no assumptions about the hardware. The C abstract machine is sufficiently abstract to implement it as a software-level interpreter.

1. 1

As in this video of its history, the C language was designed specifically for the hardware it ran on, due to that hardware’s extremely limited resources. It was based heavily on BCPL, which invented “the programmer is in control” and was roughly whatever features of ALGOL could compile on another limited machine called the EDSAC. Even being byte-oriented versus word-oriented was due to the PDP-7 being byte-oriented vs the word-oriented EDSAC. After a lot of software was written in it, two things happened:

(a) Specific hardware implementations tried to be compatible with it in stack or memory models so that programs written for C’s abstract machine would go fast. Although possibly good for PDP-11 hardware, this compatibility would mean many missed opportunities for both safety/security and optimization as hardware improved. These things, though, are what you might learn about hardware by studying C.

(b) Hardware vendors competing with each other on performance, concurrency, energy usage, and security both extended their architectures and made them more heterogeneous than before. The C model didn’t just diverge from these: new languages were invented (esp in HPC) so programmers could easily use that hardware via something with a mental model closer to what it actually does. The default, though, was hand-coded assembly that got called from C or Fortran apps. Yes, HPC often used Fortran, since its model gave better performance than C’s on numerical applications even on hardware designed for C’s abstract machine. Even though it was easy on the hardware, the C model introduced too much uncertainty about programmers’ intent for compilers to optimize those routines.

For this reason, it’s better to just study hardware to learn hardware. Plus, the various languages either designed for max use of that hardware or that the hardware itself is designed for. C language is an option for the latter.

“it has no assumptions about the hardware”

It assumes the hardware will give people direct control over pointers and memory in ways that can break programs. Recent work tries to fix the damage that came from keeping the PDP-11 model all this time. There were also languages that handled them safely by default, using overflow or bounds checks, unless told otherwise. SPARK eliminated those checks for most of its code, with the compiler substituting pointers in where it’s safe to do so. It’s also harder in general to make C programs enforce POLA with hardware or OS mechanisms, versus a language that generates that for you or has true macros to hide the boilerplate.

“The C abstract machine is sufficiently abstract to implement it as a software-level interpreter.”

You can implement any piece of hardware as a software-level interpreter. It’s just slower. Simulation is also a standard part of hardware development. I don’t think whether it can be interpreted matters. Question is: how much does it match what people are doing with hardware vs just studying hardware, assembly for that hardware, or other languages designed for that hardware?
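To make that first point concrete, here’s a toy illustration (a made-up three-instruction register machine, nothing like a real ISA): simulating hardware in software is just an interpreter over instructions and machine state.

```python
# Simulating a made-up register machine in software. Real hardware
# simulators work on the same principle, just with vastly more detail
# (pipelines, caches, timing), and they run slower than the silicon.

def simulate(program, registers):
    pc = 0
    while pc < len(program):
        op, *args = program[pc]
        if op == "LOADI":        # LOADI dst, constant
            registers[args[0]] = args[1]
        elif op == "ADD":        # ADD dst, src1, src2
            registers[args[0]] = registers[args[1]] + registers[args[2]]
        elif op == "HALT":
            break
        pc += 1
    return registers

regs = simulate(
    [("LOADI", "r0", 5), ("LOADI", "r1", 7), ("ADD", "r2", "r0", "r1"), ("HALT",)],
    {},
)
print(regs["r2"])  # 12
```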

1. 3

I admit that the history of C and also history of implementations of C do give some insight into computers and how they’ve evolved into what we have now. I do agree that hardware, operating systems and the language have been all evolving at the same time and have made impact on each other. That’s not what I’m disagreeing with.

I don’t see a hint of proof that knowledge about the C programming language (as defined by its current standard) gives you any knowledge about any kind of hardware. In other words, I don’t believe you can learn anything practical about hardware just from learning C.

To extend what I’ve already said, the C abstract machine is sufficiently abstract to implement it as a software interpreter, and that matters, since it proves that C draws clear boundaries between expected behavior and implementation details, which include how a certain piece of hardware might behave. It does impose constraints on all compliant implementations, but that tells you nothing about what “runs under the hood” when you run things on your computer; an implementation might be a typical, bare-bones PC, or a simulated piece of hardware, or a human brain. So the fact that one can simulate hardware is not relevant to the fact that you still can’t draw practical assumptions about its behavior just from knowing C. The C abstract machine is neither hardware nor software.

Question is: how much does it match what people are doing with hardware vs just studying hardware, assembly for that hardware, or other languages designed for that hardware?

What people do with hardware is directly related to knowledge about that particular piece of hardware, the language implementation they’re using, and so on. That doesn’t prove that C helps you understand that or any other piece of hardware. For example, people do study assembly generated by their gcc running on Linux to think about what their Intel CPU will do, but that kind of knowledge doesn’t come from knowing C - it comes from observing and analyzing behavior of that particular implementation directly and behavior of that particular piece of hardware indirectly (since modern compilers have to have knowledge about it, to some extent). The most you can do is try and determine whether the generated code is in accordance with the chosen standard.

1. 1

In that case, it seems we mostly agree about its connection to learning hardware. Thanks for elaborating.

1. 2

I’ve heard a great deal of buzz and praise for this editor. I’ve got a couple decades’ experience with my current editor – is it good enough to warrant considering a switch?

1. 3

What do you dislike about it?

What are the things your editor needs to provide that you aren’t willing to compromise on?

1. 2

It probably isn’t, but it’s maybe worth playing around with, just to see how it compares. It’s definitely the best behaved Electron app I’ve ever seen. It doesn’t compete with the Emacs operating-system configurations, but it does compete with things like Textmate, Sublime, and the other smaller code editors. It has VI bindings (via a plugin) that are actually pretty good (and can use neovim under the hood!). I still don’t understand Microsoft’s motivation for writing this thing, but it’s nice that they dedicate a talented team to it.

It’s very much still a work in progress, but it’s definitely usable.

1. 3

Here’s the story of how it was created[1]. It’s a nice, technical interview. However, the most important thing about this editor is that it marked an interesting shift in Microsoft’s culture. It appears to be the single most widely used open source product originating from MS.

https://changelog.com/podcast/277

1. 1

Thanks for linking that show up.

2. 2

It’s worth a try. It’s pretty good. I went from vim to vscode mostly due to windows support issues. I often switch between operating systems, so having a portable editor matters.

1. 1

It’s a pretty decent editor, worth trying out. I’ve personally given up because it’s just too slow :| The only scenario in which I tolerate slowness is a heavy-weight IDE (e.g., the IntelliJ family). For simple editing I’d rather check out Sublime (it’s not gratis, but it’s pretty fast).

1. 1

It doesn’t have to be a hard switch. I, for example, switch between vim and vs-code depending on the language and task. And if there is some Java or Kotlin to code, then I will use IntelliJ IDEA, simply because it feels like the best tool for the job. I see the text editors I use more like tools in my toolbelt: you wouldn’t drive in a screw with a hammer, would you?

1. 1

I do a similar thing. I’ve found emacs unbearable for java (the best solution I’ve seen is eclim which literally runs eclipse in the background), so I use intellij for that.

For python, emacs isn’t quite as bad as it is with java, but I’ve found pycharm to be much better.

Emacs really wins out with pretty much anything else, especially C/++ and lisps.

1. 1

VS Code has a very nice python module (i.e. good autocomplete and debugger), the author of which has been hired by MS to work on it full time. Not quite PyCharm-level yet but worth checking out if you’re using Code for other stuff.

1. 5
1. 3

Also in response to the article, two different people wrote color-identifiers-mode and rainbow-identifiers-mode for emacs.

1. 2

There was a period of time when I was all about OCaml. I appreciate the purity of Caml and StandardML more though, for whatever reason, just from an aesthetics point (the object-orientedness of OCaml just seems shoehorned in to me).

The sad truth, though, is that in my limited time I have to focus on the stuff I need for work. The only languages I’m truly fluent in anymore are C, Python, SQL, Bourne shell, and…I guess that’s it. I can get around in Lua if I need to, but I haven’t written more than a dozen lines of code in another language in at least five years.

(That’s not to say I don’t love C, Python, SQL, and Bourne shell, because I do.)

I’ve been messing around with Prolog (the first language I ever had a crush on) again just for fun, but I’m worried I’m going to have to put it down because of the aforementioned time issue. Maybe I can start writing some projects at work in ML. :)

1. 7

SML is probably my favorite language. It’s compact enough that you can keep the whole language (and Basis libraries) in your head fairly easily (compared to, say, Haskell, which is a sprawling language). I find strict execution much easier to reason about than lazy, but the functional-by-default nature remains very appealing.

Basically, it’s in a good sweet spot of languages for me.

But, it’s also a dead language. There is a community, but it’s either largely disengaged (busy writing other languages for work), or students who have high engagement but short lifespans. There are a few libraries out there, and some are good but rarely/never updated, and some are not good and rarely/never updated.

I still think it’s a great language to learn, because (as lmmm says) being fluent in SML will make you a better programmer elsewhere. Just know that there aren’t many active resources out there to help you actually write projects and whatnot.

1. 2

Everything that you said, plus one thing: Standard ML, unlike Haskell or OCaml, realistically allows you to prove things about programs — actual programs, not informally described algorithms that programs allegedly implement. Moreover, this doesn’t need any fancy tools like automatic theorem provers or proof assistants — all you need is simple proof techniques that you learn in an undergraduate course in discrete mathematics and/or data structures and algorithms.
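To give a flavor of the kind of pencil-and-paper reasoning meant here (a standard textbook example, not taken from any particular source): with the usual SML-style definitions of length and append, plain structural induction suffices.

```
Claim: for all lists xs, ys:   length (xs @ ys) = length xs + length ys

Definitions:   length nil = 0            length (x :: xs) = 1 + length xs
               nil @ ys   = ys           (x :: xs) @ ys   = x :: (xs @ ys)

Proof, by structural induction on xs.

Base case (xs = nil):
  length (nil @ ys) = length ys = 0 + length ys = length nil + length ys

Inductive step (xs = x :: xs'), with IH  length (xs' @ ys) = length xs' + length ys:
  length ((x :: xs') @ ys)
    = length (x :: (xs' @ ys))          (def. of @)
    = 1 + length (xs' @ ys)             (def. of length)
    = 1 + (length xs' + length ys)      (IH)
    = length (x :: xs') + length ys     (def. of length)
```

Because the Definition pins down evaluation, this argument is about the actual SML functions, not an idealization of them.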

1. 3

Absolutely. I think the niche for languages with a formal specification is fairly small, but it is irreplaceable in that niche.

1. 1

Just out of curiosity, do you have any reading recommendations on formal proofs for ML programs?

1. 3

Let me be upfront: When I said “prove” in my previous comment, I didn’t mean “fully formally prove”. The sheer amount of tedious but unenlightening detail contained in a fully formal proof makes this approach prohibitively expensive without mechanical aid. Formal logic does not (and probably cannot) make a distinction between “key ideas” and “routine detail”, which is essential for writing proofs that are actually helpful to human beings to understand.

With that being said, I found Bob Harper’s notes very helpful to get started, especially Section IV, “Programming Techniques”. It is also important to read The Definition of Standard ML at some point to get an idea of the scope of the language’s design, because that tells you what you can or can’t prove about SML programs. For example, the Definition doesn’t mention concurrency except in an appendix with historical commentary. Consequently, to prove things about SML programs that use concurrency, you need a formalization of the specifics of the SML implementation you happen to be using (which, to the best of my knowledge, no existing SML implementation provides).

2. 3

OCaml is yet another mainstream-aiming language full of dirty compromises and even outright design mistakes:

• The types of strict lists, trees, etc. are not really inductive, due to OCaml’s permissiveness w.r.t. what can go on the right-hand side of a let rec definition.
• It has an annoying Common Lisp-like distinction between “shallow” and “deep” equality.
• Moreover, either kind of equality can be used to violate type abstraction.
• Mutation is hardwired into several different language constructs (records, objects), rather than provided as a single abstract data type as it well should be.
• Applicative functors with impure bodies are leaky abstractions.
1. 3

Many complaints about OCaml here are justified in a way (I use it in my day job), so I’ve run into a number of issues myself. It is a complex language, especially the module language.

the object-orientedness of OCaml just seems shoehorned in to me

I think that’s a commonly repeated myth but OCaml OOP is not really like Java. Objects are structural which gives it a quite interesting spin compared to traditional nominal systems, classes are more like templates for objects and the object system is in my opinion not more shoehorned than polymorphic variants (unless you consider those shoehorned as well).

1. 4

…OCaml…(I use it in my day job)

So how’s working at Jane Street? :)

Objects are structural which gives it a quite interesting spin compared to traditional nominal systems…

Oh no, I get that. It’s a matter of having object-oriented constructs at all. It’s like C++ which is procedural and object-oriented, and generic, and functional, and and and. I like my languages single-paradigm, dang it! (I know it’s a silly objection, but I’m sometimes too much of a purist.)

2. 1

I work full-time in Scala, and I credit Paulson with teaching many of the foundations that make me effective in that language. Indeed even when working in Python, my code was greatly improved by my ML experience.

1. 1

How is Scala? I feel like there would be a significant impedance mismatch between the Java standard libraries, with their heavy object-orientation, and Scala with its (from what I understand) functional style.

I think it would also bug me that the vast majority of the documentation for my language’s libraries would be written for another language (that is, I need to know how to use something in Scala, but the documentation is all Java).

1. 2

How is Scala?

It’s really nice. More expressive than Python, safer than anything else one could get a job writing.

I feel like there would be a significant impedance mismatch between the Java standard libraries, with their heavy object-orientation, and Scala with its (from what I understand) functional style.

There’s a mismatch but there are libraries at every point along the path, so it gives you a way to get gradually from A to B while remaining productive.

I think it would also bug me that the vast majority of the documentation for my language’s libraries would be written for another language (that is, I need to know how to use something in Scala, but the documentation is all Java).

Nowadays there are pure-Scala libraries for most things, you only occasionally have to fall back to the Java “FFI”. It made for a clever way to bootstrap the language, but is mostly unnecessary now.

1. 1

Very informative, thank you.

1. 4

I enjoy working in an office, as long as I’m actually in my own office (with a door that closes). Unfortunately, open floorplans are still the standard for most companies, and I just have a hard time with those. So remote work is a way for me to not have to deal with loud officemates and all the other sights, sounds, and smells that I find frustrating in an open floorplan space.

1. 1

Very handy for early in a project. I, for one, always forget about the logical replication possibilities, probably because I haven’t had occasion to use them yet.

1. 6

why would such a site have google analytics?

1. 1

People often include google analytics without really thinking about the privacy implications, just because publishing blind is so annoying I suppose. Is there a better alternative?

1. 2

Well, there’s Piwik. I find it quite nice, though I’ve heard Google Analytics is in a league of its own. Wouldn’t know since I don’t use it for these exact privacy concerns.

1. 1

You probably also punish yourself in Google search ranking by not using Google Analytics. Bummer.

1. 3

Anecdotally, this seems to be the case, based on what I’ve played with this on my own site.

Currently, if you search for “Benjamin Pollack” on Google, my blog is (usually, because Google) about third on the page. About two years ago, I noticed that it had suddenly and without any warning plummeted to almost the bottom of page one. Sometimes, it wasn’t even on page one, which was even worse. While I generally don’t like doing SEO, I didn’t really like not having my blog rank highly, either, and the sudden drop didn’t make much sense to me. So, I spent some time poking.

I knew I’d gotten some whining from Google about not looking good on mobile platforms and some other things, so I started there: gave the site a responsive design, turned on HTTPS, added a site map, improved favicon resolution, and some other stuff. But while those changes did help a bit on some other search engines, none of it really seemed to help much on Google. In frustration, I started looking through what I’d changed recently to see if I’d perhaps broken something that Google cared about.

Turned out, I did: while I’d used Mint in practice to track my site’s usage, I’d accidentally left Google Analytics on as well for quite some time. I’d caught it shortly before the rankings drop, and removed it from my site. On a hunch, I added Google Analytics back in, and…presto, back up to roughly my old position.

I don’t actually think this is malice. I think that Google absolutely factors in the traffic patterns they see when calculating search results. In the case of my blog, their being able to see people showing up there based on my name, and then staying on the site, probably helps, and likewise probably gave them insight they might otherwise lack that I tend to have a few key pages that get a lot of traffic.

So, yeah: unfortunately, I do think you punish yourself with Google by not using analytics. For some, that might be okay; for others, perhaps not.

1. 5

I don’t actually think this is malice. I think that Google absolutely factors in the traffic patterns they see when calculating search results.

Perhaps not active malice, but this is the exact sort of thing people mean when they say that algorithms encode values.

It may not be active malice, but it still has malicious effect, and it’s still incumbent upon Google to clarify, fix, and/or restate their values accordingly.

1. 1

I knew I’d gotten some whining from Google about not looking good on mobile platforms and some other things, so I started there: gave the site a responsive design, turned on HTTPS, added a site map, improved favicon resolution, and some other stuff.

in what form did you receive the “whining”? as someone with an irrational hatred of the web 2.0 “upgrades” that have been sweeping the web, making fonts huge, breaking sites under noscript or netsurf, etc., i have been wondering about the reasons for this. like is there some group of PR people going around making people feel bad about their “out-dated” websites, convincing them to use bootstrap?

would motherfuckingwebsite.com live up to google’s standards of “responsiveness”?

2. 3

For what it’s worth: When I worked on Google Analytics a few years ago, that was definitely not true. And I’d bet that it’s still not true and will never be true. Search ranking is heavily silo’d from the rest of the company’s data, both due to regulatory reasons and out of principle. Just getting the Search Console data linked into GA was a big ordeal.

Edit: Just did a quick search, here’s a more official statement from somebody more relevant: https://twitter.com/methode/status/598390635041673217, I’m pretty sure there were many other similar statements made by other people over the years too.

1. 1

thanks for that info.

1. 1

I understand if you can’t say anything but I’m wondering if there’s a different explanation for https://lobste.rs/s/3o3acu/decentralized_web#c_ltcs3n then?

1. 2

I don’t work there anymore, so there’s no way for me to know for sure.

If I had to guess I’d say it’s a similar deal to the dozens/hundreds of “I spoke about X in private and now I’m seeing ads for X, so my phone/car/alexa/dishwasher is spying on me” stories. We’re really good at attributing things incorrectly.

The comment you link already mentioned various things that happened which likely ruined the ranking: Unresponsive design, no HTTPS, whatever else was wrong with it. The thing is, it takes time for ranking to get updated and propagate. Even if everything was fixed yesterday and the site got crawled today, it can take weeks or months for relative ranking in a specific keyword to improve. It’s very hard to attribute an improvement to any specific thing—all you can do is do your best across the board over the long term.

Some other possible things that might have gone wrong which the comment didn’t already mention: Maybe Mint was doing something bad, like loading slowly or insecurely or something else. Maybe some high-value incoming links disappeared. Maybe Google rolled out one of their big algorithm changes and the site was affected by some quirk of it (it happens fairly regularly, lots of rants about it out there).

1. 1

Hmm, thanks. That makes sense; I appreciate the explanation!

2. [Comment removed by author]

1. 2

they got rid of that along with the serifs on their logo

2. 1

what is so annoying about publishing blind? i am publishing this comment blind and it doesn’t bother me.

isn’t it easier to do nothing, than to do something and set up google analytics?

1. 1

Eh, well, there’s actually up down vote buttons on your comment. So the tracking was already there for you. Likes and claps and shit… people want to see who’s seeing them.

1. 1

tracking is different from allowing voluntary participation.

1. 1

It’s been a while, but I used to swear by wmx. Simple, not terribly ugly, and with just enough features to do what I needed. The virtual desktop format (switch back and forth linearly, always have one empty at the end) ended up being the default mode for Gnome 3, if I understand correctly, and beat it by 15 years.

Don’t get me wrong, I shopped around quite a bit. Started with twm, spent a lot of time in blackbox, kept trying different wm’s, but wmx was the one that stuck.

1. 10

I love Rust and I know this is gonna get the whole “Python is boring. STFU” crowd down on me, but I’m honestly not sure that Rust’s level of abstraction is ideal for the vast majority of devops tasks. Sure there are plenty of performance intensive cases where Rust could really shine, but I think languages like Python and Go (which I have the same abstraction issue with FWIW but it’s at least a bit higher up the stack AFAICT) may retain the advantage for a while at least until a set of very solid libraries to perform common tasks become mature and stable.

1. 7

I definitely agree that often, working at a higher level of abstraction can be more useful. One of the things that I love about Rust is that it doesn’t have to be either/or; for example, the new Conduit tool uses Rust at its core, but with Go layered on top.

https://www.reddit.com/r/programming/comments/7hx3lk/the_rise_of_rust_in_devops/dqut2cl/ is an interesting thread developing where some people are talking about why they would or wouldn’t choose Rust here; I think there’s many viable answers!

1. 3

Surely there’s some middle ground between Rust’s “we’re going to use a language designed to minimize runtime costs for a task that is inherently IO-bound” and Python/Go’s “we’re going to basically throw out types”.

1. 2

What about something like mypy?
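For what it’s worth, a minimal sketch of what mypy buys you (the function name here is made up for illustration): the code runs as ordinary Python, but running `mypy` over the file rejects a badly-typed call before anything executes.

```python
# Ordinary Python with type annotations; mypy checks them statically
# without changing runtime behavior. `restart_service` is a made-up
# stand-in for a typical devops task.

def restart_service(name: str, retries: int = 3) -> bool:
    """Pretend to restart a service, reporting success."""
    for _attempt in range(retries):
        return True  # stand-in for a real health check
    return False     # reached only when retries == 0

ok = restart_service("nginx", retries=2)   # fine
# restart_service(42)  # mypy: Argument 1 has incompatible type "int"
```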

2. 2

Can you say more about in what ways you feel rust is too low in the stack compared to Python? One reason I like Rust a lot is I can easily make great higher level abstractions in my programs, and still retain the safety around types and borrowing. I’ve been bitten too many times with bad concurrency implemented in “simple devops scripts” to want to return to that world.

1. 5

I think if you’re doing concurrency then traditional very high level procedural languages like Ruby and Python are a very poor choice.

In concurrent applications, any of the abstraction complaints I might have with Rust fade away because you MUST think about things like memory allocation and structure in the concurrent problem space.

This is the danger in (my) speaking in generalities. In my 25 years of doing infrastructure work, I have yet to encounter a problem that truly demands a solution involving concurrency. I recognize that this is merely anecdotal evidence, but be that as it may, I prefer to work in languages that handle the details of memory allocation for me, because in every use case I’ve thus far encountered, that level of performance is Good Enough.

That said, a couple of examples of aspects of Rust I would feel are in the way for the kind of work I mostly need to do:

• Pointers and dereferencing / destructuring
• The ownership and borrowing memory model

I am not a Rust expert, but I read and worked through the entire Rust book 1.0 a couple of years back, which left me with a deep abiding respect for the power of Rust. I just don’t feel that it’s suited to the kinds of work I do on a day to day basis.

1. 1

In my 25 years of doing infrastructure work, I have yet to encounter a problem that truly demands a solution involving concurrency.

I have, though I’ve generally still used python to handle it (the multiprocessing standard library is super-handy). My use cases have been simple, though. All of them can boil down to: process a big list of things individually, but in parallel, and make sure the result gets logged. No need for memory sharing, just use a queue (or even split the list into X input files for X processes).

That said, I don’t think rust would be a bad language to handle those problems. In fact, python might have been a terrible one. Just wanted to say that even workloads that require concurrency often end up very simple.
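The pattern described above (independent items, no shared memory, results funneled back through a queue) is small enough to sketch. A rough version, where `handle_item` is a made-up stand-in for whatever per-item work the real job does:

```python
# Fan a list of independent work items out to worker processes and
# collect results through a queue -- no shared memory needed.
import multiprocessing as mp

def handle_item(item):
    return item * item  # stand-in for the real per-item work

def worker(in_q, out_q):
    # Each worker pulls items until it sees the None sentinel.
    for item in iter(in_q.get, None):
        out_q.put((item, handle_item(item)))

def process_all(items, n_procs=4):
    items = list(items)
    in_q, out_q = mp.Queue(), mp.Queue()
    procs = [mp.Process(target=worker, args=(in_q, out_q))
             for _ in range(n_procs)]
    for p in procs:
        p.start()
    for item in items:
        in_q.put(item)
    for _ in procs:          # one shutdown sentinel per worker
        in_q.put(None)
    results = dict(out_q.get() for _ in items)
    for p in procs:
        p.join()
    return results
```

In a real job you’d log each `(item, result)` pair as it comes off the queue; splitting the input into one file per process, as mentioned above, works just as well when even a queue is overkill.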

1. 3

I don’t understand the Rust advocacy that wants everything to get rewritten in Rust. Technical issues aside, nobody in their right mind is going to toss out millions of lines of debugged code to rewrite in a new language simply because of hand wavy claims that it’s theoretically safer.

It’s most bizarre that they’re wanting OpenBSD to do so, because the OpenBSD devs are known for being some of the best C developers around. They’re already expert at avoiding and fixing and working around the issues Rust is supposed to solve, so there’s little effort on their part to keep using C safely, but a massive effort to switch to Rust.

1. 6

Rust was born for rewriting a major system, and its major selling point is “C, but safe” (I disagree that it’s handwavy, for what it’s worth, but the selling point is the important bit, not whether it’s valid). It seems pretty natural to me that rust developers see big C code and feel an urge to rewrite it in rust.

And it’s not a worthless effort, either. Rust rewrites have exposed bugs in practice, and frequently result in patches to the older C code.

As for why OpenBSD, I don’t think that there’s any organizational effort to target it. It seems more likely to me that developers who are interested in safety might be drawn to both rust and OpenBSD for their reputations for safety. From that standpoint, it doesn’t seem bizarre at all.

1. 2

Rust was born for rewriting a major system, and its major selling point is “C, but safe” (I disagree that it’s handwavy, for what it’s worth, but the selling point is the important bit, not whether it’s valid). It seems pretty natural to me that rust developers see big C code and feel an urge to rewrite it in rust.

If Rust developers want to rewrite stuff in Rust, then they should just go do it. What bothers me is that they’re nagging everybody else to do it instead. Don’t waste everybody’s time trying to convince us. Just go write the code and show us how much better it is. If it’s really better then people will notice and switch to Rust on their own.

I also take issue with the attitude that any project written in C or C++ is automatically full of memory bugs and security problems. I work on a large-ish C++ project, and have first hand experience that it’s not the case. To my knowledge the product we ship has never had a production bug due to a memory access violation, buffer overflow, or anything like that.

Any C or C++ developer worth hiring is very aware of those problems, and knows to avoid them. And we have language tools like references and shared pointers, and analysis tools like Coverity and Valgrind that catch most problems before we ship. Whether it’s the compiler or another build tool finding the bugs doesn’t matter as long as the bugs are found.

I’m not naive enough to say that we don’t have any memory bugs hiding somewhere, but it’s hardly worth the trouble of teaching 120 people a new language and rewriting a half million lines of code.

And it’s not a worthless effort, either. Rust rewrites have exposed bugs in practice, and frequently result in patches to the older C code.

But rewriting in any language will have that effect. A person can’t rewrite something without understanding what it’s doing, and the process of understanding and scrutinizing it is sure to find bugs. No doubt a C++ rewrite of a Rust program would find bugs in the Rust program, too.

1. 4

What bothers me is that they’re nagging everybody else to do it instead.

From the earlier mailing list post:

what if someone actually bothered? Under what conditions would you consider replacing one of the current C implementations with an implementation written in another, “safer” language?

That doesn’t sound like nagging anyone else to do work, it sounds like a reasonable question: If I did the work, would that actually be enough? It strikes me as perfectly acceptable to ask a project whether contributions would be accepted before actually doing significant development.

Theo’s answer was a resounding “no”.

Maybe you’ve had other, personal experiences with people asking you to rewrite things in a new language, but that is clearly not the case here.

1. 0

I haven’t personally been asked to rewrite anything, but I see it all the time on sites like this one.

Theo’s answer is only a resounding “no” when it’s assumed the person asking the question isn’t going to do the work that needs to be done to include the Rust code. In other words, they’re not going to special case a handful of utilities on one or two platforms, and before Rust can be used on the base OpenBSD system it has to follow the rules and support all of the platforms OpenBSD supports.

1. 4

I see it all the time on sites like this one

I see this claim all the time. I see people unironically requesting a rewrite in Rust… actually, I don’t think I’ve ever seen that.

If you want your claim to be taken seriously, you must at the very least link some examples.

2. 4

I don’t understand the Rust advocacy that wants everything to get rewritten in Rust. […] It’s most bizarre that they’re wanting OpenBSD to do so

I wouldn’t understand it either. Sounds like wishful thinking at best, and downright rude at worst. Can you show me the people that are telling the OpenBSD folks that they want them to rewrite everything in Rust? I’d be happy to send them a polite request that they knock it off, assuming I know how to reach them.

I did read the OP and I didn’t see any such advocacy, but perhaps you’re aware of other posts on the OpenBSD mailing list advocating that they rewrite everything in Rust?

1. 6

I think there’s a between the lines implication which causes people to perceive things differently. I mean, if somebody wants ripgrep, for example, on openbsd, it’s only a pkg_add ripgrep away. So what is meant when somebody starts talking about rust on an openbsd list? A very strange way to ask how do I install ripgrep.

Put another way, it’d be like I visit your house, where you have the walls covered in Monet paintings. And then I start explaining how Picasso is so much better than Monet. You get upset, and then I say “oh, I’m just saying in general, why would you think I’m referring to your decor?”

1. 6

Sure… But, I mean, the original sender didn’t even mention Rust. They talked about memory safe languages and brought up Haskell. I don’t know if they were trying to be coy (and maybe they were, based on the video they linked), but ya know, as much as folks are sick of “Rust advocacy,” I’m personally sick of people overblowing it. I’m obviously deeply entrenched in Rust things so I probably take it a bit more personally, but we do try very hard to be practical about these things and understand that memory safety isn’t the be-all-end-all of programming (I agree with many of your counterpoints to nickpsecurity, for example). But that is never ever going to stop people from spouting stupid stuff like “using a memory unsafe language in this day and age is just irresponsible.” I can’t stand that crap, it’s the same kind of shaming people use against lots of things, e.g., see pretty much anything that the PL people say about Go.

I get the other perspective here. I maintain lots of projects and I get my fair share of drive by questions that don’t really make any sense or unwittingly imply that the amount of work or maintenance involved shouldn’t be part of the calculus. And yeah, they are super frustrating. But it’s just what people do.

1. 4

Ah, good point. On the bright side, if people hear safe and immediately think of rust, that might be considered a sign of success.

2. 4

nobody in their right mind is going to toss out millions of lines of debugged code to rewrite in a new language simply because of hand wavy claims that it’s theoretically safer

Mozilla is doing exactly that with Firefox (36MLoC according to OpenHub). You could say they went even further and invented a new language first.

1. 3

Nice find! I wish I knew of more projects using Standard ML. https://en.wikipedia.org/wiki/Standard_ML#Major_projects_using_SML doesn’t have much to say.

1. 3

One advantage in high-security is different projects have developed many pieces of an overall vision of SML that would make it a fierce contender for non-high-performance apps at EAL5 equivalent. There’s its base properties of safety, a concurrent version, a version for information flow (esp covert channels), maybe contracts in one, a QuickCheck port, embeddings in LISP’s for RAD style, an optimizing compiler for untrusted code, a verifying compiler for trusted code, integration with provers, and some neat stuff I can’t recall but know I read. Blurry.

These pieces could be combined into something that had the advantages of each plus knobs to turn things on or off depending on users’ needs. I’ve always said SML or Ocaml with an Ocaml-SML translator would be a great standard for tool development in any project supporting efforts where we prefer every component to be trustworthy. The empirical evidence leans in favor of this, with all the compilers and other analysis/transform tooling written fairly reliably in ML’s. Highest examples outside extracted code like CompCert would be things like Esterel’s code generator, which they said Ocaml aided a lot on source-to-object verification plus general correctness.

1. 2

There’s the Standard ML github org, though many of those projects seem stale. Check the project page, too, for more links (also sometimes stale). On the one hand, some of those links are stale because SML is long stable and the software is functional and complete. On the other hand, some of those links are stale because SML is stale and the community is small and transient (lots of folks use SML during college and then drop off after those few years).

1. 6

I sometimes experience anxiety episodes when I see my own score

We cannot change the world to better suit your mood. Write a user script and alter the site to your liking in your own browser.

1. 15

We cannot change the world to better suit your mood

I mean, we absolutely can. Because “the world” in this case is a bunch of code that’s open source. And that code was already written with some moods in mind. For example:

• It requires an explanation of downvotes, in order to reduce ill-considered downvote behavior
• It makes public all moderation logs, in order to increase transparency and trust
• It supports an interface entirely through email, in order to emulate a very specific user experience from the distant past

It’s not at all clear to me why we can’t add this to the list:

• It hides user karma score from the home screen, in order to reduce upvote-chasing and competitive behavior
1. 0

It requires an explanation of downvotes, in order to reduce ill-considered downvote behavior

That obviously did not work. I’m constantly downvoted as “troll” even when posting purely factual comments. “troll” is the new “fuck you”. Oh, there’s a new entry on my top level comment here: “-1 me-too”. How does that make any sense?

When you force people to choose from a list of justifications for downvoting, they’ll either choose an insulting one or a random one. Anything but give up downvoting because they see the error of their ways. It’s basic human nature.

1. 3

So you’re saying that sometimes it’s worth changing the way we do things because what we do sometimes has unintended effects?

Well, then I’m glad you agree that we should consider hiding the user’s karma score from the home screen.

2. 7

I personally don’t mind them, but perhaps a profile preference could be added, that hides karma for those who wish to avoid seeing it?

1. 6

That’s a great way of driving people with anxieties away so they leave the site. And in the end we’re left with just the bunch of users who talk and walk like stefantalpalaru…

1. 4

I don’t care either way, but I think the anxiety issue needs a different solution. If the karma numbers concern somebody, then lots of other things will too.

1. 0

And in the end we’re left with just the bunch of users who talk and walk like stefantalpalaru…

You say it like it’s a bad thing :-)

2. 2

change the world to better suit your mood

I think the lyric is “change my life to better suit your mood” (Santana’s “Smooth”)

self-amusement aside, I like the karma count and don’t see it as a distraction or a significant motivator.

1. 4

For those of you who, like me, were reading along and wondering “what does he mean by ‘now’?”, I tracked down the author’s CV, and this paper dates back to 2000. I found that context helpful.

1. 4

I don’t understand the policy/practice of not putting publication dates on papers. It is such a critical piece of information - why do so many authors do this?

1. 2

I agree, i have been frustrated by this multiple times, so many authors reference time with ‘recently’, ‘now’ or ‘soon’ and then don’t put a date on the paper.

1. 1

I don’t know for certain, but I suspect it’s because they upload the version they submitted for publication, so didn’t know the publication date.

1. 1

Ok, but in those circumstances why not put the date when you finished the paper?

1. 2

In every school I went to, I was straight up required to do that or the paper didn’t count. I’ve collected so many CompSci papers over the years. The problem is huge. I often had to Google lots of the ones I found in various places to find the author on a university website or via the paywalled organizations that list that stuff. Sometimes it was even harder than that. They need to put the darned dates in the papers.

2. 1

Oops. I just noticed I forgot to put the date on it this time. My bad. Since then, there have been a few developments including A2 I linked to below, Blackbox Component Pascal for one of the latest variants, and Astrobe for embedded applications.

http://blackboxframework.org/

http://astrobe.com/Oberon.htm

1. 4

I’m a bit surprised at all of the people rushing to defend Rick. Should the situation have never gotten that far? Sure. Does that absolve Rick from being very bad at his job? Not at all. Especially if you read some of the followups (e.g. Why Rick couldn’t come back from the brink), he was afforded ample opportunity to not be an asshole martyr.

Are we so attached to the solo hero coder idea that we can’t help but defend it even when it’s as toxic as Rick is?

1. 7

I wouldn’t call the comments here as defending Rick as much as trying to be more critical of the author, who chose to exemplify his firing of an employee as an example of good management. I mean, it’s in the freaking inflammatory title: “We fired our top talent. Best decision we ever made.” Would you expect that to be the statement of a well-meaning, constantly-reflecting leader?

1. [Comment from banned user removed]

1. 1

The concept of “solo hero coder” is really attempting to diminish those people as unique individuals and mocking their talents and skills, which suggests jealousy or inferiority

Would you prefer 10x/100x coder? Same concept. But I like how you immediately jumped to assuming jealousy on my part. That’s a remarkable level of defensiveness.

Levy’s book was written over three decades ago, about an industry that had only existed for about three decades before that. It has very little insight into the work of today, where, unless you’re producing disposable code, you need to be able to work with other people to make software.

1. 2

But I like how you immediately jumped to assuming jealousy on my part. That’s a remarkable level of defensiveness.

That’s not a fair reading–the concept of “solo hero coder” is very much an archetype being spread and deconstructed in programming and business culture today: you don’t need to infer accusation of jealousy. As @pushcx said, have charity towards other posters.

I think @simba was pointing out that (rightly) there is very much a movement to discredit and suppress the “genius solo coder”. Whether that is good or bad is a different matter altogether.

1. [Comment from banned user removed]

1. 5

I’m guessing the last time you read it was also 3 decades ago.

Please assume the best of fellow commenters.

1. 1

I’m guessing the last time you read it was also 3 decades ago.

I’ve been in the industry a long time, but not that long. I read it in the mid-90s. My recollection is that the book largely consists of stories of individuals in the 1960s and 1970s creating or hacking impressive things, tied together thematically under the “hacker ethos”. When it was published, many pieces of commercial software (perhaps most, though I don’t know how large the non-microcomputer market was) were still written by one or a small number of programmers.

That’s simply no longer the case. The vast majority of commercial software – heck, the vast majority of software that gets distributed at all – is written by a team.

But that isn’t the case, many of the most important and useful softwares were written entirely by one person

I’m curious to hear your (modern) examples.

When a musician is extremely talented, nobody uses phrases like “solo hero musician” to describe them. They just pay them millions of dollars for their work

That might not be an accurate portrait of the music industry. Talent does not equate to financial success, nor vice versa. That is even more true in the visual arts. It’s very hard to make a living as an artist, let alone make a fortune.

Extremely talented developers deserve the same level of recognition and reward for their contributions.

I mean, they do. Bill Gates, Linus Torvalds, John Carmack, Notch, are all examples of programmers who wrote successful software, and are now worth a great deal of money.

1. 2

I’m curious to hear your (modern) examples.

minecraft, redis, Dwarf Fortress, Ethereum, Buckmaster over at Craigslist, Whitney’s K, for a start.

1. [Comment from banned user removed]

1. [Comment from banned user removed]

1. 1

Fabrice Bellard made ffmpeg and qemu (as a modern example).

1. [Comment removed by author]

1. 16
1. 5

If you look over his other blog posts he mentions forcing his team to pull 7-day weeks with 12-hour days FOR EIGHT MONTHS.

I looked briefly but didn’t find that. Can you point me where you saw that?

In this article, the author mentions that Rick was working like that. That’s the only 12/7 reference I’ve found, though:

Rick was churning out code faster than ever. He was working seven-day weeks, twelve hours a day.

1. 12

Can you point me where you saw that?

It took eight months of seven-day weeks and twelve-hour days to complete our last legacy system overhaul.

1. 2

Thanks!

1. 11

If you read in between the lines, it appears that management was complacent to lay problems at Rick’s doorstep, and didn’t care that Rick and/or the team didn’t take time to document the problem and/or resolution.

…..

Instead of tackling the root cause of the issue (hey man, whats eating you?), they opted for the quick and easy fix (Hey Rick, GTFO!). Par for the course, as far as I can tell.

If you read actual text, you’d see that this was something the company already thought of:

I agree that the situation that came about was also his manager’s fault. He never should have been allowed to take on so much. If it gives comfort to anyone else reading this, the manager went first because ultimately management bears responsibility, always.

They then followed up with:

Rick rejected months of overtures by leadership. He refused to take time off or allow any work to be delegated. He also repeatedly rejected attempts to introduce free open source frameworks to replace hard-to-maintain bespoke tools.

As I mention in a comment on the original post, I’m surprised at how many people are kneejerk defending Rick. In this case, I’m embarrassed for this poster, who not only kneejerk defended him but claimed additional insight into the story, all while ignoring the wealth of info provided by the original author.

Could he have provided this info in the original post? Sure. Why should he have to? What is so special about this particular “we fired a toxic team member” story that everyone is instantly certain they did it wrong? And unwilling to do even the least bit of additional reading about it?

Why does this story of Rick prompt such irrational, emotional responses?

1. 21

Why does this story of Rick prompt such irrational, emotional responses?

Explaining why someone was terminated within a company is a really delicate task. Doing so on the internet requires even more tact.

Comparing the terminated employee with a narcissistic, nihilistic, and downright crazy cartoon character doesn’t demonstrate much respect for the terminated employee or the seriousness of the situation. I think that’s the main reason the original article left a bad taste in my mouth.

1. 5

Thanks @davidholman, it’s bizarre to me that someone could think the comparison or even the title of the original blog post are any acceptable way for a manager to discuss other colleagues.

2. 8

Hey thanks for the reply, I did apparently miss something in the original - likely as it was hidden underneath the blanket of scapegoating. The “actual text” link from you is a completely different article however, that I had not yet seen.

To answer the question you pose at the bottom: it’s because many of us have been there. Either directly involved or on the sidelines. We’ve seen the personalities and the egos and the mismanagement. It’s a difficult subject. However I wouldn’t call the responses “irrational”. Emotional, yes, but those empathetic enough will relive their own personal experiences and react. I worked for a toxic company for several years, and saw some bad shit. I saw crazy nepotistic owners oversell the world and then fire those that they used after burning them out to the core. Those who survive take away an insight that we shouldn’t need. Ask me how many times a day I get to say “no” to some absurd request now :)

1. 3

The “actual text” link from you is a completely different article however

It’s a comment on the original article. Medium treats it as an additional document, but it’s eminently findable on the original article page.

However I wouldn’t call the responses “irrational”.

How is it rational? A rational response to “somebody I don’t know got fired” might be something like “did he deserve to be fired? I’ll look into that”, or “something sounds fishy about this story. If I feel the need to post my own essay response, it will be asking those questions and examining different ways they could be answered”.

Not “I’m now going to post a kneejerk rant against imagined management problems, because Rick deserved better!”. That seems textbook irrational to me.

those empathetic enough will relive their own personal experiences and react.

I did. I’ve been burnt by Ricks before. I suspect I’ll be burnt by Ricks again. My response is similar to the original article author’s: fix systemic problems where possible, train toxic people when possible, but fire toxic people who insist on remaining toxic.

2. 8

Could he have provided this info in the original post? Sure. Why should he have to? What is so special about this particular “we fired a toxic team member” story that everyone is instantly certain they did it wrong? And unwilling to do even the least bit of additional reading about it?

The guy who wrote the original article, which is 99% scapegoating “Rick”, is a manager who seriously thinks literally months of 12-hour days 7 days a week is a good idea: https://medium.freecodecamp.org/our-team-broke-up-with-instant-legacy-releases-and-you-can-too-d129d7ae96bb :

It took eight months of seven-day weeks and twelve-hour days to complete our last legacy system overhaul.

No wonder “Rick” burnt out.

1. [Comment removed by author]

1. 9

Right, and they fired the manager.

Then they tried, for a year, to coax Rick into a different, better, set of work behaviors, and he resisted the entire time. Then he was fired. That part all seems reasonable to me.

1. 5

And then instead of reasonably moving on from the monster they ultimately created, they publicly shat on the guy, nicknaming him after a cartoon character that embodies every character flaw I can think of. That seems to be the source of contention, at least for me.

2. 6

The title of the story is about firing Rick, and being proud of it, not “we fucked up bad and we unfortunately had to fire someone”. The content of the article is 99% about how Rick was to blame for everything. I can’t even find the link you gave off the front page; I assume it’s nested somewhere in the content. So your claim that you just have to read the actual text and everyone is freaking out over nothing doesn’t jibe with reality:

The author, as management, does not take responsibility in the original post.

1. 2

Ah, the TI99/4A. My almost first computer. Turns out my mom had bought me one (this was no mean feat, as we did not have a lot of money) and then I read somewhere that the architecture was closed and that this platform was doomed to failure, so I decided I wanted an Atari instead. I never knew this until years later, but she returned the TI and wound up with a metric ton of S&H Green Stamps, and we pitched in and got me my Atari 400 (with the membrane keyboard and Attract mode :)

I still think I made the right choice, but the TI was still a great machine!

1. 2

Fun fact, the TI 99/4 and 99/4A both had 16-bit CPUs, making them the first 16-bit home computers around by several years.

The architecture was terrible, though.

1. 2

I don’t really have any sense of the architecture at all. I understand the Atari 8 bit innards pretty well, at least from a functional perspective.

Looking at this I think I made the right choice. The graphics specs alone weren’t amazing even for the time.

Although, looking at the VDP page, the sprite architecture was nicer than Atari’s. In the Atari universe, if you wanted to move your sprite (player / missile) horizontally, you could just change a byte value in a memory address, but if you wanted vertical movement, you had to actually move the object’s bytes through video memory.

So, score one for TI I guess :)
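A toy model (in Python, with invented sizes and no real hardware addresses) of the asymmetry described above: horizontal position is a single register write, while vertical movement means physically shifting the player’s byte column through memory.

```python
# Model the Atari player/missile setup as described: the player is a
# 1-byte-wide column of bitmap bytes. Horizontal position is one
# "register" value; vertical movement shifts bytes within the column.
# All sizes are invented for illustration.
PLAYER_HEIGHT = 8
column = bytearray(128)                  # the player's memory column
column[0:PLAYER_HEIGHT] = b"\xff" * PLAYER_HEIGHT  # sprite at the top

hpos = 48        # horizontal move: a single register write

def move_down(col, rows):
    """Vertical move: shift the whole column's bytes downward."""
    col[:] = bytes(rows) + col[:-rows]

move_down(column, 3)  # sprite now occupies rows 3..10
```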

1. 4

Yeah, that diagram brings me back.

I guess I should amend myself: “terrible” isn’t very descriptive. It was vastly more complex than any other home computer out there. And it was saddled with some decisions that are just hard to justify.

For example, the whole GPL thing. They have a 16-bit CPU, so you’d think you could write 16-bit assembly for it, and it would be cool, right? Nope, they have an 8-bit assembly called “GPL” which runs through an interpreter in the ROM. Your game cartridges were written in GPL, so they sometimes exhibited … a leisurely pace for the hardware spec. Your TI-BASIC programs were run through an interpreter which was itself written in GPL, so they were also not very fast.

Then you had the RAM. You could have plenty (for the time), but it came with some caveats. First, most of your CPU registers were actually stored in RAM. Second, the CPU could only access 256 bytes of RAM directly. Third, to get to the rest of the RAM, you had to go through an 8-bit bus, accessed through a 16-to-8 multiplexer. Fourth, you could, instead, use video RAM, which was on the CPU-side of the big multiplexer, but that was also 8-bit, and had to go through its own multiplexing.

So the RAM was slow, and all the programs were slow, and all of it for seemingly artificial reasons. But the hardware was all pretty high-grade – the sound system was pretty impressive, the display was impressive, the expansion system was simple and usable (if not entirely well-thought-out), and the CPU was far more powerful and faster than anything else on the market (even if you couldn’t really use it much).

1. 2

Wow, those are some… “fascinating” architectural decisions. What do you think may have influenced them? Time to market? Ease of development?

(BTW this thread is a perfect example of why I love Lobste.rs and why you don’t get this kind of content anywhere else. Somebody posts about an oddball topic, somebody knows something, and great discussion and information sharing ensues!)

1. 2

The explanation I always heard, though it may be apocryphal, is that they were planning on using a new 8-bit chip they were developing, but for some reason that didn’t pan out, so they decided to wedge in an existing 16-bit chip instead.

If that’s true, it doesn’t explain all their odd decisions, but it does explain quite a bit.

1. 1

That all sounds fine, but there are definitely features missing (or at least not mentioned here) which I look for in a lightweight markup language. Those include:

• Footnotes/endnotes/sidenotes (I think org-mode actually supports at least one of these, though it’s not mentioned in the article)
• Embedded images/other media
• Embedded other markup - math markup is very useful (to me, at least), and I know some people have been keen on embedding graph diagrams (e.g. graphviz). This sort of feature usually translates into the ability to use plugins.

Of course, the more of those features you support, the less “lightweight” the markup ends up being. But that doesn’t make the bits I need any less necessary.

1. 2

I’m a happy reStructuredText user.

Some may complain about backticks, but it gets everything done.

Markdown feels like a simplified version of it, and this org-mode contraption like a weird NIH/CADT take on it.

But the world being a mountain of shit, RST requires page-breaks to be embedded separately for each output type. I hope I’m wrong on this, but I don’t think I am.

1. 1

It’s easy to embed latex for math and graphviz for pictures in org-mode, along with a pile of other plugins. One cool feature is embedding your programming language of choice and having a following block show results for that code.

1. 2

It’s easy to embed latex for math

And not just LaTeX math, also LaTeX environments. And if you use GUI Emacs, you can preview the equations and LaTeX environments inline in Emacs with C-c C-x C-l. E.g., here is some inline TikZ in my research notes, where the TikZ fragment is rendered and previewed in Emacs:

https://www.dropbox.com/s/t18zqabwg14bl2n/emacs-latex-environment.png?dl=0

When exporting to LaTeX, the environment is copied as-is. For HTML exports, I have set org-mode to use dvisvgm. So, every LaTeX environment/equation is saved as SVG and embedded in the resulting HTML (you can also use MathJax, but it obviously doesn’t render any non-math LaTeX).
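For anyone wanting to replicate that setup, a minimal Emacs configuration sketch might look like this (the variable names are from recent org-mode versions; check them against yours):

```
;; Render LaTeX fragments/environments as SVG, both for inline
;; previews (C-c C-x C-l) and for HTML export.
;; Sketch only -- verify these variables against your org version.
(setq org-preview-latex-default-process 'dvisvgm) ; inline previews via dvisvgm
(setq org-html-with-latex 'dvisvgm)               ; HTML export embeds SVG equations
```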

One cool feature is embedding your programming language of choice and having a following block show results for that code.

And the result handling is really powerful. For example, you can let org-mode generate an org table from the program output. Or you can let the fragment generate an image and include the result directly in the org mode file. This is really convenient to generate and embed R/matplotlib/gnuplot graphs. You can then decide whether the code, the result, or both should be exported (to LaTeX/HTML/… output).
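As a concrete sketch, here is roughly what such a source block and its generated table look like in an org file (the exact `#+RESULTS:` formatting can differ slightly between org versions):

```
#+begin_src python :results value table :exports both
# Return a list of lists; org-babel renders it as an org table.
return [[x, x * x] for x in range(4)]
#+end_src

#+RESULTS:
| 0 | 0 |
| 1 | 1 |
| 2 | 4 |
| 3 | 9 |
```

With `:exports both`, both the code and the table end up in the LaTeX/HTML export; `:exports results` would export only the table.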