
  2. 2

    I’m totally a believer in “small computing” and distributed decentralized systems.

    But there are some weird category errors in this essay, mixing up very high-level and very low-level designs. Or else the point it’s making is obscure enough that I’ve missed it.

    Access protection in classes isn’t a cargo-cult design reflecting bureaucratic hierarchical organizations: it’s to support encapsulation and abstraction. Public items are the API, private ones the implementation, and privacy provides guarantees that let you reason about making safe changes to the implementation. (Private members are also a zero-cost abstraction, while intercepting and checking messages is expensive.) And this isn’t about “corporate” languages like Java: in Smalltalk all instance variables are private, as part of the principle that objects communicate only by message passing.
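
    To make the public/private split concrete, here is a minimal sketch in Lua (chosen because the thread discusses metatables below): the closure variable is the implementation, the returned functions are the API. The `make_account` name and its fields are illustrative, not from the essay.

    ```lua
    -- The local `balance` is the private implementation; only the
    -- functions returned in the table (the public API) can reach it.
    local function make_account(opening)
      local balance = opening
      return {
        deposit = function(n) balance = balance + n end,
        total   = function() return balance end,
      }
    end

    local acct = make_account(100)
    acct.deposit(50)
    print(acct.total())   --> 150
    print(acct.balance)   --> nil: the implementation detail is unreachable
    ```

    Because the guarantee is structural, the implementation can change freely without breaking callers.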

    I also don’t see why small computing shouldn’t have security features like ASLR or code signing. Where does your device get application programs and OS updates from? How do you authenticate them? If there’s a vulnerability in the software and an attacker can execute malicious code, how do you mitigate that code’s ability to make system calls, access or modify persistent data, or install malware?
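
    For the last question, one concrete mitigation, sketched in Lua for consistency with the rest of the thread and assuming Lua 5.2+: load untrusted code with a restricted environment, so that `os`, `io`, and `require` are simply absent rather than intercepted.

    ```lua
    -- Explicitly granted capabilities; everything else (os, io,
    -- require, load, dofile) is unreachable from inside the sandbox.
    local sandbox_env = { print = print, math = math, string = string }

    local untrusted = [[
      print(math.sqrt(2))        -- allowed
      os.execute("echo pwned")   -- fails: os is nil in this environment
    ]]

    local chunk, err = load(untrusted, "untrusted", "t", sandbox_env)
    if not chunk then
      print("did not compile:", err)
    else
      local ok, run_err = pcall(chunk)
      if not ok then print("sandboxed call failed:", run_err) end
    end
    ```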

    It’s possible I’ve completely missed the point … if so, I’m interested to learn what it is, because as I said this is an area I’m really interested in.

    1. 1

      > Public items are the API, private ones the implementation

      This would be fantastic if there were generally, in actually-existing code, a real distinction aside from the number of characters in the identifier name. Like I said in the essay, the appropriate way to handle this is with metatables if you really want to apply guarantees about data validity. If you want to support real encapsulation, then no outside class should access your member data at all & you ought to only support methods that are meaningful on their own (like Smalltalk).
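
      A minimal sketch of that metatable approach (the `make_point` constructor and its numeric fields are hypothetical examples): an empty proxy table routes every read and write through the metatable, which enforces data validity rather than privacy.

      ```lua
      -- The proxy stays empty, so __index and __newindex fire on every
      -- access; validity is checked on write instead of hiding the data.
      local function make_point(x, y)
        local data = { x = x, y = y }
        return setmetatable({}, {
          __index = data,   -- reads pass through to the backing store
          __newindex = function(_, key, value)
            assert(data[key] ~= nil, "no such field: " .. tostring(key))
            assert(type(value) == "number", key .. " must be a number")
            data[key] = value
          end,
        })
      end

      local p = make_point(1, 2)
      p.x = 10           -- fine: validated, then stored
      -- p.x = "ten"     -- error: x must be a number
      -- p.z = 3         -- error: no such field: z
      ```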

      But, ultimately, creating language features to protect programmers from each other and themselves, and making them mandatory or awkward to circumvent, is part of the big-computing mentality. It’s one thing to supply a foolproof way to do particular things, and it’s another to assume your users are fools & hand them safety scissors. The latter absolutely makes sense in a corporate setting, where the potential reach of trivial bugs is much greater than any one person, but it is not appropriate for a small-computing context.

      > Where does your device get application programs and OS updates from? How do you authenticate them?

      In a small-computing context, application programs ought to be understandable to (and largely written by) non-technical end users.

      Signed code produces a legitimacy hierarchy: “real programmers” are knighted by the crown and given the sacred responsibility of the trusted key, and everybody else’s code is second-class. That’s fine in a big-computing context, where the scale (and therefore the liability) is so big that it’s worth making regular users into oppressed subjects in order to avert disaster. But, the idea of small computing is that this ought to be a rare emergency situation, and the norm ought to be user-centric (and user-driven) design and development.

      > If there’s a vulnerability in the software and an attacker can execute malicious code, how do you mitigate that code’s ability to make system calls, access or modify persistent data, or install malware?

      All of the things that ‘malware’ does are things that the owner of the device might also legitimately want to do. So, you can’t define ‘malware’ or ‘malicious code’ or ‘vulnerability’ based on behavior. You can only define it with respect to the will of the owner of the device.

      In a big-computing context, the owner of the device is a corporation employing developers to perform largely conventional programming techniques, and it’s justifiable to blacklist programming techniques on the grounds of sloppiness or similarity to things that folks without official authorization also do. In a small-computing context, that’s not a good excuse: non-professional programmers sometimes write very clever and hacky code because the conventional ways of doing things are unfamiliar to them, and the most straightforward way to do things is often the place where ‘security’-based authorization restrictions are imposed.

    2. 2

      Canonical version is here, but lobste.rs still strips the friend-link code last I checked, so I had to post the cached version.