Interconnections, by Radia Perlman, goes into some detail about how lower-level network protocols are designed including the tradeoffs and the human issues involved in designing them. You could probably get a sense for tradeoffs that would also apply to higher level protocol design.
I started researching the GitHub APIs that would be relevant to implement something like this a few months ago, but I’m really hesitant to sink a lot of investment into GitHub and its accompanying monopoly in my free time.
I’ve moved a bunch of my personal projects over to GitLab, but they’ve been doing stupid stuff like refusing to render static repository content without whitelisting Javascript, or telling me my two-week-old browser is unsupported because it’s outdated, so … not a lot of motivation to invest in that ecosystem either.
This. I noticed the mandatory JS for rendering nonsense too. I really want to like GitLab, and have tried multiple times to use them as my main, but to me the UX is just inferior to GitHub. The UI is sluggish and feels very bloated.
I gave up on GitLab for the time being a while ago, and have been self-hosting Gitea. Now Gitea uses JS too, but it also works quite well without it. And it’s nowhere near as slow as GitLab.
but to me the UX is just inferior to GitHub.
Well, GitLab for all its faults doesn’t hijack my Emacs key bindings to do idiotic shit like “bold this thing in markdown” (which was only two keystrokes to begin with; why put a shortcut for something I never do on Ctrl-B, which I use all the time?), so I wouldn’t say GitLab has quite sunk to that level yet.
Interesting. That’s a fair point; though GitHub’s editor isn’t the first to do that. I hadn’t noticed it with GitHub mainly because I use Vimium in Firefox, Evil in Emacs, and bspwm; so I rarely use Emacs-style bindings but I agree that could be frustrating.
Do exwm’s simulation keys work around the issue, or does GitHub’s in-browser binding take precedence?
EDIT: There’s also xkeysnail, though it does require running as root.
EDIT2: It seems like running xkeysnail as root may not be necessary if the user has access to input devices. On Arch (or any distro with systemd >= 215) that can be achieved by adding the user to the input group (see here and here).
EDIT3: The Emacs-keybinding extension may be another option, though apparently it only works in macOS. There’s also shortkeys but I haven’t tried either one.
If you’re editing text, Ctrl-B for bold (or Ctrl-F if you’re in Germany) should be expected. Editing text means Word keybindings, not Emacs bindings.
This also means Ctrl-I for italic (or Ctrl-K in Germany) and Ctrl-U for underlined (this one is actually the same).
I strongly disagree, at least on a Macintosh, where all native text entry widgets obey the Emacs keybindings. Web garbage that arrogates system functionality to itself, hijacking my chosen platform experience for a poor copy of some other system, is noxious and broken.
I just tried in the macOS Notes app and ctrl+b makes the text bold. The Pages app does the same, ctrl+b makes the text bold. These are two native text entry applications developed and provided by Apple themselves.
That’s a problem with your system, then – the browser explicitly exposes Ctrl, Alt, and Meta. If your keyboard does not offer these, then your browser, OS, or keyboard has to map between them and the actual keys.
Users on all other systems (aka 99.5% of users) expect Ctrl-B (or Ctrl-F) to create bold text.
No, users on Macs expect their modifier keys to respect platform convention – Emacs keybindings for movement, cmd for meta. To assume otherwise is disrespectful.
So how do you suggest doing that without using heuristics on the user agent?
I’d be interested in your implementation of a JS function that returns the correct set of modifiers and keys to use for the bold shortcut. And which works reliably.
Currently, the browser doesn’t expose this, so everyone gets the most commonly used solution.
Currently, the browser doesn’t expose this, so everyone gets the most commonly used solution.
????
Note: On Macintosh keyboards, [.metaKey] is the ⌘ Command key.
const MOD_KEY_FIELD = navigator.platform.startsWith('Mac') ? 'metaKey' : 'ctrlKey';
// lazy
if (keyEvent.ctrlKey && ...
// bare minimum for any self-respecting developer
if (keyEvent[MOD_KEY_FIELD] && ...
What I want to know is how you’re commenting from 1997. Just hang tight, in a couple years two nerds are gonna found a company called Google and make it a lot easier to find information on the internet.
Using the proper modifier depending on platform? The browser should expose “application-level modifier” say, for bold, and that would be ^B on X11/Windows and Super-B for Mac.
The browser isn’t exposing this, though. The best chance is sniffing the user agent and then using heuristics on that, but that breaks easily as well.
100 - 99.5 != 12.8, your assumption is off by a factor of 25.
Ctrl+b on my Mac goes back a character in both macOS Notes and Pages, as it does everywhere else. Cmd+b bolds text (as also it does everywhere else).
In general, Macs don’t use the Ctrl key as a modifier too often (although you can change that if you want). They usually leave the readline keybindings free for text fields. This seems to be by design:
The standard key bindings are specified in
/System/Library/Frameworks/AppKit.framework/Resources/StandardKeyBinding.dict. These standard bindings include a large number of Emacs-compatible control key bindings…
Editing text means Word keybindings, not Emacs bindings.
Those of us who use emacs to edit text expect editing text to imply emacs keybindings.
Some of us expect them everywhere, even.
If it was a rich text WYSIWYG entry, I’d be 100% agreed with you. (I would also be annoyed, but for different reasons.)
But this is a markdown input box. The entire point of markdown is to support formatted text which is entered as plain text.
It’d be great if we had a language server protocol extension for code review + a gerrit backend. I started taking a look at this a few months ago (I work mostly in Gerrit now) but didn’t have the bandwidth for actually prototyping it. It seems like an obviously good idea, though having to use git hampers some of the possibilities.
Slightly off topic: I see people complaining a lot about Electron, with Slack being a prime example.
I think the success of Visual Studio Code means that it is possible to build excellent apps using Electron, and that the performance of the Slack app is not necessarily representative of all Electron-based apps.
I think it’s possible. But VSC is literally the only Electron app that doesn’t blatantly suck performance-wise. Is that because Microsoft just actually put in the effort to make something good? Or is it because Microsoft has built best-in-class IDEs that scale radically better than any alternative for a long long long time?
Now no one get me wrong, I’m a UNIX guy through and through, but anyone who claims there’s anything better than Visual Studio for large scale C++ development has no clue what they’re talking about. C++ as a language is complete bullshit, the most hostile language you can write an IDE for. Building an IDE for any other language is child’s play in comparison, and Microsoft is proving it with VSC.
I don’t think it’s currently possible for anyone besides Microsoft to make an excellent Electron app. They took a bunch of internal skill for building huge GUI applications that scale, and built their own language to translate that skill to a cross platform environment. I think they could have chosen whatever platform they felt like, and only chose to target Javascript because the web / cloud is good for business. We’ll start seeing good Electron apps when Typescript and the Microsoft way become the de facto standard for building Electron apps.
But I also haven’t slept in 24 hours so maybe I’m crazy. I reckon I’ll go to bed now.
but anyone who claims there’s anything better than Visual Studio for large scale C++ development has no clue what they’re talking about.
JetBrains CLion might actually be a bit better – but they originally started by building add-ons to improve development in Visual Studio (e.g. the amazing ReSharper), and only expanded to build their own IDEs later on.
I fully agree on all other points.
CLion definitely has a great feature set, but I’ve found a lot of it to be unusably slow, at least on our large codebase. Lots of us use Qt Creator even though it’s objectively worse and has some sketchy bugs, because it’s at least fast for the stuff it does do. I look forward to the day I can comfortably switch to CLion.
[Comment removed by author]
I don’t think I can agree on the hype thingy here.
Background: I hate developing on Windows and have been using Linux for god knows how many years, but I do have a Windows work(play)station at home where I sometimes do development, and I don’t always want to ssh into some box to develop (or, in the case of creating Windows applications, I can’t).
I’ve been using Eclipse for years (for the right combination of languages and available plugins, of course) and had been searching for a decent “general-purpose” replacement (e.g. supports a lot of languages in a decent way, is configurable enough so you can work, has more features than, say, an editor with only syntax highlighting). So OK, I never used Sublime Text (tried it out, didn’t like it for some reason), and VS Code was the first thing in like 10 years that was just a joy: a nice, functioning, free IDE/text editor that doesn’t look like it was written in the 90s (like, can’t configure the font, horrible Office 94-style MDI), doesn’t take two minutes to load (like Eclipse with certain plugins), etc.
It’s about frictionless onboarding, and yes, maybe I sound really nitpicky here - but it’s from the standpoint of a total hobbyist programmer, as my overlap with work projects or any serious open source work (where I usually have the tooling set up like at work, as it’s long-running and worth the investment) is minimal. That’s also the reason for the focus on free: I’m absolutely willing to pay for a good IDE (e.g. IntelliJ IDEA), but not if I’m firing it up once per month.
Is there a chance that the ill reputation of Electron apps comes from Electron itself offering ample opportunity for prime footgunmanship?
I’d argue that yes, it’s quite possible to build a nice simple (moderately) lightweight thing in Electron; it’s just pretty hard in comparison to building, say, a nice simple (definitely) lightweight CLI. Or even a desktop app using a regular graphical toolkit?
Visual Studio Code, Slack, Discord, WhatsApp, Spotify all are unfortunately not simple. And while they could be reduced to simpler apps, I kinda feel like we’re all using them exactly because they have all these advanced features. These features are not useless, and a simpler app would disappoint.
It also seems like GUI and CLI toolkits are lagging behind the Web by maybe a decade, no joke. I’d love to see a native framework that implements the React+Redux flow. Doesn’t even have to be portable or JavaScript.
I’m a huge fan of CLI software that eats text and outputs text. It’s easier to integrate into my flow, and there’s a plethora of tools already available to manipulate the inputs and outputs.
An example: I’ve written a CLI client to JIRA that I have plugged into the Acme editor. I just tweaked my output templates a bit to include commands that I’d want to run related to a given ticket as part of my regular output, and added a very simple plumber rule that fetches a ticket’s information if I right-click anything that looks like a JIRA ticket (TASK-1234, for example). It’s served me well as a means to not have to deal with the JIRA UI, which I find bloated and unintuitive, and it allows me to remain in the context of my work to deal with the annoyance of updating a ticket (or fetching info regarding a ticket (or listing tickets, or pretty much anything really)). It’s far from perfect, but it covers most, if not all, of my day-to-day interaction with JIRA, and it’s all just an integration of different programs that know how to deal with text.
[edit: It’s far from perfect, but I find it better than the alternative]
Is either part of that open-source by chance? I’ve been trying acme as my editor and use JIRA at work. I have a hunch you’re largely describing four lines of plumb rules and a short shell script, but I’m still having trouble wrapping my head around the right way to do these things.
Full disclosure, the JIRA thing has bugs that have not stopped me from using it in any meaningful way. https://github.com/otremblay/jkl
The acme plumbing rule is as follows:
type is text
data matches '([A-Za-z]+)-([0-9]+)'
plumb start rc -c 'jkl '$1'-'$2' >[2=1] | nobs | plumb -i -d edit -a ''action=showdata filename=/jkl/'$1'-'$2''''
It checks for a file called “.jklrc” in $HOME. Its shape is as follows:
JIRA_ROOT=https://your.jira.server/
JIRA_USER=yourusername
JIRA_PASSWORD=yourpassword
JIRA_PROJECT=PROJECTKEY
#JKLNOCOLOR=true
RED_ISSUE_STATUSES=Open
BLUE_ISSUE_STATUSES=Ready for QA,In QA,Ready for Deploy
YELLOW_ISSUE_STATUSES=default
GREEN_ISSUE_STATUSES=Done,Closed
# The following is the template for a given issue. You don't need this, but mine contains commands that jkl can run using middleclick.
JKL_ISSUE_TMPL="{{$key := .Key}}{{$key}} {{if .Fields.IssueType}}[{{.Fields.IssueType.Name}}]{{end}} {{.Fields.Summary}}\n\nURL: {{.URL}}\n\n{{if .Fields.Status}}Status: {{.Fields.Status.Name}}\n{{end}}Transitions: {{range .Transitions}}\n {{.Name}} | jkl {{$key}} '{{.Name}}'{{end}}\n\n{{if .Fields.Assignee}}Assignee: {{.Fields.Assignee.Name}}\n{{end}}jkl assign {{$key}} otremblay\n\nTime Remaining/Original Estimate: {{.Fields.PrettyRemaining}} / {{.Fields.PrettyOriginalEstimate}}\n\n{{.PrintExtraFields}}\n\nDescription: {{.Fields.Description}} \n\nIssue Links: \n{{range .Fields.IssueLinks}} {{.}}\n{{end}}\n\nComments: jkl comment {{$key}}\n\n{{if .Fields.Comment }}{{$k := $key}}{{range .Fields.Comment.Comments}}{{.Author.DisplayName}} [~{{.Author.Name}}] (jkl edit {{$k}}~{{.Id}}):\n-----------------\n{{.Body}}\n-----------------\n\n{{end}}{{end}}"
Thank you so much! I’ll take a look shortly. It really helps to see real-world examples like this.
If “jkl” blows up in your face, I totally accept PRs. If you decide to go down that path, I’m sorry about the state of the code. :P
It also seems like GUI and CLI toolkits are lagging behind the Web by maybe a decade, no joke. I’d love to see a native framework that implements the React+Redux flow. Doesn’t even have to be portable or JavaScript.
I couldn’t disagree more. Sure, maybe in “developer ergonomics” Web is ahead, but GUI trounces Web in terms of performance and consistency.
I believe one of the things that gave Electron apps a bad reputation (aside from the obvious technological issues) was things like “new” web browsers built with Electron that offered practically nothing new that most people would actually want, such as lower memory consumption.
Building UIs is hard in general - it seems like Electron trades off ease of making UIs performant for ease of building them.
That being said, it seems like it’s not prohibitively difficult to build a fast UI in electron: https://keminglabs.com/blog/building-a-fast-electron-app-with-rust/
It seems like most people building Electron apps just don’t think about performance until much later in the development process.
I think one of the main selling points of Electron was accessibility: anybody with solid knowledge of HTML, CSS and JS could find their way around and build an app that ran on multiple platforms. But it wasn’t performant, and it turned out to be quite a resource hog. Now why is this not the case with Visual Studio Code? Because it’s written by really good developers, who work for Microsoft, who created TypeScript, the language Visual Studio Code is written in on top of Electron. Now you can get a sense of why Visual Studio Code is a different case from the rest of the Electron apps: the people behind it are the reason. And the whole story defeats the point of Electron. If Electron as a platform could produce results half as good as VSC in terms of performance and resource efficiency, then maybe it would be a more viable option; as it is right now, I can see the pendulum swinging back to some other, native way of implementing applications.
I mean, I hate the web stack like few others, but I think the point that ultimately the people are more determinative than the technology stands.
I just really hate the web.
I completely agree. I think that a lot of the frustrations with the quality of Electron apps is misplaced.
Perhaps something like a simple form of codegen for common data structures and algorithms?
Isn’t go generate exactly this feature?
Superficially, yes. Realistically, it has usability issues that disqualify it as “simple”. It’s not an integrated part of the build process, so here’s hoping you have a script that you can drop it into, and you’re not in the habit of actually using go build (which falls short of the 40-year-old make when it comes to rebuilding generated files as needed when something changes). If it’s a library you’re using that requires codegen to fit your app, it’s that much worse, since now the library is imposing on your build process. Plus go generate’s only mode of operation is processing one file with magical comments and generating a different disk file for the build to find later, which you’ll probably be forced to check in whether you like it or not. Compare to macros (any sort), which are processed logically by the compiler without having to plop the generated source down on disk and manage it separately. Or the kind of codegen commonly employed in libraries in dynamic languages, which, hey, it might be string eval, but at least it goes off cleanly without user interaction. And when you consider that 99% of the use of go generate is for stuff that in a sane world wouldn’t require codegen at all…
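To make the complaint concrete, here’s a minimal sketch of the go generate workflow (the Pill type is invented for illustration; stringer is the standard tool from golang.org/x/tools):

```go
package main

import "fmt"

// The line below is the "magical comment": the compiler ignores it entirely,
// and it only does anything when someone explicitly runs `go generate`.
// That run would write a separate file (pill_string.go) implementing
// Pill.String(), which then has to be checked in for `go build` to find.
//go:generate stringer -type=Pill

// Pill is a hypothetical enum, here only for illustration.
type Pill int

const (
	Placebo Pill = iota
	Aspirin
	Ibuprofen
)

func main() {
	// Nothing in the build graph knows whether pill_string.go exists or is
	// stale; `go build` compiles whatever is on disk without complaint.
	fmt.Println(int(Ibuprofen)) // prints 2
}
```

If pill_string.go is never generated, or drifts out of date after the constants change, the build goes through silently with the stale version; that missing rebuild-as-needed step is the make comparison above.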
I write Go for work, and most of the time I even like it, but sometimes I swear the Go devs are passing off laziness and failure of imagination as wisdom and timeless minimalism.
I write Go for work, and most of the time I even like it, but sometimes I swear the Go devs are passing off laziness and failure of imagination as wisdom and timeless minimalism.
This is a great quote and I hope to use it some day.
I’m not sure if this is a great strategy, but it approximates what I actually did over my first four years of programming, mostly because I wasn’t beholden to anyone for the first two and could dabble in whatever I wanted. I feel like the following effect referenced in the article is one of the biggest advantages, especially at a big company:
As the popularity of languages ebb and flow, you will have a wider choice of jobs, companies and projects if you’re not limited by language choice.
I’m at Google now and being free to contribute to any project across the whole company (20% time and all that) has been empowering.
I don’t think so, because that seems like the purpose of a comment. If someone says “A, therefore not B,” just saying “not well reasoned” is not a valuable downvote. You might say: “I don’t see any connection between A and B, so this doesn’t seem like a sound argument.” The initial commenter might then say: “Oh, I see what you mean…” or they might say “I know it’s not obvious, but please see this paper by Philip Wadler.” Discussion leads to learning opportunities for both parties. A down-vote just leads to the commenter’s frustration.
The only downside of replying with a comment is that you end up with the phenomenon of weak top comments on a story, with the top replies being effective takedowns of the ideas in the top comment. Even well-intentioned people can miss weaknesses in comments the first time they’re read (and upvote them), so a lot of space is wasted on refutation and other good top-level comments get drowned out. A downvote option lets those sink to the bottom.
I think if something is not well reasoned you can either:
I don’t think writing half-assed comments is a sin, and thus it shouldn’t be punished. Just ignore them and let them sit with 1 point at the bottom of the page.
With REST, there is a continuous tension between efficiently satisfying the data needs of specific clients, and keeping the number of endpoints manageable. The reason for this tension is that a server defines what data an endpoint returns, and the client merely requests it. Especially in mobile apps, the overhead of performing separate requests or of requesting extraneous data is often unacceptable, so we’re forced to add ad-hoc endpoints with custom response structures.
In some cases, yes, this might be true, but it usually applies to extremely complex mobile applications whose developers have a very good understanding of the underlying infrastructure. It’s no big surprise that Facebook came up with this solution, as its mobile application is probably one of the most complex apps out there. I don’t think the complexity required to implement a GraphQL-like interface will help a lot of APIs (two high-profile examples might be Twitter and Netflix).
Also, if a REST API is simple (as in “not complex”), that doesn’t mean it’s a bad thing. On the contrary, it’s a lot easier to understand and implement a client that works with a REST API, because the entities are so obvious and so is the way to query them.
As a counterpoint, Netflix simultaneously invented a specification called JSONGraph that is extremely similar to GraphQL, mostly because of the huge variety of clients that were all being forced to consume the same API, leading to response payloads that were optimal for none of them. The idea of client-driven query endpoints is here to stay, I think.
I think that, contrary to the linked article, both REST and GraphQL are here to stay. I believe that GraphQL might be better suited for complex API interactions while REST has (or could have) a simplicity that can be beneficial for simpler clients.
A transcript of the talk is available here: http://jvns.ca/blog/2016/09/17/strange-loop-talk/
My first response was too much of a troll for sure. Here’s a more detailed and constructive review.
“A large class of errors are caught, earlier in the development process…”
Caught in an earlier stage, though perhaps not earlier in time. Dynamic languages do not have a compile stage. That large class of errors can be caught in different ways with a disciplined, interactive dynamic-language methodology.
“closer to the location where they are introduced”
Again interactive programming provided a means of working “closer to the problem”. Which is closer? There’s no information about how to tell.
“The types guide development and can even write code for you—the programmer ends up with less to specify”
Which is true of declarative programming of various flavors, even the dynamic kind. Again there’s no comparison with other dynamic means of declarative programming, so again we do not know which lead to less code or less churn.
“More advanced dependently typed languages routinely derive large amounts of boring code with guidance from the programmer”
Again this is not in comparison to anything. What is the cost in time or effort, and what kinds of problems are most suitable for current dependent type systems? Perhaps that boring code is not needed and a little exploration in a rapid prototyping system with a customer can rule out the need for it. Perhaps the code in question has already been developed and validated, and there’s no value in rewriting it in a dependently typed system.
I’ve no experience with dependent types, but my sense is there’s a cost curve for many situations where Hoare logic semi-formally applied could be more cost effective in practice. But either way, there’s no evidence presented.
“One can refactor with greater confidence…”
Greater confidence than what? The Smalltalk refactoring browser was the first, and the most fully featured for years. Simple, expressive languages with good tools can be more effective than languages with more rules. So this argument without evidence sounds theoretically true, but there’s sufficient experience in the field over decades to suggest it’s not necessarily true in practice.
“Static types can ease the mental burden of writing programs, by automatically tracking information the programmer would otherwise have to track mentally in some fashion”
This is a false dichotomy. Some dynamic languages have very good tools that programmers in some static languages would envy.
“Types serve as documentation for yourself and other programmers and provide a ‘gradient’ that tells you what terms make sense to write”
These tend not to be sufficient for understanding dynamic systems. And again good tools are able to discover most of this in a dynamic language system. And good practice including documentation can be cost effective. Once again there’s no way to tell from this article, it’s just a proclamation of what the author believes or feels based on whatever experiences they’ve had, but failed to explain.
“Dynamic languages executed naively…”
Anything done naively will have faults. But there are many situations where a naïve approach can be the most effective overall. And there are many situations where a dynamically typed system can perform close to or better than C. There’s nothing to make of this point in the article.
“You often need some sort of JIT to get really good performance…”
Some dynamic languages support compile-ahead plus an interpreter for the truly dynamic runtime needs. Also whole-program deployment tools can eliminate many of the costs of an otherwise dynamic runtime.
“learning how best to carve a program up into types is a skill that takes years to master”
The same is true of dynamic language systems. Yet we take the bad examples as the norm for them. Programming is hard no matter what. It takes a lot of experience and discernment.
A few points in your reply that my experience disagrees with:
closer to the location where they are introduced
Again interactive programming provided a means of working “closer to the problem”. Which is closer? There’s no information about how to tell.
What do you mean here? One can do interactive programming in statically typed languages as well, and statically typed languages will have the opportunity (through using lots of types) to make the problem visible closer to its introduction point.
A large class of errors are caught, earlier in the development process…
Caught in an earlier stage, though perhaps not earlier in time. Dynamic languages do not have a compile stage. That large class of errors can be caught in different ways with a disciplined, interactive dynamic-language methodology.
IME that discipline looks a lot like one would get “for free” from a statically typed language. What kind of discipline are you talking about?
One can refactor with greater confidence…
Greater confidence than what? The Smalltalk refactoring browser was the first, and the most fully featured for years. Simple, expressive languages with good tools can be more effective than languages with more rules. So this argument without evidence sounds theoretically true, but there’s sufficient experience in the field over decades to suggest it’s not necessarily true in practice.
I think the claim is roughly that the more expressive type system one has the more confidence the refactoring will be. FWIW, my experience in the industry has been that this is absolutely true. My career has almost entirely been refactoring existing code to make it a little less worse, and the difficulty of the task is inversely proportional to the power of the type system of the language the refactoring takes place in. Refactoring complex business logic in Python is absolutely miserable. I think the nature of refactoring is clearly on the side of static type systems, and the only tools that make refactoring more palatable in a dynamically typed language are mimicking what one gets with a statically typed language.
“One can do interactive programming in statically typed languages as well”
Agreed. My claim is not that one is clearly superior to the other; rather, my claim is that, done well, both can be effective. The article seems to claim superiority but offers no evidence.
“statically typed languages will have the opportunity (through using lots of types) to make the problem visible closer to its introduction point.”
Again there is an appeal that this would be the case, but it seems there are ample proponents of both static and dynamic that claim a preference for one or the other. Seems based more on personal experience and preference than anything else.
“IME that discipline looks a lot like one would get “for free” from a statically typed language.”
Nominal checking of syntax is not necessarily free, nor sufficient. Incremental, dynamic, contract-based design and feedback (the hard part) is needed no matter what the implementation language. Again I’m not claiming superiority, rather lack of evidence beyond personal preference for one or the other.
“I think the claim is roughly that the more expressive type system one has the more confidence the refactoring will be”
I understand the claim in theory. The problem is the Smalltalk refactoring browser has been around longer than any other, and has for decades served as a counter example in widespread use.
“Refactoring complex business logic in Python is absolutely miserable”
I believe you. I’ve never seen any decent tools for Python.
“the only tools that make refactoring more palatable in a dynamically typed language are mimicking what one gets with a statically typed language.”
You’re incorrect. The Smalltalk refactoring browser embraces the nature of Smalltalk and pre-dates the implementation of refactoring tools for statically typed languages by many years.
IME that discipline looks a lot like one would get “for free” from a statically typed language.
Nominal checking of syntax is not necessarily free, nor sufficient. Incremental, dynamic, contract-based design and feedback (the hard part) is needed no matter what the implementation language. Again I’m not claiming superiority, rather lack of evidence beyond personal preference for one or the other.
What does syntax checking have to do with this? Types provide proofs about your program and a checker for them, that is not syntax, that is semantics.
I think the claim is roughly that the more expressive type system one has the more confidence the refactoring will be
I understand the claim in theory. The problem is the Smalltalk refactoring browser has been around longer than any other, and has for decades served as a counter example in widespread use.
Perhaps Smalltalk is a counterexample, but it’s quite hard to know. I just don’t see Smalltalk deployed at the millions-of-lines scale like the Python I’ve had to clean up. But I think you’re conflating a few things here. Even if the Smalltalk refactoring browser is great, types will still give you a more powerful tool to determine the correctness of a refactoring because Smalltalk simply can’t. It is the nature of types that gives this confidence. Smalltalk might be good enough and have other benefits, but that is a separate argument.
But the real win, which it’s unfortunate the link doesn’t put as #1, is that the semantics of any dynamic language can be represented in a statically typed language with a sufficiently powerful type system. C# has had a dynamic type for a long time, and GHC has recently gotten one.
“Types provide proofs about your program and a checker for them, that is not syntax, that is semantics”
By and large they are nominal or structural checks of self-consistency. Until you get into dependent types, the semantics are pretty limited. And it’s not yet clear where the cost/benefit lies for dependent types vs. semi-formal runtime contracts.
“I just don’t see Smalltalk deployed to the millions of lines of code as the Python I’ve had to clean up.”
There’s a lot more Python than Smalltalk, but there are definitely millions of lines of good Smalltalk in production. I’m not here to defend the industry’s track record. Nor am I convinced that legions of bad Python programmers will suddenly create flawless typed FP programs any time soon.
“types will still give you a more powerful tool to determine the correctness of a refactoring because Smalltalk simply can’t”
Your bias is flying in the face of a reality that has been in practice for decades. I don’t intend to fight with people’s belief systems. I’m interested in evidence.
“the semantics of any dynamic language can be represented in a statically typed language with a sufficiently powerful type system”
Agreed, regarding the end result program. The experience of how a team effectively gets to the end result is different based on the language, the tools, and the team. It’s not enough to say that in the end the programs are the same.
Anyway, I’m not interested in a long back and forth on this. I’ve been through it with you and others many times.
By and large they are nominal or structural checks of self-consistency. Until you get into dependent types, the semantics are pretty limited. And it’s not clear yet where the cost/benefit lies for dependent types vs. semi-formal runtime contracts.
I’m not exactly sure what you mean here but correct-by-construction and whatever one’s equivalent of newtype is goes very far in helping with correctness. In this case, the constructor acts as a witness that something is correct and it travels around the program. Maybe that is limited but the impact is large.
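To make the “constructor as witness” idea concrete, here’s a minimal sketch in Python with type hints (the names are hypothetical, not from any library mentioned in the thread): once a value is wrapped, downstream code can rely on the invariant without re-checking it, and a checker such as mypy rejects passing a bare `str` where the wrapper is expected.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class NonEmptyName:
    """Newtype-style wrapper: holding an instance is a witness
    that the validation below has already happened."""
    value: str

    def __post_init__(self) -> None:
        if not self.value.strip():
            raise ValueError("name must be non-empty")

def greet(name: NonEmptyName) -> str:
    # No re-validation needed: the type itself carries the proof.
    return f"Hello, {name.value}!"

# Validation happens once, at the boundary; the witness travels from there.
print(greet(NonEmptyName("Ada")))
```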
Your bias is flying in the face of a reality that has been in practice for decades. I don’t intend to fight with people’s belief systems. I’m interested in evidence.
This isn’t a bias, it is a factual statement. The type checker will validate whatever the type system offers. Smalltalk does not have this. So the refactoring tool might be good enough for any programmer to use but it is less powerful than a type system in guaranteeing the correctness of a program.
Thanks for the chat! It was a pleasure.
This isn’t a bias, it is a factual statement. The type checker will validate whatever the type system offers. Smalltalk does not have this. So the refactoring tool might be good enough for any programmer to use but it is less powerful than a type system in guaranteeing the correctness of a program.
I don’t disagree that it’s a fact, but it’s easy to see how Smalltalk’s refactoring tools might actually do the same kind of thing to make similar guarantees.
There is a JavaScript code analysis system called ternjs that can do automatic refactoring with confidence, and other truly impressive things. It works, partially, by building a type tree, but at each node there is a possibility of multiple types. I think this post describes the process as of a couple years ago. I am not at all sure how Smalltalk’s stuff works, but I imagine it’s similar.
Thanks!
This looks pretty neat, but it does mention that it doesn’t strive to be automatic all the time. And it doesn’t really seem like it does anything all that special, really.
Parse tree transformations, reflection, and dynamic analysis at runtime, via method wrappers. It also mentions that these capabilities are only as good as your test suite.
While this is pretty much true of every sort of automatic fiddling of code, I can’t see how Smalltalk can make guarantees that are stronger, or even as strong as the refactoring tools @apy suggests. The dynamic analysis looks at actual values via method wrappers, but is there a guarantee that all possible program paths are executed and analyzed?
Right, there is a difference between being effective and giving guarantees. It is effective in practice even though Smalltalk is a very simple and expressive language. But a language like Smalltalk and its tools will no more guarantee success than will a statically typed language and its tools. Badly designed systems can type check. That doesn’t make them easy to improve.
Badly designed systems can type check. That doesn’t make them easy to improve.
This is not the same argument, however. The claim is that a static type system lets one refactor with more confidence that the outcome is correct. A static type system will guarantee something about your program before you run it, and that something is dependent on how good your types are. A dynamically typed language will not guarantee anything. To reiterate: the refactor browser might be quite high quality and work well, but in terms of guarantees it can give the developer it is clearly less than what a statically typed language can offer.
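A small illustration of the refactoring claim, sketched in Python with type hints and a mypy-style checker in mind (the function and names are hypothetical): when a signature changes during a refactoring, every stale call site becomes a static error, rather than a latent failure waiting for the line to execute.

```python
# Before refactoring the signature was: total(prices: list[float]) -> float
# After refactoring, the currency is made explicit in the signature:
def total(prices: list[float], currency: str) -> str:
    return f"{sum(prices):.2f} {currency}"

# A static checker now flags every stale call site up front, e.g.:
#   total([1.0, 2.0])   # error: missing argument "currency"
# A dynamic language only discovers this when that call actually runs.
print(total([1.0, 2.5], "EUR"))  # 3.50 EUR
```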
“but it seems there are ample proponents of both static and dynamic that claim a preference for one or the other. Seems based more on personal experience and preference than anything else.”
It’s not a preference: it’s mathematically provable. Programming essentially produces a bunch of state machines interacting with various input and output patterns. Certain patterns or states are a problem. You can detect these at compile time or runtime. You can detect these with analysis or testing. The difficulty of the runtime and testing approaches goes up dramatically as complexity increases, until they quickly reach combinatorial explosion. Analysis approaches constrain the expression of the program in a way that catches chunks of problems before it’s even run (static analysis), spots others quickly with fewer tests (dynamic analysis), often applies to all executions rather than just the test cases you thought of, and hopefully with less work as types & specs become second nature. These checks range in power and complexity as you go up from basic types to interface types to dependent types to whatever else they come up with. And in tooling, from a basic type checker to static analysis to provers.
Yet, most published evidence in the literature of software verification… academic, commercial, and FOSS… shows stronger guarantees of correctness for more inputs and outputs done in the constrained, static languages. Also easier compiler optimization. There is research and existing tooling on doing such things for dynamic languages. The ones I’ve seen are much more complex and expensive in analysis than, say, Oberon or Design-by-Contract. One day, they might have methods to equal static typing in ease of analysis or correctness. So far, the evidence is against that.
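One way to see the “catch bad states at compile time” point is a typestate-style encoding, sketched here in Python with type hints (a hypothetical connection, not any real API): each state is its own type, so invalid transitions are static type errors under a checker like mypy, rather than cases a test suite has to enumerate.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Closed:
    def open(self) -> "Open":
        return Open(messages_sent=0)

@dataclass(frozen=True)
class Open:
    messages_sent: int

    def send(self, msg: str) -> "Open":
        # Transitions return the next state's type, so misuse is visible
        # to a static checker at the call site.
        return Open(messages_sent=self.messages_sent + 1)

    def close(self) -> Closed:
        return Closed()

# Valid transitions chain naturally:
conn = Closed().open().send("hi").send("bye")
print(conn.messages_sent)  # 2

# Invalid ones, like Closed().send("hi"), don't exist on the type,
# so a checker rejects them before any test runs.
```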
“The problem is the Smalltalk refactoring browser has been around longer than any other”
Smalltalk is an interesting case. It’s exceptional in quite a few ways. The reason it’s easy to do refactoring in Smalltalk is that it’s designed to put stuff like that at a higher priority. Things like dynamic types, extreme decomposition with message passing, reflection, and a runtime for bytecode add up to make it easier. What it traded off was high performance, ease of compiler implementation, and provable correctness or properties of apps and compiler, given it’s more difficult to analyze apps made this way. Languages wanting those + refactoring will likely be statically typed since it’s just easier to build the tooling. That’s the path Eiffel took, with better overall results.
https://en.wikipedia.org/wiki/Eiffel_(programming_language)
Although, Smalltalk probably still has it beat on rapid prototyping and refactoring. My guess. :)
“It’s not a preference: it’s mathematically provable.”
A type system can be proven logically. That is a long way from proving an application will be built more effectively (time, budget, quality, features) in practice.
“Certain patterns or states are a problem. You can detect these at compile time or runtime. You can detect these with analysis or testing. The difficulty of the runtime and testing approaches goes up dramatically when complexity increases”
This seems to assume there are two ways to solve the problem, type system X or ad hoc. In fact I’ve developed state machine generators and interpreters in Smalltalk and Lisp for things such as multiuser collaborative systems and manufacturing planning.
“There is research and existing tooling on doing such things for dynamic languages. The ones I’ve seen are much more complex”
Complexity is not a stranger to much of the software world. Unfortunately. Typed or not.
As a counter example to your experience, see ObjecTime. A tool for designing realtime, distributed, communicating actors and state machines. Written using Smalltalk, it also supported C, C++, etc. It was eating IBM’s lunch in the realtime market back in the day, so IBM bought the company.
https://drive.google.com/file/d/0B0cKsRm-3yprYlJyeTI0dTduTGc/view?usp=drivesdk
“Smalltalk is an interesting case. It’s exceptional in quite a few ways”
Funny that type system proponents tend to position state-of-the-art type systems in theory against average teams with bad dynamic languages such as Python in the field. Bad teams will use good languages poorly, period.
“The reason it’s easy to do refactoring in Smalltalk is that it’s designed to put stuff like that at a higher priority”
Exactly. I’m arguing the best in dynamic language systems and the best in typed languages can be equally effective, and there’s little evidence to show otherwise.
“That’s the path Eiffel took with better, overall results.”
Eiffel was a nice system. It’s arguable about better overall results.
“Smalltalk probably still has it beat on rapid prototyping and refactoring. My guess. :)”
Yes, I don’t recall Eiffel ever having any refactoring or even a good interactive development environment.
“A type system can be proven logically. That is a long way from proving an application will be built more effectively (time, budget, quality, features) in practice.”
The published studies in industry always showed that strongly-typed languages with interface checks improved quality and reduced debugging time for modules and integrations. Formal methods did as well. There’s time and quality. The lightweight methods, esp. strong typing + design-by-contract, didn’t have a significant impact on labor for basic annotations but reduced it for debugging. So, there’s budget. Features require the ability to extend or modify the program rapidly and with minimal breakage. Strong, dynamic typing dominates on speed of development, with most correctness of new features coming from those using strong, static typing or analysis. So, those are the field results surveying defect rates and such.
I don’t have that many empirical studies on strong, dynamic languages’ ability to do affordable development plus prove the absence of entire classes of errors. If you have them, I’d appreciate the links for further evaluation.
“This seems to assume there are two ways to solve the problem, type system X or ad hoc. In fact I’ve developed state machine generators and interpreters in Smalltalk and Lisp for things such as multiuser collaborative systems and manufacturing planning.”
I’m sure you have. People did in assembly, too. The point of the strong, static types is to ensure guarantees across the software no matter what people are doing with it. So long as the code is type-safe. This can be used to knock out entire classes of problems with no extra work on the developer. You have to manually check for those if it’s dynamic. Many developers won’t, though, so the baseline will suck in those areas. Hence, standardizing on type systems that force a better baseline.
“This seems to assume there are two ways to solve the problem, type system X or ad hoc. In fact I’ve developed state machine generators and interpreters in Smalltalk and Lisp for things such as multiuser collaborative systems and manufacturing planning.”
Looks neat. We’ve been talking about what one can do with a static or dynamically-typed language. You just introduced a CASE tool whose DSL…
https://www.eclipse.org/etrice/
…looks like a statically-typed language that gets transformed into the necessary Smalltalk, etc. that gets the job done, combined with a runtime. I’m not sure how a CASE tool with a static language replacing your dynamic language for model-driven development supports your point that (a) dynamic language users have comparable benefits with the language or (b) statically-typed languages/tools aren’t needed. I could’ve easily linked Ensemble’s protocol composition, Verdi’s correct-by-construction generators for protocols, numerous works in Coq/HOL that generate real-time software, the Atom specification language in Haskell, or Hamilton et al’s 001 toolkit that generated whole systems from static, logical specs with OOP and timing info. Tons of third-party tools use DSL’s or MDD for more robust software, all written in arbitrary stuff. It might be indicative of something that the most robust ones are done with statically-typed languages. Those using dynamic usually just do thorough testing at best.
“Bad teams will use good languages poorly, period.”
The original studies on quality by Mitre, DOD, SEI, etc. compared professionals on various projects using languages like assembly, Fortran, C, C++, Ada, and Java. The last three, due to typing/safety, consistently had more productivity and lower defects than the others. Ada always had the fewest defects due to strongest typing, except one study I saw where C++ and Ada projects matched. Anomaly. Also, one of the earliest correct-by-construction approaches that got tested was Cleanroom. People hit very low defect rates on the first try while remaining productive and flexible. The main way it did that was constraining how programs were expressed for easy composition and verification. Like static typing and analysis.
What bad teams do is a strawman I’ve never relied on in promoting software assurance methods. The results of pros who got more done told me what I needed to know. The consistent lack of anything similarly achieved on the other side, outside some verified LISPs, told me some more. This forms a status quo. It’s the dynamic side that needs to prove itself, with bulletproof code with proofs of properties and/or empirical studies against good, static toolchains.
“I’m arguing the best in dynamic language systems and the best in typed languages can be equally effective, and there’s little evidence to show otherwise.”
Show me an ultra-efficient, dynamic-language, imperative application or component that’s been verified down to assembly to be correct and terminate properly. There’s a bunch of them for strong, statically-typed imperative and a few for functional with caveats. Bring on the evidence that dynamic languages are achieving just as much with proof it works + testing rather than just some testing where it appears to work.
The published studies in industry always showed that strongly-typed languages with interface checks improved quality and reduced debugging time for modules and integrations.
I’m always interested in reading good studies if you have 2-3 at hand.
Formal methods did as well.
Of course formal methods can be and have been used in conjunction with dynamic languages.
The lightweight methods, esp strong typing + design-by-contract, didn’t have significant impact on labor for basic annotations but reduced it for debugging. So, there’s budget.
I’d really like to read these studies, especially if they considered lightweight methods, esp. DbC, along with a high-quality dynamic language that has good tool support such as Lisp or Smalltalk.
I don’t have that many empirical studies on strong, dynamic languages’ ability to do affordable development plus prove the absence of entire classes of errors. If you have them, I’d appreciate the links for further evaluation.
Here’s at least one study in the following link that puts Smalltalk ahead of Ada, C++, C, PL/I, Pascal, etc. on a significant sized project.
https://drive.google.com/file/d/0B0cKsRm-3yprYTR5YTRaRFBfR28/view?usp=sharing
This can be used to knock out entire classes of problems with no extra work on the developer. You have to manually check for those if it’s dynamic.
Many of those kinds of problems are covered by tests that would be needed in either case. Some of those kinds of problems can be addressed through tools and metaprogramming, even optional type systems, e.g. Strongtalk. I’ve used that same optional type system a fair bit with Dart.
Many developers won’t, though, so the baseline will suck in those areas. Hence, standardizing on type systems that force a better baseline.
I have doubts that a good type system will make good programmers out of bad ones, just as OOP and dynamically typed FP have not. Bad programs can be type checked.
https://www.eclipse.org/etrice/ …looks like a statically-typed language…
That’s a second, later implementation of the same ROOM methodology. ObjecTime is decades older, implemented in Smalltalk-80.
Show me an ultra-efficient, dynamic-language, imperative application or component that’s been verified down to assembly to be correct and terminate properly.
The closest I can think of off the top of my head is https://en.wikipedia.org/wiki/ACL2
But that’s not really my point. I’m not arguing dynamic languages are the best choice in all cases, but that they can be at least as good of a choice in most cases. Very few software systems need this level of verification. In fact most software systems are more concerned with discovering what the spec should be, and achieving an economical, maintainable implementation of a semi-formal specification.
There is a great review of empirical studies of programming here: http://danluu.com/empirical-pl/
Appreciate the list. Many of the studies have problems but there’s good information in the overall collection.
“I’m always interested in reading good studies if you have 2-3 at hand.”
I just reinstalled my system due to a crash a few days ago. I’m still getting it organized. Maybe I’ll have the links in a future discussion. Meanwhile, I found quite a few of them Googling terms like defects, Ada, study, programming languages, comparison. Definitely include Ada, Ocaml, Haskell, ATS, or Rust given they use real power of strong, static typing. C, Java, etc do not. It’s mostly just a burden on developers that assists the compiler in those languages. It’s why study results rarely find a benefit: they’re not that beneficial. ;)
Thomas Leonard’s study of trying to rewrite his tool in many languages is actually illustrative of some of the benefits of static typing. He was a Python programmer who ended up on Ocaml due to the type system’s benefits plus a performance gain. On the page below, in Type Checking, he gives many examples of where the static typing caught problems he’d have had to think up a unit test for in Python:
http://roscidus.com/blog/blog/2014/02/13/ocaml-what-you-gain/
He doesn’t even have to think about these in the static language. Just define his stuff in a typical way allowed by the language. It knocks out all kinds of problems invisibly from there. That he was an amateur following basic tutorials without much due diligence, but still got the results, supports the benefit the static types had. The ML’s also are close to dynamic languages in length due to type inference and syntactic sugar.
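The kind of problem he describes is easy to reproduce. Below is a small sketch (hypothetical functions, not from his tool) of a lookup that can fail: under a mypy-style checker, using the Optional result without handling None is a static error, which is exactly the class of bug the dynamic version would need a unit test to catch.

```python
from typing import Optional

def find_age(users: dict[str, int], name: str) -> Optional[int]:
    return users.get(name)  # None when the user is missing

def age_next_year(users: dict[str, int], name: str) -> Optional[int]:
    age = find_age(users, name)
    if age is None:
        # A static checker forces this branch: doing `age + 1` on an
        # Optional[int] without the None check is a type error.
        return None
    return age + 1

users = {"ada": 36}
print(age_next_year(users, "ada"))    # 37
print(age_next_year(users, "grace"))  # None
```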
“Here’s at least one study in the following link that puts Smalltalk ahead of Ada, C++, C, PL/I, Pascal, etc. on a significant sized project.”
It actually doesn’t, since it leaves off data on defect introduction, time for removal, and residual ones. Basically measures speed without quality. In those studies, including some on stijlist’s link, the LISPers have a larger productivity boost over C++ than Smalltalk does here. I ignore LISP vs strong, static types for the same reason: few comparisons on defect introduction & removal during development or maintenance. The studies I saw on Ada showed it way ahead of other 3GL’s on that. Dynamic languages usually weren’t in the defect comparisons of the time, though.
In yours, the Smalltalk program gets developed the fastest (expected) with unknown quality. The Ada95 program… whose compiler catches all kinds of algorithmic and interface errors by default… cost 75.17 person-months over Smalltalk but came in 24.1 faster than C++. So, it looks good in this study compared to the flagship imperative and dynamic languages of the time, given it’s a straitjacket language. Smalltalk’s design advantages mean Ada can’t ever catch up in productivity. Logical next step is redoing the same thing with modern Smalltalk and Ocaml (or something similar). As the link above shows, Ocaml has strong typing with benefits close to Ada but conciseness and productivity more like Python. That would be a nice test against Smalltalk. Actually, the better tooling means Ocaml would be handicapped against Smalltalk. So, maybe not as nice a test, but any comparable or winning result on Ocaml’s side would mean more if handicapped, eh?
“Many of those kinds of problems are covered by tests that would be needed in either case.”
You don’t need the test if the type system eliminates the possibility of it ever happening. Java programmers aren’t testing memory safety constantly. Modula-3 programmers didn’t worry about array, stack, function type, or linker errors. Rust programs that compile with linear types don’t have dangling pointers. Eiffel and Rust are immune to data races. Yeah, go test those out faster than Eiffel programmers prevent them. ;) The annotation method in Design-by-Contract documents requirements in code, catches obvious errors at compile time quickly, supports static analyzers, and can be used to generate unit tests that provably test specs. As code piles up, the kinds of things you have to test go down vs dynamic stuff, where you have to hand-test about everything. Good annotations, which cover every case in a range, mean that large programs of 100,000+ lines simply can’t compile with the interface errors. Imagine… although you’ve probably done it… trying to code up tests representing every way a behemoth’s software might interact incorrectly, plus the runtime cost they impose after each change.
So, part of the argument is reducing maintenance by balancing QA requirements across static types, annotations, static analysis, and tests. Ensure each property holds by using whichever of them is easiest to do, maintain, and apply to maximum number of execution traces or states.
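The design-by-contract annotation method mentioned above can be sketched in a few lines of Python (a hypothetical minimal decorator, not Eiffel’s actual mechanism): the pre- and postconditions document the interface in code and double as executable checks, from which unit tests could also be generated.

```python
from functools import wraps

def contract(pre, post):
    """Hypothetical minimal design-by-contract decorator:
    `pre` checks the arguments, `post` checks the result."""
    def deco(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            assert pre(*args, **kwargs), f"precondition of {fn.__name__} violated"
            result = fn(*args, **kwargs)
            assert post(result), f"postcondition of {fn.__name__} violated"
            return result
        return wrapper
    return deco

@contract(pre=lambda xs: len(xs) > 0, post=lambda r: r >= 0)
def mean_abs(xs: list) -> float:
    # The contract documents that this needs a non-empty list and
    # always returns a non-negative number.
    return sum(abs(x) for x in xs) / len(xs)

print(mean_abs([-2.0, 2.0]))  # 2.0
```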
“I have doubts that a good type system will make good programmers out of bad ones, just as OOP and dynamic typed FP has not. Bad programs can be type checked.”
I’m sure the average Rust programmer will tell you a different story about how many memory errors they experience vs coding in C. They don’t test for them: it’s simply impossible to compile code that has them in safe mode. Same with SafeD and SafeHaskell. Common cases in Ada and the Wirth languages also have checks inserted automatically. If it’s common and should fail-safe, you shouldn’t have to think about it. Pointers, arrays, strings, function arguments, handles for OS resources, proper linking… these things come to mind. Strong, static typing lets you just use them correctly without thinking past the initial definitions. No need to even write tests for that.
“That’s a second, later implementation of the same ROOM methodology. ObjecTime is decades older, implemented in Smalltalk-80.”
I know. My point didn’t change. You supported what Smalltalk can do by using a 3rd party tool whose DSL (ROOM) uses static, strong types that drive the checking and code generation process. The tool supports my argument. Imagine if it let you assign to various variables interfaces, ports, states, and so on without checking or labeling them. I imagine correctness would involve more thought by developers than the constrained process I saw, which prevented classes of problems by design through variable types and enforced structuring. Did ObjecTime not have labels representing types, restrictions on what variables could contain, and similar structure to force sane expression of FSM’s, etc?
“The closest I can think of off the top of my head is ACL2”
I knew you’d go for that. I made the same mistake on Hacker News, where ACL2 users grilled me. They pointed out that, despite starting with LISP, ACL2 is a heavily constrained version of it because dynamic LISP couldn’t be verified so easily. Further, if you check the docs, you’ll see it has all kinds of static types that are used during development. Leave them off and it won’t do much for you at all. Hard to classify ACL2, but it’s more like a statically-typed LISP than a dynamic one.
“but that they can be at least as good of a choice in most cases. Very few software systems need this level of verification.”
I agree that they can be used and used well. I disagree on the last part, given the Github issues and CVE’s indicate the average app needs better built-in support for correct applications. The weak, static languages and dynamic ones seem to catch fewer problems in small and large apps. This results in much time wasted figuring out why, or residual defects because they don’t want to invest the time in figuring it out. So, even the average developer needs a good baseline in their language and tooling. Code away in the dynamic languages if you want, but strong, static ones catch more problems you’ll run into. If you’re disciplined and experienced, you might prevent the same ones with your strategies of testing and designing safer constructions. Evidence indicates most won’t go that far, though, due to less discipline or experience. Plus, different, average developers rolling their own way of doing basic checks instead of using built-in ones often leads to nightmares in debugging or integrations.
OCaml is a nice language. The problem with that anecdote is it compares a really good static typed language (OCaml) with a really bad dynamic language (Python).
[The study I cited] “Basically measures speed without quality.”
True, there are no clear citations of defects. But you are reading that differently than I am. My assumption is that if a language is cited as implementing a function point in N time units, then that implies the function point is correct as implemented in that time. What would be the point of stating a function point was implemented in N time units but full of bugs? I would give the study the benefit of the doubt but admit the gap. These kinds of studies are always full of questions, which is why I’m interested in any you have available.
“ACL2 is a heavily constrained version of it because dynamic LISP couldn’t be verified so easily.”
Well, right, because one approach is to add some static tools above a dynamic language. If the argument was Lisp or Smalltalk could statically check exactly as OCaml does then you’d be right. But if the argument is there are large categories of applications where simple dynamic languages can be augmented with certain practices and tools, including some optional static checking to great effect, then you’d not be right.
“You supported what Smalltalk can do by using a 3rd party tool whose DSL (ROOM) uses static, strong types that drive the checking and code generation process.”
You are continuing to refer to a specific, second implementation of ROOM that has nothing to do with the original implementation. The original implementation does not use that DSL. The implementation of ObjecTime is almost entirely in Smalltalk. The executable models consist of those built by the tool structurally (graphically) augmented by code written in “any supported” programming language… almost always C but also C++ or Smalltalk.
“I disagree on the last part given the Github issues and CVE’s indicate the average app needs better built-in support for correct applications.”
I for one certainly do not care a zip about the average GitHub project. And I would remain skeptical that OCaml or Haskell or Idris or whatever may come along in the next 10 years would do much of anything for the average GitHub project. A type checked mess is still a mess.
“Logical next step is redoing the same thing with modern Smalltalk and Ocaml (or something similar).”
I agree. As I wrote above, I like OCaml, and would certainly not be unhappy if it turned out to be the best choice for some project I participated in. If you read everything I’ve written under this headline article, I’ve never claimed the superiority of a dynamic language over a statically typed language. I’ve suggested the evidence does not clearly favor one over the other. Good simple languages appeal to me. I’ve used Smalltalk and Lisp a lot and know they fit the bill for the kinds of systems I’ve worked on for 35 years. I’ve done enough OCaml and other ML’s to suspect they largely would meet my needs. Haskell’s another story, personally. I used Haskell 98-era a good bit, but I’m just not interested in the last 10+ years of what’s been going on there.
I appreciate what you are saying about Ada, OCaml, etc. and the benefits of static typing. I am disagreeing with how you are positioning dynamic languages requiring more effort to test, etc. due to lack of type checking. I’ve just not experienced that, but I understand a lot of people have not had the same experiences I’ve had with good languages, tools, and practices. But neither am I convinced those same programmers will end up with significantly better programming effectiveness in a good static language. You think they will, I think they won’t. I’m not sure we’re going to resolve those differences, or need to.
I appreciate you clarifying your points. I think it’s time to tie this one up. Your comments, including the last one, have helped me plenty in understanding how we might better assess dynamic vs static typing in future studies. I’ll also lean more on Smalltalk and its professional practices in future research for comparisons. The proper comparison will be best practices of it vs something like Ocaml, plus considerations for what work static analysis, formal verification, etc. required and with what results.
Enjoy the rest of your day! :)
Dynamic languages do not have a compile stage.
This is a major pet peeve of mine; it really bugs me that anyone still believes there is any relation between a language’s type system and whether a language is compiled or interpreted. Can’t we be done with this tired misinformation?
Fine. It’s a terminology thing. I misspoke. Clearly statically checked languages have a stage where checks are made prior to running the program. That’s what I meant. I started programming in Lisp in the early 1980s and Smalltalk in 1986. I’m fully aware dynamic vs static has nothing to do with compiler or interpreter.
Some dynamic languages have very good tools that programmers in some static languages would envy.
Could you provide some examples?
One example, re: semi-formal tools + simple languages vs more complex languages and types… Smalltalk has the best “lint” tool I’ve seen. Combined with other tools and practices, this goes a long, inexpensive way toward keeping code clean and well designed (which then makes testing, refactoring, etc. much easier). The equivalent in a statically typed language would be a kind of “type advisor”, which I think would carry those languages a good way into the mainstream.
Thanks. I’m especially curious about refactoring. Are you aware of any video showing how to rename a function and add a parameter to a function signature in Smalltalk?
“Two Decades of the Refactoring Browser” https://youtu.be/39FAoxtmW_s
Dialyzer for Erlang also comes to mind. It catches a lot of bugs that many simplistic static type systems (like Java or Google Go) have no chance at.
I wish I thought of this title when writing my histories (and alternate histories) on either System/360 or UNIX/C. Specifically, the decision of many businesses to go for raw speed and low-level software (System/360) over safety, high-level software, and maintainability (Burroughs B5000) has had tragic, long-term consequences. Something similar happened in how C vs Modula2/Oberon were handled on similarly bad hardware. Good title for either of these that someone beat me to.
Wait, I got one. I might do 11/22/63, where a guy goes back in time to explain the software crisis before people buy IBM’s systems. Stays in the past to wait for the 70’s to come along so he can explain the concept of hacking to two people who can prevent a lot of it. Returns with just enough knowledge to become the only CompSci professor at his school hired without any credentials.
Poor Burroughs. Harris Bank and some other folks, if memory serves, saw the light.
Then again, nobody ever got fired for buying IBM.
Unisys, formerly Burroughs, brings in billions a year. Quite a few saw the light. Just wasn’t bright enough for the rest. ;)
Can you link to one of these histories? I’m pretty curious about the B5000 and that era in computing - The Dream Machine and Alan Kay’s allusions to it don’t do much to put it in its historical context.
I don’t know that I have a link to a full history. I do have a great one for the architecture. It’s important to remember that, in this period, computers were giant, single-CPU number crunchers. They were coded in assembly, Fortran, or stuff like that. It was years before Margaret Hamilton, Dijkstra, and others invented formal software engineering. Virtually nobody was thinking about readable, safe, easily extended, and easy-to-maintain programs. Except one team at one company that was simply too far ahead of their time, given the hardware constraints and the mentality of the era.
Architecture of Burroughs B5000: http://www.smecc.org/The%20Architecture%20%20of%20the%20Burroughs%20B-5000.htm
Wikipedia page, esp unique features and influence sections, has a lot of good info: https://en.wikipedia.org/wiki/Burroughs_B5000
Vision document from the brilliant Bob Barton conceiving high-level machines, safe languages, software engineering, and all sorts of stuff in one PDF: http://worrydream.com/refs/Barton%20-%20A%20New%20Approach%20to%20the%20Functional%20Design%20of%20a%20New%20Computer.pdf
If you’re into INFOSEC, esp. the kind that works, then you might also like the interview below with one of the field’s inventors, Roger Schell. It’s a long interview but totally worth it. Military, politicians, companies… all kinds of people fought against the idea of protecting computers, with him and his people doing everything up to embezzlement to fund the secure prototypes. Interestingly, I learned in it that one Burroughs guy (Anderson) invented INFOSEC and another Burroughs guy added to Intel’s 286/386 what little security it had. The principles started with Barton, as far as I can tell. So, the beginning of business mainframes, high-level OS’s, HW/SW combos for safety, and the INFOSEC field (esp. high-assurance security) all trace back to Burroughs’s B5000 work and people.
http://conservancy.umn.edu/bitstream/handle/11299/133439/oh405rrs.pdf
This is non-constructive… Which language do you think Zig overlaps with? The closest that comes to mind would be Rust, but Zig definitely seems intended to be lower level than Rust, and C compatibility seems to be one of the drivers of the project (unlike Rust, where many C concepts don’t translate as well and require a bit more glue code; that’s totally fine, but not as convenient).
I’m with you here. I can totally understand the frustration with new programming languages coming out every day, but really, WHY? Those that attract an audience will prosper, and those that don’t will barely make a ripple in the overall computing pond.
Why fling poo? If you think it’s pointless, just ignore it and let it die unloved.
Maybe it’s non-constructive, but I can’t really see anything actually interesting or unique about Zig. It just seems to be adding syntax to things … because.
And maybe that’s more just that the front page is not terribly informative on how the language works, and instead just throws you in the deep end, but it really just looks like someone said “you know what, I think C needs to look more like Ruby, and totally incomprehensible to everyone else”
It could be said that most languages are just adding syntax to things that could boil down to more C boilerplate, especially systems programming languages. I see Zig as a nice attempt to modernize C: sane error handling, Maybe instead of null, generics, explicit integer overflow wrapping, a sane preprocessor instead of #ifdef/#endif hell, instead of C header “modules” that just end up exploding compile times, etc.
I don’t see where you get the Ruby feeling from this. Zig doesn’t look like implicit magic all over the place; rather, it removes some of the magic/undefined behaviour seen in C.
I think you’re conflating Ruby and Rails, I feel like the language on its own doesn’t have that much magic involved, but a lot of the community, and any Rails project, has a whole lot of magic in certain things.
Where I see the similarities with Ruby are mainly that Zig uses a pipe character for seemingly indecipherable things to people who are new to the language, and there seems to be some magic in when you use the percent symbol. In general, based on the code samples, it also looks like it’s a big fan of just throwing special characters into lots of places to create syntax, like Ruby.
If people followed your advice, there would only be one programming language, which would probably be LISP, which would then contradict your advice because LISP is already perfect and cannot be improved.
The point of this language is to enable incremental migrations from C to a language that is significantly safer than C, yet comparably performant. It seems to me that there are no still-living languages that are as easy to migrate incrementally without either carting over all the sharp edges and undefined behaviors of C or changing the performance profile of your program substantially.
It sounds neat. But I don’t think I fully understand the use case which provided motivation for this.
Is the working tree based on a public commit (already pushed to a published repository)? Which the draft commit series should now be “rebased” on top of?
Where would the local changes in the working tree typically come from? And are these changes conceptually based on the top of the stack of draft commits, or are they based on a public commit? Perhaps that does not matter?
Could you provide an example?
Is the working tree based on a public commit (already pushed to a published repository)?
No. If the working directory is on a public commit, this command is a no-op.
Which the draft commit series should now be “rebased” on top of?
This command does no rebasing at all, not in the sense of “changing the base”. (Git uses “rebase” to refer to all commit rewriting. This command only rewrites, doesn’t rebase.)
Where would the local changes in the working tree typically come from?
From the working directory. You’ve modified your files but the commits you really want to amend are back in your draft history.
Could you provide an example?
You have P -> X -> Y -> Z and you check out Z. Here P is a public base commit, while X, Y, and Z are drafts based on P. X modified file x, Y modified file y, and Z modified file z. In your working directory you make further modifications to the same lines as modified in files x, y, and z. When you run this new hg command, the changes in your working directory will be distributed to commit X for file x, to commit Y for file y, and to commit Z for file z.
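To make the attribution idea concrete, here is a tiny Python sketch. This is not Mercurial’s actual implementation (the real command works at the level of annotated lines, not whole files, and the names here just mirror the P -> X -> Y -> Z example above); it only illustrates the core routing step: each working-directory change goes to the most recent draft commit that touched the same file.

```python
# Draft history from the example: each draft commit and the files it modified.
# (Hypothetical data; real absorb inspects hunks via annotate, not file names.)
draft_commits = [
    ("X", {"x"}),
    ("Y", {"y"}),
    ("Z", {"z"}),
]

def distribute(working_changes):
    """Route each changed file to the newest draft commit that touched it.

    Returns a dict mapping commit name -> list of files to fold into that
    commit.  Changes no draft commit owns map to None (left in the working
    directory, as absorb leaves unattributable hunks behind).
    """
    amendments = {}
    for path in sorted(working_changes):
        # Walk drafts newest-first so the most recent owner wins.
        for name, files in reversed(draft_commits):
            if path in files:
                amendments.setdefault(name, []).append(path)
                break
        else:
            amendments.setdefault(None, []).append(path)
    return amendments

# Modifying files x, y, and z sends each change back to X, Y, and Z.
print(distribute({"x", "y", "z"}))
```

The interesting property, as described above, is that no base changes: each draft commit is rewritten in place with its share of the working-directory changes folded in.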
Your description makes me think of history (obviously) and absorbing (for some reason).
Maybe hg absorb or hg soak?
Does this have to be a new subcommand, or could it be an option for ‘hg commit’?
Since “hg commit --amend” will modify commit Z, having a UI tweak which modifies X, Y, and Z might make sense. Also, ‘draft’ is very clearly defined here: https://selenic.com/hg/help/phases
So perhaps something like “hg commit --amend-drafts” or “hg commit --amend --draft” would do?
It can be either. But generally we don’t like a ton of options for the same command. Ideally, hg help $command should fit in a screenful, and git checkout doing four different things depending on flags is our go-to example of worst case scenario. We already have hg amend as a proposed deprecation path for hg commit --amend (deprecation in hg just means removing it from the docs; all functionality stays forever).
If you had to pick a single verb or short phrase to describe this command, what you would you pick?
Then how about hg amend-drafts. The fact that the command allows you to amend multiple draft-phase changesets is all in the name.
hg amend-drafts
   ^     ^    ^
   |     |    |
   |     |    multiple
   |     draft-phase changesets
   amend
And while not a single word, I find it more descriptive than hg smartfixup (Smart? Fix up what?) or something like hg absorb (Absorb? What into where?).
I also like the hg revise suggestion by stsp.
It’s not easy. Some things that come to mind (not sure if they’re any good):
hg revise
hg refine
hg rewrite
hg patchup
hg refit
hg gulp
I would also consider ‘hg absorb’ as suggested by trousers.
Maybe “decompose”? This doesn’t give the user any hints about what is being decomposed or where the decomposed pieces are going, for discoverability, but once you know the command, it’s memorable. I’m thinking of “problem decomposition” and hoping that connotation dominates “biological decomposition” :)
Ok, so it’s about distributing this commit over previous draft commits, or putting this commit into the points in history where it fits.
How about hg retcon? That sounds like the closest analogy to what you want to do - pretend that these changes were always at the appropriate point in draft history where they should have been.
Whoa, I did not know lambdas and partial functions had been added to Vimscript. Async process support and first-class functions as callback might make writing plugins significantly less painful - do the heavy lifting in the async subprocess, and handle the asynchrony using a mechanism that many people are well-versed in making effective thanks to JavaScript.
This is really nice - I’m looking forward to the maturation of Tokio (https://github.com/tokio-rs/tokio), a Finagle-like network programming library built on top of this future abstraction.
If you’re interested in learning what programming with this kind of abstraction is like, the writings of Marius Eriksen (author of Finagle) about it are really interesting: check out RPC Redux, Your Server as a Function, and Systems Programming at Twitter.
What does success look like for CopperheadOS? Is it being more secure than Android? The most secure smartphone platform? Responding to vulnerability reports faster than other mobile platforms? Harder to find vulnerabilities in than any other mobile platform?
I don’t know if you can bolt security principles on after the fact and end up in a better place than e.g. iOS, which has been aggressively security-focused from the start.
I find it tough to imagine how a small team using the same foundational technologies as commodity consumer computing is going to achieve a breakthrough in trustworthiness or reliability.
I really wish this paper talked a little bit more about the tradeoffs of one vs the other. How does the OCaml implementation compare on memory usage, GC pause time / latency, throughput, or other important factors? What kind of bugs do you find when you fuzz both libraries?
So, instead of just collectively grumping about Uncle Bob, let’s pick out a piece of his article that’s actually worth discussing.
What do you think it would mean to be “professional” in software engineering?
What do you think we have to do to achieve (1)?
Do you agree that limiting our tools to reduce churn is a good approach? Why or why not?
I think choosing not to make life difficult for those who come after us is a professional trait. That may include sticking to a reduced, but standardized, tool set.
After the development phase, software projects often go into maintenance mode, where a rotating cast of temp contractors is brought in to make necessary tweaks. The time you save by building a gloriously elegant automaton must be weighed against the cumulative time all of them must spend deciphering how the system works.
[Comment removed by author]
It pains me to say this but regulation.
I think not just regulation, but effectively implemented regulation.
I’ve worked in several regulated industries or on systems where there are industry standards like PCI that need to be followed/applied and things are really no better. In fact, regulations can sometimes cause more problems - the rigorous testing/validation requirements mean that once a system is productive, it’s not patched because of the onerous testing requirement (testing that should, ideally, be automated but just isn’t in most organisations).
Yes, that comes down to organisational practices, but we really should be in a better place in 2016. Sarbanes-Oxley has helped in a lot of areas with things like segregation of duties, proper record keeping, etc., but it’s only a drop in the ocean.
Do you agree that limiting our tools to reduce churn is a good approach? Why or why not?
All other things equal, yes. Maciej Ceglowski [0]:
I believe that relying on very basic and well-understood technologies at the architectural level forces you to save all your cleverness and new ideas for the actual app, where it can make a difference to users.
I think many developers (myself included) are easily seduced by new technology and are willing to burn a lot of time rigging it together just for the joy of tinkering. So nowadays we see a lot of fairly uninteresting web apps with very technically sweet implementations. In designing Pinboard, I tried to steer clear of this temptation by picking very familiar, vanilla tools wherever possible so I would have no excuse for architectural wank.
I complain about frontend engineers and their magpie tendencies, but backend engineers have the same affliction, and its name is Architectural Wank. This theme of brutally limiting your solution space for non-core problems is elaborated on further in “Choose Boring Technology” [1]:
Let’s say every company gets about three innovation tokens. You can spend these however you want, but the supply is fixed for a long while. You might get a few more after you achieve a certain level of stability and maturity, but the general tendency is to overestimate the contents of your wallet. Clearly this model is approximate, but I think it helps.
If you choose to write your website in NodeJS, you just spent one of your innovation tokens. If you choose to use MongoDB, you just spent one of your innovation tokens. If you choose to use service discovery tech that’s existed for a year or less, you just spent one of your innovation tokens. If you choose to write your own database, oh god, you’re in trouble.
“All other things equal” is one hell of a caveat, though :)
I’m a huge fan of the healthy skepticism both Dan McKinley and Maciej exhibit when it comes to technology decisions. When something passes the high bar for making a technology change, though, make that change! Inertia is not a strategy.
2.a: Take diversity seriously. Don’t act like raging testosterone poisoned homophobic ethnophobic nits just because we’ve been able to get away with it in the past.
2.b: Work to cleanly separate requirements and the best tools to satisfy them in the least amount of time from our desire to play with new toys all the time.
2.c: Stop putting $OTHER language down all the time because we see it as old/lame/too much boilerplate/badly designed. If people are doing real useful work in it, clearly it has value. Full stop.
Those would be a good start.
3: See 2.b - I think saying “Let’s limit our tools” is too broad a statement to be useful. Let’s work to keep our passions within due bounds and try to make cold hard clinical decisions about the tools we use and recognize that if we want to run off and play with FORTH because it’s back in vogue, that’s totally cool (there’s all kinds of evidence that this is a good thing for programmers in any number of ways) but that perhaps writing the next big project at work in it is a mistake.
What do you think it would mean to be “professional” in software engineering?
Our stupid divisions and churn come partly from employers and partly from our own crab mentality as engineers.
They come from employers insofar as most people in hiring positions have no idea how to hire engineers nor a good sense of how easily we can pick up new technologies, so they force us into tribal boxes like “data scientist” and “Java programmer”. They force us into identifying with technological choices that ought to be far more prosaic (“I’m a Spaces programmer; fuck all you Tabs mouth-breathers”). This is amplified by our own tribalism as well as our desire to escape our low status in the corporate world coupled with a complete inability to pull it off– that is, crab mentality.
What do you think we have to do to achieve (1)?
I’ve written at length on this and I don’t think my opinions are secret. :)
Do you agree that limiting our tools to reduce churn is a good approach? Why or why not?
I’m getting tired of the faddishness of the industry, but I don’t think that trashing all new ideas just because they’re “churn” is a good idea either. New ideas that are genuinely better should replace the old ones. The problem is that our industry is full of low-skill young programmers and technologies/management styles designed around their limitations, and it’s producing a lot of churn that isn’t progress but just new random junk to learn that really doesn’t add new capabilities for a serious programmer.
I’m getting tired of the faddishness of the industry, but I don’t think that trashing all new ideas just because they’re “churn” is a good idea either
I agree, I completely agree. I absolutely understand that it is foolish to adopt new tech before it has developed good tooling (and developed, as someone pointed out in a comments section somewhere, a robust bevy of answers on Stack Overflow). You’re just making your developers’ lives harder. Still, trashing new ideas is also silly, for a very good reason.
I think that the argument ignores genuine advances in technology. In the article, Java is likened to a screwdriver. Sure, throwing away a screwdriver for a hammer is nonsensical tribalism, but throwing away a screwdriver for a power drill isn’t. There will be times when I want to explicitly write to buffers – I’ll use C or C++ as needed. But why would I otherwise pick a language that segfaults, when advances in language design and compiler theory have yielded Rust, which may well do the same thing*?
It might cost more in the short term to tear down the wooden bridge and build a concrete bridge. Heck it might cost more in the long term to do so, if concrete is more expensive to maintain (I acknowledge my analogy is getting a tad overwrought.) But aren’t better guarantees about the software you produce worth it?
For the record, I’m not trying to speak as a Rust evangelist here – it’s just a topic I know about that fits the argument. It’s new, it’s still developing its tooling, but it clearly represents progress in programming language theory.
For another example, imagine if the people in the argument used vim. Vim is robust and powerful, but many people consider it a poor choice of tool for Java development. How would I convince this person to switch from vim to IntelliJ? Isn’t IntelliJ just another example of churn? It’s a new shiny tool, right? Thoughtful consideration of new stuff is required to distinguish between “churn” and “hey, maybe we can move on from the dark ages.”
I don’t want to be accused of talking past the author. I think that the author would agree with an underlying point – that whichever language, IDE, framework you choose, you should choose with a good understanding of what your tool can do, and what the alternatives are.
*I mean, it might not do the same thing – you might want blazing speed or something else that C provides that Rust does not yet. So, yeah, choose your tools wisely.
Does anybody know of any programming languages or environments that support the sort of interactive game development Hague is advocating? I had seen Bret Victor’s talk before, and I’m not aware of anything that supports that kind of live tweaking.
Exactly as he has shown, I don’t know, but Lisp and Smalltalk environments have supported hot-code reloading for decades.
There is one that is used in production that I know of - ClojureScript’s Figwheel. Here’s an early demo of the author building a Flappy Bird clone in it. It’s since matured tremendously, and now this is the way everyone I know works with ClojureScript applications.
Elm’s Reactor has similar promise, though I’m less familiar with it, and this mailing list message from Evan makes it sound like it’s currently in a non-functional state and it will take some work to restore it. Playing with the live-reloading Elm tutorials on the Elm site is great fun, though.
Look into Qubes if you’re feeling truly paranoid.
It only runs on backdoored platforms that get a lot of scrutiny from black hats, too. Last I checked, also on a virtualization platform not built for security. So, one must consider who has the backdoors and rate of bugs found in threat analysis. Although paranoids don’t trust computers at all, the safest option is still probably Leon3’s GPL edition since all the hardware was open. One could get silicon, FOSS drivers, etc. Just need one, big fundraiser. :)
Note: Alternatively, a recent design like Rocket (RISC-V) or OpenPITON. I like highlighting Leon3 since it’s just been sitting there for years, waiting for FOSS to use it, without any action.