I believe “Easiness” is missing. It probably belongs before or after Simplicity. I don’t know Pony.
The distinction, by example: Lua is a simple language, with very few mechanisms (everything is a table). Python is not simple; there are lots of corner cases in the language. However, it is easy: many people describe it as the language closest to pseudocode.
I think easiness was part of simplicity. In the Richard Gabriel essay, the Worse is Better solution that favored simplicity might use an easy construct that sort of worked now but caused problems later. It should definitely be a measurement, though. One thing I’ll add that people overlook is that your background and thinking style make some things easy for you that aren’t for other people. People from a mathematics background might grok functional programming while imperative programmers have a hard time with it. That was a common one with a general form, too: anything you are learning that’s really different from your main approach might be hard. If we followed that reasoning, we’d have tossed out both functional and OOP once imperative dominated the world.
So, we have to make sure we consider that when evaluating ease of learning and use. Easy for whom, and with what prior skills, I’ll say.
“People from a mathematics background might grok functional programming while imperative programmers have a hard time with it.”
Except the people who developed Fortran had a very strong mathematical background. Personally, I think the “mathematics” of e.g. Haskell is sloppy and poorly grounded. But each to their taste.
You got me there. Excellent catch. It also helped that it was higher-level and closer to how mathematicians think than something like C. It had fewer dark corners that hurt optimization of numerical algorithms, too. No wonder it lasted in HPC with occasional updates.
You sound like Rich Hickey
I like a lot of what I see in Pony. Unfortunately I think the “incorrectness is simply not allowed” philosophy does not have good “adoption qualities”. I think there is some fundamental law where if you disallow incorrect behavior, you also disallow “creativity” and thus unanticipated evolution.
Or you have to be omniscient as a language designer and anticipate every single use case. Nobody is omniscient, so that’s why C and JavaScript succeeded over their stricter peers. They have all sorts of weird places where you can be “creative” (preprocessor, eval, feature detection, etc.). Pascal would be the opposite – it’s rather strict, but those helpful qualities led to its gradual disuse.
It seems that a guarantee of no data races is one of the main things that distinguishes Pony from Go. (What else is there?)
It’s very interesting that I never see Go users complain about race conditions that aren’t caught. Instead I hear them complain about verbose error handling, satisfying the unused imports check, package versioning, and generics.
I don’t really know what to take away from that. Maybe the informal convention of sharing by passing over channels is good enough. I know Go has a race detector – do people use it?
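For concreteness, here is a minimal toy sketch (my own illustration, not from the thread) of the kind of race the `-race` flag catches, alongside the share-by-communicating version:

```go
package main

import (
	"fmt"
	"sync"
)

func main() {
	// Racy version: two goroutines do unsynchronized read-modify-write on a
	// shared counter. `go run -race` reports this as a data race.
	counter := 0
	var wg sync.WaitGroup
	for i := 0; i < 2; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for j := 0; j < 1000; j++ {
				counter++ // unsynchronized shared write
			}
		}()
	}
	wg.Wait()
	fmt.Println("racy total:", counter)

	// "Share by communicating": one goroutine owns the total and receives
	// updates over a channel, so there is no shared mutable state to race on.
	updates := make(chan int)
	done := make(chan int)
	go func() {
		total := 0
		for n := range updates {
			total += n
		}
		done <- total
	}()
	var senders sync.WaitGroup
	for i := 0; i < 2; i++ {
		senders.Add(1)
		go func() {
			defer senders.Done()
			for j := 0; j < 1000; j++ {
				updates <- 1
			}
		}()
	}
	senders.Wait()
	close(updates)
	fmt.Println("channel total:", <-done)
}
```

Running the first half under `go run -race` (or `go test -race`) prints a data race report with the goroutine stacks; the channel version has no shared mutable state for it to report on.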
“I think there is some fundamental law where if you disallow incorrect behavior, you also disallow “creativity” and thus unanticipated evolution.”
Be careful to differentiate between what ideas/solutions can be built in a language and what methods it offers for expressing them. Any 3GL can build about anything. If you want self-modifying code and such, naturally languages with macros or interpreters get an advantage there. However, you can reduce all that other stuff down to simple parts. There’s nothing stopping your creativity in terms of the end result. Whereas, a language leaning toward correctness simply means it will help you do exactly what you’re thinking without the language features hurting you, or (especially with typing, safety, and proofs) with the language features actively trying to help you.
“Nobody is omniscient, so that’s why C and JavaScript succeeded over their stricter peers.”
I’ve looked hard into C, while I can’t remember as much on JavaScript. C was a product of chance and tinkering, not even design that I can tell, trying to squeeze barely a language onto three terrible machines. Most of its key traits are due to Richards’ BCPL, with structs the main differentiator. The tech built on C, UNIX, spread like wildfire. People everywhere had C because they had UNIX (or shitty hardware). It was economic and social factors, not technical ones, and the economic ones don’t exist anymore, even for embedded.
Whereas, if you want technical, Wirth’s Pascal-P project created a stack machine as a compiler target to allow easy porting to about any architecture. The whole toolchain targeted that machine, so devs didn’t need to redo the whole compiler or standard library for their ports. Write an interpreter or a tiny compiler for it and you were in business. Result: ported to over 70 architectures in 2 years, mostly by amateurs, per the paper I read on it. Wirth’s combination of type safety, memory safety, and speed has lasted, minus the simplicity: the current descendants are all bloated. On the functional side, PreScheme had the low-level efficiency of C, productivity closer to LISP with macros and functional style, and got mathematically verified for correctness in VLISP. C and the Wirth languages still aren’t that productive, and full verification of a C subset only happened decades later because it was so cruddy and low-level. LISP’s design also allowed the Common LISPs to add about every paradigm that followed, sometimes as easily as including a library.
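To picture the approach, here is a toy sketch (hypothetical opcodes, nothing like real P-code) of how small the per-architecture piece gets when the whole toolchain targets one stack machine:

```go
package main

import "fmt"

// A toy illustration of the Pascal-P idea: the compiler emits code for a tiny
// stack machine, so porting the toolchain to a new architecture only means
// reimplementing an interpreter like this one. Opcodes here are made up.
type op struct {
	code string
	arg  int
}

func run(prog []op) int {
	var stack []int
	push := func(v int) { stack = append(stack, v) }
	pop := func() int {
		v := stack[len(stack)-1]
		stack = stack[:len(stack)-1]
		return v
	}
	for _, o := range prog {
		switch o.code {
		case "push":
			push(o.arg)
		case "add":
			push(pop() + pop())
		case "mul":
			push(pop() * pop())
		}
	}
	return pop()
}

func main() {
	// (2 + 3) * 4
	fmt.Println(run([]op{{"push", 2}, {"push", 3}, {"add", 0}, {"push", 4}, {"mul", 0}}))
}
```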
Far as JavaScript goes, it had Worse is Better effects all over it. It offered useful features that were just good enough, similarly latching onto something else entirely (Web browsers) that was gaining huge momentum. Once again, just good enough plus social and economic factors, especially the OS and browser vendors fighting too much to come up with good, native solutions. The same thing happened with Java, with money behind it. Compared to them, the Juice project looked way better for my 28Kbps line and Pentium II. It just didn’t have lots of money, politics, and so on behind it.
https://github.com/Spirit-of-Oberon/Juice/blob/master/Juice.pdf
Far as Pascal goes, modern languages are more like Pascal than C, although they often retained C-like syntax to ease the transition. They bloated the concept to death, too, as is typical. So, I don’t think strictness is what led to its disuse, unless it again was a product of social factors and what people were into at the time. By the time I ran into it, people were telling me C was better as an article of faith. They didn’t tell me Pascal was immune to many of the problems I ran into coding C. Since we had it on hand, I switched to industrial BASIC for rapid prototyping of safe code and almost never needed C.
“They have all sorts of weird places where you can be “creative” (preprocessor, eval, feature detection, etc.).”
This could be useful. I’m a fan of DSLs and macros. Yet, we’ve seen the raw power of that abused to no end. Limiting macros to only the most necessary stuff seems just fine.
“It’s very interesting that I never see Go users complain about race conditions that aren’t caught”
Rob Pike worked with Limbo, which used CSP. So, it doesn’t surprise me if it has some mitigation in there. Whereas Eiffel’s SCOOP was probably the most deployed model for race-free concurrency before Rust got popular. Ravenscar was in Ada, but it isn’t simple. I think simplicity is your point here. SCOOP doesn’t get enough attention given the number of CompSci works that improve it.
“C was a product of chance and tinkering, not even design that I can tell.”
C is definitely influenced by Ritchie’s work with Albert Meyer on recursive function theory – although he didn’t seem to agree. The mathematical sophistication of Ritchie, Thompson, and others at Bell Labs is generally underappreciated, especially by people who didn’t learn what AWK stands for.
“Rob Pike worked with Limbo, which used CSP. So, it doesn’t surprise me if it has some mitigation in there.”
Occam was the most CSP-like language. I remember people using it for robotics, or trying to, saying it was the most advanced technology ever developed for producing deadlocks. In practice, blocking message transfer scales really poorly.
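As a toy illustration of how easily synchronous transfer deadlocks – Go’s unbuffered channels are roughly the same rendezvous style – here is a contrived sketch (mine, not Occam):

```go
package main

import "fmt"

// Two "processes" that each try to send to the other before receiving.
// With unbuffered (rendezvous-style) channels, both sends block forever and
// the Go runtime aborts with "all goroutines are asleep - deadlock!".
func main() {
	a := make(chan int) // unbuffered: a send blocks until a receiver is ready
	b := make(chan int)

	go func() {
		a <- 1 // blocks waiting for main to receive...
		fmt.Println(<-b)
	}()

	b <- 2 // ...while main blocks here waiting for the goroutine
	fmt.Println(<-a)
}
```

Buffering or reordering the sends fixes this toy case, but it shows why purely blocking transfer earned that reputation.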
“The mathematical sophistication of Ritchie, Thompson, and others at Bell Labs is generally underappreciated.”
Or we don’t bring it up because it’s not seen in the “design” of C at all. I have the early papers from Richards, Thompson, and others describing the history of C. It was clearly mostly from BCPL… Richards et al. invented “programmer is in control” plus a small language… then came a series of tweaks to make it work on terrible hardware. The “proto-C’s” still couldn’t manage writing UNIX until they added structs. So, no, we don’t look at C and appreciate the mathematical background they had. We instead wish they had better hardware, or that Thompson had never discovered BCPL. Maybe we’d have a more efficient version of Wirth’s stuff.
“Occam was the most CSP-like language.”
Definitely. It had issues. Concurrent Pascal was more interesting given that Hansen wrote actual OSes with it. Simula was the most influential of the languages coming from the process or event models: it initially led to OOP, then to race-free concurrency with Eiffel’s SCOOP. Far as Occam goes, you might find this OS project interesting:
http://rmox.net/
FYI https://people.csail.mit.edu/meyer/meyer-ritchie.pdf
Go most definitely has data races. They have some wonderful tooling to detect them at runtime though.
Yeah that is what I’m getting at. If you can push some complexity into the tools, that can be a good thing. It seems like that works in the case of race conditions in Go.
I’ve used LLVM-based race detectors and they are very good. I think Go essentially uses the same technology.
It’s sort of like the opposite of the Haskell philosophy, where the compiler is very advanced and featureful, but the tools are sloppy.
Jon Blow was arguing for a similar thing in his Jai language. Instead of having Rust flag ownership errors at compile time, just have debug tracing in the allocator on all the time, so you get the line number of a double-free when you hit it at runtime.
It’s dynamic analysis vs. static analysis. Some things are easy with dynamic analysis and open research problems with static analysis.
“Instead of having Rust flag ownership errors at compile time, just have debug tracing in the allocator on all the time, so you get the line number of a double-free when you hit it at runtime.”
The whole point of C, C++, and Rust is as little overhead as possible at runtime. The borrow checker accomplishes that to a greater degree. The tracing might be easier, but it might impact performance or add room for the compiler to introduce problems. So, if you’re optimizing for correctness, Rust’s approach gives you that without a runtime cost. You don’t need to learn theorem provers or separation logic, either. Greatly improved usability. :)
“It’s dynamic analysis vs. static analysis.”
I think they’re complementary. It’s still an open question exactly which is better at catching what, since so many methods get invented. My compromise is to design the software to be as static as possible, to benefit from static analysis as much as possible. This will knock out all kinds of key errors. Then, have a tool instrument that same software for dynamic analysis (especially hard-to-track bugs) and throw a pile of manually- and auto-generated tests at it to see what happens. Good chance it finds something static analysis missed.
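In Go terms, a hedged sketch of that workflow could be: `go vet ./...` for the static pass, `go test -race` for an instrumented run, and `go test -fuzz=FuzzParsePair` for auto-generated inputs on top of the hand-written case. `ParsePair` and the test names below are hypothetical, just for illustration:

```go
// parse_test.go – a sketch of combining hand-written and auto-generated tests.
package parse

import (
	"strings"
	"testing"
)

// ParsePair splits "key=value"; defined here so the sketch is self-contained.
func ParsePair(s string) (string, string, bool) {
	i := strings.IndexByte(s, '=')
	if i < 0 {
		return "", "", false
	}
	return s[:i], s[i+1:], true
}

// Manually written case.
func TestParsePair(t *testing.T) {
	k, v, ok := ParsePair("a=b")
	if !ok || k != "a" || v != "b" {
		t.Fatalf("got %q %q %v", k, v, ok)
	}
}

// Auto-generated cases: the fuzzer mutates the seed inputs looking for panics
// or property violations the hand-written test missed.
func FuzzParsePair(f *testing.F) {
	f.Add("a=b")
	f.Add("no-equals")
	f.Fuzz(func(t *testing.T, s string) {
		k, v, ok := ParsePair(s)
		if ok && k+"="+v != s {
			t.Errorf("round trip failed for %q", s)
		}
	})
}
```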
I misstated slightly: he was advocating that debug builds have the allocator debug diagnostics turned on. This is already possible in C and C++ but most projects don’t do it, whether industrial or open source.
I’m not necessarily saying Rust is bad, although certainly lots of people have agreed that “Rust skipped leg day”. Huge parts of the compiler are devoted to static analysis of a few things, while ignoring other correctness issues.
I’m basically saying there is a tradeoff. To me the Go design for race conditions seems to indicate that you can do alright with dynamic analysis, and then you have some more language real estate and compiler complexity budget to deal with other things.
Or you trade it for compiler speed, which is very important. Honestly, I think you can probably write a paper on an empirical study of compiler speed vs. software quality. Running your program more often is good for quality.
“Or you trade it for compiler speed, which is very important. Honestly, I think you can probably write a paper on an empirical study of compiler speed vs. software quality. Running your program more often is good for quality.”
There definitely is a benefit. It’s hard to say how much. The main concepts here are exploration of the design and the mental flow that lets you operate at your peak. Slow compile times can reduce both. So, being able to turn off the safety checks, or to use a dynamic model, can help there. Now, which problems that will catch vs. strong typing is harder to say. It might also lead you to waste time building and building on something that can never make it through Rust-style analysis once it’s turned back on. As in, you throw away a lot of it anyway.
So, those are the tradeoffs that come to mind.
Interesting tradeoffs of their approach compared to Richard Gabriel’s false, but useful, dilemma. Despite the name, I tagged it practices since it’s about how to do software development. The title was changed so Richard Gabriel fans won’t miss it thinking it’s just a language push.
Submission guidelines:
Most of it was a description relevant to the title. That’s typical practice. The “interesting tradeoffs” sentence fit what you said, though. So, I transferred the whole thing to a comment.
What is the difference between simplicity and consistency? For example, where does Pony sacrifice consistency for simplicity?
@SeanTAllen might be able to tell you.