I strongly agree with this list, though I’d note that Kotlin is doing a really good job bringing literally all of these except value types to the JVM, in a fairly lightweight syntax to boot. (Properly doing value types would require changes to the JVM, though Kotlin immutable data classes by default behave similarly to value types, which can be useful in many situations.)
That said, this post inadvertently reminded me of how few people actually get what C# async and await do. If you look at the sample output in that example, the astute reader will realize that nothing happens in the asynchronous task until .Wait() is called. That’s because Main() is not itself an asynchronous method, and the author never set up a task runner, so this program actually runs entirely in a single thread. I suspect what the author wanted to do would be closer to something like
TaskAwaiter<int> david = Task.Run(ThinkAboutIt).GetAwaiter();
...
int result = david.GetResult();
which would actually have David read in the background.
That said, this post inadvertently reminded me of how few people actually get what C# async and await do. If you look at the sample output in that example, the astute reader will realize that nothing happens in the asynchronous task until .Wait() is called. That’s because Main() is not itself an asynchronous method, and the author never set up a task runner, so this program actually runs entirely in a single thread.
This is largely inaccurate. ThinkAboutIt doesn’t run when Wait is called, it will immediately begin running on Main’s thread until it hits the first await (the one inside ReadTheManual), at which point control returns to Main, and the rest of the logical thread of control will be on a background thread (not on the initial thread).
You can verify this yourself by inserting a few prints of Thread.CurrentThread.ManagedThreadId.
Huh, I apologize; you’re correct. That said, I’m now very confused. When async/await was first introduced, I remember Microsoft screaming about how you needed to be in an event loop for it to work properly, and I remember having to lean on third-party runloops (e.g. Nito.AsyncEx) to get the behavior that’s going on here. (In fact, IIRC, the behavior of await was to return immediately to the parent thread—which, since it was called directly from Main here, would be the OS, which resulted in the program terminating.)
Did the implementation change, or is there now an implicit run-loop, or maybe I’m conflating this behavior with having Main itself be async (which I think they’re going to allow in the next C# revision anyway)? Any idea what I’m remembering here?
Maybe the very early CTPs did that? The F# version uses ‘cold’ tasks but C# has been hot since I’ve been using it…
To my knowledge, await has always run its target on the current thread until an actually asynchronous operation takes place. This could be something like waiting for a thread to complete, or async IO to finish.
If you have an async method that skips doing any real async work (maybe you have a cache or something?), you don’t want to have to launch a new thread for that method.
“I don’t know how many of you have ever met Dijkstra, but you probably know that arrogance in computer science is measured in nano-Dijkstras.”
– Alan Kay
I maintain this somewhat popular list, but I don’t think I will include this, because executing the Turing machine requires manual clicking.
Perhaps it can get an honorable mention :)
You include the C Preprocessor, which must be run in a loop to be Turing Complete. That seems like a similar condition to me.
A similar signal that was often confused with FRB: https://en.wikipedia.org/wiki/Peryton_(astronomy)
In 2015, perytons were found to be the result of premature opening of microwave oven doors at the Parkes Observatory. The microwave oven releases a frequency-swept radio pulse that mimics an FRB as the magnetron turns off.
This is 20% good advice (e.g. ‘An hour will never occur twice in a single day’) and 80% useless ‘fun facts’ (e.g. ‘The month Pi Kogi Enavot in the Coptic calendar only has 5 or 6 days in it’).
I wish these kinds of sites / articles would concentrate on actual advice and problems.
“Useful” really depends on what you need to do. If I was implementing a Coptic calendaring system, I would certainly want to keep it in mind.
If you were implementing a Coptic calendaring system, you probably shouldn’t be relying on blog posts for your edge-cases.
Using template strings as your render function is a really bad habit to get into. The reason that people use tools like React is so that they can build DOM nodes efficiently and safely!
That’s it, I’m officially naming my kid “<script>alert(1)</script>”
[Comment removed by author]
var x = '<img src="https://www.google.com/images/branding/googlelogo/2x/googlelogo_color_272x92dp.png" onload="alert(1)">';
var y = `foo ${x}`;
document.body.innerHTML = y;
$39 is a completely fair price for your new knowledge, and comparable to what you’d pay for a printed book.
Any plans on having a printed book for sale? I’d be willing to do this if I got a hardbound brick to sit on my desk.
Check out the “About the Author” page.
Do you plan to release this book as a paperback? No.
wingolog is probably my favorite programming blog. It’s absurd how often I’ll spend hours researching a topic only to land on one of his posts and have it cleared up in a matter of minutes.
One of my favorites: https://wingolog.org/archives/2010/02/26/guile-and-delimited-continuations
I don’t understand why they released 1.0 without being confident about these issues. Fixing them without breaking compatibility is going to result in something worse than fixing it before 1.0.
I understand it’s an enormous project, but Rust feels designed by committee. I’m not excited by it.
I don’t understand why they released 1.0 without being confident about these issues. Fixing them without breaking compatibility is going to result in something worse than fixing it before 1.0.
Because you have to release at some point. Ergonomics especially is hard to judge before things find actual widespread use. Many of these things are also rather minor in day-to-day use, but need some amount of learning effort to grok; that can be fixed.
Also, note that some of these are controversial. For example, I’m against dropping “extern” and “mod” declarations; it would, IMHO, introduce lots more implicitness into the language, which doesn’t make it easier.
For example, I’m against dropping “extern” and “mod” declarations; it would, IMHO, introduce lots more implicitness into the language, which doesn’t make it easier.
In the case of extern crate, there is no use of “crate” without “extern”, so “extern” is redundant there.
I agree that getting rid of “mod” declarations in favor of Cargo.toml entries is probably not ready for prime time until tooling is more mature.
I don’t understand why they released 1.0 without being confident about these issues.
Because the number of issues will never end. At some point, you have to become happy with what you have and release it.
There’s also the fact that many things don’t become un-ergonomic until people actually start using them. To borrow the metaphor of tool ergonomics (as in handtools or power tools), you can often find yourself ‘used’ to the rough ergonomics of an old hammer or drill which you’ve used for a while, even if it’s uncomfortable, you learn to ignore the problems, at least until you use a new, better designed tool which doesn’t have those problems. Most people I’m sure can think of a time where they were used to doing things one way, then made a change to how they were doing it to use some new tool, and had a ‘Where have you been all my life’ moment, I suspect that on some level the ergonomic initiative is an attempt to ‘smooth out’ those rough edges which the core developers (who have written enough Rust to become inured to its rough parts) discovered when lots of others started using the language.
For me, Rust has been hard to adopt not because it’s a bad language (indeed, it checks off the entirety of the checklist of things I’d like in an ideal language), but because the cost in attention needed to learn it has been high enough to not break the threshold of what I can learn in my free time. I don’t know if I’m just prematurely old and “get-off-my-lawn”-y about these things, but the ergonomics of languages matter a lot more to me these days than, say, when I wrote nontrivial amounts of Scheme or Haskell (both of which are languages which I know and like, but have some serious ergonomic issues).
I also think that ergonomics is one of the big reasons Python, Ruby, and even PHP or Javascript are as popular as they are. While yes, you do give up some benefits when you give up static typing (or, in the case of PHP and Javascript, sanity), the comfort factor of working in Ruby or Python, or the convenience of working in Javascript, is significant as a ‘smoothing’ factor. Like a rough board smoothed with dozens of layers of finish, the experience of working in the language is so fundamentally pleasant that what structural roughness there is is effectively hidden by the good ergonomics of working in the language. With a convenient package manager, a clear path to deployment, copious documentation and support tooling, and a relatively pleasant-to-use language which balances the implicit/explicit tradeoff the OP talks about, you end up with a language which has all the necessary components to be successful despite its ‘disadvantages’ of not using all the latest PLT goodness.
I hope Rust (which does have a lot of that sweet sweet PLT goodness) lowers its barrier to entry significantly with these changes; I’m very excited to see the ideas of making the language more ‘Do-what-I-mean’ by careful application of implicit conventions. It already has a wonderful support ecosystem and killer documentation. I think that a little bit of sanding will put Rust firmly in my “Daily use” bin, which would make me very happy indeed.
Something that hasn’t been mentioned yet is that the Rust language has a tool for testing new versions of the compiler with every rust program hosted on the package repository. This way, they can gauge how bad breaking changes are going to impact the community, and in rare cases, actually send patches out to those projects before the change lands!
[Comment removed by author]
This was my understanding as well. The thought that the Go GC would garbage collect your C libraries is terrifying and would be a massive security violation.
Just imagine if you could plug the Go GC into any C application. Would it work 100% of the time for 100% of the C code in existence?
Wow, this article has more red flags than all the official buildings in Turkey combined. Just looking at this article, I’d guess that teamed.io is probably a terrible place to work at.
I’m a passionate programmer, and I’ve built a game in my free time. I explore new languages and study the fundamentals of computer science and abstract math. I have no interest in developing crappy open source libraries for the sake of building a profile in languages and ecosystems that encourage billions of dependencies. My personal interests (the ones I’m passionate enough to spend my spare time on) are too niche to bother writing libraries for. And even if I did write open-source libraries, they’d get 2 stars on GitHub due to the niche domain. I also personally think certifications are useless (no offense to people that think otherwise, just my personal opinion), and anything that fancies me is probably too unexplored to be given certifications for. On Stack Overflow you get far more reputation for answering beginner questions, which only demonstrates that you’re wasting your time on Stack Overflow trying to gain some rep by jumping on simple questions.
All your heuristics are stupid metrics that can easily be gamed, and they ARE being actively gamed. The passionate programmer in me despises these metrics and has no interest in maximizing them.
I’m still not entirely convinced that it isn’t.
Take a look at http://www.teamed.io/
Tap into a distributed global network of the highest-caliber programmers, working under our control
and
There are 70+ software developers working with us at the moment, but these guys are simply the best: [list of names]
and
Quality of Code is Exceptional; It is the highest in the entire industry.
These just scream parody.
From the linked list I took this one, which does not seem to satisfy this blog post’s criteria to begin with, since it has neither any really popular projects used by people nor high activity.
Though he does write similarly obnoxious blog posts:
Wait, you mean you can be a great programmer without spending time amassing tons of Internet Points?
It was only a matter of time before we assigned actual meaning to vanity metrics, such as open source project popularity and SO reputation. Nevertheless, it is still sad to read.
I weep for the Internet of before where striving for fake points on websites was seen as meaningful only within the context of the site itself, and, beyond that, somewhat silly. You may counter with, “but, how will we know if so-and-so has skills?” That remains your problem, and not something you foist onto bullshit metrics.
I think you paint an overly pessimistic view of contributing to the body of open source software and helping beginners.
I’m sorry, you’re right, these things are undisputedly good by themselves (especially the way you put them). My objection is to the way these things are distorted by the likes of the author as well as those that game the metrics. Contributing to open source is a very noble activity, but spamming crappy libraries and saying “f* you and your problems, I did this for free!” for the sake of github stars is not. Helping beginners is good, but claiming you’re an expert because you were first to answer how to concatenate two strings, which earned you 150k points is not.
My tone was probably overly defensive though.
It’s quite telling of their culture that in their second point as to why open source projects are important they say:
I often hear something like “my company doesn’t pay me for open source contribution and at home I want to spend time with my family.”
They “debunk” the first part but they never again mention why some people would rather spend time with family.
Bob Nystrom writing another book is the highlight of my week. If you haven’t already read his Game Programming Patterns, definitely give it a shot; it ranks up there as one of my favorite technical books of all time! http://gameprogrammingpatterns.com
I’m sort of conflicted about this….
I love memory safety…. but I can’t remember when I last created a bug in this class.
I have found and fixed a few, but they were really really a small percentage of the available bugs.
Yes, I would love my languages to guard against that.
No, it’s not at the top of my list.
What screams to me about that example is the violation of encapsulation.
If y really belonged in that struct, it was part of that struct’s invariant.
If so, what the hell is it doing escaping, as a naked reference, away from the interface that enforces that invariant?
I.e., the lifecycle bug is the least of the bugs being enabled by that interface.
I’m not convinced Rust’s lifetimes and borrow checking is the best solution to the problem, but I do think that this is a real problem. It just doesn’t happen as directly as it does in this toy example.
Consider a type that provides two properly encapsulated operations: query and update. The query operation is surprisingly complex and operates on various subdata structures. One multi-level-nested sub-structure accidentally returns a pointer to some owned memory, and things work out just fine…. for a while. In fact, the bug may live in production for a decade and nobody notices because the caller didn’t retain the data between calls to update. … And then somebody comes along and caches the result of a query. Whoops. Or, even worse, somebody comes along and adds a mutex and a second thread making query/update calls. Now you have memory corruption and it can be pretty hard to track down.
Having said that, I’d rather solve this problem with garbage collection and immutable data or variants on that theme, rather than mitigated single ownership and borrowing of mutable data.
Garbage collection is a perfectly fine solution to the problem of memory management. More generally, garbage collection can be used when the following two conditions are met:
Do you care whether malloc returns this or that memory block? Usually, no, so long as the block is big enough to store the data you actually care about. Unfortunately, these conditions aren’t always met:
When the physical identity of the resource matters, ownership is a fundamental abstraction.
For example, imagine a database API that lets you use the connection with a closure, to build and execute a transaction:
db.transaction(|txn| {
txn.select(...stuff...);
txn.insert(...stuff...);
txn.delete(...stuff...);
});
Wouldn’t it be nice to know that you can’t do this?
let mut escape = None;
{
let mut escaper = |txn| { escape = Some(txn); };
db.transaction(escaper);
}
if let Some(txn) = escape {
txn.launch_missiles(); // Now we are fiddling with a committed transaction.
}
(I apologize in advance for any errors in syntax or declaration.)
When the physical identity of the resource matters, ownership is a fundamental abstraction.
I agree with this statement 100%. I’ll go further and say that I’m very excited about the research and engineering that has gone into crystallizing this abstraction in Rust. However, that doesn’t mean it needs to be such a pervasive abstraction.
Maybe I can make my perspective on this clear by analogy: I have nothing against Objects, but I am against a style of programming oriented by objects. I believe that encapsulation and messaging is a fundamental abstraction, but that doesn’t mean I should structure my entire system using that abstraction.
Similarly, I have nothing against modeling ownership and borrowing, but I am against a style of programming oriented by ownership and borrowing.
Then there’s the subject of enforcement. I’m not convinced that static enforcement of lifetimes and borrows is the only way, or a strictly better way, to enforce this abstraction. I’d love to see the dynamic languages community take a crack at this abstraction as well.
Similarly, I have nothing against modeling ownership and borrowing, but I am against a style of programming oriented by ownership and borrowing.
I’ve expressed elsewhere mild dissatisfaction with the fact everything is owned in Rust. Owned files? Yay! Owned strings? Meh, I don’t need in-place mutation that often. Owned complicated but non-concurrent data structures? You’re seriously getting in the way.
Then there’s the subject of enforcement. I’m not convinced that static enforcement of lifetimes and borrows is the only way, or a strictly better way, to enforce this abstraction. I’d love to see the dynamic languages community take a crack at this abstraction as well.
Well, what else could work? Do you have any examples of abstractions that can be successfully enforced dynamically? Note I’m not saying “in a dynamic language”, since dynamic languages can have static enforcement facilities too, e.g., Typed Racket is built on top of a dynamic language, but it’s built using a very static macro system.
Owned strings? Meh, I don’t need in-place mutation that often.
If I want to share strings, I’ll stick it in an Rc and be done with it! You instantly go from a mutable string buffer, to a reference counted immutable string.
Owned complicated but non-concurrent data structures? You’re seriously getting in the way.
See, I’m of a totally opposite view here. I love the fact that my language statically prevents bugs like iterator invalidation and use-after-free.
For non-concurrent data structures, in the vast majority of cases, purely functional data structures are simpler to understand and implement, and they perform well enough. (At least for my use cases. I don’t write web browsers or real-time video games, admittedly.)
However, purely functional programs can’t express concurrency (not to be confused with parallelism, which purely functional programming handles excellently), and that’s precisely where I’d like to have a Rust-like ownership system.
Many of Rust’s defaults could be built around purely functional data structures, with Deref to smooth over some things. Because "..." is always a &str and &str is efficient, one finds oneself dealing with &str vs String all the time, which for many applications is too tricky. C++ programmers are used to this, or so I understand – distinguishing between char * and std::string – but for most Ruby/Python programmers it is odd, and it’s also hard to just pick String and stick with it (or just pick &str and stick with it).
It’s hard to know what “easy” is, basically.
I don’t want to have to think about who owns or who has borrowed a purely functional data structure. Unlike files, sockets, etc. which are objects, created at some point in time, and destroyed at a later one; bigints, strings, syntax trees, etc. are values, which conceptually exist forever (or, even better, independently of time), no matter how ephemeral (or, rather, time-bound) their representation in computer memory might be.
Swift might end up doing the right thing here. The default is to treat things as copyable values, and then statically elide the copy and heap allocation if it can. However, you can mark function parameters as inout to communicate that there is purposeful sharing. There is some stuff in some slides about changing this keyword to borrowed and adding a notion of linear typing.
In Rust, the approach taken to simplify this stuff is to have a lot of Deref instances, so that functions which take references to stack allocated values (&...) can take heap allocated stuff, too. This turns out not to make it simple “enough”, though; because if you want to return something you have to remember to put it on the heap, as opposed to just declaring it and hoping for stack promotion if it makes sense. That might be right, from the standpoint of what Rust is trying to do.
Do you have any examples of abstractions that can be successfully enforced dynamically?
I believe that every static abstraction has a dynamic counterpart. In all likelihood, each dual has already been discovered, but the relationship may not yet have been formalized. It’s also worth noting that, traditionally, dynamic enforcement is reserved for “heavier” use cases, since it’s typically slower by default.
For example:
Also, some dynamic abstractions don’t yet have good static counterparts, or have inspired static counterparts that are not yet popularized.
And then other abstractions are already somewhere in the middle and being stretched in both directions. For example, ML style modularity is deeply related to dynamic linking. Now you can get dynamic modularity for effects with “algebraic effect handlers”, but at the same time you can do even more dynamic “linking” of your entire kernel with something like Docker.
^^^ I had a bunch more examples for each of the categories above, but my comment got eaten by my browser and I need to get back to work…. sorry.
Well, what else could work?
One strawman idea: you could get an exception if you try to make a re-entrant call into a “borrowed” object/actor/service.
Unix enforces per-process memory safety with the memory manager hardware.
Only because it has to: memory-unsafe programs already exist in the wild, and people will run them no matter what, so it would be totally crazy not to perform any runtime checks. But, wouldn’t it be nice if all programs were statically guaranteed to be memory-safe?
“object capabilities” See Mark S Miller’s Thesis
Isn’t ownership essentially a statically enforced object capability? Or am I missing something?
For example, ML style modularity is deeply related to dynamic linking.
This is false. ML modules are traditionally second-class (not first-class values) and meant to be statically linked before you deploy your program. Some ML dialects (OCaml, Alice ML, etc.) allow you to package modules as first-class values, but that feature is very much an afterthought, not something you want to use if you can avoid it.
Now you can get dynamic modularity for effects with “algebraic effect handlers”, but at the same time you can do even more dynamic “linking” of your entire kernel with something like Docker.
Algebraic effects aren’t about modularity, they’re about expressiveness: without them, you have to manually inject your code into some delimited continuation monad, or reify those pesky continuations as zippers or some such.
One strawman idea: you could get an exception if you try to make a re-entrant call into a “borrowed” object/actor/service.
Okay, an exception is thrown… and then what? How do you determine exactly what part of your program is wrong and has to be fixed?
wouldn’t it be nice if all programs were statically guaranteed to be memory-safe
First: If memory-safety is enforced dynamically, then the program is statically guaranteed to be memory safe. It’s just not statically guaranteed to avoid attempting and failing a memory-unsafe operation.
Second: unless you 1) heavily constrain the language (such as banning raw pointers) or 2) provide an excessively powerful logic (sequent calculus, etc.), you’re not going to be able to statically guarantee both memory safety and freedom from any runtime errors.
This is my fundamental argument: The engineering tradeoff is closer to a mix of static and dynamic enforcement. Statically prove or dynamically validate as much as you reasonably can given your tools, skill, budget, risk tolerance, etc.
Isn’t ownership essentially a statically enforced object capability? Or am I missing something?
Yes, you’re missing exactly the class of dynamic security properties. For example, consider permission revocation.
meant to be statically linked before you deploy your program
My statement is absolutely true: these things are related by the static/dynamic dual I’m talking about. Of course the ML designers, being static language advocates, push for static linking. However, as you even mention, things like first-class modules show that you can satisfy module signatures dynamically. Consider how, in C/Unix, you can do things like LD_PRELOAD for substituting your malloc/free implementation. That’s effectively dynamic modularity.
Algebraic effects aren’t about modularity, they’re about expressiveness
I disagree that this isn’t modularity. Algebraic effects enable temporal modularity. Anytime you have time, you have something which is naturally dynamic. That’s why it’s so common for static languages to offer dynamically typed open unions for exceptions.
How do you determine exactly what part of your program is wrong and has to be fixed?
You get a stack trace, just like when an application divides by zero in production…
First: If memory-safety is enforced dynamically, then the program is statically guaranteed to be memory safe. It’s just not statically guaranteed to avoid attempting and failing a memory-unsafe operation.
For me, that notion of safety is completely useless. We consider C unsafe because C programs can attempt memory-unsafe operations, even if the operating system will detect such attempts in some cases, and kill the process as a result.
Second: unless you 1) heavily constrain the language (such as banning raw pointers) or 2) provide an excessively powerful logic (sequent calculus, etc.), you’re not going to be able to statically guarantee both memory safety and freedom from any runtime errors.
I don’t believe in “mechanically verify everything”. IMO we all have vastly underestimated the possibility of using our own brains (enhanced with pencil and paper) for proving things about programs. Sadly, neither the mechanical verification gang nor the programmer gang cares…
This is my fundamental argument: The engineering tradeoff is closer to a mix of static and dynamic enforcement.
… and neither do you, apparently. :-(
For example, consider permission revocation.
Then you never had permission in the first place. You only had “maybe permission”. (In the everyday sense of the word “maybe”, not Haskell’s.)
However, as you even mention, things like first class modules show that you can satisfy module signatures dynamically.
Alice ML can certainly do that. OCaml, I’m not so sure. OCaml has modules packaged as first-class values, but all checks are static as far as I can tell.
Algebraic effects enable temporal modularity.
As I said before, you can get the same “temporal modularity” (whatever that might mean) by zipper-ifying all your data structure traversal code. In fact, I’m doing just that in my own SML code, because SML has no built-in support for algebraic effects.
You get a stack trace, just like when an application divides by zero in production…
Alas, concurrency errors are much harder to trace back to their ultimate causes than division by zero errors (in a non-concurrent setting).
that notion of safety is completely useless
It’s all about boundaries. C/Unix programs are memory safe. C functions are not.
This is a spatial boundary: “program” or “function”, but there’s also a possibility for temporal boundaries. Consider mprotect.
and neither do you, apparently
I’m not sure how you got that impression from my comment. I think you and I are on the same page on that point.
You only had “maybe permission”
You’ve lifted a dynamic property “you may or may not have permission” into a static property “you have dynamic permission”. This is the heart of the “unityped” argument, but I view it as “six of one, half a dozen of the other”. Given that, what do you do now when you are denied permission? If the programmer considered that case, the code does case analysis and custom handling logic. If the programmer didn’t consider that case: raise an exception!
concurrency errors are much harder to trace back to their ultimate causes than division by zero errors
I’m not sure that this is true in general. Where did the zero come from?
In general, stack traces are an atrociously impoverished debugging tool for origin tracking. However, many common runtimes even suck at stack traces! Ideally, there would be a blend of static and dynamic metadata associated with the 0, so that when the error does occur you can quickly find the cause. Static metadata would include callers, dataflow analysis, etc. Dynamic metadata could include something like passport stamps: Where has this value been on its journey?
I’m not sure how you got that impression from my comment. I think you and I are on the same page on that point.
How did you get that impression? I want ahead-of-time verification under all circumstances - just not always automated, because automatic verification tools have limitations, and we shouldn’t be bound by them.
Ideally, there would be a blend of static and dynamic metadata associated with the 0, so that when the error does occur you can quickly find the cause.
To find the cause, you need to think in terms of predicates on the program state (preconditions, postconditions, invariants). I don’t think the usual kinds of “metadata” attached to program data are particularly helpful for recovering such predicates.
I want ahead-of-time verification under all circumstances - just not always automated, because automatic verification tools have limitations, and we shouldn’t be bound by them.
I think we’re on the same page because I also value ahead-of-time verification, only where time=production, not necessarily time=run. You’re saying you want ahead-of-time analysis to include external analysis, such as by-hand proofs, etc. I agree with that, but I’d also like to include imperfect analysis, such as dynamic methods (e.g. code coverage) and stochastic methods (e.g. fuzz testing, QuickCheck, etc.). A blended portfolio is my strategy.
I don’t think the usual kinds of “metadata” attached to program data are particularly helpful
You’re right, the usual metadata isn’t particularly helpful. The best systems I’ve ever had to work with add unusual metadata, usually in the form of dynamic trace information. For example, tagging an HTTP request with a set of symbols for which middleware functions touched it. Or an “undo stack” based on persistent data. Or simply some counters.
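The middleware-tagging idea might look something like the following sketch in TypeScript. All of the names here (`Req`, `withStamp`, and so on) are invented for illustration, not from any real framework:

```typescript
// Sketch of "passport stamp" metadata: each middleware that touches a
// request records its name, so a later error report can show the
// value's journey through the system.
interface Req {
  path: string;
  stamps: string[]; // which middleware functions have touched this request
}

type Middleware = (req: Req) => Req;

// Wrap a middleware so it stamps the request before running.
function withStamp(name: string, mw: Middleware): Middleware {
  return (req) => mw({ ...req, stamps: [...req.stamps, name] });
}

const auth = withStamp("auth", (req) => req);
const logger = withStamp("logger", (req) => req);

const handled = logger(auth({ path: "/divide", stamps: [] }));
// If an error occurs downstream, handled.stamps tells you where the
// value has been: ["auth", "logger"]
```

The same trick generalizes to the undo stack or counters mentioned above: the point is that the metadata rides along with the value, so it is available at the moment the error fires.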
I think we’re on the same page because I also value ahead-of-time verification, only where time=production, not necessarily time=run.
I can agree with the notion of a debug build that performs checks that are in principle unnecessary (because they are meant to always succeed, even if in practice they will sometimes fail). But if you can’t confidently strip those checks out of the release build, then you haven’t really verified your program ahead of time.
I’d also like to include imperfect analysis, such as dynamic methods (e.g. code coverage) and stochastic methods (e.g. fuzz testing, QuickCheck, etc).
Sure, use whatever works, as long as the release build is guaranteed to be free of both errors and unnecessary checks. Tests are useful, not so much as an acceptance criterion, but rather as a means to quickly reject wrong programs.
That being said, for the specific case of ownership enforcement, I don’t think tests are particularly helpful, even for rejecting wrong programs. The amount of runtime work necessary to keep track of who owns what is prohibitively high. For example, if you have an invariant of the form “Foo and Bar are always owned by different threads”, then the only way to check that invariant is to stop the world.
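To make the cost concrete, here is a single-threaded TypeScript sketch of the bookkeeping a dynamic ownership check would need (all names invented for illustration): every resource must register its current owner in some global table, and checking a cross-resource invariant means reading several entries consistently, which in a real multithreaded runtime means stopping the world.

```typescript
// Global ownership registry: resource name -> owning thread id.
// In a real runtime this table would need to be updated on every move
// and read atomically across resources, which is the prohibitive part.
const owners = new Map<string, number>();

function setOwner(resource: string, threadId: number): void {
  owners.set(resource, threadId);
}

// The invariant "Foo and Bar are owned by different threads" can only
// be checked against global state; no single thread's local view suffices.
function checkDistinctOwners(a: string, b: string): boolean {
  return owners.get(a) !== owners.get(b);
}

setOwner("Foo", 1);
setOwner("Bar", 2);
checkDistinctOwners("Foo", "Bar"); // invariant holds
setOwner("Bar", 1);
checkDistinctOwners("Foo", "Bar"); // invariant violated
```

A borrow checker gets the same guarantee with zero entries in that table, which is the argument for static enforcement here.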
I don’t think the usual kinds of “metadata” attached to program data are particularly helpful
You’re right, the usual metadata isn’t particularly helpful.
You have conveniently left out the most important part of my message: “predicates on the program state”.
I’m not convinced that static enforcement of lifetimes and borrows is the only way, or a strictly better way, to enforce this abstraction. I’d love to see the dynamic languages community take a crack at this abstraction as well.
Because of its relationship to code generation – the post linked by the OP points out that there is a check around the dereference – it is at least attractive to have static enforcement.
Static enforcement seems better to me because you get a check and an error to fix at compile time, instead of having to (try to) exercise all the code paths that might lead to the problem. Why do you think it might not be strictly better?
Why do you think it might not be strictly better?
Because it’s always runtime somewhere.
If I have a dynamic implementation, I have the option of static analysis. I might have to add some hints.
If I have a static implementation, I generally have to rewrite my program in order to get a dynamic version.
I can agree that it’s one of those things where one does not always want static analysis – but I suspect one usually wants it – which is to say it could be opt-out rather than opt-in.
This is the situation that arises with Flow vis-à-vis TypeScript. Sure, you can type check your JS with Flow…hope you don’t have too much code lying around…
You do see static languages sometimes move in this direction – TypeScript’s any, Swift’s AnyObject and Any – but it does seem that this feature, like any potentially-static feature, is usually provided always on or always off.
Oh I agree, it’s a real problem…. Just not near the top of my list of problems I have found and/or created in real industrial code recently.
So certainly, any language that fixes this class of problems will be regarded as an improvement by me…
Borrow checking is in a sense “reference counting on the fingers of 1 thumb”. AKA linear logic.
So in some senses I am very much for it.
The D language is exploring somewhat more flexible controls on reachability and lifetimes, which has potential to be great.
https://github.com/dlang/DIPs/blob/master/DIPs/DIP1000.md
My core concern is that leaking references beyond the lifetime of the object is only half the problem.
The whole point of a class is to enforce the class invariant. If you leak instance variables all over the place, having a reference to an instance variable existing beyond the lifetime of the instance is merely one of many ills that can befall you.
I really disagree with the “you don’t need to understand monads” theme found here and in other Haskell blog posts; they all seem to be written by people that already understand monads and don’t get confused when the examples in documentation for popular libraries are littered with calls to liftM and >=>.
Sidenote, the documentation for both of those functions is:
I don’t think the article/presentation’s thesis is “you don’t need to understand monads” so much as it is “the best way to understand monads is by example, and by seeing the patterns between types that implement the Monad typeclass”.
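For what it’s worth, the `>=>` operator mentioned above (Kleisli composition) is less exotic than its documentation makes it look. Here is a rough TypeScript analogue using arrays as the monad; the function names are mine, not from any library:

```typescript
// Kleisli composition for the Array "monad": compose two functions that
// each return an array, flattening the intermediate results.
// This mirrors Haskell's (>=>) :: (a -> m b) -> (b -> m c) -> (a -> m c).
function kleisli<A, B, C>(
  f: (a: A) => B[],
  g: (b: B) => C[]
): (a: A) => C[] {
  return (a) => f(a).flatMap(g);
}

// Two small "effectful" functions: halves fails (empty array) on odd input.
const halves = (n: number): number[] => (n % 2 === 0 ? [n / 2] : []);
const pairUp = (n: number): number[] => [n, -n];

const halveThenPair = kleisli(halves, pairUp);
halveThenPair(10); // [5, -5]
halveThenPair(7);  // [] (no half for an odd number, so the chain short-circuits)
```

Seeing the same composition pattern across arrays, promises, and nullable values is exactly the “understand by example” route the presentation seems to advocate.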
I’ve written two CAD programs that generate paths for industrial laser cutters in Rust. Also it’s a fun language to play around with interpreters, so I’ve written an AST-based interpreter and a VM-based interpreter just to play around with novel scripting language concepts.
I like the idea, but this
Be verifiable - you should be able to see your own vote.
is an anti-goal. Voting systems are set up to prevent you from seeing how you voted.
This may seem backwards at first, but consider this: under a system where you can see your own vote, you can voluntarily offer up proof of your vote to 3rd parties. While this isn’t an issue for those of us with good work/family/friends, it is a huge problem for people in abusive relationships, or children (who can vote, but are still living with their parents). Suddenly it becomes possible (even if made illegal!) for a spouse to demand to see their partner’s voting record; or for a parent to force their child to reveal how he/she voted.
And then there’s obviously the issue of “buying votes.” Right now, if you said to me “I’ll pay you $20 to vote for candidate X” I could say “yeah sure” and then vote for whoever I want. This would not be the case under a verifiable system, where you could say “prove that you voted for candidate X for your $20”. Obviously this would be highly illegal, and I’d argue that it wouldn’t happen very often, but it is an issue that doesn’t happen under the current system.
Anyway, I love the idea of crypto voting, but being verifiable is very bad. It brings our voting system from one where votes are anonymous to one where others can verify how you voted if they drug you and beat you with a $5 wrench [https://xkcd.com/538/].
[Comment removed by author]
Yes, but vote-buying “the fact that you voted” is not a thing. Hell, there’s nothing stopping you from turning in an empty ballot if somebody pays you to show up and vote “something”. Now, is an abusive spouse going to beat their husband for not voting at all? I doubt it. Coercion just-to-vote is not as damaging as coercion to vote a certain way.
So I 100% see the point you are making here, but I see this more as not fixing an already-existing problem rather than creating a new one, ultimately coming from the fact that we have mail-in ballots in all but 7 states.
For example, say I want to make a quick $20 from the election. I apply for a mail-in ballot, fill it out, and then send it to the mayor’s re-election campaign office. This office checks that my ballot is filled out how I said it would be, then sends it along to the voting officials and pays me $20. Alternatively, if I live with a controlling parent/partner/other, they can verify my ballot is the way they want it before sending it.
The solution to this is similar to the solution for the mail-in problem, which is vote invalidation. The idea is that I can vote as many times as I want, and each time I will get a different identifier. However, only the last vote I make will actually be counted. This allows me to vote however an outside force compels me to, so I have verification, and then vote again with how I truly feel. As an added bonus, we get the solution for people who fat-finger the button or otherwise see that their vote is different than they wanted after they go home and check the blockchain.
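A minimal sketch of that last-vote-wins scheme, with data structures invented purely for illustration (a real system would of course use cryptographic receipts rather than plain identifiers):

```typescript
// Sketch of vote invalidation: every submission gets a fresh receipt,
// but tallying only counts the LAST ballot per voter.
interface Ballot {
  voterId: string;   // in a real system, a blinded/anonymous credential
  receiptId: number; // fresh identifier per submission
  choice: string;
}

const ledger: Ballot[] = [];
let nextReceipt = 0;

function castVote(voterId: string, choice: string): number {
  const receiptId = nextReceipt++;
  ledger.push({ voterId, receiptId, choice });
  return receiptId; // the voter can show this receipt to a coercer...
}

// ...but only the final ballot per voter counts toward the tally.
function tally(): Map<string, number> {
  const latest = new Map<string, string>();
  for (const b of ledger) latest.set(b.voterId, b.choice); // later entries win
  const counts = new Map<string, number>();
  for (const choice of latest.values()) {
    counts.set(choice, (counts.get(choice) ?? 0) + 1);
  }
  return counts;
}

castVote("alice", "mayor");      // coerced vote, receipt shown to coercer
castVote("alice", "challenger"); // real vote, silently overrides the first
castVote("bob", "mayor");
// tally() counts: mayor -> 1, challenger -> 1
```

Note how the coercer sees a perfectly valid receipt for the first ballot, yet learns nothing about whether it survived.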
The downside to this solution is that it would be easier to DDoS the system by sending massive amounts of votes, because you would need to add every vote to the blockchain. I’m not sure how this problem would be solved, but it doesn’t sound intractable.
I don’t have a solution for the DDoS, but wouldn’t proof of work make it too expensive to generate massive amounts of valid votes, bloating the blockchain?
One problem here is that it wouldn’t be possible to also verify that your vote (the last one you submitted) was included in the results (as your sibling commenter suggested) because then outside actors would be able to verify that you hadn’t overridden the vote you showed to them. It’s a tricky problem… :-)
I use Visual Studio for C# development; finding all references of a variable is as simple as placing the cursor on the variable and pressing Shift+F12. The language itself does not facilitate easy grepping, though OmniSharp does give similar functionality to other editors (including vim).
The refactoring and find-all-references is what makes C# development for me. It’s something that becomes so second-nature after a while that it hurts when I have to use something else.
Same here, but I do wish I could easily see all places where an identifier is on the left hand side of an assignment.
“JavaScript historically didn’t have a way to intercept attribute access, which is a travesty. And by “intercept attribute access”, I mean that you couldn’t design a value foo such that evaluating foo.bar runs some code you wrote.”
I’d call that a feature.
I’m with you.
This discussion must have happened in many language committees around the world:
“Hey, I have a great idea … guys, listen … let’s hide the fact that code is being run, programmers will appreciate the clarity of not being able to see what is going on.”
The argument is this: your API exposes an attribute. You want to extend that so some code has to run in addition (e.g. add logging). Can you make the change in a backwards compatible way?
If a language has no mechanism for this (Java) guidelines emerge to always use getters and setters and never expose attributes.
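Concretely, the kind of backwards-compatible change at stake looks something like this in TypeScript (the class and the logging are of course illustrative):

```typescript
// Version 1 exposed a plain attribute: consumers write `api.timeout`.
// Version 2 wants to log every access without breaking those consumers.
// A getter/setter pair keeps the `api.timeout` syntax intact.
class ApiConfig {
  private _timeout = 30;

  get timeout(): number {
    console.log("timeout read");          // the newly added behavior
    return this._timeout;
  }

  set timeout(value: number) {
    console.log(`timeout set to ${value}`);
    this._timeout = value;
  }
}

const api = new ApiConfig();
api.timeout = 60; // logs, then stores; callers are syntactically unchanged
api.timeout;      // logs, then returns 60
```

In a language without this mechanism, the only safe move on day one is to ship `getTimeout()`/`setTimeout()` wrappers, which is exactly the Java guideline described above.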
And the argument against is that property access doesn’t feel like something that should be able to induce side effects that you can’t possibly reason about. How much more expensive would the property access become if you add logging? How do you express the possible failure of that operation to the consumer?
When exceptions are part of the function signature / contract, it becomes difficult to express. But they usually aren’t in message passing implementations of OO, like Ruby or Objective-C, where everything is like a method call.
So stuff just needs to balance in the language as a whole.
Maybe I’ll come to regret saying this when it ends up biting me, but my gut answer is that I’d prefer you just change the api rather than lie to me. This isn’t something I deal with regularly though.
Yup, changing foo.bar = baz to foo.setBar(baz) when it is needed (and not before) is not really a difficult change to make. Most people don’t need such intense backwards compatibility guarantees.
For back-compat reasons, the alternative to attribute access interception is to use getFoo and setFoo for every property (hiding the original properties).
That’s not entirely true. __defineGetter__ and __defineSetter__ were a thing quite a long while ago.
Do you think it’s an antifeature that you can run code? Or that you can run code with side effects?
The code in the post is completely deterministic and is basically just a transformation of already-known values. You could do that outside the class too but it’s there for convenience. That seems like an unambiguous win to me.
No, those things are fine. I want them to happen when I call functions, I don’t want them disguised as accessing a property (which I assume is looking something up in a hash at worst, but I don’t actually know what’s going on in there).
Interesting. So the mere fact that it runs code at all, regardless of whether it had side effects, is what bothers you? Because it violates assumptions about performance characteristics?
That seems a bit like premature optimization but maybe I don’t have the necessary “at scale” experience to comment.