1. -1

    Consequently, to overcome this restriction, the implementations of Rust’s standard libraries make widespread use of unsafe operations, such as “raw pointer” manipulations for which aliasing is not tracked. The developers of these libraries claim that their uses of unsafe code have been properly “encapsulated”, meaning that if programmers make use of the APIs exported by these libraries but otherwise avoid the use of unsafe operations themselves, then their programs should never exhibit any unsafe/undefined behaviors. In effect, these libraries extend the expressive power of Rust’s type system by loosening its ownership discipline on aliased mutable state in a modular, controlled fashion: Even though a shared reference of type &T may not be used to directly mutate the contents of the reference, it may nonetheless be used to indirectly mutate them by passing it to one of the observably “safe” (but internally unsafe) methods exported by the object’s API.
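    One concrete instance of this pattern in the standard library is `RefCell`, whose safe `borrow_mut` method permits mutation through a shared reference. A minimal sketch (the particular values are arbitrary):

    ```rust
    use std::cell::RefCell;

    fn main() {
        let data = RefCell::new(vec![1, 2, 3]);

        // `shared` is only an immutable reference, &RefCell<Vec<i32>>, yet the
        // safe `borrow_mut` method lets us mutate the contents through it.
        // RefCell's implementation uses unsafe code internally; its runtime
        // borrow counter is what preserves the aliasing guarantees.
        let shared = &data;
        shared.borrow_mut().push(4);

        assert_eq!(*shared.borrow(), vec![1, 2, 3, 4]);
    }
    ```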

    C has too many unsafe operations. To solve this problem, our new super language rules out all unsafe operations except those which one precedes with the keyword “unsafe”. Ta da!

    1. 16

      There’s always an unsafe part. It’s like the trusted part in secure systems: it’s the TCB. You can’t get rid of it entirely. So, you make it as small and simple as possible. Then you interface with it carefully. In the process, you avoid the severe damage (esp. code injection) of common defects in the vast majority of your code.

      That they can do this up to temporal safety and race-free concurrency is a huge improvement over status quo.

      1. 1

        I don’t even know if that’s a good method of design for security. DJB’s comments on design for security seem pretty insightful. To be fair, I don’t have a better solution, just some skepticism.

      2. 11

        What evidence would convince you that unsafe markings as manifest in Rust are an effective tool?

        1. -2

          I think the requirement for unsafe indicates that the basic system is not adequate. Either you solved the problem or you didn’t. I think that e.g. Java or Go or Lua using C libraries is a more coherent response than a system programming language with an elaborate safety mechanism that needs to be defeated in order to implement its own libraries. This is the same problem I have with the stupid C standard type aliasing rules: to impose “safety” restrictions that have to be escaped in order to implement basic functions seems like putting one’s hands over one’s eyes.

          1. 14

            Okay? I’ll note that you didn’t actually answer my question. Skepticism is good, but it’s a lot more productive when you can state more precisely the level at which your belief is falsifiable. Like, I don’t know what “you either solved the problem or you didn’t” actually means in this context.

            elaborate safety mechanism

            How is unsafe simultaneously just a keyword and also an elaborate safety mechanism? I found your initial comment overly reductive, but you jumped from that to “elaborate safety mechanism” in the blink of an eye! What gives?

            seems like putting one’s hands over one’s eyes

            How so? What examples of “safety” restrictions are you referring to? How are they like Rust’s unsafe keyword?

            1. -2

              The Rust system of memory management and pointer aliasing is elaborate. But to create necessary libraries, the pointer safety system needs to be escaped. To me, that’s a design failure. It’s the classic failure in security too. It’s not like you can average safety together: 1000000 lines of totally safe code and 10000 lines of unsafe does not make it 99% safe.

              1. 13

                … You still haven’t answered my question! Could you please address it?

                The Rust system of memory management and pointer aliasing is elaborate.

                This seems inconsistent with your initial comment.

                But to create necessary libraries, the pointer safety system needs to be escaped. To me, that’s a design failure. It’s the classic failure in security too.

                If it’s a design failure, then that implies there is either a better design that isn’t a failure or that there is no possible design that wouldn’t be a failure in your view. If it’s the former, could you elaborate on what the design is? (Or if that’s not possible to do in a comment, could you at least describe the properties of said design and what you think it would take to achieve them?) If it’s the latter, then we are back to square 1 and I’m forced to ask: are some design failures better than others? How would you measure such things?

                It’s not like you can average safety together: 1000000 lines of totally safe code and 10000 lines of unsafe does not make it 99% safe.

                Who is doing this, exactly? Do you think such a simplistic reduction is an appropriate way to judge the merit of a safety system? Can you think of a better way?

                1. 0

                  … You still haven’t answered my question! Could you please address it?

                  I’m not sure why you are still confused by this but the “elaborate” in my initial comment was not referring to the escape.

                  This is like: I’ve invented a perpetual motion machine, you just need to push it every now and then to keep it moving. I’ve invented a safe programming language, it just needs an unsafe escape mechanism or an FFI for implementing real applications.

                  If it’s a design failure, then that implies there is either a better design that isn’t a failure or that there is no possible design that wouldn’t be a failure in your view

                  I think there should be a better design, but don’t know what it is.

                  This seems inconsistent with your initial comment.

                  It is not even remotely inconsistent.

                  1. 7

                    This is the question I was referring to that I haven’t seen answered:

                    What evidence would convince you that unsafe markings as manifest in Rust are an effective tool?

                    1. 1

                      My complaint is that the language requires an escape mechanism. So what would convince me is if it did not need to turn off its own safety mechanisms.

                      1. 8

                        If it didn’t need to turn off its own safety mechanisms, then the unsafe markings themselves would cease to exist. So, that doesn’t answer my question unfortunately. If you’d like some clarification on my question, then I’d be happy to give it, but I’m not sure where you’re confused.

                        Here’s another way to think about this:

                        I think there should be a better design, but don’t know what it is.

                        What would it take to convince you that there is no better design? If you were convinced of such an outcome, would you still consider Rust’s memory safety mechanisms a design failure?

                        Here’s yet another way: if a better design does exist, do you think it’s possible to improve our tools until the better design is known, even if you would consider said tools to be a design failure? Or are all design failures equal in your eyes?

                2. 1

                  If C had been defined with keywords to partition blocks with unsafe operations from safe ones, wouldn’t leveraging those be a best practice now? Or do you feel like we would see it now as a design failure of C?

                  This concept seems very similar to inline assembly and/or linking against handwritten assembly implementations of popular functions. C-libraries generally have some critical sections implemented in assembly whereas it’s much less popular for normal applications to leverage this feature.

              2. 13

                I think the requirement for unsafe indicates that the basic system is not adequate.

                I would like to write an operating system. I need to write a VGA driver. My platform is x86. To do this, the VGA device is memory mapped at the physical address 0xB8000. I have to write two bytes to this address, one with the colors, and one with the ASCII of the character I’d like to print.

                How do I convince my language that writing to 0xB8000 is safe? In order to know that it’s a VGA driver, I’d need to encode the entire spec of that hardware into my programming language’s spec. What about other drivers on other platforms? Furthermore, I’d need to know that I was in ring0, not ring3. Is that aspect of the hardware encoded into my language?
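                To make the shape of the problem concrete, here is a hedged sketch of such a write. On real hardware the pointer would be `0xB8000 as *mut u8` and only a kernel may touch it, so this sketch aims the identical raw-pointer code at a local array (`fake_vga`, a stand-in introduced for the example) to stay runnable:

                ```rust
                fn main() {
                    // Stand-in for the memory-mapped VGA text buffer. In a kernel
                    // this would instead be: let vga = 0xB8000 as *mut u8;
                    let mut fake_vga = [0u8; 4];
                    let vga = fake_vga.as_mut_ptr();

                    unsafe {
                        // Each VGA character cell is two bytes: ASCII code, then
                        // a color attribute. write_volatile prevents the compiler
                        // from eliding the stores, as required for MMIO.
                        vga.offset(0).write_volatile(b'H'); // the character
                        vga.offset(1).write_volatile(0x0F); // white on black
                    }

                    assert_eq!(fake_vga[0], b'H');
                    assert_eq!(fake_vga[1], 0x0F);
                }
                ```

                The compiler cannot check any of this: that the address is mapped, that we are in ring0, that the two-byte layout is right. That knowledge lives in the hardware spec, not the type system, which is exactly why the block must be marked `unsafe`.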

                How would you propose getting around this?

                I think that e.g. Java or Go or Lua using C libraries is a more coherent response

                This is interesting, since many people refer to unsafe as a kind of FFI :).

                Fundamentally, the difference here is “when you use FFI you don’t know because it’s not marked”, and unsafe is marked. Why is not marking it more coherent? They’re isomorphic otherwise.

                1. 3

                  How would you propose getting around this?

                  I think the point is somewhat that you can’t without losing your ability to claim truly safe and secure status.

                  A totally-safe systems language has to include the semantics of the hardware systems it runs on, otherwise it’s just wishful thinking.

                  Now, it’s clearly a hard problem on how to do this, right, but maybe that’s informative in and of itself.

                  1. 6

                    Or it just needs a certifying compiler or translation validation of generated code. Certifying compilers exist for quite a bit of C, LISP 1.5, and Standard ML so far. Ensures resulting assembly will do exactly what source says. They also have intermediate languages in them that themselves can be useful.

                    As TALC and CoqASM show, one can also add types and memory safety to assembly code to prove properties directly. One could replace the unsafe HLL code with provably safe assembly code. Then, you just need to take the interface specification of one and plug it into the other’s tooling. It’s one of the things Microsoft did for VerveOS: an OS written in C# compiling to typed assembly, interfacing with a “Nucleus” that is separately verified.

                    https://www.microsoft.com/en-us/research/wp-content/uploads/2016/02/pldi117-yang.pdf

                    1. 4

                      Actually, I think this misses the point. The point, IMO, is the trade-off between two choices:

                      1. Encode safety in the language itself, by building the semantics of the hardware it runs on into the language.
                      2. Give users the tools to encode safety themselves.

                      The process for proving safety is the same and neither is “more safe” than the other. The only difference between them is that one is practical while the other is not (in a general programming language). That is, neither choice is “totally safe” (by your definition). The choices just push safety around different levels of abstraction. The abstraction of safety in the first place is the important bit.

                      1. 1

                        Give users the tools to encode safety themselves.

                        That’s what we did with C, and look how that turned out. :)

                        The only way to prevent people from doing stupid things is to forbid them by construction–and sadly, this often limits the clever things too.

                        1. 6

                          That’s what we did with C, and look how that turned out.

                          Uh, no? C has no way to encapsulate memory safety.

                          The only way to prevent people from doing stupid things is to forbid them by construction–and sadly, this often limits the clever things too.

                          This seems overly reductive to me. We don’t actually have to prevent people from doing stupid things. A measurable reduction would be an improvement.

                          1. 0

                            C has no way to encapsulate memory safety.

                            Eh? Users can “embed” memory safety by using the correct library calls and compiler warnings and whatnot to catch mistakes (like, incorrect number of args to printf, using uninitialized pointers, etc. and so forth)–and can use libraries that provide APIs that do things like prevent incorrect allocation of memory and unsafe arithmetic.

                            The problem with saying “Users can embed their own safety!” is that you then have to consider all of the legacy ways that users did that (e.g., in C, as explained above) and how that didn’t always work, because of user failings.

                            And a “measurable reduction” instead of complete prevention makes Rust a lot less compelling than just using people’s existing knowledge of C and competent analysis tools and practices.

                            Given the amazing marketing efforts by the Rust Evangelion Strike Force and friends, I’d rather hope you all would look to see just how good you could make it.

                            1. 10

                              Eh? Users can “embed” memory safety by using the correct library calls and compiler warnings and whatnot to catch mistakes (like, incorrect number of args to printf, using uninitialized pointers, etc. and so forth)–and can use libraries that provide APIs that do things like prevent incorrect allocation of memory and unsafe arithmetic.

                              This equivalence you’re trying to establish seems incorrect to me. In Rust, I can provide an API and make the following guarantee that is enforced by the compiler: if my library is memory safe for all inputs to all public API items, then all uses of said library in safe Rust code are also memory safe. You can’t get that guarantee in C because it’s unsafe-everywhere. The obvious benefit of this implication is that unsafe becomes a marker for where to look when you find a memory safety bug in your program. Equivalently, unsafe becomes a marker for flagging certain aspects of your code for extra scrutiny. This implication is what makes encapsulation of safety possible at all.
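                              As an illustration of that guarantee, here is a hypothetical toy API (the `Checked` type and its methods are invented for this sketch): the only `unsafe` block sits behind an invariant established once in the safe constructor, so no safe caller can trigger undefined behavior through it:

                              ```rust
                              /// Holds a slice plus an index proven in-bounds at construction.
                              /// Both fields are private, so safe code cannot break the invariant.
                              struct Checked<'a> {
                                  slice: &'a [u32],
                                  idx: usize,
                              }

                              impl<'a> Checked<'a> {
                                  fn new(slice: &'a [u32], idx: usize) -> Option<Self> {
                                      // The invariant (idx < slice.len()) is established here, once.
                                      if idx < slice.len() { Some(Checked { slice, idx }) } else { None }
                                  }

                                  fn get(&self) -> u32 {
                                      // SAFETY: `new` guaranteed idx < slice.len(), and the private
                                      // fields cannot be modified from outside this module.
                                      unsafe { *self.slice.get_unchecked(self.idx) }
                                  }
                              }

                              fn main() {
                                  let xs = [10, 20, 30];
                                  let c = Checked::new(&xs, 2).unwrap();
                                  assert_eq!(c.get(), 30);
                                  assert!(Checked::new(&xs, 3).is_none()); // bad index rejected up front
                              }
                              ```

                              If a memory safety bug ever surfaces in a program using this API, the one `unsafe` block is where you look first; that is the auditing benefit the marker buys.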

                              There are plenty of downsides with this scheme that make it imperfect:

                              1. People could misuse unsafe. (e.g., By not marking something as unsafe that should be unsafe.)
                              2. The safe subset of Rust could be so useless that unsafe winds up being a large proportion of one’s code.
                              3. If unsafe is used infrequently, then folks will have less experience with it, and therefore might be more inclined to screw it up when they do need to use it.

                              But they are tradeoffs. That’s my point. They must be evaluated in a context that compares your available choices. Not some idealized scheme of perfection. Everyone who has used Rust has formed their own opinions about how well it guards against memory safety bugs. If we’re lucky enough, we might even get to collect real data that supports a conclusion that using Rust leads to fewer memory safety bugs than C or C++ in the aggregate. (The answer seems obvious enough to me, and I have my own data to support it, but it’s just anecdotal.)

                              And a “measurable reduction” instead of complete prevention makes Rust a lot less compelling than just using people’s existing knowledge of C and competent analysis tools and practices.

                              “you aren’t perfect, so you aren’t worth my time” — That’s an amazing ideal to have. I don’t know how you possibly maintain it. Seems like a surefire way to never actually improve anything! Surely I must be mis-interpreting your standards here?

                              For me personally, I’m more inclined to not let perfect be the enemy of good.

                              Given the amazing marketing efforts by the Rust Evangelion Strike Force and friends, I’d rather hope you all would look to see just how good you could make it.

                              <rolleyes> Go troll someone else.

                              1. 2

                                “you aren’t perfect, so you aren’t worth my time” — That’s an amazing ideal to have. I don’t know how you possibly maintain it. Seems like a surefire way to never actually improve anything!

                                As you said, there are plenty of tradeoffs with Rust’s memory safety scheme, and established industry knowledge of C vs. Rust is just another kind of tradeoff. That seems to be his point.

                                1. 3

                                  If that was the point, then I’d be happy, but that’s not my interpretation of friendlysock’s comments at all. (They contain zero acknowledgment of tradeoffs, and instead attempt to judge Rust against a model of perfection.)

                                  1. 1

                                    And a “measurable reduction” instead of complete prevention makes Rust a lot less compelling than just using people’s existing knowledge of C and competent analysis tools and practices.

                                    I interpreted this as “use existing knowledge and tooling to make C programs better, or switch to a new language with some embedded memory safety guarantees.” Sounds like there are tradeoffs in there to me, especially if you rely on unsafe code.

                                  2. 1

                                    You’ve grokked the essence of it–it’s not just industry knowledge, it’s also things like the vast amount of code which, while unsafe, has been tested and patched, and the operating system protections put into place to mitigate compromised programs.

                                    Who cares if somebody can escape to a shell via an overflow if they end up as a neutered user account?

                                    1. 5

                                      Who cares if somebody can escape to a shell via an overflow if they end up as a neutered user account?

                                      Step 1: Get barely-privileged account.

                                      Step 2: Privilege escalation with another bug.

                                      This works so much it’s standard in hacking guides. So, preventing that vulnerability bug is quite worthwhile. If you’re preventing step 2, then whatever they’re interacting with that’s privileged has to have no or few bugs. That hasn’t been true in mainstream software. So, these two steps are both worth putting extra effort into given the countless vulnerabilities that have happened with each.

                                      1. 2

                                        Fair enough!

                                2. 4

                                  It’s impossible to have a programming language that is simultaneously 1) good for systems-level programming and 2) has no mechanism for bypassing memory safety. On Linux you can simply read and write from /proc/self/mem. Windows and Mac OS X have similar mechanisms.
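                                  A hedged sketch of the /proc/self/mem point on Linux (the helper name is invented, and the program skips itself where /proc isn’t writable): an immutable local is overwritten using nothing but safe file I/O, with no `unsafe` anywhere on the write path:

                                  ```rust
                                  use std::fs::OpenOptions;
                                  use std::io::{Seek, SeekFrom, Write};

                                  // Try to overwrite `target` through /proc/self/mem using only safe
                                  // file I/O. Returns the value observed afterwards, or None if the
                                  // file isn't available (non-Linux, sandboxed, etc.).
                                  fn clobber_via_proc(target: &u8) -> Option<u8> {
                                      let addr = *&target as *const u8 as u64;
                                      let mut mem = OpenOptions::new().write(true).open("/proc/self/mem").ok()?;
                                      mem.seek(SeekFrom::Start(addr)).ok()?;
                                      mem.write_all(&[42]).ok()?;
                                      // The volatile read only defeats constant folding; the *write*
                                      // above used no unsafe code at all.
                                      Some(unsafe { std::ptr::read_volatile(target) })
                                  }

                                  fn main() {
                                      let x: u8 = 1; // an immutable binding
                                      match clobber_via_proc(&x) {
                                          Some(observed) => println!("clobbered through the filesystem: {observed}"),
                                          None => println!("/proc/self/mem unavailable; skipping"),
                                      }
                                  }
                                  ```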

                                  1. 1

                                    Is that true?

                                    Java ME Embedded, for example, was successfully used as a systems programming language without that escape hatch. If one reads the Oberon documentation, it looks like this language (used for real systems!) managed to support pointers without a lot of the pitfalls.

                                    1. 3

                                      So your definition of “good for systems-level programming” excludes being able to read and write from the filesystem on Linux?

                                      1. 0

                                        That’s not a reasonable criticism. The programming language is not asked to enforce safe use of the memory subsystem or the file system or to keep someone from jamming a paperclip in the processor fan. The question is purely a design decision for the language. Obviously there are many ways to split the difference and, right now, none of them are totally satisfactory.

                                        1. 6

                                          I believe my criticism is quite reasonable.

                                          • friendlysock wants “complete prevention,” and is against “measurable reduction.”

                                          • Rust is a systems programming language, and for a systems programming language, security is relative to the set of syscalls available.

                                          • Popular operating systems have syscalls that violate memory safety.

                                          • Therefore, “complete prevention” is out of the picture, and we must talk in terms of “measurable reduction.” This completes my argument.

                                          It often isn’t worthwhile to model “paperclip in the processor fan”, although, sometimes it is. If you’re designing a programming language that can deliver correct results in the face of faulty hardware, then random bit flips need to be part of the model. If you’re designing a systems programming language, then the syscall interface needs to be part of the model. Once you acknowledge that, you find that fretting about the programmer using unsafe when they shouldn’t is pointless.

                                          1. 1

                                            friendlysock wants “complete prevention,” and is against “measurable reduction.”

                                            Not quite–I’m not “against” it, it’s just that the benefits of using something that only provides “measurable reduction” instead of “complete prevention” are not sufficiently large when I also take into account retraining and retooling.

                                            Popular operating systems have syscalls that violate memory safety.

                                            Ah, I think I see the angle you’re taking. I kind of assume that since we’re talking a systems programming language, we ignore ill-conceived syscalls since we could be writing an OS that doesn’t contain them.

                                            1. 1

                                              If we’re talking about a language for writing applications for an operating system that uses type safety to enforce process isolation, then I agree, Rust as-is is not suitable for the task. Maybe Rust combined with a different language for unsafe segments, as mentioned in this thread.

                                            2. 1

                                              If you’re designing a systems programming language, then the syscall interface needs to be part of the model.

                                              Since it’s highly relevant to this topic, see Galois’s presentation on Ivory, a synthesis language for systems. Notably, they assume that an underlying system task scheduler exists.

                                              1. 1

                                                You might find immunity-aware programming interesting:

                                                https://en.m.wikipedia.org/wiki/Immunity-aware_programming

                            2. 2

                              “I’d need to encode the entire spec of that hardware into my programming language’s spec.”

                              That’s pretty much what you do. You can do abstract state machines with their key properties to avoid doing the whole thing in detail. You do one each for the program and the hardware function. Then you basically do an equivalence check on the likely inputs.

                              Alternatively, you assume the hardware works, then specify and implement the unsafe stuff in something like SPARK Ada. Prove it safe. Then wrap the result in Rust with interface checks that ensure safe Rust code uses the “unsafe,” but verified, code it’s calling in a safe way. I think Rust would do well combined with tech like Frama-C or SPARK for the unsafe parts.

                              The end result of either method is that only the lowest-level or least-safe stuff needs extra verification. That still reduces the burden on developers a lot versus looking for all kinds of undefined behavior.

                              1. 4

                                What does this have to do with whether Rust’s safety guarantees are completely encoded in the language vs permitting users of the language to use unsafe markers?

                                1. 2

                                  I’m only addressing how to handle code that’s actually unsafe, is included in Rust apps, and can maybe break the safety. I don’t know the implications or uses of Rust’s unsafe keyword enough to do anything further. Just defaults I say for any safe, systems language with unsafe parts. :)

                                  1. 2

                                    The point is that encoding the hardware semantics is not sufficient to remove unsafe while retaining Rust’s “zero-overhead” guarantee. You would also need to add a logic that subsumes quantifier-free Peano arithmetic to your type system which will seriously gum up the little type inference that Rust does already. One alternative is to use a different (more proof-oriented) language and type system for the unsafe bits. The boundary between unsafe and safe then is well-defined as the point where the invariants of the unsafe code can be expressed in terms of Rust’s type system.

                                2. -1

                                  That’s my point. I want to write an OS. You propose a programming language that includes elaborate type safety mechanisms, particularly strong control of pointers, and explain that it is much better than C/C++ or Ada or assembler or some other unsafe language because of this mechanism, and also that when I need to write a VGA driver or parse a command string or do any of the other things where raw pointers are most problematic, I can escape the control! So, to me, you haven’t solved the actual problem, you’ve just made coding more inconvenient.

                                  The basic problem is very difficult, I agree.

                                  Fundamentally, the difference here is “when you use FFI you don’t know because it’s not marked”, and unsafe is marked. Why is not marking it more coherent? They’re isomorphic otherwise.

                                  How do I know when I call a library function that, 7 layers down, some knucklehead decided the code would look “more clean” using the unsafe escape? You are right: it’s essentially an FFI. So what have you gained? It may well be that the general design of the language is great and has other virtues, but you have not really solved the problem of unsafe pointers; you’ve just swept them under the FFI. The FFI at least gives me a clean separation.

                                  1. 5

                                    you just made coding more inconvenient

                                    In my experience, coding becomes more convenient. Is your experience to the contrary? Could you elaborate?

                                    The basic problem is very difficult, I agree.

                                    What do you think the basic problem is?

                                    So what have you gained?

                                    A tool to encapsulate unsafety.

                                    1. 1

                                      A tool to encapsulate unsafety.

                                      Textually. But e.g. a buffer overflow in this “encapsulated” code can spill into your safe zone - no?

                                      1. 6

                                        Yes, but then you know precisely from where it spilled, unlike in a language that does not demarcate unsafe code. That’s what you gain. When things go wrong, it’s quite quick to hone in on the problem area. The surface area to explore is reduced potentially by orders of magnitude.

                                        1. 3

                                          Yes. If there’s a bug. What form of encapsulation doesn’t have the problem that it might contain bugs?

                              1. 2

                                Philosophical question: who do you pay back technical debt to? I think this is where the analogy breaks down. Ward Cunningham invented the term to explain what his engineering team was doing to a bunch of finance people. While the analogy may have made sense to said finance people, it makes less sense to me.

                                I prefer the idea of entropy instead of technical debt.

                                1. 7

                                  Your (future) self. We often speak of living on borrowed time, even though there’s no time bank from which to borrow time. Possibly alternately phrased as the devil always gets his due.

                                  The financial metaphor is very accurate. It’s all about time value. Having a feature today can be worth much more than the cost of building it twice over the long term.

                                  1. 1

                                    I don’t think it is that simple. What about when you leave the org? Maybe the org owns the debt? Who does it get the debt from? Itself? What about when the org fails? Does the debt disappear? Maybe they open source it and the greater society takes on the debt? Does the debt have interest? Usually you have to consciously acknowledge taking on debt. How often do you find engineers consciously deciding they are taking on “technical” debt? Debt typically has a payback schedule; does technical debt have a schedule?

                                    It’s not that simple.

                                    1. 1

                                      The debt is like a lien against the code. As the owner of the code today, you get to make the decision to incur debt. When you leave and give the code to somebody else, they get the debt too.

                                      No, technical debt doesn’t require sitting down with a loan officer and filling out forms in triplicate and waiting for approval.

                                      1. 2

                                        BRAINWAVE: securitizing technical debt. GET ME A16Z, this bad boy isn’t going to last!

                                        1. 1

                                          Now you are getting into some classic religious territory about debt to the universe type stuff. We have a lien against the code which we borrowed from the gods.

                                    2. 4

                                      My favourite metaphor is a comparison to either a legal system or a tax code.

                                      As a society evolves, laws are re-written and supersede (parts of) previous laws, but the old laws aren’t usually retracted explicitly. The legal system becomes more complicated to interpret and reason about in general over time. Each legal document or law often references other clauses elsewhere as exclusions or edge-cases, requiring you to read or be familiar with those other parts in order to make sense of the document you’re interpreting. Anyone who has done taxes in the US knows how difficult it can be to work within complex tax codes.

                                      It is possible to aggregate and “compress” the effective laws or tax code into a simplified and revised set of documents that would be less ambiguous, easier to interpret/change, and which read mostly linearly. Refactor it, if you will.

                                      1. 2

                                        Does it really break the analogy? How much does the other party matter? I suppose in theory it’s possible to renegotiate the terms of your real-world debt, but how often is that an actual concern for the business?

                                        I think entropy is a worse analogy in many respects. It suggests an inevitable law of nature rather than a choice, and that only to the people to whom it suggests anything at all.

                                        1. 1

                                          I think it is more accurate because typically you have to consciously take on more debt, while most developers don’t even acknowledge they are taking any “debt”. Often many decisions are not conscious. It’s more like an inevitable law of nature you are fighting, and if you don’t fight it you die.

                                        2. 1

                                          To me, entropy carries the sense of inevitability and so removes the sense of it being worthwhile to oppose it.

                                          I prefer the debt metaphor since you are borrowing some effort from the future to have something today. If you are lucky you can write it off and have nothing to pay back.

                                          1. 1

                                            You are fighting entropy just by being alive. Life IS about fighting entropy. And yes death is inevitable.

                                        1. 5

                                          Closing one’s laptop is perhaps the easiest way to lock it if you’re running OSX. For the life of me, I cannot understand why there isn’t a fast lock hotkey for OSX (like Ctrl + Alt + L [Ubuntu variants] or Win + L [Windows]).

                                          The closest thing OSX has is Cmd + Opt + Power, which puts the machine to sleep. This isn’t useful if I want to step away from my machine while I’m SCP'ing a large file to a remote server! Just because I’m stepping away from the physical machine doesn’t imply I want it to go to sleep.

                                          1. 8

                                            There is a hotkey: Alt+Shift+(Eject or Power, depending on your Mac). You just have to set, in System Preferences, that the machine prompts for a password after the display turns off.

                                            This way, the screen will turn off and when you reactivate it, it will ask for a password.

                                            1. 7

                                              *Ctrl+Shift+Eject

                                              1. 2

                                                Ah sorry, yes! It’s been a while since I’d last used my Macintosh computer.

                                                1. 1

                                                  Thanks! Can’t believe I never came across this particular hotkey after all this time of using a Mac :)!

                                              2. 7

                                                On my Mac I configured a hot corner to lock the screen, so I just swipe the mouse down to the lower right corner of the screen and it locks immediately. Here’s a tutorial: http://it.emory.edu/security/screensaver_password.html

                                                1. 5

                                                  I use Alfred (Spotlight search replacement), which has a lock command, along with other useful commands like eject.

                                                1. 2

                                                  I’ve been using xhyve indirectly via dlite which is a transparent way to use docker on OS X which works superbly (for local dev scenarios at least—not production). It’s a really nice combination of tech IMO.

                                                  1. 4

                                                    Some of these are great, but the creator obviously got carried away with the likes of #{. Not every two symbols which sit next to each other in code need to or should be a ligature. However, I really like >=, ->, !=, >>, &&, ||, <=>, etc. Particularly silly are the comment prefix ligatures. I think I’d be happy if all the ligatures which reduced the kerning were dropped and the ones which substituted in symbols of significance remained.

                                                    1. 4

                                                      Yes. I like the idea a lot more than I like the implementation. If a symbol isn’t legible, it shouldn’t be in the set until a better rendition is invented.

                                                      I hope the ones that just tweak the kerning are done via the actual kerning table, and not the ligature tables. :) But I suppose that only matters to the font maintainer…

                                                    1. 3

                                                      Fantastic article @tylertreat. This very closely echoes my own experiences with Go, though ultimately the experience led me to put Go to the side. I spent a long time (too long) in a frustrating love/hate relationship with it. Thanks for writing this up—I’ll be linking to this often, I think.

                                                      1. 1

                                                        This was very interesting. Conclusion: don’t use any grammatical structure in your passphrases a la http://xkcd.com/936/

                                                        1. 1

                                                          I don’t know, a “more than 50%” decrease in search space isn’t that significant if you’re dealing with search spaces of the size that secure passwords have.

                                                          (Also “correct horse battery staple” doesn’t have any grammatical structure.)
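                                                          To put the parent’s point in numbers, here is a quick back-of-the-envelope sketch. The word-list size and passphrase length are my own illustrative assumptions (a 2048-word Diceware-style list, four words, xkcd-936 style), not figures from the study being discussed:

```python
import math

# Illustrative assumptions: 2048-word list, 4-word passphrase.
WORDS = 2048
LENGTH = 4

full_space = WORDS ** LENGTH          # every equally likely passphrase
full_bits = math.log2(full_space)     # 4 * 11 = 44 bits of entropy

# A "more than 50%" cut in the search space removes just over one bit:
halved_bits = math.log2(full_space / 2)

print(full_bits, halved_bits)         # 44.0 43.0
```

                                                          Halving the search space only shaves about one bit off the attacker’s work factor, so a 44-bit passphrase is barely weakened by a reduction of that size.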

                                                        1. 1

                                                          <3<3<3

                                                          1. 3

                                                            Mostly I’m curious: aside from VimScript (which, at this point, I think is honestly a straw man in these conversations) what is it about Vim that its users don’t like? I’ve been using Vim for about 18 months; I’ve put together a vimrc that works for me and a small set of plugins that add functionality that I find useful.

                                                            What is it that drives people who really grok Vim to explore other editors—and then do their best to make them like Vim?

                                                            1. 4

                                                              VimScript is more than just a straw man, IMO. VimScript is the entire gateway to extensibility in Vim, and its abilities and flaws directly affect the experience in the editor. The main thing other than VimScript is how atrociously Vim handles external processes, especially if you want to interface with that process via a buffer (e.g. a terminal, REPL, whatever). Vim’s extensibility model is also complex and finicky.

                                                              Don’t get me wrong… I put a lot of love into my .vimrc to get it right. But, when I have to use another editor, the things that are truly integral to the essence of Vim and the powerful abilities it brings are actually a pretty small part of Vim.

                                                              I explored Emacs mostly because I was doing more Lisp; since Emacs is partially written in a lisp, its lisp support (esp. with things like Paredit) is phenomenal.

                                                              1. 4

                                                                I had a pretty elaborate vim setup https://github.com/mbriggs/dotvim, and now I am using emacs with evil. I’ll go through the reasons why, with a giant caveat being I actually don’t care what editor anyone who reads this uses, so long as they learn it and it works for them.

                                                                As someone who has used both: vim shines for people who aren’t into heavy customization. You can heavily customize vim, but it is so easy to make it dog slow, and even with a ton of work you won’t hit what you can accomplish in emacs. Here are some examples, and also essentially why I am no longer using vim. YMMV.

                                                                If I want to run a command and have it pipe to another buffer AND not completely lock up the editor, I can do that with a few lines and compilation-mode. This is next to impossible in vim. I can split my editor window and have a shell running on the other side, one I can use all the keys and tools on that I use to edit code; again, not possible in vim. I can have a repl connected to my editor that the editor uses for auto-complete targets, and to pass code to for evaluation. You can fake some of this in vim with a bunch of hackery, but it is nowhere near as nice.

                                                                There is also a pretty wide range of modes that would be possible to do in vim, but for whatever reason just aren’t there. Some I use constantly all day:

                                                                - smex lets me fuzzy-narrow a list of all commands in the editor to find what I want (kind of like Sublime’s command palette).
                                                                - auto-complete will put up a light grey outline of text as you type if it finds things you can complete; if you want to select it you hit tab, and if you ignore it it won’t intrude on your life.
                                                                - magit is sort of like fugitive, just way more full-featured, and the UI is quite a bit nicer. I have tried a bunch of git GUI tools, and even those costing ~$80 really don’t hold a candle to magit (once you learn how to use it).
                                                                - flymake tells me about syntax errors as I type.
                                                                - js3-mode has some of the best JS indentation I have seen, and does full AST parsing, which means it can tell you things that are wrong with the code as you type. Linting on save works as well, but this is nicer.
                                                                - org-mode is an amazing tool for many things. I use it for notes, team brainstorming sessions, and todo lists. Say I am testing a CSV output: if I paste it into an org-mode buffer, I can C-c |, and it becomes a table that I can navigate, modify, sort, etc. I haven’t used any general-purpose structured-text tool that even comes close. If you pair it with deft, and store your org files on Dropbox, you can have an amazing search interface to a directory of your notes/todos/etc. that auto-backups/replicates. This is just scratching the surface.
                                                                - calc-mode is the most advanced calculator app I know of on my computer.
                                                                - regex-builder I use regularly.
                                                                - IDO mode is so sweet it is really painful to watch vim people use :Explore.

                                                                Finally, the last piece is elisp vs vimscript. I got to the point with my vim usage that I needed to learn vimscript to do what I wanted to do, and I hated it. elisp has its own quirks and baggage, but it is so far ahead of vimscript in every way that you can barely compare the two. vimscript is a giant hack tacked on to a massive existing set of commands, compared to emacs, which is an elisp platform that happens to have implemented editor functionality.

                                                                1. 2

                                                                  The formatting of your comment got messed up – it’s all one big paragraph. Here’s a version I made that corrects that, making the comment easier to read.

                                                                  Use two consecutive newlines for a paragraph break, not just one newline. See my version’s Markdown source for an example. You can check your formatting with “Preview Comment” before posting.

                                                                  1. 2

                                                                    I wish I had read your comment before reading the entire über-paragraph.

                                                                    1. 2

                                                                      I think I actually hit some sort of bug. It was nicely formatted, but I edited, which created a new comment. Then I copy-pasted the edit into this one, and made the mega-paragraph. Now I can’t edit :(

                                                                1. 1

                                                                  The feature list says it has an A8 processor, but the photo shows it with an A10…

                                                                  1. 3

                                                                    Having done a bunch of Clojure recently, the prospect of using it’s varying data representations in other contexts and languages makes me very excited…

                                                                    1. 1

                                                                      it’s –> its

                                                                    1. 2

                                                                      This article is neat because I think that it explains the difference between NJ style and MIT style on a personal level, rather than on a project-wide level, looking into the root causes of the MIT style. Although Ritchie could just dig in and make what people wanted, the lispers got stuck doing the right thing. CL was something of a concession, with loops, etc, but not “hack”-ish enough to cut it.

                                                                      My favorite part of this is definitely the Into the Woods quote though.

                                                                      1. 3

                                                                        My favorite anecdote about MIT vs. NJ style comes from Richard Feynman in “Surely You’re Joking” (pg 62). Feynman graduated from MIT undergrad and entered a PhD program at Princeton, because all the best results in experimental physics were coming out of their cyclotron. When Feynman showed up he realized why. Everything at Princeton was a hack job: wires strewn everywhere, a real mess. But they were agile, hacked things fast, and got the best results the fastest. But all that chaos eventually caused a fire that destroyed the cyclotron.

                                                                        http://books.google.com/books?id=7papZR4oVssC&pg=PA62&lpg=PA62&dq=%22incidentally+they+had+a+fire+in+that+room%22+feynman&source=bl&ots=esR-ggmPZ0&sig=E3O347XzCtTKYP8G0CvplUV7UMs&hl=en#v=onepage&q&f=false

                                                                        Startups hack because that’s how you make big wins. But once you’re established the risk of fires outweighs the potential gains, so you slow down and do things MIT style.

                                                                        1. 1

                                                                          My favorite part of this is definitely the Into the Woods quote though.

                                                                          <3 Sondheim

                                                                        1. 3

                                                                          Being a hacker has come back to bite me in some ways, though. Poor architecture decisions can really slow you down once you’ve gotten past the proof-of-concept stage. I’ve rewritten products from scratch after POC and I gained iteration speed because of it, but I do wish I had thought a bit harder about the arch before I jumped in.

                                                                          1. 2

                                                                            Agreed. That’s what pushed me to the other side. I think the answer is somewhere in the middle… at least, for me.