1. 1

    The generics story doesn’t really make sense to me. Let’s say I have a function Fun1 that works on a list of anything that’s equatable. However, I’m writing a new function Fun2, using the old one, that works on a list of anything that’s well-ordered—a more specific condition. How do I specialize Fun1 in my definition of Fun2, when I must provide a concrete type in order to call the function?

    1. 2

      I’m not sure I see the problem. It seems straightforward to me:

      contract Equal(t T) { t == t }
      contract WellOrdered(t T) {
          Equal(t)
          t < t
      }
      
      func Fun1(type T Equal)(list []T) { ... }
      func Fun2(type T WellOrdered)(list []T) { ... Fun1(list) ... }
      
      func main() {
          Fun1([]int{1,2,3})
          Fun1([]float64{1.0, 2.0, 3.0})
      }
      
      1. 1

        Where is this behavior described? I was under the impression that you had to hand the function a concrete type.

        Also, note that my choice of “well-ordered” was arbitrary. However, a more rigorous definition is required, as there must be a guarantee of exclusivity, i.e., “for all a and b, exactly one of a < b, a == b, and a > b is true”.

        func Ident(type T)(val T) T {
            return val
        }
        
        contract WellOrdered(t T) {
            len(Filter([3]bool{t < t, t == t, t > t}, Ident)) == 1
            // slightly inefficient as Go has strict evaluation
            // also, conditions aren't enforced in contract bodies, only usage
        }
        

        I get that it isn’t Go’s goal to provide these kinds of guarantees. But in that case it would be inaccurate to call the contract “WellOrdered”.

    1. 1

      Looks great! But I don’t understand the rationale behind not allowing genericity on struct methods. Why is that case any more complicated than on normal functions, which they say this proposal will support?

      1. 3

        I think it is due to reflection and their “dual-implementation” constraint. In Go you can query any type for its number of methods (and iterate over them) using reflect.Type. It would be impossible to implement the reflect.Type.Method* family of methods for a public type under a static generics implementation strategy: when compiling a package, you have no way of knowing how many different methods a type will have in the final executable.
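
        For reference, a minimal sketch of the reflect.Type calls in question (plain standard-library reflection, nothing generic): they promise a complete, enumerable method set at run time, which a statically-instantiated set of generic methods couldn’t guarantee up front.

        package main
        
        import (
            "fmt"
            "reflect"
            "strings"
        )
        
        func main() {
            // reflect.Type exposes the complete method set of a concrete type.
            t := reflect.TypeOf(strings.NewReplacer())
            for i := 0; i < t.NumMethod(); i++ {
                fmt.Println(t.Method(i).Name) // Replace, WriteString
            }
        }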

        1. 1

          Ah, I see. Seems like an odd concession to make – I would think the utility of generic methods would outweigh the use of reflection. Also, what would have happened if generic functions were also at odds with the reflection API? Would that have torpedoed the whole thing?

          1. 2

            Go has a philosophy of only adding orthogonal features.

            They aren’t willing to have ‘reflection works except on generic methods’, and they aren’t going to break backwards compatibility on reflection.

            1. 2

              Well sure, but now they’ll have ‘generics work except on methods.’ As annoying as it seems I respect the unwillingness to break backwards compatibility, though.

            2. 1

              They want to maintain compatibility with Go1 which means that all the old APIs should still produce correct results.

        1. 13

          Thanks @sjl! I keep thinking I should try again at Lisp and this post will make a handy roadmap to refer to.

          But my number 1 question, why aren’t there more significant projects written in lisp?

          I would think that it would become a dependency for some important piece of software but it never is (aside from Emacs).

          Git is so important that it will require perl and bash on my local machine (even on Windows!), mercurial will include python, firefox is starting to sneak Rust in piece by piece, but is Lisp just so happy being on its own island that it never gets used by any other project? Clojure flipped this by building the lisp on top of the other language, which I think contributes to its success.

          Or does its flexibility make it difficult to use in large organizations (with varying levels of experience)? Or something else?

          Thanks

          1. 13

            First, there are at least some popular non-trivial projects that use Common Lisp – pgloader is an example. But you’re right that it’s certainly rare.

            The glib answer to “why aren’t there a lot of Common Lisp projects” is “because there aren’t a lot of Common Lisp programmers”. I think there are a couple of reasons for that.

            One reason is that the barrier to entry for Common Lisp is really high. I’ve had people tell me I could learn Go in a month. I don’t know if that’s true (though I’ll be finding out when I start a new job in Go in October) but I can tell you that you definitely cannot really learn Common Lisp in a month. I think even six months of hard work would be pushing it. It’s been three years for me, with two of them being working in CL almost full time (thanks, grad school), and I’m comfortable now but still feel like I have a lot to learn. Most people aren’t lucky enough to have a few years of their life to dedicate to learning a language, so the pool of Common Lisp programmers is always going to be smaller than languages with less of a barrier to entry.

            Another reason is that there are some really common misconceptions about Common Lisp, the primary one being “it’s a functional programming language”. It’s not. It’s a language you can do functional programming in if you want, but is really a procedural language at heart. The misconception is bad for two reasons: people who start learning it because they want a functional language get disillusioned and quit, and people who would otherwise want a procedural Lisp never consider it because of the reputation.

            Your example of Python is a good one because it illustrates another aspect of the chicken/egg problem: Common Lisp isn’t installed by default anywhere. If I write a small script in Python I can be pretty sure I can just run it on a server somewhere. If a newbie wants to get started writing some Python on MacOS or Linux they can just dive in and worry about all the virtualenv shit later. But CL isn’t included by default on any distro that I know of, so that’s another barrier to entry to overcome.

            Those are just a few reasons off the top of my head. To sum it up: I think there’s a bunch of reasons that all feed back on each other and result in a high barrier to entry for Common Lisp. This means there are fewer Common Lisp programmers overall, and that results in fewer Common Lisp projects overall.

            1. 6

              Why is the barrier to entry for Common Lisp so high? Is it just the absurdly high number of things that can be done in it? It kinda sounds like the C++ of parenthesis-languages, if the learning curve is that long.

              1. 10

                A couple of reasons:

                • It’s a big language (though not C++ big, I think). CLtL2 is one book, but it’s a thousand-page book.
                • A lot of things are structured and/or named oddly for historical reasons. You need to unlearn and relearn a bunch of stuff, and just have to memorize other things.
                • There’s more high and low-level power than you might be used to in a single language. If you really want to learn CL you’ll want to eventually get comfortable writing super high-level metaprogramming macros and also reading x86 assembly spit out by DISASSEMBLE.
                • CL tends to err on the side of giving the programmer freedom instead of saying “there is exactly one way to do it” (e.g. the package system’s orthogonality with files).

                In short: the barrier to entry is so high for the same reasons it took me ten thousand words to describe how to learn it :)

                1. 8

                  Shorter: Common Lisp is the combined second-system effect of multiple existing Lisp implementations, all implemented by people who worked at places like the MIT AI Lab and Symbolics and who were, therefore, accustomed to complicated and powerful systems. I mean, when your OS’s command line interpreter is a machine code debugger (HACTRN on ITS), you obviously don’t have much respect for an argument that a given feature gives programmers “too much” power.

                  1. 4

                    That’s exactly what I read happened. Each one was a powerful toolset people were using. Common LISP tried to be the one language to rule them all. It was a huge mess that achieved its goal. A programmer wanting LISP-like power without compatibility with the older, merged LISPs might want something else entirely. Hence the success of some Schemes, Clojure, and even non-LISPs with macro systems. There are still people that like what Common LISP has, too.

                    If CL is too much, there’s other options that get one many benefits.

                  2. 3

                    It’s a big language (though not C++ big, I think)

                    It looks like the CL standard is quite close in size to the C++ one.

                2. 5

                  You don’t write about Clojure much lately, is that because of a preference for CL / distaste for Clojure? Would be interested in your current thoughts on Clojure.

                  1. 2

                    Having learned Go (and written the majority of a decent-sized production system), I’d say a solid programmer can be productive in a month. I taught it to myself (mostly) on a greenfield project, and if you were working in an environment with knowledgeable co-workers or some code to reference and build on, that shouldn’t be too much of a stretch.

                    1. 3

                      You can be a productive programmer in CL in a couple of weeks as well. IIRC Dan Weinreb said they gave new hires PCL and two weeks. But being able to contribute to a code base doesn’t require learning the whole language. For example, you can be productive without learning how to customize the reader.

                      1. 2

                        Respectfully, I’m going to say Dan Weinreb had a sampling bias. Not all of us live near schools that have students (much less faculty) that are bastions of Lisp (or scheme) programmers. Not to mention that most MIT engineers don’t typically work next to programmers from $SMALL_STATE_SCHOOL.

                        1. 5

                          Lisp, and Common Lisp, are not complicated languages to learn. Quite the opposite in fact and that is the reason I like them.

                          There is a myth that they are complicated. When I tell others Common Lisp is my favorite language one of the first things said (besides the obvious parenthesis jokes) is: “Ooh, that’s such a hard language!” I don’t know where that comes from.

                          C++ is hard, Common Lisp is not.

                          1. 2

                            Not all of us live near schools that have students (much less faculty) that are bastions of Lisp (or scheme) programmers

                            They were talking about people who didn’t know any Lisp before joining the company. So I’m not sure why the availability of Lisp programmers would be of any relevance to the question of how long it takes a programmer to go from “I don’t know Lisp” to “I can contribute to a code base”.

                      2. 2

                        I can tell you that you definitely cannot really learn Common Lisp in a month

                        I think you can be productive in a month or two. (In any language, of course, it takes much longer to become expert.) A lot of people compare their experience learning Common Lisp on their own to learning other languages on the job, or in some other collaborative environment where it’s easy to ask questions of more experienced people, or where you are working on an existing codebase, written by people skilled in the language, with patterns you can emulate. It’s much easier to learn a language in that kind of environment, but there are few opportunities to learn Common Lisp that way. I think it’s these circumstantial factors, not anything intrinsic, that give Lisp its reputation as a hard language to learn.

                        1. 1

                          Is the barrier to Common Lisp really high?

                          I’m not sure I agree. It’s pretty easy (except perhaps on Windows) to just start up SBCL and fiddle around. There are no huge swaths of things you need to install or IDEs to tweak to do a simple “Hello, World!” or some exercises from a book.

                          Also, it’s a very practical language that does not get in the way. Lisp itself is syntactically very simple.

                          I do admit that for certain projects (e.g. GUIs) things get a little harder, and the standard library is huge, but that’s something you pick up along the way.

                        2. 9

                          Lisp provokes an instinctive hostility from others, much like Haskell.

                          The Common Lisp community is a crew of mostly loners.

                          Writing Lisp well takes, in my experience, an above average programmer, and in a group, a programmer sensitive to varying capabilities.

                          Many groups reject tools that demand anything above average.

                          GUI integration is, as usual for most open source, terrible.

                          Web service software tends to be messy.

                          No company or Glamorous Person really is championing Common Lisp. Unlike Haskell, the academy has rejected Common Lisp (but tolerates Scheme).

                          Companies absolutely loathe risk and extra investment, so the above keeps Lisp out of the corporate environment.

                          All of the above keep Common Lisp roughly around OCaml in capability and mindshare. It’s a shame.

                          1. 3

                            Writing Lisp well takes, in my experience, an above average programmer, and in a group, a programmer sensitive to varying capabilities.

                            I disagree with this assessment. The main attraction of CL to me is that it gets out of my way when I want to do something. I don’t have to jump through the hoops of a particular programming language’s arbitrary restrictions. All programmers may benefit from this quality. I would concede that it may take an above average programmer to exploit said freedom.

                            1. 7

                              A lot of programmers are simply uninterested in expressive power. And, a lot of engineering leads prefer to limit expressive power to limit cleverness. Hence Java and Go.

                              I understand the power of Common Lisp - I maintainish two Common Lisp assistance sites and used it for 8 years nearly continuously at home; I still use it myself for my own site. But how typical industrial programming work goes is simply opposed to the Common Lisp ideals, in my experience and discussion with others.

                          2. 3

                            The main reason why you won’t see a Lisp ‘killer app’ as part of the user’s OS is that Lisp’s view of the world is incompatible with Unix’s. Unix accommodates different languages by providing a ‘common interface’ in processes. To some degree, one can draw the analogy of processes being function calls, the standard input being the parameters of the function and the standard output its return value. By contrast, in Lisp (and Smalltalk) you start with a base world (your image) and you add your program into the world little by little, by incremental compilation. Your compiler, debugger, and the rest of the environment are part of your image. To accommodate the Unix world view would mean recompiling the whole program each time, which destroys the advantages of incremental development offered by Lisp.
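
                            (As a rough sketch of the Unix side of that analogy, in Go only for concreteness: a filter is a “function” whose argument arrives on stdin and whose result leaves on stdout.)

                            package main
                            
                            import (
                                "bufio"
                                "fmt"
                                "os"
                                "strings"
                            )
                            
                            // A process as a "function call": stdin is the argument,
                            // stdout is the return value.
                            func main() {
                                in := bufio.NewScanner(os.Stdin)
                                for in.Scan() {
                                    fmt.Println(strings.ToUpper(in.Text()))
                                }
                            }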

                            Or does its flexibility make it difficult to use in large organizations (with varying levels of experience)? Or something else?

                            We know it’s been used in large organizations (e.g. ITA, now Google Flights) so it’s not a question of capability.

                            1. 1

                              I would say the mindset of not wanting to deal with the outside world seems problematic. You wouldn’t want to write a program that solves a problem for the rest of the world, because the other problems are solved by something else?

                              I knew of ITA/Google Flights, but didn’t think it qualified as a large organization (though I’ll happily be corrected).
                              I know that the designers of Java and much later Go and even later Dart all specifically tailored aspects of the language towards large groups with varying skill levels, but I’ve never come across anything that mentioned that was a design goal of Common Lisp - again, happy to be corrected.

                              1. 2

                                I would say the mindset of not wanting to deal with the outside world seems problematic.

                                I didn’t say that Lisp doesn’t want to deal with the outside world. I said that its view of the world is different from Unix’s. In Lisp a ‘program’ is no different from a function. How else are you going to edit your compiler while it’s running? Your debugger while you’re debugging?

                                Mind you, you could have a Lisp implementation that performs the edit/compile/run cycle from the CLI: mocl does AoT compilation and there is wcl (which is mostly a PoC). But that would destroy any semblance of interactive development, which is one of the main differentiating features of Lisp. It would be no more interactive than other batch-oriented PLs, say Racket.

                                I knew of ITA/Google Flights, but didn’t think it qualified as a large organization

                                So what number qualifies? 50+ (or 70, I’m not sure) developers working on a 2M+ LoC codebase is a large organization in my book.

                                1. 1

                                  Thanks, I was unsure of the size of ITA (pre-acquisition) or Google Flights (post-acquisition).

                          1. 8

                            an alternative to GOPATH with integrated support for versioning and package distribution

                            Right, after saying no one needs them for years. ;)

                            1. 6

                              an alternative to GOPATH with integrated support for versioning and package distribution

                              Right, after saying no one needs them for years. ;)

                              Was it really the case? From Russ first post on vgo/modules:

                              It was clear in the very first discussions of goinstall that we needed to do something about versioning. Unfortunately, it was not clear, at least to us on the Go team, exactly what to do. […] This ignorance of package versioning has led to at least two significant shortcomings.

                              And when you read that linked thread you will see that Russ agrees that versioning is needed:

                              All the problems about versioning are orthogonal to goinstall. They are fundamental to any system in which you’re using multiple code bases that progress independently. Solving that problem is explicitly not a goal for goinstall today. I’d be more than happy for goinstall to solve the versioning problem later, if we can figure out how.

                              I am not dismissing the problem. I just think it is difficult and not any different for goinstall than it is for any other software system.

                              1. 2

                                 I do have to admit that I haven’t checked what the authors say about it; I’ve only seen Go users defending the lack of library and version management. I’m glad to see that they weren’t opposed to it.

                                I mostly stopped watching after they went to implement polymorphic data structures by silently downcasting to interface{} after saying no one needs polymorphism.

                                1. 1

                                   There are plans (see the next couple of slides) to provide facilities for static polymorphism in Go 2.

                            1. 1

                              We don’t want to get submissions for every CVE and, if we do get CVEs, we probably want them tagged security.

                              1. 16

                                while I agree with you in this case, I don’t particularly like the “I speak for everyone” stance you seem to be taking here.

                                1. 9

                                  This one is somewhat notable for being the first (?) RCE in Rust, a very safety-focused language. However, the CVE entry itself is almost useless, and the previously-linked blog post (mentioned by @Freaky) is a much better article to link and discuss.

                                  1. 4

                                    Second. There was a security vulnerability affecting rustdoc plugins.

                                2. 4

                                    Do you think an additional CVE tag would make sense? Given the upvotes, some people seem to be interested.

                                  1. 2

                                    That’d be a good meta tag proposal thread.

                                  2. 4

                                     Yeah, I’d rather not have them at all. Maybe a detailed, technical write-up of the discovery, implementation, and mitigation of new classes of vulnerability with wide impact; Meltdown/Spectre or Return-oriented Programming are examples. Then we’d see only the deep stuff here, with vulnerability-listing sites carrying the routine reports for people who use the affected software.

                                    1. 5

                                       Seems like a CVE, especially arbitrary code execution, is worth posting. My 2 cents.

                                      1. 5

                                         There are a lot of potentially-RCE bugs (type confusion, use after free, buffer overflow write); if there were a lobsters thread for each of them, there’d be no room for anything else.

                                         Here’s a short list from the past year or two, from one source: https://bugs.chromium.org/p/oss-fuzz/issues/list?can=1&q=Type%3DBug-Security+label%3AStability-Memory-AddressSanitizer&sort=-modified&colspec=ID+Type+Component+Status+Library+Reported+Owner+Summary+Modified&cells=ids

                                        1. 2

                                           I’m fully aware of that. What I was commenting on was Rust having one of these RCE-type bugs, which, to me, is worthy of discussion. I think it’s weird to police these like they’re some kind of existential threat to the community, especially given how much enlightenment can be gained by discussing their individual circumstances.

                                          1. -2

                                            But that’s not Rust, the perfect language that is supposed to save the world from security vulnerabilities.

                                            1. 4

                                              Rust is not and never claimed to be perfect. On the other hand, Rust is and claims to be better than C++ with respect to security vulnerabilities.

                                              1. 0

                                                It claims a few things - from the rustlang website:

                                                Rust is a systems programming language that runs blazingly fast, prevents segfaults, and guarantees thread safety.

                                                None of those claims are really true.

                                                It’s clearly not fast enough if you need unsafe to get real performance - which is the reason this cve was possible.

                                                It’s clearly not preventing segfaults - which this cve shows.

                                                It also can’t prevent deadlocks so it is not guaranteeing thread safety.

                                                I like rustlang but the claims it makes are mostly incorrect or overblown.

                                                1. 2

                                                  Unsafe Rust is part of Rust. I grant you that “safe Rust is blazingly fast” may not be “really true”.

                                                  Rust prevents segfaults. It just does not prevent all segfaults. For example, a DOM fuzzer was run on Chrome and Firefox and found segfaults, but the same fuzzer run for the same time on Servo found none.

                                                  I grant you on deadlocks. But “Rust prevents data race” is true.

                                              2. 2

                                                I’m just going to link my previous commentary: https://lobste.rs/s/7b0gab/how_rust_s_standard_library_was#c_njpoza

                                          1. 3

                                            Well, so one of my Berlin Rust Hack & Learn regulars is porting rustc to Gnu Hurd. I can switch soon, year of the desktop is 2109.

                                            1. 2

                                              The fact that I can’t tell if this is a joke or a typo makes it a better joke.

                                              1. 2

                                                Both. I made the typo and decided it’s too good to be fixed.

                                            2. 3

                                               If I remember correctly, Haiku also has a microkernel.

                                              1. 4

                                                 I thought that BeOS was a microkernel, based on what so many people said. waddlespash of Haiku countered me, saying it wasn’t. That discussion is here.

                                                1. 1

                                                  Haiku has a hybrid kernel, like Mac OS X or Windows NT.

                                                2. 2

                                                  QNX, Minix 3, or Genode get you more mileage. At least two have desktop environments, too. I’m not sure about Minix 3 but did find this picture.

                                                  1. 1

                                                    Don’t MacOS and iOS both use variants of the Mach microkernel?

                                                    1. 4

                                                       They’re what’s called hybrid kernels. They have too much running in kernel space to really qualify as microkernels. Using Mach was probably a mistake. It’s the microkernel whose inefficient design created the misconceptions we’ve been countering for a long time. Plus, if you have that much in the kernel, you might as well just use a well-organized, monolithic design.

                                                      That’s what I thought a long time. CompSci work on both hardware and software has created many new methods that might have implications for hybrid designs. Micro vs something in between vs monolithic is worth rethinking hard these days.

                                                      1. 5

                                                        That narrative makes it sound like they took Mach and added BSD back in until it was ready, when the evolution of Mach was that it started as an object-oriented kernel with an in-kernel BSD personality and that was the kernel NeXT took, along with CMU developer and Mach lead Avie Tevanien.

                                                        That was Mach 2.5. Mach 3.0 was the first microkernel version of Mach, and that’s the one GNU Mach is based on. Some code changes were backported to the XNU and OSFMK kernels from Mach 3.0, but they were always designed and implemented as full BSD kernels with object-oriented IPC, virtual memory management and multithreading.

                                                        1. 2

                                                          Yeah, I didn’t study the development of Mach. Thanks for filling in those details. That they tried to trim a bigger OS into a microkernel makes its failure even more likely.

                                                          1. 1

                                                            I don’t follow the reasoning; what failed? They didn’t fail to make a microkernel BSD, as Mach 3 is that. They didn’t fail to get adoption, and indeed it’s easier when you’re compatible with an existing system.

                                                            1. 1

                                                              They failed in many ways:

                                                              1. Little adoption. XNU is not Mach but incorporates it, whereas the Windows, Linux, and BSD kernels are used directly by large install bases.

                                                              2. So slow as a microkernel that people wanting microkernels went with other designs.

                                                              3. Less reliable than some alternatives under fault conditions.

                                                              4. Less maintainable, such as easy swaps of modules, than L4 and KeyKOS-based systems.

                                                              5. Due to its complexity, every attempt to secure it failed. Reading about Trusted Mach, DTMach, DTOS, etc is when I first saw it. All they did was talk trash about the problems they had analyzing and verifying it vs other systems of the time like STOP, GEMSOS and LOCK.

                                                              So, it was objectively worse than competing designs then and later in many attributes. It was too complex, too slow, and not as reliable as competitors like QNX. It couldn’t be secured to high assurance either ever or for a long time. So, it was a failure compared to them. It was a success if the goal was to generate research papers/funding, give people ideas, and make code someone might randomly mix with other code to create a commercial product.

                                                              All depends on viewpoint of or requirements for OS you’re selecting. It failed mine. Microkernels + isolated applications + user-mode Linux are currently best fit for my combined requirements. OKL4, INTEGRITY-178B, LynxSecure, and GenodeOS are examples implementing that model.

                                                      2. 3

                                                        Yes, but with most of a BSD kernel stuck on and running in the same address space. https://en.wikipedia.org/wiki/XNU

                                                    1. 7

                                                      One thing that is clear to me: the author hasn’t actually written much (or perhaps any) Rust. This is clear to me because I think one of the traps that the merely Rust-curious fall into is a disproportional fear and loathing of the borrow checker. This is disproportional because it ignores many of the delightful aspects of Rust – for example, that algebraic types in a non-GC’d language represent a revolution in error handling. (I also happen to love the macro system, Cargo, the built-in testing framework, and a bunch of other smaller things.) Yes, the lack of things like non-lexical lifetimes can make for some wrestling with the borrow checker, but once one is far enough into Rust to encounter these things, they are also far enough in to appreciate the value it brings to systems programming.

                                                      To sum, the author shouldn’t weigh in on Rust (or any language, really) so definitively without having written any – or at least make clear that his perspective is informed by reading blog entries, not actual experience…

                                                      1. 1

                                                        One thing that is clear to me: the author hasn’t actually written much (or perhaps any) Rust. This is clear to me because …

                                                        To sum, the author shouldn’t weigh in on Rust (or any language, really) so definitively without having written any – or at least make clear that his perspective is informed by reading blog entries, not actual experience…

                                                        I believe it wasn’t your intent, but your commentary reads a bit like “Only true Rustaceans should be allowed to talk about Rust”.

                                                        1. 3

                                                          Everyone should be allowed to talk about Rust. There is no authority that deserves to have the power to decide which people can or cannot talk about Rust.

                                                          That said, it’s also fine to say that the author’s opinion about Rust is untrustworthy because it bears the hallmarks of someone who has read about Rust but not actually used it themselves in any meaningful way. I myself agree that it’s possible to write lots of useful rust code without running into situations where the borrow checker trips you up, and that some of Rust’s best innovations are the “small” things like the algebraic types, macros, Cargo, etc. that are now available in a non-GC systems language.

                                                          1. 1

                                                            it bears the hallmarks of someone who has read about Rust but not actually used it themselves in any meaningful way

                                                            I still use rustlang but share the same opinion as the author. Did I write enough of it to be trustworthy? :)

                                                            Rust’s best innovations are the “small” things like the algebraic types, macros, Cargo, etc. that are now available in a non-GC systems language

                                                            Nothing on that list was rustlang’s innovation.

                                                      1. 4

                                                        I wonder how much wasted bandwidth and cpu could have been saved by using for example protobuf.

                                                        1. 2

                                                          Or at least msgpack - it’s pretty much a drop-in for JSON

                                                        1. 7

                                                          signify seems like it would be a great tool for git signatures.

                                                          1. 7

                                                            But it’s written in C, which by definition can’t be used by any respectable rustlang developer ;)

                                                          1. 7

                                                            What am I supposed to do with that json file? edit: … oh, it renders completely differently on desktop…

                                                            1. 2

                                                              it doesn’t render at all for me

                                                            1. 2

                                                           News and linkdumps aren’t really the sweet spot for content here.

                                                              1. 2

                                                                Thanks for input.

                                                                1. 1

                                                                  Maybe it’s me, but in the last year or so I’ve started to hide quite a lot of stories posted here. It’s not yet HN, but we’re getting there slowly :/

                                                                1. 1

                                                                 Today I had (once again) the pleasure of using Go’s quick.CheckEqual. It’s very simple (for example, there is no minimization step for failing inputs) but it is also very easy to use and is always there as part of the standard library.

                                                                 Here’s an example that verifies the equivalence of a naive implementation with the real one.
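
                                                                 (Not the linked example, but a minimal sketch of the pattern, with hypothetical naiveRepeat/fastRepeat functions standing in for the naive and real implementations:)

                                                                 package naive
                                                                 
                                                                 import (
                                                                     "strings"
                                                                     "testing"
                                                                     "testing/quick"
                                                                 )
                                                                 
                                                                 // naiveRepeat is the "obviously correct" reference implementation.
                                                                 func naiveRepeat(s string, n uint8) string {
                                                                     out := ""
                                                                     for i := uint8(0); i < n; i++ {
                                                                         out += s
                                                                     }
                                                                     return out
                                                                 }
                                                                 
                                                                 // fastRepeat stands in for the real implementation under test.
                                                                 func fastRepeat(s string, n uint8) string {
                                                                     return strings.Repeat(s, int(n))
                                                                 }
                                                                 
                                                                 func TestRepeatEquivalence(t *testing.T) {
                                                                     // quick.CheckEqual feeds both functions the same random inputs
                                                                     // and reports the first input on which their results differ.
                                                                     if err := quick.CheckEqual(naiveRepeat, fastRepeat, nil); err != nil {
                                                                         t.Error(err)
                                                                     }
                                                                 }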

                                                                  1. 1

                                                                     We really need GC soon; it’s hard to get started on WebAssembly as a compilation target for a GC’d language without it. Also, TCO would be incredibly useful for implementing certain languages.

                                                                    1. 3

                                                                       Interestingly, Go has a wasm target and just uses its own GC, which is written in Go :)

                                                                      1. 1

                                                                        That’s exactly what I’d have suggested minus self-hosting part. Just try porting what already works.

                                                                    1. 4
                                                                      1. 2

                                                                        Another ergodox with dvorak here. The ability to type without having to squeeze your wrists together greatly increases comfort imho. I like the straight (vertically-staggered) columns from a comfort perspective as well.

                                                                        1. 2

                                                                           On the ergodox ez and very happy with it. I built one back before the ez with clear switches, but I actually prefer the brown switches in my ez.

                                                                        1. 6

                                                                          There is no technical content in that post :(

                                                                          1. 3

                                                                            Ah sorry. I wasn’t sure how focused this site was meant to be on tech. I’d delete the post but there is no feature for that here.

                                                                            1. 26

                                                                              Personally, I found the post interesting.

                                                                              1. 5

                                                                                Same here. This is political, and as much as we might like to pretend otherwise, all technology is inherently political. +1 for this kind of post, and more of them.

                                                                                1. 2

                                                                                  This is political, and as much as we might like to pretend otherwise, all technology is inherently political.

                                                                                  I dislike this justification being used to shoehorn politics into spaces which previously had functioned somewhat as a refuge from the sturm und drang of the times. I’ve also never seen a good stacktrace for the sentiment.

                                                                                2. 1

                                                                                  Lots of things are interesting but have better homes elsewhere.

                                                                            1. 1

                                                                             I don’t see the point of that post. It’s just a copy of various sections (1.1.7 and 1.3.3) from the first SICP chapter without any added value.

                                                                              1. 1

                                                                               However, I still think there is value in fuzzing compilers. Personally I find it very interesting that the same technique on rustc, the Rust compiler, only found 8 bugs in a couple of weeks of fuzzing, and not a single one of them was an actual segfault. I think it does say something about the nature of the code base, code quality, and the relative dangers of different programming languages, in case it was not clear already. In addition, compilers (and compiler writers) should have these fuzz testing techniques available to them, because it clearly finds bugs. Some of these bugs also point to underlying weaknesses or to general cases where something really could go wrong in a real program. In all, knowing about the bugs, even if they are relatively unimportant, will not hurt us.

                                                                                This is a really interesting point - this kind of fuzzing gives us a test for whether the sorts of more advanced static verification that programming languages like Rust offer are actually paying off in terms of program reliability. If rustc, written in Rust, gets a “better score” when fuzzed than gcc, written in C (do they use C++?) does, that’s evidence that the work the Rust language designers put into the borrow checker and the type system and so forth was worthwhile. We can imagine similar fuzz testing for large programs in other programming languages.

                                                                                1. 1

                                                                                  that’s evidence that the work the Rust language designers put into the borrow checker and the type system and so forth was worthwhile

                                                                                  Not really - gcc and rustc are far from equivalent programs.

                                                                                  1. 2

                                                                                    It’d be interesting to know whether LLVM was also compiled with AFL’s instrumentation. Obviously any findings from GCC’s optimizers would be “expected” to be found in LLVM, not rustc.

                                                                                    1. 2

                                                                                      Maybe instead compare this compiler with just the parts of rustc it was based on. That version, too. From there, there’s a difference between team size, amount of time to do reviews, and possibly talent. Those could create a big difference in bugs. However, the bugs that should always be prevented by its static types should still count given the language should prevent them.

                                                                                      So, I’d like to see rustc vs mrustc in a fuzzing comparison.

                                                                                  1. 4

                                                                                    Can anyone help me understand why Metal was designed? Apple’s a heavy hitter in Khronos, right? So what was it that they felt like they couldn’t accomplish with OGL/OCL? Are there non-Mac targets that support Metal?

                                                                                    1. 6

                                                                                      OpenGL is a tired old API that is too high level for high performance graphics work. At the time when Metal was being developed folks were working on lower level APIs to expose the GPU more, like Mantle and DirectX 12, and Metal was Apple’s offering. I believe Mantle eventually evolved into Vulkan, but for some reason Apple is continuing to promote Metal. It’s a nicer API for Swift users, but that’s about it. I would have preferred that they’d make a safe API over Vulkan for Swift like Vulkano, they seem to be under some weird impression that they’ll be able to trap devs in their platform with their own, proprietary API. Or maybe they just can’t bear to give up all the sunk cost.

                                                                                      1. 2

                                                                                        they seem to be under some weird impression that they’ll be able to trap devs in their platform with their own, proprietary API

                                                                                        Is it not working quite well for Microsoft with DirectX?

                                                                                      2. 1

                                                                                        As I vaguely recall, it started on ios as a way to utilize their graphics chips faster and more efficiently (lower overhead).

                                                                                      1. 1

                                                                                        Is the resulting C++ and Haskell source available somewhere?

                                                                                        1. 1

                                                                                   I gave up trying to find it shortly after starting, due to how the University of West Florida’s website is laid out. Most university sites can take me right to the publications and software. It’s like they’re trying to hide their work behind a bunch of sales pitches. Coffey’s page was interesting in that he did a bunch of work on knowledge bases and cognitive applications. If not paywalled, his work on knowledge elicitation and representation might be submission-worthy.

                                                                                        1. 3

                                                                                          Personally I think these small language are much more exciting than big oil tankers like Rust or Swift.

                                                                                          I’m not familiar with either of those languages, but any idea what the author means by this? I thought Rust has been picking up quite a bit recently.

                                                                                          1. 11

                                                                                            I understood the author to be talking about the “size” of the language, not the degree of adoption.

                                                                               I’m not sure that I personally agree that C is a small language, but many do believe that.

                                                                                            1. 3

                                                                                              Your involvement with rust will bias your opinion - rust team hat would be appropriate here :)

                                                                                              1. 12

                                                                                                He is right though. C’s execution model may be conceptually simple but you may need to sweat the implementation details of it, depending on what you’re doing. This doesn’t make C bad, it just raises the bar.

                                                                                                1. 10

                                                                                                  I had that opinion before Rust, and I’m certainly not speaking on behalf of the Rust team, so in my understanding, the hat is very inappropriate.

                                                                                                  (I’m also not making any claims about Rust’s size, in absolute terms nor relative to C)

                                                                                                  1. 5

                                                                                   Or you can just test his claim with numbers. A full C semantics is huge compared to something like Oberon, whose grammar fits on a page or two. Forth is simpler, too. Whereas Ada and Rust are as complicated as can be.

                                                                                                    1. 5

                                                                                                      I agree that there are languages considerably smaller than C. In my view, there is a small and simple core to C that is unfortunately complicated by some gnarly details and feature creep. I’ve expressed a desire for a “better C” that does all we want from C without all the crap, and I sincerely believe we could make such a thing by taking C, stripping stuff and fixing some unfortunate design choices. The result should be the small and simple core I see in C.

                                                                                     When comparing the complexity of languages, I prefer to ignore syntax (focusing on that is kinda like bickering about style; yeah, I have my own style too, and I generally prefer simpler syntax). I also prefer to ignore the standard library. What I would focus on is the language semantics as well as the burden they place on implementation. I would also weigh languages against the features they provide; otherwise we’re talking apples vs oranges, where one language simply makes one thing impossible or you have to “invent” that thing outside the language spec. It may look simpler to only present a 64-bit floating point numeric type, but that only increases complexity when people actually need to deal with 64-bit integers and hardware registers.

                                                                                     That brings us to Oberon. Yes, the spec is short. I guess that’s mostly not because it has simple semantics, but because it lacks semantics. What is the range of integer types? Are they bignums, and if so, what happens when you run out of memory trying to perform a multiplication? Perhaps they have a fixed range. If so, what happens when you overflow? What happens if you divide by zero? And what happens when you dereference nil? No focking idea.
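
                                                                                     For contrast, a quick sketch of what spec-level answers to two of those questions look like in Go, where signed overflow is defined to wrap and integer division by zero is defined to panic:

                                                                                     package main
                                                                                     
                                                                                     import "fmt"
                                                                                     
                                                                                     func main() {
                                                                                         // Signed overflow is defined: the result wraps per the
                                                                                         // two's-complement representation, with no panic.
                                                                                         var x int8 = 127
                                                                                         x++
                                                                                         fmt.Println(x) // -128
                                                                                     
                                                                                         // Integer division by zero is defined to panic at run time.
                                                                                         defer func() {
                                                                                             fmt.Println("recovered:", recover())
                                                                                         }()
                                                                                         zero := 0
                                                                                         fmt.Println(1 / zero)
                                                                                     }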

                                                                                                      The “spec” is one for a toy language. That is why it is so short. How long would it grow if it were properly specified? Of course you could decide that everything the spec doesn’t cover is undefined and maybe results in program termination. That would make it impossible to write robust programs that can deal with implementation limitations in varying environments (unless you have perfect static analysis). See my point about apples vs oranges.

                                                                                                      So the deeper question I have is: how small can you make a language with

                                                                                                      1. a spec that isn’t a toy spec
                                                                                                      2. not simply shifting complexity to the user
                                                                                                      3. enough of the same facilities we have in C so that we can interface with the hardware as well as write robust programs in the face of limited & changing system resources

                                                                                                      Scheme, Oberon, PostScript, Brainfuck, etc. don’t really give us any data points in that direction.

                                                                                                      1. 6

                                                                                                        So the deeper question I have is: how small can you make a language with

                                                                                                        1. a spec that isn’t a toy spec
                                                                                                        2. not simply shifting complexity to the user
                                                                                                        3. enough of the same facilities we have in C so that we can interface with the hardware as well as write robust programs in the face of limited & changing system resources

                                                                                                        Scheme, Oberon, PostScript, Brainfuck, etc. don’t really give us any data points in that direction.

                                                                                       Good question. There are a few languages with official standards (sorted by page count) that are also used in practice (well.. maybe not Scheme ;>):

                                                                                                        1. Scheme r7rs - 88 pages - seems to be only language without useful standard library
                                                                                                        2. Ruby 1.8 - 341 pages
                                                                                                        3. Ada 95 - 582 pages
                                                                                         4. Fortran 2008 - 621 pages
                                                                                                        5. C11 - 701 pages
                                                                                                        6. EcmaScript - 885 pages
                                                                                                        7. Common Lisp - 1356 pages
                                                                                                        8. C++17 - 1623 pages

                                                                                       I know that page count is a poor metric, but it looks like ~600 pages should be enough :)

                                                                                                        1. 3

                                                                                                          Here are the page counts for a few other programming language standards:

                                                                                                          1. PL/I General purpose subset 443 pages
                                                                                                          2. Modula-2 800 pages - base - 707 pages, generics - 45 pages, objects - 48 pages
                                                                                                          3. Ada 2012 832 pages
                                                                                                          4. Eiffel 172 pages
                                                                                                          5. ISO Pascal 78 pages
                                                                                                          6. Jovial J73 168 pages
                                                                                                          1. 2

                                                                                                            I know that page count is a poor metric, but it looks like ~600 pages should be enough :)

                                                                                                            Given that N1256 is 552 pages, yeah, without a doubt... :-)

                                                                                                            The language proper, if we cut it off at “future language directions” (which is followed by the standard library, appendices, index, etc.), is only some 170 pages. It’s not big, but I’m sure it could be made smaller.

                                                                                                          2. 3

                                                                                                            I’ve expressed a desire for a “better C” that does all we want from C without all the crap, and I sincerely believe we could make such a thing by taking C, stripping stuff and fixing some unfortunate design choices. The result should be the small and simple core I see in C.

                                                                                                            That might be worth writing up as a hypothetical design. I was exploring that space as part of bootstrapping for C compilers. My design idea actually started with x86 assembler, trying to design a few high-level operations that map onto it and also work on RISC CPUs: a 64-bit scalar type, a 64-bit array type, variables, stack ops, heap ops, expressions, conditionals, goto, and Scheme-like macros. Everything else should be expressible in terms of the basics with the macros or compiler extensions. The common stuff gets a custom, optimized implementation to avoid macro overhead.
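
                                                                                                            To make that concrete, here’s a minimal sketch of the “layered on as macros” idea. It’s my own toy example, not part of any existing design, and it assumes a core that only has conditionals and goto; a while loop then becomes a macro over that core:

                                                                                                            /* Hypothetical sketch: "while" built as a macro over a tiny core
                                                                                                               that only has conditionals and goto. Names are illustrative. */
                                                                                                            #include <stdio.h>

                                                                                                            #define WHILE(label, cond, body) \
                                                                                                                label: if (cond) { body; goto label; }

                                                                                                            int main(void) {
                                                                                                                int i = 0, sum = 0;
                                                                                                                /* Sums 0..4 using only the core conditional + goto. */
                                                                                                                WHILE(loop0, i < 5, { sum += i; i++; });
                                                                                                                printf("%d\n", sum); /* prints 10 */
                                                                                                                return 0;
                                                                                                            }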

                                                                                                            “What I would focus on is the language semantics as well as the burden they place on implementation.”

                                                                                                            Interesting you arrived at that, since some others and I who’ve been talking about verification are convinced a language design should evolve with a formal spec for that reason. It could be as simple as Abstract State Machines or as complex as Isabelle/HOL. The point is that each feature is described precisely in terms of what it does and its interaction with other features. If one can’t describe that precisely, how the hell is a complicated program using those same features going to be easy to understand or predict? As an additional example, adding a “simple, local change” can show unexpected interactions or state explosion once you run the model. Maybe it’s not so simple or local after all, but that isn’t always evident if you’re just talking about the language in vague English. I was going to prototype the concept with Oberon, too, since it’s so small and easy to understand.
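
                                                                                                            As a rough illustration of the idea (my own toy example, not something from those projects): even a tiny feature set can be given its semantics as an explicit state-transition function, so every interaction is spelled out rather than left to vague English:

                                                                                                            /* Hypothetical sketch: assignment + goto given precise meaning as a
                                                                                                               state machine, in the spirit of Abstract State Machines. All names
                                                                                                               and details here are illustrative only. */
                                                                                                            #include <stdio.h>

                                                                                                            enum op { OP_ASSIGN, OP_GOTO, OP_HALT };

                                                                                                            struct instr { enum op op; int var, value, target; };

                                                                                                            struct state { int pc; int vars[4]; int halted; };

                                                                                                            /* One transition; the entire meaning of each feature lives here. */
                                                                                                            static void step(struct state *s, const struct instr *prog) {
                                                                                                                const struct instr *i = &prog[s->pc];
                                                                                                                switch (i->op) {
                                                                                                                case OP_ASSIGN: s->vars[i->var] = i->value; s->pc++; break;
                                                                                                                case OP_GOTO: s->pc = i->target; break;
                                                                                                                case OP_HALT: s->halted = 1; break;
                                                                                                                }
                                                                                                            }

                                                                                                            int main(void) {
                                                                                                                struct instr prog[] = {
                                                                                                                    { OP_ASSIGN, 0, 42, 0 }, /* x := 42 */
                                                                                                                    { OP_GOTO, 0, 0, 2 },    /* jump to the halt instruction */
                                                                                                                    { OP_HALT, 0, 0, 0 },
                                                                                                                };
                                                                                                                struct state s = { 0, {0}, 0 };
                                                                                                                while (!s.halted) step(&s, prog);
                                                                                                                printf("x = %d\n", s.vars[0]); /* prints x = 42 */
                                                                                                                return 0;
                                                                                                            }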

                                                                                                            “but because it lacks semantics.”

                                                                                                            I didn’t think about that. You have a good point. Might be worth formalizing some of the details to see what happens. Might get messier as we formalize. Hmm.

                                                                                                            “So the deeper question I have is: how small can you make a language with”

                                                                                                            I think we have answers to some of that but they’re in pieces across projects. They haven’t been integrated into the view you’re looking for. You’ve definitely given me something to think about if I attempt a C-like design. :)

                                                                                                    2. 4

                                                                                                      He also says that the issues with memory-safety in C are overrated, so take it with a grain of salt.

                                                                                                      1. 13

                                                                                                        He is not claiming that memory safety in general is not an issue in C. What he is saying is that in his own projects he was able to limit or completely eliminate dynamic memory allocation:

                                                                                                        In the 32 kloc of C code I’ve written since last August, there are only 13 calls to malloc overall, all in the sokol_gfx.h header, and 10 of those calls happen in the sokol-gfx initialization function

                                                                                                        The entire 8-bit emulator code (chip headers, tests and examples, about 12 kloc) doesn’t have a single call to malloc or free.

                                                                                                        That actually sounds like someone who understands that memory safety is very hard and important.
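
                                                                                                        For what it’s worth, that style looks roughly like the sketch below. This is my own illustration, not code from the article, and every name and size in it is made up; the point is just that objects come out of a fixed, statically allocated pool instead of the heap.

                                                                                                        /* Hypothetical sketch of the "no malloc" style described above: a
                                                                                                           fixed, statically allocated pool instead of heap allocation. */
                                                                                                        #include <stddef.h>
                                                                                                        #include <stdio.h>

                                                                                                        #define MAX_SPRITES 64

                                                                                                        struct sprite { int x, y; int in_use; };

                                                                                                        static struct sprite sprite_pool[MAX_SPRITES]; /* all memory is static */

                                                                                                        static struct sprite *sprite_alloc(void) {
                                                                                                            for (size_t i = 0; i < MAX_SPRITES; i++) {
                                                                                                                if (!sprite_pool[i].in_use) {
                                                                                                                    sprite_pool[i].in_use = 1;
                                                                                                                    return &sprite_pool[i];
                                                                                                                }
                                                                                                            }
                                                                                                            return NULL; /* pool exhausted; the caller must handle it explicitly */
                                                                                                        }

                                                                                                        static void sprite_free(struct sprite *s) {
                                                                                                            s->in_use = 0;
                                                                                                        }

                                                                                                        int main(void) {
                                                                                                            struct sprite *s = sprite_alloc();
                                                                                                            if (!s) { puts("out of sprites"); return 1; }
                                                                                                            s->x = 10;
                                                                                                            s->y = 20;
                                                                                                            printf("sprite at (%d, %d)\n", s->x, s->y);
                                                                                                            sprite_free(s);
                                                                                                            return 0;
                                                                                                        }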

                                                                                                        1. 3

                                                                                                          Not at all the vibe I got from it.

                                                                                                        2. 4

                                                                                                          I’m not familiar with either of those languages, but any idea what the author means by this?

                                                                                                          I’m also way more interested in Zig than I am in Rust.

                                                                                                          What I think he’s saying is that the two “big” languages are overhyped and have gained disproportionate attention for what they offer, compared to some of the smaller projects that don’t hit HN/Lobsters headlines regularly.

                                                                                                          Or maybe it’s a statement w.r.t. size and scope. I don’t know Swift well enough to say if it counts as big. But Rust looks like “Rubyists reinvented C++ and claim it to be a replacement for C.” I feel that people who prefer C are into things that are small and simple, and C++ is a behemoth. When your ideal replacement for C would also be small and simple, perhaps even more so than C itself, Rust starts to seem more and more like an oil tanker as it goes the C++ way.

                                                                                                          1. 3

                                                                                                            I agree with your point on attention. I just wanted to say maybe we should get a bit more credit here:

                                                                                                            “compared to some of the smaller projects that don’t hit HN/Lobsters headlines regularly.”

                                                                                                            Maybe HN, but Lobsters covers plenty of oddball languages, sometimes with good discussions, too. We’ve had the authors join in for a few of them, and I’ve kept digging them up to keep fresh ideas on the site.

                                                                                                            So, we’re doing better here than most forums on that. :)

                                                                                                            1. 2

                                                                                                              Sure! Lobsters is where I first learned about Zig. :-)