1. 8

    an alternative to GOPATH with integrated support for versioning and package distribution

    Right, after saying no one needs them for years. ;)

    1. 5

      an alternative to GOPATH with integrated support for versioning and package distribution

      Right, after saying no one needs them for years. ;)

      Was it really the case? From Russ's first post on vgo/modules:

      It was clear in the very first discussions of goinstall that we needed to do something about versioning. Unfortunately, it was not clear, at least to us on the Go team, exactly what to do. […] This ignorance of package versioning has led to at least two significant shortcomings.

      And when you read that linked thread you will see that Russ agrees that versioning is needed:

      All the problems about versioning are orthogonal to goinstall. They are fundamental to any system in which you’re using multiple code bases that progress independently. Solving that problem is explicitly not a goal for goinstall today. I’d be more than happy for goinstall to solve the versioning problem later, if we can figure out how.

      I am not dismissing the problem. I just think it is difficult and not any different for goinstall than it is for any other software system.

      1. 2

        I do have to admit that I haven’t checked what the authors say about it; I’ve only seen Go users defending the lack of library and version management. I’m glad to see that they weren’t opposed to it.

        I mostly stopped watching after they went to implement polymorphic data structures by silently downcasting to interface{} after saying no one needs polymorphism.

        1. 1

          There are plans (see the next couple of slides) to provide facilities for static polymorphism in Go 2.

    1. 1

      We don’t want to get submissions for every CVE and, if we do get CVEs, we probably want them tagged security.

      1. 16

        while I agree with you in this case, I don’t particularly like the “I speak for everyone” stance you seem to be taking here.

        1. 9

          This one is somewhat notable for being the first (?) RCE in Rust, a very safety-focused language. However, the CVE entry itself is almost useless, and the previously-linked blog post (mentioned by @Freaky) is a much better article to link and discuss.

          1. 4

            Second. There was a security vulnerability affecting rustdoc plugins.

        2. 4

          Do you think an additional CVE tag would make sense? Given the upvotes, some people seem to be interested.

          1. 2

            That’d be a good meta tag proposal thread.

          2. 4

            Yeah, I’d rather not have them at all. Maybe a detailed, technical write-up of the discovery, implementation, and mitigation of a new class of vulnerability with wide impact; Meltdown/Spectre or return-oriented programming are examples. Then we’d see only the deep stuff here, with the vulnerability-listing sites carrying the routine entries for people who use the affected software.

            1. 5

              Seems like a CVE, especially arbitrary code execution, is worth posting. My 2 cents.

              1. 5

                There are a lot of potentially-RCE bugs (type confusion, use after free, buffer overflow writes); if there were a lobsters thread for each of them, there’d be no room for anything else.

                Here’s a short list from the past year or two, from one source: https://bugs.chromium.org/p/oss-fuzz/issues/list?can=1&q=Type%3DBug-Security+label%3AStability-Memory-AddressSanitizer&sort=-modified&colspec=ID+Type+Component+Status+Library+Reported+Owner+Summary+Modified&cells=ids

                1. 2

                  I’m fully aware of that. What I was commenting on was Rust having one of these RCE-type bugs, which, to me, is worthy of discussion. I think it’s weird to police these like they’re some kind of existential threat to the community, especially given how much enlightenment can be gained by discussing their individual circumstances.

                  1. -1

                    But that’s not Rust, the perfect language that is supposed to save the world from security vulnerabilities.

                    1. 3

                      Rust is not and never claimed to be perfect. On the other hand, Rust is and claims to be better than C++ with respect to security vulnerabilities.

                      1. 0

                        It claims a few things - from the rustlang website:

                        Rust is a systems programming language that runs blazingly fast, prevents segfaults, and guarantees thread safety.

                        None of those claims are really true.

                        It’s clearly not fast enough if you need unsafe to get real performance - which is the reason this CVE was possible.

                        It’s clearly not preventing segfaults - which this CVE shows.

                        It also can’t prevent deadlocks so it is not guaranteeing thread safety.

                        I like rustlang but the claims it makes are mostly incorrect or overblown.

                        1. 2

                          Unsafe Rust is part of Rust. I grant you that “safe Rust is blazingly fast” may not be “really true”.

                          Rust prevents segfaults. It just does not prevent all segfaults. For example, a DOM fuzzer was run on Chrome and Firefox and found segfaults, but the same fuzzer run for the same time on Servo found none.

                          I grant you deadlocks. But “Rust prevents data races” is true.

                      2. 2

                        I’m just going to link my previous commentary: https://lobste.rs/s/7b0gab/how_rust_s_standard_library_was#c_njpoza

                  1. 3

                    Well, so one of my Berlin Rust Hack & Learn regulars is porting rustc to Gnu Hurd. I can switch soon, year of the desktop is 2109.

                    1. 2

                      The fact that I can’t tell if this is a joke or a typo makes it a better joke.

                      1. 2

                        Both. I made the typo and decided it’s too good to be fixed.

                    2. 3

                      If I remember correctly, Haiku also has a microkernel.

                      1. 4

                        I thought that BeOS was a microkernel, based on what so many people said. waddlespash of Haiku countered me, saying it wasn’t. That discussion is here.

                        1. 1

                          Haiku has a hybrid kernel, like Mac OS X or Windows NT.

                        2. 2

                          QNX, Minix 3, or Genode get you more mileage. At least two have desktop environments, too. I’m not sure about Minix 3 but did find this picture.

                          1. 1

                            Don’t MacOS and iOS both use variants of the Mach microkernel?

                            1. 4

                              They’re what’s called hybrid kernels. They have too much running in kernel space to really qualify as microkernels. Using Mach was probably a mistake. It’s the microkernel whose inefficient design created the misconceptions we’ve been countering for a long time. Plus, if you have that much in the kernel, you might as well just use a well-organized, monolithic design.

                              That’s what I thought for a long time. CompSci work on both hardware and software has created many new methods that might have implications for hybrid designs. Micro vs. something in between vs. monolithic is worth rethinking hard these days.

                              1. 5

                                That narrative makes it sound like they took Mach and added BSD back in until it was ready, when the evolution of Mach was that it started as an object-oriented kernel with an in-kernel BSD personality and that was the kernel NeXT took, along with CMU developer and Mach lead Avie Tevanien.

                                That was Mach 2.5. Mach 3.0 was the first microkernel version of Mach, and that’s the one GNU Mach is based on. Some code changes were backported to the XNU and OSFMK kernels from Mach 3.0, but they were always designed and implemented as full BSD kernels with object-oriented IPC, virtual memory management and multithreading.

                                1. 2

                                  Yeah, I didn’t study the development of Mach. Thanks for filling in those details. That they tried to trim a bigger OS into a microkernel makes its failure even more likely.

                                  1. 1

                                    I don’t follow the reasoning; what failed? They didn’t fail to make a microkernel BSD, as Mach 3 is that. They didn’t fail to get adoption, and indeed it’s easier when you’re compatible with an existing system.

                                    1. 1

                                      They failed in many ways:

                                      1. Little adoption. XNU is not Mach but incorporates it. Whereas Windows, Linux, and BSD kernels are used directly by large install bases.

                                      2. So slow as a microkernel that people wanting microkernels went with other designs.

                                      3. Less reliable than some alternatives under fault conditions.

                                      4. Less maintainable, such as easy swaps of modules, than L4 and KeyKOS-based systems.

                                      5. Due to its complexity, every attempt to secure it failed. Reading about Trusted Mach, DTMach, DTOS, etc is when I first saw it. All they did was talk trash about the problems they had analyzing and verifying it vs other systems of the time like STOP, GEMSOS and LOCK.

                                      So, it was objectively worse than competing designs then and later in many attributes. It was too complex, too slow, and not as reliable as competitors like QNX. It couldn’t be secured to high assurance either ever or for a long time. So, it was a failure compared to them. It was a success if the goal was to generate research papers/funding, give people ideas, and make code someone might randomly mix with other code to create a commercial product.

                                      It all depends on your viewpoint on, or requirements for, the OS you’re selecting. It failed mine. Microkernels + isolated applications + user-mode Linux are currently the best fit for my combined requirements. OKL4, INTEGRITY-178B, LynxSecure, and GenodeOS are examples implementing that model.

                              2. 3

                                Yes, but with most of a BSD kernel stuck on and running in the same address space. https://en.wikipedia.org/wiki/XNU

                            1. 7

                              One thing that is clear to me: the author hasn’t actually written much (or perhaps any) Rust. This is clear to me because I think one of the traps that the merely Rust-curious fall into is a disproportional fear and loathing of the borrow checker. This is disproportional because it ignores many of the delightful aspects of Rust – for example, that algebraic types in a non-GC’d language represent a revolution in error handling. (I also happen to love the macro system, Cargo, the built-in testing framework, and a bunch of other smaller things.) Yes, the lack of things like non-lexical lifetimes can make for some wrestling with the borrow checker, but once one is far enough into Rust to encounter these things, they are also far enough in to appreciate the value it brings to systems programming.

                              To sum up, the author shouldn’t weigh in on Rust (or any language, really) so definitively without having written any – or at least should make clear that his perspective is informed by reading blog entries, not actual experience…

                              1. 1

                                One thing that is clear to me: the author hasn’t actually written much (or perhaps any) Rust. This is clear to me because …

                                To sum up, the author shouldn’t weigh in on Rust (or any language, really) so definitively without having written any – or at least should make clear that his perspective is informed by reading blog entries, not actual experience…

                                I believe it wasn’t your intent, but your commentary reads a bit like “Only true Rustaceans should be allowed to talk about Rust”.

                                1. 3

                                  Everyone should be allowed to talk about Rust. There is no authority that deserves to have the power to decide which people can or cannot talk about Rust.

                                  That said, it’s also fine to say that the author’s opinion about Rust is untrustworthy because it bears the hallmarks of someone who has read about Rust but not actually used it themselves in any meaningful way. I myself agree that it’s possible to write lots of useful rust code without running into situations where the borrow checker trips you up, and that some of Rust’s best innovations are the “small” things like the algebraic types, macros, Cargo, etc. that are now available in a non-GC systems language.

                                  1. 1

                                    it bears the hallmarks of someone who has read about Rust but not actually used it themselves in any meaningful way

                                    I still use rustlang but share the same opinion as the author. Did I write enough of it to be trustworthy? :)

                                    Rust’s best innovations are the “small” things like the algebraic types, macros, Cargo, etc. that are now available in a non-GC systems language

                                    Nothing on that list was rustlang’s innovation.

                              1. 4

                                I wonder how much wasted bandwidth and CPU could have been saved by using, for example, protobuf.

                                1. 2

                                  Or at least msgpack - it’s pretty much a drop-in for JSON
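
                                  As a rough, stdlib-only illustration of the point (msgpack and protobuf are third-party libraries in Go, so this sketch substitutes encoding/gob as the binary format): a binary encoding transmits field names once per stream rather than once per element, which is where most of JSON’s overhead goes for repetitive payloads.

                                  ```go
                                  package main

                                  import (
                                  	"bytes"
                                  	"encoding/gob"
                                  	"encoding/json"
                                  	"fmt"
                                  )

                                  // Point is a hypothetical payload type for the comparison.
                                  type Point struct {
                                  	X, Y int
                                  }

                                  // encodedSizes returns the size in bytes of n points encoded as JSON
                                  // and as gob, Go's standard-library binary format.
                                  func encodedSizes(n int) (jsonSize, gobSize int) {
                                  	pts := make([]Point, n)
                                  	for i := range pts {
                                  		pts[i] = Point{X: i, Y: i * 2}
                                  	}

                                  	j, _ := json.Marshal(pts) // field names repeated in every element

                                  	var buf bytes.Buffer
                                  	_ = gob.NewEncoder(&buf).Encode(pts) // type info sent once, then compact varint values

                                  	return len(j), buf.Len()
                                  }

                                  func main() {
                                  	j, g := encodedSizes(1000)
                                  	fmt.Printf("json: %d bytes, gob: %d bytes\n", j, g)
                                  }
                                  ```

                                  For this kind of repetitive payload the binary form comes out several times smaller; msgpack and protobuf would show a similar gap.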

                                1. 7

                                  signify seems like it would be a great tool for git signatures.

                                  1. 7

                                    But it’s written in C, which by definition can’t be used by any respectable rustlang developer ;)

                                  1. 7

                                    What am I supposed to do with that json file? edit: … oh, it renders completely differently on desktop…

                                    1. 2

                                      it doesn’t render at all for me

                                    1. 2

                                      News and link dumps aren’t really the sweet spot for content here.

                                      1. 2

                                        Thanks for input.

                                        1. 1

                                          Maybe it’s me, but in the last year or so I’ve started to hide quite a lot of stories posted here. It’s not yet HN, but we’re getting there slowly :/

                                        1. 1

                                          Today I had (once again) the pleasure of using Go’s quick.CheckEqual. It’s very simple (for example, there is no minimization step for failing inputs), but it is also very easy to use and is always there as part of the standard library.

                                          Here’s an example that verifies equivalence of naive implementation with real one.
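
                                          The linked example isn’t reproduced here, so here is a minimal sketch of the same pattern; the naive/real pair (string repetition) is my own illustrative stand-in, not the original functions:

                                          ```go
                                          package main

                                          import (
                                          	"fmt"
                                          	"strings"
                                          	"testing/quick"
                                          )

                                          // naiveRepeat is the "naive implementation": repeated concatenation.
                                          func naiveRepeat(s string, n uint8) string {
                                          	out := ""
                                          	for i := uint8(0); i < n; i++ {
                                          		out += s
                                          	}
                                          	return out
                                          }

                                          // realRepeat is the "real one": it delegates to the standard library.
                                          func realRepeat(s string, n uint8) string {
                                          	return strings.Repeat(s, int(n))
                                          }

                                          func main() {
                                          	// quick.CheckEqual generates random inputs (100 by default) and
                                          	// returns an error describing the first input on which the two
                                          	// functions disagree.
                                          	if err := quick.CheckEqual(naiveRepeat, realRepeat, nil); err != nil {
                                          		fmt.Println("mismatch:", err)
                                          		return
                                          	}
                                          	fmt.Println("equivalent on all generated inputs")
                                          }
                                          ```

                                          As the comment says, there is no input minimization, but being a nil-config one-liner in the standard library makes it very easy to reach for.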

                                          1. 1

                                            We really need GC soon; it’s hard to get started on WebAssembly as a compilation target for a GC’d language without it. Also, TCO would be incredibly useful for implementing certain languages.

                                            1. 3

                                              Interestingly, Go has a wasm target and just uses its own GC, which is written in Go :)
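
                                              A small sketch of what that looks like in practice: the source needs nothing wasm-specific, because Go links its whole runtime, GC included, into the binary. Build with `GOOS=js GOARCH=wasm go build -o main.wasm`.

                                              ```go
                                              package main

                                              import "fmt"

                                              // greeting is ordinary Go code; no wasm-specific changes are needed
                                              // in the source, since the runtime ships inside the binary.
                                              func greeting() string {
                                              	return "hello from wasm"
                                              }

                                              func main() {
                                              	fmt.Println(greeting())
                                              }
                                              ```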

                                              1. 1

                                                That’s exactly what I’d have suggested, minus the self-hosting part. Just try porting what already works.

                                              1. 2

                                                Another ergodox with dvorak here. The ability to type without having to squeeze your wrists together greatly increases comfort imho. I like the straight (vertically-staggered) columns from a comfort perspective as well.

                                                1. 2

                                                  I’m on the Ergodox EZ and very happy with it. I built one back before the EZ with clear switches, but I actually prefer the brown switches in my EZ.

                                                1. 6

                                                  There is no technical content in that post :(

                                                  1. 3

                                                    Ah sorry. I wasn’t sure how focused this site was meant to be on tech. I’d delete the post but there is no feature for that here.

                                                    1. 26

                                                      Personally, I found the post interesting.

                                                      1. 5

                                                        Same here. This is political, and as much as we might like to pretend otherwise, all technology is inherently political. +1 for this kind of post, and more of them.

                                                        1. 2

                                                          This is political, and as much as we might like to pretend otherwise, all technology is inherently political.

                                                          I dislike this justification being used to shoehorn politics into spaces which previously had functioned somewhat as a refuge from the sturm und drang of the times. I’ve also never seen a good stacktrace for the sentiment.

                                                        2. 1

                                                          Lots of things are interesting but have better homes elsewhere.

                                                    1. 1

                                                      I don’t see the point of that post. It’s just a copy of various sections (1.1.7 and 1.3.3) from the first SICP chapter without any added value.

                                                      1. 1

                                                        However, I still think there is value in fuzzing compilers. Personally I find it very interesting that the same technique on rustc, the Rust compiler, only found 8 bugs in a couple of weeks of fuzzing, and not a single one of them was an actual segfault. I think it does say something about the nature of the code base, code quality, and the relative dangers of different programming languages, in case it was not clear already. In addition, compilers (and compiler writers) should have these fuzz testing techniques available to them, because it clearly finds bugs. Some of these bugs also point to underlying weaknesses or to general cases where something really could go wrong in a real program. In all, knowing about the bugs, even if they are relatively unimportant, will not hurt us.

                                                        This is a really interesting point - this kind of fuzzing gives us a test for whether the sorts of more advanced static verification that programming languages like Rust offer are actually paying off in terms of program reliability. If rustc, written in Rust, gets a “better score” when fuzzed than gcc, written in C (do they use C++?) does, that’s evidence that the work the Rust language designers put into the borrow checker and the type system and so forth was worthwhile. We can imagine similar fuzz testing for large programs in other programming languages.

                                                        1. 1

                                                          that’s evidence that the work the Rust language designers put into the borrow checker and the type system and so forth was worthwhile

                                                          Not really - gcc and rustc are far from equivalent programs.

                                                          1. 2

                                                            It’d be interesting to know whether LLVM was also compiled with AFL’s instrumentation. Obviously any findings from GCC’s optimizers would be “expected” to be found in LLVM, not rustc.

                                                            1. 2

                                                              Maybe instead compare this compiler with just the parts of rustc it was based on - that version, too. From there, there’s a difference between team size, amount of time to do reviews, and possibly talent. Those could create a big difference in bugs. However, bugs that should always be prevented by its static types should still count, given that the language claims to prevent them.

                                                              So, I’d like to see rustc vs mrustc in a fuzzing comparison.

                                                          1. 4

                                                            Can anyone help me understand why Metal was designed? Apple’s a heavy hitter in Khronos, right? So what was it that they felt like they couldn’t accomplish with OGL/OCL? Are there non-Mac targets that support Metal?

                                                            1. 6

                                                              OpenGL is a tired old API that is too high level for high-performance graphics work. At the time when Metal was being developed, folks were working on lower-level APIs to expose the GPU more, like Mantle and DirectX 12, and Metal was Apple’s offering. I believe Mantle eventually evolved into Vulkan, but for some reason Apple is continuing to promote Metal. It’s a nicer API for Swift users, but that’s about it. I would have preferred that they’d make a safe API over Vulkan for Swift, like Vulkano; they seem to be under some weird impression that they’ll be able to trap devs in their platform with their own proprietary API. Or maybe they just can’t bear to give up all the sunk cost.

                                                              1. 2

                                                                they seem to be under some weird impression that they’ll be able to trap devs in their platform with their own, proprietary API

                                                                Is it not working quite well for Microsoft with DirectX?

                                                              2. 1

                                                                As I vaguely recall, it started on iOS as a way to utilize their graphics chips faster and more efficiently (lower overhead).

                                                              1. 1

                                                                Is the resulting C++ and Haskell source available somewhere?

                                                                1. 1

                                                                  I gave up trying to find it shortly after starting, due to how the University of West Florida’s website is laid out. Most sites take me right to the publications and software. It’s like they’re trying to hide their work behind a bunch of sales pitches. Coffey’s page was interesting in that he did a bunch of work on knowledge bases and cognitive applications. If not paywalled, his work on knowledge elicitation and representation might be submission-worthy.

                                                                1. 2

                                                                  Personally I think these small languages are much more exciting than big oil tankers like Rust or Swift.

                                                                  I’m not familiar with either of those languages, but any idea what the author means by this? I thought Rust has been picking up quite a bit recently.

                                                                  1. 10

                                                                    I understood the author to be talking about the “size” of the language, not the degree of adoption.

                                                                    I’m not sure that I personally agree that C is a small language, but many do believe that.

                                                                    1. 3

                                                                      Your involvement with Rust will bias your opinion - a Rust team hat would be appropriate here :)

                                                                      1. 11

                                                                        He is right though. C’s execution model may be conceptually simple but you may need to sweat the implementation details of it, depending on what you’re doing. This doesn’t make C bad, it just raises the bar.

                                                                        1. 9

                                                                          I had that opinion before Rust, and I’m certainly not speaking on behalf of the Rust team, so in my understanding, the hat is very inappropriate.

                                                                          (I’m also not making any claims about Rust’s size, in absolute terms nor relative to C)

                                                                          1. 4

                                                                            Or you can just test his claim with numbers. A full C semantics is huge compared to something like Oberon, whose grammar fits on a page or two. Forth is simpler, too. Whereas Ada and Rust are as complicated as can be.

                                                                            1. 5

                                                                              I agree that there are languages considerably smaller than C. In my view, there is a small and simple core to C that is unfortunately complicated by some gnarly details and feature creep. I’ve expressed a desire for a “better C” that does all we want from C without all the crap, and I sincerely believe we could make such a thing by taking C, stripping stuff and fixing some unfortunate design choices. The result should be the small and simple core I see in C.

                                                                              When comparing the complexity of languages, I prefer to ignore syntax (focusing on that is kinda like bickering about style; yeah, I have my own style too, and I generally prefer simpler syntax). I also prefer to ignore the standard library. What I would focus on is the language semantics as well as the burden they place on implementation. I would also weigh languages against the features they provide; otherwise we’re talking apples vs oranges, where one language simply makes one thing impossible or you have to “invent” that thing outside the language spec. It may look simpler to only present a 64-bit floating-point numeric type, but that only increases complexity when people actually need to deal with 64-bit integers and hardware registers.

                                                                              That brings us to Oberon. Yes, the spec is short. I guess that’s mostly not because it has simple semantics, but because it lacks semantics. What is the range of integer types? Are they bignums, and if so, what happens when you run out of memory trying to perform multiplication? Perhaps they have a fixed range. If so, what happens when you overflow? What happens if you divide by zero? And what happens when you dereference nil? No focking idea.

                                                                              The “spec” is one for a toy language. That is why it is so short. How long would it grow if it were properly specified? Of course you could decide that everything the spec doesn’t cover is undefined and maybe results in program termination. That would make it impossible to write robust programs that can deal with implementation limitations in varying environments (unless you have perfect static analysis). See my point about apples vs oranges.

                                                                              So the deeper question I have is: how small can you make a language with

                                                                              1. a spec that isn’t a toy spec
                                                                              2. not simply shifting complexity to the user
                                                                              3. enough of the same facilities we have in C so that we can interface with the hardware as well as write robust programs in the face of limited & changing system resources

                                                                              Scheme, Oberon, PostScript, Brainfuck, etc. don’t really give us any data points in that direction.
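
                                                                              As one data point for what a non-toy spec entails, Go happens to answer all three questions raised above explicitly in its specification: signed integer overflow wraps, integer division by zero panics at run time, and dereferencing nil panics at run time. A small sketch of each defined behaviour (the safeDiv/safeDeref helpers are my own illustration):

                                                                              ```go
                                                                              package main

                                                                              import "fmt"

                                                                              // safeDiv converts the spec-mandated run-time panic on integer
                                                                              // division by zero into an error via recover.
                                                                              func safeDiv(a, b int) (q int, err error) {
                                                                              	defer func() {
                                                                              		if r := recover(); r != nil {
                                                                              			err = fmt.Errorf("recovered: %v", r)
                                                                              		}
                                                                              	}()
                                                                              	return a / b, nil
                                                                              }

                                                                              // safeDeref does the same for a nil pointer dereference.
                                                                              func safeDeref(p *int) (v int, err error) {
                                                                              	defer func() {
                                                                              		if r := recover(); r != nil {
                                                                              			err = fmt.Errorf("recovered: %v", r)
                                                                              		}
                                                                              	}()
                                                                              	return *p, nil
                                                                              }

                                                                              func main() {
                                                                              	var x int8 = 127
                                                                              	x++ // defined behaviour: signed overflow wraps to -128
                                                                              	fmt.Println(x)

                                                                              	_, err := safeDiv(1, 0) // defined behaviour: run-time panic
                                                                              	fmt.Println(err)

                                                                              	_, err = safeDeref(nil) // defined behaviour: run-time panic
                                                                              	fmt.Println(err)
                                                                              }
                                                                              ```

                                                                              Pinning each of these down is exactly the kind of page count a short “toy” spec avoids paying.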

                                                                              1. 5

                                                                                So the deeper question I have is: how small can you make a language with

                                                                                1. a spec that isn’t a toy spec
                                                                                2. not simply shifting complexity to the user
                                                                                3. enough of the same facilities we have in C so that we can interface with the hardware as well as write robust programs in the face of limited & changing system resources

                                                                                Scheme, Oberon, PostScript, Brainfuck, etc. don’t really give us any data points in that direction.

                                                                                Good question. There are a few languages with official standards (sorted by page count) that are also used in practice (well… maybe not Scheme ;>):

                                                                                1. Scheme r7rs - 88 pages - seems to be the only language without a useful standard library
                                                                                2. Ruby 1.8 - 341 pages
                                                                                3. Ada 95 - 582 pages
                                                                                4. Fortran 2008 - 621 pages
                                                                                5. C11 - 701 pages
                                                                                6. EcmaScript - 885 pages
                                                                                7. Common Lisp - 1356 pages
                                                                                8. C++17 - 1623 pages

                                                                                I know that page count is a poor metric, but it looks like ~600 pages should be enough :)

                                                                                1. 3

                                                                                  Here are the page counts for a few other programming language standards:

1. PL/I general-purpose subset - 443 pages
2. Modula-2 - 800 pages (base - 707 pages, generics - 45 pages, objects - 48 pages)
3. Ada 2012 - 832 pages
4. Eiffel - 172 pages
5. ISO Pascal - 78 pages
6. Jovial J73 - 168 pages
                                                                                  1. 2

                                                                                    I know that page count is poor metric, but it looks like ~600 pages should be enough :)

                                                                                    Given that N1256 is 552 pages, yeah, without a doubt.. :-)

                                                                                    The language proper, if we cut it off starting at “future language directions” (then followed by standard library, appendices, index, etc.) is only some 170 pages. It’s not big, but I’m sure it could be made smaller.

                                                                                  2. 2

                                                                                    I’ve expressed a desire for a “better C” that does all we want from C without all the crap, and I sincerely believe we could make such a thing by taking C, stripping stuff and fixing some unfortunate design choices. The result should be the small and simple core I see in C.

That might be worth writing up as a hypothetical design. I was exploring that space as part of bootstrapping for C compilers. My design idea actually started with x86 assembler, trying to design a few high-level operations that map over it and also work on RISC CPUs: expressions, a 64-bit scalar type, a 64-bit array type, variables, stack ops, heap ops, conditionals, goto, and Scheme-like macros. Everything else should be expressible in terms of the basics with the macros or compiler extensions. The common stuff gets a custom, optimized implementation to avoid macro overhead.

“What I would focus on is the language semantics as well as the burden they place on implementation.”

Interesting you arrived at that, since some others and I talking verification are convinced a language design should evolve with a formal spec for that reason. It could be as simple as Abstract State Machines or as complex as Isabelle/HOL. The point is that the feature is described precisely in terms of what it does and its interaction with other features. If one can’t describe that precisely, how the hell is a complicated program using those same features going to be easy to understand or predict? As an additional example, adding a “simple, local change” can show unexpected interactions or state explosion once you run the model. Maybe not so simple or local after all, but it isn’t always evident if you’re just talking in vague English about the language. I was going to prototype the concept with Oberon, too, since it’s so small and easy to understand.

                                                                                    “but because it lacks semantics.”

                                                                                    I didn’t think about that. You have a good point. Might be worth formalizing some of the details to see what happens. Might get messier as we formalize. Hmm.

                                                                                    “So the deeper question I have is: how small can you make a language with”

                                                                                    I think we have answers to some of that but they’re in pieces across projects. They haven’t been integrated into the view you’re looking for. You’ve definitely given me something to think about if I attempt a C-like design. :)

                                                                            2. 4

                                                                              He also says that the issues with memory-safety in C are overrated, so take it with a grain of salt.

                                                                              1. 13

                                                                                He is not claiming that memory safety in general is not an issue in C. What he is saying is that in his own projects he was able to limit or completely eliminate dynamic memory allocation:

                                                                                In the 32 kloc of C code I’ve written since last August, there are only 13 calls to malloc overall, all in the sokol_gfx.h header, and 10 of those calls happen in the sokol-gfx initialization function

                                                                                The entire 8-bit emulator code (chip headers, tests and examples, about 12 kloc) doesn’t have a single call to malloc or free.

                                                                                That actually sounds like someone who understands that memory safety is very hard and important.

                                                                                1. 3

                                                                                  Not at all the vibe I got from it.

                                                                                2. 4

                                                                                  I’m not familiar with either of those languages, but any idea what the author means by this?

                                                                                  I’m also way more interested in Zig than I am in Rust.

                                                                                  What I think he’s saying is that the two “big” languages are overhyped and have gained disproportionate attention for what they offer, compared to some of the smaller projects that don’t hit HN/Lobsters headlines regularly.

Or maybe it’s a statement w.r.t. size and scope. I don’t know Swift well enough to say if it counts as big. But Rust looks like “Rubyists reinvented C++ and claim it to be a replacement for C.” I feel that people who prefer C are into things that are small and simple. C++ is a behemoth. When your ideal replacement for C would also be small and simple, perhaps even more so than C itself, Rust starts to seem more and more like an oil tanker as it goes the C++ way.

                                                                                  1. 3

                                                                                    I agree with your point on attention. I just wanted to say maybe we should get a bit more credit here:

                                                                                    “compared to some of the smaller projects that don’t hit HN/Lobsters headlines regularly.”

Maybe HN, but Lobsters covers plenty of oddball languages, sometimes with good discussions, too. We’ve had the authors of a few of them participate. I’ve kept digging them up to keep fresh ideas on the site.

                                                                                    So, we’re doing better here than most forums on that. :)

                                                                                    1. 2

                                                                                      Sure! Lobsters is where I first learned about Zig. :-)

                                                                                1. 6

I loved QBasic and it’s where I started too… but I don’t see how it is any easier than, say, Ruby or Python for the same tasks being done in this post. No need to introduce advanced concepts just because a language has them.

                                                                                  1. 9

                                                                                    He double clicked the icon on his desktop and in a split second, we were in the IDE..

                                                                                    Ruby and Python can’t do that.

                                                                                    Also, OP doesn’t mention it, but: graphics. In Ruby or Python if you want graphics you end up having to deal with gem and rvm and pip and virtualenv and on and on and on and fucking on. In QBasic you type CIRCLE and hit F5.

                                                                                    I’ve written about these issues before. You have to try teaching programming for yourself to see the tiny things that trip noobs up.

                                                                                    1. 3

                                                                                      I expected graphics to be the reason OP thought QBasic was the way to go, but then it was never mentioned so it seemed like a much less compelling argument.

                                                                                      1. 3

                                                                                        you end up having to deal with gem and rvm and pip and virtualenv and on and on and on and fucking on. In QBasic you type CIRCLE and hit F5.

                                                                                        Racket can do just that.

                                                                                        You have to try teaching programming for yourself to see the tiny things that trip noobs up.

                                                                                        Racket is also made specifically for teaching.

                                                                                        1. 2

True. The drawback of Racket: much as I hate people harping on Lisp parentheses, they do hinder noobs. Also mentioned in my post linked above.

But the Racket folks also have Pyret, which seems pretty nice, though I haven’t tried it.

                                                                                        2. 2

And then VB6 made GUIs about as easy. And like you said about QBasic, I’d click VB6, the IDE loaded in a second, start a project, type some code for the console thing if I wanted, press run, wait one second, results. Rinse, repeat. The concept that mattered aside from speed is flow: the BASICs have a development flow that maximizes mental flow to keep people constantly moving. Pascals can do it, too, since they compile fast. Smalltalks and Lisps are on the extreme end of it.

The other advantage of BASICs is that they look like the pseudocode people write down before actually coding. BASIC is so close to pseudocode that they can do the pseudocode in BASIC itself or barely need a translation step. In the Slashdot and other comments, I see the “it looks just like pseudocode!” response show up constantly among people who started with BASIC. That’s something that shouldn’t be ignored in language design, at least for beginners. Probably also for DSL or HLL designers trying to keep things closer to the problem statement.

                                                                                      1. 1

                                                                                        Less well known are integer overflow bugs. Offset-length pairs, defining a sub-section of a file, are seen in many file formats, such as OpenType fonts and PDF documents. A conscientious C programmer might think to check that a section of a file or a buffer is within bounds by writing if (offset + length < end) before processing that section, but that addition can silently overflow, and a maliciously crafted file might bypass the check.
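As a sketch of the fix the quote hints at (the helper name is made up), the comparison can be rearranged so that no addition is performed and nothing can silently wrap:

```c
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical helper: true when [offset, offset+length) lies inside a
 * region of `end` bytes, without ever computing offset + length.
 * The section fits iff offset <= end and length <= end - offset;
 * the subtraction cannot underflow because of the first test. */
static bool section_in_bounds(size_t offset, size_t length, size_t end)
{
    return offset <= end && length <= end - offset;
}
```

A crafted file with an offset near `SIZE_MAX` makes the naive `offset + length < end` test wrap past zero and pass; the rearranged form rejects it.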

                                                                                        So, some experiences from the pixel mines.

                                                                                        We were implementing our own format for 3D mesh data for static (initially) meshes, and went through probably 4-5 implementations. The initial format was based on nice tree structures, while the final format was more of a length-prefixed block-based approach.

                                                                                        The reason for this change is that, if you have a maliciously-formed tree structure, it can be really easy to hamstring your parser. You have extra records, or not enough records, and parsing gets to be a headache. You also have to build a smarter parser, because you kind of need to keep an idea of state as you pop up and down in the hierarchy of things, and you can’t really make a lot of allocation guarantees ahead of time until the tree is walked.

                                                                                        By contrast, a block format lets you quickly skip down the list of blocks, do most of your allocations up front, and then patch up and copy things around at the end. At that point, having good safe arithmetic routines prevents you from over-allocating or under-allocating things.

                                                                                        Towards that end, in C++ a very handy thing to do is to create a BinaryRegionReader class that provides “safe” and bounds-checked access to a region of memory, and which allows the creation of child BinaryRegionReaders.
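A minimal C flavor of that idea might look like the sketch below; the names and layout are hypothetical, not the actual class described above:

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* A cursor over a fixed region that refuses to read or seek past its
 * bounds. Invariant: pos <= len. */
typedef struct {
    const uint8_t *data;
    size_t len;
    size_t pos;
} region_reader;

/* Read a little-endian u32, failing instead of running off the end. */
static bool rr_read_u32le(region_reader *r, uint32_t *out)
{
    if (r->len - r->pos < 4)
        return false;
    const uint8_t *p = r->data + r->pos;
    *out = (uint32_t)p[0] | (uint32_t)p[1] << 8
         | (uint32_t)p[2] << 16 | (uint32_t)p[3] << 24;
    r->pos += 4;
    return true;
}

/* Create a child reader over a sub-span, mirroring the nested readers
 * described above. The bounds check avoids off + len overflow. */
static bool rr_child(const region_reader *r, size_t off, size_t len,
                     region_reader *child)
{
    if (off > r->len || len > r->len - off)
        return false;
    child->data = r->data + off;
    child->len = len;
    child->pos = 0;
    return true;
}
```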

                                                                                        1. 1

                                                                                          Towards that end, in C++ a very handy thing to do is to create a BinaryRegionReader class that provides “safe” and bounds-checked access to a region of memory, and which allows the creation of child BinaryRegionReaders.

                                                                                          So, basically std::string_view?

                                                                                          1. 0

                                                                                            Sort of, but also with the ability to read off native types in order and respect endianness and do seeking safely.

                                                                                            1. 1

                                                                                              Sure, that makes sense.

Btw, what led you to go with a completely custom format instead of using something like Protocol Buffers or Cap’n Proto?

                                                                                              1. 1

                                                                                                Reasons we didn’t use those:

                                                                                                • We had our own routines that better handled certain issues (see: safe arithmetic)
                                                                                                • We were doing some of that as a learning project
• Wanted to easily support multiple platforms - our codebase was already set up for that
• Would still have required coming up with a format for the layout (since you want something that is easy to shove into graphics buffers anyway), even with the help those libs provide
• Adding extra build steps and autogenerated code wasn’t appealing
                                                                                                • Having our own code/copyright gave more licensing flexibility (WTFPL ftw)
                                                                                        1. 2

For all the diligence required to solve this sort of problem, you’d think that would start pushing programming more towards, ya know, engineering as a way of building things. But at least it’s a cool story!

                                                                                          1. 2

                                                                                            As it was a hardware issue in this case, I’m not sure I understand what you’re saying. Do you mean that if, say, the software was verified and guaranteed not to crash, they would have immediately diagnosed the crash as a hardware issue, thus saving a lot of time?

                                                                                            1. 3

In general, you can do that sort of thing in Design-by-Contract with a certified and a non-certified compiler. In debug mode, the contracts can all become runtime checks showing you exactly which module fed bad input into the system. That lets you probe around to localize exactly which box took input that satisfied its preconditions, did something with it, and produced output that broke the system. When looking at that module, it will normally be a software bug. However, you can run all failing tests through both a certified and a regular binary to see if the failure disappears in one. If it does, it’s probably a compiler error. A similar check, running sequential vs concurrent, covers the case where it’s a concurrency bug. Similarly, if the logic makes sense, it’s not concurrency, and it passes on the certified compiler, it’s probably an error involving something else reaching into your memory or CPU to corrupt it. That’s going to be either a hardware fault or a privileged component in software. With some R&D, I think we could develop techniques for those components that do something similar to DbC in software for quickly isolating hardware faults.
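A rough sketch of those debug-mode contract checks in C, with hypothetical REQUIRE/ENSURE macros (Eiffel-style names, not any particular library):

```c
#include <stdio.h>
#include <stdlib.h>

/* In debug builds every contract becomes a runtime check that names the
 * module which violated it; in release builds they compile away. */
#ifndef NDEBUG
#define REQUIRE(mod, cond) \
    do { if (!(cond)) { \
        fprintf(stderr, "%s: precondition failed: %s\n", mod, #cond); \
        abort(); } } while (0)
#define ENSURE(mod, cond) \
    do { if (!(cond)) { \
        fprintf(stderr, "%s: postcondition failed: %s\n", mod, #cond); \
        abort(); } } while (0)
#else
#define REQUIRE(mod, cond) ((void)0)
#define ENSURE(mod, cond)  ((void)0)
#endif

/* Example: a caller passing a bad divisor is caught at the boundary,
 * pointing straight at the offending module instead of a distant crash. */
static int safe_div(int a, int b)
{
    REQUIRE("safe_div", b != 0);
    int q = a / b;
    ENSURE("safe_div", q * b + a % b == a);
    return q;
}
```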

That said, I don’t think it was applicable in this specific case. They’d have not seen that problem coming unless they were highly experienced embedded engineers. I’ve read articles from them where they look into things like the effects of temperature, the bootloader starting before PLLs sync up, etc. Although I can’t find the link, this isn’t the first time sunlight has killed off a box or a piece of software. I’ve definitely seen this before. I think a friend might have experienced it with a laptop, too. We might add to the best practices for hardware/software development to make sure the box isn’t in sunlight or another situation that can throw off its operating temperature. I mean, PC builders have always watched that a bit, but maybe developers on new hardware should ensure it’s true by default. The hardware builders should also test the effects of direct sunlight or other heat to make sure the boxes don’t crash. Some do already.

                                                                                              1. 3

                                                                                                However, you can run all failing tests through both a certified and regular binary to see if the failure disappears in one. If it does, it’s probably a compiler error.

I don’t think that’s true, at least in C. I know CompCert at least takes “certified” to mean “guarantees well-defined output for well-defined input”, so it’s free to make a hash of any UB-containing code, the same as Clang.

                                                                                                That said, if your test results change between any two C compilers, it’s a strong suggestion you have a UB issue.

                                                                                                1. 2

                                                                                                  it’s a strong suggestion you have a UB issue.

True, too. There are teams out there that test with multiple compilers to catch stuff like that. The OpenBSD folks mentioned it before as a side benefit of cross-platform support.

                                                                                                  1. 2

                                                                                                    That said, if your test results change between any two C compilers, it’s a strong suggestion you have a UB issue.

In C, this can also mean that you depend on implementation-defined behaviour or unspecified behaviour, which are not the same as undefined behaviour (though they will often also be a bad thing ;)).
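A small illustration of that distinction, with made-up helper names:

```c
/* Implementation-defined: the compiler must document its choice, but two
 * conforming compilers may differ. Right-shifting a negative int is the
 * classic case (arithmetic vs logical shift). */
static int shift_negative(int v)
{
    return v >> 1;
}

/* Fully defined (since C99): integer division truncates toward zero on
 * every implementation, so prefer it when the exact meaning matters.
 * Note it differs from an arithmetic shift for odd negative values. */
static int halve_toward_zero(int v)
{
    return v / 2;
}

/* Unspecified: several outcomes are allowed and need not be documented,
 * e.g. the evaluation order of f() + g().
 * Undefined: anything may happen, e.g. signed overflow (INT_MAX + 1),
 * which the optimizer may assume never occurs. Deliberately not
 * executed here. */
```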

                                                                                                  2. 2

Are you proposing gamedev companies should adopt those kinds of techniques?

                                                                                                    1. 3

I’m always pushing all programmers to adopt anything that helps them and fits their constraints. Aside from Design-by-Contract, I’d hold off on recommending game developers do what I described until the tooling and workflow are ready for easy adoption. I’ve got probably a dozen high-level designs for it turning around in my head, trying to find the simplest route.

One thing I’m sure about is that using C++ is a drawback since it’s so hard to analyze. About everything I do runs into walls of complexity if the code starts in C++. Still working on ideas for that, like C++-to-C compilers or converting it into an equivalent, easier-to-analyze language that compiles to C (e.g. ZL in Scheme). So, I recommend avoiding anything as complex as C++ if one wants benefits from future work analyzing either C or intermediate languages.

                                                                                                      Edit: Here was a case study that found DbC fit game dev requirements.

                                                                                                      1. 2

                                                                                                        The other day there was a link to a project that does source-to-source transformation on C++ code to reduce the level of indirection: https://cppinsights.io

                                                                                                        1. 2

Try doing an exhaustive list of FOSS apps for C and C++ doing this stuff. You’ll find there are several times more for C, which also collectively get more done. There are also several certifying compilers for C subsets, while formal semantics for C++ are still barely there despite it being around for a long time.

So, that’s neat, but one anecdote going against a general trend.

                                                                                                        2. 1

                                                                                                          Im always pushing all programmers to adopt anything that helps them that they can fit in their constraints.

                                                                                                          This is meaningless - is there someone who doesn’t?

                                                                                                          Aside from Design-by-Contract, I’d hold off on recommending game developers do what I described until the tooling and workflow are ready for easy adoption.

So the only thing you propose would not help at all with the overheating console or with the performance regression from that post ;)

One thing I’m sure about is that using C++ is a drawback since it’s so hard to analyze.

Sure, it’s hard, but there are tools that can do some sort of static analysis for it (for example Coverity or Klocwork). Either way, there are no alternatives today to C++ as the language for an engine that can be used for AAA games.

                                                                                                          Here was a case study that found DbC fit game dev requirements.

                                                                                                          Have you actually read it? I have nothing against DbC, but as far as I can see that study doesn’t really show any great benefits of DbC nor is it realistic. They do show that writing assertions for pre/post conditions and class invariants helps in diagnosing bugs (which is obvious), but not much more.

                                                                                                          They don’t show that really hard bugs are clearly easier to diagnose and fix with DbC, nor do they show that cost/benefit ratio is favourable for DbC.

Finally, that paper fails to describe in detail how the experiment was conducted - all I could gather is this:

                                                                                                          Implementation took approximately 45 days full-time and led to source code consisting of 400 files.

                                                                                                          code was predominantly implemented by one person

Even if it were an interesting paper (which imo it is not), it’s impossible to try to replicate it independently.

                                                                                                          1. 1

                                                                                                            “This is meaningless - is there someone who doesn’t?”

Yes, most developers don’t, if you look at the QA of both proprietary and FOSS codebases. I mean, every study done on formal specifications said developers found them helpful. Do you and most developers you see use them? Cuz I swear I’ve been fighting an uphill battle for years even getting adoption of consistent interface checks and code inspection for common problems.

                                                                                                            “but as far as I can see that study doesn’t really show any great benefits of DbC nor is it realistic. They do show that writing assertions for pre/post conditions and class invariants helps in diagnosing bugs (which is obvious)”

It’s so “obvious” that most developers aren’t doing DbC. What it says is that DbC fits the requirements of game developers. Most formal methods don’t. It also helped find errors quickly. It’s not mere assertions in the common way they’re used: it can range from simple Booleans to more complex properties. One embedded project used a whole Prolog. Shen does something similar to model arbitrary type systems so you can check what you want to. Finally, you can generate tests directly from contracts, plus runtime checks for combining with fuzzers that take one right to the failures. Is that standard practice among C developers like it has been in Eiffel for quite a while? Again, you must be working with some unusually QA-focused developers.

                                                                                                            “would not help at all with problem with overheating console “

                                                                                                            First part of my comment was about general case. Second paragraph said exactly what you just asked me. Did you read it?

                                                                                                            “Even it was an interesting paper (which imo it is not) it’s impossible to try and replicate it independently.”

The fact that it used evidence at all would put it ahead of many programming resources that are more like opinion pieces. The good news is you don’t have to replicate it: you can create a better study that tries the same method against the same criteria. Then replicate that, if that’s the kind of thing you want to do.

                                                                                                            1. 1

                                                                                                              I mean, every study done on formal specifications said developers found them helpful. Do you and most developers you see use them?

                                                                                                              Academic studies show one thing, while practitioners for unknown reasons do not adopt practices recommended by academics. Maybe the studies are somehow flawed. Maybe the gains reported in the studies don’t have a good ROI for most of the gamedev industry?

                                                                                                              What it says is that DbC fits the requirements of game developers. Most formal methods don’t. It also helped find errors quickly. It’s not mere assertions in the common way they’re used

                                                                                                              The study was using “mere assertions in the common way they’re used”, so I don’t know what the point of the rest of that paragraph is - the techniques you mention there are not even mentioned in that paper, so there is no proof of their applicability to gamedev.

                                                                                                              First part of my comment was about general case. Second paragraph said exactly what you just asked me.

                                                                                                              I asked you about your recommendations for gamedevs not about some unconstrained general case, and just pointed out that your particular recommendation would not have helped in the case from the story - nothing more :)

                                                                                                              Did you read it?

                                                                                                              Sure I have, but to make it easier in future try to make your posts more succinct ;)

                                                                                                              1. 1

                                                                                                                Academic studies show one thing, while practitioners for unknown reasons do not adopt practices recommended by academics.

                                                                                                                I agree in the general case, except that most practitioners who try some of these methods get good results. Then most other practitioners ignore them. Just as CompSci has its irrelevant stuff, practitioners have their own cultures of chasing some things that work and some that don’t. However, DbC was deployed to industrial practice via Eiffel and EiffelStudio. The assertions that are a subset of it have obvious value. SPARK used them for proving the absence of errors. Now, Ada 2012 programmers are using them as I described with contract-based testing. Lots of people are also praising property-based testing and fuzzing based on real bugs they’re finding.

                                                                                                                So, this isn’t just a CompSci recommendation or study: it’s a combination of techniques, each with lots of industrial evidence and supporters, that work well together and have low cost. With that, either mainstream programmers don’t know about them or they’re ignoring effective techniques despite evidence. The latter is all too common.
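                                                                                                                As a toy illustration of that combo (pure Python, with hand-rolled random-input generation standing in for a real fuzzer or for a tool like Hypothesis or SPARK; all names here are my own):

```python
import random

def pre(xs):
    # Precondition: any list of ints is acceptable input.
    return all(isinstance(x, int) for x in xs)

def post(xs, out):
    # Postcondition: the output is the sorted permutation of the input.
    return sorted(xs) == out and all(a <= b for a, b in zip(out, out[1:]))

def my_sort(xs):
    return sorted(xs)  # stand-in for the implementation under test

def fuzz_contract(fn, trials=1000, seed=0):
    """Drive random inputs that satisfy the precondition and check the
    postcondition on each result: a toy version of generating tests
    from contracts."""
    rng = random.Random(seed)
    for _ in range(trials):
        xs = [rng.randint(-50, 50) for _ in range(rng.randint(0, 20))]
        assert pre(xs)
        assert post(xs, fn(xs)), f"contract violated on input {xs}"
    return trials

print(fuzz_contract(my_sort))  # 1000
```

The point is the division of labor: the contract states what correct behavior means once, and the random driver takes you straight to a concrete failing input whenever the implementation breaks it.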

                                                                                                                “I asked you about your recommendations for gamedevs not about some unconstrained general case”

                                                                                                                What helps programming in general often helps them, too. That’s true in this case.

                                                                                                                “Sure I have, but to make it easier in future try to make your posts more succinct ;)”

                                                                                                                Oh another one of you… Haha.

                                                                                                                1. 2

                                                                                                                  Lots of people are also praising property-based testing and fuzzing based on real bugs they’re finding.

                                                                                                                  Those techniques are not what I would call “formal specifications” any more than the simplest unit tests are, but if you consider them as such, then…

                                                                                                                  either mainstream programmers don’t know about them or they’re ignoring effective techniques despite evidence. The latter is all too common.

                                                                                                                  … I have no studies to back this up, but my experience is different. DbC (as shown in that study you linked), property-based testing, and fuzz testing are techniques that are used by working programmers. Not for all the code, and not all the time, but they are used.

                                                                                                                  When I wrote about studies showing one thing and real life showing something opposite, I was thinking about methods like Promela, Spin, or Coq.

                                                                                                                  1. 1

                                                                                                                    Makes more sense. I see where you’re going with Coq, but Spin/Promela have lots of industrial success, similar to TLA+ usage now. They easily found protocol and hardware errors that other methods missed. Check them out.