1. 2

    Pretty nice introductory article, although I didn’t grasp the idea behind Gustafson-Barsis’ law at first. It might be me, though, since I found the wikipedia article on Gustafson’s law also a bit hard to follow.

    1. 2

      Reading it now, I didn’t do a very good job of explaining it either.

      One of the key observations from Gustafson’s paper is:

      One does not take a fixed-size problem and run it on various numbers of processors except when doing academic research; in practice, the problem size scales with the number of processors.

      I updated the blog post to hopefully reflect that difference better.

    1. 2

      Is Little’s law a useful model for the systems you deal with? The systems I work on tend to have power law and/or multi-modal latencies and in that context, knowing the mean latency is surprisingly uninformative.

      1. 3

        Mean latency is just the inverse of throughput and indeed uninformative from a system latency perspective (which is dominated by the tail latency). The reason I mention Little’s Law in the blog post is that it establishes the relationship between latency, throughput (or bandwidth), and concurrency/parallelism. A key point is Gustafson’s observation that there’s some lower bound on latency dictated by physical constraints, which means that at some point more throughput requires more parallelism (https://link.springer.com/referenceworkentry/10.1007%2F978-0-387-09766-4_79).
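
        As a rough worked example of how Little’s Law ties the three together (made-up numbers and function name, purely illustrative, not from the blog post): concurrency = throughput × latency, so once latency is at its physical floor, more throughput necessarily means more requests in flight.

            -- Little's Law: requests in flight = throughput * mean latency.
            requiredConcurrency :: Double -> Double -> Double
            requiredConcurrency throughput latency = throughput * latency

            main :: IO ()
            main = do
              print (requiredConcurrency 10000 0.002)  -- 10,000 req/s at 2 ms: ~20 in flight
              print (requiredConcurrency 40000 0.002)  -- 4x throughput at the latency floor: ~80 in flight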

        1. 1

          Good reply. Thanks!

      1. 3

        I find thinking about concurrency as a code design problem a better way to reason about it than as an execution model. It’s about designing your application so that the individual units are agnostic of execution order. They could execute sequentially as defined by the source, or reordered, without affecting the overall outcome of the application.
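
        To make that concrete, here is a small Haskell sketch (hypothetical, not from the discussion above): each unit of work is independent, so the runtime is free to evaluate them in source order or in parallel without changing the result.

            import Control.Parallel.Strategies (parMap, rdeepseq)  -- "parallel" package

            expensive :: Int -> Int
            expensive n = sum [1 .. n * 1000]   -- stand-in for real work

            main :: IO ()
            main = do
              print (sum (map expensive [1 .. 100]))              -- sequential
              print (sum (parMap rdeepseq expensive [1 .. 100]))  -- parallel, same outcome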

        1. 1

          That’s definitely a good way to design an application. However, I am arguing in the blog post that you also need to consider the execution model if performance is critical for your application. And for many applications out there, performance is important because of latency constraints and volume of data. Today, to maximize performance, you are pretty much forced to exploit parallelism because single-threaded performance is stagnating. To exploit parallelism, you need to consider your specific workload and find a programming model and an application architecture that suits it best.

          1. 2

            Oh, yeah, I wasn’t arguing against that at all. Just stating a difference between how I view parallelism and concurrency. They both have their benefits, and with their powers combined you can have a well-tuned application, barring any hard-to-debug errors.

        1. 2

          For me, it’s probably “Will Serverless End the Dominance of Linux in the Cloud?” by Ricardo Koller and Dan Williams. It’s an interesting take on the impact the serverless computing paradigm has on the OS (especially on Linux, which dominates the cloud).

          1. 7

            I am planning to finish my Haskell implementation of the Hindley-Milner type inference algorithm this week. I am using Basic Polymorphic Typechecking by Luca Cardelli (1987) as a reference. The paper includes a full implementation of the algorithm written in Modula-2 (see Appendix A for details). The same algorithm has previously been translated to Scala and Python. Once the implementation is finished, I am going to use it as a basis for type inference in Taylor (a Swift compiler written in Haskell).
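
            For anyone unfamiliar with the algorithm, here is a heavily simplified Haskell sketch of its core (illustrative names only, not my actual code, and without let-generalization): a small type representation plus unification, which computes the most general substitution that makes two types equal.

                import qualified Data.Map as Map

                data Type
                  = TVar String          -- type variable, e.g. a
                  | TCon String          -- type constant, e.g. Int
                  | TArr Type Type       -- function type t1 -> t2
                  deriving (Eq, Show)

                type Subst = Map.Map String Type

                apply :: Subst -> Type -> Type
                apply s t@(TVar v) = Map.findWithDefault t v s
                apply _ t@(TCon _) = t
                apply s (TArr a b) = TArr (apply s a) (apply s b)

                compose :: Subst -> Subst -> Subst
                compose s1 s2 = Map.map (apply s1) s2 `Map.union` s1

                unify :: Type -> Type -> Either String Subst
                unify (TVar v) t = bind v t
                unify t (TVar v) = bind v t
                unify (TCon a) (TCon b) | a == b = Right Map.empty
                unify (TArr a1 b1) (TArr a2 b2) = do
                  s1 <- unify a1 a2
                  s2 <- unify (apply s1 b1) (apply s1 b2)
                  return (compose s2 s1)
                unify t1 t2 = Left ("cannot unify " ++ show t1 ++ " with " ++ show t2)

                -- Bind a variable, rejecting infinite types via the occurs check.
                bind :: String -> Type -> Either String Subst
                bind v t
                  | t == TVar v = Right Map.empty
                  | occurs v t  = Left ("occurs check fails for " ++ v)
                  | otherwise   = Right (Map.singleton v t)
                  where
                    occurs x (TVar y)   = x == y
                    occurs _ (TCon _)   = False
                    occurs x (TArr a b) = occurs x a || occurs x b

                main :: IO ()
                main = print (unify (TArr (TVar "a") (TCon "Int"))
                                    (TArr (TCon "Bool") (TVar "b")))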

            1. 2

              I’ve been in this place before. Notice how the Python code is translated from Scala, which is translated from Perl, which is translated from Modula-2? I feel like I found a bug or two in the Python version, but can’t remember what they were…

            1. 5

              There’s now also a blog post by the author that explains what the code is doing in great detail.

              1. 10

                This week I will be improving Taylor, a work-in-progress Swift compiler written in Haskell. I have a preliminary parser and LLVM codegen in place that understand simple constant declarations, and I am hoping to add support for binary expressions and other primitive parts of the language next. As I am still more or less a Haskell newbie, I’m interested in feedback on how to structure the compiler properly. If anyone is interested in working on the compiler, drop me a line or send a pull request.

                1. 5

                  Using Haskell’s llvm-general as well. Very nice, I will watch this project with great interest.

                  1. 4

                    Awesome, will be watching your progress!

                  1. 10

                    Most advanced statically typed languages completely SUCK in the programmability department. Unless you are some genius of course.

                    It feels like all people who say things like this haven’t actually tried learning the more “advanced” languages. Haskell is the main language I work in and is IMO really easy to both learn and program in; you certainly don’t need to be a genius or even smart. The whole point of having the compiler is so that you don’t have to be smart.

                    It always feels like a cop-out: because the rumours are that languages like Haskell are academic, you relegate them to “you need to be a genius to use them, so I’m not even going to try”.

                    I may also be biased on the matter, of course, but I have yet to find someone who learnt Haskell go “this is too complicated for me”. I do wonder how much of it is bias and how much of it is true to the author’s point.

                    1. 5

                      It feels like all people who say things like this haven’t actually tried learning the more “advanced” languages. Haskell is the main language I work in and is IMO really easy to both learn and program in; you certainly don’t need to be a genius or even smart. The whole point of having the compiler is so that you don’t have to be smart.

                      Is your background in functional or imperative programming?

                      For people with extensive imperative programming background, the paradigm switch can be very hard and Haskell as a language is very unforgiving unless you’re FP all the way. It can certainly feel like you need to be a genius to make the leap.

                      1. 3

                        I went PHP -> ASP.NET -> Rails (stuck around here for like 8 years) -> Clojure -> Haskell

                        Perhaps making the mind-switch into FP via Clojure was a friendlier path than going directly to Haskell.

                        I think I understand the feeling though, that FP is so “different” that you have to change everything you are to start working with it. However, I don’t think that feeling is validated once you actually start trying. Most people I’ve seen pick it up have gone “oh, well, that was easier than expected…”

                    1. 3

                      This week I am working on Hornet’s fast interpreter. Hornet is an OpenJDK-based JVM implementation that focuses on predictable execution for applications with low-latency requirements. It runs on both Linux and OS X.

                      I have about half of the JVM opcodes covered now. Hopefully I will be able to reuse much of the code for the Dynasm and LLVM backends in the future. I managed to integrate MPS into the VM core a few weeks ago, but there’s no JVM stack walking yet, so it’s not fully functional.

                      Questions, comments, testing, and pull requests are welcome!

                      1. 8

                        Last week, I got off my ass and penned a blog entry on Hython. Oh goodness, I could write a book about this if I wanted to. Submitted it to a few places, regretting it a bit later. I’m normally OK with most criticism, but middlebrow dismissals really irritate me for some reason. On the code side, I got started on implementing exception handling in Hython. I’m leveraging continuations to do it properly, but even with those there’s still a lot of thinking to do. I’m going slowly on this part and following a guide on doing it well. Hopefully I don’t have to circle back to it much after I’m done.
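
                        In case anyone is curious what “leveraging continuations” means here, below is a heavily simplified sketch (not the actual Hython code; the names are made up) of the basic trick: callCC captures an escape continuation for the protected block, and raising runs the handler and then jumps out through that continuation, skipping the rest of the block.

                            import Control.Monad.Cont (ContT, callCC, runContT)
                            import Control.Monad.IO.Class (liftIO)

                            type Interp r = ContT r IO

                            -- Run a body with a handler; the "raise" action handed to the
                            -- body runs the handler and escapes, so the rest of the body
                            -- is never executed.
                            tryWith :: ((e -> Interp r a) -> Interp r a)
                                    -> (e -> Interp r a)
                                    -> Interp r a
                            tryWith body handler = callCC $ \escape ->
                              body (\e -> handler e >>= escape)

                            example :: Interp r ()
                            example =
                              tryWith
                                (\raise -> do
                                   liftIO (putStrLn "before raise")
                                   _ <- raise "ZeroDivisionError"
                                   liftIO (putStrLn "never reached"))
                                (\e -> liftIO (putStrLn ("handled: " ++ e)))

                            main :: IO ()
                            main = runContT example return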

                        This week, I’d like to continue working on exception handling, possibly while gathering notes to see what the next blog post on Hython will be about.

                        1. 9

                          Don’t let the negative comments get to you, honestly. Half of the people who make the snide comments are too lazy to even read what you wrote, much less have the courage to write themselves. There’s a real lack of project-based Haskell writing, so blogging about these topics is quite helpful to a lot of people, most of whom are silent.

                          1. 5

                            Thanks for this writeup, I loved it. Looking forward to more.

                            Putting stuff on the internet is painful, but there are always more people who silently like it than who leave uneducated, non-constructive criticism.

                            1. 5

                              Why are you even reading comments on Reddit? :-) I loved the blog post! I’m also waiting to read more.

                            1. 23

                              My entry is short this week as I’m not in the mood for talking much.

                              If anyone knows of interesting compiler or related work (including, say, LLDB, LLVM, etc), I may well be interested to hear about it. I prefer working remotely but might be interested in discussing other options.

                              As for Dylan, I don’t know what my future is with it at the moment. Multiple attempts to get others involved have largely failed. A comment that I made last week, about what if I did something that mattered, got 4 people to upvote it.

                              There are so many interesting things that can be done on top of an industrial quality foundation. Instead, people quake at the thought of working on something big or involved. We’re in a time where people are proud to try to proclaim that they’re less than average, as a number of posters here on lobste.rs like to do. So I’m frustrated, I’m sad, and overall, I’m just really tired.

                              I found an answer to my question last week. I started writing a new document as a project notebook listing existing projects, status, notes on them, etc. I just don’t see much point in continuing given the above and the lack of any payoff in doing so.

                              1. 16

                                Your Dylan posts have been really inspiring to me. Sometimes I worry that the work that I do is somewhat quixotic, or worse, aggressively counterproductive. Yours was neither, to me at least. You’ve been consistent, passionate, and downright interesting to read for the past few. Right now it seems like your project is just a struggling seed, but it’s probably gotten farther than the vast unseen majority of passion projects posted to message boards.

                                I’m sorry that you’re feeling down. But if it means anything, your posts have brought me up.

                                Also, for perspective: there are on the order of 3000 users on lobste.rs, and I’m willing to bet that you have one of the highest average interest per post [avg comment score, large volume of posts]. So don’t sweat the upvotes. In our small pond, you definitely have our interest. For me at least, I strongly associate this small community with BruceM who loves OpenDylan. Be well, I wish you the best.

                                Edit to update: TL;DR - I don’t know how I can help you with Dylan, but I want to point out that perhaps the payoff will be in an unexpected form; you inspired at least one person when they were down.

                                1. 3

                                  Also, relying on others for payoff will not really lead to a lot of happiness, probably. (From experience, anyway.) Maybe just do things for the fun of it.

                                  1. 5

                                    To be sure, I do most of what I do because I enjoy it. I usually refer to myself as “fun-employed”. I do contract work that I enjoy and I do other projects that I enjoy (Dylan stuff being a part of that). My sadness and tiredness stem in part from some other issues (depression), but also the realization that I can’t achieve some of my goals for Dylan by myself, so I’ve tried hard to get others involved without much success. Maybe I let myself get a bit too deep into what I want out of Dylan for my own good.

                                2. 6

                                  I am not surprised that it’s so difficult to attract new developers to OpenDylan. There are tons of new programming languages out there now and you’re competing for attention against companies like Microsoft, Apple, Google, and Mozilla.

                                  I hadn’t heard of Dylan before and one obvious question I have is: why should I care and invest time in learning about it? For example, Swift is obviously interesting if you’re working on iOS. Rust is a very interesting approach to systems programming. Go is starting to be everywhere so the big ecosystem around it makes it very interesting.

                                  What is special and unique about Dylan, and where does it really shine?

                                  1. 16

                                    Funny thing … Dylan was originally developed at Apple (along with CMU and Harlequin as partners) in the 1990s. It died at Apple for a variety of reasons, most of which had little to do with Dylan itself, much like many projects at Apple in the early to mid 1990s.

                                    What makes Dylan interesting?

                                    • It is a Lisp with an infix syntax.
                                    • Multiple dispatch / generic functions ala CLOS.
                                    • It has a Common Lisp style condition system.
                                    • It was the first or one of the first languages to offer a macro system with an infix syntax.
                                    • The core of the language is pretty elegant. The core concepts fit together cohesively and make sense together. (In contrast to some other languages around today.)
                                    • It has a pretty involved type system, including some aspects of dependent types, but without everything that one would want today. The type system is optional, but most commonly written code is typed.
                                    • It generates native executables and shared libraries.
                                    • Pretty good code generation as the compiler was previously a commercial product with a large team working on it who had years of experience implementing Common Lisp and other things.

                                    Personally, I came (back) to Dylan after a very unhappy time seeing where things were headed with Scala: poor build tools, a slow compiler, a complicated language, a slow development cycle. I decided that wasn’t a world that I wanted to live in. I also ended up not wanting to be tied to the JVM, which also changed my available options and interests.

                                    For someone who wants to learn and do interesting things, there’s a lot of open projects and interesting things to do.

                                    • Interested in numerics and generating good code and eliminating boxing overhead? We do okay at that, but can do better.
                                    • Ever wanted to hack on a compiler backend? We have a couple of those.
                                    • Been interested in designing aspects of a type system or working on implementing parts of a type system? We have a number of open projects in those areas.
                                    • Interested in FFI, integrating with Objective C libraries or other things? We have a number of open projects in that area as well, including the basics of an Objective C bridge and a fully implemented C-FFI.
                                    • Find macros interesting? We have some fun things to work on in that space as well.

                                    But those are things for people who are interested in working to extend their skills in the language design and implementation area.

                                    Without thinking too hard, we can probably come up with a good 30-50 projects, involving writing code that works with an industrial-quality compiler implementation or related tools (like extensions to LLDB, etc.), where the code involved could be in Dylan, C, C++, Python, JavaScript, etc., depending on what exactly the project was. Some of these are more suitable for beginners, some for experts, and some for everyone in between. You don’t have to learn just by building toy compilers; at some point, diving into a full-blown system will be a great experience.

                                    I personally enjoy writing prototype code in Dylan, but we’re missing libraries. Most of them aren’t too hard to do, but there’s plenty of help needed and plenty of areas where we can help someone learn wonderful and amazing new things.

                                    I’m currently building my own SAT solver as I’m interested in what might be possible with the addition of refinement types to the type system. (Among other potential uses for a SAT solver.)

                                  2. 4

                                    Sorry to hear this. You sound very tired.

                                    Can you put the Dylan stuff down for a bit? Just enough to recharge? FWIW, it sounds really cool; I’m just a bit committed to finishing something big of my own for once. I don’t think I’m the only one who’s in the “temporarily unavailable” category, either. I agree that people fear working on something big; our ‘technological plenty’ unintentionally creates a culture of consumption and taste over raw creation.

                                    I hope I can read more about Dylan via you and others.

                                    1. 2

                                      I think I understand some of what you’re going through. At work, I’m involved with several huge-scale projects, but sometimes it’s hard to get people to see past the immediate problems (JIRA-chasing, if you will) to what we can do on a larger scale. It’s hard work, and an uphill battle, but if nothing else, I’m working to build the internet I want to see, and quite frankly, I couldn’t do anything else. In the same vein, you’re building the language you want to see, and it’s an uphill battle, and I’m sure many times people don’t seem to see the bigger picture. Hang in there.

                                      1. 1

                                        There are so many interesting things that can be done on top of an industrial quality foundation. Instead, people quake at the thought of working on something big or involved. We’re in a time where people are proud to try to proclaim that they’re less than average, as a number of posters here on lobste.rs like to do.

                                        Are you being hyperbolic, or do you actually think that’s the issue?

                                        1. 1

                                          Both. It is certainly possible to pull out posts and comments saying similar things to what I mentioned. But should we always take them at their own word? It isn’t clear.

                                          This sort of comment isn’t all that uncommon: https://twitter.com/relrod6/status/513943840949288960

                                          1. 1

                                            Yeah; it’s a standard boring comment. I wouldn’t worry about it.

                                      1. 6

                                        I am sharpening my Haskell skills by writing a compiler for the Swift programming language, using Alex/Happy for the lexer and parser and LLVM for code generation. It is in its very early stages, but feedback from fellow Haskellers and compiler hackers is welcome!

                                        1. 5

                                          I just had a quick poke through, but one big piece of advice I can give you is to always insert type signatures for top-level definitions. It’s a lesson I learned the hard way: just because the type can be inferred doesn’t mean you want it to be.
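
                                          A tiny illustration of the point (hypothetical function, not from the Taylor code base): with the signature, a misuse fails right at the call site; without it, GHC infers the more general Num a => a -> a -> a, and mistakes only surface somewhere further away.

                                              -- Meant to work on Doubles only; the signature pins that down.
                                              scale :: Double -> Double -> Double
                                              scale factor x = factor * x

                                              main :: IO ()
                                              main = print (scale 2 3.5)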

                                        1. 1

                                          Too bad that unlike the rest of the scalac forks, this one seems to be intentionally license incompatible.

                                          1. 7

                                            How is the new license incompatible with the previous?

                                            1. 7

                                              Paul P’s changes are distributed under the Apache 2.0 license, which is compatible with the 3-clause BSD license that AFAICT scalac uses.

                                              @soc, can you please clarify what you mean here?

                                              1. 2

                                                While the Apache License 2.0 is technically compatible, mainline Scala accepts only code with a CLA (and licenses it under BSD afterwards).

                                                The big question is a) whether PaulP’s CLA is still active and b) – if the CLA was still active – whether code made available online counts as a contribution (I’d say no).

                                                Regarding the CLA, http://typesafe.com/contribute/cla/scala/check/paulp says yes, and considering chapter 8 of the CLA, EPFL needs to be made aware of any changes regarding the CLA’s status.

                                                Let’s wait and see if there is any movement.

                                            2. 6

                                              I think the intent is for Paul to share his work with TypeLevel:

                                              publishing now and giving [Typelevel] an opportunity to exploit my work seems the lesser evil.

                                            1. 2

                                              I am curious to know why some people are voting this submission as off-topic. The article makes interesting points about issues with Python 3 migration. I don’t personally agree with the conclusion of the article but it’s still a good read if you’re interested in programming language evolution and/or Python.

                                              1. 6

                                                This week I am planning to integrate the Memory Pool System into Hornet, an experimental JVM implementation for low-latency applications that I have been working on for a while now.

                                                1. 2

                                                  Awesome to hear of more people using MPS. :) Are they aware of what you’re doing?

                                                  1. 1

                                                    No, I only learned about MPS recently and haven’t finished the integration. I’ll drop them an email when I have something working. :-)

                                                1. 9

                                                  How do you even know if /dev/(u)random is random in the first place? This may sound like a strange question, but it isn’t. You can’t just trust a file because of its path. Consider an attacker ran the following:

                                                  Now both random devices are actually large sparse files with known data. This is worse than not having access to these files, in fact, the attacker is able to provide you with a seed of his own choosing!

                                                  This seemed like a strange thing to even mention. In what environment could an attacker possibly control what comes out of the file “/dev/urandom” but not have full control over everything else in the environment, like loading an LKM that keeps /dev/urandom as a device but spits out zeroes? I mean, I guess if you are chroot()ing to some known valid path but somehow have such screwed-up file permissions that an attacker was able to leave that bogus file beforehand, but if that happened I’d also assume the attacker could do much worse than screw up your randomness.

                                                  1. 5

                                                    “A good idea with bad usage” could describe the whole blog series I think. There’s a few good gotchas mentioned, but then it kind of jumps the shark and turns into “42 deadly facts you need to know about tap water.”

                                                    1. 2

                                                      The blog post talks about replacing “/dev/urandom” but a process could be tricked into opening a file that is not “/dev/urandom” but a regular file that is controlled by another unprivileged process.
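
                                                      A minimal sketch of the kind of sanity check that catches the crude version of that trick (using the unix package; this is not from the blog post): verify that the path refers to a character device rather than a regular file before trusting it.

                                                          import System.Posix.Files (getFileStatus, isCharacterDevice)

                                                          main :: IO ()
                                                          main = do
                                                            st <- getFileStatus "/dev/urandom"
                                                            putStrLn (if isCharacterDevice st
                                                                        then "/dev/urandom is a character device"
                                                                        else "regular file posing as /dev/urandom")

                                                      Of course, as the parent comment points out, an attacker who can plant such a file can usually do far worse, so this only rules out the simplest case.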

                                                      1. 1

                                                        I had that exact same thought: even if you used syscalls, if someone’s got control over your stuff, they could put a debugger or something over it. Though there might be environments (I think OpenVZ and its “para-virtualization” thing) where, with the right exploit at the same time, you might get compromised just enough to make some minor changes and allow a broken /dev/urandom to get into your system… if that makes sense?

                                                      1. 7

                                                      I finished a raycasting game engine demo last week. Raycasting is a 3D rendering technique that was used in pseudo-3D games like Wolfenstein 3D and Duke Nukem 3D in the 1990s. Of course, I ended up using OpenGL for the actual drawing, which is what the Wolfenstein 3D port for iPhone does as well.

                                                      This week I will modernize the rendering pipeline and switch to the common subset of OpenGL ES 3 and the OpenGL 4 core profile, using the programmable pipeline.