1. 3

    This looks great. Is this library being written to be part of some large application?

    1. 8

      I’ve written ZeroMQ and nanomsg once. This is part of my years-long project to make writing such applications tractable. And by that I mean being able to write them without falling into either callback hell or state machine hell.

      1. 2

        On that topic, what is the status of Nanomsg? Is libdill your main focus, or do you grow these projects in parallel? I’ve watched these projects without using them in practice, but I really like the approach of trying to find the right abstractions and patterns for expressive and efficient network programming.

        1. 1

          Banal question: libdill? Why not just use Go?

      1. 3

        I love this concept.

        One challenge is that one can always read all the pages. It would be great flavor if this shipped with like… 20 times as many pages. Things like random letters, accounting statements. So you can also experience the whole “sifting through a lot of stuff” thing, and perhaps landing on an interesting bit somewhere. Maybe even having this actually ship as several distinct sets of books, for example.

        EDIT: You might want to check out Her Story for some interesting ideas in there as well. A bit harder to execute upon on paper, but is a very interesting mechanism for non-linear storytelling.

        1. 2

          I like the idea of adding cruft to confuse the reader. Another option would be to hide different chapters at different physical locations, with the references being instructions for how to get to the next chapter. But then it ceases to be a book, of course.

          EDIT: I’ve added a comment to the README along the lines of your comment. I hope you don’t mind. One modification though: adding 20x more content isn’t feasible for a printed book. So, instead, the pulp should look like legitimate content, wasting the reader’s time with nonsensical puzzles etc.

          1. 1

            It would be great flavor if this shipped with like… 20 times as many pages.

            It’s an example of security through obscurity!

          1. 4

            If we accept the premise that there is a lot of value in the uniform morphology, Esperanto could be an option suited for ASCII (in the orthography with «x» instead of diacritics, of course). Then there is also Toki Pona. Many people prefer just to combine enough separate English words to get the point across.

            But I think there is another linguistic problem to consider: naming things in programming is hard because programming is an activity where minor semantic distinctions often matter. Maybe a uniform morphology would help by reducing verbosity and allowing more meaningful roots to fit into a name of a given length; but anything general enough to be universally useful would have to be vague enough to be subtly misleading in the specific cases anyway.

            The problem is not just to remember the words — «reading with a dictionary» is a skill older than programming. The problem is that too many details are needed to define even a single word.

            1. 2

              Maybe. But you can also look at it from the other side: if morphology is standardized, people, being pattern-loving animals, would try to use the constructs consistently, i.e. they would try to make the relationship between “parse” and “parser” similar to the relationship between “scan” and “scanner”. Eventually, the constructs could come to represent something like “design patterns”, something that you can assume to work in some specific way.

              1. 1

                Well, the design pattern called Factory definitely has its own «-Factory» suffix. And predicates often get an affix of one or two characters («is», «-p», «?»). And Hungarian notation was used.

                My fear is that humans are actually too good at pattern matching, so if all you have is «-er», you will get «parser» regardless of whether it tries to parse a prefix or requires a whole-string match.
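
                A contrived sketch of that trap, with hypothetical names: both of these deserve to be called a «parser», yet nothing in the «-er» suffix tells you which contract you get.

                    #include <stdio.h>
                    #include <stdlib.h>

                    /* accepts a numeric prefix and ignores whatever follows */
                    static int parse_int_prefix(const char *s, long *out) {
                        char *end;
                        *out = strtol(s, &end, 10);
                        return end != s;
                    }

                    /* succeeds only if the whole string is numeric */
                    static int parse_int_full(const char *s, long *out) {
                        char *end;
                        *out = strtol(s, &end, 10);
                        return end != s && *end == '\0';
                    }

                    int main(void) {
                        long v;
                        printf("prefix: %d\n", parse_int_prefix("42abc", &v));  /* 1: succeeds */
                        printf("full:   %d\n", parse_int_full("42abc", &v));    /* 0: fails */
                        return 0;
                    }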

                Do you hope that using a spoken language with a lot of morphological modifiers as a base will affect the culture to create enough new modifiers for smaller patterns? I mean enough to avoid combining any dangerous-to-combine notions. I find this plausible, but not guaranteed; I guess naming things in Esperanto could be a way to try.

                1. 2

                  I don’t really know, but it might be worth a try. At least when a programmer talks to another programmer in person, they use natural language to get the idea across. This is often (at least in my experience) superior to just reading the code. So, maybe, if we were able to take a bit of this person-to-person communication and convey it via the code, it would help to some extent.

                  1. 1

                    Well, in person-to-person communication there is not only different naming (I would be surprised if a structured morphology arose there that isn’t already used in variable naming), there are also different protocols for managing the level of detail. You can get an overview that is not just «N levels deep in the call tree». Sometimes abstractions are also intentionally lossy, which you are usually not allowed to do in code.

                    Some things depend on feedback, and there is some research into allowing zooming in and out across abstraction levels, but improving the state of the art in zooming out of programming abstractions would definitely be valuable.

                    1. 1

                      Yep. That’s why I said “a bit” :)

                      1. 1

                        My point was also that we currently have more tools for the lexical part than for the grammatical part. Is morphology still where the best return on effort is? (I honestly do not know.)

                        1. 1

                          I don’t think this would help with the tooling. However, it would decrease the amount of lexical baggage which in turn could help with, say, keeping the learning curve flat, or, maybe, returning to old code years later, remembering just the core concepts and being able to get up to speed immediately.

                          1. 1

                            I meant tools in a wider sense including conceptual tools.

                            Intuitively, a flat learning curve is not something you can achieve in an experiment (the morphology has to be learnt first). So this part is hard to know (getting data from Esperanto taking off only for code identifiers and comments sounds a bit optimistic).

                            It would be of course interesting if there were some subset that you could try with moderate effort and then tell a success story.

            1. 9

              The next step: being code-negative in other people’s software repositories via sharing knowledge alone :)

              1. 18

                The bullet point on my resume that gets the most comments from interviewers says:

                Reduced codebase by 110 KLOC (43%), largely by rewriting the Java subsystem in Python. Increased reliability from 93% to over 99.5%.

                1. 3

                  That’s the very crux of the problem. How would you share knowledge without writing code? Well, there’s still the option to write academic papers, but given the rift between compsci academia and practitioners of programming, I would expect that not to be very efficient.

                  1. 2

                    Well you can talk to people.

                    1. 5

                      Think about it in memetic terms.

                      The idea is a meme. The code is its reproductive organ. The code is ‘useful’ so that it can lure its prey (a living human brain). Once the code is used the idea is repeatedly injected into the brain.

                      Compare that to talking to people, where the idea is basically left floating in space, to be voluntarily accepted or not.

                      The former approach is much more efficient.

                      1. 1

                        Ideas spread fine on their own. For example I’m about to convince you of this without a single line of code. There’s no need to push things into formal language when they make sense in nonformal language. I don’t need to tell you the steps of how to build a boat for you to realize that some method of traveling over water is good. In fact I’d argue that if I told everyone the exact steps to build a boat most would miss the point about what the boat is for. They’d get caught up in the details and fail to capture the bigger picture.

                    2. 1

                      English descriptions with formal specifications and/or pseudocode accompanying them, in a formalism that’s simple. That was the standard for high-assurance security. It worked consistently so long as the formalism could handle what was being described. The caveat is that the teams needed at least one specialist to train them on the formalism and help them with it. If we’re talking CompSci, that could just become another 101 course where people are exposed to a few easy ones.

                  1. 7

                    There was an interesting argument by David Graeber that “doing useful work” is considered a privilege nowadays: “You are a nurse? And you complain about not being paid enough? Shut up, you are at least doing useful work.”

                    In terms of software development that would be: “If you want to do useful work, do shut up and be glad you can do it for free as an open source project.”

                    1. 22

                      It’s interesting to ponder why Google would invite someone with a completely opposite worldview. Not just different, but a perspective that openly calls for the end of all the Googles out there. I have to watch it.

                      Edit: Gold nugget in the final seconds:

                      Interviewer: Do you have anything you’d like to ask us [Googlers, marketers, software engineers]?

                      Chomsky: Why not do some of the serious things?

                      1. 7

                        Remember that Google might not equal the Googler, or team of them, that invited this person. This is a huge company with a lot of different kinds of people. I imagine they bring in many different kinds of speakers to suit different tastes. It’s also not going to be threatened by someone disagreeing with it, given that the audience can just show the person out the door and not invite them again. One or more of those inviting him probably liked some stuff he said in a movie or presentation. Then they thought some people might enjoy hearing him speak. The end.

                        That’s my default assumption anyway.

                        1. 3

                          I agree. In fact, it would’ve attracted more notoriety to turn down the proposal for his talk.

                          1. 2

                            Working at Google (but having no idea of the background of this talk) I would very much expect it to have happened like that.

                          2. 7

                            It’s interesting to ponder why Google would invite someone with a completely opposite worldview. Not just different, but a perspective that openly calls for the end of all the Googles out there. I have to watch it.

                            That’s a good way to signal you’re secure in your worldview: freely invite people to challenge it.

                            1. 13

                              While I agree with you in the general case, I think Google is doing this to placate its employees. What better way to dispel animosity than to accept the other side as one of your own?

                              1. 9

                                (kind of tangential, but…) I’ve always found it fascinating how social movements often collapse when legitimized.

                                It’s like when a manager gives lip service to the concerns of an unhappy employee, making them feel like it’s all going to be better soon, but effort is not spent to actually change a situation.

                                When you walk around Google campuses, there is often material on the walls in common areas that talks about various social causes that Google is working to improve. It feels great to think that your organization is part of the solution.

                                The gap between superficial and structural control structures is interesting to pay attention to when seeking changes to a system. You can really dispel the risk of an insurrection by letting a Black Panther get elected, bringing in an external investigator to fix your sexual harassment problems, hosting a Noam Chomsky talk, etc… without risking any structural change.

                              2. 2

                                I think I know what you’re trying to say, but if your opinion is “Google should stop existing” and then Google invites you to give a talk, what’s the point here? They’re not going to be persuaded into oblivion, so… why? As a pretense of open-mindedness? Or maybe it wants to be associated with the intellectual prestige of Chomsky? What’s the real reason, I wonder.

                                1. 3

                                  An institution like Google might not even consider itself to be at odds with a progressive, anti-capitalist view like Chomsky’s - it’s a different sphere with a different perception of reality. “Don’t be evil” is not just an empty phrase; these people really believe it. Moreover, the questions posed by the interviewer were purely instrumentalist in nature: science is a tool for them, a means to an end. It’s an attempt to learn from a famous scientist, without considering the moral issues which are much more important to someone like Chomsky.

                                  1. 1

                                    “Don’t be evil” is not just an empty phrase

                                    They dropped that a while back, if you’re talking about Google. The company has been practicing plenty of evil in the surveillance sense, too. Hell, just the revenues versus the spending on quick security patches for Android by itself shows how evil they’ll be to their users to squeeze extra profit out. ;)

                                    1. 1

                                      I totally agree. What I meant was that the people behind the institution called Google most certainly have a different perception of evil.

                              3. 7

                                A friend’s workplace had protestors picket outside its door. The boss brought out coffee and donuts, warmly thanked them all for coming, and went back in. Within a half-hour, fed on the company’s dime and with no target for anger, they wandered off.

                                The Google employees watched a rousing argument from a famous voice. Really what they watched is their employer act totally unworried while a thousand other employees sat still. Next comes lunch or that mid-afternoon status meeting with the team in Australia. There’s no social movement started here. If Chomsky is lucky he planted a seed, but it’s pretty easy to forget someone ineffective telling you that you’re wasting your life and should do something uncomfortable.

                              1. 2

                                True. Maybe it should be called Reputation Engineering, part 2 or similar. Anyway, I eventually want to join the series into a single article so the titles will disappear.

                                1. 2

                                  Also, note that if your title has “law” in it, it makes more sense to add the law tag. :)

                                  1. 2

                                    Thanks, done.

                                1. 6

                                  I like the idea of writing postmortems for abandoned projects. There could even be a standard POSTMORTEM file, so that people would know where to find the cause of death.

                                  1. 3

                                    “But that’s where the complexity kicks in: Oh! MSVC only works on Windows! Test X requires 8G of memory and box Y only has 4G available. Shared libraries have the .so extension on Linux, the .dll extension on Windows and the .dylib extension on OSX. I need to switch on valgrind for test X on box Y temporarily to debug a problem. Support for SPARC in our preferred version of LLVM doesn’t quite work yet. We need to use an older version of LLVM on SPARC platforms. And so on and so on.”

                                    This is exactly why logic programming was used in early expert systems for configuring things. It’s easy to express such constraints with declarative programming, with the programmer feeding in environmental data and the correct configuration coming out the other end. Similarly, one project in academia rewrote “make” in Prolog for much cleaner operation. The biggest commercial LISPs always embed Prolog for such things, so you write most of your app in your preferred language and then drop into declarative programming where it makes sense.

                                    It’s the best tool and a sensible default for most such jobs. That said, Cartesian is interesting work. The examples look clean, as with other declarative programming. Plus, it’s unusual for me to see a Cartesian solution to problems like this. I wonder what the inspiration was to try that.

                                    1. 3

                                      Good observation. I would be a fan of doing this kind of work in Prolog myself, but let’s be frank: Prolog is a hard sell to real-world developers.

                                      As for the motivation, it’s twofold.

                                      Firstly, I have to deal with complex configurations on a daily basis and most of what’s out there is pretty ugly. The approach in Cartesian tries to solve the ugliness without resorting to exotic paradigms. It’s (I think) something that your average JavaScript developer would be comfortable with, yet powerful enough to describe really complex systems.

                                      Secondly, it’s a methodological experiment. I think it’s fair to say that inheritance has mostly failed us when we tried to model the complexity and irregularity of the real world. Therefore, this is a sketch of an alternative paradigm, one that uses cartesian products instead of inheritance. (But, as mentioned in the README, it can even be combined with classic inheritance.)
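
                                      To give a rough idea of what that means in practice, here is a minimal sketch in plain C (invented names, not Cartesian’s actual syntax): generate the full cross product of the configuration axes with sensible defaults, then patch the few irregular cells, instead of deriving one configuration from another.

                                          #include <stdio.h>
                                          #include <string.h>

                                          struct cfg { const char *os, *cc, *libext; int valgrind; };

                                          int main(void) {
                                              const char *oses[] = { "linux", "windows", "osx" };
                                              const char *ccs[]  = { "gcc", "clang", "msvc" };
                                              struct cfg m[9];
                                              int n = 0;

                                              /* regular part: every OS x compiler combination gets defaults */
                                              for (int i = 0; i < 3; i++)
                                                  for (int j = 0; j < 3; j++) {
                                                      struct cfg c = { oses[i], ccs[j], ".so", 0 };
                                                      if (!strcmp(c.os, "windows")) c.libext = ".dll";
                                                      if (!strcmp(c.os, "osx"))     c.libext = ".dylib";
                                                      m[n++] = c;
                                                  }

                                              /* irregular part: point fixes for specific cells */
                                              for (int k = 0; k < n; k++) {
                                                  if (!strcmp(m[k].cc, "msvc") && strcmp(m[k].os, "windows"))
                                                      m[k].os = NULL;            /* MSVC only exists on Windows */
                                                  else if (!strcmp(m[k].os, "linux") && !strcmp(m[k].cc, "clang"))
                                                      m[k].valgrind = 1;         /* temporarily debug this one cell */
                                              }

                                              for (int k = 0; k < n; k++)
                                                  if (m[k].os)
                                                      printf("%-8s %-6s %-7s valgrind=%d\n",
                                                             m[k].os, m[k].cc, m[k].libext, m[k].valgrind);
                                              return 0;
                                          }

                                      The point is that each irregularity lives next to the cell it affects, rather than being encoded in where a class sits in an inheritance hierarchy.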

                                      1. 3

                                        I remember seeing libmill come out and being quite intrigued. It looks like this is a successor to libmill that aims to be more C-like rather than somewhat heavy-handedly porting Go conventions onto C.

                                        Super exciting, and I sincerely hope I have some usecase to try it some day, but even after finding libmill, I have yet to really need concurrency and parallelism in my C code; usually, I just strive for highly-optimized single-threaded execution.

                                        Either way, exciting work! You wouldn’t happen to have benchmarks (I know they’re mostly useless, but they can still be enlightening on occasion), would you?

                                        1. 2

                                          The implementation is very similar to libmill, so the benchmarks would be likely similar: 20 million coroutines and 50 million context switches per second.

                                        1. 2

                                          You asked for cases in which free licenses have helped: there is OpenWrt, which was able to exist thanks to GPL enforcement, and currently there is the ongoing GPL lawsuit against VMware. It remains to be seen how that will play out.

                                          More mundanely, there is the giant pile of open and free software that nearly every company now relies upon, big and small. These companies will not touch code unless it comes with a cleanly written legal promise that it’s ok to touch it. These benefits also extend to ordinary people, not just companies. So, we are already all benefiting from open and free licenses. We know the software can’t be taken away from us, because a license says it won’t be. It’s just a benefit that is so taken for granted that we even forget it’s there.

                                          1. 3

                                            Note that I’ve asked how the law as a whole has helped, not how software licenses have helped.

                                            So, the law takes software, which is freely shareable by its nature (it’s not a scarce resource; you can’t even tell whether someone has made yet one more copy), and bans free copying. Then RMS comes and devises a hack to use the letter of the law to fight the spirit of the law. As a consequence, we get a little bit of freedom.

                                            It would require a pretty severe version of Stockholm syndrome to interpret that as law “helping” the FOSS community.

                                            1. 3

                                              Licenses are promises, deterrents. We don’t need to go to judges and lawyers every time we want to reap the benefits of a license, just like we don’t have to count all of the people who tried to jump over a fence and were unable to do so. The law has helped in giving teeth, whether copyleft or not. The licenses don’t have to bite for their teeth to be effective.

                                              If there was no copyright or copyleft law, all it takes is a few bad actors or bullies to exploit the goodwill of the majority. These bullies could DRM or obfuscate their software in order to abuse the rest. Software is not always naturally modifiable or redistributable. This abuse already happens, but we can fight back in the cases where licenses like GPLv3 prevent it. Anarchies always fail when the group grows too large to avoid the presence of harmful agents. GitHub’s unlicensed software is not an anarchy, and few would base a substantial endeavour on unlicensed software.

                                              I am more optimistic about the law. Understand the law, where it comes from, and what benefits it may bring. Proper laws can help us all.

                                              1. 1

                                                If there was no copyright or copyleft law, all it takes is a few bad actors or bullies to exploit the goodwill of the majority.

                                                Would you mind expanding on this point? I’m not sure I agree as written.

                                                1. 1

                                                  If there was no copyright or copyleft law, all it takes is a few bad actors or bullies to exploit the goodwill of the majority. These bullies could DRM or obfuscate their software in order to abuse the rest.

                                                  That doesn’t make any sense. If there were no copyright/IP law, certainly people could obfuscate or DRM their software, but they’re entitled to do so! Without copyright/IP law, folks would also be entitled to deobfuscate or crack the DRM and distribute the fruits of their labor without consequence. Certainly, folks are also entitled to not use software that is obfuscated or DRM’d.

                                                  This abuse already happens, but we can fight back in the cases where licenses like GPLv3 prevent this.

                                                  To clarify, by “fight back,” you mean, “use the threat of physical force to compel others to not obfuscate or DRM their software”? I’m sure actual force is rarely necessary, but the threat of it is effectively behind the enforcement of all laws.

                                                  Proper laws can help us all.

                                                  This is an utterly meaningless statement. Which laws are proper, exactly?

                                                  I am all for calling out people who do bad things like obfuscate or DRM their software and using one of many available tactics to try and change their behavior, but as soon as you start trying to use coercion to enforce your particular sensibilities on how software should be distributed, that crosses the line for me personally.

                                            1. 2

                                              I wonder how many of these repositories aren’t software (e.g. dotfiles or other config, websites). You could of course put a license on these (though tracking provenance for snippets in your vimrc might be a pain), but I can understand why most wouldn’t bother.

                                              1. 2

                                                Yes, agreed. My point was that not putting a license on a project, however small and trivial, is contributing to making no-licensing (and thus ignoring the legal aspects) a cultural norm. Small projects today, big projects tomorrow…

                                              1. 1

                                                Am I missing something or is he calling continuations by a different name?

                                                1. 3

                                                  I think the difference between continuation and green thread is that the former is accessible from the language. The latter is implicit.

                                                1. 6

                                                  I didn’t really understand the point of this post. If you ignore all the details, go can be as fast as if? What does that even mean, though? It’s not like go is a single piece of work, it’s creating something that will run until it’s done. I’m not sure calling go “control flow” entirely makes sense either. It’s a non-deterministic implicit context switch. I’ve never heard such a thing called control flow but maybe I’m being narrow minded.

                                                  1. 2

                                                    Green threads serve the same purpose as classic control statements.

                                                    Imagine an object that reads data from a socket on one side and spits out messages on the other side.

                                                    You can implement it as a state machine, using a bunch of ifs, whiles and so on.

                                                    Or you can do the same thing using a separate green thread.

                                                    The two are, from a computational point of view, the same thing. However, the latter is much more elegant and readable. Still, people often prefer the former for performance reasons.
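
                                                    To make the contrast concrete, here is a minimal sketch in plain C (the byte source is a stubbed buffer rather than a real socket, and all the names are made up for illustration):

                                                        #include <stdio.h>

                                                        /* stubbed "socket": two length-prefixed messages, one byte at a time */
                                                        static const unsigned char stream[] = {5,'h','e','l','l','o',3,'b','y','e'};
                                                        static size_t pos;
                                                        static int read_byte(void) { return pos < sizeof(stream) ? (int)stream[pos++] : -1; }

                                                        /* style 1: explicit state machine, driven byte by byte */
                                                        enum state { READ_LEN, READ_BODY };
                                                        struct decoder { enum state st; char buf[256]; size_t need, got; };

                                                        static void feed(struct decoder *d, unsigned char c) {
                                                            if (d->st == READ_LEN) {
                                                                d->need = c; d->got = 0; d->st = READ_BODY;
                                                            } else {
                                                                d->buf[d->got++] = (char)c;
                                                                if (d->got == d->need) {
                                                                    printf("state machine: %.*s\n", (int)d->got, d->buf);
                                                                    d->st = READ_LEN;
                                                                }
                                                            }
                                                        }

                                                        /* style 2: sequential, blocking-style loop; run inside a green thread
                                                           it works concurrently with the rest of the program, no explicit state */
                                                        static void reader_loop(void) {
                                                            for (;;) {
                                                                int len = read_byte();
                                                                if (len < 0) return;
                                                                char buf[256];
                                                                for (int i = 0; i < len; i++) buf[i] = (char)read_byte();
                                                                printf("green thread:  %.*s\n", len, buf);
                                                            }
                                                        }

                                                        int main(void) {
                                                            struct decoder d = {READ_LEN, {0}, 0, 0};
                                                            int c;
                                                            while ((c = read_byte()) != -1) feed(&d, (unsigned char)c);
                                                            pos = 0;   /* rewind the stub and run the sequential version */
                                                            reader_loop();
                                                            return 0;
                                                        }

                                                    Both print the same messages; the difference is only in who keeps the state between reads: the programmer (the decoder struct) or the runtime (the green thread’s stack).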

                                                    1. 4

                                                      By this definition, what isn’t control flow? An OS thread fulfills the same requirement. What makes it unclear to me that preemptive threads are control flow is that their context switches are implicit, whereas control flow generally involves explicit control over the flow of the language. The Wikipedia page on it, at least, suggests the same.

                                                      1. 1

                                                        Switching between OS threads is much more expensive than a conditional jump.

                                                        Switching between green threads should be O(1).

                                                        1. 2

                                                          Switching between green threads may have less overhead than for OS threads, depending on how much machine state is preserved, but I don’t understand the O(1) part of your assertion. O(1) in what?

                                                          1. 2

                                                            OS threads should be O(1) for context switches as well, but that is just the asymptotics; it doesn’t say anything about how long the operation takes.

                                                            Also, the actual cost doesn’t have anything to do with my point.

                                                        2. 2

                                                          My understanding, and I could be off the mark because I’m only coming at this with a small amount of book knowledge, is that the performance problems of context switches come not so much from stack allocation but from cache misses.

                                                          When it comes to vanilla branching, there’s usually a more common alternative, so you also get speculative execution. While coroutines can unroll into a predictable pattern, it seems more common that they’re used to model unpredictable environments in a sane way (i.e. select statements).

                                                          To me, it seems that one of the differences is that performance is measured in a different way. Busy-waiting isn’t a problem if your device will only do one task at a time (i.e. no operating system, no competing processes) and you don’t care about power consumption, but only want the fastest results possible. However, it’s a bad practice to busy-wait in a program that’s going to be competing for CPU cycles with other processes.

                                                      1. 14

                                                        One thing that’s rarely mentioned in this context is capacity planning.

                                                        With a single-threaded server you have a pretty good idea about its runtime behaviour: it scales with increasing traffic until it hits 100% utilisation of a single core. Want to fully utilise a 24-core machine? Run 24 instances of it.

                                                        With a multi-threaded server, capacity planning becomes black magic. One instance may be able to utilise all 24 cores. Or it may take advantage of at most 2 cores. Or 4.75 cores. Even worse, the scaling is rarely linear. In short, to understand how the server scales you have to be familiar with the internal workings of the server. And of course, any change to the server’s threading model can throw your capacity plan into disarray.

                                                        1. 4

                                                          If you wrote your threaded server in the same way as your event based server, you wouldn’t have capacity planning problems. All those scaling issues come from thread synchronization, which you must necessarily eliminate in event-based servers for them to even work.

                                                          1. 1

                                                            If you are doing capacity planning you are unlikely to be the same person that wrote the code. Even worse, you are probably planning for multiple different servers, each written by a different developer, having different scaling constraints. Now what?

                                                            1. 5

                                                              I think you’ve touched on why the programmer needs to be involved in the whole system: The hardware, the software, the network infrastructure, and even the out-of-hours response process.

                                                              Where the programmer is removed from the reality where their software is running on hardware, and that hardware is made out of matter and powered by energy, then you’re going to continue to find efficiency outside the bailiwick of programming.

                                                              1. 4

                                                                The most stable systems I’ve worked with were those where the programmers were also the ones responsible for the production uptime (capacity planning, monitoring, on-call) of their systems. This works well for companies running services, but is hard to do in a product company with tons of customers who run the system on their own. In product companies the buck gets passed around a lot more and a dedicated QA team may be necessary to find problems, since engineers will often fail to find problems that arise in the fault space not accounted for in their local dev environment (they may write code that passes tons of unit tests, but as soon as ZK goes down their stuff ungracefully amplifies unavailability even though maybe using cached configuration would be totally reasonable, etc…)

                                                              2. 3

                                                                Now you capacity plan by increasing simulated load against the service until a component fails, which was the first step in your plan no matter how the code was written anyway. What was your other plan? Guess?

                                                                1. 1

                                                                  On what kind of machine would you run the load test? How many cores? Would it behave differently on a different machine? Hard to tell. With a single threaded server it’s pretty straightforward.

                                                                  1. 2

                                                                    Wait what? Are you saying you capacity plan by running your single threaded server on the dev box and hoping for the best?

                                                          1. 2

                                                            While the book is well known for its discussion about scaling of the software teams, there are other essays that are rarely mentioned, and much more contentious. For example, what about the proposition that there should be one programmer with a wide variety of support staff?

                                                            1. 1

                                                              What exactly about “The Surgical Team” (the essay I believe you’re referring to) is contentious, rather than merely outdated? I think the computing environment that an individual programmer works within today is very different from what was contemporary when the essay was penned (circa early 1970s?).

                                                              It’s pretty easy for me to produce and edit a README.md file now; in the past, editing and type-setting documentation was a more time-consuming task. The essay also mentions a lot of paper-based record keeping about program development and performance. I think it’s safe to say that there are now better, more powerful tools for those tasks as well – in large part because of how powerful computer systems and software have become today.

                                                              1. 1

                                                              Like the diagram on page 36?

                                                              1. 2

                                                                The real problem here is that you need 10 or so years to find out whether the tool works for large-scale and long-term projects. Until then, even the author of the tool himself can at best make an informed guess.

                                                                1. 1

                                                                  Compare open-source development to corporate development. What’s the cost of delay in the latter case?

                                                                  1. 1

                                                                    Can you clarify your question? I don’t understand “what’s the CoD in [corp development]”? It Depends?

                                                                  1. 27

                                                                  Overusing Dijkstra-isms “Considered Harmful”.

                                                                    1. 11

                                                                      Seriously. This could be a fantastic article but I just can’t be bothered to open anything with ‘considered harmful’ in the title anymore.

                                                                      1. 9

                                                                        It isn’t, so your screen worked gloriously.

                                                                        1. 5

                                                                        There is already an essay about this. It hasn’t dissuaded people from using the catchy phrasal template in their titles: http://meyerweb.com/eric/comment/chech.html

                                                                          1. 3

                                                                            The considered harmful part was partially a joke - obviously I don’t think every OSS project is potentially harmful. If you give it a chance, I’d love your opinion.

                                                                            1. 1

                                                                              I’ve used it lately, although in an article that is meant to be a direct follow-up to Dijkstra’s one.

                                                                              1. 2

                                                                                “‘Considered Harmful’ is Considered Harmful”

                                                                                I can see the medium.com post already.

                                                                                1. 3

                                                                                  It’s been written already, long before Medium existed. That’s the link others have been offering. :)

                                                                                  “Considered Harmful” Essays Considered Harmful