1. 6

    Morning all!

    I’m keeping busy, it seems.

    I’ve got more changes in progress for the Z3 theorem prover to clean up some more platform support issues. I’m also working on expanding my Rust bindings.

    In Zircon / Fuchsia land, I got a lot of my backlog of changes cleared out and then went and submitted a bunch of new changes. One day, Zircon will be the OS kernel with the best spelling.

    My client work is progressing. The devil is in the details and I’m trying to get everything nailed down in the next day or two.

    I’m still reading up and learning about things going on in the materials world with cellulose and lignin. This led me to the interesting world of using chitin from shrimp and crab shells, producing chitosan, and then using that to make biodegradable plastics. So fascinating! A lot of this seems like it needs further work to bring it into affordable production processes though. I’m trying to find some local chemical or materials engineers to meet up with and learn more about this.

    I’m also reading about some ideas from the 1990s on what might be better than a REPL. Also pretty fascinating.

    And this coming weekend, we undergo 30 or so hours of door-to-door travel to head to the US for a couple of weeks.

    1.  

      I’m also reading about some ideas from the 1990s on what might be better than a REPL. Also pretty fascinating.

      Sounds very interesting, can you share some links and/or thoughts?

      1.  

        I’m also reading about some ideas from the 1990s on what might be better than a REPL. Also pretty fascinating.

        I love this stuff! It’s really interesting to see the things that were tried, and that…well, not that failed, but that didn’t become popular. I think by the 1990s we were pretty doubled-down on the edit-compile-debug cycle, and sufficiently burned by CASE tools to be sceptical of anything more complex than a text editor with a “build” button.

        Some stuff never left academia, some didn’t get traction, and some was genuinely not a good idea :). Trying to work out which is which is fascinating.

      1.  

        Business development is my biggest problem, and my goal to advance this week. December is expensive, and the Labrary doesn’t have any clients yet. If you have time, I’d appreciate it if you could take a look at my website, and tell me whether it’s clear what my company offers after a read, and whether there are circumstances under which you would take up that offering or recommend it to a peer or manager. Am I even letting people know they can give me money?

        I’m checking out the local meet-ups, talking to the business development networks, startup incubators, and university departments. When I get bored of that, I finish the day with some research and pet projects to apply learned skills. At the moment that’s about learning Swift, but also after the recent thread on whether Fortran is still a thing, I’ve realised that I have some pretty unique experience relevant to scientific computing so I’m dusting off CUDA, OpenCL, and numerical methods.

        1.  

          I’m not quite clear how a lab library and a consulting detective go together. Also, you’re diverting people to DCI Barnaby before you get to your pitch, which seems… less than optimal.

          I’d go for one of these metaphors and run with it. If you can get to the point that people can hire you for improving their processes while they’re still on the front page, I’d consider that a plus.

          1.  

            Thanks, I appreciate that! I agree that the two metaphors don’t work together or support each other.

        1. 6

          I don’t disagree with these things, but I think that engineers are frequently not the cause but the victims of these syndromes.

          Engineering teams frequently don’t have access to any information about customer needs or company goals. Instead product managers feed them little context-less tickets. In that environment, all engineering can measure is how quickly they burn the tickets.

          I should add that in these organizations, product managers have two effects:

          1. They prevent engineers from becoming too valuable by keeping them as nothing other than ticket processors (thus keeping them cheap and interchangeable, but also leaving them with no incentive to do anything other than maximize the number of technologies on their resume).

          2. They relieve upper management of the need to take a real view on what the product is or should be - instead they outsource it to a bunch of also fairly interchangeable functionaries in the hope that some of these ideas stick.

          Engineering, product, or the executive team can call out this dysfunction, but ultimately all three need to align around a new model.

          I’m increasingly of the opinion that the only way for this to be solved is to have product managers be fully integrated as members of self-organizing agile teams. And like all self-organizing teams, it should be expected that exact work responsibilities will shift around as needed to accomplish the high-level business goal. Of course this means that the company has to pay for everyone to train in the necessary skills rather than shoehorning in on-the-job practice for the sake of their own skills, and top management has to stop treating engineering and product like a sausage factory. Instead they have to work with them to define goals, and track the point at which precise engineering scope needs to change. Which is what agile is supposed to be about.

          1.  

            I’m increasingly of the opinion that the only way for this to be solved is to have product managers be fully integrated as members of self-organizing agile teams.

            I agree, and all of the members of the team have to want that for it to succeed. If the engineers have a cognitive split between “us” and “the business”, they will be trying in their self-organisation to avoid that integration.

            1.  

              If the engineers have a cognitive split between “us” and “the business”, they will be trying in their self-organisation to avoid that integration.

              This is why I advocate eliminating engineering teams. An agile team should have all the disciplines needed to deliver the business goal, including product managers.

              How in your experience does such a cognitive split arise on a multidisciplinary team? Are different disciplines given different goals?

              1.  

                Yes, they have different goals. Despite “the team” being multidisciplinary, the reporting lines are still functional. The product owners are set objectives by a Head of Product, the developers by a Head of Engineering, and so on.

            2.  

              Engineering teams frequently don’t have access to any information about customer needs or company goals. Instead product managers feed them little context-less tickets. In that environment, all engineering can measure is how quickly they burn the tickets.

              I had the opposite at my previous $JOB, where discussing business features, customer needs and company goals became a huge bike-shedding conversation. I actually really like it when I can chat over a coffee with the business owner/product manager/whatever about features and goals until I get the vision, but then I also have to trust that what they decide is better for the company.

              To me, everybody has their own skills for making the project move forward, and deciding what would benefit customers the most, according to numbers and personas, is not part of mine. Conversely, optimizing for performance and reliability, or for speed of development and quality of product, is where I can help: driving the value up and the costs down, which is very complementary to the PMs’ goals.

              1.  

                discussing business features, customer needs and company goals became a huge bike-shedding conversation.

                Then you didn’t have any information on these topics. If you had gone out and gotten the information, then there wouldn’t be bikeshedding.

                This is why I don’t advocate eliminating product managers - I advocate allocating them to agile, multidisciplinary teams.

            1.  

              I think that the symptoms could use more discussion about why they are symptoms.

              Also: the “nearly done == done” issue seems like a case where velocity (in the JIRA sense) will look lower, so it doesn’t fit the post’s target of teams that look good in JIRA but fail in the real world. It’s a real issue, but one that JIRA will highlight.

              1.  

                Jira does highlight it, so the team will make sure to tell stakeholders that Jira is a traitor who is hiding their true velocity. I’ve even worked with a team that “booked” fractional points on stories that were started but incomplete.

                The dysfunction here is the focus on the stories being “nearly done”, and representing greater success for the team. They could have asked themselves and their stakeholders whether they overcommitted, exposed some unforeseen problem with defining or implementing the story, or something else led to them not being done. But they were not “not done”, they were “nearly done”, which is great.

              1. 4

                The 50th anniversary of “the mother of all demos” is tomorrow, Sunday December 9th.

                1. 1

                  It’s a pity it isn’t broadcast :_(

                  1. 2

                    Get yourself a projector, the side of a building, and a lot of popcorn, and fire it up!

                    1. 2

                      Oh no, no, I was referring to the 50th anniversary on the 9th ;-)

                      1. 1

                        Again, I’m talking about today’s homage in Silicon Valley:

                        https://thedemoat50.org/symposium/

                  1. 16

                    I agree strongly with the message that the real “problem” with the Mac experience is that app makers no longer feel compelled to get all of the details correct. Back when I started using OS X (10.1), you could even tell which apps used the Cocoa or Carbon APIs based on small details, like how the text input UI reacted when you entered an accented character. The fact that you had to get to those details to tell meant everything else was consistent.

                    There were a few early UI inconsistencies (the brushed metal look was supposed to be for apps that modelled real-life hardware, like DVD Player, QuickTime Player and iTunes, but Finder was brushed metal), but even through the Delicious Generation of apps people were making things that were Mac apps, even if they had a distinctive look.

                    But now very few developers, including Apple’s internal developers, make things that are Mac apps. Is it a problem? I’m not sure. It feels wrong to me, but I use a laptop every day and many folks don’t. In five years, will “the desktop” be an important platform, or will Macs be the dev kits for iPads?

                    1. 2

                      The worst thing to happen to the Mac is Steve Jobs dying (okay, it’s also the worst thing to happen to Apple). He was the only one with enough clout and confidence to say “we aren’t shipping this until it’s fixed.”

                      1. 18

                        Unfortunately if you count unfinished things that they shipped on his watch, you find a different story: the cracking-case G4 cube, Mac OS X 10.0, the iWork rewrite that removed features from existing documents, Apple Maps, iAd (we’re not an advertising company), the iPhone 4, and MobileMe are the ones I can think of now.

                        I’m not arguing that quality is better now (I think it isn’t), but I will argue that Steve Jobs is not the patron saint of problem-free technology products. Apple has, like all other technology companies, shipped things that they thought (or maybe hoped) were good enough, under all of their CEOs.

                        1. 1

                          Didn’t they charge for the updates to fix their broken software on top of it while Windows did them for free? I could be misremembering but I think Mac folks told me that.

                          1. 4

                            All I can recall is charging for a patch to enable 802.11n wireless (as opposed to draft), but the explanation was that Sarbanes-Oxley prohibited delivering a “new” product because they had already booked the revenue for it; the law was later clarified, and software updates are OK now.

                        2. 2

                          It’s not so much about Steve Jobs as about the people who were behind the products and the development process, who of course were brought and held together by Steve Jobs. I would say that the departure of Bob Mansfield was one of the major impacts on Apple’s final products.

                        1. 10

                          We compete with Google not because it’s a good business opportunity.

                          Bear in mind that a lot of Mozilla’s Firefox revenue comes from Google. Mozilla competes with Google because Google lets them. I would speculate that’s to keep the semblance of an open “The Web”, the same way Microsoft paid to prop up Apple in the 1990s.

                          1. 3

                            There can be other revenue sources, Mozilla has had other partners in the previous years. If Google or Mozilla decides that that agreement is no longer interesting, there are other partners to work with. For example, some years ago it was Yahoo! who was paying.

                            Personally, I’d like to see Mozilla moving towards a more distributed way of funding, with people voluntarily contributing money to keep it afloat, but I don’t think that with the current mindset of web users this is viable.

                            1. 4

                              Personally, I’d like to see Mozilla moving towards a more distributed way of funding, with people voluntarily contributing money

                              That’s more or less the 2019 plan. If you want to support us, there will be a way to “subscribe”. I hope more people realize how important this is, but I also understand your skepticism.

                              1. 1

                                I already support them with yearly donations, and I am also a Mozilla TechSpeaker and Rep. ;-) doing what I can for the web ecosystem.

                                1. 1

                                  Could you go into that some more (if you’re able)?

                                  1. 2
                                    1. if you’re interested in purchasing a VPN, you can start buying it through Mozilla and send a few dollars in the right direction. See https://blog.mozilla.org/futurereleases/2018/10/22/testing-new-ways-to-keep-you-safe-online/
                                    2. follow our blogs or get a Firefox account and I’m sure you’ll get mail about this :)
                              2. 1

                                With all the criticism that I and others have of Mozilla, I’ve found that their strategy around funding has been very clever in recent years. They have played their position as a neutral player very well. Google funds them to keep others from funding them, not as a smoke screen.

                              1. 3

                                This might be a good business decision for Microsoft but it is a disastrous advancement for the Web.

                                I appreciate that there are many people who advocate for open standards with multiple implementations (I count myself among them). However there hasn’t been a “The Web” independent of the businesses that build its bits in a very long time. Far from the end of “The Web”, this is the last attendee at its wake turning the lights out.

                                  1. 3

                                    Thank you, Graham! These look very good. BTW I just got your book yesterday, looking forward to reading it :-)

                                    1. 2

                                      Thank you, I hope you enjoy it! Please do send me a message with your questions and feedback :)

                                  1. 6

                                    Object-Oriented Programming: An Evolutionary Approach

                                    I actually went on one of my “deep dive into old computer science things” and got obsessed with pre-NeXT Objective-C. I tracked down a copy of the first edition of this book (the second edition is much closer to the more modern Objective-C that we now know and love).

                                    I even reached out to Tom Love (co-creator of Objective-C along with Brad Cox) and he was kind enough to recommend Object Lessons as an additional suggestion and dig through his garage for some old documents.

                                    Either way, it’s an excellent book.

                                    Object-Oriented Software Construction

                                    Meyer’s approach to software engineering is…I’m not even sure of the right word. “Perfectionist” might be close, but I don’t want the negative connotation to come through on that. Anyone who wants to study OOP could do with reading his work.

                                    1. 2

                                      and got obsessed with pre-NeXT Objective-C

                                      Do you have any resources for that in particular? I’ve been curious about Objective-C before NeXT, but never ended up diving into it and its history.

                                      1. 5

                                        The above mentioned Object-Oriented Programming: An Evolutionary Approach is good, of course.

                                        I read a lot of NeXT documentation, though again I was more interested in the pre-NeXT days. (I briefly had a NeXTstation set up in my living room. That was fun.)

                                        I bought a copy of “Objective-C: Object-Oriented Programming Techniques” by Pinson for like fifteen cents from Amazon; that was all right but not great.

                                        Most interesting was the original “Object-Oriented Pre-Compiler” paper, which I believe was published in Communications of the ACM but I’m not exactly sure where I got it. It documented a very early implementation where methods were invoked using a rather…awkward…syntax.

                                        I found references here and there to the various “ICpaks” that PPI (later Stepstone) released (ICpak101 was the core collection classes and ICpak102 was the GUI, IIRC). These were very different from the later NeXTstep/OPENSTEP classes, and really nice in their own ways. They were somewhat documented in the Evolutionary Approach book as well.

                                        Sadly, I was never able to get the Holy Grail that I was looking for: copies of the original PPI compiler/ICpak/library manuals. Those whom I reached out to (Brad Cox, Tom Love, and others) were unable to find their copies or were unwilling to part with them (which is understandable).

                                        If you’re interested, the Portable Object Compiler implements a pre-NeXT (but still post-ancient) Objective-C, and its manual describes its “ObjectPak”, which is more in line with the original “ICpaks” than NeXTstep. I still much prefer Objective-C to C++.

                                        1. 2

                                          The Object-Oriented Pre-Compiler: programming Smalltalk-80 methods in the C language is the citation (and if you’re an ACM member, the full article is linked there).

                                          1. 1

                                            You weren’t kidding about needing to be an ACM member: I couldn’t find a legal copy anywhere other than behind paywalls. ResearchGate at least has a “request full text” button. I did at least stumble on an interesting, historical submission for Tuesday.

                                            1. 3

                                              The Object-Oriented Pre-Compiler: programming Smalltalk-80 methods in the C language

                                              http://sci-hub.tw/10.1145/948093.948095

                                              1. 2

                                                I did say “legal.” ;)

                                                1. 3

                                                  For all intents and purposes that article should be freely available by now. That it isn’t is just a bug in the system, a blip on the line, a hiccup in the clockwork and as such something the ’net has been designed to route around. Which it does.

                                          2. 2

                                            Thank you for the detailed response!

                                      1. 2

                                        I’ve spent a couple of days building a Literate Programming style tool, which additionally understands image declarations like graphviz and plantuml and generates figures in the document.

                                        I’ve still got a bit to go on that, then I want to rewrite it in itself.
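
                                        Roughly, the figure-generation step looks like this (a minimal sketch only: the fenced-block convention and names here are illustrative, and it assumes Graphviz’s dot is on the PATH):

                                            import re
                                            import subprocess
                                            from pathlib import Path

                                            # Find fenced "dot" blocks; the delimiter convention is illustrative.
                                            FENCE = re.compile(r"```dot\n(.*?)```", re.DOTALL)

                                            def render_figures(source: Path) -> str:
                                                """Replace each graphviz block with a reference to a generated PNG."""
                                                counter = 0

                                                def replace(match):
                                                    nonlocal counter
                                                    counter += 1
                                                    png = source.with_name(f"{source.stem}-fig{counter}.png")
                                                    # Pipe the block body to dot and keep only an image reference.
                                                    subprocess.run(["dot", "-Tpng", "-o", str(png)],
                                                                   input=match.group(1), text=True, check=True)
                                                    return f"[figure {counter}: {png.name}]"

                                                return FENCE.sub(replace, source.read_text())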

                                        1. 6

                                          I love the versioned boot image idea. It’s the sort of thing you would hope never to need, but be really glad when it’s available.

                                          This seems like a user experience bug:

                                          When we create a contact in People, for example, everything we write in it are attributes. Notice the file size itself is ‘0 bytes’.

                                          As far as I know, every OS that has a file system with forks/streams/xattrs does this. As the owner of a finite storage device, I’m looking at the file size to decide how much of my finite resource is used by that file. I don’t care whether your kernel technically puts the data in another fork, or the directory record, or some lookaside structure. I care that it puts the data somewhere.

                                          Technically correct, the worst kind of correct.
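
                                          To make the complaint concrete, a quick sketch (Linux-specific, assuming user xattrs are enabled on the filesystem):

                                              import os

                                              # Stash data in an extended attribute: the file still reports 0 bytes.
                                              with open("contact", "w"):
                                                  pass

                                              os.setxattr("contact", "user.displayname", b"Ada Lovelace")

                                              print(os.stat("contact").st_size)                  # 0, allegedly
                                              print(os.getxattr("contact", "user.displayname"))  # yet the data is somewhere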

                                          1. 5

                                            FYI, versioned boot and rolling back to previous generations is also supported in NixOS.

                                            1. 5

                                              Also similar facilities are available on FreeBSD, illumos, and SuSE.

                                          1. 4
                                            1. 3

                                              Are there readers here who do Fortran currently? If so, could you tell us a bit more about it? Just as exciting as NASA?

                                              1. 5

                                                I used to use it at Bloomberg. It was terrible. Just the ultimate in spaghetti code and spaghetti state. Also, we had to use FORTRAN 77.

                                                The thing about Fortran is that there’s nothing especially great about it as a language. It has language-level support for numerical processing stuff (which I never had to use), but it’s not necessary to have that as language-level features.

                                                See https://lobste.rs/s/ndqxfv/fortran_is_still_thing#c_bclfdy for more comments in which I concur.

                                                1. 4

                                                  I don’t personally do Fortran, but I do recall there being interest when I was in HPC up until 2 years ago. Intel still ships a Fortran compiler, for example. Sorry I can’t recall any specific reasons why folks preferred it other than “it’s just what we are used to using”.

                                                  1. 6

                                                    I concur, and am also an escapee from HPC (I worked on Allinea/ARM Forge). There are lots of Fortran compilers (flang being a recent, US gov-sponsored addition), debuggers and perf tools, libraries, and so on. You can write a program in Fortran that will send CUDA to an Nvidia board.

                                                    Part of it is “what we’re used to/trained in”, part of it is “we trust these implementations of these models”. You don’t just write a new climate model in Scala and say “there you go, if it ever finishes compiling your toolchain will be modern”. There will be a decade of publication, testing and review in the literature before the new model is trusted enough to be used by other researchers, and then they will start adapting it for their MPI/batch system/snowflake build environment. If then, even.

                                                    Also the HPC ecosystem has some interesting quirks. Because the purchase prices of the machines are 7-8 digits, often taxpayer funded, and will be used for years at very high utilisation, they like to find products that are supplied by multiple vendors to avoid gouging lock-in costs or disappearing vendors. So one of the interesting effects of gfortran or flang is that they make it easier to buy Intel or PGI’s compiler (in fact IIRC PGI contributed the flang code).

                                                  2. 4

                                                    I use Fortran for HPC as well.

                                                    1. 1

                                                      Not currently, but in the recent past I worked on a scientific modelling app whose backend was entirely Fortran (and the frontend was Delphi… oh my). It was a bit of a nightmare - think a single 10,000 LoC file that is full of global state manipulation and GOTOs. I’m not sure how much of the terror was because of the language vs it being written mostly by scientists.

                                                    1. 8

                                                      Whether it’s due to some collective memory of the technical difficulty of doing it pre-internet, or because ESR once wrote that it should be a last resort, the open source community seems unwilling to reach for the fork at times like this.

                                                      TFA doesn’t even mention the possibility, but it’s always there. If you want some change to Clojure but Rich Hickey isn’t ready to accept it, you can avoid your own frustration and his by forking the repo and making the change. If you don’t like the fact that I haven’t made any changes to event-stream in a while, you don’t have to give control over to some random, you can fork it.

                                                      Linux has even taken the idea of forking in decentralised version control to heart, and everybody’s repository is a fork. If Linus won’t take your commits, they’re still in git in your version(s), and maybe Greg or someone else will take them.

                                                      1. 3

                                                        So much this. Freedom is the freedom to fork. That’s at least 2/4 of the point ;)

                                                        1. 1

                                                          That’s overstating it. There are ways to allow forks under proprietary models. You just say customers can make any change they want, distribute it to other paying customers, and the licensor isn’t responsible for the changes. It doesn’t become free software without acquisition, changes, and redistribution all being free. At a minimum.

                                                      1. 4

                                                        Today is the first day that I’m full-time on The Labrary, my consultancy for helping and upskilling software teams.

                                                        So this week involves lots of office hours calls, improving my website, and otherwise generating leads and clients before the bank account runs out. Additionally, a lot of learning.

                                                        1. 3

                                                          Be sure to check out Barnacl.es if you haven’t yet for more ideas on generating leads.

                                                          1. 2

                                                            Thanks! I’m already signed up over there, although at the moment that DNS name doesn’t seem to be resolving.

                                                          2. 2

                                                            Very interested to see what Labrary turns into. Had a read of the site this weekend and will be pointing people at you.

                                                            1. 1

                                                              Thank you Matt!

                                                          1. 20

                                                            My sense now is that Alan Kay’s insight, that we can use the lessons of biology (objects are like cells that pass messages to each other), was on target but it was just applied incompletely. To fully embrace his model you need asynchrony. You can get that by seeing the processes of Erlang as objects or by doing what we now call micro-services. That’s where OO ideas best apply. The insides can be functional.
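
                                                            As a minimal sketch of that model (Python threads and queues standing in for Erlang processes; all names illustrative): each “object” is a process that owns its state and reacts only to messages, and the inside is a pure old-state-to-new-state step.

                                                                import threading
                                                                import queue

                                                                def counter(inbox):
                                                                    """An object as a cell/process: private state, reached only by messages."""
                                                                    state = 0
                                                                    while True:
                                                                        msg, *args = inbox.get()
                                                                        if msg == "add":
                                                                            state = state + args[0]   # the inside is functional: old -> new
                                                                        elif msg == "get":
                                                                            args[0].put(state)        # reply goes to the sender's mailbox
                                                                        elif msg == "stop":
                                                                            return

                                                                inbox, replies = queue.Queue(), queue.Queue()
                                                                threading.Thread(target=counter, args=(inbox,)).start()
                                                                inbox.put(("add", 5))       # asynchronous sends, not synchronous calls
                                                                inbox.put(("get", replies))
                                                                print(replies.get())        # 5
                                                                inbox.put(("stop",))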

                                                            1. 17

                                                              “If you want to deeply understand OOP, you need to spend significant time with Smalltalk” is something I’ve heard over and over throughout my career.

                                                              1. 5

                                                                It’s also a relatively simple language with educational variants like Squeak to help learners.

                                                                1. 7

                                                                  I have literally taken to carrying around a Squeak environment on USB to show to people. Even experienced engineers tend to get lost in it for a few hours and come out the other side looking at software in a different way, given a quick spiel about message passing.

                                                                2. 4

                                                                  If you don’t have any Smalltalk handy, Erlang will do in a pinch.

                                                                  1. 2

                                                                    And if you don’t have Erlang handy, you can try Amber in your browser!

                                                                  2. 1

                                                                    I went through the Amber intro that /u/apg shared. I’d love to dive deeper. If anyone has any resources for exploring Smalltalk/Squeak/etc. further, I’d love to see them. Especially resources that explore what sets the OO system apart.

                                                                    1. 2

                                                                      I’m told that this is “required” reading. It’s pretty short, and good.

                                                                  3. 16

                                                                    I even wrote a book on that statement. My impression is that “the insides can be functional” could even be “the insides should be functional”; many objects should end up converting incoming messages into outgoing messages. Very few objects need to be edge nodes that turn incoming messages into storage.

                                                                    But most OOP code that I’ve seen has been designed as procedural code where the modules are called “class”. Storage and behaviour are intertwingled, complexity is not reduced, and people say “don’t do OOP because it intertwingles behaviour and storage”. It doesn’t.
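
                                                                    A sketch of the distinction (illustrative names): most objects should look like the first of these, and only a few like the second.

                                                                        class Enricher:
                                                                            """Most objects: turn incoming messages into outgoing messages."""
                                                                            def __init__(self, downstream):
                                                                                self.downstream = downstream

                                                                            def receive(self, order):
                                                                                self.downstream.receive({**order, "total": order["qty"] * order["price"]})

                                                                        class Ledger:
                                                                            """The rare edge node: turn incoming messages into storage."""
                                                                            def __init__(self):
                                                                                self.rows = []

                                                                            def receive(self, order):
                                                                                self.rows.append(order)

                                                                        ledger = Ledger()
                                                                        Enricher(ledger).receive({"qty": 3, "price": 4})
                                                                        print(ledger.rows)   # [{'qty': 3, 'price': 4, 'total': 12}]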

                                                                    1. 2

                                                                      This.

                                                                      Whether the implementation is “functional” or not, the internals of any opaque object boundary should at least be modellable as a collection of [newState, worldActions] = f(oldState, message) behaviours.

                                                                      We also need a unified and clearer method for namespacing and module separation, so that people aren’t forced to make classes (or closures-via-invocation) simply to split the universe into public and private realms.

                                                                      To say that the concept of objects should be abandoned simply because existing successful languages have forced users to mis-apply classes for namespacing is as silly as the idea that we should throw out lexical closures because people have been misusing them to implement objects (I’m looking at you, React team).
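
                                                                      As a sketch of that shape (illustrative, not any particular framework): the boundary object holds the state and performs the world actions, while the behaviour itself is a pure function.

                                                                          def handle(state, message):
                                                                              """Pure core: (oldState, message) -> (newState, worldActions)."""
                                                                              kind, *args = message
                                                                              if kind == "deposit":
                                                                                  return state + args[0], [("log", f"deposited {args[0]}")]
                                                                              if kind == "balance":
                                                                                  return state, [("reply", state)]
                                                                              return state, []

                                                                          class Account:
                                                                              """Opaque boundary: keeps state private, delegates behaviour to the core."""
                                                                              def __init__(self):
                                                                                  self.state = 0

                                                                              def receive(self, message):
                                                                                  self.state, actions = handle(self.state, message)
                                                                                  for action in actions:   # world-facing effects happen only out here
                                                                                      print(action)

                                                                          acct = Account()
                                                                          acct.receive(("deposit", 10))   # ('log', 'deposited 10')
                                                                          acct.receive(("balance",))      # ('reply', 10)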

                                                                    2. 5

                                                                      If there’s one lesson I’ve learned from software verification, it’s that concurrency is bad and we should avoid it as much as possible.

                                                                      1. 8

                                                                        I’m not entirely sure this is correct. I’ve been using Haskell/Idris/Rust/TLA+ for a while now and I’m now of the opinion that concurrency is just being tackled at the wrong conceptual level: most OOP/imperative strategies mix state and action when they shouldn’t.

                                                                        Also can you qualify what you mean by concurrency? I’m not sure if you’re conflating concurrency with parallelism here.

                                                                        I’m using the definitions offered by Simon Marlow of Haskell fame, from Parallel and Concurrent Programming in Haskell:

                                                                        In many fields, the words parallel and concurrent are synonyms; not so in programming, where they are used to describe fundamentally different concepts.

                                                                        A parallel program is one that uses a multiplicity of computational hardware (e.g., several processor cores) to perform a computation more quickly. The aim is to arrive at the answer earlier, by delegating different parts of the computation to different processors that execute at the same time.

                                                                        By contrast, concurrency is a program-structuring technique in which there are multiple threads of control. Conceptually, the threads of control execute “at the same time”; that is, the user sees their effects interleaved. Whether they actually execute at the same time or not is an implementation detail; a concurrent program can execute on a single processor through interleaved execution or on multiple physical processors.

                                                                        While parallel programming is concerned only with efficiency, concurrent programming is concerned with structuring a program that needs to interact with multiple independent external agents (for example, the user, a database server, and some external clients). Concurrency allows such programs to be modular; the thread that interacts with the user is distinct from the thread that talks to the database. In the absence of concurrency, such programs have to be written with event loops and callbacks, which are typically more cumbersome and lack the modularity that threads offer.

                                                                        1. 5

                                                                          Also can you qualify what you mean by concurrency?

                                                                          Concurrency is the property that your system cannot be described by a single global clock, as there exist multiple independent agents such that the behavior of the system depends on their order of execution. Concurrency is bad because it means you have multiple possible behaviors for any starting state, which complicates analysis.

                                                                          Using Haskell/Rust/Eiffel here helps but doesn’t eliminate the core problem, as your system may be larger than an individual program.
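
                                                                          You can see this in miniature by brute force (a toy model, nothing more): two agents each read a shared x and then write x+1, and merely enumerating the interleavings yields more than one final state.

                                                                              from itertools import permutations

                                                                              def run(schedule):
                                                                                  """Execute one interleaving of two read-then-write agents."""
                                                                                  x, seen = 0, {}
                                                                                  for agent, step in schedule:
                                                                                      if step == "read":
                                                                                          seen[agent] = x
                                                                                      else:
                                                                                          x = seen[agent] + 1
                                                                                  return x

                                                                              steps = [("a", "read"), ("a", "write"), ("b", "read"), ("b", "write")]

                                                                              # Keep only schedules that respect each agent's program order.
                                                                              valid = [s for s in permutations(steps)
                                                                                       if all(s.index((a, "read")) < s.index((a, "write")) for a in "ab")]

                                                                              print({run(s) for s in valid})   # {1, 2}: one start state, two behaviors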

                                                                          1. 10

                                                                            All programs run in systems bigger than the program

                                                                            1. 1

                                                                              But that’s not an issue if the interaction between the program and the system is effectively sequential (not concurrent), which I think is the point that was being made. A multi-threaded program, even if you can guarantee it is free of data races etc., may still have multiple possible behaviors, with no guarantee that all are correct within the context of the system in which it operates. Analysis is more complex because of the concurrency. A non-internally-concurrent program can, on the other hand, be tested against a certain input sequence and have a deterministic output, so that we can know it is always correct for that input sequence. Reducing the overall level of concurrency in the system eases analysis.

                                                                              1. 2

                                                                                You can, and probably should, think of OS scheduling decisions as a form of input. I agree that concurrency can make the state space larger, but I don’t believe it is correct to treat concurrency/parallelism as mysterious or qualitative.

                                                                            2. 3

                                                                              Using Haskell/Rust/Eiffel here helps but doesn’t eliminate the core problem, as your system may be larger than an individual program.

                                                                              They help by reducing the scope to the I/O layers interacting with each other. I think an example would be helpful here, as there isn’t anything offered in support of your stated position so far.

                                                                              But let’s ignore language for the moment and take an example from my work. We have a network filesystem that has to behave generally like a POSIX filesystem across systems. This is all C and in kernel, so mutexes and semaphores are the overall abstractions in use, for good or ill.

                                                                              I’ve been using TLA+ both as a learning aid in validating my understanding of the existing code, and to try to find logic bugs in general for things like flock() needing to behave across systems.

                                                                              Generally what I find is that these primitives are insufficient for handling the interactions in I/O across system boundaries. Take a call to flock() or even fsync(): you need to ensure all client systems behave in a certain way when one (or more) of them makes the call. What I find is that the behavior as programmed works in general cases, but when you set up TLA+ to mimic the mutexes/semaphores in use and their calling behavior, they are riddled with logic holes.

                                                                              This is where I’m trying to argue that the abstraction layers in use are insufficient. If we presume we used Rust in this case (primarily as it’s about the only language that could fit a kernel module use case), there are a number of in-node concurrent races across kernel worker threads that can just “go away”. That frees us to validate our internode concurrency logic via TLA+ and then ensure our written code conforms to that specification.

                                                                              As such, I do not agree that concurrent programming should be avoided whenever possible. I only argue that OOP encourages, by default, practices that are bad for programming in a concurrent style (mixing state and code in an abstraction that is ill suited for it). It doesn’t mean OOP is inherently bad, just a poor fit for the domain.
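
                                                                              For a flavour of the kind of hole these runs turn up, here’s a toy stand-in (Python brute force rather than TLA+, and nothing like the real module): each node checks a cached view of the lock and then claims it under its own local mutex, and enumerating the interleavings finds the violation.

                                                                                  from itertools import permutations

                                                                                  steps = [(n, s) for n in ("node1", "node2") for s in ("check", "claim")]

                                                                                  def exclusive(schedule):
                                                                                      """Does at most one node end up holding the 'exclusive' lock?"""
                                                                                      holders, saw_free = set(), {}
                                                                                      for node, step in schedule:
                                                                                          if step == "check":
                                                                                              saw_free[node] = not holders   # snapshot; may go stale
                                                                                          elif saw_free[node]:
                                                                                              holders.add(node)
                                                                                      return len(holders) <= 1

                                                                                  # Respect each node's program order: it checks before it claims.
                                                                                  valid = [s for s in permutations(steps)
                                                                                           if all(s.index((n, "check")) < s.index((n, "claim"))
                                                                                                  for n in ("node1", "node2"))]

                                                                                  print(all(exclusive(s) for s in valid))   # False: both nodes can hold it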

                                                                              1. 1

                                                                                I feel that each public/private boundary should have its own singular clock, and use this to sequence interactions within its encapsulated parts, but there can never really be a single global clock to a useful system, and most of our problems come from taking the illusion of said clock further than we should have.

                                                                            3. 4

                                                                            I would go exactly tangential and say that the best software treats concurrency as the basis of all computation; in particular, agnostic concurrency. If objects are modeled to have the right scope of visibility and influence, they should be able to handle messages in a perfectly concurrent and idempotent manner, regardless of cardinality.

                                                                              1. 2

                                                                              Take Clojure, for example: there, concurrency is not that bad, and there is no reason to avoid it. Mutability and the intertwining of abstractions are what lead to problematic situations. Functional programming solves that by its nature.

                                                                                1. 4

                                                                                Even if the program is immutable, you ultimately want it to have some effect on the outside world, and functional programming doesn’t magically fix the race conditions there. Consider having a bunch of immutable, unentwined workers all making HTTP requests to the same server. Even if there are no data races, you can still exceed the rate limit due to concurrency.
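
                                                                                A sketch of that failure mode (hypothetical numbers; appending to a list stands in for the request): every worker is independent and locally well behaved, but the limit is a property of the whole system.

                                                                                    import threading

                                                                                    SERVER_LIMIT = 5    # requests the server allows per window
                                                                                    LOCAL_LIMIT = 1     # each worker is individually "polite"
                                                                                    log = []            # stands in for the server's request log

                                                                                    def worker(worker_id):
                                                                                        for _ in range(LOCAL_LIMIT):
                                                                                            log.append(worker_id)   # the "request": no shared inputs, no data race

                                                                                    threads = [threading.Thread(target=worker, args=(i,)) for i in range(10)]
                                                                                    for t in threads:
                                                                                        t.start()
                                                                                    for t in threads:
                                                                                        t.join()

                                                                                    print(len(log), "requests this window; the server allowed", SERVER_LIMIT)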

                                                                            1. 3

                                                                              Can someone please explain to me what a “dickbar” is, and why this is an appropriate description of the UI mechanism? I read the daringfireball link, but I don’t quite get it.

                                                                              1. 3

                                                                                Twitter kills the #dickbar has a screenshot. It’s the ad obscuring the content.

                                                                                Actually called the Quickbar, but introduced when Dick Costolo took charge, hence the name.

                                                                              1. 3

                                                                                Or is processor cache more susceptible to gamma rays than RAM?

                                                                                Back when I worked for a Sun customer, Sun’s technical folks said that ECC memory was your defence against cosmic rays, and that their SPARC systems used ECC everywhere except the CPU register file. If the architecture involved here (which isn’t explicitly given, but I don’t know that ACPI was used anywhere except x86) didn’t use ECC or other protection for its CPU cache, then I’m still not sure what this fix will achieve. You know that the caches should have been flushed when entering S1, so flush them again now, sure. But what if a cosmic bit-flip happens in S0?