1. 14

    I believe that OO affords building applications of anthropomorphic, polymorphic, loosely-coupled, role-playing, factory-created objects which communicate by sending messages.

    It seems to me that we should just stop trying to model data structures and algorithms as real-world things. Like hammering a square peg into a round hole.

    1. 3

      Why does it seem that way to you?

      1. 5

        Most professional code bases I’ve come across are objects all the way down. I blame universities for teaching OO as the one true way. C# and Java code bases are naturally the worst offenders.

        1. 4

          I mostly agree, but I feel part of the trouble is that we have to work against language, to fight past the baggage inherent in the word “object”. Even Alan Kay regrets having chosen “object” and wishes he could have emphasized “messaging” instead. The phrase “object-oriented” leads people, as you point out, to model physical things first, as that is the natural linguistic analog to “object”.

          In my undergraduate days, I encountered a required class with a project specifically intended to disabuse students of that notion. The project deliberately tempted you to model the world and go overboard with a needlessly deep inheritance hierarchy, whereas the problem was easily modeled with objects representing more intangible concepts, or just by naming classes directly after interactions.

          I suppose I have taken that “Aha!” moment for granted and can see how, in the absence of such an explicit lesson, it might be hard to discover the notion on your own. It is definitely a problem if OO concepts are presented as universally good or without pitfalls.

          1. 4

            I encountered a required class with a project specifically intended to disabuse students of that notion. The project deliberately tempted you to model the world and go overboard with a needlessly deep inheritance hierarchy, whereas the problem was easily modeled with objects representing more intangible concepts, or just by naming classes directly after interactions.

            Can you remember some of the specifics of this? Sounds fascinating.

            1. 3

              My memory is a bit fuzzy on it, but the project was about simulating a bank. Your bank program would be initialized with N walk-in windows, M drive-through windows and T tellers working that day. There might’ve been a second type of employee? The bank would be subjected to a stream of customers wanting to do some heterogeneous varieties of transactions, taking differing amounts of time.

              There did not need to be a teller at the drive-through window at all times if there was not a customer there, and there were precedence rules, such as: if a customer was at the drive-through and no teller was at the window, the next available teller had to go there.

              The goal was to produce a correct order of customers served, and order of transactions made, across a day.

              The neat part (pedagogically speaking) was the project description/spec. It went to so much effort to slowly describe and model the situation for you, full of distracting details (though very real-world ones), that it all but asked you to subclass things needlessly, much to your detriment. Are the multiple types of employees completely separate classes, or both subclasses of an Employee? Should Customer and Employee both be subclasses of a Person class? After all, they share the property of having a name to output later. What about DriveThroughWindow vs WalkInWindow? They share some behaviors, but aren’t quite the same.

              Most people here would realize those are the wrong questions to be asking. Even for a new programmer, the true challenge was gaining your first understanding of concurrency and following a spec’s rules for resource allocation. But said new programmer had just gone through a week or two on interfaces, inheritance and composition, and oh look, now there’s this project spec begging you to use them!
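
              For the curious, here is a rough sketch in Java of the “name classes after interactions” idea. The names (WindowKind, Transaction, Window, Teller) are my own hypothetical reconstruction, not the actual assignment:

                  import java.time.Duration;

                  // Hypothetical reconstruction (Java 16+ records): model the interactions
                  // (a transaction being served at a window) rather than a
                  // Person/Employee/Customer taxonomy.
                  enum WindowKind { WALK_IN, DRIVE_THROUGH }   // a property, not a subclass

                  record Transaction(String customer, Duration length) {}
                  record Window(WindowKind kind) {}

                  final class Teller {
                      private final String name;
                      Teller(String name) { this.name = name; }

                      // The teller serves a transaction at whichever window the scheduler assigns.
                      // The precedence rule (“an unattended drive-through customer gets the next
                      // free teller”) lives in that scheduler, not in an inheritance tree.
                      String serve(Window window, Transaction t) {
                          return name + " served " + t.customer() + " at " + window.kind();
                      }
                  }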

          2. 2

            Java and C# are the worst offenders and, for the most part, are not object-oriented in the way you would infer that concept from, for example, the Xerox or ParcPlace use of the term. They are C in which you can call your C functions “methods”.

            1. 4

              At some point you have to just let go and accept the fact that the term has evolved into something different from the way it was originally intended. Language changes with time, and even Kay himself has said “message-oriented” is a better word for what he meant.

              1. 2

                Yeah, I’ve seen that argument used over the years. I might as well call it the no true Scotsman argument. Yes, they are multi-paradigm languages and I think that’s what made them more useful (my whole argument was that OOP isn’t for everything). Funnily enough, I’ve seen a lot of modern C# and Java code that has decided message passing is the only way to do things and that multi-thread/process/service is the way to go for even simple problems.

                1. 4

                  The opposite of No True Scotsman is Humpty-Dumptyism; you can always find a logical fallacy to discount an argument you want to ignore :)

          3.  
            Square peg;  
            Round hole;  
            Hammer hammer;  
            hammer.Hit(peg, hole);
            
            1.  

              A common mistake.

              In object-orientation, an object knows how to do things itself. A peg knows how to be hit, i.e. peg.hit(…). In your example, you’re setting up your hammer to be constantly changed and modified as it needs to be extended to handle different ways to hit new and different things. In other words, you’re breaking encapsulation by requiring your hammer to know about other objects’ internals.
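
              A tiny Java sketch of the difference (Hittable, Peg and Hammer are just illustrative names):

                  // Encapsulation-breaking version: the hammer must know how every target works,
                  // and grows a new overload for each new thing it can hit:
                  //   hammer.hit(peg, hole);
                  //   hammer.hit(nail, plank);   // Hammer keeps changing

                  // Object-oriented version: the peg knows how to be hit; the hammer just delivers force.
                  interface Hittable {
                      void receiveBlow(double force);
                  }

                  final class Peg implements Hittable {
                      private double depth = 0;
                      public void receiveBlow(double force) { depth += force; }  // the peg manages its own state
                  }

                  final class Hammer {
                      void hit(Hittable target) { target.receiveBlow(1.0); }     // works for pegs, nails, ...
                  }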

            2.  

              your use of a real world simile is hopefully intentionally funny. :)

              1. 2

                That sounds great, as an AbstractSingletonProxyFactoryBean is not a real-world thing, though if I can come up with a powerful and useful metaphor, like the “button” metaphor in UIs, then it may still be valuable to model the code-only abstraction on its metaphorical partner.

                We need to be cautious that we don’t throw away the baby of modelling real world things as real world things at the same time that we throw away the bathwater.

                1. 2

                  Factory

                  A factory is a real world thing. The rest of that nonsense is just abstraction disease, which is either used to work around language expressiveness problems or comes from people adding an abstraction for the sake of making patterns.

                  We need to be cautious that we don’t throw away the baby of modelling real world things as real world things at the same time that we throw away the bathwater.

                  I think OOP has its place in the world, but it is not for every problem (or even the majority of them).

                  1. 3

                    A factory in this context is a metaphor, not a real world thing. I haven’t actually represented a real factory in my code.

                    1.  

                      I know of one computer in a museum that if you boot it up, it complains about “Critical Error: Factory missing”.

                      (It’s a control computer for a factory, it’s still working, and I found the fact that someone modeled that case and shows an appropriate error to be the most charming thing.)

                2.  

                  You need to write, say, a new air traffic control system, or a complex hotel reservation system, using just the concepts of data structures and algorithms? Are you serious?

                1. 81

                  I love that this post is called “I do not like Go”, not “Why you should stop using Go”, “Go considered harmful” or any of the other “here is my opinion, treat it as fact” titles.

                  1. 4

                    I suspect a lot of the problem is the way it was hyped and taught.

                    The notion that real world objects map to program objects is rubbished by the Liskov Substitution Principle.

                    A far better approach is to say program “Objects” are utterly unrelated to our real world intuition of Objects.

                    An Object instance is merely a binding of a set of names to a set of values, for which a boolean expression (which we call the class invariant) always holds true.

                    This rids us of the urge to do “too much work in the constructor”, after all, it’s just binding names to values.

                    So why would we even want such a concept as an Object?

                    Because we want our functions to be correct.

                    What does correct mean? It means if a given precondition holds, we promise to fulfill a postcondition.

                    So if you have an object of a certain type, you are guaranteed the invariant holds. If that invariant implies the precondition is met, then static type checking guarantees you will only invoke this function with values for which the precondition holds.
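
                    A minimal Java sketch of that idea (NonEmptyList is a made-up class, not from any library): the constructor does little more than bind names to values and establish the invariant, and any method whose precondition is implied by the invariant is then safe to call on any instance the type checker lets through.

                        import java.util.List;

                        // Invariant: `items` is never empty. The constructor is just a binding of
                        // a name to a value, plus a check that the invariant holds.
                        final class NonEmptyList<T> {
                            private final List<T> items;

                            NonEmptyList(List<T> items) {
                                if (items.isEmpty()) throw new IllegalArgumentException("must not be empty");
                                this.items = List.copyOf(items);
                            }

                            // Precondition of first(): the list is non-empty. The invariant implies it,
                            // so static type checking (only a NonEmptyList can reach this code) is enough;
                            // no runtime check, no conditional at the call site.
                            T first() { return items.get(0); }
                        }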

                    This submission should be upvoted a lot more…

                    …not because I think we should all go off and do program proving.

                    But because if you don’t understand it, you don’t understand what you are doing when you program.

                    1. 1

                      You can have objects that model real-world things and design by contract or formal specification (there’s a book called Object Orientation in Z that goes through this process for a collection of different OO extensions to Z, and of course Object-Oriented Software Construction shows how to do it and explicitly builds on the work of Liskov). With that in mind, I don’t believe that the LSP says that you can’t model real world things as objects.

                      1. 2

                        The notion that you can take what a non-programmer calls an object and inheritance and map them naively onto programming objects is hopeless; it’s misleading.

                        Certainly you can model real world objects… and many other real world things common people wouldn’t call objects, and an infinity of non-real world things not remotely object-like, all modeled by programming objects.

                        The naming is confusing and misleading and, from a teaching point of view, was a mistake. It produced a generation of badly structured code with just plain wrong inheritance trees.

                        Your code suddenly improves dramatically when you throw away the mental crutch of programming objects being anything like real world objects.

                        Programming is a profoundly mathematical activity, but sadly the harsh mathematical realities of it tend to be blunted by “testing it into the shape of the product required”.

                        Those harsh mathematical realities re-emerge sharply when we attempt to re-use code.

                        Then suddenly we whinge “Code re-use is hard” instead of seeing the mathematics that governs everything we do.

                    1. 3

                      Great stuff. Re the conditionals, I look at it this way: An object is a choice.

                      When you have sub-type polymorphism, you can make a choice in one part of a program and create an object that embodies it. When the object is bound, the choice is made and the conditional can disappear from the contexts where the object is used.
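
                      A small Java illustration (the Greeting example is mine, purely illustrative): the conditional runs once, where the object is constructed, and disappears from every place the object is used.

                          interface Greeting {
                              String greet(String name);
                          }

                          final class FormalGreeting implements Greeting {
                              public String greet(String name) { return "Good day, " + name + "."; }
                          }

                          final class CasualGreeting implements Greeting {
                              public String greet(String name) { return "hey " + name; }
                          }

                          final class Demo {
                              public static void main(String[] args) {
                                  // The choice is made exactly once, here...
                                  Greeting greeting = args.length > 0 && args[0].equals("formal")
                                          ? new FormalGreeting()
                                          : new CasualGreeting();

                                  // ...and every later use is conditional-free: the object embodies the answer.
                                  System.out.println(greeting.greet("Ada"));
                                  System.out.println(greeting.greet("Barbara"));
                              }
                          }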

                      1. 1

                        I agree and use that model, and it makes me think “I answered this question back when I constructed this object”. That’s something I can certainly see as a source of confusion.

                      1. 5

                        This is so good. An object-oriented approach breaks down when you can’t see how to design the objects you need, so you just put procedural code into the objects you have.

                        I predict similar “failures” in functional programming codebases as that paradigm continues its journey to the mainstream. In my uikonf talk I called this the “imperative trapdoor”: you can always hide imperative code behind a veneer of objects or functions, but then you lose the paradigm benefits of objects or functions.

                        1. 2

                          Thanks, this is a good observation!

                          1. 2

                            Pure FP with explicit effects can help push you towards a better way, but it’s always possible to end up with an equally enterprisey mess of monad transformers and lenses… hoping effect systems can alleviate some of the former problems. But yes, poor program design will interact poorly with any paradigm.

                          1. 4

                            This is a problem with any language or library. You need to know what is available in the Python library and what it does to use it effectively. You need to know the binaries in /bin to use the shell effectively. And so on.

                            It’s just like learning a human language: Until you use the vocabulary enough to get comfortable, you are going to feel lost, and spend a lot of time getting friendly with a dictionary.

                            1. 8

                              This is a problem with any language or library. You need to know what is available in the Python library and what it does to use it effectively. You need to know the binaries in /bin to use the shell effectively. And so on.

                              I think this probably misses the point. The Python solution was able to compose a couple of very general, elementary problem solving mechanisms (iteration, comparison), of which Python has a very limited vocabulary (there’s maybe a half dozen control constructs, total?), to quickly arrive at a solution (albeit a limited, non-parallel one, but one that’s intuitive and perhaps 8 times out of 10 does the job). The standard library might offer an implementation already, but you could get a working solution without crawling through the docs (and you could probably guess the name anyways).

                              J required, owing to its overwhelming emphasis on efficient whole-array transformation, selection from a much much much larger and far more specialized set of often esoteric constructs/transformations, all of which have unguessable symbolic representations. The documentation offers little to aid this search, complicating a task that was already quite a bit less intuitive than Python’s naive iteration approach.

                              1. 5

                                For a long time now, I’ve felt that the APL languages were going to have a renaissance. Our problems aren’t getting any simpler, so raising the level of abstraction is the way forward.

                                The emphasis on whole array transformation seems like a hindrance, but imagine for a second that RAM becomes so cheap that you simply load all of your enterprise’s data into memory on a single machine. How many terabytes is that? Whole array looks very practical then.

                                For what it’s worth, there is a scheme to the symbols in J. You can read meaning into their shapes. Look at grade-up and grade-down. They are like little pictures.

                                J annoys me with its fork and hook forms. That goes past the realm of readability for me. Q is better, It uses words.

                                What I’d like to see is the entire operator set of, say, J brought into mainstream languages as a library. Rich operations raising the level of abstraction are likely more important than syntax.

                                1. 5

                                  J annoys me with its fork and hook forms. That goes past the realm of readability for me.

                                  In textual form I agree. The IDE has a nice graphical visualization of the dataflow that I find useful in ‘reading’ that kind of composition in J code though. I’ve been tempted to experiment with writing J in a full-on visual dataflow style (a la Max/MSP) instead of only visualizing it that way.

                                  1. 2

                                    I find it a lot easier to write J code by hand before copying it to a computer. It’s easier to map out the data flow when you can lay it out in 2D.

                                    1. 1

                                      Have you looked at Q yet?

                                    2. 1

                                      That would be a very useful comparison of the usability of a compact text syntax vs visual language. I imagine that discoverability is better with a visual language as by definition it is interactive.

                                    3. 2

                                      I started implementing the operators in Scala once - the syntax is flexible enough that you can actually get pretty close, and a lot of them are the kind of high level operations that either already exist in Scala or are pretty easy to implement. But having them be just a library makes the problem described in the article much much worse - you virtually have to memorize all the operators to get anything done, which is bad enough when they’re part of the language, but much worse when they’re a library that not all code uses and that you don’t use often enough to really get them into your head.

                                      1. 1

                                        It could just be a beginning stage problem.

                                        1. 1

                                          It could, but the scala community has actually drawn back from incremental steps in that direction, I think rightly - e.g. scalaz 7 removed many symbolic operators in favour of functions with textual names. Maybe there’s some magic threshold that would make the symbols ok once we passed it, but I can only spend so much time exploring the possibility without seeing an improvement.

                                          1. 1

                                            Oh. I’d definitely go with word names. Q over J. To me, the operations are the important bit.

                                  2. 5

                                    The difference is that Python already is organized by the standard library, and has cookbooks, and doesn’t involve holistically thinking of the entire transformation at once. So it intrinsically has the problem to a lesser degree than APLs do and also has taken steps to fix it, too.

                                    1. 2

                                      How easy is it to refactor APL or J code? The reason I ask is that I have the same problem with the ramdajs library in JavaScript, which my team uses as a foundational library. It has around 150 functions, I don’t remember what they all do and I certainly don’t remember what they’re all called, so I often write things in a combination of imperative JS and ramda then look for parts to refactor. I’m interested to hear whether that’s possible with APL, or whether you have to know APL before you can write APL.

                                  1. 5

                                    I still haven’t recovered fully from an illness that started two weeks ago, so this is the second Monday in a row that starts with catching up/deleting information from the previous few days.

                                      My team has asked for an “overview of software engineering” (it’s largely comprised of people who are new to the company, or to both the company and the industry), so I’m giving that this week: I’ll talk through the “lifecycle of a story” from vague idea to deployed to production. I also have management training, but more “how to use our HR/L&D systems” than “how to be an effective manager”.

                                    Side projects: the only thing I managed to do from my sickbed was carry on the Russian learning, so there’s some of that. If I get some mental energy back I’d like to implement some GNUstep APIs I found stubbed out when I ported Freecell for Mac.

                                    1. 5

                                      I spoke to the FreeBSD folks at fosdem about laptop compatibility as I’d also had issues. The advice they gave was that h/w support is best in -CURRENT so laptop users should treat that as a rolling release. I have yet to try that out.

                                      1. 2

                                        Are there binary releases of -CURRENT, or is the advice to “rolling recompile” the kernel & base system daily/weekly? 😒

                                        1. 2

                                            the advice is to compile from source, but rather than doing it on a regular schedule, to track the -current mailing list and see what folks are talking about.

                                          1. 2

                                            You have to recompile if you want evdev support in most input device drivers (options EVDEV_SUPPORT in config is still not on by default >_<).

                                            1. 2

                                              I run TrueOS which tracks current + the DRM changes. I’m on UNSTABLE, but with Boot Environments it’s not a problem (the last UNSTABLE release actually broke quite a bit for me so I just rolled back, without issue). I suggest it if you want to track the latest stuff but don’t care to do it yourself. cc @oz @leeg

                                              1. 1

                                                That’s interesting, I’ll have a look. Thanks!

                                          1. 6

                                            I think the faulty assumption is that the happiness of users and developers is more important to the corporate bottom line than full control over the ecosystem.

                                            Linux distributions have shown for a decade that providing a system for reliable software distribution while retaining full user control works very well.

                                            Both Microsoft and Apple kept the first part, but dropped the second part. Allowing users to install software not sanctioned by them is a legacy feature that is removed – slowly to not cause too much uproar from users.

                                            Compare it to the time when Windows started “phoning home” with XP … today it’s completely accepted that it happens. The same thing will happen with software distributed outside of Microsoft’s/Apple’s sanctioned channels. (It indeed has already happened on their mobile OSes.)

                                            1. 8

                                              As a long-time Linux user and believer in the four freedoms, I find it hard to accept that Linux distributions demonstrate “providing a system for reliable software distribution while retaining full user control works very well”. Linux distros seems to work well for enthusiasts and places with dedicated support staff, but we are still at least a century away from the year of Linux on the desktop. Even many developers (who probably have some overlap with the enthusiast community) have chosen Macs with unreliable software distribution like Homebrew and incomplete user control.

                                              1. 2

                                                I agree with you that Linux is still far away from the year of Linux on the desktop, but I think it is not related to the way Linux deals with software distribution.

                                                There are other, bigger issues with Linux that need to be addressed.

                                                In the end, the biggest impact on adoption would be some game studios releasing their AAA title as a Linux-exclusive. That’s highly unlikely, but I think it illustrates well that many of the factors of Linux’ success on the desktop hinge on external factors which are outside of the control of users and contributors.

                                                1. 2

                                                    All the devs I know that use a Mac use Linux in some virtualisation option instead of Homebrew for work. Obviously that’s not a scientific study by any means.

                                                  1. 8

                                                    I’ll be your counter example. Homebrew is a great system, it’s not unreliable at all. I run everything on my Mac when I can, which is pretty much everything except commercial Linux-only vendor software. It all works just as well, and sometimes better, so why bother with the overhead and inconvenience of a VM? Seriously, why would you do that? It’s nonsense.

                                                    1. 4

                                                      Maybe a VM makes sense if you have very specific wishes. But really, macOS is an excellent UNIX and for most development you won’t notice much difference. Think Go, Java, Python, Ruby work. Millions of developers probably write on macOS and deploy on Linux. I’ve been doing this for a long time and ‘oh this needs a Linux specific exception’ is a rarity.

                                                      1. 4

                                                        you won’t notice much difference.

                                                        Some time ago I was very surprised that hfs is not case sensitive (by default). Due to a bad letter-case in an import my script would fail on linux (production), but worked on mac. Took me about 30 minutes to figure this out :)

                                                        1. 3

                                                          You can make a case sensitive code partition. And now with APFS, partitions are continuously variable size so you won’t have to deal with choosing how much goes to code vs system.

                                                          1. 1

                                                            A case sensitive HFS+ slice on a disk image file is a good solution too.

                                                          2. 2

                                                            Have fun checking out a git repo that has Foo and foo in it :)

                                                            1. 2

                                                              It was bad when microsoft did it in VB, and it’s bad when apple does it in their filesystem lol.

                                                          3. 2

                                                            Yeah definitely. And I’ve found that accommodating two platforms where necessary makes my projects more robust and forces me to hard code less stuff. E.g. using pkg-config instead of yolocoding path literals into the build. When we switched Linux distros at work, all the packages that worked on MacOS and Linux worked great, and the Linux only ones all had to be fixed for the new distro. 🙄

                                                          4. 2

                                                            I did it for awhile because I dislike the Mac UI a lot but needed to run it for some work things. Running in a full screen VM wasn’t that bad. Running native is better, but virtualization is pretty first class at this point. It was actually convenient in a few ways too. I had to give my mac in for repair at one point, so I just copied the VM to a new machine and I was ready to run in minutes.

                                                            1. 3

                                                              I use an Apple computer as my home machine, and the native Mac app I use is Terminal. That’s it. All other apps are non-Apple and cross-platform.

                                                              That said, MacOS does a lot of nice things. For example, if you try to unmount a drive, it will tell you what application is still using it so you can unmount it. Windows (10) still can’t do that, you have to look in the Event viewer(!) to find the error message.

                                                              1. 3

                                                                In case it’s unclear, non-Native means webapps, not software that doesn’t come preinstalled on your Mac.

                                                                1. 3

                                                                  It is actually pretty unclear what non-Native here really means. The original HN post is about sandboxed apps (distributed through the App Store) vs non-sandboxed apps distributed via a developer’s own website.

                                                                  Even Gruber doesn’t mention actual non-Native apps until the very last sentence. He just talks/quotes about sandboxing.

                                                                  1. 3

                                                                    The second sentence of the quoted paragraph says:

                                                                    Cocoa-based Mac apps are rapidly being eaten by web apps and Electron pseudo-desktop apps.

                                                              2. 1

                                                                full-screen VM high-five

                                                              3. 1

                                                                  To have an environment closer to production, I guess (or maybe ease of installation; dunno, never used Homebrew). I don’t have to use a Mac anymore so I run a pure distro, but everyone else I know uses virtualisation or containers on their Macs.

                                                                1. 3

                                                                  Homebrew is really really really easy. I actually like it over a lot of Linux package managers because it first class supports building the software with different flags. And it has binaries for the default flag set for fast installs. Installing a package on Linux with alternate build flags sucks hard in anything except portage (Gentoo), and portage is way less usable than brew. It also supports having multiple versions of packages installed, kind of half way to what nix does. And unlike Debian/CentOS it doesn’t have opinions about what should be “in the distro,” it just has up to date packages for everything and lets you pick your own philosophy.

                                                                  The only thing that sucks is OpenSSL ever since Apple removed it from MacOS. Brew packages handle it just fine, but the python package system is blatantly garbage and doesn’t handle it well at all. You sometimes have to pip install with CFLAGS set, or with a package specific env var because python is trash and doesn’t standardize any of this.

                                                                  But even on Linux using python sucks ass, so it’s not a huge disadvantage.

                                                                  1. 1

                                                                    Installing a package on Linux with alternate build flags sucks hard in anything except portage

                                                                    You mention nix in the following sentence, but installing packages with different flags is also something nix does well!

                                                                    1. 1

                                                                      Yes true, but I don’t want to use NixOS even a little bit. I’m thinking more vs mainstream distro package managers.

                                                                    2. 1

                                                                      For all its ease, homebrew only works properly if used by a single user who is also an administrator who only ever installs software through homebrew. And then “works properly” means “install software in a global location as the current user”.

                                                                      1. 1

                                                                        by a single user who is also an administrator

                                                                        So like a laptop owner?

                                                                        1. 1

                                                                          A laptop owner who hasn’t heard that it’s good practice to not have admin privileges on their regular account, maybe.

                                                                      2. 1

                                                                        But even on Linux using python sucks ass, so it’s not a huge disadvantage.

                                                                        Can you elaborate more on this? You create a virtualenv and go from there, everything works.

                                                                        1. 2

                                                                          It used to be worse, when mainstream distros would have either 2.4 or 2.6/2.7 and there wasn’t a lot you could do about it. Now if you’re on python 2, pretty much everyone is 2.6/2.7. Because python 2 isn’t being updated. Joy. Ruby has rvm and other tools to install different ruby versions. Java has a tarball distribution that’s easy to run in place. But with python you’re stuck with whatever your distro has pretty much.

                                                                          And virtualenvs suck ass. Bundler, maven / gradle, etc. all install packages globally and let you exec against arbitrary environments directly (bundle exec, mvn exec, gradle run), without messing with activating and deactivating virtualenvs. Node installs all its modules locally to a directory by default but at least it automatically picks those up. I know there are janky shell hacks to make virtualenvs automatically activate and deactivate with your current working directory, but come on. Janky shell hacks.

                                                                          That and pip just sucks. Whenever I have python dependency issues, I just blow away my venv and rebuild it from scratch. The virtualenv melting pot of files that pip dumps into one directory just blatantly breaks a lot of the time. They’re basically write once. Meanwhile every gem version has its own directory so you can cleanly add, update, and remove gems.

                                                                          Basically the ruby, java, node, etc. all have tooling actually designed to author and deploy real applications. Python never got there for some reason, and still has a ton of second rate trash. The scientific community doesn’t even bother, they use distributions like Anaconda. And Linux distros that depend on python packages handle the dependencies independently in their native package formats. Ruby gets that too, but the native packages are just… gems. And again, since gems are version binned, you can still install different versions of that gem for your own use without breaking anything. Python there is no way to avoid fucking up the system packages without using virtualenvs exclusively.

                                                                          1. 1

                                                                            But with python you’re stuck with whatever your distro has pretty much.

                                                                            I’m afraid you are mistaken: not only do distros ship with 2.7 and 3.5 at the same time (and have for years now), it is usually trivial to install a newer version.

                                                                            let you exec against arbitrary environments directly (bundle exec, mvn exec, gradle run), without messing with activating and deactivating virtualenvs

                                                                            You can also execute from virtualenvs directly.

                                                                            Whenever I have python dependency issues, I just blow away my venv and rebuild it from scratch.

                                                                            I’m not sure how to comment on that :-)

                                                                            1. 1

                                                                              it is usually trivial to install newer version

                                                                              Not my experience? How?

                                                                              1. 1

                                                                                Usually you have packages for all python versions available in some repository.

                                                                2. 2

                                                                  Have they chosen Macs or have they been issued Macs? If I were setting up my development environment today I’d love to go back to Linux, but my employers keep giving me Macs.

                                                                  1. 3

                                                                    Ask for a Linux laptop. We provide both.

                                                                    I personally keep going Mac because I want things like wifi, decent power management, and not having to carefully construct a house of cards special snowflake desktop environment to get a useable workspace.

                                                                    If I used a desktop computer with statically affixed monitors and an Ethernet connection, I’d consider Linux. But Macs are still the premier Linux laptop.

                                                                    1. 1

                                                                      At my workplace every employee is given a Linux desktop and they have to make a special request to get a Mac or Windows laptop (which would be in addition to their Linux desktop).

                                                                  2. 3

                                                                    Let’s be clear though, what this author is advocating is much much worse from an individual liberty perspective than what Microsoft does today.

                                                                    1. 4

                                                                      Do you remember when we all thought Microsoft were evil for bundling their browser and media player? Those were good times.

                                                                  1. 19

                                                                    I’m already having nightmares about opening AMP emails in mutt.

                                                                    1. 8

                                                                      you wait until they announce their partnership with Slack.

                                                                      1. 1

                                                                        Introducing R.Mutt — a fork of Mutt with AMP support and Material Design UI, rewritten in Dart.

                                                                      1. 1

                                                                        As a developer who moved from Linux to the macOS platform, this made me think about how many non-native apps I use as replacements for the Apple version. The obvious ones I’m thinking of:

                                                                        • Alfred instead of Spotlight
                                                                        • iTerm2 instead of Terminal
                                                                        • Dropbox instead of iCloud
                                                                        • Chrome instead of Safari
                                                                        • Gmail instead of Mail
                                                                        • Google Maps instead of Maps
                                                                        • VLC instead of iMovie
                                                                        • Spotify instead of iTunes
                                                                        • Signal instead of Messages

                                                                        &c. This surely isn’t a good trend for Apple to allow to continue.

                                                                        1. 13

                                                                          That’s not what’s meant by “native” in this case. Alfred, iTerm, Dropbox, Chrome, and VLC are native. Spotify is Electron, and I’m not sure about Signal. I’m guessing it’s probably a native app that does most of its UI in a WebView.

                                                                          1. 5

                                                                            Signal for Desktops is Electron.

                                                                            1. 2

                                                                              As it might be useful to describe what is meant by “native”: it means something on a spectrum between “using the platform-supplied libraries and UI widgets” (i.e. Cocoa) and “not a wrapped browser or Electron app”, so it’s not clear whether an application using the Qt framework would be considered “native”. It could be delivered through the App Store and be subject to the sandbox restrictions, so it fits the bill for a “native” app in the original post, but it would also not be using the native platform features which are presumably seen as Apple’s competitive advantage for the purposes of the same post.

                                                                              1. 2

                                                                                I’d call QT native. It doesn’t use the native widgets, but then neither do most applications that are available on multiple platforms.

                                                                                1. 2

                                                                                  It may be native, but it’s not Mac-native in the sense Gruber was talking about. You will find that all three uses of “native” in his article appear as “native Cocoa apps” or “native Mac apps”. He is talking about a quite specific sense of native: apps that integrate seamlessly with all of the MacOS UI conventions (services, system-wide text substitutions, native emoji picker, drag & drop behaviours, proxy icons, and a myriad more). Qt apps do not.

                                                                            2. 5

                                                                              Why is it not a good trend? You are still using a Mac .. they sold you the hardware. Should they care about what apps you run?

                                                                              1. 3

                                                                                Apps with good experiences that aren’t available on other platforms keep users around. Third-party iOS apps do a better job of moving iPhones than anything else Apple does, because people who already have a pile of iOS apps they use generally buy new iPhones.

                                                                                Electron is just the latest in a long series of cross-platform app toolkits, and it has the same problems that every other one has had: look & feel, perceived inefficiency, and for the OS vendor, doesn’t provide a moat.

                                                                                1. 1

                                                                                  Counterpoint, their apps have always been limited and really for people who weren’t willing to learn and use more robust tooling. I mean how many professionals use iMovie.

                                                                                  1. 1

                                                                                    iMovie is a good example. I’m guessing a lot of us prefer VLC.

                                                                                2. 1

                                                                                  It’s good for the end user but not a good trend for their business model, part of which is to have best-in-class apps. Don’t get me wrong, I like having choice and I think they shouldn’t force you into their own app ecosystem.

                                                                              1. 8

                                                                                I ended up being off sick for most of last week, and because I was travelling last week that means I missed an entire sprint for my team. So I’m getting up to speed on what’s been done, and planning the next sprint. It looks like it went well so maybe I should stay away more often :).

                                                                                I’m giving a talk in MCE conf later in the year, and I’m working with the organisers on choosing a talk topic: I have three potentials and would like to understand which fits their programme better.

                                                                                Side project work around the edges: I’m learning Russian from a book. It’s a book on technical Russian for scientists, rather than conversational Russian for going on holiday (which is easier, because almost all technical words are Greek or Latin anyway, and you don’t need to be able to pronounce things well). I’m fascinated by the history of computing research, but being Western my education in this area is “the Russians were late to the party and copied Western computers”, and I don’t believe that that’s true. My understanding of Soviet cybernetics, for example, is that it’s very different from American and European cybernetics, because the social and cultural background informing the idea of control systems is very different. Anyway, that’s a long way round of saying that I want to be able to read primary literature in 20th century computing in the Soviet Union and draw my own conclusions.

                                                                                1. 1

                                                                                  There clearly is a spectrum of possible ways we could think about how to program the problems we’re trying to solve. As we have seen in the course of the last 100 years or so, some paradigms have stood out more than others, while others have had their good parts taken from them.

                                                                                  I would argue that the very early paradigms stood out because they were easy to understand and iterate on. The last 20 years or so has shown that paradigms had to shift to accommodate to scale, i.e. when the Internet started to take off, developers had to go from handling a dozen users to handling upwards of a few billion users. I think that the “scale paradigm shift” is coming to an end since we’ve got many services on the Internet which can accommodate to massive scale.

                                                                                  1. 3

                                                                                    Be aware of an efficient markets fallacy or purposeful evolution fallacy here. Our current paradigm makes it possible to build services at internet scale, and there are a small handful of successful examples. This is not the same as having converged on a paradigm for building at scale, nor is it the same as having found the best or even a good paradigm for building at scale.

                                                                                  1. 1

                                                                                    How does “programming as taxonomies” compare and contrast with OOP?

                                                                                    1. 2

                                                                                      Well, it depends on what you consider OOP to be. I suspect the author was referring to Smalltalk at least a little there, but there are other possibilities.

                                                                                      1. 3

                                                                                        I’ve actually started writing a full response, and one of the ideas I’m trying to explore is how a lot of what we think of as “normal” OOP is just what survived the method wars, and before that we had much more diversity in our object taxonomies.

                                                                                        1. 1

                                                                                          is there a way I can already bookmark this post before you’ve written it please? I’m excited!

                                                                                          1. 3
                                                                                        2. 1

                                                                                          I think so. This idea of restrictive paradigms (of research and thought, rather than ‘programming paradigms’) seems commensurate with Alan Kay’s description of the pink and blue planes. The pink plane represents advances made within the current paradigm, the blue plane is accessible by asking incommensurable questions.

                                                                                        3. 1

                                                                                          I always start with a big list of programming paradigms when wondering about this stuff. Aside from fun discoveries, such lists keep us from building categories or explanations that are too narrow for the field as a whole.

                                                                                        1. 2

                                                                                          Any other crustaceans at FOSDEM who want to meet up?

                                                                                          1. 4

                                                                                            Nothing! I’m at dotSwift now, and FOSDEM later. I’m working on nothing.

                                                                                            1. 5

                                                                                              next week I’m out at a couple of conferences, so finding out who’s there (are you at dotSwift Paris or FOSDEM in Brussels?), highlighting the FOSDEM schedule and making sure my train tickets and hotel reservations are correct.

                                                                                              I’ve been slowly progressing my HURD side project, though my work on it so far this week has been migrating the VM from my home PC to a laptop that I’ll take with me to FOSDEM.

                                                                                              Work: some people management and team workflow things, doing job interviews, and then the odd bit of Javascript around the edges.

                                                                                              Music: one of my bands is in a conference on Saturday so plenty of practice!

                                                                                              1. 3

                                                                                                Going myself to FOSDEM! First time there, I still have a ton of homework checking the talks I am going to attend, which I’m just constantly delaying, probably because it is overwhelming as hell.

                                                                                                1. 1

                                                                                                  it is. The mobile apps have favourites and alarms which make it easier to navigate than reading the whole thing, but you still have some work to do to star the interesting sessions :)

                                                                                                2. 2

                                                                                                  I’m also heading to FOSDEM! My first time, and also the first big event I’ve been to, so I’m not too sure what to expect.

                                                                                                  1. 1

                                                                                                    I found it overwhelming when I first went, there’s so much on. The talks will be videoed so if you have a choice meet people at the booths or dev rooms.

                                                                                                1. 4

                                                                                                  My workplace has finally formally decided that managers such as myself are not expected to also be senior technical staff. This is in every way the right model, but now I have a bunch of technical obligations that I need to schedule handing off to others in our group, so that’s sort of a PITA, because I can’t just dump stuff context-free, but at the same time, there are other demands on my time. Eh. So it goes, in middle management. Some post mortems to run/attend, some reviews to write, some organizational chaos to overcome.

                                                                                                  Outside of work, I need to find some time to rebuild part of the big computer; my patience with fiddly PC non-sense is growing thin, and I am tempted to just part the thing out and get a dedicated storage box or something. But this too will pass, and I’ll miss having a big powerful desktop computer in six months. So I’ll try to wait it out.

                                                                                                  Otherwise, baby proofing now that #2 Daughter (9mo) is up on her feet and taking preliminary steps.

                                                                                                  1. 3

                                                                                                    my model for that hand-off is that I still do technical work, both during offloading and to keep my hand in (there are plenty of programmers who respect capable programmers more than capable non-programmers), but that I should not be on the critical path. When I delegate technical leadership, I make it clear that I’m available for reverse-delegating where I can help out so that the new technical lead feels less like they’ve been dropped in it.

                                                                                                  1. 3

                                                                                                    The answer to “JavaScript isn’t as secure as I’d like it to be” sure as hell isn’t “replace it with a bunch of new, more general VMs”

                                                                                                    1. 3

                                                                                                      I think the subtlety is that Javascript’s shape makes people feel like they can “cheat” on the virtualization, and aren’t super careful. For example, Spectre is an issue because the JS ultimately is running within the browser process instead of in its own isolated environment.

                                                                                                      By going towards more general VMs you acknowledge “hey, anything can happen in this box, but it better stay there”. The mentality defaults to “anything can happen”, and you need explicit ways out.

                                                                                                      1. 3

                                                                                                        Indeed not, and the author says that in her post:

                                                                                                        But, it’s not JavaScript’s fault. I have my beef with Eich but I daresay things would be just as bad as they are now if Java had become the language of the web: behind every URL would lurk arbitrarily large applications with opaque rights to exploit your machinery, powered by advertising and malware. We can blame JS for being entrenched demoware but the ethics of the browser are distinct.

                                                                                                        She’s considering two things: one is that JavaScript isn’t great, and the other is that browser sandboxing isn’t great.

                                                                                                      1. 5

                                                                                                        I got enough motivation on my big-deal side project to get it started, got it started, then hit an issue: I’m working in a HURD* VM and don’t have a browser set up in it, so I can’t add my ssh key to GitHub to publish my code. The host OS is Windows so I don’t really understand how I might file-share my key between the two. Fixing it is just a question of apt-get, but I decided at the point that I hit that problem it was a good place to park it, and will be dealing with that this week.

                                                                                                        Work: we’ve got two distinct products that solve two similar problems, and the goal in the short term is to turn those into a single product where we can sell one or both features. I’ve designed the capability configuration and explored its impact on the architecture, it’s time to put that into practice. Additionally one of the teams I work with has failed to meet its delivery commitments for the last few sprints, and I’m investigating why with the people, putting changes in place, and communicating those changes around. And I seem to be involved in a few job interviews we’re hiring!

                                                                                                        *why? I want to use its model of message-sending for IPC. My thing is building an Erlang-style message passing model, but without doing all the threading in user space on a VM that is only supported by a couple of languages. If you can build your message-send function in C, then it can be used by anyone whose language includes a C FFI which is nearly everybody. I think, but do not know yet, that I can do the same on Protocol Buffers, and will investigate that once I’ve used Mach (which I understand better, i.e. at all) to express a solution to the problem.

                                                                                                        1. 3

                                                                                                          I fixed the paragraph 1 issue by installing lynx in my HURD VM and adding the SSH key in the github UI.