There’s another approach that I think is under-utilized in the industry - use real dependencies, but run your tests against both the change and the last-known-good commit at the same time[^1]. If the LKG commit fails, then you can chalk it up to an external dependency failing. This obviously doesn’t fully exonerate the change, but in many scenarios that might be OK (or at least significantly improves debug times). I would love to see a test framework with this built in.
[^1]: There are a few tricky things to consider (resource isolation etc.) but it’s not impossible.
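A minimal sketch of how such a framework might classify results. The `run_suite` helper (git worktrees plus pytest) is an assumption for illustration, not a prescribed setup:

```python
import subprocess
import tempfile

def run_suite(commit: str) -> bool:
    """Hypothetical runner: check out `commit` into a throwaway git
    worktree and run the test suite there, returning True on success."""
    with tempfile.TemporaryDirectory() as wt:
        subprocess.run(["git", "worktree", "add", "--detach", wt, commit], check=True)
        try:
            return subprocess.run(["pytest"], cwd=wt).returncode == 0
        finally:
            subprocess.run(["git", "worktree", "remove", "--force", wt])

def classify(change_passed: bool, lkg_passed: bool) -> str:
    """Attribute a failure by comparing the change against last-known-good."""
    if change_passed:
        return "pass"
    if not lkg_passed:
        # LKG fails too: an external dependency is the likely culprit.
        return "external-failure"
    # Only the change fails: the change itself is the prime suspect.
    return "likely-regression"
```

Running both suites in parallel (with the resource isolation the footnote mentions) would keep the extra wall-clock cost close to zero.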
I’m glad this was posted, because John Gruber has such different software values from mine. He seems to think of app development as akin to making films (he even has a Kubrick quote), where meticulousness, look-and-feel, and polish matter much more than utility. He judges other pieces of software the way a filmmaker judges other films – he’s looking for artistry. But I view software as a utility first and artwork second. And especially so for software running daily on my pocket computer (smartphone).
Meanwhile, many of my core software values don’t get a mention from him. Like the fact that there is way more open source software for Android than for iOS, and this goes down to every layer. Or the fact that Android’s developer toolchain is entirely cross-platform, allowing developers to tweak and modify software regardless of what desktop operating system they use.
I love Apple’s design values. When I have my design cap on, there’s a flow of admiration in the direction of macOS and iOS. And I even participate in the Apple ecosystem a little, with a Mac Mini & iPad. But my daily developer workstation is Linux, and my daily phone is Android. Thinkpad X1C and Pixel 7, because I do care about well-designed utility.
And both have f/oss software, programmability, and utility as their core values, aligned with mine. Thus, for me, and for many like me, that’s the show.
Now… when I’m recommending unfussy hardware/software for my non-techie friends & family? Sure, it’s the MacBook Air and iPhone, then. But I’m really glad a choice exists on the market for people like me, and I’m not sure what value there is in bashing the other side just because it doesn’t share your values.
The conclusion you don’t state, and perhaps don’t draw, is “the iPhone apps that focus on look-and-feel are less functional than the Android apps that don’t”. I certainly don’t draw that conclusion.
Look and feel matters for functionality. Those of you who haven’t read Jef Raskin’s book should read it, particularly chapters 2-4. One example: what percentage of touches/gestures hit and act on an item that wasn’t there yet when the user’s brain decided to act? This is easily measured with user testing, videos and questions, and one of the chief ways to reduce that number is to add slick little animations and transitions, so that touch targets don’t appear suddenly, but rather slide in, grow, or shrink in ways the brain can track.
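That measurement can be sketched from instrumented touch logs; the 200 ms motor-planning threshold and the event format here are illustrative assumptions, to be calibrated from your own user testing:

```python
REACTION_S = 0.2  # assumed motor-planning latency; tune from real user testing

def premature_touch_fraction(events):
    """Fraction of touches whose target appeared *after* the user had
    already committed to the gesture, i.e. the target showed up less
    than REACTION_S before the touch landed.
    `events` is a list of (touch_time, target_appear_time) in seconds."""
    if not events:
        return 0.0
    premature = sum(1 for touch, appear in events if touch - appear < REACTION_S)
    return premature / len(events)
```

A gentle slide-in effectively increases `touch - appear` for the target’s final position, which is one way to read why such transitions lower the error rate.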
Yes, I don’t draw that conclusion either. I think iOS and macOS apps are perfectly functional – and sometimes more so than their Android or Linux counterparts. But I don’t think John Gruber was treating good design as being in service of function. He was treating good design as a showcase of craft and artistry. (Perhaps even of commercial ambition, as he derides the Android Mastodon projects as “hobby projects” while praising the iOS “commercial” ones.)
100% agree with you that Jef Raskin has some great thoughts on the utility of good design (many of which could benefit the f/oss world). There was some interesting work in this direction a few years back in the Linux desktop world by the (now defunct) non-profit Yorba.
Gruber is solidly from the background of Mac indie apps like Panic’s stuff, which place a premium on design and functionality but are also vehicles for sustaining small businesses.
Try sending him mail. Ask “is a low error frequency a sign of good craftsmanship?”
I can guess his answer.
I’m going to post my own answer.
What we do is follow rules of thumb. We don’t reason from first principles, even when those first principles are important to us.
Our real goal is to build applications and services that serve users well, which includes being low on frustration. Being low on errors and frustration is… being pleasant to use, which ends up coming down to some rules of thumb about animations and general slickness.
His very first example, Ice Cubes, is open source.
You may be interested in the work of Richard Sapper. He was the original designer of the first black ThinkPad, the 700C. He kind of embodies an alternative to the Dieter Rams school of design (which Apple follows closely), one where every device is very solutions-oriented.
I’ve failed to understand what exactly Buck is - is it an alternative to Meson/Bazel?
It’s worth reading the Buck2 explainer. It’s a really promising exploration of monadic build systems (à la Shake or llbuild2) at Bazel scale.
Facebook hired a bunch of Googlers who liked Blaze (Google’s internal version of Bazel). Because Blaze wasn’t open source at the time, they wrote a clone called Buck.
I was put off Bazel by the fact that it depends on both Java and Python. A Rust reimplementation that produces a statically linked binary for the build tool sounds attractive.
The subset of Python supported by Bazel (Starlark) does not depend on CPython, it’s interpreted by Bazel itself.
A Python interpreter was used historically, but I believe the migration happened before Bazel was open-sourced.
The installation instructions say that it depends on:
I hadn’t realized that in Smalltalk-72 each object parsed incoming messages. Every object had its own syntax. Ingalls cites performance as an issue then, but perhaps the idea is worth revisiting.
Also, here’s a link to Schorre’s META II paper that he mentions: https://doi.org/10.1145/800257.808896
Smalltalk-72 - in contrast to all later Smalltalks - indeed had true message passing. That’s what Kay is usually referring to; it was his language design. The Smalltalk we know today, starting with Smalltalk-74, was invented by Ingalls and had little in common with Smalltalk-72. Smalltalk-74 and later have virtual methods instead of message passing; but performance was still an issue.
Each object having its own syntax turned out to be an almost bigger problem than performance. And I am not sure about the “almost”.
Hey, fun to see Sammy’s CPU on here. If anyone wants to hop on and look at it, connect to mc.openredstone.org with a Minecraft 1.17.1 client. Run /build then /warp CHUNGUS2 to warp to it.
The Open Redstone Engineers server (https://openredstone.org/) is a community of nerds who use Minecraft as our digital logic simulator of choice, and we spend most of our time on there designing and implementing CPUs. I and a few others started it in 2013 and it’s still going strong (though other people are at the helm these days). This CPU was made by a member called sammyuri who’s online every now and then.
I didn’t expect to see ORE show up on lobste.rs.
after updating java, right? :)
updating java won’t help you much
(OT) Given your experience, what do you think Minecraft should change to improve performance of in-game CPUs like this?
I don’t think there’s a lot Minecraft proper can or should do. The main problem is that everything in the game takes 1/10th of a second (what we call a redstone tick) to update – so that means a CPU like this has 10 ticks per clock cycle. (For reference, something like an AND gate requires two ticks, so 2/10ths of a second.) There are things Minecraft could do to support more complex circuits without the tick rate dropping further, but ORE already has a lot of server-side plug-ins to improve performance in that respect.
However, another member of ORE, StackDoubleFlow, has been working on a re-implementation of the Minecraft server software: https://github.com/MCHPR/MCHPRS – This reimplementation is written in Rust, and the whole point is to execute redstone circuits at insane clock rates. I would guess that something like this 1 Hz CPU could run at tens or hundreds of kHz on MCHPRS.
I think MCHPRS is the next step in cool Minecraft CPUs. I’m working on a CPU with a 16-bit address space in order to be able to write some seriously complex, interesting programs. Someone else on ORE is working on a complete RISC-V implementation, including all the ISA extensions necessary to get Linux running on it. The promise of being able to actually do something with the machines is what has gotten me back into making redstone CPUs lately.
(Also, maybe, eventually, there may come a re-implementation of the client. I think we may eventually have re-implemented ourselves out of Microsoft’s grip. Watch this space.)
I wasn’t asked, but have you seen PaperMC? https://github.com/PaperMC/Paper – it does a lot right.
The enterprise community is uniting around Bazel right now.
Citation needed! No seriously, which enterprise community is that supposed to be?
It means infra/tools ops people migrating between the companies and bringing their favorite tools with them.
Which has nothing to do with whether it’s a good choice or not. Enterprise companies are notorious for copying each other’s approaches, so whichever tool gets popular at any point tends to stay entrenched for some years. “Google does it” is always going to win against solitary voices saying “but it’s complicated, and there’s a better way”.
I work in an enterprise, a pretty large one, and all I see is Maven and some Gradle. No Bazel in sight. I’ve also been following the project a bit for years, but never met anyone who used it (outside Google, but they have the original, called Blaze, IIRC).
Hey, let me make a list of enterprises using Bazel and share it with Lobste.rs.
There is definitely an upward trend in Bazel adoption over the last 3 years, so much so that I don’t think I can adequately drop all the names in one go. But if you must know, here is a small list of big names who have publicly said they are adopting Bazel (not just for 1-2 projects):
Google
Twitter (Java, Scala)
VMware (Erlang, C/C++)
Adobe Cloud (Go / Docker / K8s)
Tinder (Kotlin / Java)
Pinterest (Java, Go, Python, NodeJS)
Dropbox (C, Python, Go)
Huawei (C/C++)
Lyft (iOS)
Uber (Golang)
Grab (Android)
Stripe (Ruby, Python)
Apple (Go, Java)
Robinhood
Wix
SpaceX
TwoSigma
Etsy
NVIDIA (C/C++, CUDA, Go, Scala)
So you can see that a large portion of FAANGMULA and equivalent unicorns are adopting Bazel. Among them, Facebook is rebuilding Buck as Buck 2.0 and has snatched some key hires from the Bazel/Build community in the past few years. Microsoft’s MSBuild is also adopting Bazel’s RBE spec. There are also bigger companies / startups experimenting with Bazel on 1 or 2 smaller projects:
Redhat
Gitlab
Brave
BMW
LinkedIn
Tweag
There are a number of startups / consulting companies in this space, some founded by members of the Blaze team who left Google, addressing the upcoming market demand:
BuildBuddy
EngFlow
FlareBuild
Aspect Dev
OasisDigital
The above is not an exhaustive list; it’s just what was in my head after 15 minutes of writing this post.
https://github.com/bazelbuild/bazel/search?q=blaze always makes me laugh.
Google people are trying to push Bazel into everything that they touch. It’s a fine build system if you can guarantee that you will only ever want to ship things on one of the three or four platforms that Google thinks are important. Since ‘Enterprise’ is often a synonym for ‘expensive lock-in’, I don’t have any reason to doubt the original claim.
Do you mean Google? Or anyone else?
(I’ve read that Bazel has been designed especially for building Google projects, on Google architecture, using Google style of source management)
Square.
As well as Twitter, Dropbox & Pinterest (all to varying degrees).
I’m an ex-developer of Factorio, and I have to ask: have you actually used this on a real candidate? Not sure how I would respond :D
I specifically remember great anger over my building a left hand drive train system during play testing (I’m Irish, we use left hand drive, most other devs are from mainland Europe, so use right hand drive).
(OT but…) Thank you (and the rest of Wube) for a really fantastic game! I and so many people I know have gotten so much value out of Factorio.
Trains and subways (but not trams) still drive on the left in Sweden (Sweden went over to RHS road traffic in 1967).
As in France because, IIRC, Britain built the train system there.
I’m French, and my train systems are also left hand drive, for two reasons:
France’s trains also use left hand drive (even though the cars use right hand drive).
It places the signals on the insides of the tracks.
The first reason is admittedly subjective, but I believe placing signals on the inside of the tracks has real, technical value in some cases. And it’s more elegant. In any case, I still screw it up sometimes and forget that my own system is left hand drive…
For what it’s worth, I believe that Elixir’s Phoenix framework uses a similar technique for LiveView.
The differences are fairly important - Phoenix has long-lived stateful processes on the server and only transmits minimal state diffs (not HTML) to be interpreted on the client.
I am 100% over versioning. I have never seen an implementation that doesn’t suck. It’s miserable. Something is fundamentally wrong with the whole model, whatever methodology you use for tagging won’t fix that.
There could be different ways:
Google runs decently internally by building everything from HEAD. It’s not easy, and it requires a monorepo, but it does work. Could this work in the real world? Probably not. But what if you say “GitHub is a monorepo”? What if, when a dependency author uploads a breaking change, GitHub can say who they broke and how it broke, prompt the dependency author to document the remediation for that pattern of breakage, and just let people be broken until they upgrade? Maybe this is pushed to per-language registries like crates.io or the Go proxy.
Unison tries to sidestep versioning entirely at the language level.
Stop trying to meld dependencies together across packages. Every package and its dependencies are treated separately, and binaries just include every version that is depended on. Hard drive size is a trivial concern, and when you’re building binaries into Docker containers, the image size almost certainly dominates the binary size anyway.
I can’t wait for some of the ideas from Unison to permeate into more mainstream ecosystems. Lots of great ideas (globally accessible CAS, AST storage etc.) stuck behind a Haskell-like syntax.
Compare-And-Swap? Content-Aware Scaling? Close Air Support? Computer Algebra System? Content-Addressable Storage?
Content-Addressable Storage. Check it out! https://www.unisonweb.org/
I sort of agree because I don’t think there’s a perfect versioning system, but I think semver2 may be as good as it gets.
I like it because it’s more functional than the marketing-driven “versions don’t matter, we’ll just call it version 2.0 to sell more”, and all the alternatives sink too much time into perfecting versioning systems for diminishing returns.
I use it just so we have something; it saves time deciding what to do, and it helps denote drafts or breaking changes. I use it even for stupid stuff like the “enterprise policy on bathroom breaks.” If it’s version 0.4.77 then it’s still in progress and could change any time. If it’s 1.1.16 then it’s probably been approved by someone. If I use 1.1.16 and then see version 2.0, it probably means I should read it, because now it might say I can only go to the bathroom on even hours, or something else that disrupts or may disrupt me.
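The SemVer contract being relied on here can be made concrete with a small sketch (simplified: pre-release and build-metadata tags are ignored):

```python
def parse(version: str):
    """'1.1.16' -> (1, 1, 16); assumes a plain major.minor.patch string."""
    major, minor, patch = (int(p) for p in version.split("."))
    return major, minor, patch

def upgrade_kind(old: str, new: str) -> str:
    """What a SemVer bump from `old` to `new` promises about the change."""
    o, n = parse(old), parse(new)
    if n[0] != o[0]:
        return "breaking"   # major bump: read before you rely on it
    if o[0] == 0:
        return "unstable"   # pre-1.0: anything may change at any time
    if n[1] != o[1]:
        return "feature"    # minor bump: additive, backwards compatible
    return "fix"            # patch bump: bug fixes only
```

Which is exactly the bathroom-policy reading above: 0.4.x is a draft, 1.1.16 → 1.1.17 is a correction, and 2.0 means go read the whole thing again.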
Ron Minnich’s interview with On The Metal is great too: https://oxide.computer/blog/on-the-metal-3-ron-minnich/
I’ve been addicted to these podcasts lately. I started with the Jonathan Blow episode, but I’ve been working my way up from the bottom. Learning about both computing and the people involved has been really informative and entertaining, and that’s coming from someone who has never subscribed to a podcast series before. Bryan Cantrill is a surprisingly adept interviewer.
Agreed! Great guests, great interviewers. Don’t miss out on the show notes, they’re usually packed with a lot more supporting material.
I’ve been telling my family for several years about the impending collapse of the gigantic house of cards of modern software based on what I’ve seen. They don’t believe me.
The #1 thing we need to do is apprentice- and case-study-based training of developers. Start training developers using case studies of real programs, instead of just abstract ideas like “encapsulation” and having them write from scratch without ever having seen what production software looks like. Literate programming books such as Pharr and Humphreys’ amazing “Physically Based Rendering” are good steps in this direction.
http://www.pbr-book.org for those interested in following the referral.
Could you elaborate on what you mean by “collapse”? How would you expect the experience of writing or using software to be different afterwards?
I really wish my college days had been spent more on building and less on learning about polymorphism and encapsulation.
That said, the author misses the point that we are in the middle of a huge boom where entire industries are just now looking around and saying, “How can we make our business more efficient with software?” Of course software is ideally supposed to ship bug free, of course Catalina shouldn’t have bugs, but at the end of the day we live in a world where user growth and utility drive value, not code quality.
Century-old industries are just now coming online and trying to integrate software into their business models, so they are likely going to make low-quality software that solves immediate needs. Not all companies view software as a revenue center.
For most companies in the world software is just a way to lower costs. Not a money maker. Many of us are biased by the fact we work in tech and work on software products/projects that are money makers. Some of the companies in the post are giant tech companies.
Imagine the total lines of code count at any of the companies mentioned. It’s easily in the hundreds of millions or even billions. Of course there are examples of bad code!
I wonder if folks would have this same reaction if a developer license was $10 a year? Is it a purely financial decision or is it just that they’re being asked to quantify their hobby at all?
It’s not just the developer license; you have to:
Own a Mac (recent enough to run a supported OS) in order to run the signing tools,
Spend your hobby time packaging your work for Apple to review (rather than just distributing a zipfile which extracts the app, you need the whole signing workflow on each release),
Send Apple your work for review, and be subject to arbitrary rejection (for the App Store).
To this first-world-living, employed, healthy programmer, the $100/year is an irrelevance. Stealing my hobby time to make me jump through banal hoops is not (to be clear: I would be more OK with doing this if it weren’t such a hassle; signature verification isn’t without its merits).
Yes. I haven’t owned a Mac since 2011, but I’m getting requests to update a small open source project I haven’t really worked on since 2012.
It’s no great tragedy that I probably won’t ever update it for Catalina, but it is a bit of a shame.
I have quite a few apps that will stop working in Catalina. I just don’t care enough; this MacBook Air will go in the collection room when 10.14 is no longer supported.