I haven’t followed Swift much recently, but the original Swift was simple because it punted on all of the hard problems. It had no mechanism for error handling (there were some small mods towards NSError). It had no support for concurrency and the everything-is-shared-mutable-state model made that really hard to fix (the structured concurrency and actor extensions are trying). It had no support for shared libraries (Swift’s shared library support now is really nicely designed and it’s the only modern language that I think has addressed this problem well, but that inevitably came with complexity).
For macOS development, Objective-C++ is actually a very nice language: C++ for local code that is specialised over strong types, Objective-C for late-bound interfaces that are exposed across module boundaries.
I have never used Objective-C++, but from afar it sounds horrifying. Take a monstrously, ineffably complex programming language and another quite complex programming language and mash them together into an amalgamation. Can anybody hope to make sense of it?
Probably people who have actually used it.
Speaking as someone who used it for years, it actually works quite well. Obj-C is not very complex, and its OO stuff is pretty separable from the C part. Obj-C++ extends the C part.
I’ll also point out that modern Objective-C’s ARC (automatic reference counting) composes very well with C++’s object lifetime rules. In pure Objective-C with ARC, putting an object pointer such as NSString* inside a plain C struct is not allowed: the compiler needs to be able to reason about the pointer’s initialisation and invalidation, and the semantics of C structs don’t allow for that. This can make implementing custom data structures tricky: you either have to do it all in the context of Objective-C classes (which has certain overheads), use indirection, or turn off ARC in the relevant source files and do manual reference counting.
Such a struct will compile as Objective-C++, however, because the ARC pointer is treated as having the usual set of C++ constructors, destructor, assignment operator, and so on - it’s not a POD (plain old data) type. The struct therefore also gets implicit constructor/destructor/operator implementations when they are not explicitly written.
You can therefore shove pointers to Objective-C objects into all sorts of C++ data structures that were implemented without any special Objective-C support, including the STL. It all composes rather nicely.
(A significant proportion of my work in recent years has been working on macOS device drivers and related apps, daemons/agents, etc.; I’ve mostly been using Objective-C++ on that, although I’ve recently introduced Rust in one part of such a project. My limited contact with Swift has been exceedingly frustrating, so I’ve avoided it where possible; it never appealed to me in the first place due to the reasons David mentioned, and the practicalities around extremely poor forward- and backwards-compatibility were a nightmare to deal with on a project where it was forced upon me.)
ARC certainly makes this easy, though prior to ARC I implemented a C++ smart pointer class for Objective-C references that did the relevant retain and release operations. That meant this was already possible in Objective-C++ without ARC; it just became nicer because you didn’t need to use the smart pointer.
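The idea can be sketched in plain C++ with the Objective-C runtime stubbed out. The MockObjCObject type and the free retain/release functions below are stand-ins I invented so the mechanics are visible; a real version would call the runtime’s retain and release on the wrapped object.

```cpp
#include <utility>

// Stand-in for an Objective-C object; the real runtime keeps the retain
// count for you. This mock exists only to make the mechanics observable.
struct MockObjCObject {
    int refcount = 1;
};

inline void retain(MockObjCObject* o)  { if (o) ++o->refcount; }
inline void release(MockObjCObject* o) { if (o && --o->refcount == 0) delete o; }

// The smart pointer: copying retains, destruction releases, and assignment
// does both via copy-and-swap. This is what ARC later did automatically.
class ObjCPtr {
public:
    ObjCPtr() = default;
    explicit ObjCPtr(MockObjCObject* raw) : ptr_(raw) {}  // adopts; no extra retain
    ObjCPtr(const ObjCPtr& other) : ptr_(other.ptr_) { retain(ptr_); }
    ObjCPtr& operator=(ObjCPtr other) { std::swap(ptr_, other.ptr_); return *this; }
    ~ObjCPtr() { release(ptr_); }
    MockObjCObject* get() const { return ptr_; }
private:
    MockObjCObject* ptr_ = nullptr;
};
```

Because copies and destruction balance retain/release automatically, such a wrapper can safely sit inside std::vector or std::map, which is exactly the kind of composition described above.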
Definitely, you weren’t the only one to implement such a smart pointer class. I guess my point was more that modern Objective-C actually composes better with C++ libraries than it does with C libraries and data structures.
This made me curious. What was your use-case? iOS apps? Mac apps? Something else?
I also used it for years and we made Mac apps that spoke to hardware devices through the IOKit kernel driver API (which is in C++). It was indeed quite nice.
Mac apps, libraries for use in Mac/iOS apps.
There is a non-trivial amount of ObjC++ in Firefox, too, or at least there was last time I checked. For TenFourFox I used ObjC++ as glue code to connect up libraries.
To add to what others have said: a lot of the pain in C++ comes from trying to do things that are easy in Objective-C, and vice versa. With C++, it’s easy to create rich types with no run-time overhead, but they create tight coupling. With Objective-C, you have late-bound interfaces everywhere, but avoiding dynamic dispatch is very hard or hacky. The combination means that you can completely avoid things like raw C pointers: you can use C++ collections inside a module and Objective-C ones across the boundaries.
You should give Obj-C a try, I think! It’s a surprisingly thin layer on top of C, giving a lot of bang for the buck when writing dynamic (but fast!) programs. It’s quite unique in that you have two extremes: C on the one hand, and a fully dynamic OO runtime on the other (you can determine the implementation at runtime). The overall syntax is outdated and weird (Smalltalk influence), but it is still unmatched in that niche.
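What “determine the implementation at runtime” buys you can be sketched, very loosely, in C++. This is an analogy I made up, not how the Objective-C runtime is implemented: every call goes through a selector lookup at call time, so implementations can be added or swapped while the program runs.

```cpp
#include <functional>
#include <map>
#include <string>

// Loose analogy for dynamic dispatch: the object carries a selector -> function
// table that is consulted on every send, so the bound implementation is a
// run-time property, not a compile-time one.
struct DynamicObject {
    std::map<std::string, std::function<std::string()>> methods;

    // "Message send": look the selector up at call time.
    std::string send(const std::string& selector) const {
        auto it = methods.find(selector);
        if (it == methods.end()) return "unrecognized selector: " + selector;
        return it->second();
    }
};
```

In real Objective-C the per-class method tables and objc_msgSend play this role, and mechanisms like categories and method swizzling mutate them at run time.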
It’s not bad in practice. The use case is, you need Objective-C system frameworks but you can’t do without particular C++ libraries. You still spend the bulk of application code in one language.
I had a game that used Bullet physics in this way. I migrated most of the code to Swift after it was introduced, but I kept some Objective-C++ in order to keep Bullet. These days Swift has direct C++ interop, both for library use and for gradual migration of C++ projects to Swift.
I worked on making an iOS app at one point, and I found that, thinking in Lisp-like patterns, it seemed to get out of my way when I wanted it to. But that is a beginner and greenfield perspective, for sure.
I don’t think it sounds too bad, but I haven’t used it myself.
My understanding is that it’s just the extra OOP bits from Objective-C overlaid on C++, similar to how they were overlaid on C in the first place. Basically just a second, independent object system. I understand why people wouldn’t like that, but it doesn’t sound too different from C++/CLI or using C++ with JNI.
Can you tell us more about why Swift’s shared library support is well-designed?
With the caveat that I’ve read their design docs, but not actually used it in anger:
They make a clear distinction between ABI-stable and ABI-unstable shapes of structures. Within a library, there are no ABI guarantees. At a library boundary, you have a choice whether you want to sacrifice some performance for the ability to change a layout later, or sacrifice flexibility for performance. This is a per-structure choice. Depending on the choice that you make, the compiler either lowers to something similar to Objective-C non-fragile ivars, or C struct fields.
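The difference between the two lowerings can be sketched in plain C++ (the names are mine; Swift and Objective-C generate the equivalent machinery for you). A fragile access hard-codes the field offset into the client at compile time, while a non-fragile access reads an offset the library exports, so the library may reorder or grow the struct without recompiling its clients.

```cpp
#include <cstddef>
#include <cstring>

// The library's current layout; under the non-fragile scheme it is free to
// change in a later version.
struct Widget {
    double weight;
    int id;
};

// Exported by the library and read by clients at run time, instead of each
// client baking offsetof(Widget, id) into its own code at compile time.
const std::size_t widget_id_offset = offsetof(Widget, id);

// Client-side accessor compiled against the offset variable, not the layout.
int widget_id(const void* w) {
    int out;
    std::memcpy(&out, static_cast<const char*>(w) + widget_id_offset, sizeof out);
    return out;
}
```

The cost is an extra indirection on every access, which is exactly the performance-versus-flexibility trade-off described above.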
They actually have a language notion of a library boundary. This means that generics can be statically reified within a library, but fall back to dynamic dispatch across library boundaries. Contrast this with C++ where templates either have to live in headers (and then end up copied in every compilation unit, including the implementation, and it’s a violation of the one-definition rule to link two libraries that use different versions of the same template) or they are private to a library. The Swift model gracefully falls back. Things may be faster inside a library, but they still work from outside, and the ABI doesn’t leak implementation details of the generics, only their interfaces.
Wonderful explanation, thank you!
An overview from Rust folks: https://faultlore.com/blah/swift-abi/
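The C++ half of that contrast can be sketched as follows (all names invented): a header template is reified in every client translation unit and leaks its implementation into the interface, while a virtual interface is the boundary-friendly shape that dispatches dynamically, roughly the form Swift falls back to automatically at a library boundary.

```cpp
#include <string>

// Option 1: a template in a header. Every client reifies its own copy at
// compile time; changing the body can violate the one-definition rule across
// already-built clients.
template <typename T>
std::string describe(const T& value) {
    return "size: " + std::to_string(value.size_bytes());
}

// Option 2: a late-bound interface. Clients compile against the vtable only,
// so the implementation can change behind the library boundary.
struct Describable {
    virtual ~Describable() = default;
    virtual int size_bytes() const = 0;
};

std::string describe_dynamic(const Describable& value) {
    return "size: " + std::to_string(value.size_bytes());
}

// A client type usable with both shapes.
struct Box final : Describable {
    int size_bytes() const override { return 16; }
};
```

In C++ the programmer must pick one shape up front; the point above is that Swift lets the compiler pick per call site, fast inside the library and late-bound across it.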
but the original Swift was simple because it punted on all of the hard problems
Hmm…I’d say it was already incredibly complicated despite punting on a lot of hard problems and largely because it tried to codify “solutions” to non-problems into the language. Which never works, because reality, even code reality, is way too messy for that.
As an example, I wrote about the mess that is initialization back in 2014, right after Swift appeared. There was some pushback from a prominent member of the Swift team saying that my goal of simplicity just wasn’t compatible with some of the goals they had. Hmm….
There was also a great rant by a prominent member of the community about Swift being just a mess of special cases upon special cases. I think he left the Apple ecosystem, and he’s not the only one. Alas I can’t find it and I don’t remember the name.
Anyway, I predicted back then that because they had so many language features for initialization it would never actually work out and there would be a renaissance of builder and factory method patterns and there would be even more language features for initialization. Five years later: hello SwiftUI! :-)
So the failure of Swift now isn’t surprising, the trajectory was already set in stone the day it launched and there wasn’t really much one could have done about it afterward…much less so since the same faulty reasoning that led to the initial problems was still present and guided later evolution.
Anyway, I predicted back then that because they had so many language features for initialization it would never actually work out and there would be a renaissance of builder and factory method patterns and there would be even more language features for initialization. Five years later: hello SwiftUI! :-)
I think this is an instance of correlation not being causation? My understanding is that the actual cause of SwiftUI is the successful design of Flutter (which gave rise to both SwiftUI and Kotlin Compose), and it is relatively orthogonal to language machinery.
Case in point: Kotlin’s initialization story is much tamer than Swift’s (as it doesn’t try to, and doesn’t need to, prove initialization safety statically), but it also converged on essentially the same design (or rather, vice versa; IIRC Kotlin’s work in the area predates Swift’s).
Not to disagree with your wider point on constructors, which I agree with; just to point out that SwiftUI is not, I think, a particularly strong argument here.
I think you might want to have a look at the actual article. Swift introduced yet more special syntax for the part of SwiftUI that creates the view tree. So yet more language features for yet another way of constructing views^Wobjects^Wstructs.
The more general problems with SwiftUI (and related) are another issue, which I talk about a little bit here: UIs Are Not Pure Functions of the Model - React.js and Cocoa Side by Side
Last I checked the inspiration for Flutter and SwiftUI etc. was React.
I have read the articles! If I understand your argument correctly, it says that the fact that they needed to add new stuff to support SwiftUI means that the original rules were inadequate. My counter-argument is that even languages that don’t have Swift’s maze of initialization rules add special cases to support SwiftUI-style patterns. Ergo, adding stuff for SwiftUI is orthogonal to your normal way of initializing objects. In other words, I claim that in a counterfactual where Swift doesn’t have complicated initialization rules and uses the Java/Go style of “everything is null to start with” or the Rust/ML style of “everything starts with all the parts specified”, it would still have added more or less the same features for SwiftUI.
The story is even more illustrative with Kotlin: it was specifically designed for DSLs like SwiftUI/Compose. The whole language, with its second-class lambdas, extensions, and now-out-of-fashion implicit this, works towards that goal. And yet, when actual UIs started to be implemented, it quickly became apparent that no one wants to write +button(), and a bit more compiler special sauce was needed for nice surface syntax.
I must be a lousy communicator, because you seem to have misunderstood the article almost completely.
The point was not that Swift has the wrong initialization rules or too many of them. The point is, as it says in the title: “Remove features for greater power”. The many initialization rules are not the source of the problem, they are a symptom of the problem.
First rule of baking programming conventions into the language: Don’t do it!
The second rule of baking programming conventions into the language (experts only): Don’t do it yet!
The problem is trying to bake this stuff into the language. As a consequence, you get 30 pages of initialization rules. As a further consequence, those 30 pages will be forever insufficient.
So for me, the supposed counter-point you bring with Kotlin actually supports my point. You write:
it was specifically designed for DSLs like SwiftUI/Compose. The whole language, with its second-class-lambdas, extensions, and coming out-of-fashion implicit this, works towards that goal
So they baked a whole bunch of features into the language to support the DSL use case. What was the title of the blog post again?
“Remove features for greater power”
So they added a lot of features into the language especially for this use-case and it didn’t even work out for this particular use-case. Surprise surprise!
First rule of baking programming conventions into the language: Don’t do it!
The second rule of baking programming conventions into the language (experts only): Don’t do it yet!
I simply don’t think the static/compiler-oriented mindset is compatible with the sorts of things these languages are trying to do. You put way too much into the language/compiler, and you do it way too early.
Ruby has had a bunch of these kinds of frameworks, and as far as I know they did not require any changes to the language. Because Ruby had fewer but more flexible features to start with.
With Objective-S I seem to be violating that rule, because it certainly does put things into the language. Or at least seems to do so. What I am doing, however, is following the second rule: “don’t do it yet”. (With quite a bit of trepidation, because it is “experts only”).
And I am not actually baking all that much into the language. I am baking a bit of useful surface syntax and the associated metaobject-protocol into the language. What lies behind those metaobject protocols is quite flexible.
So far this appears to strike a good balance between providing some syntactic convenience and compiler support while not making the mistake of baking way too much into the language.
Indeed! I misunderstood your original comment as meaning that SwiftUI is a downstream consequence of initialization rules. I agree that both are rather the result of the lack of expressiveness, which doesn’t allow the user to “do it yourself” in userland code. The Kotlin example was exactly to illustrate that point.
There was also a great rant by a prominent member of the community about Swift being just a mess of special cases upon special cases. I think he left the Apple ecosystem, and he’s not the only one. Alas I can’t find it and I don’t remember the name.
Crucially, the vast majority of this is incidental complexity, not essential complexity. Swift is a crescendo of special cases stopping just short of the general; the result is complexity in the semantics, complexity in the behaviour (i.e. bugs), and complexity in use (i.e. workarounds).
How to fix:
What, then, should be removed? It’s probably too late to remove any of that, …
So I certainly wasn’t the only one who accurately predicted the current sad state of affairs. It was extremely obvious to a lot of people.
I am of the opinion that the only avenue for making a well-rounded language that lasts is to put all of the skill points towards a small handful of skills (“fast”, “fast to compile”, “ease of dynamic linking”, “easy to use”, “easy to learn”, whatever), where each “skill” can be thought of as a different axis on a graph. Go all in on a few niches. This helps with clarity of purpose for the project itself and a clear pitch for prospective users, and makes it possible to show the benefits for that niche. Without doing this, the evolution of the project ends up rudderless, and adoption can stagnate, either slowing down or killing the project.
Once you’ve reached critical mass, fight for dear life not to abandon those hard-earned early design decisions that allowed you to carve out a niche, but go all in on a handful of other axes as well. Rinse and repeat. A language can start as easy to use and, once it has a clear base experience, start trying to produce faster binaries. Or it can start as fast to compile and afterwards try to produce optimized code. Or it can start producing super-optimized code and later try to become faster to compile or easier to use. All the early design decisions will hamper the later pivot, and that’s ok! No language will ever be all things for everyone (barring some unrealistic investment in monstrous amounts of research, development and maintenance).
Swift had a clear initial view of what it wanted to accomplish, and did so successfully (I believe; I don’t have personal experience with using Swift, nor do I keep up with its development). It is now in its expansion era, allowing for niches that were previously either difficult or impossible to cater to. It is a delicate process that can easily go haywire, because subtle interactions between completely unrelated features can make the result worse than the sum of its parts. But I can’t blame them for trying.
I could maybe argue that sync Rust had excellent “early design decisions that allowed you to carve a niche”, but the async related problems are now occupying a vast majority of the brain time of the lang teams. While I don’t think we’re heading to 217 keywords, I certainly feel uneasy seeing “keyword generics” and similar efforts.
These features together form the syntactic backbone of SwiftUI, the shiny new UI framework of the future™.
This is what Swift is. Apple needs a language it controls to run on its own devices. This can’t be a language they don’t control and because they control it, they’ll twist it into whatever shape is required.
All the noise about backend Swift and language evolution speeding up etc. notwithstanding, anything that happens in Swift will be in service to Apple’s device strategy. Everything else is a sideshow at best, but mostly just a distraction.
It’s a clickbait title. A title covering the full text would be “Apple maybe isn’t killing Swift anymore”.
Progressive disclosure. (Swift has 217 keywords)
A large number of keywords, while silly on its face, is not a progressive disclosure problem. The term means that despite a large space of things to learn about a tool, when you begin you’re confronted by very few of them and you can be comfortably productive, then learn a bit more as your needs deepen. Progressive disclosure is alive and well both in the syntax and APIs.
As a Swift user:
The renewed focus on more platforms has been a good sign, but Apple has had OSS PR problems for decades, and as a result so does Swift.
The language is very nice. An awful lot of bad ideas and unclear names have been avoided.
Apple has often been responsible for the good parts, especially progressive disclosure, clarity at the point of use, and interop.
SwiftUI is good now that the result builder compiler diagnostics are not junk. Of all private frameworks, though, this is the one whose source I’m itching to read.
Swift Concurrency is great if you don’t mind a function color. Swift 6’s strong checks need more time in the oven, and language leadership has lately been focused on that.
I wish macros were easier to create.
I’ve got only one major beef: Compile times are too damn slow, and I think there’s no way to make them 10x better in the source-stable era.
I’ve been writing games and apps in Swift for years and the language is getting more messy year over year. It’s a shame because it really has some fantastic ideas. I probably won’t ever port my old apps to Swift 6 and I’m getting the feeling that my next game will be my last Swift game.
What pushes you away from Swift, and what does that push you towards?
The most salient of many reasons: five years ago it seemed like the language was establishing real footholds into non-Apple platforms. Now I don’t see Swift ever escaping Apple-land and I’d really like my next game to be able to run on the Switch (2) and the Steam Deck.
A little clickbait-y title, but I agree with most of the stuff written in there.
Back in the days when Swift was just getting started, it was amazing for me to follow the development mailing list because I could witness with my own eyes how a real-world, production-grade programming language gets designed out there in the wild. I was able to pick up many underlying concepts of designing a language syntax, the compiler architecture, and all the nuts and bolts of what makes a programming language work under the hood.
Too bad I never got the chance to actually use Swift in any of my projects.
Shame about the title, because the post is great. I particularly appreciated the section that compared governance models across multiple other programming languages.
Go was mostly steered by a core team of Plan 9 refugees until recently. I hope the culture is strong enough to survive Russ Cox stepping down as lead, but time will tell. Rob Pike probably already thinks we ruined it by adding generic iteration.
I don’t buy this argument. The language has gotten more complicated because it is trying to do more things. Async and concurrency demand more complexity because of the memory layout and management system previously chosen.
The other major source of complexity appears to be Objective-C interop. That was needed just to get the language off the ground.
I find it funny that the author derides “syntactic bikeshedding” and then spends so much time complaining about the number of keywords, as though that were a significant indicator of quality.
Exactly. Progressive disclosure and syntax bikeshedding are how we keep the language nice to use despite a feature set that frequently grows to enable new use cases.
It’s been wild to watch this “too many keywords” meme spread. I see it being thrown out nearly every time Swift is discussed lately. I’d like to see someone pushing this argument propose a list of keywords that a significant number of developers run into (_alignment isn’t hurting your average app developer by existing!) and that removing wouldn’t cause significant pain.
I wonder how much of this is just sublimation of angst over the Swift 6 language mode and strict concurrency checking. It has definitely been a rough road, but there’s lots of work being done by the Swift community on making it smoother and the end product has potential to be uniquely powerful/safe/nice.
I’ve seen some other compilers with similar special cases. The Roslyn C# compiler, for example, has a few deliberate spec violations to be backwards compatible with the previous native compiler; there were too many programs dependent on its behavior.
I haven’t followed Swift much recently, but the original Swift was simple because it punted on all of the hard problems. It had no mechanism for error handling (there were some small mods towards NSError). It had no support for concurrency and the everything-is-shared-mutable-state model made that really hard to fix (the structured concurrency and actor extensions are trying). It had no support for shared libraries (Swift’s shared library support now is really nicely designed and it’s the only modern language that I think has addressed this problem well, but that inevitably came with complexity).
For macOS development, Objective-C++ is actually a very nice language. C++ for local code that is specialised over strong types, Objective-C for late-bound interfaces that are exposed across module boundaries,
I have never used Objective C++, but from afar it sounds horrifying. Take a monstrously, ineffably complex programming language and another quite complex programming language and mash them together into an amalgamation. Can anybody hope to make sense of it?
Speaking as someone who used it for years, it actually works quite well. Obj-C is not very complex, and its OO stuff is pretty separable from the C part. Obj-C++ extends the C part.
I’ll also point out that modern Objective-C’s ARC (automatic reference counting) composes very well with C++’s object lifetime rules. In pure Objective-C with ARC,
is not allowed, because
NSString*is an ARC pointer and the compiler needs to be able to reason about its initialisation and invalidation. The semantics of C structs don’t allow for that, which can make implementing custom data structures tricky: you either have to do it all in the context of Objective-C classes (which has certain overheads), use indirection, or you turn off ARC in the relevant source files and do manual reference counting.The same code quoted above will compile on Objective-C++ however, because the ARC pointer is treated as having the usual set of C++ constructors, destructor, assignment operator, and so on - it’s not a POD (plain old data) type. This means the
structalso gets implicit constructor/destructor/operator implementations when not explicitly implemented.You can therefore shove pointers to Objective-C objects into all sorts of C++ data structures, that have been implemented without special Objective-C support, including the STL. It all composes rather nicely.
(A significant proportion of my work in recent years has been working on macOS device drivers and related apps, daemons/agents, etc.; I’ve mostly been using Objective-C++ on that, although I’ve recently introduced Rust in one part of such a project. My limited contact with Swift has been exceedingly frustrating, so I’ve avoided it where possible; it never appealed to me in the first place due to the reasons David mentioned, and the practicalities around extremely poor forward- and backwards-compatibility were a nightmare to deal with on a project where it was forced upon me.)
ARC certainly makes this easy, though prior to ARC I implemented a C++ smart pointer class for Objective-C references that did the relevant retain and release operation, which meant that this was already possible in Objective-C++ without ARC, it just became nicer because you didn’t need to use the smart pointer.
Definitely, you weren’t the only one to implement such a smart pointer class. I guess my point was more that modern Objective-C actually composes better with C++ libraries than it does with C libraries and data structures.
This made me curious. What was your use-case? iOS apps? Mac apps? Something else?
I also used it for years and we made Mac apps that spoke to hardware devices through the IOKit kernel driver API (which is in C++). It was indeed quite nice.
Mac apps, libraries for use in Mac/iOS apps.
There is a non-trivial amount of ObjC++ in Firefox, too, or at least there was last time I checked. For TenFourFox I used ObjC++ as glue code to connect up libraries.
To add to what others have said: a lot of the pain in C++ comes from trying to do things that are easy in Objective-C, and vice versa. With C++, it’s easy to create rich types with no run-time overhead, but that create tight coupling. With Objective-C, you have late-bound interfaces everywhere, but avoiding dynamic dispatch is very hard / hacky. The combination means that you can completely avoid things like raw C pointers. You can used C++ collections inside a module, Objective-C ones across the boundaries.
You should give Obj-C a try, I think! It’s a surprisingly thin layer on top of C, giving a lot of bang for the buck for writing dynamic (but fast!) programs. It’s quite unique in that you have two extremes: C on the one hand, and a fully dynamic OO runtime (you can determine implementation at runtime). Overall syntax is outdated and weird (smalltalk influence), but it is still unmatched in that niche.
It’s not bad in practice. The use case is, you need Objective-C system frameworks but you can’t do without particular C++ libraries. You still spend the bulk of application code in one language.
I had a game that used Bullet physics in this way. I migrated most of the code to Swift after it was introduced, but I kept some Objective-C++ in order to keep Bullet. These days Swift has direct C++ interop, both for library use and for gradual migration of C++ projects to Swift.
Probably people who have actually used it.
I worked with making an iOS app at one point and I found while thinking in Lisp like patterns it seemed to get out my way if I wanted it to. But that is a beginner and greenfield perspective for sure.
I don’t think it sounds too bad, but I haven’t used it myself.
My understanding is it’s just the extra OOP bits from Objective-C overlayed on C++, similar to how it was overlayed on C in the first place. Basically just a second, independent object system. I understand why people wouldn’t like that, but it doesn’t sound too different than C++/CLI or using C++/JNI.
Can you tell us more about why Swift’s shared library support is well-designed?
With the caveat that I’ve read their design docs, but not actually used it in anger:
They make a clear distinction between ABI-stable and ABI-unstable shapes of structures. Within a library, there are no ABI guarantees. At a library boundary, you have a choice whether you want to sacrifice some performance for the ability to change a layout later, or sacrifice flexibility for performance. This is a per-structure choice. Depending on the choice that you make, the compiler either lowers to something similar to Objective-C non-fragile ivars, or C struct fields.
They actually have a language notion of a library boundary. This means that generics can be statically reified within a library, but fall back to dynamic dispatch across library boundaries. Contrast this with C++ where templates either have to live in headers (and then end up copied in every compilation unit, including the implementation, and it’s a violation of the one-definition rule to link two libraries that use different versions of the same template) or they are private to a library. The Swift model gracefully falls back. Things may be faster inside a library, but they still work from outside, and the ABI doesn’t leak implementation details of the generics, only their interfaces.
Wonderful explanation, thank you!
An overview from Rust folks: https://faultlore.com/blah/swift-abi/
Hmm…I’d say it was already incredibly complicated despite punting on a lot of hard problems and largely because it tried to codify “solutions” to non-problems into the language. Which never works, because reality, even code reality, is way too messy for that.
As an example, I wrote about the mess that is initialization back in 2014, so right after Swift apepared. There was some pushback from a prominent member of the Swift team saying that my goal of simplicity just wasn’t compatible with some of the goals they had. Hmm….
There was also a great rant by a prominent member of the community about Swift being just a mess of special cases upon special cases. I think he left the Apple ecosystem, and he’s not the only one. Alas I can’t find it and I don’t remember the name.
Anyway, I predicted back then that because they had so many language features for initialization it would never actually work out and there would be a renaissance of builder and factory method patterns and there would be even more language features for initialization. Five years later: hello SwiftUI! :-)
So the failure of Swift now isn’t surprising, the trajectory was already set in stone the day it launched and there wasn’t really much one could have done about it afterward…much less so since the same faulty reasoning that led to the initial problems was still present and guided later evolution.
I think this is an instance of correlation not being causation? My understanding is that the actual cause of SwiftUI is the successful design of Flutter (which gave raise to both SwiftUI and Kotlin Compose), and it is relatively orthogonal to language machinery.
Case in point, Kotlin’s initialization story is much more tame than Swit’s one (as it doesn’t try/doesn’t need to prove initialization safety statically), but it also converged on the essentially same design (or rather, vice-verse, IIRC Kotlin work in the area predate’s Swift’s).
Not to disagree with your wider point on constructors, which I agree with, just to point out that SwiftUI is not I think a particularly strong argument here.
I think you might want to have a look at the actual article. Swift introduced yet more special syntax for the part of SwiftUI that creates the view tree. So yet more language features for yet another way of constructing views^Wobjects^Wstructs.
The more general problems with SwiftUI (and related) are another issue, which I talk about a little bit here: UIs Are Not Pure Functions of the Model - React.js and Cocoa Side by Side
Last I checked the inspiration for Flutter and SwiftUI etc. was React.
I have read the articles! If I understand your argument correctly, it says that the fact that they needed to add new stuff to support SwiftUI means that the original rules were inadequate. My counter-argument is that even languages that don’t have a Swift-style maze of initialization rules add special cases to support SwiftUI patterns. Ergo, adding stuff for SwiftUI is orthogonal to your normal way of initializing objects. In other words, I claim that in a counterfactual where Swift doesn’t have complicated initialization rules and uses Java/Go-style “everything is null to start with” or Rust/ML-style “everything starts with all the parts specified”, it would still have added more or less the same features for SwiftUI.
The story is even more illustrative with Kotlin — it was specifically designed for DSLs like SwiftUI/Compose. The whole language, with its second-class-lambdas, extensions, and coming out-of-fashion implicit this, works towards that goal. And yet, when the actual UIs started to be implemented, it was quickly apparent that no one wants to write +button(), and a bit more compiler special sauce is needed for nice surface syntax.

I must be a lousy communicator, because you seem to have misunderstood the article almost completely.
The point was not that Swift has the wrong initialization rules or too many of them. The point is, as it says in the title: “Remove features for greater power”. The many initialization rules are not the source of the problem, they are a symptom of the problem.
The problem is trying to bake this stuff into the language. As a consequence, you get 30 pages of initialization rules. As a further consequence, those 30 pages will be forever insufficient.
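To make “baking this stuff into the language” concrete, here is a tiny sample of the rules those pages spell out (a toy sketch; the class names are made up): designated vs. convenience initializers, two-phase initialization, and definite initialization of every stored property.

```swift
class Base {
    let id: Int
    // Designated initializer: must fully initialize every stored property.
    init(id: Int) { self.id = id }
}

class Derived: Base {
    let label: String
    init(label: String) {
        // Rule: initialize your own stored properties BEFORE delegating up...
        self.label = label
        super.init(id: 0)
        // ...and only after super.init may you use self or inherited state.
    }
    // Rule: a convenience initializer must delegate "across" to a
    // designated initializer of the same class, never directly to super.
    convenience init() { self.init(label: "default") }
}
```

Each of these rules is checked by the compiler, and each has further interactions (required initializers, initializer inheritance, failable initializers) that add more pages still.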
So for me, the supposed counter-point you bring with Kotlin actually supports my point. You write:

“The whole language, with its second-class-lambdas, extensions, and coming out-of-fashion implicit this, works towards that goal.”
So they baked a whole bunch of features into the language to support the DSL use case. What was the title of the blog post again?
So they added a lot of features into the language especially for this use-case and it didn’t even work out for this particular use-case. Surprise surprise!
I simply don’t think the static/compiler-oriented mindset is compatible with the sorts of things these languages are trying to do. You put way too much into the language/compiler, and you do it way too early.
Ruby has had a bunch of these kinds of frameworks, and as far as I know they did not require any changes to the language. Because Ruby had fewer but more flexible features to start with.
https://github.com/AndyObtiva/glimmer
With Objective-S I seem to be violating that rule, because it certainly does put things into the language. Or at least seems to do so. What I am doing, however, is following the second rule: “don’t do it yet”. (With quite a bit of trepidation, because it is “experts only”).
And I am not actually baking all that much into the language. I am baking a bit of useful surface syntax and the associated metaobject-protocol into the language. What lies behind those metaobject protocols is quite flexible.
So far this appears to strike a good balance between providing some syntactic convenience and compiler support while not making the mistake of baking way too much into the language.
Indeed! I misunderstood your original comment as meaning that SwiftUI is a downstream consequence of initialization rules. I agree that both are rather the result of the lack of expressiveness, which doesn’t allow the user to “do it yourself” in userland code. The Kotlin example was exactly to illustrate that point.
Found it: Which features overcomplicate Swift? What should be removed?, by Rob Rix.
How to fix:
So I certainly wasn’t the only one who accurately predicted the current sad state of affairs. It was extremely obvious to a lot of people.
And silly me: of course I had referenced it in another one of my posts: The Curious Case of Swift’s Adoption of Smalltalk Keyword Syntax
It’s another example of complexity begetting more complexity, special cases begetting more special cases.
I am of the opinion that the only avenue for making a well-rounded language that lasts is to put all of the skill points towards a small handful of skills (“fast”, “fast to compile”, “ease of dynamic linking”, “easy to use”, “easy to learn”, whatever), where each “skill” can be thought of as a different axis on a graph. Go all in on a few niches. This helps with clarity of purpose for the project itself, gives a clear pitch to prospective users, and makes it possible to show the benefits for that niche. Without doing this, the evolution of the project ends up rudderless, and adoption can stagnate, slowing down or killing the project.

Once you’ve reached critical mass, fight for dear life not to abandon the hard-earned early design decisions that allowed you to carve out a niche, but go all-in on a handful of other axes as well. Rinse and repeat. A language can start as easy to use and, once it has a solid base experience, start trying to produce faster binaries. Or it can start as fast to compile and afterwards try to produce optimized code. Or it can start producing super-optimized code and later try to become faster to compile or easier to use.

All the early design decisions will hamper the later pivot, and that’s OK! No language will ever be all things for everyone (barring some unrealistic investment in monstrous amounts of research, development, and maintenance).
Swift had a clear initial view of what it wanted to accomplish, and did so successfully (I believe; I don’t have personal experience with using Swift, nor do I keep up with its development). It is now in its expansion era, opening up niches that were previously either difficult or impossible to cater to. It is a delicate process that can easily go haywire, because subtle interactions between completely unrelated features can make the result worse than the sum of its parts. But I can’t blame them for trying.
How do you see that wrt Rust and async?
I could maybe argue that sync Rust had excellent “early design decisions that allowed you to carve a niche”, but the async related problems are now occupying a vast majority of the brain time of the lang teams. While I don’t think we’re heading to 217 keywords, I certainly feel uneasy seeing “keyword generics” and similar efforts.
This is what Swift is. Apple needs a language it controls to run on its own devices. This can’t be a language they don’t control and because they control it, they’ll twist it into whatever shape is required.
All the noise about backend Swift and language evolution speeding up etc. notwithstanding, anything that happens in Swift will be in service to Apple’s device strategy. Everything else is a sideshow at best, but mostly just a distraction.
This might be the best metaphor for open source development I’ve ever seen!
It’s a clickbait title. A title covering the full text would be “Apple maybe isn’t killing Swift anymore”.
A large number of keywords, while silly on its face, is not a progressive disclosure problem. The term means that despite a large space of things to learn about a tool, when you begin you’re confronted by very few of them and you can be comfortably productive, then learn a bit more as your needs deepen. Progressive disclosure is alive and well both in the syntax and APIs.
As a Swift user:
The renewed focus on more platforms has been a good sign, but Apple has had OSS PR problems for decades, and as a result so does Swift.
The language is very nice. An awful lot of bad ideas and unclear names have been avoided.
Apple has often been responsible for the good parts, especially progressive disclosure, clarity at the point of use, and interop.
SwiftUI is good now that the result builder compiler diagnostics are not junk. Of all private frameworks, though, this is the one whose source I’m itching to read.
Swift Concurrency is great if you don’t mind a function color. Swift 6’s strong checks need more time in the oven, and language leadership has lately been focused on that.
I wish macros were easier to create.
I’ve got only one major beef: Compile times are too damn slow, and I think there’s no way to make them 10x better in the source-stable era.
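The “function color” mentioned above is easy to show concretely. A toy sketch (all names made up): an async function can only be awaited from another async context, so asynchrony propagates up the call chain unless you explicitly open a new task and bridge back.

```swift
import Foundation

// A "red" (async) function: it can only be awaited from an async context.
func fetchValue() async -> Int { 42 }

// Async calling async: no friction at all.
func asyncCaller() async -> Int {
    await fetchValue()
}

// A synchronous function cannot simply await:
//     func bad() -> Int { await fetchValue() }   // does not compile
// so every caller, and its caller, tends to turn async too.

// Tiny @unchecked Sendable box so strict checking allows the mutation below.
final class ResultBox: @unchecked Sendable {
    var value = 0
}

// One crude bridge back to the synchronous world: start a detached task
// and block until it signals completion.
func blockingCaller() -> Int {
    let box = ResultBox()
    let sem = DispatchSemaphore(value: 0)
    Task.detached {        // detached: does not inherit any actor context
        box.value = await asyncCaller()
        sem.signal()
    }
    sem.wait()             // don't block like this on an actor in real code
    return box.value
}
```

The bridge works, but it is exactly the kind of ceremony the “color” complaint is about: crossing from synchronous to asynchronous code is never free.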
I’ve been writing games and apps in Swift for years and the language is getting more messy year over year. It’s a shame because it really has some fantastic ideas. I probably won’t ever port my old apps to Swift 6 and I’m getting the feeling that my next game will be my last Swift game.
What pushes you away from Swift, and what does that push you towards?
The most salient of many reasons: five years ago it seemed like the language was establishing real footholds into non-Apple platforms. Now I don’t see Swift ever escaping Apple-land and I’d really like my next game to be able to run on the Switch (2) and the Steam Deck.
what are the other reasons?
A little clickbait-y title, but I agree with most of the stuff written in there.
Back in the days when Swift was just getting started, it was amazing for me to follow the development mailing list because I could witness with my own eyes how a real-world, production-grade programming language gets designed out there in the wild. I was able to pick up many underlying concepts of designing a language syntax, the compiler architecture, and all the nuts and bolts of what makes a programming language work under the hood.
Too bad I never got the chance to actually use Swift in any of my projects.
Shame about the title, because the post is great. I particularly appreciated the section that compared governance models across multiple other programming languages.
I wonder which bucket Go falls into.
Go was mostly steered by a core team of Plan 9 refugees until recently. I hope the culture is strong enough to survive Russ Cox stepping down as lead, but time will tell. Rob Pike probably already thinks we ruined it by adding generic iteration.
I don’t buy this argument. The language has gotten more complicated because it is trying to do more things. Async and concurrency demand more complexity because of the memory layout and management system previously chosen.
The other major source of complexity appears to be objective c interop. That was needed to just get the language off the ground.
I find it funny that the author derides “syntactic bikeshedding” and then spends so much time complaining about the number of keywords, as though that is a significant imprimatur of quality.
Exactly. Progressive disclosure and syntax bikeshedding are how we keep the language nice to use despite a feature set that frequently grows to enable new use cases.
It’s been wild to watch this “too many keywords” meme spread. I see it being thrown out nearly every time Swift is discussed lately. I’d like to see someone pushing this argument propose a list of keywords that a significant number of developers actually run into (_alignment isn’t hurting your average app developer by existing!) and whose removal wouldn’t cause significant pain.

I wonder how much of this is just sublimation of angst over the Swift 6 language mode and strict concurrency checking. It has definitely been a rough road, but there’s lots of work being done by the Swift community on making it smoother, and the end product has the potential to be uniquely powerful/safe/nice.
LOL at having hard-coded type checking rules for the SwiftUI library in the Swift compiler
It wouldn’t have occurred to me that this is a valid solution …
I’ve seen some other compilers with similar special cases. The Roslyn C# compiler, for example, has a few deliberate spec violations to be backwards compatible with the previous native compiler; there were too many programs dependent on its behavior.