Would it really “break the web” if PHP removed some of the sketchy parts of the language? It’s not like JS in the browser, where you don’t have control over the version you’re running and updating JS would break existing websites running old code. Pushing breaking changes in a version bump to PHP doesn’t have to break anything: it’s running on the backend, and a lot of people are (or at least were) using old versions for WordPress anyway. Why don’t they just purge some of that stuff and make it actually nice?
Folks using those sketchy parts will just keep using older builds of PHP which keeps their applications sketchy but also locks them out of security updates. At least that’s been my experience developing with PHP in the past.
There is an almost constant push from a segment of the php internals community (i.e. the community of people that actually develop php itself) to improve things, make it stricter, remove old stuff.
In practically every case, the age old “but BC” is trotted out. Sometimes enough people think the improvement is worth it, and the vote passes. In some cases it doesn’t.
If you want to really scream “wtf” at your display, go read through the php internals mailing list messages arguing against the recently passed RFC that makes accessing undeclared variables an error, not just a warning (and thus no longer something that can just be silenced). You read that correctly: in 2022, people are arguing in favour of the ability to deliberately access undeclared variables without an error.
I have mixed feelings about a lot of this. As a user of both Lua and JavaScript, I see a lot of the same issues PHP has, handled in different ways.
In practically every case, the age old “but BC” is trotted out
Lua allows, and when needed liberally makes, backwards-incompatible changes. The language is better for it, BUT Lua is primarily an embedded language, so users are free to use whatever version, official or not, they want. JavaScript, on the other hand, still carries around stuff like string.bold, because removing stuff like that could actually break unmodified code that has otherwise been running just fine on the web.
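For instance, the legacy Annex B “HTML methods” on strings still work in every major engine:

```js
// kept alive purely for backwards compatibility with old pages
"hi".bold(); // => "<b>hi</b>"
```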
in 2022, people are arguing in favour of the ability to deliberately access undeclared variables without an error.
I’m not a huge user of PHP, but I have to work with it from time to time. I would think that because of the nature of stuff like includes, where code is basically inserted inline, you don’t always know what variables will be available at run-time. Sometimes, as in Lua, it’s easier to just add null guards here and there or find other ways to protect your code against null values.
I mean, php isn’t a client-side language, there’s no reason a developer isn’t in control of the version of php they use, more now than ever.
When mod_php was “the way”, sure, it was harder to pick and choose versions on any kind of multi-use system (either shared hosting, or even multiple sites/projects on a single VM/server). But these days php-fpm (i.e. a FastCGI daemon) is undoubtedly “the way”, and it’s trivial to have half a dozen different versions of php installed concurrently and target a specific version per vhost/site. Furthermore, easier service management via systemd and/or system encapsulation via containers means that running stuff with very esoteric needs (i.e. needs mod_php, needs php 5.4, or something else equally ancient) on an otherwise multi-use system is not rocket science.
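A minimal sketch of what that looks like in an nginx config (socket paths and hostnames here are assumptions):

```nginx
# legacy vhost pinned to an old interpreter...
server {
    server_name legacy.example.com;
    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass unix:/run/php/php7.4-fpm.sock;
    }
}

# ...while another vhost on the same box targets a current one
server {
    server_name app.example.com;
    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass unix:/run/php/php8.2-fpm.sock;
    }
}
```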
The biggest difference is probably that php is generally exposed to the web as a server, which Lua can do obviously but it isn’t Lua’s raison d’etre, so running an older version that is potentially not receiving security updates is probably less critical.
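So let’s say you have two files, a.php and b.php, something like this (a minimal sketch; the variable names are assumptions):

```php
<?php
// a.php
$hello = 'stuff';
include 'b.php';
```

```php
<?php
// b.php
echo $hello;  // defined in a.php: prints "stuff"
echo $heIlo;  // undefined: warns and prints nothing (treated as null)
```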
So, until php 8.2, if you run the above you’ll either get stuff output, or a warning and then nothing, because an undefined variable is treated as null (and I deliberately misspelled the second one with a capital I instead of a lowercase L).
The argument isn’t about “should we allow for the concept of undefined variables”. That exists in multiple ways:

- the isset() language construct
- the empty() language construct
- the ?? null coalescing operator
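For example, with a hypothetical $maybe that an include may or may not have defined:

```php
<?php
if (isset($maybe)) {        // true only if $maybe is defined and not null
    echo $maybe;
}
if (empty($maybe)) {        // no warning even when $maybe is undefined
    echo 'unset or falsy';
}
echo $maybe ?? 'default';   // null coalescing: falls back silently if undefined
```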
The argument being made isn’t “an undefined variable is something we have to handle, this is the only way”.
The argument being made is “it’s too much work to explicitly check if a potentially undefined variable is defined before accessing it, and keeping the ‘default to null’ behaviour is more important than helping to solve a whole swathe of bugs caused by unexpectedly undefined variables (due to logic issues, var name typos, etc.)”.
One of the biggest benefits to me, is that it forces people to be explicit in their code now. Dealing with other people’s sloppy code is a nightmare, and even the best IDE can’t tell you what someone else’s intention was.
They did it with the PHP 4 => 5 transition: there was a massive breaking change in how objects were passed, which became by handle (they used to be passed by value, if you can believe it). So they can do it again, perhaps over many releases, by gradually deprecating (and replacing) and then removing the old APIs. I think it’s more likely that they don’t really feel the sketchy parts are that sketchy.
Historically, I believe that was the purpose Lua was created for.
I kind of prefer there to be a separation between a configuration syntax and a DSL. On the one side, using a full language for configs makes it hard for new users; I just encountered Gulp (based on JS) last week and the config files are quite confusing. On the other hand, shoehorning logic into a config syntax ends up recreating control-flow constructs in ugly ways (viz. CMake, and GitHub Actions YAML.)
Well, I graduated from a film school. I have minimal knowledge of math and engineering (I dropped out of engineering after my second year). I still find a ton of value in SICP.
I think it is completely OK to recommend SICP. It is just that it is not an easy book; it requires effort. That effort required varies from reader to reader, and it may require you to go fetch another book and study for a while before coming back. It is OK to be challenged by a good book. A similar thing happens with TAoCP as well, heck, that book set was so above my pay grade that sometimes I had to go wash my face with cold water and think.
Now, what I think is important is to know when to recommend SICP or not. Someone says they want to learn programming basics fast because they’re in some bootcamp and need a bit of background to move on, then you’d be better recommending something more suitable.
As for alternatives for SICP for those who don’t want to dive too deep into math and engineering, I really enjoy How To Design Programs which I’ve seen being described as “SICP but for humanities”.
As for alternatives for SICP for those who don’t want to dive too deep into math and engineering, I really enjoy How To Design Programs which I’ve seen being described as “SICP but for humanities”.
The only trouble with SICP is that it was written for MIT students, all of whom love science and are quite comfortable with formal mathematics. Also, most of the students who use SICP at MIT have already learned to program computers before they begin. As a result, many other schools have found the book too challenging for a beginning course. We believe that everyone who is seriously interested in computer science must read SICP eventually. Our book is a prequel; it’s meant to teach you what you need to know in order to read that book successfully. Generally speaking, our primary goal in Parts I-V has been preparation for SICP, while the focus of Part VI is to connect the course with the kinds of programming used in “real world” application programs like spreadsheets and databases. (These are the last example and the last project in the book.)
Some of Brian Harvey’s lectures on Scheme are available on YouTube. There used to be more, as I recall, but some are private now. A shame—I remember enjoying his lectures a lot.
I have to say comments like these and the GP are reassuring. I’m working through it now as an experienced programmer trying to formalize my foundational computer science stuff and just having a hard time digging in on the start. Not that I’m uncomfortable with recursion or Big O stuff, it’s just very information dense and hard to “just read” while keeping all the moving parts in your head space.
He also wrote a series of books called “Computer Science Logo Style” (he’s the author of UCB Logo).
I really enjoyed those books as I’m an avid LOGO fan. I’m still kinda sad that dynaturtles are effectively going to die because the only implementation still even remotely extant is LCSI’s Microworlds.
I’ve been wanting to learn Racket for a while. It’s next on my list after JavaScript. Sadly it takes me years to truly grok a programming language to any level of real mastery :)
Bonus points: Even if you couldn’t care less about Scheme/LISP, you learn recursion!
After struggling hard to understand recursion in my freshman year with Concurrent Clean (the teacher’s pet language, a bit Haskell-like), this book made everything click. It also made me fall in love with Scheme because of its simplicity and functional, high-level approach. After the nightmare of complexity and weird, obscure tooling of Clean, this was such a breath of fresh air.
I don’t get The Little Schemer. There doesn’t seem to be a point to it, something it’s working towards. I feel like I should be enlightened in some way in the end, but it just seemed to end without doing anything special. What am I missing?
I like this approach. Recommend SICP, but make it SUPER clear that it’s a difficult book. It’s OK to get to a point where you say “This is over my head” and for those who are up for it, there’s an additional challenge there of “OK now take the next step! Learn what you need in order to understand”.
Not everyone has the time or the patience for that, but as long as we’re 1000% clear in our recommendations, I think recommending SICP is just fine.
However IMO it is THE WORST to recommend SICP to people who are looking for a quick and easy way to learn to program or learn the rudiments of computer science.
The Ugly: The speakers are truly terrible. They sound like a potato inserted into the anus of a dying llama and that’s just awful. It’s so bad that I won’t watch my favorite Jupiter Broadcasting shows on YouTube on this device without headphones. Honestly the OEM (Clevo) should hide their head in shame on this one.
Yep. The Lemur Pro I’m typing this on has similarly atrocious speakers.
One of the first things I do when I set up Linux on a new laptop is disable the onboard sound. They should just stop shipping laptops with built-in speakers; they’re all pretty bad, and pluggable/Bluetooth speakers or headphones will almost always be a better experience.
I find myself reluctantly agreeing with most of the article, which makes me sad. Nevertheless, I would like to be pragmatic about this.
That said, I think that most of the problems with the GPL can be sufficiently mitigated if we just remove the virality. In particular, I don’t think that copyleft is the problem.
The reason is that I believe that without the virality, companies would be willing to use copyleft licenses, since the requirement for compliance would literally be “publish your changes.” That’s a low bar, and especially easy in the world of DVCSs and GitHub.
However, I could be wrong, so if I am, please tell me how.
The problem with ‘non-viral’ copyleft licenses (more commonly known as ‘per-file copyleft’ licenses) is that they impede refactoring. They’re fine if the thing is completely self-contained but if you want to change where a layer is in the system then you can’t move functions between files without talking to lawyers. Oh, and if you use them you’re typically flamed by the FSF because I don’t think anyone has managed to write a per-file copyleft license that is GPL-compatible (Mozilla got around this by triple-licensing things).
That said, I think one of the key parts of this article is something that I wrote about 15 or so years ago: From an end-user perspective, MS Office better meets a bunch of the Free Software Manifesto requirements than OpenOffice. If I find a critical bug in either then, as an experienced C++ programmer, I still have approximately the same chance of fixing it in either: zero. MS doesn’t let me fix the MS Office bug[1] but I’ve read some of the OpenOffice code and I still have nightmares about it. For a typical user, who isn’t a C++ programmer, OpenOffice is even more an opaque blob.
The fact that MS Office is proprietary has meant that it has been required to expose stable public interfaces for customisation. This means that it is much easier for a small company to maintain a load of in-house extensions to MS Office than it is to do the same for most F/OSS projects. In the ‘90s, MS invested heavily in end-user programming tools and as a result it’s quite easy for someone with a very small amount of programming experience to write some simple automation for their workload in MS Office. A lot of F/OSS projects have an elitist attitude about programming and don’t want end users to be extending the programs unless they pass the gatekeeping requirements of learning programming languages whose abstractions are far too low-level for the task at hand. There is really no reason that anything other than a core bit of compute-heavy code for any desktop or mobile app needs to be written in C/C++/Rust, when it could be in interpreted Python or Lua without any user-perceptible difference in performance.
Even the second-source argument (which is really compelling to a lot of companies) doesn’t really hold up because modern codebases are so huge. Remember that Stallman was writing that manifesto back when a typical home computer such as the BBC Model B was sufficiently simple that a single person could completely understand the entire hardware and software stack and a complete UNIX system could be written by half a dozen people in a year (Minix was released a few years later and was written by a single person, including kernel and userland. It was around 15,000 lines of code). Modern software is insanely complicated. Just the kernel for a modern *NIX system is millions of lines of code, so is the compiler. The bc utility is a tiny part of the FreeBSD base system (if memory serves, you wrote it, so should be familiar with the codebase) and yet is more code than the whole of UNIX Release 7 (it also has about as much documentation as the entire printed manual for UNIX Release 7).
In a world where software is this complex, it might be possible for a second company to come along and fix a bug or add a feature for you, but it’s going to be a lot more expensive for them to do it than for the company that’s familiar with the codebase. This is pretty much the core of Red Hat’s business model: they use Fedora to push core bits of Red Hat-controlled code into the Linux ecosystem, make them dependencies for everything, and then can charge whatever they like for support because no one else understands the code.
From an end-user perspective, well-documented stable interfaces with end-user programming tools give you the key advantages of Free Software. If there are two (or more) companies that implement the same stable interfaces, that’s a complete win.
F/OSS also struggles with an economic model. Proprietary software exists because we don’t have a good model for any kind of zero-marginal-cost goods. Creating a new movie, novel, piece of investigative journalism, program, and so on, is an expensive activity that needs funding. Copying any of these things has approximately zero cost, yet we fund the former by charging for the latter. This makes absolutely no sense from any rational perspective yet it is, to date, the only model that has been made to work at scale.
[1] Well, okay, I work at MS and with the whole ‘One Microsoft’ initiative I can browse all of our internal code and submit fixes, but this isn’t an option for most people.
The fact that MS Office is proprietary has meant that it has been required to expose stable public interfaces for customisation. This means that it is much easier for a small company to maintain a load of in-house extensions to MS Office than it is to do the same for most F/OSS projects. In the ‘90s, MS invested heavily in end-user programming tools and as a result it’s quite easy for someone with a very small amount of programming experience to write some simple automation for their workload in MS Office. A lot of F/OSS projects have an elitist attitude about programming and don’t want end users to be extending the programs unless they pass the gatekeeping requirements of learning programming languages whose abstractions are far too low-level for the task at hand. There is really no reason that anything other than a core bit of compute-heavy code for any desktop or mobile app needs to be written in C/C++/Rust, when it could be in interpreted Python or Lua without any user-perceptible difference in performance.
I’ve found this to be true for Windows too, as I wrote in a previous comment. I technically know how to extend the Linux desktop beyond writing baubles, but it’s shifting sands compared to how good Windows has been with extensibility. I’m not going to maintain a toolkit or desktop patchset unless I run like, Gentoo.
BTW, from your other reply:
I created a desktop environment project around this idea but we didn’t have sufficient interest from developers to be able to build anything compelling. F/OSS has a singular strength that is also a weakness: It is generally written by people who want to use the software, not by people who want to sell the software. This means that it tends to be incredibly usable to the authors but it is only usable in general if the authors are representative of the general population (and since they are, by definition, programmers, that is intrinsically not the case).
I suspect this is why it never built a tool like Access/HyperCard/Excel/etc. that empowers end users: because they don’t need it, because they are developers. Arguably, the original sin of free software is assuming users are developers, and in a wider sense, that’s why its threat model drifted further from reality.
The problem with ‘non-viral’ copyleft licenses (more commonly known as ‘per-file copyleft’ licenses) is that they impede refactoring. They’re fine if the thing is completely self-contained but if you want to change where a layer is in the system then you can’t move functions between files without talking to lawyers.
Is it possible to have a non-viral copyleft license that is not per-file? I hope so, and I wrote licenses to do that which I am going to have checked by a lawyer. If he says it’s impossible, I’ll have to give up on that.
Oh, and if you use them you’re typically flamed by the FSF because I don’t think anyone has managed to write a per-file copyleft license that is GPL-compatible (Mozilla got around this by triple-licensing things).
Eh, I’m not worried about GPL compatibility. And I’m not worried about being flamed by the FSF.
That said, I think one of the key parts of this article is something that I wrote about 15 or so years ago: From an end-user perspective, MS Office better meets a bunch of the Free Software Manifesto requirements than OpenOffice. If I find a critical bug in either then, as an experienced C++ programmer, I still have approximately the same chance of fixing it in either: zero. MS doesn’t let me fix the MS Office bug[1] but I’ve read some of the OpenOffice code and I still have nightmares about it. For a typical user, who isn’t a C++ programmer, OpenOffice is even more an opaque blob.
This is a good point, and it is a massive blow against Free Software since Free Software was supposed to be about the users.
Even the second-source argument (which is really compelling to a lot of companies) doesn’t really hold up because modern codebases are so huge.
I personally think this is a separate problem, but yes, one that has to be fixed before the second-source argument applies.
The bc utility is a tiny part of the FreeBSD base system (if memory serves, you wrote it, so should be familiar with the codebase) and yet is more code than the whole of UNIX Release 7 (it also has about as much documentation as the entire printed manual for UNIX Release 7).
Sure, it’s a tiny part of the codebase, but I’m not sure bc is a good example here. bc is probably the most complicated of the POSIX tools, and it still has fewer lines of code than MINIX. (It’s about 10k actual lines of code; there are a lot of comments for documentation.) You said MINIX implemented userspace; does that mean POSIX tools? If it did, I have very little faith in the robustness of those tools.
I don’t know if you’ve read the sources of the original Morris bc, but I have (well, its closest descendant). It was terrible code. When checking for keywords, the parser just checked for the second letter of a name and then just happily continued. And hardly any error checking at all.
After looking at that code, I wondered how much of original Unix was terrible in the same way, and how terrible MINIX’s userspace is as well.
So I don’t think holding up original Unix as an example of “this is how simple software can be” is a good idea. More complexity is needed than that; we want robust software as well.
In other words, I think there is a place for more complexity in software than original Unix had. However, the complexity in modern-day software is out of control. Compilers don’t need to be millions of lines of code, and if you discount drivers, neither should operating systems. But they can have a good amount of code. (I think a compiler with 100k LOC is not too bad, if you include optimizations.)
So we’ve gone from too much minimalism to too much complexity. I hope we can find the center between those two. How do we know when we have found it? When our software is robust. Too much minimalism removes robustness, and too much complexity does the same thing. (I should write a blog post about that, but the CMake/Make recursive performance one comes first.)
bc is complex because it’s robust. In fact, I always issue a challenge to people who claim that my code is bad to find a crash or a memory bug in bc. No one has ever come back with such a bug. That is the robustness I am talking about. That said, if bc were any more complex than it is (and I could still probably reduce its complexity), then it could not be as robust as it is.
Also, with regards to the documentation, it has that much documentation because (I think) it documents more than the Unix manual. I have documented it to ensure that the bus factor is not a thing, so the documentation for it goes down to the code level, including why I made decisions I did, algorithms I used, etc. I don’t think the Unix manual covered those things.
From an end-user perspective, well-documented stable interfaces with end-user programming tools give you the key advantages of Free Software. If there are two (or more) companies that implement the same stable interfaces, that’s a complete win.
This is a point I find myself reluctantly agreeing with, and I think it goes back to something you said earlier:
A lot of F/OSS projects have an elitist attitude about programming and don’t want end users to be extending the programs unless they pass the gatekeeping requirements of learning programming languages whose abstractions are far too low-level for the task at hand.
This, I think, is the biggest problem with FOSS. FOSS was supposed to be about user freedom, but instead, we adopted this terrible attitude and lost our way.
Perhaps if we discarded this attitude and made software designed for users and easy for users to use and extend, we might turn things around. But we cannot make progress with that attitude.
That does, of course, point to you being correct about other things, specifically, that licenses matter too much right now because if we changed that attitude, would licenses really matter? In my opinion, not to the end user, at least.
Sure, it’s a tiny part of the codebase, but I’m not sure bc is a good example here. bc is probably the most complicated of the POSIX tools, and it still has fewer lines of code than MINIX. (It’s about 10k actual lines of code; there are a lot of comments for documentation.) You said MINIX implemented userspace; does that mean POSIX tools? If it did, I have very little faith in the robustness of those tools.
To be clear, I’m not saying that everything should be as simple as code of this era. UNIX Release 7 and Minix 1.0 were on the order of 10-20KLoC for two related reasons:
- The original hardware was incredibly resource constrained, so you couldn’t fit much software in the available storage and memory.
- They were designed for teaching (more true for Minix, but somewhat true for early UNIX versions) and so were intentionally simple.
Minix did, I believe, implement POSIX.1, but so did NT4’s POSIX layer: returning ENOTIMPLEMENTED was a valid implementation and it was also valid for setlocale to support only "C" and "POSIX". Things that were missing were added in later systems because they were useful.
My point is that the GNU Manifesto was written at a time when it was completely feasible for someone to sit down and rewrite all of the software on their computer from scratch. Today, I don’t think I would be confident that I could rewrite awk or bc, let alone Chromium or LLVM, from scratch, and I don’t think I’d even be confident that I could fix a bug in one of these projects (I’ve been working on LLVM since around 2007 and there are bugs I’ve encountered that I’ve had no idea how to fix, and LLVM is one of the most approachable large codebases that I’ve worked on).
So we’ve gone from too much minimalism to too much complexity. I hope we can find the center between those two. How do we know when we have found it? When our software is robust. Too much minimalism removes robustness, and too much complexity does the same thing. (I should write a blog post about that, but the CMake/Make recursive performance one comes first.)
I’m not convinced that we have too much complexity. There’s definitely some legacy cruft in these systems but a lot of what’s there is there because it has real value. I think there’s also a principle of conservation of complexity. Removing complexity at one layer tends to cause it to reappear at another and that can leave you with a less robust system overall.
Perhaps if we discarded this attitude and made software designed for users and easy for users to use and extend, we might turn things around. But we cannot make progress with that attitude.
I created a desktop environment project around this idea but we didn’t have sufficient interest from developers to be able to build anything compelling. F/OSS has a singular strength that is also a weakness: It is generally written by people who want to use the software, not by people who want to sell the software. This means that it tends to be incredibly usable to the authors but it is only usable in general if the authors are representative of the general population (and since they are, by definition, programmers, that is intrinsically not the case).
One of the most interesting things I’ve seen in usability research was a study in the early 2000s that showed that only around 10-20% of the population thinks in terms of hierarchies for organisation. Most modern programming languages implicitly have a notion of hierarchy (nested scopes and so on) and this is not a natural mindset of the majority of humans (and the most widely used programming language, Excel, does not have this kind of abstraction). This was really obvious when iTunes came out with its tag-and-filter model: most programmers said ‘this is stupid, my music is already organised in folders in a nice hierarchy’ and everyone else said ‘yay, now I can organise my music!’. I don’t think we can really make usable software until we have programming languages that are usable by most people, so that F/OSS projects can have contributors that really reflect how everyone thinks. Sadly, I’m making this problem worse by working on a programming language that retains several notions of hierarchy. I’d love to find a way of removing them but they’re fairly intrinsic to any kind of inductive proof, which is (to date) necessary for a sound type system.
That does, of course, point to you being correct about other things, specifically, that licenses matter too much right now because if we changed that attitude, would licenses really matter? In my opinion, not to the end user, at least.
Licenses probably wouldn’t matter to end users, but they would still matter for companies. I think one of the big things that the F/OSS community misses is that 90% of people who write software don’t work for a tech company. They work for companies whose primary business is something else and they just need some in-house system that’s bespoke. Licensing matters a lot to these people because they don’t have in-house lawyers who are an expert in software licenses and so they avoid any license that they don’t understand without talking to a lawyer. These people should be the ones that F/OSS communities target aggressively because they are working on software that is not their core business and so releasing it publicly has little or no financial cost to them.
To be clear, I’m not saying that everything should be as simple as code of this era.
Apologies.
My point is that the GNU Manifesto was written at a time when it was completely feasible for someone to sit down and rewrite all of the software on their computer from scratch.
Okay, that makes sense, and I agree that the situation has changed.
Today, I don’t think I would be confident that I could rewrite awk or bc, let alone Chromium or LLVM from scratch and I don’t think I’d even be confident that I could fix a bug in one of these projects (I’ve been working on LLVM since around 2007 and I there are bugs I’ve encountered that I’ve had no idea how to fix, and LLVM is one of the most approachable large codebases that I’ve worked on).
I think I can tell you that you could rewrite awk or bc. They’re not that hard, and 10k LOC is a walk in the park for someone like you. But point taken with LLVM and Chromium.
But then again, I think LLVM could be less complex. Chromium could be as well, but it’s limited by the W3C standards. I could be wrong, though.
I think the biggest problem with most software, including LLVM, is scope creep. Even with bc, I feel the temptation to add more and more.
With LLVM, I do understand that there is a lot of inherent complexity, targeting multiple platforms, lots of needed canonicalization passes, lots of optimization passes, codegen, register allocation. Obviously, you know this better than I do, but I just wanted to make it clear that I understand the inherent complexity. But is it all inherent?
I’m not convinced that we have too much complexity. There’s definitely some legacy cruft in these systems but a lot of what’s there is there because it has real value. I think there’s also a principle of conservation of complexity. Removing complexity at one layer tends to cause it to reappear at another and that can leave you with a less robust system overall.
There is a lot of truth to that, but that’s why I specifically said (or meant) that maximum robustness is the target. I doubt you or anyone would say that Chromium is as robust as possible. I personally would not claim that about LLVM either. I also certainly would not claim that about Linux, FreeBSD, or even ZFS!
And I would not include legacy cruft in “too much complexity” unless it is past time that it is removed. For example, Linux keeping deprecated syscalls is not too much complexity, but keeping support for certain arches that have only single-digit users, none of whom will update to the latest Linux, is definitely too much complexity. (It does take a while to identify such cruft, but we also don’t spend enough effort on it.)
Nevertheless, I agree that trying to remove complexity where you shouldn’t will lead to it reappearing elsewhere.
F/OSS has a singular strength that is also a weakness: It is generally written by people who want to use the software, not by people who want to sell the software. This means that it tends to be incredibly usable to the authors but it is only usable in general if the authors are representative of the general population (and since they are, by definition, programmers, that is intrinsically not the case).
I agree with this, and the only thing I could think of to fix this is to create some software that I myself want to use, and to actually use it, but to make it so good that other people want to use it. Those people need support, which could lead to me “selling” the software, or services around it. Of course, as bc shows (because it does fulfill all of the requirements above, but people won’t pay for it), it should not just be anything, but something that would be critical to infrastructure.
One of the most interesting things I’ve seen in usability research was a study in the early 2000s that showed that only around 10-20% of the population thinks in terms of hierarchies for organisation. Most modern programming languages implicitly have a notion of hierarchy (nested scopes and so on) and this is not a natural mindset of the majority of humans (and the most widely used programming language, Excel, does not have this kind of abstraction).
I think I’ve seen that result, and it makes sense, but hierarchy unfortunately makes sense for programming because of the structured programming theorem.
That said, there is a type of programming (beyond Excel) that I think could be useful for the majority of humans: functional programming. Data goes in, gets crunched, comes out. I don’t think such transformation-oriented programming would be too hard for anyone. Bonus points if you can make it graphical (maybe like Blender’s node compositor?). Of course, it would probably end up being quite…inefficient…but once efficiency is required, they can probably get help from a programmer.
I don’t think we can really make usable software until we have programming languages that are usable by most people, so that F/OSS projects can have contributors that really reflect how everyone thinks.
I don’t think it’s possible to create programming languages that produce software that is both efficient and well-structured without hierarchy, so I don’t think, in general, we’re going to be able to have contributors (for code specifically) that are not programmers. That does make me sad. However, what we could do is have more empathy for users and stop assuming we have the same perspective as they do. We could assume that what is good for normal users might not be bad for us and actually try to give them what they need.
But even with that, I don’t think the result from that research is that 80-90% of people can’t think in hierarchies, just that they do not do so naturally. I think they can learn. Whether they want to is another matter…
I could be wrong about both things; I’m still young and naive.
Licenses probably wouldn’t matter to end users, but they would still matter for companies. I think one of the big things that the F/OSS community misses is that 90% of people who write software don’t work for a tech company. They work for companies whose primary business is something else and they just need some in-house system that’s bespoke. Licensing matters a lot to these people because they don’t have in-house lawyers who are an expert in software licenses and so they avoid any license that they don’t understand without talking to a lawyer. These people should be the ones that F/OSS communities target aggressively because they are working on software that is not their core business and so releasing it publicly has little or no financial cost to them.
That’s a good point. How would you target those people if you were the one in charge?
Now that I have written a lot and taken up a lot of your time, I must apologize. Please don’t feel obligated to respond to me. But I have learned a lot in our conversations.
They’re fine if the thing is completely self-contained but if you want to change where a layer is in the system then you can’t move functions between files without talking to lawyers.
Maybe I misunderstand MPL 2.0, but I think this is a non-issue: if you’re not actually changing the code (just the location), you don’t have to publish anything. If you modify the code (changing implementation), then you have to publish the changes. This is easiest done on a per file basis of course, but I think you technically only need to publish the diff.
This is why it’s non viral: you say, “I’ve copied function X into my code and changed the input from integer to float”. You don’t have to say anything else about how it’s used or why such changes were necessary.
Generally, when you refactor, you don’t just move the code, you move and modify it. If you modify code from an MPL’d file that you’ve copied into another file then you need to make sure that you propagate the MPL into that file and share the changes.
they use Fedora to push core bits of Red Hat-controlled code into the Linux ecosystem, make them dependencies for everything, and then can charge whatever they like for support because no one else understands the code.
How do they make their things “dependencies for everything”? It seems you left out a step where other vendors/distributions choose to adopt Red Hat projects or not.
ISTM that quite a number of RH-backed projects are now such major parts of the infrastructure of Linux that it’s quite hard not to use them. Examples: pulseaudio, systemd, Wayland, and GNOME spring to mind.
All the mainstream distros are now based on these, and the alternatives that are not are increasingly niche.
If you want “non viral copyleft”, there are options: Mozilla Public License and the CDDL which has been derived from it. While they have niches in which they’re popular it’s not like they have taken off, so I’m not sure if “companies would be willing” is the right description.
Without the viral nature, couldn’t you essentially white-wash the license by forking once and relicensing as MIT, then forking the MIT fork? It would take any power out of the license to enforce its terms.
Virality is a separate thing from copyleft. People just think they are connected because the GPL is the first license that had both.
You can have a clause in the license that says that the software must be distributed under that license for the parts of the software that were originally under the license.
An example is a license I’ve written (https://yzena.com/yzena-copyleft-license/). It says specifically that the license only applies to the original source code, and any changes to the original source code. Anything else that is integrated (libraries, etc.) is not under the license.
Warning: Do NOT use that license. I have not had a lawyer check it. I will as soon as I can, but until then, it’s not a good idea to use.
So would you have to submit the source of the individual GPL components used as part of a derivative work? I don’t think the GPL would even make sense if it didn’t affect the whole project; that’s what the LGPL is for.
I think if you want to add a single GPL component you would need to release the full software under GPL. (Unless there were other licenses to allow the mixing)
While I like the aesthetic of @duck.com, you are still just trusting that DuckDuckGo will DoTheRightThing™ and not scrape, read, and sell your data maliciously, now or sometime far in the future, because it’s not just a ‘dumb’ email forwarder. I think more often than not DuckDuckGo has been on the side of data privacy, but if these services aren’t being charged for, they have to sell something to foot the bill.
One of the practical ways around this sort of tracking is to configure your client to prioritize text/plain emails, send a support email to text/html-using services, and subscribe to content via RSS instead of these automated mailing lists.
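In mutt, for instance, that prioritization is a one-liner (other clients have equivalent settings):

```muttrc
# prefer the plain-text part of multipart/alternative messages
alternative_order text/plain text/enriched text/html
```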
Exactly my thoughts when I read this: this smells just like the Google early days when “Don’t be evil” was still a slogan and not a punchline, and tech nerds would push all their friends to use it. While this is a legitimately useful service for those who aren’t using dumb terminal-based email clients (although, there’s no reason why regular all-bells-and-whistles mailclients can’t do this natively too with extra effort), Duckduckgo may have jumped the shark here with their mission scope creep. Or maybe I’m just becoming a cynical old man…
DuckDuckGo is hoping to get more search users (and so more ad money) by doing this because to sign up you have to download their browser for your mobile.
That’s a pretty straightforward monetisation strategy.
I don’t use either, so 🤷. But DDG in this case isn’t even the email provider: they are a proxy. So instead of needing to trust just your provider, you also need to add a link in your chain of trust for this proxy.
I thought it was known that Google definitely reads your Gmail messages. I wouldn’t send any trust in that direction. It’s an information/ad company; you should have zero expectation of privacy with them, even using paid services, let alone free ones.
The problem is that ‘reads your gmail messages’ has a lot of different possible meanings, for example:
- A human reads them.
- An automated system builds an advertising profile of you based on the contents of every message.
- An automated system scans them and produces some completely anonymised aggregate information, for example for spam filtering.
It’s not clear where on this kind of spectrum gmail actually sits. I doubt that they make it easy for an administrator to read your email, but how hard is it? I suspect that they try to anonymise the information that they aggregate from your email, but how well does it actually work in practice?
The DDG service would be a really great use for confidential computing, where you get a guarantee that they can’t look inside the VM and you get a verifiable attestation that the VM is running what they say it is.
I use what the Romans do, mostly (I have an Exchange mailbox, so that does make things more difficult):

- On Windows, I use Outlook.
  - Filtering is fairly powerful, and can be punted to the Exchange server when possible.
  - Threading is flattened AFAIK, at least by default. That is, a thread is grouped into one clump, and you can expand it to pick messages in the thread.
- On Mac, I use Mail.app.
  - I don’t use filtering here, so no idea.
  - Threading: messages are grouped together by default; the right pane shows them linearly, and they can be expanded in the mailbox view.
- On Linux, I use Evolution.
  - I don’t use filtering here.
  - Threading is nested, up to a certain point where it becomes flattened (because the diagonal line going too far would be too much).
Fair warning: I generally use HTML email, top post, etc, because it’s what people in the real world (i.e. my clients) do, not what people who talk about email do.
I haven’t heard of a simple HTML view, and I don’t use external editors, so I can’t speak to those. The crashing you’re having with mail clients is really unusual, though.
In general I’m mostly satisfied with all of them, but Evolution does have infuriating bugs synchronizing message state over IMAP (i.e. move a lot of messages at once to a folder, and watch random messages in your current mailbox flip between read/unread for a second until it updates).
Fair warning: I generally use HTML email, top post, etc, because it’s what people in the real world (i.e. my clients) do, not what people who talk about email do.
As far as I know there aren’t really any downsides to using plain text when mailing with people who use HTML, right? Or are there email clients that render this in a weird way?
Replying to an HTML thread in plain text can make the quoting render very poorly. I’m not sure whether this is down to the MUA doing the quoting or to the ones rendering the reply, but it gets pretty hard to read. If I’m replying to a large group on a thread that’s seen more than one HTML message, I prefer to do so in HTML.
Some plain text renderers default to proportional fonts and mangle line endings and extra blank lines. (Notably, Gmail.) If you want your recipient to see the email the same way you intended, you should take this into consideration.
I write emails in proportional font anyway, so that’s not really an issue.
I dusted off my gmail account, and it seems alright from a quick test, at least in gmail (CC @hoistbypetard)
I don’t really mind HTML email as such and I’m hardly some sort of plain-text purist (I just use the FastMail web UI); I mostly just dislike all the silly font stuff people do with it: weird fonts, weird colours (does Outlook still do that stupid blue text thing?), etc. so I prefer to read emails in the plain text version.
Although it seems I can set writing to “rich” and reading to “plain text”, and this will put the HTML in the quoted version – so that might be a happy in-between.
I saw an HN comment suggesting plain text was negatively affecting their deliverability. This was anecdotal, of course, but there could be something to it. The proposed reasoning was that maybe more spam is sent as plain text. A whole bunch of maybes, but there could be some truth in there.
They’d match a pattern of spam. It’s also worth noting that spam filters tend to be tuned per person, and if you’re someone who only corresponds with people who use HTML mail, then your filters will probably learn that all legitimate mail comes with an HTML MIME-type section.
Fair warning: I generally use HTML email, top post, etc, because it’s what people in the real world (i.e. my clients) do, not what people who talk about email do.
I need to correspond with a mix of those “people in the real world” and people who talk about email.
When I’m interacting with people who top post and use HTML email, I either use webmail or fire up a Win 10 VM and use Outlook.
When I’m interacting with people who react angrily to top posting and HTML email, I use mutt. I’ve been experimenting with aerc and like it quite a bit when it’s stable. Especially for reviewing patchsets that come in over email. For reading most email, these are my preferred clients, and both can just use a maildir so it’s easy to switch back and forth on a whim.
All the clients I use (even Outlook) can be switched to plain text; Evolution has the best implementation because it auto-reflows to wrap and tries to map HTML formatting options to plain-textified ones. Every project I work with has a forge, so it’s trivial for me to just use plain text only for lists. (I’ve received Python scripts in Excel files more than I have diff format, so…. You haven’t lived until you had to switch the Excel worksheet to view a .bash_profile.)
I like switching MUAs better than flipping the setting back to HTML when I need it in a reply. Mostly because I have found that if you keep your default format as plain text, then switch to HTML just when you need it in a reply, quoting gets messed up.
So the dance becomes: read a message where I want to reply with HTML -> go to settings and change how I compose email -> go back to the message and hit reply.
I’ve observed that in Outlook and Thunderbird. Evolution’s S/MIME bugs prevented me from using it for so long that I got out of the habit of even trying it.
But this:
You haven’t lived until you had to switch the Excel worksheet to view a .bash_profile.
Apple Mail has a reply in kind setting where it chooses plain/HTML based on what the other person sent you. I feel like other email clients should have something similar? It seems like an obvious feature.
Does it suppress HTML if the other person didn’t send you that?
When you’re composing a new message, not replying, the iPhone/iPad version (at least) does not let you prevent it from sending HTML. So as far as I have found, there is not a way to say “always send plaintext only when I write email to this mailing list”. If it could do that, the reply in kind setting would probably make it work well for me all the time.
NSTextView has different modes for plain and rich-text editing. When rich-text editing is not used, you can’t paste styled text into it and none of the style-related menu options work. Mail.app uses this directly[1], so when you’re in plain-text mode, you’re in plain-text mode. As I recall, the same shortcut as TextEdit can be used for switching between the two modes.
Outlook, in contrast, uses the RichText control for editing and then strips the rich text. This is really annoying in some cases, most notably in a Teams Meeting invitation. If you send a Teams Meeting invitation as plain text, all of the links stop working. This is fine if the recipient is on Exchange because it also sets some metadata that Teams can use and so you can still connect, but if you send it to an external person they get something that says ‘click here to join’ and isn’t a link. There’s no warning in the UI about this. I filed a bug about this a couple of years ago.
[1] I believe newer versions may use a slightly tweaked / subclassed version, because they do draw coloured lines down the side for sequences of quoted text, but that’s all.
For me, one of the biggest sells here is ngx_lua/OpenResty. All those times when you’d normally have to add an additional API layer or cloud functions because you want to do something like protect an API key, you can just add a short-circuit route in nginx. Or stuff like contact forms on static sites. I imagine you could do some of that with HAProxy’s Lua scripting, but the ecosystem around OpenResty is fairly well built out for a lot of the stuff you’d need.
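A minimal sketch of the API-key case (the upstream URL, header name, and environment variable are assumptions):

```nginx
# requires `env WEATHER_API_KEY;` at the top level of nginx.conf
location /weather {
    access_by_lua_block {
        -- attach the secret server-side so it never ships to the browser
        ngx.req.set_header("X-Api-Key", os.getenv("WEATHER_API_KEY"))
    }
    proxy_pass https://api.example.com/v1/weather;
}
```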
Janet uses a similar syntax, where there are matching mutable and immutable data structures (see Janet’s Data Structures docs), except they do the opposite: immutable is the “normal” syntax, and you denote mutable structures with an @. That’s one of the differences between a well-designed language and a language that continuously bolts on whatever is popular at the time.
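Concretely, in Janet:

```janet
(def t [1 2 3])     # tuple: immutable, the "plain" syntax
(def a @[1 2 3])    # array: the mutable counterpart
(def s {:x 1})      # struct: immutable
(def m @{:x 1})     # table: mutable
```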
JS is literally on the exact same course as php in many ways. Both started off as niche languages to support web dev. Both continuously merge popular ideas from 3rd-party libraries. Neither can purge the excess without “breaking the web”.
I dislike the use of # as well, but for different reasons. It’s already being used for private class fields, so this is going to make parsing more complicated and also be somewhat difficult to understand for new developers.
Making _varName private though would be backwards-incompatible, which is a nonstarter for JS. The irony is that the popularity of the convention makes this problem even more significant, because library consumers hacking together a solution by fiddling with the library’s “private” variables is probably a frequent occurrence.
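For reference, a minimal sketch contrasting the two (the names here are made up):

```js
class Counter {
  #count = 0;          // real private field: unreachable outside the class
  _count = 0;          // "private" by convention only

  increment() { return ++this.#count; }
}

const c = new Counter();
c._count = 42;         // allowed, which is exactly the problem
// c.#count;           // SyntaxError outside the class body
```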
Every type in Teal accepts nil as a valid value, even if, like in Lua, attempting to use it with some operations would cause a runtime error, so be aware!
This is a bit disappointing for me to read since nils are by far the most common type errors in Lua. I’m definitely open to the idea of putting a little more work into my coding by thinking in types, but the types need to pull their weight! A type system which can’t catch the most common type error feels like a missed opportunity.
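A sketch of the kind of thing that slips through, if I'm reading that correctly (the function is made up):

```teal
local function greet(name: string): string
   return "hello " .. name
end

print(greet(nil))  -- type-checks, then dies at runtime: attempt to concatenate a nil value
```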
Yeah, it looks really promising. IIRC Pallene is developed by the core Lua developers. Unfortunately the documentation in their repo does not have enough detail to determine whether their type system has the same nil problem as Teal’s.
One of the things I notice when working in Lua is, I’m sure because of its relatively small developer community (as compared to say Java or Python or C/C++) I find a lot of places where the Lua ecosystem goes right up to the edge of the water and then just … stops.
Like, as a for instance, try getting luarocks working on a non *NIX based system. It’s not easy :) I know it’s been done but - not easy.
Again this is totally understandable because polish and depth require engineer hours to create and those don’t grow on trees.
I find a lot of places where the Lua ecosystem goes right up to the edge of the water and then just … stops.
My perspective on this is that Lua developers tend to have more restraint and recognize that sometimes if you can’t do something right, it’s better not to do it at all.
Unrelated to this, but you may be pleased to know that you can use ?. to safely access values that may not exist in JS. e.g. const name = some?.nested?.obj?.name;
Totally agree. This makes me think of all the gyrations Swift goes through to ensure that you’re never using or getting a potentially nil value unless you really REALLY need it and mean for that to be possible in this circumstance.
Pallene presents an alternative to JIT compiling Lua. Instead of writing non-idiomatic JIT-centric code to avoid implicit “optimization killers” in LuaJIT, Pallene presents an explicit subset of idioms and optional type annotations that allows for standard compiler optimizations as well as Lua-specific optimizations.
You should check out the Janet programming language; it has PEGs as part of the core: https://janet-lang.org/docs/peg.html. It’s overall a nice LISP dialect with some Lua flavour.
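For example, matching and capturing a run of lowercase letters:

```janet
(peg/match ~(capture (some (range "az"))) "hello")   # => @["hello"]
(peg/match ~(capture (some (range "az"))) "1234")    # => nil (no match)
```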
On a related note, I’m also now the owner of the shithub.us domain – I’m thinking of offering semi-public git hosting on it. The name came up, and the comparison is obvious.
@ruki are you the dev? Maybe you can opt in to Hacktoberfest and get some motivated individuals to write some LDoc or similar annotations for the project. I’d love to jump in, but without some more comprehensive docs I wouldn’t know where to begin. Looks great though
Yes, but I don’t have much time to write documents. You can directly refer to all examples in the tests directory. And thank you very much for your suggestions.
I understand the dilemma. There’s a gang of people looking to complete Hacktoberfest, and would probably write (at least some of) those annotations for you if you opt-in and throw some issues up for completing them. Might as well cash in on free help while it’s available.
Although Hacktoberfest seems very interesting, I still don’t know how to organize and create events, and I don’t think anyone will participate in my project.
Hey, author of the post here! Really happy to see it on Lobsters, and I’d be happy to answer any questions and/or comments you have!
I encountered this “bug” while working on rewriting my iOS app with the new App and Scene structures introduced during WWDC 2020. The project is nearing completion, and I’m really excited about how it’s turning out.
Understandable. I was attempting to make it “retro,” though I’m going to change the font when I rewrite the site (soon) to make it clearer and load faster.
The exclamation marks on the trailing side are supposed to be the main indicator of priority, which I understand might be too small of an indicator.
The “ascending task name sort” is just a fancy way of saying alphabetical order. Because it’s the third priority it may seem a little random, but what it does is sort all tasks of the same priority and the same category (completed/ongoing) in alphabetical order.
Feature suggestions:
I love the idea of color/weight indicators for priority! Definitely going to implement that going forward.
Completed tasks are grayed out in addition to the strikethrough in the main app, I’ve just yet to implement it in the rewrite.
The timestamp sort would be an important thing, but a big feature of the app is that tasks get deleted at midnight every day so that would be a really short-term thing. I will consider adding it as an additional sort method, though.
Infon is a FOSS game akin to Corewars or Screeps, using Lua as its scripting language.
I enjoy Peter’s blog, as well as his book on PF. The book actually made PF accessible to most!
That book, The Book of PF, is on sale right now on Humble. You can pick it up in the $10 tier, but there are a lot of other gems in there as well.
Yup, yup, already did. I enjoy those *BSD related bundles. :-)
OpenBSD assumes too much hardware. In the event of apocalypse what you probably want is CollapseOS
Would it really “break the web” if PHP removed some of the sketchy parts of the language? It’s not like JS in the browser where you don’t have control over the version you’re running. Updating JS would break existing websites running old code. But pushing breaking changes in a version bump to PHP doesn’t have to break anything. It’s running on the backend and a lot of people are (or were at least) using old versions for WordPress anyway. Why don’t the just purge some of that stuff and make it actually nice.
Folks using those sketchy parts will just keep using older builds of PHP which keeps their applications sketchy but also locks them out of security updates. At least that’s been my experience developing with PHP in the past.
There is an almost constant push from a segment of the php internals community (i.e. the community of people that actually develop php itself) to improve things, make it stricter, remove old stuff.
In practically every case, the age old “but BC” is trotted out. Sometimes enough people think the improvement is worth it, and the vote passes. In some cases it doesn’t.
If you want to really scream “wtf” at your display, go read through the php internals mailing list messages arguing against the recently passed RFC that makes accessing undeclared variables an error, not just a warning (and thus one that can’t just be silenced any more). You read that correctly: in 2022, people are arguing in favour of the ability to deliberately access undeclared variables without an error.
I have mixed feelings about a lot of this. Being a user of both Lua and JavaScript, I see a lot of issues PHP has handled in different ways.
In Lua, they allow breaking backwards compatibility, and do it liberally if needed. The language is better for it, BUT Lua is primarily an embedded language, so users are free to use whatever version, official or not, they want. JavaScript, on the other hand, still carries around stuff like string.bold, because removing stuff like that could actually break unmodified code that has otherwise been running just fine on the web.

I’m not a huge user of PHP, but I have to work with it from time to time. I would think that because of the nature of stuff like includes, where code is basically inserted inline, you don’t always know what variables will be available at run-time. Sometimes, like in Lua, it’s easier to just add null guards here and there or find other ways to protect your code against null values.
I mean, php isn’t a client-side language; there’s no reason a developer isn’t in control of the version of php they use, now more than ever.
When mod_php was “the way”, sure, it was harder to pick and choose versions on any kind of multiple-use system (either shared hosting, or even multiple sites/projects on a single VM/server), but these days php-fpm (i.e. a FastCGI daemon) is undoubtedly “the way”, and it’s trivial to have half a dozen different versions of php installed concurrently and target a specific version per vhost/site. Furthermore, easier service management via systemd and/or system encapsulation via containers means that running stuff which has very esoteric needs (i.e. needs mod_php, needs php 5.4 or something else equally ancient) on an otherwise multi-use system is not rocket science.

The biggest difference is probably that php is generally exposed to the web as a server, which Lua can do obviously, but it isn’t Lua’s raison d’être, so running an older version that is potentially not receiving security updates is probably less critical.
So let’s say you have two files, a.php and b.php.
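A minimal sketch of what the two files might have looked like (a hypothetical reconstruction; the variable name stuffl is invented, chosen only so the second reference can misspell a lowercase “l” as a capital “I”):

```php
<?php
// a.php (hypothetical reconstruction)
$stuffl = 'stuff';   // note: the name ends in a lowercase "l"
include 'b.php';
```

```php
<?php
// b.php (hypothetical reconstruction)
echo $stuffl;   // defined in a.php: prints "stuff"
echo $stuffI;   // typo with a capital "I": undefined, so a warning and no output
```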
So, until php 8.2, if you run the above you’ll either get “stuff” as output, or a warning and then nothing, because an undefined variable is treated as null (and I deliberately misspelled the second one with a capital “I” instead of a lowercase “l”).

The argument isn’t about “should we allow for the concept of undefined variables”. That already exists in multiple ways (isset(), the ?? null-coalescing operator, and so on).
The argument being made isn’t “an undefined variable is something we have to handle, this is the only way”.
The argument being made is “it’s too much work to explicitly check if a potentially undefined variable is defined before accessing it, and keeping the ‘default to null’ behaviour is more important than helping to solve a whole swathe of bugs caused by unexpectedly undefined variables (either due to logic issues, var name typos, etc.)”.
One of the biggest benefits to me is that it forces people to be explicit in their code now. Dealing with other people’s sloppy code is a nightmare, and even the best IDE can’t tell you what someone else’s intention was.
They did it with the PHP 4 => 5 transition: there was a massive breaking change in how objects were passed by reference (they used to be passed by value, if you can believe it). So they can do it again, perhaps over many releases, by gradually deprecating (and replacing) and then removing the old APIs. I think it’s more likely that they don’t really feel the sketchy parts are that sketchy.
Lua actually makes a great data description language.
Thanks for the tip!
Historically, I believe that was the purpose Lua was created for.
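A minimal sketch of that use (file and field names invented):

```lua
-- config.lua: data description, with a real language available when needed
local hosts = {}
for i = 1, 3 do
  hosts[#hosts + 1] = { name = "web" .. i, port = 8000 + i }
end

return {
  app_name = "example",
  debug    = false,
  hosts    = hosts,
}
```

The host program then loads it with something like local config = dofile("config.lua").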
I kind of prefer there to be a separation between a configuration syntax and a DSL. On the one hand, using a full language for configs makes it hard for new users; I just encountered Gulp (based on JS) last week and the config files are quite confusing. On the other hand, shoehorning logic into a config syntax ends up recreating control-flow constructs in ugly ways (viz. CMake and GitHub Actions YAML).
Or Starlark! I really enjoy writing configuration files in Starlark that I then serialize to YAML/JSON/protobuf.
Well, I graduated from a film school. I have minimal knowledge of math and engineering (I dropped out of engineering after my second year). I still find a ton of value in SICP.
I think it is completely OK to recommend SICP. It is just that it is not an easy book; it requires effort. The effort required varies from reader to reader, and it may require you to go fetch another book and study for a while before coming back. It is OK to be challenged by a good book. A similar thing happens with TAoCP as well; heck, that book set was so far above my pay grade that sometimes I had to go wash my face with cold water and think.
Now, what I think is important is to know when to recommend SICP and when not to. If someone says they want to learn programming basics fast because they’re in some bootcamp and need a bit of background to move on, then you’d be better off recommending something more suitable.
As for alternatives for SICP for those who don’t want to dive too deep into math and engineering, I really enjoy How To Design Programs which I’ve seen being described as “SICP but for humanities”.
A side note, but another book that can work well as a prequel (or alternative) to SICP is Simply Scheme. In fact, that’s exactly how the authors describe the book.
Some of Brian Harvey’s lectures on Scheme are available on YouTube. There used to be more, as I recall, but some are private now. A shame—I remember enjoying his lectures a lot.
I found the first chapter or two of SICP to be uncomfortably math heavy. But my recollection is that after those, it’s relatively smooth sailing.
I have to say comments like these and the GP are reassuring. I’m working through it now as an experienced programmer trying to formalize my foundational computer science, and I’m just having a hard time digging in at the start. Not that I’m uncomfortable with recursion or Big O stuff; it’s just very information-dense and hard to “just read” while keeping all the moving parts in your headspace.
Brian Harvey is amazing.
He also wrote a series of books called “Computer Science Logo Style” or similar (he’s the author of UCB Logo).
I really enjoyed those books as I’m an avid LOGO fan. I’m still kinda sad that dynaturtles are effectively going to die because the only implementation still even remotely extant is LCSI’s Microworlds.
I haven’t played with it, but I know that Racket has a #lang logo and turtle graphics.

I’ve been wanting to learn Racket for a while. It’s next on my list after JavaScript. Sadly it takes me years to truly grok a programming language to any level of real mastery :)
In the realm of alternatives to SICP for teaching programming, I’ve really enjoyed The Little Schemer and its follow-up books.
I love that book. Did you read through the latest one, The Little Typer? I haven’t yet moved past The Seasoned Schemer.
The latest one is in the virtual book pile. But I’d like to get to it eventually. Thanks for the reminder. :)
The Little LISPer and its descendants are seriously pure sheer delight in book form.
They embody all the beautiful playfulness and whimsy I LOVE in computers that has been sucked out of so much happening today.
Bonus points: even if you couldn’t care less about Scheme/LISP, you learn recursion!
After struggling hard at trying to understand recursion in my freshman year with Concurrent Clean (the teacher’s pet language, a bit Haskell-like), this book made everything click. It also made me fall in love with Scheme because of its simplicity and functional, high-level approach. After the nightmare of complexity and weird, obscure tooling of Clean, this was such a breath of fresh air.
I really need to sit down and work through The Little Typer :)
I don’t get The Little Schemer. There doesn’t seem to be a point to it, something it’s working towards. I feel like I should be enlightened in some way in the end, but it just seemed to end without doing anything special. What am I missing?
I like this approach. Recommend SICP, but make it SUPER clear that it’s a difficult book. It’s OK to get to a point where you say “This is over my head” and for those who are up for it, there’s an additional challenge there of “OK now take the next step! Learn what you need in order to understand”.
Not everyone has the time or the patience for that, but as long as we’re 1000% clear in our recommendations, I think recommending SICP is just fine.
However IMO it is THE WORST to recommend SICP to people who are looking for a quick and easy way to learn to program or learn the rudiments of computer science.
Yep. The Lemur Pro I’m typing this on has similarly atrocious speakers.
Yeah they’re usable but not the greatest, they occasionally sound tinny but otherwise work in a pinch.
One of the first things I do when I set up Linux on a new laptop is disable the onboard sound. They should just stop shipping laptops with built-in speakers; they’re all pretty bad, and pluggable/Bluetooth speakers or headphones will almost always be a better experience.
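For anyone who wants to do the same, a rough sketch (assuming the common Intel HDA driver; the module name varies by machine):

```sh
# Find the audio driver in use; on many laptops it's snd_hda_intel
lspci -k | grep -iA3 audio

# Blacklist it so it never loads
echo "blacklist snd_hda_intel" | sudo tee /etc/modprobe.d/no-onboard-audio.conf

# Debian/Ubuntu: rebuild the initramfs so the blacklist applies at boot
sudo update-initramfs -u
```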
My T450 had alright speakers, and my (work) MacBook Pro has good ones. The S76 speakers are bad even by the standard of laptop speakers.
I find myself reluctantly agreeing with most of the article, which makes me sad. Nevertheless, I would like to be pragmatic about this.
That said, I think that most of the problems with the GPL can be sufficiently mitigated if we just remove the virality. In particular, I don’t think that copyleft is the problem.
The reason is that I believe that without the virality, companies would be willing to use copyleft licenses, since the requirements for compliance would literally be “publish your changes.” That’s a low bar, and especially easy in the world of DVCSes and GitHub.
However, I could be wrong, so if I am, please tell me how.
The problem with ‘non-viral’ copyleft licenses (more commonly known as ‘per-file copyleft’ licenses) is that they impede refactoring. They’re fine if the thing is completely self-contained but if you want to change where a layer is in the system then you can’t move functions between files without talking to lawyers. Oh, and if you use them you’re typically flamed by the FSF because I don’t think anyone has managed to write a per-file copyleft license that is GPL-compatible (Mozilla got around this by triple-licensing things).
That said, I think one of the key parts of this article is something that I wrote about 15 or so years ago: From an end-user perspective, MS Office better meets a bunch of the Free Software Manifesto requirements than OpenOffice. If I find a critical bug in either then, as an experienced C++ programmer, I still have approximately the same chance of fixing it in either: zero. MS doesn’t let me fix the MS Office bug[1] but I’ve read some of the OpenOffice code and I still have nightmares about it. For a typical user, who isn’t a C++ programmer, OpenOffice is even more an opaque blob.
The fact that MS Office is proprietary has meant that it has been required to expose stable public interfaces for customisation. This means that it is much easier for a small company to maintain a load of in-house extensions to MS Office than it is to do the same for most F/OSS projects. In the ‘90s, MS invested heavily in end-user programming tools and as a result it’s quite easy for someone with a very small amount of programming experience to write some simple automation for their workload in MS Office. A lot of F/OSS projects have an elitist attitude about programming and don’t want end users to be extending the programs unless they pass the gatekeeping requirements of learning programming languages whose abstractions are far too low-level for the task at hand. There is really no reason that anything other than a core bit of compute-heavy code for any desktop or mobile app needs to be written in C/C++/Rust, when it could be in interpreted Python or Lua without any user-perceptible difference in performance.
Even the second-source argument (which is really compelling to a lot of companies) doesn’t really hold up, because modern codebases are so huge. Remember that Stallman was writing that manifesto back when a typical home computer such as the BBC Model B was sufficiently simple that a single person could completely understand the entire hardware and software stack, and a complete UNIX system could be written by half a dozen people in a year (Minix was released a few years later and was written by a single person, including kernel and userland. It was around 15,000 lines of code). Modern software is insanely complicated. Just the kernel for a modern *NIX system is millions of lines of code, and so is the compiler. The bc utility is a tiny part of the FreeBSD base system (if memory serves, you wrote it, so should be familiar with the codebase) and yet is more code than the whole of UNIX Release 7 (it also has about as much documentation as the entire printed manual for UNIX Release 7).

In a world where software is this complex, it might be possible for a second company to come along and fix a bug or add a feature for you, but it’s going to be a lot more expensive for them to do it than the company that’s familiar with the codebase. This is pretty much the core of Red Hat’s business model: they use Fedora to push core bits of Red Hat-controlled code into the Linux ecosystem, make them dependencies for everything, and then can charge whatever they like for support because no one else understands the code.
From an end-user perspective, well-documented stable interfaces with end-user programming tools give you the key advantages of Free Software. If there are two (or more) companies that implement the same stable interfaces, that’s a complete win.
F/OSS also struggles with an economic model. Proprietary software exists because we don’t have a good model for any kind of zero-marginal-cost goods. Creating a new movie, novel, piece of investigative journalism, program, and so on, is an expensive activity that needs funding. Copying any of these things has approximately zero cost, yet we fund the former by charging for the latter. This makes absolutely no sense from any rational perspective yet it is, to date, the only model that has been made to work at scale.
[1] Well, okay, I work at MS and with the whole ‘One Microsoft’ initiative I can browse all of our internal code and submit fixes, but this isn’t an option for most people.
I’ve found this to be true for Windows too, as I wrote in a previous comment. I technically know how to extend the Linux desktop beyond writing baubles, but it’s shifting sands compared to how good Windows has been with extensibility. I’m not going to maintain a toolkit or desktop patchset unless I run like, Gentoo.
BTW, from your other reply:
I suspect this is why it never built a tool something like Access/HyperCard/Excel/etc. that empowers end users - because they don’t need it, because they are developers. Arguably, the original sin of free software is assuming users are developers, and in a wider sense, that’s why its threat model drifted further from reality.
Is it possible to have a non-viral copyleft license that is not per-file? I hope so, and I wrote licenses to do that which I am going to have checked by a lawyer. If he says it’s impossible, I’ll have to give up on that.
Eh, I’m not worried about GPL compatibility. And I’m not worried about being flamed by the FSF.
This is a good point, and it is a massive blow against Free Software since Free Software was supposed to be about the users.
I personally think this is a separate problem, but yes, one that has to be fixed before the second-source argument applies.
Sure, it’s a tiny part of the codebase, but I’m not sure bc is a good example here. bc is probably the most complicated of the POSIX tools, and it still has fewer lines of code than MINIX. (It’s about 10k of actual lines of code; there are a lot of comments for documentation.) You said MINIX implemented userspace; does that mean the POSIX tools? If it did, I have very little faith in the robustness of those tools.

I don’t know if you’ve read the sources of the original Morris bc, but I have (well, its closest descendant). It was terrible code. When checking for keywords, the parser just checked the second letter of a name and then happily continued. And hardly any error checking at all.

After looking at that code, I wondered how much of original Unix was terrible in the same way, and how terrible MINIX’s userspace is as well.
So I don’t think holding up original Unix as an example of “this is how simple software can be” is a good idea. More complexity is needed than that; we want robust software as well.
In other words, I think there is a place for more complexity in software than original Unix had. However, the complexity in modern-day software is out of control. Compilers don’t need to be millions of lines of code, and if you discount drivers, neither should operating systems. But they can have a good amount of code. (I think a compiler with 100k LOC is not too bad, if you include optimizations.)
So we’ve gone from too much minimalism to too much complexity. I hope we can find the center between those two. How do we know when we have found it? When our software is robust. Too much minimalism removes robustness, and too much complexity does the same thing. (I should write a blog post about that, but the CMake/Make recursive performance one comes first.)
bc is complex because it’s robust. In fact, I always issue a challenge to people who claim that my code is bad: find a crash or a memory bug in bc. No one has ever come back with such a bug. That is the robustness I am talking about. That said, if bc were any more complex than it is (and I could still probably reduce its complexity), then it could not be as robust as it is.

Also, with regards to the documentation, it has that much documentation because (I think) it documents more than the Unix manual did. I have documented it to ensure that the bus factor is not a thing, so the documentation goes down to the code level, including why I made the decisions I did, the algorithms I used, etc. I don’t think the Unix manual covered those things.
This is a point I find myself reluctantly agreeing with, and I think it goes back to something you said earlier:
This, I think, is the biggest problem with FOSS. FOSS was supposed to be about user freedom, but instead, we adopted this terrible attitude and lost our way.
Perhaps if we discarded this attitude and made software designed for users and easy for users to use and extend, we might turn things around. But we cannot make progress with that attitude.
That does, of course, point to you being correct about other things, specifically, that licenses matter too much right now because if we changed that attitude, would licenses really matter? In my opinion, not to the end user, at least.
To be clear, I’m not saying that everything should be as simple as code of that era. UNIX Release 7 and Minix 1.0 were on the order of 10-20KLoC for two related reasons:
Minix did, I believe, implement POSIX.1, but so did NT4’s POSIX layer: returning ENOTIMPLEMENTED was a valid implementation, and it was also valid for setlocale to support only "C" and "POSIX". Things that were missing were added in later systems because they were useful.

My point is that the GNU Manifesto was written at a time when it was completely feasible for someone to sit down and rewrite all of the software on their computer from scratch. Today, I don’t think I would be confident that I could rewrite awk or bc, let alone Chromium or LLVM, from scratch, and I don’t think I’d even be confident that I could fix a bug in one of these projects (I’ve been working on LLVM since around 2007 and there are bugs I’ve encountered that I’ve had no idea how to fix, and LLVM is one of the most approachable large codebases that I’ve worked on).

I’m not convinced that we have too much complexity. There’s definitely some legacy cruft in these systems, but a lot of what’s there is there because it has real value. I think there’s also a principle of conservation of complexity. Removing complexity at one layer tends to cause it to reappear at another, and that can leave you with a less robust system overall.
I created a desktop environment project around this idea but we didn’t have sufficient interest from developers to be able to build anything compelling. F/OSS has a singular strength that is also a weakness: It is generally written by people who want to use the software, not by people who want to sell the software. This means that it tends to be incredibly usable to the authors but it is only usable in general if the authors are representative of the general population (and since they are, by definition, programmers, that is intrinsically not the case).
One of the most interesting things I’ve seen in usability research was a study in the early 2000s that showed that only around 10-20% of the population thinks in terms of hierarchies for organisation. Most modern programming languages implicitly have a notion of hierarchy (nested scopes and so on) and this is not a natural mindset of the majority of humans (and the most widely used programming language, Excel, does not have this kind of abstraction). This was really obvious when iTunes came out with its tag-and-filter model: most programmers said ‘this is stupid, my music is already organised in folders in a nice hierarchy’ and everyone else said ‘yay, now I can organise my music!’. I don’t think we can really make usable software until we have programming languages that are usable by most people, so that F/OSS projects can have contributors that really reflect how everyone thinks. Sadly, I’m making this problem worse by working on a programming language that retains several notions of hierarchy. I’d love to find a way of removing them but they’re fairly intrinsic to any kind of inductive proof, which is (to date) necessary for a sound type system.
Licenses probably wouldn’t matter to end users, but they would still matter for companies. I think one of the big things that the F/OSS community misses is that 90% of people who write software don’t work for a tech company. They work for companies whose primary business is something else and they just need some in-house system that’s bespoke. Licensing matters a lot to these people because they don’t have in-house lawyers who are an expert in software licenses and so they avoid any license that they don’t understand without talking to a lawyer. These people should be the ones that F/OSS communities target aggressively because they are working on software that is not their core business and so releasing it publicly has little or no financial cost to them.
Apologies.
Okay, that makes sense, and I agree that the situation has changed.
I think I can tell you that you could rewrite awk or bc. They’re not that hard, and 10k LOC is a walk in the park for someone like you. But point taken with LLVM and Chromium.

But then again, I think LLVM could be less complex. Chromium could be as well, but it’s limited by the W3C standards. I could be wrong, though.
I think the biggest problem with most software, including LLVM, is scope creep. Even with bc, I feel the temptation to add more and more.

With LLVM, I do understand that there is a lot of inherent complexity: targeting multiple platforms, lots of needed canonicalization passes, lots of optimization passes, codegen, register allocation. Obviously, you know this better than I do, but I just wanted to make it clear that I understand the inherent complexity. But is it all inherent?
There is a lot of truth to that, but that’s why I specifically said (or meant) that maximum robustness is the target. I doubt you or anyone would say that Chromium is as robust as possible. I personally would not claim that about LLVM either. I also certainly would not claim that about Linux, FreeBSD, or even ZFS!
And I would not include legacy cruft in “too much complexity” unless it is past time that it is removed. For example, Linux keeping deprecated syscalls is not too much complexity, but keeping support for certain arches that have only single-digit users, none of whom will update to the latest Linux, is definitely too much complexity. (It does take a while to identify such cruft, but we also don’t spend enough effort on it.)
Nevertheless, I agree that trying to remove complexity where you shouldn’t will lead to it reappearing elsewhere.
I agree with this, and the only thing I could think of to fix this is to create some software that I myself want to use, and to actually use it, but to make it so good that other people want to use it. Those people need support, which could lead to me “selling” the software, or services around it. Of course, as bc shows (because it does fulfill all of the requirements above, but people won’t pay for it), it should not just be anything, but something that would be critical to infrastructure.

I think I’ve seen that result, and it makes sense, but hierarchy unfortunately makes sense for programming because of the structured programming theorem.
That said, there is a type of programming (beyond Excel) that I think could be useful for the majority of humans: functional programming. Data goes in, gets crunched, comes out. I don’t think such transformation-oriented programming would be too hard for anyone. Bonus points if you can make it graphical (maybe like Blender’s node compositor?). Of course, it would probably end up being quite…inefficient…but once efficiency is required, they can probably get help from a programmer.
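A tiny sketch of the kind of transformation-oriented code I mean (hypothetical data; map and filter are written out because Lua’s standard library doesn’t ship them):

```lua
local function map(xs, f)
  local out = {}
  for i, x in ipairs(xs) do out[i] = f(x) end
  return out
end

local function filter(xs, p)
  local out = {}
  for _, x in ipairs(xs) do
    if p(x) then out[#out + 1] = x end
  end
  return out
end

-- data in -> drop bad readings -> convert F to C -> data out
local readings = { 12.1, -3.0, 7.5, 99.9, 4.2 }
local valid    = filter(readings, function(r) return r >= 0 and r < 90 end)
local celsius  = map(valid, function(f) return (f - 32) * 5 / 9 end)
```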
I don’t think it’s possible to create programming languages that produce software that is both efficient and well-structured without hierarchy, so I don’t think, in general, we’re going to be able to have contributors (for code specifically) that are not programmers. That does make me sad. However, what we could do is have more empathy for users and stop assuming we have the same perspective as they do. We could assume that what is good for normal users might not be bad for us and actually try to give them what they need.
But even with that, I don’t think the result from that research is that 80-90% of people can’t think in hierarchies, just that they do not do so naturally. I think they can learn. Whether they want to is another matter…
I could be wrong about both things; I’m still young and naive.
That’s a good point. How would you target those people if you were the one in charge?
Now that I have written a lot and taken up a lot of your time, I must apologize. Please don’t feel obligated to respond to me. But I have learned a lot in our conversations.
Maybe I misunderstand MPL 2.0, but I think this is a non-issue: if you’re not actually changing the code (just the location), you don’t have to publish anything. If you modify the code (changing implementation), then you have to publish the changes. This is easiest done on a per file basis of course, but I think you technically only need to publish the diff.
This is why it’s non viral: you say, “I’ve copied function X into my code and changed the input from integer to float”. You don’t have to say anything else about how it’s used or why such changes were necessary.
Generally, when you refactor, you don’t just move the code, you move and modify it. If you modify code from an MPL’d file that you’ve copied into another file then you need to make sure that you propagate the MPL into that file and share the changes.
How do they make their things “dependencies for everything”? It seems you left out a step where other vendors/distributions choose to adopt Red Hat projects or not.
ISTM that quite a number of RH-backed projects are now such major parts of the infrastructure of Linux that it’s quite hard not to use them. Examples: pulseaudio, systemd, Wayland, and GNOME spring to mind.
All the mainstream distros are now based on these, and the alternatives that are not are increasingly niche.
If you want “non viral copyleft”, there are options: Mozilla Public License and the CDDL which has been derived from it. While they have niches in which they’re popular it’s not like they have taken off, so I’m not sure if “companies would be willing” is the right description.
I think you have a point, which is discouraging to say the least.
Without the viral nature, couldn’t you essentially whitewash the license by forking once and relicensing as MIT, then forking the MIT fork? It would take any power out of the license to enforce its own terms.
No.
Virality is a separate thing from copyleft. People just think they are connected because the GPL is the first license that had both.
You can have a clause in the license that says that the software must be distributed under that license for the parts of the software that were originally under the license.
An example is a license I’ve written (https://yzena.com/yzena-copyleft-license/). It says specifically that the license only applies to the original source code, and any changes to the original source code. Anything else that is integrated (libraries, etc.) is not under the license.
Warning: Do NOT use that license. I have not had a lawyer check it. I will as soon as I can, but until then, it’s not a good idea to use.
No, because you can’t relicense someone else’s work.
“Virality” is talking about how it forces other software that depends on the viral software to be released under the same license.
So would you have to submit the source of the individual GPL components used as part of a derivative work? I don’t think the GPL would even make sense if it didn’t affect the whole project; that’s what the LGPL is for.
I think if you want to add a single GPL component you would need to release the full software under GPL. (Unless there were other licenses to allow the mixing)
While I like the aesthetic of @duck.com, you are still just trusting that DuckDuckGo will DoTheRightThing™ and not scrape, read, and sell your data maliciously now or sometime far in the future, because it’s not just a ‘dumb’ email forwarder. I think more often than not, DuckDuckGo has been on the side of data privacy, but if these services aren’t being charged for, they have to sell something to foot the bill.
One of the practical ways around this sort of tracking is to configure your client to prioritize text/plain emails, send a support email to text/html-using services, and subscribe to content via RSS instead of these automated mailing lists.
Exactly my thoughts when I read this: this smells just like the early Google days, when “Don’t be evil” was still a slogan and not a punchline, and tech nerds would push all their friends to use it. While this is a legitimately useful service for those who aren’t using dumb terminal-based email clients (although there’s no reason why regular all-bells-and-whistles mail clients can’t do this natively too with extra effort), DuckDuckGo may have jumped the shark here with their mission scope creep. Or maybe I’m just becoming a cynical old man…
DuckDuckGo is hoping to get more search users (and so more ad money) by doing this because to sign up you have to download their browser for your mobile.
That’s a pretty straightforward monetisation strategy.
I have the same trust issues with Apple and Google’s email services. They also “care a lot” about my privacy. 🤔
I don’t use either, so 🤷. But DDG in this case isn’t even the email provider: they are a proxy. So instead of needing to trust just your provider, you also need to add a link in your chain of trust for this proxy.
I thought it was known google definitely reads your gmail messages. I wouldn’t send any trust in that direction. It’s an information/ad company, you should have 0 expectation of privacy with them, even using paid services, especially not free ones.
The problem is that ‘reads your gmail messages’ has a lot of different possible meanings, for example:
It’s not clear where on this kind of spectrum gmail actually sits. I doubt that they make it easy for an administrator to read your email, but how hard is it? I suspect that they try to anonymise the information that they aggregate from your email, but how well does it actually work in practice?
The DDG service would be a really great use for confidential computing, where you get a guarantee that they can’t look inside the VM and you get a verifiable attestation that the VM is running what they say it is.
I use what the Romans do, mostly (I have an Exchange mailbox, so that does make things more difficult):
Fair warning: I generally use HTML email, top post, etc, because it’s what people in the real world (i.e. my clients) do, not what people who talk about email do.
I haven’t heard of a simple HTML view or use external editors, so I can’t speak for that. The crashing you’re having with mail clients is really unusual though.
In general I’m mostly satisfied with all of them, but Evolution does have infuriating bugs synchronizing message state over IMAP (i.e. move a lot of messages at once to a folder, then watch random messages in your current mailbox flip between read and unread for a second until it updates).
As far as I know there aren’t really any downside to using plain text when mailing with people who use HTML, right? Or are there email clients who render this in a weird way?
Replying to an HTML thread in plain text can make the quoting render very poorly. I’m not sure whether this is down to the MUA doing the quoting or to the ones rendering the reply, but it gets pretty hard to read. If I’m replying to a large group on a thread that’s seen more than one HTML message, I prefer to do so in HTML.
Some plain text renderers default to proportional fonts and mangle line endings and extra blank lines. (Notably, Gmail.) If you want your recipient to see the email the same way you intended, you should take this into consideration.
I write emails in proportional font anyway, so that’s not really an issue.
I dusted off my gmail account, and it seems alright from a quick test, at least in gmail (CC @hoistbypetard)
I don’t really mind HTML email as such and I’m hardly some sort of plain-text purist (I just use the FastMail web UI); I mostly just dislike all the silly font stuff people do with it: weird fonts, weird colours (does Outlook still do that stupid blue text thing?), etc. so I prefer to read emails in the plain text version.
Although it seems I can set writing to “rich” and reading to “plain text”, and this will put the HTML in the quoted version – so that might be a happy in-between.
I saw an HN comment suggesting plain text was negatively affecting their deliverability. This was anecdotal of course, but there could be something to it. The proposed reasoning was that maybe more spam is sent as plain text. A whole bunch of maybes, but there could be some truth in there.
Anecdotally, almost all of the spam I see is either:
It’s possible that if you’re sending emails that look like:
They’d match a pattern of spam. It’s also worth noting that spam filters tend to be tuned per person, and if you’re someone who only corresponds with people who use HTML mail, then your filters will probably learn that all legitimate mail comes with an HTML MIME-type section.
I need to correspond with a mix of those “people in the real world” and people who talk about email.
When I’m interacting with people who top post and use HTML email, I either use webmail or fire up a Win 10 VM and use Outlook.
When I’m interacting with people who react angrily to top posting and HTML email, I use mutt. I’ve been experimenting with aerc and like it quite a bit when it’s stable. Especially for reviewing patchsets that come in over email. For reading most email, these are my preferred clients, and both can just use a maildir so it’s easy to switch back and forth on a whim.
All the clients I use (even Outlook) can be switched to plain text; Evolution has the best implementation because it auto-reflows to wrap and tries to map HTML formatting options to plain-textified ones. Every project I work with has a forge, so it’s trivial for me to just use plain text only for lists. (I’ve received Python scripts in Excel files more often than I have diff format, so…. You haven’t lived until you’ve had to switch the Excel worksheet to view a .bash_profile.)

I like switching MUAs better than flipping the setting back to HTML when I need it in a reply. Mostly because I have found that if you keep your default format as plain text, then switch to HTML just when you need it in a reply, quoting gets messed up.
So the dance becomes: read a message where I want to reply with HTML -> go to settings and change how I compose email -> go back to the message and hit reply.
I’ve observed that in Outlook and Thunderbird. Evolution’s S/MIME bugs prevented me from using it for so long that I got out of the habit of even trying it.
But this:
Damn. That is special.
Apple Mail has a reply in kind setting where it chooses plain/HTML based on what the other person sent you. I feel like other email clients should have something similar? It seems like an obvious feature.
Does it suppress HTML if the other person didn’t send you that?
When you’re composing a new message, not replying, the iPhone/iPad version (at least) does not let you prevent it from sending HTML. So as far as I have found, there is not a way to say “always send plaintext only when I write email to this mailing list”. If it could do that, the reply in kind setting would probably make it work well for me all the time.
NSTextView has different modes for plain and rich-text editing. When rich-text editing is not used, you can’t paste styled text into it and none of the style-related menu options work. Mail.app uses this directly[1], and so when you’re in plain-text mode, you’re in plain-text mode. As I recall, the same shortcut as TextEdit can be used for switching between the two modes.

Outlook, in contrast, uses the RichText control for editing and then strips the rich text. This is really annoying in some cases, most notably in a Teams Meeting invitation. If you send a Teams Meeting invitation as plain text, all of the links stop working. This is fine if the recipient is on Exchange, because it also sets some metadata that Teams can use and so you can still connect, but if you send it to an external person they get something that says ‘click here to join’ and isn’t a link. There’s no warning in the UI about this. I filed a bug about this a couple of years ago.
[1] I believe newer versions may use a slightly tweaked / subclassed version, because they do draw coloured lines down the side for sequences of quoted text, but that’s all.
Yes. It replies fully plain text to email received plain text.
I set it to plain text by default, and reply in kind to HTML.
For me, one of the biggest sells of OpenResty is ngx_lua. All those times when you’d normally have to add an additional API layer or cloud functions because you want to do something like protect an API key, you could instead just add a short-circuit route in nginx. Or stuff like contact forms on static sites. I imagine you could do some of that with HAProxy’s Lua scripting, but the ecosystem around OpenResty is fairly well built out for a lot of the stuff you’d need.
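As a sketch of that short-circuit idea (the endpoint and key name are hypothetical; it assumes the widely used lua-resty-http library, and that WEATHER_KEY is exposed to nginx via the env directive):

```nginx
# The browser only ever sees /api/weather; the API key stays on the server.
location /api/weather {
    content_by_lua_block {
        local http  = require("resty.http")
        local httpc = http.new()
        local res, err = httpc:request_uri(
            "https://api.example.com/v1/weather?q=" .. (ngx.var.arg_q or ""),
            { headers = { Authorization = "Bearer " .. (os.getenv("WEATHER_KEY") or "") } }
        )
        if not res then
            ngx.status = 502
            ngx.say("upstream error: ", err)
            return
        end
        ngx.status = res.status
        ngx.say(res.body)
    }
}
```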
I really dislike the idea of it being prefixed by #; it gives me Perl/Ruby/PHP vibes. But I think Record and Tuple would be really nice to have in JS.
Janet uses a similar syntax, where there are matching mutable and immutable data structures (see Janet’s Data Structures), except they do the opposite: immutable is the “normal” syntax, and you denote mutable structures with an @. That’s one of the differences between a well-designed language and a language that continuously bolts on whatever is popular at the time.

Literally, JS is on the exact same course as php in many ways. Both started off as niche languages to support web dev. Both continuously merge popular ideas from 3rd-party libraries. Neither can purge the excess or “break the web”.
I dislike the use of # as well, but for different reasons. It’s already being used for private class fields, so this is going to make parsing more complicated and also be somewhat difficult to understand for new developers.

That one is really weird to me, because the convention has been to use _varName for a lonnnng time for private stuff. :shrug:

Making _varName private, though, would be backwards-incompatible, which is a nonstarter for JS. The irony is that the popularity of the convention makes this problem even more significant, because library consumers hacking together a solution by fiddling with the library’s “private” variables is probably a frequent occurrence.
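For context, a minimal sketch of the shipped # private fields next to the underscore convention:

```js
class Counter {
  #count = 0;     // a real private field: inaccessible outside the class
  _count = 0;     // just a naming convention: anyone can still touch it

  increment() { return ++this.#count; }
}

const c = new Counter();
c._count = 42;     // allowed, convention or not
// c.#count;       // SyntaxError outside the class body
```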
I wonder if OP has an opinion on Pony lang, one of the newer and more refreshing takes in OO offerings.
Weird to put this much work into a programming language but not bother to register a domain for it.
Looking a little deeper:
This is a bit disappointing for me to read, since nils are by far the most common type error in Lua. I’m definitely open to the idea of putting a little more work into my coding by thinking in types, but the types need to pull their weight! A type system which can’t catch the most common type error feels like a missed opportunity.
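To make that concrete, the classic failure in plain Lua (a hypothetical config table):

```lua
local config = { server = { host = "localhost" } }

print(config.server.host)  -- "localhost"
print(config.sever.host)   -- typo: config.sever is nil, so this raises
                           -- "attempt to index a nil value (field 'sever')"
```

A type system for Lua that can’t flag that second line is leaving a lot on the table.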
Fwiw the talk mentioned nil safety as a potential future direction.
While still in semi-early development, Pallene is another alternative with some additional performance benefits.
White Paper
Repo
Yeah, it looks really promising. IIRC Pallene is developed by the core Lua developers. Unfortunately the documentation in their repo does not have enough detail to determine whether their type system has the same nil problem as Teal’s.
One of the things I notice when working in Lua is that, I’m sure because of its relatively small developer community (as compared to, say, Java or Python or C/C++), there are a lot of places where the Lua ecosystem goes right up to the edge of the water and then just … stops.
Like, as a for instance, try getting luarocks working on a non *NIX based system. It’s not easy :) I know it’s been done but - not easy.
Again this is totally understandable because polish and depth require engineer hours to create and those don’t grow on trees.
My perspective on this is that Lua developers tend to have more restraint and recognize that sometimes if you can’t do something right, it’s better not to do it at all.
I appreciate that. It definitely is nice to skip the super annoying “Here are 30 half-baked almost-implementations of $THING” phase.
Like the fact that there used to be about 9000 Python distros for Windows and now there’s essentially 1 mainstream one.
I didn’t like this either. I appreciate that Go has the zero-value idea for basic types, but there is still the nil issue for interfaces and pointers.
Back in my JavaScript days it was tedious always checking for null values before doing the real work.
Unrelated to this, but you may be pleased to know that you can use ?. to safely access values that may not exist in JS, e.g. const name = some?.nested?.obj?.name;
Totally agree. This makes me think of all the gyrations Swift goes through to ensure that you’re never using or getting a potentially nil value unless you really REALLY need it and mean for that to be possible in that circumstance.
there is a domain, but not a website: http://teal-language.org
Pallene presents an alternative to JIT-compiling Lua. Instead of writing non-idiomatic JIT-centric code to avoid implicit “optimization killers” in LuaJIT, Pallene offers an explicit subset of idioms and optional type annotations that allows for standard compiler optimizations as well as Lua-specific optimizations.
Seems to be a more picture oriented version of https://git-rebase.io/
You should check Janet programming language; it has PEG as part of the core https://janet-lang.org/docs/peg.html. And it is overall nice LISP dialect with some Lua flavour.
I think that’s a great way to describe it. You can feel the Lua-isms in the design, even if the syntax is different.
On a related note, I’m also now the owner of the shithub.us domain – I’m thinking of offering semi-public git hosting on it. The name came up, and the comparison is obvious.

How about a webshop for toilets and accessories? I think it could work :-)
Having both would be amazing.
Or sell it to a Septic Tank draining service
@ruki are you the dev? Maybe you can opt in to Hacktoberfest and get some motivated individuals to write some LDoc or similar annotations for the project. I’d love to jump in, but without some more comprehensive docs I wouldn’t know where to begin. Looks great though
Yes, but I don’t have much time to write documentation. You can refer directly to all the examples in the tests directory. And thank you very much for your suggestions.
I understand the dilemma. There’s a gang of people looking to complete Hacktoberfest, and would probably write (at least some of) those annotations for you if you opt-in and throw some issues up for completing them. Might as well cash in on free help while it’s available.
Thanks, I will consider using LDoc to generate documentation. https://github.com/tboox/ltui/issues/13
Although Hacktoberfest seems very interesting, I still don’t know how to organize and create events, and I don’t think anyone will participate in my project.
Hey, author of the post here! Really happy to see it on Lobsters, and I’d be happy to answer any questions and/or comments you have!
I encountered this “bug” while working on rewriting my iOS app with the new App and Scene structures introduced during WWDC2020. The project is nearing completion, and I’m really excited about how it’s turning out.
Enjoy!
Unfortunately not related to the content, but for me the font choice made the post too difficult to read.
Understandable. I was attempting to make it “retro,” though I’m going to change the font when I rewrite the site (soon) to make it clearer and load faster.
I agree with you. Try using Reader View if your browser supports it. It’s much better.
Pictures/videos also don’t work in Safari 14.
Yeah, they’re in .webm, which for some reason is not supported by Safari despite massive size reductions compared to mp4. Going to need to add mp4s.
Nice post! Happens to all of us :-)
That’s what you get for populating static items in a list. I’m a little confused about the sorting (or whether it works as needed):
The above statements sound nice, but:
I’m not an Apple user, but I would enjoy having a task list with the following features:
Thank you for the great suggestions!
Some clarifications about sorting:
Feature suggestions: