What’s that C++++-= language he keeps referencing? Some theoretical, perfect language or an early code name for Java?
@NinjaTrappeur is there something specific you’d like to relate about Readline?
Yes, this C library is actually used by a lot of CLI utilities (zsh, bash, …), making all the keybindings listed in this Wikipedia article available in a lot of programs.
For example, the Ctrl-_, Alt-l are actually really useful in a shell context.
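These bindings can also be customized through Readline's init file. A minimal, illustrative `~/.inputrc` sketch — the custom binding here is just an example, assuming the stock Emacs keymap:

```
# ~/.inputrc — read by any Readline-linked program (bash, gdb, python, …)
set editing-mode emacs       # the default; shown here for clarity
# Ctrl-_ is bound to undo and Alt-l to downcase-word out of the box.
# Example of adding your own binding: make Alt-j kill the whole line.
"\ej": kill-whole-line
```

Run `bind -p` in bash to list every binding currently in effect.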
It is maybe common knowledge, but I just discovered it recently. I thought it might be useful for some other people. I was apparently wrong.
I have trouble understanding why this is off-topic though.
I was apparently wrong.
I’m sure it is interesting to many people, I think it’s hard to understand your intention when you post a random wikipedia page though. Adding a description would probably help people appreciate the contribution more.
You are right, I’ll try to be clearer when I share similar links in the future. Thanks for this helpful remark.
zsh does not use readline afaik; it’s mostly used by GNU tools like bash and gdb for providing line editing, and by other applications that choose to link against it. Most of the features readline offers either imitate behaviour specified by the POSIX terminal interface or are quality-of-life emacs bindings, so they’re often supported in other software and especially in shells.
Can I ask a potentially ignorant question? Why would someone who’s not already using Subversion choose to run it at this point? What are some of its advantages over Git or Fossil or Mercurial?
For software version control? Probably very little (especially as you included Mercurial in the alternatives).
I think however, that SVN could be the basis of quite a good self-hostable blob/file storage system. WebDAV is a defined standard and accessible over HTTP and you get (auto-)versioning of assets for ‘free’.
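That “free” (auto-)versioning is a real mod_dav_svn feature. A hedged httpd.conf sketch — the module paths, location, and repository path are assumptions for illustration:

```apache
# Serve an SVN repository over plain WebDAV with autoversioning:
# a PUT from any WebDAV client becomes a commit automatically.
LoadModule dav_module     modules/mod_dav.so
LoadModule dav_svn_module modules/mod_dav_svn.so

<Location /assets>
  DAV svn
  SVNPath /var/svn/assets    # hypothetical repository path
  SVNAutoversioning on       # auto-commit WebDAV writes
</Location>
```

With that in place, any WebDAV-capable client (Finder, Windows Explorer, curl) can save files and each save lands as a new revision.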
Why would Mercurial in particular stand out on this list? Are you extrapolating from your own experience? I don’t think there are complete and reliable global usage statistics about any of these systems, are there?
On top of what stephenr says, Mercurial has an increasingly solid story for large assets from things like remotefilelog and other similar work from Facebook. That means I’d feel comfy using it for e.g. game asset tracking, at least to a point. Git is getting there too (specifically the work from Microsoft), but it’s a bit less mature at the moment.
Git is not the easiest thing in the world to learn/use.
If you just say “why use svn when git exists” it’s easy: because svn is easier to learn and understand.
Mercurial muddies that because you get the benefits of dvcs with usability that’s pretty close to svn.
I’ve worked in the last few years with entire teams that used no vcs.
Yeah, very much agreed that hg hits a rather nice middle ground. Their UI design is great.
Still, I don’t think we could infer anything from this about the actual number of users across the various vcs. Not sure though if I simply misunderstood what you meant.
Oh I’m not at all claiming to have stats on actual usage.
It was a hypothetical: if hg weren’t an option, some developers would be more productive with svn than git.
why use svn when git exists
I think this sums it up well: https://sqlite.org/whynotgit.html
Not about subversion in particular though, just a bash at git.
Are you referring to mod_dav_svn? The last time I tried it it was pretty unreliable. It often truncated files silently. That’s probably not Subversion’s fault. Apache HTTPd’s WebDAV support doesn’t seem to be in a great state.
That’s the only subversion http server that I’m aware of.
I suspect that post is about mod_dav - WebDAV into a regular file system directory.
Mod_dav_svn provides WebDAV + Svn into a repo backend.
I know some game studios still run subversion, because of large art assets along side code, and the ability to check out specific subdirectories.
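For reference, the subdirectory / sparse-checkout workflow mentioned here looks roughly like this (the repository URL and paths are hypothetical):

```shell
# Check out only the top level plus immediate children, leaving them empty:
svn checkout --depth immediates https://svn.example.com/game/trunk work
cd work
# Pull down just the directories an artist actually needs:
svn update --set-depth infinity art/characters
# Or check out a single subdirectory directly:
svn checkout https://svn.example.com/game/trunk/art/props props
```

Because SVN addresses any subtree by URL, a multi-gigabyte art repo never has to be cloned wholesale the way a naive git clone would require.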
SVN is still heavily used by companies that are not pure software dev shops yet still produce their own software, e.g. in the financial sector and manufacturing industry.
I don’t think many people on Lobsters still encounter SVN at their place of work, but that is due to Lobsters’ demographic rather than everyone on the planet having migrated away from SVN; that is not the case. (Some people still use tools like ClearCase at work!)
I’ve read about half of it, it’s a really good book to introduce you not only to reverse-engineering but also to different assembly languages and CPU architectures. The author covers common code structures in various assembly languages using a few different compilers with different flags set to show comparisons. It’s a very nice read.
And I mean it about CPU architectures, I learned a ton about ARM by reading this.
What gets more contributions in code or dollars by companies using it right now? BSD/Apache or GPL software? And on top of that, which gets used the most by companies intending to turn the use into money to pay for laws and court decisions to limit software compatibility or freedom?
Those are a start at answering which to choose if you want mutually beneficial uptake. I think the second question isn’t addressed nearly enough by GPL opponents. Saying that as a long-time advocate of BSD-style licenses who really reconsidered after watching the patent and API copyright action over time.
I’ve been through the experience of releasing software that was under a license (non-copyleft) banned by Google. The effect was clear: fewer people used it. I used said license as an ideological statement that reflected my views on copyright, but I simultaneously wanted as many people as possible to use the code I wrote. My goals were at odds with each other. I ended up switching the license.
Using a license on Google’s banned list has network effects that people might not account for. This banned-license list doesn’t just apply to Google’s internal proprietary code, but also to their open source projects. If your library can’t be used in those, then people will find an alternative, or, failing that, create one themselves. The network effects of this should be obvious: “what library should I use for Foo?” “well, Google’s Huge Important Open Source Project Used By Very Serious People uses Quux for Foo, so that’s probably a safe bet.”

There are other manifestations of this. For example, even if most people didn’t care about my non-standard license, the fact that Google does means that they are going to find and use an alternative to it. Google has a very large presence in open source, which means there is going to be a point at which you download some project and it might have both the library Google’s using for Foo and perhaps my own library for Foo, and now you get people being (quite reasonably) annoyed that they have two different dependencies in their chain for solving the same problem. Guess which one is going to win when an enterprising individual endeavors to consolidate their dependency graph?
Invariably, the ideological goal I set out with is tied to the goal of people using the code. If people don’t use it in the first place, then the whole point of using a different license to raise awareness about my cause will be for nothing.
Despite the fact that some people are “insulted” by non-standard licenses, I do still find occasion to express my views through my licensing choice, but I’ve found better ways to balance my goals.
I could use the above to be “mad” at Google, I guess, for being an Evil Corp that bullies the Small Guy via a complex power struggle. And yeah, it was definitely a frustrating process to go through, and I was really stubborn about it for a long time. But that didn’t get me anywhere. I’m not a very good ideologue, and I can definitely understand why lawyers are reasonably unreasonable about these things, so there you have it.
(FWIW, I left my initial comment in this thread mostly out of annoyance that @pizzaiolo decided to thumb their nose at the submission guidelines.)
I appreciate the detailed response. It seems your concern is that Google’s network effects will boost both contributions to and adoption of a lot of code. If Google won’t use it, you lose that. So, you want to maximize adoption by building things that Google specifically might use. Maybe also any other company with similar acquisition policies. That does make sense if Google’s (or their) pull is literally that great or you’re willing to put all your eggs in one basket (theirs). On the other hand, it might not for a project that can get contributors without Google’s support. There seem to be plenty of those. The default for FOSS in general on use/contribute ratio or sustainability is pretty bad, though.
I keep coming back to solving this problem by actually selling FOSS to companies in the form of licensing, consulting, and/or support. What the proprietary companies do, but without the restrictions or lock-in. The main revenue stream will indirectly finance dependencies that are packaged in their own libraries under FOSS licenses. People might contribute to them. If they don’t, they can still be maintained from revenue. The bigger benefit is that enough FOSS companies pulling in lots of money can afford to fight big companies threatening the FOSS model by using media and lobbying campaigns. Google is actually one of the few tech companies that’s been doing that in Washington, fighting incumbents in areas that threaten them like net neutrality or copyright.
So, you want to maximize adoption by building things that Google specifically might use.
Well… no. I didn’t say that. I don’t wake up in the morning and think, “Oh I can’t wait to get home from work and spend time on my side project with the hope that Google will use it!” What I’m saying is that if you use a license that is banned by large companies (Google being one of them) that have influence in open source, then you’re handicapping the likelihood that others will use your code right out of the gate. Not everyone cares about that because not everyone has the same goals.
On the other hand, it might not for a project that can get contributors without Google’s support. There seem to be plenty of those.
Again, that’s not what I said. It isn’t necessarily just about contributors, but also about use. Please remember that I’m describing my own experience with this. I observed the network effects first hand. What you do with that information is totally up to you, and outside the scope of what I want to discuss. :-)
Well, if you’re just talking about big companies, then your position still makes sense where people aiming to get more use or contributions from them have to use compatible licenses. What large companies allow varies both for the general case and sometimes by what a specific piece of software will do. For instance, they use a lot of proprietary software with restrictive licenses even though they don’t like the licenses.
Maybe I should try to assess what happens with network effects of FOSS components that are being sold to the same big companies. As in, they got them not because they’re FOSS but because they’re useful, with a great price and flexible licensing. If it’s copyleft, it would have to be isolated as one application or service doing a specific thing so they didn’t worry about the effect on other software. From there, it gets a lot of use with potential network effects kicking in because of large uptake. It might get contributions from companies using it just because they depend on it. There are quite a few ecosystems out there of people doing open enhancements on proprietary stuff just because it’s what they use at work. Especially Microsoft and IBM.
There’s some overlap between what you were doing and this other model. Main difference is yours will get more pull where this one will need to be pushed. I don’t know if a push is inherently a bad thing if it delivers long-term benefits the solutions getting pulls don’t. It could be bad for specific authors, though, if they’re aiming for pulls like you were.
So you’re fine with the requirement that I can’t license a program under ISC or anything non-GPLv3 for what it’s worth, when I intend on linking a GPLv3 library to it?
The goals of copyleft are laudable, but we regularly get upstream patches, also from Google employees, even though we use MIT/ISC only. And this carries on to many other projects.
If you really care about freedom by definition, you won’t use copyleft licenses, as copyleft limits your freedoms. On the other hand, copyleft is an effective tool to “force” people not to close down changes to code, obviously.
In reality though, I have found those companies not contributing back to suck at coding anyway. Those who do have a healthy idea of open source. Maybe that’s why most GNU-software sucks so much.
The discussion about copyleft or not is very similar to those about affirmative action or female quotas. By definition, e.g. female quotas discriminate against males as even males with higher qualifications get sorted out against females with lower qualifications. Anyone believing in equal rights rather than equal status will oppose that. Those who defend female quotas bring up terms like the “patriarchy” justifying it.
The motivation for the GPL is that companies are “by definition evil”. Sure, their drive is to make money and have power over you. Free software is a loophole in this regard. However, especially in the last few years, most companies have realized that it makes sense to just build upon open source and contribute back, especially for security relevant stuff, effectively negating the GPL narrative. Apart from that, GPL violations are rarely uncovered, so I’m pretty sure GPL software is still used for nuclear warheads and there’s nothing you can do about it.
It was different in the 90’s and 00’s, but I only care about today and new code.
Stop taking yourself so seriously; in the end, it’s about people in general benefiting from free software. And there are numerous examples where companies would’ve never considered open source solutions if they hadn’t been licensed permissively. For the advocates of the GPL, especially v3, it’s more about an agenda to turn all open code into GPL code, and I’m sick of their propaganda and FUD.
However, especially in the last few years, most companies have realized that it makes sense to just build upon open source and contribute back,
It was different in the 90’s and 00’s, but I only care about today and new code.
I suggest looking at firmware based on Linux on Android devices, and many embedded devices. The GPL is the only reason any of them ever even comes close to publishing source, and most still won’t.
You’re seeing a very small subset of companies if you look at those contributing to BSD/MIT/Apache2 licensed software.
And even Google has stopped contributing to most open source projects it once started, Android for example is almost entirely proprietary now, not even the call app is fully open source anymore.
If you really care about freedom by definition, you won’t use copyleft licenses, as copyleft limits your freedoms.
“Using GPL / is encroaching on our rights / to encroach on yours”
They wrote 8 paragraphs and this is all you get from them?
Yes, the GPL is encroaching on your rights to licence your software properly. If you’d like to add a non-commercial clause to your licence, for example, tough luck. It wouldn’t be GPL-compatible. If you’d like to take the advice above and add a clause that specifies your software can’t be used in relation to weapons research or advertising, tough luck! It’s not GPL-compatible. You’ll have to reimplement everything that would otherwise be covered by open source libraries.
From what I’ve seen, the GPL is mostly used by companies to keep their software in some weird middle between proprietary and open source. In other words, to restrict the right to compete. If someone wants to take the software and improve it to compete directly with the company that started it, they have another company with relatively endless resources and rights to all of the improvements they’ve done breathing down their neck. So if they do anything meaningful, companies can take back the improvements, retake their lead, and leave the contributors/competitors in the gutter. The GPL is good for protecting code, not for sharing it, is what I’m getting at.
Frankly I don’t blame them, the author pretty soon makes an ad hominem (“GNU-software sucks”), then diverts into a highly controversial topic (affirmative action), then a straw man (“The motivation for the GPL is that companies are ‘by definition evil’”): if that were the case, wouldn’t the GPL permit non-commercial restrictions?
If you’re a free software advocate not reading past the first three paragraphs is probably the most generous thing you could do in this discussion because everything after that just weakens the argument they made.
One could argue with FRIGN’s logic that if a business is afraid to use GPL/AGPL, that it’s the same as companies that are not contributing back. They are afraid that if they MUST share code in the future that it will harm them in some way. They probably suck at coding anyway.
Frankly I don’t blame them, the author pretty soon…
It’s why I ignored the comment in favor of xi’s which just made specific points about software licensing that we can discuss. Much more productive approach. :)
“One could argue with FRIGN’s logic that if a business is afraid to use GPL/AGPL, that it’s the same as companies that are not contributing back.”
Most software doesn’t directly confer a competitive advantage on the business using or developing it. Most software is closed by default. Most use of open source software, permissive or not, involves no financial, code, or documentation contributions back from companies by default. I think these things tell us some foundational things about both the nature of how businesses will approach software and what we might get them to do. How to specifically react to these is, of course, very open for discussion and experiment. I’d like to see more experiments with business models built on free software, though. To me, it seems no harder than building the product in the first place, especially if GPL components are properly isolated from hard-to-copy stuff that companies are paying for. Finally, if there are low-cost copycats, one can always differentiate on great, face-to-face service and reputation.
Yes, the GPL is encroaching on your rights to licence your software properly.
The GPL is a license for other people’s software that says you can benefit from their work for free so long as you follow specific conditions, especially sharing your work back with them. It’s not your rights given it wasn’t your work to begin with, assuming we’re thinking of software as something to be owned like in copyright law. When your work becomes the issue, you’re surveying others’ work that you might want to benefit from, which comes with various licenses. You get to choose which one you want based on your beliefs of how your own contributions should be licensed. GPL proponents might build on GPL. GPL opponents might build on another. Maybe something is there to use. Maybe you have to build it yourself due to no code compatible with your beliefs. Nobody has forced you to use GPL software: you are simply choosing whether to build on someone else’s work following their usage requirement or to do something different.
Your weapons example fits my model nicely: it’s simply not compatible with the GPL. Therefore, you’d look for code licensed differently. You might find another person doing work for you for free under a license compatible with your expectations. Whereas the GPL code would continue benefiting others without that extra requirement.
“In other words to restrict the rights to compete.”
This is a myth, too, given there’s a billion-dollar business built on FOSS plus many smaller players that charge for it. Most don’t charge for it. So, they’re just not competing. Whereas anti-competitive practices in proprietary software were the norm. Most customers of the biggest software companies say the switching cost being prohibitive is a big reason they stay on the platform. The switching cost came from lock-in via proprietary languages, data formats, protocols, etc. People building on those that are open switch suppliers all the time, even on major systems like databases.
As I said in a previous comment, the biggest threat to competition are companies that are bribing politicians to keep copyright and patent laws in a state that makes it illegal to compete. This is why IBM took out the company building on Hercules that let mainframe customers run their apps at a fraction of the cost on Intel hardware. It’s how Apple tries to keep software from even looking like theirs with Samsung forced to make their interface look like crap in Germany. It’s how Microsoft has collected over a billion in royalties off Android despite not contributing anything to it and trying to kill it in the market with Windows Mobile. It’s how it took a multi-billion dollar corporation’s deep pockets to stop Oracle from eliminating, taking most profit on, or seizing that same product. Two of these companies benefited a lot from BSD code that helped them get their revenues up enough to do more of these attacks. Another used GPL code mainly to reduce their own operating costs on infrastructure while shifting the lock-in to their 3rd-party apps built on top of it. That… at least helped on half the problem. The net effect of anything benefiting them is negative given they directly try to change laws in multiple countries to deny you basic rights like the ability to iterate on and improve a software implementation of an idea.
On the other side, certain projects licensed under various free licenses had positive effects. This isn’t limited to the GPL, with the success of the Apache web server being the first counterpoint that comes to mind. Comparing them, though, I note that there were numerous times companies built on top of permissively-licensed code giving nothing back. There are many cases, if not the default, of companies using GPL code improving the product just because the license requires it, and doing so didn’t hurt the business. The companies that did non-copyleft, shared source would occasionally close things back up after a lot of user contributions made their product better. QNX was an infuriating example. The GPL blocks that for at least the components that got licensed under it. One company ceasing improvements doesn’t change anything if it’s really useful to another company who picks up the ball others dropped.
So, I see more positives out of GPL-like licenses than BSD-style ones if we’re talking contributions over the long term, especially by greedy companies. They also contribute less to the big companies’ evils when they adopt them, as we see with IBM and Google. The companies even accidentally do good since the license forces their improvements back into the code that can be used by others doing good. Also, as I told burntsushi, it’s also important to remember that you can still license GPL software for a profit. There’s a number of companies, big and small, doing this. If it’s your own software, you can even do whatever you want with it under other licenses. The GPL version just gets to remain free, with all contributions that get distributed shared back with people.
Emphasis is mine.
especially sharing your work back with them. It’s not your rights given it wasn’t your work to begin with
If you want to use the GPL (as in picking a license or using a GPL-licensed piece of code) that’s your prerogative, but claiming that there is no restriction on what downstream developers do with the code they have written is being disingenuous or obtuse.
I didn’t say that. The commenter I replied to said GPL restricts “your rights to license your software properly.” It actually is an optional license that enforces your will on your work and its derivatives. That’s an entirely different thing. It means you can do whatever you want with your work. If you want what the GPL is designed to do, then you can license it with GPL to attempt to accomplish that. That’s for a single developer on their own work.
When building on existing code, it’s others’ work that they licensed how they saw fit, with conditions/restrictions for those building on it. If you want to use their work in your own and are fine with the conditions, then you will build on it with the restriction of going along with those conditions. At this point, it’s a collaborative work, not just your work, with conditions you already agreed to before building on it. That’s similar in concept to people entering a contract for how to accomplish a shared goal. If you oppose it, don’t enter into a contract with those people using their code and practices. That simple. Nobody has forced you to do anything with your work since it hasn’t even been created at that point. If it was, it wasn’t your work so much as theirs with licensed expectations + yours. By redefining it to just your work and rights as a single developer, which I can’t emphasize enough, it makes it look like someone is restricting your own individual activities out of nowhere with nothing in return. That would be a bad thing.
Good news is that’s not what’s happening: it’s either an optional tool for you to use to enforce your will on your own work if you agree with it; or conditions that come with a group work you or others started on derivatives that leverage that work. Again, conditions they can either agree to or reject in favor of different things to build on. In either case, the person using the GPL wants to use the GPL. If they didn’t, they’d use a different license for their work or build on something licensed differently. Entering into the restrictions is voluntary action by developers.
The goals of copyleft are laudable, but we regularly get upstream patches, also from Google employees, even though we use MIT/ISC only.
Are you talking about suckless software? I really don’t mean to be rude, but isn’t this a very selective subset? Since you’re very adamant about unix and unix-style writing, the culture of sharing and interoperating was “naturally” adopted, or so it seems to me. It just makes sense, but does this still hold for bigger projects like Emacs or Linux? I’m doubtful…
If you really care about freedom by definition, you won’t use copyleft licenses, as copyleft limits your freedoms.
This really doesn’t mean much, since both sides just use the (holy) word “Freedom” to talk about two different things. Take for example the conception of freedom that arises from unfreedom, rules and laws. Your (negative) conception is just one idea among others.
Maybe that’s why most GNU-software sucks so much.
You can say what you want, but I appreciate being able to write rm directory -rf ;^)
If you want to complain about ‘bad actor’ companies, those using AGPL should be directly in your sights.
I wish there were a downvote for not enough information. I’ve been seeing a lot of posts that may or may not be correct but don’t actually have any reasoning or evidence behind their claims.
As mentioned elsewhere (https://lobste.rs/s/mbufwv/some_software_cannot_be_used_at_google#c_olmjmh) - shitbag companies use (A)GPL as a moat against competition for their open-core/proprietary software.
I agree with other commenters that I can’t learn anything from your comment without your reasoning. I will say I opposed GenodeOS switching to AGPL because it would hurt adoption. Separation kernels, aka sacrificing most features for highest security, is already something that’s nearly unmarketable but highly useful for public good. Unless selling to military or safety-critical embedded, the best move is to use whatever license will get most uptake while selling value-adds on top that are marketable. On top of support, consulting, etc.
So, I opposed it in that case since it was already something fighting a huge, uphill battle. Now, something with value like these companies use for networked applications or infrastructure might be AGPL-licensed to force them to contribute more if they want to use it. There will be many that don’t adopt it. A compelling product might still get a lot of adoption or even sales if sold to companies that don’t do cloud stuff they’d have to relicense. There were even a few people on HN that told me they make all the software they build for businesses FOSS by default, with the businesses never caring because that software is necessary plumbing rather than a competitive advantage. So, it’s paid for plus can benefit others by default. There’s a possibility of AGPL-based projects doing that.
Personally, I default against AGPL if optimizing for uptake, but for it if optimizing for ideological blocking of freeloading or of financially supporting companies that try to limit our software freedoms. I don’t mean in a Richard Stallman speculative way: I mean they actively bribe politicians to reduce our rights in how we use our devices or software. And they make that money off of a mix of proprietary and permissively-licensed code. I’m fairly pragmatic where I know things are complicated enough I’ll have to make some tough decisions balancing many goals. However, some people out there might want to take steps to block their work from supporting companies that would (a) sue them for their work if given the chance or (b) give large financial backing to proprietary solutions but freeload off most FOSS.
Note: Google is a mixed bag here where they do a lot for FOSS versus most companies. Going with their flow makes sense for people maximizing adoption at expense of other variables.
The majority of companies I see using AGPL aren’t doing it for any Stallman-esque goals - they’re “open core” companies that use AGPL to effectively neuter any attempts to combat the intrinsic bullshit of “open core” projects.
This other comment sums up my views better than I have, clearly: https://lobste.rs/s/mbufwv/some_software_cannot_be_used_at_google#c_olmjmh
That would tell me that there are companies using it for bad reasons, not that it’s inherently bad. You could similarly claim that projects like Linux are inherently about specific companies dominating markets because they’re the main contributors. Yet, the licensing allowed the kernel to be used for so much more. Unlike other bases, those using it were also forced to send back some contributions since they couldn’t just keep them private like companies using BSD code often did. That meant we gradually got an OS that could do everything from desktops to embedded to SGI’s NUMA machines to mobile.
There could be some beneficial effects to building businesses or major projects on AGPL that snowball like that. Also, there might not be where the license kills the potential. I don’t know until I see enough good attempts at using it for something with growth potential.
You’ve missed a key point when comparing to Linux.
RedHat Inc. (as an example) don’t “own” the Linux project, and then dual-license it under (A)GPL and a commercial License. They contribute to an external project.
MongoDB better fits the point I was making. They have a commercial (“Enterprise”) version and an AGPL (“community”) version. If a company then wanted to fork the community version, and either add new features or do clean implementations of “enterprise” functionality not offered in the upstream AGPL project, there is basically zero chance of making this a reality, because any effort they expend, automatically benefits the upstream company.
I don’t know until I see enough good attempts at using it
You’d think 11 years would be enough time for something to use it as intended?
I think we’re probably just not going to agree here. My default opinion of anyone particularly pro-GPL is to wonder what their goals are. I’m interested in solving interesting technical problems. I’m not interested in dictating to people what they can do with code I choose to release.
“My default opinion of anyone particularly pro-GPL is to wonder what their goals are. I’m interested in solving interesting technical problems.”
Know all that software running the world on mainframes, Windows, and so on that people can’t get off of because it’s proprietary and the company turned out to be leeches? And then doing stuff like suing competitors on copyright or patent grounds? My goal is avoiding that by default wherever something is a dependency. Certain licenses help maximize what we can do with software over time. Others don’t. So, I push for what benefits customers and developers over time the most. I don’t force anyone to do anything. To the contrary, the opposite types of software (and hardware) currently dominate in uptake in consumer, business, and government markets.
“RedHat Inc. (as an example) don’t “own” the Linux project, and then dual-license it under (A)GPL and a commercial License. They contribute to an external project.”
They contribute to an external project that they depend on which is licensed in a way to force contributions back to anything they improve. They’ve been doing this a long time, too. That was the point that they have in common with AGPL. The AGPL license just applies to extra software that’s had a lot of freeloaders who were already not contributing back because existing licenses let them dodge that in their business models. Had the best components been AGPL, a percentage of them might have chosen it with contributions coming back. Maybe. I’ve already said I’m not sure what long-term effects will be but I do want to see better attempts at it. Personally, I think the more permissive GPL is just outcompeting it right now since it’s what copyleft people prefer. A social phenomenon.
“If a company then wanted to fork the community version, and either add new features or do clean implementations of “enterprise” functionality not offered in the upstream AGPL project”
What you’re implying, but not outright saying, is company A created something to be used for their profit if commercial or others benefit if AGPL. Company B wants to leverage Company A’s work in a way that exclusively profits Company B at Company A’s expense. Company A’s licensing prevents this by ensuring Company B’s commercial activities building on Company A’s work can benefit Company A, too. AGPL seems like a smart move if blocking competitors whose business model builds on other businesses’ work without sharing back.
Now, Company B has some options available. They can make their value add a service that communicates with Company A’s software. There’s a huge market for this, with it becoming a default architectural style for a lot of businesses despite negatives like performance or complexity. Company B might also target different use cases for Company A’s software that Company A might not adopt because it doesn’t fit their current market. Works better if it requires internal changes that negatively impact Company A’s current use case. Company B might also make their own software components, proprietary or open, that they license to Company A’s customers that can benefit them but clearly state they’re not to be licensed as AGPL. Whether that’s legally sound or not, we’ve seen lots of companies tie proprietary software into GPL stuff that way, esp as binary blobs, without the GPL stuff putting them out of business. It helps that GPL enforcement isn’t really aggressive like proprietary enforcement.
So, even for greedy Company B in your scenario, there’s still quite a few ways to make piles of cash as demonstrated by companies mixing proprietary and GPL code. They just can’t use the code Company A already wrote in a way that blocks Company A from benefiting from the derivative work. They both will be able to sell whatever results. Just blocks greed. Doesn’t sound so bad to me. If anything, sounds like a bunch of companies competing that way would create piles of new features or differentiators at a much faster rate. Note that this is the default in Shenzhen even for hardware. Probably more product-level innovation at lower cost there than anywhere. It wouldn’t be as extreme here in the U.S. since whatever doesn’t use or modify AGPL parts can stay proprietary or a different type of OSS under strong copyright/patent laws.
You’re trying really fucking hard to miss the points I’m making, and I honestly don’t have time to read all of your war and peace style comments.
Well, I found time to read all of them.
nickpsecurity, I think I know the kind of company stephenr is talking about, though I’m unaware of any of them using AGPL.
I call these companies and their products “open source in name only”. And certainly not Free Software…
Examples include lots of products released as both an “OS/Community Edition” and also an “Enterprise Edition”.
One example is Trisano, by CS Initiative. The company appears to have disappeared and all the source code went with them. I had a copy of a repo at one point, but 1. it disappeared from my github because it was a fork of a private repo and I never had control of it (this was before I realized that I should maintain a local copy) and 2. it never had everything you needed to use the product in it anyway.
I sent emails back and forth with the head of that company explaining that I worked in the Public Health sector and we’d be happy to use the project and contribute back in the form of patches, if he could just send us the darned code, and he never did. Not even for the supposed “community edition”.
I think they just put the term “open source” into their marketing so their product would come up in my searches and maybe they could get my boss’s ear through me.
Anyway, if somebody can find a concrete example of a company using the AGPL toward malfeasance, I’d love to hear about it. stephenr mentioned MongoDB, but I’m not sure if that’s where his negative experience came from… If he doesn’t want to name names, I get it: $work and all…
Those are really interesting questions. I wonder how much non-GPL software has significant improvements that aren’t public. I have preferred BSD-style licenses for any code I publish, as I’d rather people just use it without any fuss. But if I had a substantial ongoing project I cared about I can now see myself picking a copyleft license. I had assumed the utility of submitting changes and fixes to the upstream project, as opposed to maintaining an internal fork, would be sufficient to encourage contribution. But I had never actually thought particularly hard about it until now.
Apple and Microsoft both built their proprietary OS’s along with a portion of their wealth on top of BSD code (esp networking). BSD’s didn’t get much benefit from that. Google acquired a mobile OS built on top of Linux. Even after they got more evil, many of their improvements still go into AOSP that other companies can build on. I’m not claiming this extrapolates to all uses of highly-permissive vs copyleft code. I am saying it’s an example of what GPL intended to do when companies act in their self-interests. The BSD’s, with no protections, allowed the other two companies to redirect all the benefits in their direction instead of reinvesting them into the projects they used.
BSD’s didn’t get much benefit from that.
It could be argued that the BSD’s got clang/llvm, so maybe it isn’t all bad.
I thought about that as I was writing this morning. Reason I left it off is I think it’s like Red Hat: an example proving what can happen but an outlier not representing what usually happens. The for’s and against’s were about what licenses normally do except the hypotheticals I did like AGPL.
I agree it’s a great success story for a FOSS project. I’m very grateful to the CompSci folks, Apple, and volunteers for it given all the people building on it.
Assuming a tangent of the current trends
2 years: The cloud is just someone else’s computer. It was all a big mistake. The next big thing is very similar and it’s called distributed computing because it’s a more technical term.
5 years: augmented reality is cheap enough to replace touch screens. Cars drive themselves. Quantum computers are a viable purchase for any self-respecting company. Security methods are meaningless if you want to protect yourself from corporations or the government unless you’d like to pay thousands for them.
10 years: everyone is wearing some sort of AR device in their every day life to actually always have a screen in their face, at the convenience of still seeing what’s in front of them. These also do cool things like predicting every object’s path in space in real time. For example: if that car doesn’t speed up, there’s no way in hell it’s going to hit your car, here’s where that ball’s going to pass, here are the top 10 reasons to buy a new car this summer. #9: that probable dent wouldn’t have an effect on this brand’s new paint. Distributed computing is done with quantum computers and serves an actual purpose. Previous methods of security are 100% meaningless.
15 years: Everyone will be augmented in some way, everyone is constantly distracted with things that keep their brain from being stimulated whatsoever, leaving no room for imagination or any form of thinking. Nobody knows how to drive a car and nobody can catch a ball by themselves anymore. Nobody really cares about that fact, because it really isn’t important. Everyone has a QPU (Quantum Processing Unit) along with their CPU and GPU. It really isn’t useful in many computers other than to offload some things. This feature is managed by operating systems developed by people who really don’t care about it and only worked on that feature for one month so that it looks good on the patch notes and gives people a reason to buy a QPU before becoming absolutely abandoned.
20 years: The last Hurd developer dies by coincidence. The operating system itself is 45 years behind anything considered modern.
30 years: A war fought exactly 200 years ago is completely forgotten, as are all of its effects.
DragonFlyBSD has been doing some great work.
It makes me wonder if having fewer devs than FreeBSD (the project it was originally forked from years ago) has necessitated trimming features, which presumably makes some things easier due to not having to support/maintain so many interfaces – for example DragonFlyBSD has 1 firewall (ipfw2), instead of the 3 (ipfw, pf, ipf) in FreeBSD.
If so, FreeBSD would be well served trimming some detritus.
I’ve often wondered that. A long time FreeBSD-er I know talks about the distro never really recovering after the BSDI merge.
Oh interesting. Do you have any background on that or further recollections on the impacts on FreeBSD?
It’s also interesting that apparently a company started as OffMyServer, by two BSDi employees, later purchased iXsystems and subsequently rebranded themselves as such. Further, according to wikipedia:
In 2007, iXsystems acquired FreeBSD Mall, Inc., reuniting all the portions of the original BSDi that had been spun off to Wind River Systems.
DragonFly has pf too: https://leaf.dragonflybsd.org/cgi/web-man?command=pf&section=4
I don’t think they’re comparable anymore. I think you can compare dragonflybsd more to openbsd than to freebsd. They’re a relatively small project with a strong philosophy and project goal who are very good at what they do (in this case performance), but they have a lot of quirks that might leave some users uncomfortable while they make hobbyists very enthusiastic.
As such they don’t shy away from removing features that they judge aren’t meaningful to their users/community anymore (non-amd64 support, some older network interfaces, some legacy code that’s still in freebsd but not in dragonfly). The project philosophy is so different that the parallels you can make from them coming from the same root aren’t entirely accurate anymore. Keep in mind the fork happened 15 years ago, with development happening independently over that time. That’s also important.
Also, dragonfly has ipfw3 and pf. According to the documentation (https://www.dragonflybsd.org/docs/ipfw3/), ipfw3 was written from scratch.
Keep in mind the fork happened 15 years ago
Wow. Time sure passes quickly. Seems like not that long ago..
Interesting project, but I have a question regarding:
Python is Lisp with syntactic sugar and Lisp is Forth with syntactic sugar.
Could you elaborate further on this point? I’m guessing it’s not to be taken literally, but although I have heard of the connections between Python and Lisp (map, filter, lambda, …) I haven’t heard about the second connection. Wasn’t FORTH invented in the 1970’s and LISP in the 1950’s? I’m guessing the order of appearance isn’t strictly necessary, but you don’t often hear about newer programming languages making older ones more “difficult”.
I’m guessing it’s not to be taken literally, but although I have heard of the connections between Python and Lisp (map, filter, lambda, …) I haven’t heard about the second connection. Wasn’t FORTH invented in the 1970’s and LISP in the 1950’s?
I think the idea is that Lisp is a delimited, prefix language (e.g. (foo (bar baz) quux)) while Forth is an undelimited, postfix language (e.g. quux @ baz @ bar foo, where baz & quux are words which put an address on the stack, bar is a word which pops a value from the stack and pushes a value on the stack & foo is a word which pops two values from the stack and does something).
You can imagine that one could work up to a Lisp from something like OPAREN quux OPAREN baz bar CPAREN foo CPAREN in Forth, where OPAREN & CPAREN are special Forth words which do the obvious thing.
It’s nothing so profound. Just a remark about syntax. It’s almost as bargap explained. To go from Lisp to Forth:
(foo (bar baz) quux)
Put the function at the end of the list instead of the beginning so (func arg1 arg2 arg3) becomes (arg1 arg2 arg3 func).
((baz bar) quux foo)
And remove the parentheses
baz bar quux foo
This should produce the same output when run in Forth. (Assuming constants are functions with no parameters and return the constant. And adjust for autoquoting and implicit returns.)
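The mechanical part of that transformation is easy to sketch in code. Here’s a toy Python function (my own illustration, not from the thread) that flattens a prefix s-expression, modelled as nested tuples, into a Forth-style postfix word list:

```python
def to_postfix(expr):
    """Flatten a prefix s-expression (head first) into a postfix
    word list, Forth-style: arguments first, then the operation."""
    if not isinstance(expr, tuple):
        return [expr]            # constants act like zero-argument words
    head, *args = expr
    out = []
    for arg in args:             # emit each argument's words first...
        out.extend(to_postfix(arg))
    out.append(head)             # ...then the word that consumes them
    return out

# (foo (bar baz) quux)  ->  baz bar quux foo
print(to_postfix(("foo", ("bar", "baz"), "quux")))
```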
The same goes for the Python to Lisp transformation. Replace all syntax with calls to their dunder equivalents, put the function inside the parentheses and remove the comma separators (func(arg1, arg2, arg3) becomes (func arg1 arg2 arg3)), and turn indented blocks into quotes (and put them as the last argument of the block heading).
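The dunder half of that claim is easy to verify interactively; these equivalences are standard Python, independent of the project under discussion:

```python
# Python operator syntax is sugar over "dunder" method calls
assert (2 + 3) == (2).__add__(3)
assert [10, 20][1] == [10, 20].__getitem__(1)
assert (2 < 3) == (2).__lt__(3)
# even a call f(x) is sugar for f.__call__(x)
assert abs.__call__(-5) == abs(-5) == 5
print("all sugar accounted for")
```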
I still don’t see the link they’re trying to make with lisp and forth, especially considering their lisp example looks nothing like lisp…
consider their example
bind("fib" fun([i]
[if(<(i 3) [return(1)])
return(+(fib(-(i 1)) fib(-(i 2))))]))
against common lisp
(defun fib (i)
(if (< i 3)
1
(+ (fib (- i 1)) (fib (- i 2)))))
To do the reverse transformation:
(defun fib (i)
(if (< i 3)
1
(+ (fib (- i 1)) (fib (- i 2)))))
Put function names in front of the parentheses.
defun(fib (i)
if(<(i 3)
1
+(fib(-(i 1)) fib(-(i 2)))))
Add explicit returns.
defun(fib (i)
if(<(i 3)
return(1)
return(+(fib(-(i 1)) fib(-(i 2))))))
Use square brackets for quotes to distinguish them from calls.
defun(fib [i]
if(<(i 3) return(1)
return(+(fib(-(i 1)) fib(-(i 2))))))
Change which arguments need to be quoted and which ones don’t.
defun("fib" [i]
[if(<(i 3) [return(1)]
return(+(fib(-(i 1)) fib(-(i 2))))]))
(Some of these transformations I’ve seen on discussions on possibilities for Lisp syntax.)
The point of this article is pretty badly made. They designed a purposely broken vector data structure and operations in C, full of mistakes a mediocre C programmer would know to avoid, then proposed a solution in Rust that they actually put some effort into, because Rust forces bad programmers to put more effort into their program before it will compile. Was that what they were trying to show?
agreed, I would have expected a case study to go find some real world equivalent code written in each and then look for problems.
Another potential bug many might miss is the multiply by 2. On a 64-bit system where int is 32-bit (Windows for instance), it would be possible for this operation to overflow, resulting in undefined behavior.
That would mean they had to allocate about 2.1 billion * sizeof(int) bytes (or at least 1.05 billion * sizeof(int)). Even so, passing a negative integer to malloc would just have it return NULL in new_data, so the assert would fail and the program would be terminated before anything bad happens.
I think “that would require unlikely input” and “undefined behavior does nothing bad on my platform” can be dangerous when it comes to C.
For instance, given the compiler knows the capacity starts at 1 (if we fix that bug) and is always multiplied by 2, since overflowing would be undefined behavior, it can assume that will never happen and generate a shift left for the multiply. That would result in overflowing to 0 (which could trap), which when passed to malloc could (implementation defined) return a non-NULL pointer that cannot be dereferenced.
I know that’s all highly unlikely, but likely isn’t safe.
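You can watch that wraparound from Python by forcing the arithmetic into 32-bit ints with ctypes. This only illustrates the hazard: in C the overflow is undefined behavior, so a compiler is free to do something other than wrap:

```python
import ctypes

cap = 1
for _ in range(31):
    # emulate a 32-bit signed `capacity *= 2` like the article's C code
    cap = ctypes.c_int32(cap * 2).value

# after 31 doublings the capacity has wrapped negative;
# fed to malloc it would convert to an enormous size_t
print(cap)   # -2147483648
```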
In what other languages would it be possible?
I guess everything with properties (functions disguised as fields) so D, C#, etc.
Afaik not with C, C++, or Java.
&& and || are sequence points. The right expression may never happen depending on the result of the left, so it would make things interesting if they weren’t.
This is very easy to do in C++.
You can also do it with Haskell.
Doable with Java (override the equals method), and as an extension, with Clojure too:
(deftype Anything []
Object
(equals [a b] true))
(let [a (Anything.)]
(when (and (= a 1) (= a 2) (= a 3))
(println "Hello world!")))
Or, inspired by @zge above:
(let [== (fn [& _] true)
a 1]
(and (== a 1) (== a 2) (== a 3)))
Sort of. In Java, == doesn’t call the equals method, it just does a comparison for identity. So
a.equals(1) && a.equals(2) && a.equals(3);
can be true, but never
a == 1 && a == 2 && a == 3;
perl can do it very simply
my $i = 0;
sub a {
return ++$i;
}
if (a == 1 && a == 2 && a == 3) {
print("true\n");
}
Here is a C# version.
using System;
namespace ContrivedExample
{
public sealed class Miscreant
{
public static implicit operator Miscreant(int i) => new Miscreant();
public static bool operator ==(Miscreant left, Miscreant right) => true;
public static bool operator !=(Miscreant left, Miscreant right) => false;
}
internal static class Program
{
private static void Main(string[] args)
{
var a = new Miscreant();
bool broken = a == 1 && a == 2 && a == 3;
Console.WriteLine(broken);
}
}
}
One of the ‘tricks’ where all a’s are different Unicode characters is possible with Python and Ruby. Probably in Golang too.
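For completeness, the Python variant (analogous to the Clojure and Ruby examples in this thread) just overrides __eq__:

```python
class Anything:
    """Compares equal to everything; Python's == dispatches to __eq__."""
    def __eq__(self, other):
        return True

a = Anything()
if a == 1 and a == 2 and a == 3:
    print("Hello world!")
```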
Likewise in ruby, trivial to implement
a = Class.new do
def ==(*)
true
end
end.new
a == 1 # => true
a == 2 # => true
a == 3 # => true
In Scheme you could either take the lazy route and do (note the invariance to the order or amount of the operations):
(let ((= (lambda (a b) #t))
(a 1))
(if (or (= 1 a) (= 2 a) (= 3 a))
"take that Aristotle!"))
Or be more creative, and say
(let ((= (lambda (x _) (memv x '(1 2 3))))
(a 1))
(if (or (= 1 a) (= 2 a) (= 3 a))
"take that Aristotle!"))
if you would want = to only mean “is equal to one, two or three”, instead of “everything is equal”, of course only within this let block. The same could also be done with eq?, obviously.
Here is a Swift version that uses side effects in the definition of the == operator.
import Foundation
internal final class Miscreant {
private var value = 0
public static func ==(lhs: Miscreant, rhs: Int) -> Bool {
lhs.value += 1
return lhs.value == rhs
}
}
let a = Miscreant()
print(a == 1 && a == 2 && a == 3)
Anyone have ideas why FreeBSD did so poorly on the read intensive portion of the test?
I’m wondering if the zfs ARC was fighting over memory with PostgreSQL. Maybe some additional ARC memory limit tuning (eg. vfs.zfs.arc_max) would have helped? Maybe additionally setting primarycache=metadata too.
The fact that its ZFS filesystem was configured with lz4 compression while none of the others had any compression at all might have something to do with it. Decompression is done per block, and each block was configured to be 8K bytes; the more concurrent requests are made to the filesystem, the higher the chance that different blocks are requested, and the more data the filesystem has to decompress before it can actually reply, even after fetching.
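A back-of-envelope sketch of that read amplification (my own illustrative numbers, assuming an 8K recordsize and small random row reads):

```python
recordsize = 8 * 1024   # bytes decompressed per block access (zfs recordsize)
row_bytes = 100         # logical bytes the query actually wants per read
reads = 1000            # random reads that each land in a distinct block

decompressed = reads * recordsize   # work the filesystem must do
wanted = reads * row_bytes          # data the database asked for
print(f"{decompressed} bytes decompressed to serve {wanted} bytes "
      f"(~{decompressed // wanted}x amplification)")
```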
at this point most browsers are OS’s that run (and build) on other OS’s.
And more importantly, is there any (important to the writers) advantage to them becoming smaller? Security maybe?
Browsers rarely link out to the system. FF/Chromium have their own PNG decoders, JPEG decoders, AV codecs, memory allocators or allocation abstraction layers, etc. etc.
It bothers me everything is now shipping as an electron app. Do we really need every single app to have the footprint of a modern browser? Can we at least limit them to the footprint of Firefox2?
but if you limit it to the footprint of firefox2 then computers might be fast enough. (a problem)
New computers are no longer faster than old computers at the same cost, though – moore’s law ended in 2005 and consumer stuff has caught up with the lag. So, the only speed-up from replacement is from clearing out bloat, not from actual hardware improvements in processing speed.
(Maybe secondary storage speed will have a big bump, if you’re moving from hard disk to SSD, but that only happens once.)
Are you claiming there have been no speedups due to better pipelining, out-of-order/speculative execution, larger caches, multicore, hyperthreading, and ASIC acceleration of common primitives? And the benchmarks magazines post showing newer stuff outperforming older stuff were all fabricated? I’d find those claims unbelievable.
Also, every newer system I had was faster past 2005. I recently had to use an older backup. Much slower. Finally, performance isn’t the only thing to consider: the newer process nodes use less energy and have smaller chips.
I’m slightly overstating the claim. Performance increases have dropped from exponential to incremental, and come from piecemeal optimization tricks chasing goals that once were a straightforward result of increased circuit density, tricks that can only really be done once.
Once we’ve picked all the low-hanging fruit (simple optimization tricks with major & general impact) we’ll need to start seriously milking performance out of multicore and other features that actually require the involvement of application developers. (Multicore doesn’t affect performance at all for single-threaded applications or fully-synchronous applications that happen to have multiple threads – in other words, everything an unschooled developer is prepared to write, unless they happen to be mostly into unix shell scripting or something.)
Moore’s law isn’t all that matters, no. But, it matters a lot with regard to whether or not we can reasonably expect to defend practices like electron apps on the grounds that we can maintain current responsiveness while making everything take more cycles. The era where the same slow code can be guaranteed to run faster on next year’s machine without any effort on the part of developers is over.
As a specific example: I doubt that even in ten years, a low-end desktop PC will be able to run today’s version of slack with reasonable performance. There is no discernible difference in its performance between my two primary machines (both low-end desktop PCs, one from 2011 and one from 2017). There isn’t a perpetually rising tide that makes all code more performant anymore, and the kind of bookkeeping that most web apps spend their cycles in doesn’t have specialized hardware accelerators the way matrix arithmetic does.
I agree with that totally.
“Multicore doesn’t affect performance at all for single-threaded applications “
Although largely true, people often forget a way multicore can boost single-threaded performance: simply letting the single-threaded app have more time on CPU core since other stuff is running on another. Some OS’s, esp RTOS’s, let you control which cores apps run on specifically to utilize that. I’m not sure if desktop OS’s have good support for this right now, though. I haven’t tried it in a while.
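On Linux this kind of pinning is visible from userspace; for example, Python wraps the sched_setaffinity syscall (a Linux-only API, hence the feature check):

```python
import os

# Pin the current process to a single core so a single-threaded app
# isn't migrated around (Linux-only API, hence the hasattr guard).
if hasattr(os, "sched_setaffinity"):
    core = min(os.sched_getaffinity(0))   # pick one core we're allowed on
    os.sched_setaffinity(0, {core})
    print(os.sched_getaffinity(0))        # now just {core}
else:
    print("no sched_setaffinity on this platform")
```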
“There isn’t a perpetually rising tide that makes all code more performant anymore, and the kind of bookkeeping that most web apps spend their cycles in doesn’t have specialized hardware accelerators the way matrix arithmetic does.”
Yeah, all the ideas I have for it are incremental. The best illustration of where the rest of the gains might come from is Cavium’s Octeon line. They have offloading engines for TCP/IP, compression, crypto, string ops, and so on. On the rendering side, Firefox is switching to GPU’s, which will take time to fully utilize. On the Javascript side, maybe JIT’s could have a small, dedicated core. So, there’s still room for speeding the Web up in hardware. Just not Moore’s law without developer effort, like you were saying.
Although you partly covered it, I’d say “execution of programs” is good wording for JavaScript since it matches browser and OS usage. There’s definitely advantages to them being smaller. A guy I knew even deleted a bunch of code out of his OS and Firefox to achieve that on top of a tiny, backup image. Dude had a WinXP system full of working apps that fit on one CD-R.
Far as secure browsers, I’d start with designs from high-assurance security bringing in mainstream components carefully. Some are already doing that. An older one inspired Chrome’s architecture. I have a list in this comment. I’ll also note that there were few of these because high-assurance security defaulted on just putting a browser in a dedicated partition that isolated it from other apps on top of security-focused kernels. One browser per domain of trust. Also common were partitioning network stacks and filesystems that limited effect of one partition using them on others. QubesOS and GenodeOS are open-source software that support these with QubesOS having great usability/polish and GenodeOS architecturally closer to high-security designs.
Are there simpler browsers optimised for displaying plain ol’ hyperlinked HTML documents, and also support modern standards? I don’t really need 4 tiers of JIT and whatnot for web apps to go fast, since I don’t use them.
I’ve always thought one could improve on a Dillo-like browser for that. I also thought compile-time programming might make various components in browsers optional where you could actually tune it to amount of code or attack surface you need. That would require lots of work for mainstream stuff, though. A project like Dillo might pull it off, though.
NetSurf?
Oh yeah, I have that on a Raspberry Pi running RISC OS. It’s quite nice! I didn’t realise it runs on so many other platforms. Unfortunately it only crashes on my main machine, I will investigate. Thanks for reminding me that it exists.
Fascinating; how had I never heard of this before?
Or maybe I had and just assumed it was a variant of suckless surf? https://surf.suckless.org/
Looks promising. I wonder how it fares on keyboard control in particular.
Aw hell; they don’t even have TLS set up correctly on https://netsurf-browser.org
Does not exactly inspire confidence. Plus there appears to be no keyboard shortcut for switching tabs?
Neat idea; hope they get it into a usable state in the future.
AFAIK, it doesn’t support “modern” non-standards.
But it doesn’t support Javascript either, so it’s way more secure than mainstream ones.
No. Modern web standards are too complicated to implement in a simple manner.
Either KHTML or Links is what you’d like. KHTML would probably be the smallest browser you could find with a working, modern CSS, javascript and HTML5 engine. Links only does HTML <=4.0 (including everything implied by its <img> tag, but not CSS).
I’m pretty sure KHTML was taken to a farm upstate years ago, and replaced with WebKit or Blink.
It wasn’t “replaced”, Konqueror supports all KHTML-based backends including WebKit, WebEngine (chromium) and KHTML. KHTML still works relatively well to show modern web pages according to HTML5 standards and fits OP’s description perfectly. Konqueror allows you to choose your browser engine per tab, and even switch on the fly which I think is really nice, although this means loading all engines that you’re currently using in memory.
I wouldn’t say development is still very active, but it’s still supported in the KDE frameworks, they still make sure that it builds at least, along with the occasional bug fix. Saying that it was replaced is an overstatement. Although most KDE distributions do ship other browsers by default, if any, and I’m pretty sure Falkon is set to become KDE’s browser these days, which is basically an interface for WebEngine.
A growing part of my browsing is now text-mode browsing. Maybe you could treat full graphical browsing as an exception and go to the minimum footprint most of the time…
user choice. rampant complexity has restricted your options to 3 rendering engines, if you want to function in the modern world.
When reimplementing malloc and testing it out on several applications, I found out that Firefox (at the time, I don’t know if this is still true) had its own internal malloc. It was allocating a big chunk of memory at startup and then managing it itself.
Back then I thought this was a crazy idea for a browser, but in fact it follows exactly the idea of your comment!
Firefox uses a fork of jemalloc by default.
IIRC this was done somewhere between Firefox 3 and Firefox 4 and was a huge speed boost. I can’t find a source for that claim though.
Anyway, there are good reasons Firefox uses its own malloc.
Edit: apparently I’m bored and/or like archeology, so I traced back the introduction of jemalloc to this hg changeset. This changeset is present in the tree for Mozilla 1.9.1 but not Mozilla 1.8.0. That would seem to indicate that jemalloc landed in the 3.6 cycle, although I’m not totally sure because the changeset description indicates that the real history is in CVS.
In my daily job, this week I’m working on patching a modern Javascript application to run on older browsers (IE10, IE9 and IE8+ GCF 12).
The hardest problems are due to the different implementation details of the same origin policy.
The funniest problem has been one of the frameworks used, which used “native” as a variable name: when people speak about the good parts in Javascript I know they don’t know what they are talking about.
BTW, if browser complexity addresses a real problem (instead of being a DARPA weapon to get control of foreign computers), that problem is the distribution of computation over long distances.
Such problem was not addressed well enough by operating systems, despite some mild attempts, such as Microsoft’s CIFS.
This is partially a protocol issue, as NFS, SMB and 9P were all designed with local networks in mind.
However, IMHO browser OS’s are not the proper solution to the issue: they are designed for different goals, and they cannot discontinue such goals without losing market share (unless they retain such share with weird marketing practices as Microsoft did years ago with IE on Windows and Google is currently doing with Chrome on Android).
We need better protocols and better distributed operating systems.
Unfortunately it’s not easy to create them.
(Disclaimer: browsers as platforms for os and javascript’s ubiquity are among the strongest reasons that make me spend countless nights hacking an OS)