I think a good example of this is religion, how various people around the world all reached this “enlightened” state of bliss, happiness and love for other people. Yet, in trying to iron-out the details of how to get it across to people, the core of their message slowly got replaced by some version of “Allow this medieval hierarchy to take your money, start wars and molest your children because it leads to eternal happiness”.
Oh, what nonsense!
Honestly, I really hope this goes through for the benefit of both Perl communities. Perl 6 is a really fun and useful language that continues in the path of Larry’s linguistical ideas; I think it’s hampered by the initial confusion with Perl 5 and vice-versa. They’ve diverged too much to share a name anymore, in my opinion.
The linguistical ideas are what bug me the most about Perl.
So, some natural languages inflect for number, right? English has singular and plurals.
Linguistical idea for Perl: inflect every variable with its number, in a sort of non-optional pseudo-Hungarian notation: $ for scalars (singular) and @ for arrays (plural).
Result: super annoying; @foo has to become $foo[0] because you’ve now indexed an array. This was one of the first things that PHP threw out: all variables are just $, period.
I think using natural languages as inspiration for programming languages is a very bad idea. Use a few mnemonics, sure, x = y if z else w instead of x = z ? y : w, that’s fine. Natural languages, however, are messy things very far removed from the order, predictability, and consistency that programmers value the most.
If Perl 6/Camelia requires me to do something like properly decline functions, remember the correct gender of every string literal, or conjugate modules, I’m outta here.
If Perl 6/Camelia requires me to do something like properly decline functions, remember the correct gender of every string literal, or conjugate modules, I’m outta here.
No, it’s going to be tonal, like Cantonese.
Entertainingly, iirc a lot of the grammatical fru-fru that natural languages seem to automatically collect, like numerical inflections or gendered nouns, seems to be there to provide redundancy in a vague and lossy medium. That’s sort of the opposite of what one often wants for programming languages, because programs are easier to design and modify when there’s less redundancy.
Result: super annoying; @foo has to become $foo[0] because you’ve now indexed an array. This was one of the first things that PHP threw out: all variables are just $, period.
This is solved in Perl 6. Indexing into arrays takes the form @foo[0].
Expressiveness.
Perl5 is amazing when it comes to working with arrays and hashes.
@capital_of{'Belize', 'Kyrgyzstan'} = ('Belmopan', 'Bishkek');
Heh, that’s a nice/horrible example.
I prefer not being too clever in my coding
my %capital_of = ( Belize => 'Belmopan', Kyrgyzstan => 'Bishkek' );
The “arrow” => is of course just a comma, because why not: http://www.modernperlbooks.com/mt/2013/04/the-fat-comma-and-clarity.html
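To make that concrete (a throwaway hash of my own, just for illustration): the fat comma also quotes the bareword on its left, but the resulting list is the same either way:

$ perl -E 'my %h = (Belize => "Belmopan"); say $h{Belize};'
Belmopan
$ perl -E 'my %h = ("Belize", "Belmopan"); say $h{Belize};'
Belmopan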
Oh, wow, thanks for explaining. I wouldn’t have been able to understand what was going on otherwise.
“Expressiveness”, ew. If you want expressiveness go write some poetry; we’re doing programming here.
Heh, reading the Wall article linked from the StackOverflow answer you refer to in your other comment:
Style not enforced except by peer pressure
We do not all have to write like Faulkner, or program like Dijkstra. […] Some language designers hope to enforce style through various typographical means such as forcing (more or less) one statement per line. This is all very well for poetry, but I don’t think I want to force everyone to write poetry in Perl. Such stylistic limits should be self-imposed, or at most policed by consensus among your buddies.
He and I have different interpretations of poetry, I guess. I’m thinking more of free verse, perhaps?
Linguistical idea for Perl: inflect every variable with its number, in a sort of non-optional pseudo-Hungarian notation: $ for scalars (singular) and @ for arrays (plural).
You’re confused about what’s going on. @ means I want a list. $ means I want a scalar. $f[0] means I get a scalar. @f[0] means I get a list of one element. Why is this important? Because I can say @f[0,1] to get the first two elements, or @f[0,-1] to get the first and the last.
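A quick illustration with a made-up array (my example, not from the comment above):

$ perl -E 'my @f = (10, 20, 30); say $f[0]; say "@f[0,1]"; say "@f[0,-1]";'
10
10 20
10 30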
I think using natural languages as inspiration for programming languages is a very bad idea. Use a few mnemonics, sure, x = y if z else w instead of x = z ? y : w, that’s fine. Natural languages, however, are messy things very far removed from the order, predictability, and consistency that programmers value the most.
There are millions of perl programmers, so you’re saying millions of programmers don’t value “order, predictability, and consistency” – how can you possibly have it both ways?
Lacking experience in perl, you have no ability to make a statement as to how effective it is, unless you’re willing to stand up YourFavouriteLanguage against every perl programmer out there. Who has that kind of hubris?
If Perl 6/Camelia requires me to do something like properly decline functions, remember the correct gender of every string literal, or conjugate modules, I’m outta here.
Now this is just getting absurd. You’re better than this.
I’m ashamed of the crustaceans that agree with you.
I’m ashamed of the crustaceans that agree with you.
Bashing on Perl has been a popular pastime since at least Perl 4. Just ignore the haters and keep on hacking.
People actually believe the set of things they don’t understand is equivalent (or nigh-equivalent) to the set of things they wouldn’t benefit from understanding. But to be proud of your ignorance is one thing; to bash on someone who has less of it (in an area) is something I’ll never find funny.
It’s not entirely clear to me what you’re stating here.
I’m saying that @JordiGH has stated they don’t like Perl, and think people who like Perl are… misguided. The upvotes may be from other users who agree, or just think the comment is well-written.
I like and use Perl(5), but I’m under no illusions concerning the language’s warts, nor some of the user base’s proclivities for “smart” solutions.
I think you can like Perl, but please don’t like it because of its “linguistical ideas”. Linguistics is a terrible place to get ideas for how to build a programming language, unless you’re willing to go whole hog and implement some kind of smart parser that tries to parse absolutely everything like Wolfram Alpha did and you’re basically doing NLP, not programming anymore.
I like and use Perl(5),
I don’t. I haven’t used Perl in a decade, but before that I used it off and on for two decades, and I think the “linguistical ideas” are definitely, without a doubt, the best part.
To that end, I’ll make the claim that someone who doesn’t like them almost certainly doesn’t understand them.
Variables are not “inflected” with their cardinality; this is just wrong, as I’ve demonstrated. “Declining functions?” “Gendered string literals?” What nonsense! Surely this is meant to be a joke, but it’s not funny.
First, this is not even remotely close to what Wall is describing when he talks about perl’s learnings from linguists, and so maybe at best it is a cheap joke, but at worst it fosters this incredibly pervasive negative attitude about things we don’t understand. Maybe you’re not sensitive to this, but where I work, pretty far out in the outskirts of software, the idea that there is anything not worth learning is preposterous from the start, and the claim that something millions of people know is not worth learning beggars belief. This doesn’t make us better programmers.
I appreciate people might like the (elitist) joke, just like some people like racist and sexist jokes, but our capacity to know is limited by our community’s acceptance of the journey we undertake to get that knowledge.
Ha ha. Sigils are dumb. Right up there with global warming. Seriously. This sort of shit doesn’t make me happy on a Friday.
Sigils are dumb, other languages get along fine without them, there’s a reason no other language has acquired them, and, going by other comments here, even Perl 6 got rid of the most annoying part of array indexing.
If you want to distinguish array slices from scalars from arrays, there is better syntax (e.g. array, element_of_an_array[0], single_element_slice_of_array[0..1]) that doesn’t require inflecting every variable with the same kind of redundancy we put up with in natural languages, nor having to think about evaluation contexts (because natural language requires so much context, you see!) or all the other cognitive load of Perl 5.
And this is just one example of “linguistical ideas” resulting in annoying things; I could go on, but I don’t want to anger you further.
Edit: This looks like a good article on the pros and cons of sigils:
Sigils are dumb
Fine. Accepted. Sigils are dumb.
there’s a reason no other language has acquired them
Except Rust and Swift (? sigil), PostScript (/ sigil), Python (@ and * and ** sigils), and actually quite a few languages. What makes this use of single characters “better” than Perl’s use of them? Why don’t we typically call those things sigils even though they’re required to appear next to every use (in that context) of the noun?
Look, the real issue here is you don’t like the way perl looks and you’re looking to invent a reason why. That’s what people do when they think learning things is going to be a waste of their time – it takes a long time to learn something properly, so if you can easily sum up the reasons why perl is a waste of your time, and tell a joke at the same time, then you post it online and get laughs and high-fives from other people just like you. Ha ha, yay internet points.
But here’s the thing: Perl won. There are literally millions of perl programmers that value “order, predictability, and consistency” so perl cannot possibly be designed to mess these things up as you claim. You are definitely, absolutely and completely wrong justifying your “opinion” with this:
I think using natural languages as inspiration for programming languages is a very bad idea. Use a few mnemonics, sure, x = y if z else w instead of x = z ? y : w, that’s fine. Natural languages, however, are messy things very far removed from the order, predictability, and consistency that programmers value the most.
Natural languages have nothing to do with why you don’t like Perl. You’ve just latched onto this to justify your opinion (for some weird reason; like we need to justify opinions!?), and you’re cheating yourself and others out of what’s good about natural languages (and good about perl in general) with this kind of shit.
But wait there’s more: There are things you don’t know that haven’t won. Some of them are really exciting, and they look even more like line noise than Perl does. You’re going to miss out on them because of this attitude, and you’re not going to have such clear-cut evidence that you’re wrong with your justification.
You think I’m defending the use of @array or $array[0]? I’m not. However, the use of a single character for a powerful (loaded) meaning is important and worth defending, and that’s why looking to linguists to figure out how they can formalise this overloading is important. The other way is Iverson, which is a mathematical (instead of linguistic) approach to notation, and (given your opinions about line noise) you won’t like that either. My opinion of perl is my own, and it’s built on a few decades of its use. I shouldn’t have to justify it to you or anyone else, but someone who is eager to learn something new is going to benefit more from where my opinion comes from (experience) than where yours does (avoidance).
One thing I think would help us all is if there were a single standard goal behind what makes programming better or worse in one situation instead of another, whether that situation is language, operating system, library, or just feel-good stuff.
Until someone shows me something better, mine is this: software is better if it is more correct (for a wider domain), shorter (in source code bytes), faster (in run time), and built more quickly. It is difficult to have all four, but this is the test I use when someone tells me some language is better than another: are (some subset of) programs shorter, faster, easier to write quickly? Different languages “win” for a different subset of programs, and it’s worth considering why they win. I’m always happy to have that conversation, but I’m really unhappy to argue with you about something which is really just your opinion (i.e. that perl sucks because you don’t like the way it looks).
Perl is pretty damn old, and I suspect part of the use of sigils was to aid parsing.
They can be considered a form of Hungarian notation, which was once considered de rigueur but has now apparently been relegated to the dustbin of history.
With Perl 5’s references, you can treat everything as a scalar, but you have to use the -> infix to access hash keys or array positions. So that removes @ and % at least! ;) I used to work with a guy who used hashrefs for everything.
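Roughly what that style looks like (invented data, just to show the arrows):

$ perl -E 'my $capital_of = { Belize => "Belmopan" }; my $langs = ["Perl 5", "Perl 6"]; say $capital_of->{Belize}; say $langs->[0];'
Belmopan
Perl 5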
Anyway, thanks for the discussion.
Ugh, stop throwing shade at me. It’s perfectly possible to understand contexts and sigils while simultaneously disliking them. Just because I dislike something doesn’t mean I don’t understand it.
You’re confused about what’s going on
I am not. I am paraphrasing Larry Wall who has actually said that these decorators are supposed to mimic singular and plurals.
Yes. You really are. Wall is explaining how he thinks about it. Unless you can think about these things in that way, this will not make sense. I can try to show you other ways to think about it (to correct your confusion), but your paraphrasing indeed misses the point: @ means plural, not @f. There’s a reason Larry says:
“So $ and @ are a little like this and these in English.”
and that’s because you’re intended to read @f as “these f”, not “the array f”.
I am not confused. I know that $foo[0] is a dollar sign because we’re referring to a scalar. I still think it’s stupid and introduces the famous “line noise” into Perl. Not only stupid people dislike Perl, and JAPHs are a Perl-exclusive phenomenon for a reason.
I hope this happens too. It makes the most sense. They are two totally different languages at this point, and there was never going to be the Python 2 -> Python 3 type transition with it.
It seems to work to the advantage of C++, which sneaks in by being confused with C.
And to people who think that doesn’t happen: People can’t tell the difference between C and “C” even if you explicitly point it out to them, let alone Java and JAVA, Mac and MAC, and Java and Javascript. The distinction between C and C++ is way too subtle to matter, especially if you’ve ever had a web search query as your Facebook status or most recent Tweet.
It’s called Universal Serial Bus and is supposed to provide a universal standard for connectivity. But, you are telling me that not only are the different types not compatible with each other, but the same type is not compatible with itself? God, do we suck at naming things?
Capitalism is about enriching yourself - not enriching your users and certainly not enriching society.
By enriching themselves capitalists produce positive externalities that enrich society. What an unnecessary statement that derails the discussion about this piece into a flamewar.
What an unnecessary statement that derails the discussion about this piece into a flamewar.
My guess is that is exactly what’s intended. The OP has a history of writing inflammatory blog posts.
I agree with the premise of the post that Git doesn’t do a good job supporting monorepos. Assuming the scaling problem of large repositories will go away with time, there is still the issue of how clients should interact with a monorepo. e.g. clients often don’t need every file at a particular commit or want the full history of the repo or the files being accessed. The feature support and UI for facilitating partial repo access is still horribly lacking.
Git has the concept of a “sparse checkout” where only a subset of files in a commit are manifested in the working directory. This is a powerful feature for monorepos, as it allows clients to only interact with files relevant to the given operation. Unfortunately, the UI for sparse checkouts in Git is horrible: it requires writing out file patterns to the .git/info/sparse-checkout
file and running a sequence of commands in just the right order for it to work. Practically nobody knows how to do this off the top of their head and anyone using sparse checkouts probably has the process abstracted away via a script. In contrast, I will point out that Mercurial allows you to store a file in the repository containing the patterns that constitute the “sparse profile” and when you do a clone or update, you can specify the path to the file containing the “sparse profile” and Mercurial takes care of fetching the file with sparse file patterns and expanding it to rules to populate the repository history and working directory. This is vastly more user intuitive than what Git provides for managing sparse checkouts. Not perfect, but much, much better. I encourage Git to steal this feature.
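For anyone who hasn’t run into it, that dance looks roughly like this (repository URL and path invented; details vary between Git versions, so treat it as a sketch):

$ git clone --no-checkout https://example.com/big/monorepo.git
$ cd monorepo
$ git config core.sparseCheckout true
$ echo 'services/payments/' >> .git/info/sparse-checkout
$ git checkout master

It works, but it is hardly discoverable, which is the point being made above.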
Another monorepo feature that is yet unexplored in both Git and Mercurial is partial repository branches and tags. Branches and tags are global to the entire repository. But for monorepos comprised of multiple projects, global branches and tags may not be appropriate. People may want branches and tags that only apply to a subset of the repo. If nothing else this can cut down on “symbol pollution.” This isn’t a radical idea, as per-project branches and tags are supported by version control systems like Subversion and CVS.
I agree with you, git famously was not designed for monorepos.
Also agreed, sub-tree checkouts and sub-tree history would be essential for monorepos. Nobody wants to see every file from every obscure project in their repo clones; it would eat up your attention.
I would also like storing giant asset files in the repo ( without the git-lfs hack ), more consistent commands, some sort of API where compilers and build systems can integrate with revision control, etc. Right now, it seems we have more and more tooling on top of Git to make it work in all these conditions, while git was designed to manage a single text-file-based repo, namely the Linux kernel.
Isn’t one of the design goals of the Java platform to allow application developers to forget about the platform on which they are deploying their applications? Docker aims to do this also, by bundling the application libraries along with application code to create a ‘container’ that allows you to forget about the base OS. So by combining Docker and Java, you are solving the same problem twice.
While (somewhat) true, you can have company policies, for example, mandating your tech.
Maybe the IT department wants everything to run in Docker containers and your app is written in Java. Or you are already using Docker and want to write a Java/JVM based app for various reasons. This basically means that at the end of the day, these two technologies have to work together.
And in my opinion they don’t overlap completely. Java is/was meant to be platform independent, but it still needs a JDK/JRE for the cases where the JDK/JRE is not bundled. The app might need native dependencies. You might want to sandbox the CPU or disk usage or network usage… There’s a bunch of reasons you might want Docker, too.
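For instance (image name and limits invented), the sandboxing part is something the JVM alone doesn’t give you:

$ docker run --rm --cpus=2 --memory=512m --network=none myorg/billing-service:1.4

The JVM inside still runs the same jar; Docker supplies the CPU, memory and network limits around it.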
Isn’t one of the design goals of the Java platform to allow application developers to forget about the platform on which they are deploying their applications?
“Compiled” Java programs can be run on different platforms, but like any other language you have to pay attention in certain areas if you want the code to actually work across different platforms (e.g. filesystem semantics). And as the article notes, you need to understand the memory requirements of a particular application in order to successfully deploy it in a production environment or you will have a bad time.
Docker aims to do this also
No, Docker aims to containerize services which has a host of benefits, mostly around easy dev environments, CI, CD, and devops. Platform independence could be one, but in practice, Linux is the only platform in serious use, and mostly on the AMD64 architecture.
Another reason why Chrome took over from Internet Explorer was probably because it was cross-platform. Today the only browser that provides browser sync and uBlock on most major platforms ( Android, Linux, Mac, Windows ) is probably Firefox, although I wish it was a bit more stable and snappy.
Hmm, it might be helpful to split automated testing into 3 levels:
There is probably another reason for free software projects to choose Gitlab over Github: Gitlab is free and open source while Github Enterprise is not. If Gitlab Inc says that the free software community can no longer use Gitlab, it can still be forked and things can go on.
I also think that new languages should ship with a compiler-based build system, i.e. the compiler is the build system.
Doesn’t Go already do this?
I think Cargo works well at this. It’s a wrapper for the compiler, but it feels so well-integrated that the distinction doesn’t matter. I’ve never had trouble with stale files with Cargo, or force-built like I’ve had to with Make.
Rustc does as much of the ‘build system’ stuff as Cargo. rustc src/main.rs finds all the files that main.rs needs to build, and builds them all at once. The only exception (i.e. all Cargo has to do) is pointing it at external libraries.
With external libraries, if you have an extern crate foo in your code, rustc will deal with that automagically as well if it can (it searches a search path for it; you can add things to the search path with -L deps_folder). Alternatively, regardless of whether or not you have an extern crate foo (optional as of Rust 2018; prior to that it was always necessary), you can define the dependency precisely with --extern foo=path/to/foo.rlib.
All cargo does is download dependencies, build them to rlibs as well, and add those --extern foo=path/to/foo declarations (and other options like -C opt-level=3) to a rustc command line based on a config file.
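Spelled out by hand, that’s roughly something like this (crate name and paths invented, so only a sketch of the kind of command line Cargo produces):

$ rustc --crate-name foo --crate-type rlib --edition 2018 -C opt-level=3 deps/foo/src/lib.rs -o target/libfoo.rlib
$ rustc --edition 2018 -C opt-level=3 --extern foo=target/libfoo.rlib src/main.rs -o target/myapp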
Oh, right! That’s neat. I did wonder whether Cargo looked through the module tree somehow, and the answer is that it doesn’t even need to.
GHC tried to do this. I don’t personally feel that it was a good idea, or that it worked out very well. Fortunately, it wasn’t done in a way that interfered with the later development of Cabal.
Having written a bunch of Nix code, to invoke Cabal, to set up ghc-pkg, to invoke ghc, I would say the situation is less than ideal (and don’t get me started on Stack..) ;)
Did you read the blog? The reason I want the compiler to be involved is to have dependencies calculated at the AST node level. That’s definitely not what Go does.
I read it; I was under the impression that your main point was that build systems should “Just Work” without all sorts of Makefile muckery, and that “the compiler [should be] the build system”. The comment about AST based dependencies seemed like a footnote to this.
The go command already works like that. I suppose AST based dependencies could be added to the implementation, but I’m not sure if that would be a major benefit. The Go compiler is reasonably fast to start with (although not as fast as it used to be), and the build cache introduced in Go 1.10 works pretty well already.
I want the compiler … to [calculate dependencies] at the AST node level. That’s definitely not what Go does.
Technically the go tool isn’t the Go compiler (6c/8c), but practically it is, and since the introduction of modules, it definitely parses and automatically resolves dependencies from source. (Even previous, separate tools like dep parsed the AST and resolved the dep graph with a single command.)
There’s https://www.opencvs.org/ as well, but luckily rsync is a smaller (I think) project to replicate
Not to mention that Andrew Tridgell’s PhD thesis is a nice and short read.
Those users that get nonfree drivers would see what their moral cost is, and that there are people in the community who refuse to pay that cost. They would have the chance to reflect afterwards on the situation that their flawed computers have put them in, and about how to change that situation, in the small and in the large.
I think this will have the opposite effect to what Stallman intends. It is very important to people that their computers work. They might look at those people who went to the devil, obtained a functioning computer and decide that free software is not worth it.
However, I think if you show them the advantages of the freedom that free software provides ( customization options, ability to automate their boring chores, ability to revive their old computers etc ), they might show increased interest in free software which could prove to be more fruitful in the long run.
I also think it’s unlikely to have the desired effect. Aside from the general weirdness of someone wearing a sign/in costume, they’re going to see that the group norm is to defect from FSF’s ideals for basic functionality, then get fun practice explicitly doing so. (Unless my intuition about what percent of people want wifi working on their laptop is way, way off.)
I’ve attended an install fest in Conway, Arkansas in 2003 with 12 people in attendance, and one in Boston two or three years ago that was large enough to warrant its own room at a conference.
In both cases, almost all the attendees 1) had already drunk the (delicious) free software kool-aid and 2) were probably a little weird themselves. 3) The hosts of said events are invariably charismatic–that is, perceptive, smiling, and outgoing.
Whimsy and kooky don’t sound right when described in English prose or when seen on video, but live and in person these things are fine. “You had to have been there.”
Stallman should simply patch his article to leave out the specifics and better describe the goal of the piece: to tell the world that RMS has decided that giving users experience with GNU on working machines through use of non-free drivers is more desirable than giving them experience with GNU on non-working machines through use of fully free software stacks, though the FSF will publicly deny it if anyone asks. (see that bit about no devils at FSF events.)
It is not sufficient that the design is open; the manufacturing should also be open and open for inspection. Otherwise the label may say ‘Design X’, but the hardware may in reality be ‘Design X + hardware backdoor’.
Czech Pirates are currently pushing for this in reaction to the Huawei panic.
We are running into a solid wall of the free market doctrine. Even with respect to the national critical infrastructure. Government security experts are with us, but the government is dismissive.
Fun times.
There are too many patents and patent suits for hardware to be a free market. It runs under monopoly and oligopoly rules depending on what you’re talking about. Most competition is in niche markets or platforms with royalties going back to big players. The government might have to pass laws blocking patent suits on computer hardware and software to generate a free market in that region. Then, development costs might be high enough that it still isn’t worthwhile.
At the moment, we are overpaying for “mil-grade” 100Mbps routers by a factor of about 60. It’s not only about money.
I don’t know what mil-grade means here since military is mostly using either COTS stuff or COTS-like stuff in MILSPEC packaging. For instance, I’ve seen TEMPEST setups that were Cisco inside. The older guards, which had high-security design, had a high cost with little adoption (low volume). That meant they were about $100k-200k per unit. Few organizations still buy them.
The dev cost and masks for high-performance, custom hardware can be tens of millions of dollars. Add military certification to that. They’d start with similar economics to guards with five to six digit cost per unit. The volume could be higher given they fit a larger market (i.e. network gear). That could get per unit price down. Any organization trying to sell them low per-unit from the start has to risk losing millions to tens of millions of dollars they spent upfront.
That’s why my recommendation was to get high-security, source-available software stacks on good, proprietary hardware first to generate revenue while improving things. Also, port them to commodity routers to be sold at a low profit just to get more consumers using them. Government and enterprise sales of hardware and licensing of software would build up to the revenue needed to do custom hardware. That, mixed with some DARPA or military grants for the initial build.
There is this push to comply with the 2% GDP military spending inside NATO that is not really happening for a lot of EU countries. What we hope for is to get some of them to start funding R&D of more open devices as a way to simultaneously satisfy the 2% obligation, generate some meaningful grants for the academic sector and boost domestic manufacturing.
We still don’t know how to sell this to the stakeholders. Maybe with a few MEPs lobbying for this…
There was a post on here recently linking to a talk by Bunnie Huang on a similar subject: https://lobste.rs/s/bhxaau/supply_chain_security_talk
I guess the author has not used truly horrible UIs like Collabnet. I once sent someone a link to a file by email. It had all sorts of junk in the URL and after a while clicking on the link did not produce the intended result.
90% of the time I’m on this page, I’m looking for the history button
Not me. When I am on a page looking at the code, I need to see the code
And 90% of the time, it takes me 30 seconds to find it. I would love to get a hold of the analytics for how many people star a repository after looking at a random file buried somewhere in the tree
30 seconds is possibly an exaggeration. But it is possible that the star button is not removed because of PJAX: https://github.com/defunkt/jquery-pjax. It is probably why the Github UI seems (to me at least) very fast.
Also missing: any way to even get back to the file
Try the back button on your browser
Try the back button on your browser
Sad state of affairs when the back button has been practically killed off by web UIs.
My personal banks don’t deal with it at all, though the bank I use for my company fortunately does. Lots of governmental sites break or deny when going back.
Some sites explicitly tell you not to hit back, as if the developers never heard of idempotence.
Wonder if this started with having html elements with a javascript call to go back?
90% of the time I’m on this page, I’m looking for the history button
Not me. When I am on a page looking at the code, I need to see the code
You are really just saying the same thing. Both the code and the history button are under several rows of UI stuff.
Try the back button on your browser
Good luck. It may work on shithub for now, but many sites break catastrophically when you hit the magic destroy^W back button.
shithub
I don’t think you should use shithub. At best it is an annoying addition that doesn’t add anything to your statement and at worst it reduces it to a childish rant and shows you don’t want to engage in serious discussion.
I hadn’t heard that one before, and found a great explanation from a relic of an internet past… https://everything2.com/title/MICROS~1
Nice work!
I still can’t get past his “Silence is golden” admonishment. It’s utter crap. Every process should have a meaningful return code. Knowing that the third tool from the left in a giant pipeline produced no output or errors is exactly NOT helpful in any way.
With respect, I think this is a misunderstanding. “Silence is golden” is fine with return codes. 0 means success. That’s silence.
If you need to produce errors, then definitely produce them on stderr, but don’t mix them into stdout.
If you want to provide status information, it should be an option, not a default. By default, you have silence on success. This puts the user in control and keeps them from being swamped by what every tool thinks is important.
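A trivial illustration (filenames invented; exact error wording varies by implementation):

$ cp notes.txt /backup/ && echo ok
ok
$ cp missing.txt /backup/
cp: cannot stat 'missing.txt': No such file or directory
$ echo $?
1

Success is silent and the exit status says so; failure is noisy on stderr and non-zero.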
That makes a lot of sense to me. This may be one of those instances where sloppy coders interpret a rule the way they want to.
I’ve just been very badly burned by people who quote this and take it to mean that as long as their program produces nothing on stdout or stderr it should be seen to have succeeded, and they can set the return code to garbage or have it come back non-zero on success.
no output or errors
If there’s an error, it should output stuff. That’s actually one of the other rules:
Rule of Repair: When you must fail, fail noisily and as soon as possible.
Software should be transparent in the way that it fails, as well as in normal operation. It’s best when software can cope with unexpected conditions by adapting to them, but the worst kinds of bugs are those in which the repair doesn’t succeed and the problem quietly causes corruption that doesn’t show up until much later.
As with all “rules”, common sense applies of course. I think the biggest issue is just useless output that obscures actual useful stuff. I wrote a bit of a rant on apt-get a few years ago; probably not my best article, but it does make some good points IMHO, among which:
$ sudo apt-get remove emacs24
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following extra packages will be installed:
emacs24-lucid xaw3dg
The following packages will be REMOVED
emacs24
The following NEW packages will be installed
emacs24-lucid xaw3dg
0 to upgrade, 2 to newly install, 1 to remove and 0 not to upgrade.
Need to get 3,534 kB of archives.
After this operation, 531 kB of additional disk space will be used.
Do you want to continue? [Y/n]
Do you see the problem here? Sandwiched in between a lot of useless info is the information that:
The following extra packages will be installed:
emacs24-lucid xaw3dg
Which is really unexpected, but also easy to miss in this big wall o’ text if you’re not carefully reading everything. Since apt-get always outputs walls of text, users tend to skim over it a bit too quickly (I do, anyway).
I once removed the wireless drivers on my Dell XPS by accident; it shipped with Ubuntu by default, but with extra Dell packages for the drivers. It was a pain to fix.
There is much more to be said about this, but terminals are user interfaces, and text programs require careful UX design. Burdening the user with loads of informational messages by default is usually not a good idea.
That said, a -v switch for verbose output to aid debugging is usually a good thing. The book discusses that too if I recall correctly.
I think this is about superfluous output. For example consider rm. Imagine if it said something like:
$ rm *
Deleting file "file1.txt"
Successfully deleted file "file1.txt"
Deleting file "file2.txt"
Successfully deleted file "file2.txt"
Done
Unix philosophy recommends that this should be avoided
That’s a really good point, and I totally get that. Like I said in a previous reply, I think the way the rule is written tends to encourage sloppy coders in their sloppy behavior, but the intent is good.
Five Proofs for the Existence of God - Edward Feser: Goes through the classical arguments for God’s existence and points out the problems with some of the standard counter-arguments
Zero to One: This book is very concise and to the point and gives a different perspective on the world: avoid competition, all successful companies are different, a small handful of companies radically outperform all other companies (power law), etc.
I thought the conspiracy theory folks were wrong. It’s looking like they were right. Google is indeed doing some shady stuff but I still think the outrage is overblown. It’s a browser engine and Microsoft engineers have the skill set to fork it at any point down the line. In the short term the average user gets better compatibility which seems like a win overall even if the diversity proponents are a little upset.
I thought the conspiracy theory folks were wrong. It’s looking like they were right. Google is indeed doing some shady stuff
If it’s an organization, you should always look at their incentives to know whether they have a high likelihood of going bad. Google was a for-profit company aiming for IPO. Their model was collecting info on people (aka a surveillance company). These are all incentives for them to do shady stuff. Even if they want Don’t Be Evil, the owners typically lose a lot of control over whether they do that after they IPO. That’s because boards and shareholders that want numbers to go up are in control. After IPOs, decent companies start becoming more evil most of the time, since evil is required to always make specific numbers go up or down. Bad incentives.
It’s why I push public-benefit companies, non-profits, foundations, and coops here as the best structures to use for morally-focused businesses. There’s bad things that can still happen in these models. They just naturally push organizations’ actions in less-evil directions than publicly-traded, for-profit companies or VC companies trying to become them. I strongly advise against paying for or contributing to products of the latter unless protections are built-in for the users with regards to lock-in and their data. An example would be core product open-sourced with a patent grant.
Capitalism (or if you prefer, economics) isn’t a “conspiracy theory”. Neither is rudimentary business strategy. It’s amusing to me how many smart, competent, highly educated technical people fail so completely to understand these things, and come up with all kinds of fanciful stories to bridge the gap. Stories about the role and purpose of the W3C, for instance.
Having read all these hand-wringy threads about implementation diversity in the wake of this EdgeHTML move, I wonder how many would complain about, say, the lack of a competitor to the Linux kernel? There’s only one kernel, it’s financially supported by numerous mutually distrustful big businesses and used by nearly everybody, its arbitrary decisions about its API are de-facto hard standards… and yet I don’t hear much wailing and gnashing, even from the BSD folks. How is the linux kernel different than Chromium?
While I actually am concerned about a lack of diversity in server-side infrastructure, the Linux kernel benefits, as it were, from fragmentation.
There’s only one kernel
This simply isn’t true. There’s only one development effort to contribute to the kernel. There are, on the other hand, many branches of the kernel tuned to different needs. As somebody who spent his entire day at work today mixing and matching different kernel variants and kernel modules to finally get something to work, I’m painfully aware of the fragmentation.
There’s another big difference, though, and that’s in leadership. Chromium is run by Google. It’s open source, sure, but if you want your commits into Chromium, it’s gonna go through Google. The documentation for how to contribute is littered with Google-specific terminology, down to including the special internal “go” links that only Google employees can use.
Linux is run by a non-profit. Sure, they take money from big companies. And yes, money can certainly be a corrupting influence. But because Linux is developed in public, a great deal of that corruption can be called out before it escalates. There have been more than a few developer holy wars over perceived corruption in the Linux kernel, down to allowing it to be “tainted” with closed source drivers. The GPL and the underlying philosophy of free software helps prevent and manage those kinds of attacks against the organization. Also, Linux takes money from multiple companies, many of which are in competition with each other. It is in Linux’s best interest to not provide competitive leverage to any singular entity, and instead focus on being the best OS it can be.
Performance tuning is qualitatively different than ABI compatibility. Otherwise, I think you make some great points. Thanks!
If there is an internal memo at Google along the lines of “try to break the other web browsers’ perf as much as possible” that is not “rudimentary business strategy”, it’s “ground for anti-trust action”.
It’s as good of a strategy as helping the Malaysian PM launder money and getting a 10% cut (which… hey might still pay off)
Main difference is that there are many interoperable implementations of *nix/SUS/POSIX libc/syscall parts and glibc+Linux is only one. A very popular one, but certainly not the only. Software that runs on all (or most) *nix variants is incredibly common, and when something is gratuitously incompatible (by being glibc+Linux or MacOS only) you do hear the others complain.
Software that runs on all (or most) *nix variants is incredibly common
If by “runs on” you mean “can be ported to and recompiled without major effort”, then I agree, and you’re absolutely right to point out the other parts of the POSIX and libc ecosystem that makes this possible. But I can’t think of any software that’s binary compatible between different POSIX-ish OSs. I doubt that’s even possible.
On the other side of the analogy, in fairness, complex commerical web apps have long supported various incompatible quirks of multiple vendor’s browsers.
How is the linux kernel different than Chromium?
As you just said it,
financially supported by numerous mutually distrustful big businesses
There’s no one company making decisions about the kernel. That’s the difference.
There’s no one company making decisions about the kernel. That’s the difference.
Here comes fuchsia and Google’s money :/
I am disgusted with the Linux monoculture (and the Linux kernel in general), even more so than with the Chrome monoculture. But that fight was fought a couple decades ago, it’s kinda late to be complaining about it. These complaints won’t be heard, and even if they are heard, nobody cares. The few who care are hardly enough to make a difference. Yes we have the BSDs (and I use one) and they’re in a minority position, kinda like Firefox…
How much of a monoculture is Linux, really? Every distro tweaks the kernel at least to some extent, there are a lot of patch sets for it in the open, and if you install a distro you get to choose your tools from the window manager onwards.
The corporatization of Linux is IMO problematic. Linus hasn’t sent that many angry emails proportionally, but they make the headlines every time, so my conspiracy theory is that the corporations that paid big bucks for board seats on the Foundation bullied him into taking his break.
We know that some kernel decisions have been made in the interest of corporations that employ maintainers, so this could be the tip of an iceberg.
Like the old Finnish saying “you sing his songs whose bread you eat”.
It’s a browser engine and Microsoft engineers have the skill set to fork it at any point down the line.
I think this is true. If Google screws us over with Chrome, we can switch to Firefox, Vivaldi, Opera, Brave etc and still have an acceptable computing experience.
The real concerns for technological freedom today are Google’s web application dominance and hardware dominance from Intel. It would be very difficult to get a usable phone or personal server or navigation software etc without the blessing of Google and Intel. This is where we need more alternatives and more open systems.
Right now if Google or Intel wants to, they can make your life really hard.
Do note that all but Firefox are somewhat controlled by Google.
Chrome would probably have been easier to subvert if it wasn’t open source; now it’s a kind of cancer in most “alternative” browsers.
I don’t know. MIPS is open sourcing their hardware and there’s also RISC-V. I think the issue is that as programmers and engineers we don’t collectively have the willpower to make these big organizations behave because defecting is advantageous. Join the union and have moral superiority or be a mercenary and get showered with cash. Right now everyone chooses cash and as long as this is the case large corporations will continue to press their advantage.
“Join the union and have moral superiority or be a mercenary and get showered with cash. Right now everyone chooses cash and as long as this is the case large corporations will continue to press their advantage.”
Boom. You nailed it! I’ve been calling it out in threads on politics and business practices. Most of the time, people that say they’re about specific things will ignore them for money or try to rationalize how supporting it is good due to other benefits they can achieve within the corruption. Human nature. You’re also bringing in organizations representing developers to get better pay, benefits, and so on. Developers are ignoring doing that more than creatives in some other fields.
Yup. I’m not saying becoming organized will solve all problems. At the end of the day all I want is ethics and professional codes of conduct that have some teeth. But I think the game is rigged against this happening.
I don’t think RISC-V is ready for general purpose use. Some CPUs have been manufactured, but it would be difficult to buy a laptop or phone that carries one. I also think that manufacturing options are too limited. Acceptable CPUs can come from maybe Intel and TSMC, and who knows what code/sub-systems they insert into those.
This area needs to be more like LibreOffice vs Microsoft Office vs Google Docs vs others on Linux vs Windows vs MacOS vs others
It seems to me that if you want a phone with longevity, a phone like Fairphone seems like a better bet. Fairphone seem to sell spare parts and LineageOS seems to already support Fairphone 2. There is a fair chance LineageOS will support the latest version (Fairphone 3) too.
As a happy user of SailfishOS on FairPhone2, I would still point out that not all spares have always been readily available. This was bad when I needed a new battery. Also the mic randomly died and I couldn’t talk on the phone, but that spare was available.
Now this overheats and reboots a bit too often for comfort, and if it’s a hardware issue, the warranty’s run out and the baseboard is not available. If it’s a software issue, I should prove the situation with an officially supported OS.
These are the reasons no sane person gives a shit. They want phones that actually do things and work(!)
For the fellow insanes I recommend the FairPhone, but with the caveat that spares aren’t always there, and maybe stick to their own de-Googled Android, or just suck it up and use Google.