It’s interesting because the author is not thoughtlessly in favour of GitHub, but I think that his rebuttals are incomplete and ultimately his point is incorrect.
Code changes are proposed by making another Github-hosted project (a “fork”), modifying a remote branch, and using the GUI to open a pull request from your branch to the original.
That is a bit of a simplification, and completely ignores the fact that GitHub has an API. So does GitLab and most other similar offerings. You can work with GitHub, use all of its features, without ever having to open a browser. Ok, maybe once, to create an OAuth token.
Whether using the web UI or the API, one is still performing the quoted steps (which notably never mention the browser).
A certain level of discussion is useful, but once it splits up into longer sub-threads, it becomes way too easy to lose sight of the whole picture.
That’s typically the result of a poor email client. We’ve had threaded discussions since at least the early 90s, and we’ve learned a lot about how to present them well. Tools such as gnus do a very good job with this — the rest of the world shouldn’t be held back because some people use poor tools.
Another nice effect is that other people can carry the patch to the finish line if the original author stops caring or being involved.
On GitHub, if the original proposer goes MIA, anyone can take the pull request, update it, and push it forward. Just like on a mailing list. The difference is that this’ll start a new pull request, which is not unreasonable: a lot of time can pass between the original request, and someone else taking up the mantle. In that case, it can be a good idea to start a new thread, instead of resurrecting an ancient one.
What he sees as a benefit I see as a detriment: the new PR will lose the history of the old PR. Again, I think this is a tooling issue: with good tools resurrecting an ancient thread is just no big deal. Why use poor tools?
While web apps deliver a centrally-controlled user interface, native applications allow each person to customize their own experience.
GitHub has an API. There are plenty of IDE integrations. You can customize your user-experience just as much as with an email-driven workflow. You are not limited to the GitHub UI.
This is somewhat disingenuous: while GitHub does indeed have an API, the number & maturity of GitHub API clients is rather less than the number & maturity of SMTP+IMAP clients. We’ve spent about half a century refining the email interface: it’s pretty good.
Granted, it is not an RFC, and you are at the mercy of GitHub to continue providing it. But then, you are often at the mercy of your email provider too.
There’s a huge difference between being able to easily download all of one’s email (e.g. with OfflineIMAP) and only being at the mercy of one’s email provider going down, and being at the mercy of GitHub preserving its API for all time. The number of tools which exist for handling offline mail archives is huge; the number of tools for dealing with offline GitHub project archives is … small. Indeed, until today I’d have expected it to be almost zero.
Github can legally delete projects or users with or without cause.
Whoever is hosting your mailing list archives, or your mail, can do the same. It’s not unheard of.
But of course my own maildir on my own machine will remain.
I use & like GitHub, and I don’t think email is perfect. I think we’re quite a ways from the perfect git UI — but I’m fairly certain that it’s not a centralised website.
We’ve spent about half a century refining the email interface: it’s pretty good.
We’ve spent about half a century refining the email interface. Very good clients exist… but most people still use Gmail regardless.
That’s typically the result of a poor email client. We’ve had threaded discussions since at least the early 90s, and we’ve learned a lot about how to present them well. Tools such as gnus do a very good job with this — the rest of the world shouldn’t be held back because some people use poor tools.
I have never seen an email client that presented threaded discussions well. Even if such a client exists, mailing-list discussions are always a mess of incomplete quoting. And how could they not be, when the whole mailing list model is: denormalise and flatten all your structured data into a stream of 7-bit ASCII text, send a copy to every subscriber, and then hope that they’re able to successfully guess what the original structured data was.
You could maybe make a case for using an NNTP newsgroup for project discussion, but trying to squeeze it through email is inherently always going to be a lossy process. The rest of the world shouldn’t be held back because some people use poor tools indeed - that means not insisting that all code discussion has to happen via flat streams of 7-bit ASCII just because some people’s tools can’t handle anything more structured.
I agree with there being value in multipolar standards and decentralization. Between a structured but centralised API and an unstructured one with a broader ecosystem, well, there are arguments for both sides. But insisting that everything should be done via email is not the way forward; rather, we should argue for an open standard that can accommodate PRs in a structured form that would preserve the very real advantages people get from GitHub. (Perhaps something XMPP-based? Perhaps just a question of asking GitHub to submit their existing API to some kind of standards-track process).
You could maybe make a case for using an NNTP newsgroup for project discussion
While I love NNTP, the data format is identical to email, so if you think a newsgroup can have nice threads, then so could a mailing list. They’re just different network distribution protocols for the same data format.
accommodate PRs in a structured form
Even if you discount structured text, emails and newsgroup posts can contain content in any MIME type, so a structured PR format could ride along with descriptive text in an email just fine.
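As a rough sketch of how that could look (the JSON field names below are invented purely for illustration; any real standard would need to define them), a structured PR payload can sit in its own MIME part next to the human-readable description, using nothing beyond Python’s standard email library:

```python
import json
from email.message import EmailMessage

# Human-readable description as the main body...
msg = EmailMessage()
msg["Subject"] = "[PATCH] Fix off-by-one in tokenizer"
msg.set_content("This patch fixes an off-by-one error in the tokenizer.")

# ...with a structured PR payload attached as its own MIME part.
# These field names are hypothetical, not any existing standard.
pr = {"base": "main", "head": "fix/off-by-one", "commits": ["abc123"]}
msg.add_attachment(
    json.dumps(pr).encode(),
    maintype="application",
    subtype="json",
    filename="pull-request.json",
)

# The message is now multipart: text for humans, JSON for tools.
print(msg.get_content_type())  # multipart/mixed
```

A tool-aware client could parse the JSON part, while any plain mail reader still shows the description.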
Even if you discount structured text, emails and newsgroup posts can contain content in any MIME type, so a structured PR format could ride along with descriptive text in an email just fine.
Sure, but I’d expect that the people who complain about GitHub would also complain about the use of MIME email.
You could maybe make a case for using an NNTP newsgroup for project discussion, but trying to squeeze it through email is inherently always going to be a lossy process.
Not really — Gnus has offered a newsgroup-reader interface to email for decades, and Gmane has offered actual NNTP newsgroups for mailing lists for 16 years.
But insisting that everything should be done via email is not the way forward; rather, we should argue for an open standard that can accommodate PRs in a structured form that would preserve the very real advantages people get from GitHub. (Perhaps something XMPP-based? Perhaps just a question of asking GitHub to submit their existing API to some kind of standards-track process).
I’m not insisting on email! It’s decent but not great. What I would insist on, were I insisting on anything, is real decentralisation: issues should be inside the repo itself, and PRs should be in some sort of pararepo structure, so that nothing more than a file server (whether HTTP or otherwise) is required.
…the new PR will lose the history of the old PR.
Why not just link to it?
This is somewhat disingenuous: while GitHub does indeed have an API, the number & maturity of GitHub API clients is rather less than the number & maturity of SMTP+IMAP clients. We’ve spent about half a century refining the email interface: it’s pretty good.
That strikes me as disingenuous as well. Email is older. Of course it has more clients, with varying degrees of maturity & ease of use. That has no bearing on whether the GitHub API or an email-based workflow is a better solution. Your point is taken; the GitHub API is not yet “Just Add Water!”-tier. But the clients and maturity will come in time, as they do with all well-used interfaces.
Github can legally delete projects or users with or without cause.
Whoever is hosting your mailing list archives, or your mail, can do the same. It’s not unheard of.
But of course my own maildir on my own machine will remain.
Meanwhile, the local copy of my git repo will remain.
I think we’re quite a ways from the perfect git UI — but I’m fairly certain that it’s not a centralised website.
I’m fairly certain that it’s a website of some sort—that is, if you intend on using a definition of “perfect” that scales to those with preferences & levels of experience that differ from yours.
Meanwhile, the local copy of my git repo will remain.
Which contains no issues, no discussion, no PRs — just the code.
I’d like to see a standard for including all that inside or around a repo, somehow (PRs can’t really live in a repo, but maybe they can live in some sort of meta- or para-repo).
I’m fairly certain that it’s a website of some sort—that is, if you intend on using a definition of “perfect” that scales to those with preferences & levels of experience that differ from yours.
Why on earth would I use someone else’s definition? I’m arguing for my position, not someone else’s. And I choose to categorically reject any solution which relies on a) a single proprietary server and/or b) a JavaScript-laden website.
Meanwhile, the local copy of my git repo will remain.
Which contains no issues, no discussion, no PRs — just the code.
Doesn’t that strike you as a shortcoming of Git, rather than GitHub? I think this may be what you are getting at.
Why on earth would I use someone else’s definition?
Because there are other software developers, too.
I choose to categorically reject any solution which relies on a) a single proprietary server and/or b) a JavaScript-laden website.
I never said anything about reliance. That being said, I think the availability of a good, idiomatic web interface is a must nowadays where ease-of-use is concerned. If you don’t agree with that, then you can’t possibly understand why GitHub is so popular.
(author here)
Whether using the web UI or the API, one is still performing the quoted steps
Indeed, but the difference between using the UI and the API, is that the latter is much easier to build tooling around. For example, to start contributing to a random GitHub repo, I need to do the following steps:
It is a heavily customised workflow, something that suits me. Yet, it still uses GitHub under the hood, and I’m not limited to what the web UI has to offer. The API can be built upon; it can be enriched, or customised to fit one’s desires and habits. What I need to do to get the same steps done differs drastically. Yes, my tooling does the same stuff under the hood - but that’s the point: it hides those details from me!
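As a concrete illustration of building on the API (the repo name and token below are placeholders, and the request is constructed but deliberately never sent), opening a pull request through GitHub’s REST API is one authenticated POST to the /pulls endpoint:

```python
import json
import urllib.request

# Placeholders; real tooling would read these from config.
TOKEN = "ghp_example"
REPO = "someuser/somerepo"

payload = {
    "title": "Fix typo in README",
    "head": "my-fork:fix-typo",   # source branch
    "base": "master",             # target branch
    "body": "Small typo fix.",
}

req = urllib.request.Request(
    f"https://api.github.com/repos/{REPO}/pulls",
    data=json.dumps(payload).encode(),
    headers={
        "Authorization": f"token {TOKEN}",
        "Accept": "application/vnd.github.v3+json",
    },
    method="POST",
)
# urllib.request.urlopen(req) would actually send it; omitted here
# so the sketch has no side effects.
```

Wrap a few calls like this in scripts or editor commands and you never need the browser at all.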
(which notably never mention the browser).
Near the end of the article I replied to:
“Tools can work together, rather than having a GUI locked in the browser.”
From this, I concluded that the article was written with the GitHub web UI in mind. Because the API composes very well with other tools, and you are not locked into a browser.
That’s typically the result of a poor email client.
I used Gnus in the past, it’s a great client. But my issue with long threads and lots of branches is not that displaying them is an issue - it isn’t. Modern clients can do an amazing job making sense of them. My problem is the cognitive load of having to keep at least some of it in mind. Tools can help with that, but I can only scale so far. There are people smarter than I who can deal with these threads, I prefer not to.
What he sees as a benefit I see as a detriment: the new PR will lose the history of the old PR. Again, I think this is a tooling issue: with good tools resurrecting an ancient thread is just no big deal. Why use poor tools?
The new PR can still reference the old PR, which is not unlike having an In-Reply-To header that points to a message not in one’s archive. It’s possible to build tooling on top of this that would go and fetch the original PR for context.
Mind you, I can imagine a few ways the GitHub workflow could be improved, that would make this kind of thing easier, and less likely to lose history. I’d still rather have an API than e-mail, though.
This is somewhat disingenuous: while GitHub does indeed have an API, the number & maturity of GitHub API clients is rather less than the number & maturity of SMTP+IMAP clients. We’ve spent about half a century refining the email interface: it’s pretty good.
Refining? You mean that most MUAs look just like they did thirty years ago? There were many quality-of-life improvements, sure. Lots of work to make them play better with other tools (this is mostly true for tty clients and Emacs MUAs, as far as I’ve seen). But one of the most widespread MUAs (Gmail) is absolutely terrible when it comes to working with code and mailing lists. Same goes for Outlook. The email interface story is quite sad :/
There’s a huge difference between being able to easily download all of one’s email (e.g. with OfflineIMAP) and only being at the mercy of one’s email provider going down, and being at the mercy of GitHub preserving its API for all time.
Yeah, there are more options to back up your mail. It has been around longer too, so that’s to be expected. Email is also a larger market. But there are a reasonable number of tools to help backing up one’s GitHub too. And one always makes backups anyway, just in case, right?
So yeah, there is a difference. But both are doable right now, with tools that already exist, and as such, I don’t see the need for such a fuss about it.
I use & like GitHub, and I don’t think email is perfect. I think we’re quite a ways from the perfect git UI — but I’m fairly certain that it’s not a centralised website.
I don’t think GitHub is anywhere near perfect, especially not when we consider that it is proprietary software. It being centralised does have advantages however (discoverability, not needing to register/subscribe/whatever to N+1 places, and so on).
I disagree, because that will only lead to a morass of incompatible software. You refuse for your software to be run by law enforcement, he refuses for his software to be run by drug dealers, I refuse for my software to be run by Yankees — where does it all end?
It’s a profoundly illiberal attitude, and the end result will be that everyone would have to build his own software stack from scratch.
“It’s a great way to make sure proprietary software is always well funded and has congress/parliament in their corner.” (TaylorSpokeApe)
I don’t buy the slippery slope argument. There are published codes of ethics for professional software people, e.g. by the BCS or ACM, which may make good templates of what constitutes ethical activity within which to use software.
But by all means, if you want to give stuff to the drug dealing Yankee cop when someone else refuses to, please do so.
Using one of those codes would be one angle to go for ethical consensus, but precisely because they’re attempts at ethical consensus in fairly broad populations, they mostly don’t do what many of the people wanting restrictions on types of usage would want. One of the more common desires for field-of-usage restriction is, basically, “ban the US/UK military from using my stuff”. But the ACM/BCS ethics codes, and perhaps even more their bodies’ enforcement practices, are pretty much designed so that US/UK military / DARPA / CDE activity doesn’t violate them, since it would be impossible to get broad enough consensus to pass an ACM code of ethics that banned DARPA activity (which funds many ACM members’ work).
It seems even worse if you want an international software license. Even given the ACM or BCS text as written, you would get completely different answers about what violates it or doesn’t, if you went to five different countries with different cultures and legal traditions. The ACM code, at least, has a specific enforcement mechanism defined, which includes mainly US-based people. Is that a viable basis for a worldwide license, Americans deciding on ethics for everyone else? Or do you take the text excluding the enforcement mechanism, and let each country decide what things violate the text as written or not? Then you get very different answers in different places. Do we need some kind of international ethics court under UN auspices instead, to come up with a global verdict?
I had a thought to write software so stupid no government would use it but then I remembered linux exists
It’s not a slippery slope. The example in the OP link would make the software incompatible with just about everything other than stuff of the same license or proprietary software. An MIT project would be unable to use any of the code from a project with such a rule.
One of the things that I like about using a tiling window manager rather than a desktop environment is that I never see icons. That’s not quite true — I see them in Firefox — but it’s mostly true. Rather than constantly finding little things to point & click on, I just use my computer all day long.
What I’m trying to say is that maybe obsessing over icons is the wrong answer. Maybe what we need is to radically reimagine the human-computer interface. Tiling iconless WMs are probably not the answer for most people, but — maybe something new is.
I have StumpWM commands set up for command functions like ‘switch to emacs,’ ‘switch to Firefox,’ ‘switch to JavaScript-enabled Firefox,’ ‘switch to console’ &c.; I bind the really-commonly-used ones to keys and just use StumpWM’s colon (prefix ;) to execute them quickly. For other stuff I’ll either execute them directly with prefix !, or use the console or emacs shell.
It’s not terribly discoverable, which is why I won’t say that it’s the wave of the future. But it’s so much faster than e.g. scrolling through macOS, Windows, Android or GNOME stuff.
I mean, I use OS X almost exclusively, and I never see icons, either. I run everything through Alfred and keep my windows tiled or full screen. It’s not as smooth as using a tiled WM in X, but it’s still plenty nice for me.
There was a recent discussion about whether file-based APIs were better or worse than specific ones (https://lobste.rs/s/ckqzbn/why_filesystems_have_loose_coupling_your).
I think a lot of the pros and cons of that API choice are illuminated by unveil.
Things can always be implemented either way (layered on top of the generic mechanism, or given their own specific mechanism). But if you then enhance the general mechanism with a new feature, you get to use that for all of the functionality you have layered on top.
(As a side thought - it’s kind of a nice thought experiment: If the only syscalls were open/close/read/write and friends - how would you (a) provide an interface for all other system calls and (b) put them together in a filepath hierarchy so that unveil() grouped things nicely.)
I know very little about either of these topics, but I wonder if it would be interesting to combine OpenBSD’s security syscalls like pledge and unveil (is there a generic name for these? Privilege-based security?) with Plan 9’s extreme everything is a filesystem approach.
The number of syscalls in Plan 9 is so small that pledge(2) wouldn’t make much sense.
And unveil(2) wouldn’t really be necessary, as there are already mount namespaces: a process can unmount paths and then use rfork(2)’s RFNOMNT flag to disallow mounting new resources.
In other words, Plan 9 obviated the need for these approaches. It really was the wave of the future, but it hasn’t reached the shore yet. Yet …
I like the idea, actually (and I subscribe to similar feeds, e.g. Sacha Chua’s emacs news).
My biggest concern is the name: ‘Valuable News weekly update #26’ tells me nothing about your subject matter. Maybe change it, even to something like ‘Vermaden’s News’?
The issue with double dash (--) is one “string safety” problem I’ve meant to address on my blog, but haven’t yet.
I’m not sure I’m following the argument in this post though.
Is the point that the -- convention doesn’t solve it, but the NUL convention of find -print0 and xargs -0 is (surprisingly) sufficient? I didn’t quite see that until writing the two Git Log in HTML posts.

I think there does need to be some kind of lint check or function wrapper for --, like flagging the former but not the latter (somehow):
grep foo $file # oops, $file could be a flag and GNU and OS X grep accepts it
grep -- foo $file # this is more correct
mygrep() { # mygrep only accepts args
grep -E --color -- "$@"
}
mygrep2() { # However, sometimes you do want the user to be able to append flags, so this function is useful too
grep -E --color "$@"
}
mygrep2 -v pattern *.c # invert match
The rule I have for Oil is to avoid “boiling the ocean” – i.e. there can’t be some new protocol that every single command has to obey, because that will never happen. Heterogeneity is a feature.
However there should be some recommended “modern” style, and I haven’t quite figured it out for the flag/arg issue.
I think one place Oil will help is that you can actually write a decent flags parser in it. So maybe the shell can allow for wrappers for grep/ls/dd etc. that actually parse flags and then reserialize them to “canonical form” like:
command @flags -- @args # in Oil syntax
command "${flags[@]}" -- "${args[@]}" # in bash syntax
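The reserialization idea can be sketched in Python (the split heuristic below is deliberately simplistic: anything after the first positional argument, or after an explicit --, is treated as an argument, which a real flags parser would refine):

```python
def canonicalize(argv):
    """Split argv into flags and positional args, then reserialize
    with `--` so no argument can ever be parsed as a flag."""
    flags, args = [], []
    for i, tok in enumerate(argv):
        if tok == "--":                      # explicit separator: the rest are args
            args.extend(argv[i + 1:])
            break
        elif tok.startswith("-") and not args:
            flags.append(tok)                # leading dash before any positional: a flag
        else:
            args.append(tok)                 # everything else is a positional arg
    return flags + ["--"] + args

print(canonicalize(["-E", "--color", "pattern", "-weird-file.c"]))
# ['-E', '--color', '--', 'pattern', '-weird-file.c']
```

The dangerous-looking -weird-file.c lands safely after the --, so the wrapped command can never misread it as an option.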
I addressed quoting and array types in various places, including this post:
Thirteen Incorrect Ways and Two Awkward Ways to Use Arrays
There are also related string safety problems here – is -a an argument or an operator to test ?
I don’t see the need to distinguish between paths and strings.
They’re different types: a path is a string which is or could be the pathname of a file or directory on disk. Since they’re different types, treating them differently yields all the standard benefits of strong typing.
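A minimal illustration in Python, whose standard library already draws this distinction with pathlib:

```python
from pathlib import PurePosixPath

# Strings concatenate; paths join. Conflating the two is a
# classic source of bugs ("dir" + "file" -> "dirfile").
s = "reports" + "2019.txt"
p = PurePosixPath("reports") / "2019.txt"

print(s)  # reports2019.txt  (probably not what was meant)
print(p)  # reports/2019.txt

# And the runtime catches string operations applied to paths:
try:
    PurePosixPath("reports") + "2019.txt"
except TypeError:
    print("paths are not strings")
```

The type distinction turns a silent wrong answer into an immediate error, which is exactly the standard benefit claimed above.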
texinfo(1) and DocBook are excessively complicated, ill-designed, and unmaintained
One might quibble with the complexity & design of texinfo, but I believe Brian Fox & Karl Berry would be surprised to find out that it’s unmaintained: they are, after all, the maintainers, and released the latest version last summer.
No, it’s really not a good language. It’s a pretty terrible language on which a colossal malinvestment of human resources has been squandered, leading to a not-entirely-terrible ecosystem which is just barely usable for real work.
If the effort which had been mis-spent on JavaScript had instead been focused on an actually good language, then the world would be so much better off now.
Not just that, though: JavaScript has poisoned the web. What was once a really nice way to serve resources has become a grotesque, undesigned application development platform whose sole virtue is that it’s been deployed everywhere. Also, it’s used regularly to undermine privacy & security.
Eclipse dying however results in a net loss for developers.
I don’t know about that — it means less competition pulling developers away from emacs, which can only be a good thing.
I believe that there’s a very real chance that a reëvaluation of scientific research — particularly the social sciences, which are so soft as to border on pseudoscience to begin with — in light of this new understanding is very likely to yield unexpected & revolutionary results.
The social sciences are not “so soft as to border on pseudoscience”. Yes, p-hacking and over-reliance on p-values are an issue, but they’re an issue universally: they matter just as much in physics as they do in social science. The idea that social sciences are somehow less rigorous than, say, physics is, I think, fallacious and without merit. Physics is still ultimately rooted in experimental evidence which is statistical in nature. You realize you’re discrediting an entire set of scientific fields with literally no evidence, and likely with very little knowledge of the subject. It’s not like there’s been literally no replication in the social sciences. What an absolutely absurd claim.
I have to agree with @bargap here. Physics is a poor example to hold up against the social sciences. Physics results require a level of significance that is mind boggling. Of course those piddly experimentalists do get things wrong sometimes. But a better comparison is perhaps life sciences. The basic problem is that the system is so complex that it is hard to figure out all the moving parts and make sure you’ve bolted down all but a few.
There is a running joke about “Doctors recommend … “ (take your pick - babies should sleep on their back/belly, this food is bad/good for you, this causes/cures cancer, this causes/does not cause autism …) precisely because we’ve been burned so many times by studies that just did not have enough N, or more insidiously (and forgivably), had a biased N.
Social sciences study PersONS, which are much more complex than protONS or electrONS, but their tooling and standards are, paradoxically, lower, not higher, and society seems to have an expectation of what the right answer should be, allowing social biases to play a disproportionate role in biasing reports in these fields.
If you look in spaces like chemistry or spintronics you’ll find a lot of papers that people end up not being able to reproduce because it turned out that the machine that the writers were using had issues.
In situations where there are only 5 or so machines in the world that can do an experiment, if 1 of them has an issue then it’s not always easy to validate. Especially when the research methodology is “cartesian product of every material with every technique => see if something happens”. And even absent that, people’s interpretations of measurements change over time too!
Agree that people are harder to study, especially in long term stuff. But we don’t do many physics experiments over 20 years either! It’s mainly that time is hard, and preconditions are hard to set up. You can totally do experiments in social sciences over a small time scale correctly.
Instead of comparing the foundations of physics to the edges of social sciences you should be considering the edges of physics, after all it doesn’t make sense to compare things which have a thousand years of replication to ones with tens of years. We’re still not sure that dark matter actually exists. Classical bias risk is pervasive in theoretical physics and is a constant threat.
The analogy is instead saying “Medical science borders on pseudoscience; we should cast doubt on the entire field, including claims such as ‘Arsenic is poisonous’, because some of the newer claims on cancer haven’t yet been replicated”.
To be entirely clear you haven’t been burned by science, you’ve been burned by irresponsible reporting which has confidence from a single study.
This article is also relevant here https://news.harvard.edu/gazette/story/2016/03/study-that-undercut-psych-research-got-it-wrong/
The concession is that there hasn’t been enough replication, but a recent study reported on by Ars (which I don’t have on hand right now) says that the replication rate of social science studies was around 70%, which is, yeah, not great, but it’s hardly pseudoscience.
To be entirely clear you haven’t been burned by science, you’ve been burned by irresponsible reporting which has confidence from a single study.
That’s a classic defense. “I never said that! They blew the press release all out of proportion”. Sometimes it is true.
I know of many papers where inferences and conclusions are not warranted by the data, and have seen this in different fields of study. I infer that this is common human behavior to game a system that rewards visibility and volume.
You have conveniently dodged the main premise of my argument. Sure I’ll concede there are plenty of bad actors, but they aren’t magically condensed in the social sciences.
The social sciences are definitely one of the areas where bad studies are harder to root out because of lower standards of rigor combined with the complexity of the subject and the fact that “truthiness” is easier to determine and therefore forms an extra social pressure not to challenge a particular study if it conforms to current social biases.
I don’t think it’s actually reasonable to say that the newest physics is any more rigorous, any less complex, or any less rife with bias. They both use studies with statistical evidence, so they both have a lot of rigor, but it’s hard to beat replication. They both are very, very complex. They both have lots and lots of human bias, from a bias towards classicalism, conservatism, anti-classicalism, and modernism. That doesn’t mean that either of them is anywhere near pseudoscience. The claim is just uninformed.
GUI development is broken, but JSON isn’t the solution: it’s another broken thing, just broken along a different axis.
An S-expression–based interface would be better, because then there’d be a seamless way to embed behaviour along with the GUI …
Perhaps beside the point, but: the gendered language was a bit distracting. By this point in history, when I read a blog post like this that exclusively uses “he” and “his” for unspecified genders, it feels like the author is making a political point of it (as I say, feels like: I have no insight into this author or his political position), and ends up niggling a little while reading.
when I read a blog post like this that exclusively uses “he” and “his” for unspecified genders, it feels like the author is making a political point of it
I’m the opposite: when I read a post which uses incorrect English, it feels like the author is making a political point of it. In English, the feminine is only used when referring to a specific female; for all other purposes the masculine (or, if you prefer, the ‘general’) is used.
In English, the feminine is only used when referring to a specific female; for all other purposes the masculine (or, if you prefer, the ‘general’) is used.
According to whom? I’m asking this because the singular they goes at least as far back as 1848, where it appeared in William Makepeace Thackeray’s novel Vanity Fair.
It actually goes back further than that — and in fact Shakespeare used it, IIRC! Still, it’s an ugly construction.
It actually goes back further than that — and in fact Shakespeare used it, IIRC! Still, it’s an ugly construction.
Ugly? Again you beg the question: according to whom? For example, the Associated Press has a style guide article which offers the following recommendation:
“They, them, their — In most cases, a plural pronoun should agree in number with the antecedent: The children love the books their uncle gave them. They/them/their is acceptable in limited cases as a singular and-or gender-neutral pronoun, when alternative wording is overly awkward or clumsy. However, rewording usually is possible and always is preferable. Clarity is a top priority; gender-neutral use of a singular they is unfamiliar to many readers. We do not use other gender-neutral pronouns such as xe or ze…”
“Arguments for using they/them as a singular sometimes arise with an indefinite pronoun (anyone, everyone, someone) or unspecified/unknown gender (a person, the victim, the winner)…”
“In stories about people who identify as neither male nor female or ask not to be referred to as he/she/him/her: Use the person’s name in place of a pronoun, or otherwise reword the sentence, whenever possible. If they/them/their use is essential, explain in the text that the person prefers a gender-neutral pronoun. Be sure that the phrasing does not imply more than one person…”
I’ll admit it can be awkward if you’re not used to it, but I don’t buy the premise that singular they is ugly or wrong.
Emacs Lisp has many features of Common Lisp, although it is considerably smaller (and thus easier to master).
Ehn, sorta. Elisp is smaller, but as far as I can tell, the cl library is used an awful lot, mostly because writing Elisp can be such a slog. I love Common Lisp (CL) and have used Emacs for 20 years now, and I’ve never been able to get into writing Elisp, although I’ve been trying more lately.
It’s not because it’s terrible but because the APIs are so vast. Have you ever tried to read through the Elisp manual? You’ll be there for a long time (hint: understanding buffers is the Zen of Elisp, so jump ahead).
Also, Elisp doesn’t seem to encourage breaking things down into functions as much as CL does. A lot of existing code that I’ve looked at has many little expressions that would be better expressed as predicates or one-off functions, but since Elisp has no built-in flet (short of pulling in cl-lib for cl-flet), the language doesn’t encourage such things. As a result, some functions are huge and really hard to follow. One great thing about Emacs is how easy it is to examine the code for things; it’s too bad that code is so opaque a lot of the time.
Yup, rms famously doesn’t like Common Lisp’s complexity, but I think that the histories of how both elisp & Scheme are used in practice demonstrate the wisdom of Common Lisp’s approach. As an example, elisp has no namespacing mechanism, so folks commonly prepend the package name to variables & functions, leading to variables named e.g. my-package-variable & internal variables named e.g. my-package--private-variable; one has to type those long names everywhere, even in the package defining them, or in a package which heavily uses them. In Common Lisp one would have a package my-package, & the variables could be referred to as variable & private-variable within the package, as my-package:variable & my-package::private-variable externally, or as variable inside a package using my-package. Complex? Certainly: I had to write a lot of words there, and maybe the reader doesn’t quite follow. A good thing? Yes: it’s easily learnt, and it pays for itself every single time one writes or uses a package.
I can only imagine how awesome emacs would be today had it switched to Common Lisp back in the early 90s.
AFAIK[1], Emacs doesn’t have a module or package system, which also doesn’t help with breaking things down into smaller pieces.
[1] I could look this up but instead I choose to comment.
You are correct, it does not have a language-defined namespace mechanism. That’s done with function naming conventions. It does have a package system, but that’s for feature distribution.
Yes, sorry about my terminology: I meant Common Lisp “packages” (i.e. modules, namespaces).
(Common Lisp junky here as well.)
I started researching the GitHub APIs that would be relevant to implement something like this a few months ago, but I’m really hesitant to sink a lot of investment into GitHub and its accompanying monopoly in my free time.
I’ve moved a bunch of my personal projects over to GitLab, but they’ve been doing stupid stuff like refusing to render static repository content without whitelisting Javascript, or telling me my two-week-old browser is unsupported because it’s outdated, so … not a lot of motivation to invest in that ecosystem either.
This. I noticed the mandatory JS for rendering nonsense too. I really want to like GitLab, and have tried multiple times to use them as my main, but to me the UX is just inferior to GitHub. The UI is sluggish and feels very bloated.
It’s been a while since I’ve given up on GitLab for the time being, and have been self-hosting Gitea. Now Gitea uses JS too, but also works quite well without it. And it’s nowhere near as slow as GitLab.
but to me the UX is just inferior to GitHub.
Well, GitLab for all its faults doesn’t hijack my Emacs key bindings to do idiotic shit like “bold this thing in markdown” (which was already only two keystrokes to begin with; why are you adding a shortcut for something I never do on ctrl-b that I use all the time?) so I wouldn’t say GitLab has quite sunk to that level yet.
Interesting. That’s a fair point; though GitHub’s editor isn’t the first to do that. I hadn’t noticed it with GitHub mainly because I use Vimium in Firefox, Evil in Emacs, and bspwm; so I rarely use Emacs-style bindings but I agree that could be frustrating.
Does exwm’s simulation keys work around the issue, or does GitHub’s in-browser binding take precedence?
EDIT: There’s also xkeysnail, though it does require running as root.
EDIT2: It seems like running xkeysnail as root may not be necessary if the user has access to input devices. On Arch (or any distro with systemd >= 215) that can be achieved by adding the user to the input group (see here and here).
EDIT3: The Emacs-keybinding extension may be another option, though apparently it only works in macOS. There’s also shortkeys but I haven’t tried either one.
If you’re editing text, Ctrl-B for bold (or Ctrl-F if you’re in Germany) should be expected. Editing text means Word keybindings, not Emacs bindings.
This also means Ctrl-I for italic (or Ctrl-K in Germany) and Ctrl-U for underlined (this one is actually the same).
I strongly disagree, at least on a Macintosh, where all native text entry widgets obey the Emacs keybindings. Web garbage that arrogates system functionality to itself, hijacking my chosen platform experience for a poor copy of some other system is noxious, and broken.
I just tried in the macOS Notes app and ctrl+b makes the text bold. The Pages app does the same, ctrl+b makes the text bold. These are two native text entry applications developed and provided by Apple themselves.
That’s the problem of your system then – the browser explicitly exposes Ctrl, Alt, Meta. If your keyboard does not offer these, either your browser, OS, or keyboard has to map between these and the actual keys.
Users on all other systems (aka 99.5% of users) expect Ctrl-B (or Ctrl-F) to create bold text.
No, users on Macs expect their modifier keys to respect platform convention – Emacs keybindings for movement, cmd for meta. To assume otherwise is disrespectful.
So how do you suggest doing that without using heuristics on the user agent?
I’d be interested in your implementation of a JS function that returns the correct set of modifiers and keys to use for the bold shortcut. And which works reliably.
Currently, the browser doesn’t expose this, so everyone gets the most commonly used solution.
Currently, the browser doesn’t expose this, so everyone gets the most commonly used solution.
????
Note: On Macintosh keyboards, [.metaKey] is the ⌘ Command key.
const MOD_KEY_FIELD = navigator.platform.startsWith('Mac') ? 'metaKey' : 'ctrlKey';

// lazy
if (keyEvent.ctrlKey && ...

// bare minimum for any self-respecting developer
if (keyEvent[MOD_KEY_FIELD] && ...
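To make the “bare minimum” version concrete, here is a minimal, testable sketch. The platform string is passed in as a parameter purely so it can run outside a browser (real code would read navigator.platform or navigator.userAgentData.platform), and modKeyField / isBoldShortcut are hypothetical helper names:

```javascript
// Hypothetical helpers sketching the platform-aware modifier check above.
// The platform string is injected so this can run outside a browser.
function modKeyField(platform) {
  return platform.startsWith('Mac') ? 'metaKey' : 'ctrlKey';
}

// keyEvent can be any object with key/ctrlKey/metaKey fields,
// e.g. a DOM KeyboardEvent.
function isBoldShortcut(keyEvent, platform) {
  return keyEvent.key === 'b' && Boolean(keyEvent[modKeyField(platform)]);
}
```

On a Mac (a platform string like 'MacIntel') this treats ⌘B as the bold shortcut and leaves Ctrl-B free for cursor movement; elsewhere Ctrl-B triggers bold.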
What I want to know is how you’re commenting from 1997. Just hang tight, in a couple years two nerds are gonna found a company called Google and make it a lot easier to find information on the internet.
Using the proper modifier depending on platform? The browser should expose “application-level modifier” say, for bold, and that would be ^B on X11/Windows and Super-B for Mac.
The browser isn’t exposing this, though. The best chance is sniffing the user agent and then using heuristics on that, but that breaks easily as well.
100 - 99.5 != 12.8, your assumption is off by a factor of 25.
Ctrl+b on my Mac goes back a character in both macOS Notes and Pages, as it does everywhere else. Cmd+b bolds text (as it also does everywhere else).
In general, Macs don’t use the Ctrl key as a modifier too often (although you can change that if you want). They usually leave the readline keybindings free for text fields. This seems to be by design.
The standard key bindings are specified in
/System/Library/Frameworks/AppKit.framework/Resources/StandardKeyBinding.dict. These standard bindings include a large number of Emacs-compatible control key bindings…
Editing text means Word keybindings, not Emacs bindings.
Those of us who use emacs to edit text expect editing text to imply emacs keybindings.
Some of us expect them everywhere, even.
If it was a rich text WYSIWYG entry, I’d be 100% agreed with you. (I would also be annoyed, but for different reasons.)
But this is a markdown input box. The entire point of markdown is to support formatted text which is entered as plain text.
It’d be great if we had a language server protocol extension for code review + a gerrit backend. I started taking a look at this a few months ago (I work mostly in Gerrit now) but didn’t have the bandwidth for actually prototyping it. It seems like an obviously good idea, though having to use git hampers some of the possibilities.
Nothing hugely shocking here. If you have a decentralized system without end-to-end crypto, then servers can read all your stuff; it’s the same with email and Gmail scanning all of your emails.
Which is why we shouldn’t build decentralised (or centralised) systems without end-to-end crypto any longer.
There’s no reason why something like Mastodon couldn’t have anonymous (unsigned, unencrypted), public (signed, unencrypted), group (signed, encrypted to a group — ‘friends’ is merely one group), and unlisted (signed, encrypted) posts. Yes, there are some key-management challenges (particularly around re-encryption as one adds & deletes friends), but they are not insurmountable.
I strongly believe that writing systems without cryptographically-strong privacy in 2018 is an error.
Secure Scuttlebutt is a pretty good example of this, you have public messages and private messages. If a message is private then it is encrypted and only people mentioned in the post can decrypt the message. But ssb does have serious key management issues.
What are the key management issues? I was just coming here to mention ssb, but I’m very new to it and was unaware of this. Can you share more?
Well, off the top of my head, key management issues arise whenever you try to use it across multiple machines. You could manually copy the key from machine to machine, but if you ever use two machines simultaneously it creates a sort of fork in your identity on the network, which causes plenty of trouble.
There are a few solutions under research, most notably a master/slave system, but last time I checked it was still very much in the design phase.
This is easily said, but both end-to-end crypto and key management add a huge amount of complexity to the system. If you need the privacy that e2e can provide, this is of course worth it, but it’s not at all clear that every service needs this. The fediverse is meant for public and targeted messages, not private ones. For those use cases, people can easily use e2e-encrypted systems like Matrix or gpg.
Hear hear! I think everyone has this vision of a perfect crypt-opia where we can conduct our social networking safe from the prying eyes of government or BigCorps, but the realities of making this happen are as you say not at all trivial.
It’s a great goal, and one I think people should continue working towards, but the logistics are hard.
Privacy and social media are kind of at odds with each other anyway. People want to share their posts with the world but also not have that data used against them. If you didn’t want everyone to know then you shouldn’t be sharing it.
I don’t know if I agree. When I publish toots on Mastodon, all they know is that feoh@amicable.feoh.org said blah blah blah.
When I use Facebook, they are collecting a SUPER rich trove of demographic data on me, cross referencing it with other commercial sources (my employer for one :) and linking it in with my “social graph” where my friends data is taken into account. It’s the difference between a linked list of nodes with 2 or 3 fields and a full on acyclic graph with zillions of nodes and zillions more connections.
all they know is that feoh@amicable.feoh.org said blah blah blah.
Anyone can also see who you are following, who you reply to, whose posts you like, what kind of content you like, and then draw a graph based on this data. The main thing you lose is app-based tracking that sees more than what you post, but a huge amount of publicly visible data can still be used to track you and build a profile on you.
By ‘anyone’ you mean ‘any Fediverse user’ right? Also there’s a huge difference between having to scrape and correlate vast gobs of data yourself and having it handed to you for analysis on a silver platter by the platform.
Anyway, this is silly. I agree that social media is at odds with privacy to an extent, but some platforms are factually, provably better than others.
I totally disagree. I think there is a place in the world for social network protected by crypto, and also for those that aren’t.
Let’s not let the perfect be the enemy of the good.
How would you do this while still allowing mastodon to be used from a web interface? If it’s implemented using javascript you’re in the exact same situation of having to trust the instance administrator.
How would you do this while still allowing mastodon to be used from a web interface?
I’d either use a native client, or a web client running on localhost. It’s the only way to assure privacy & security.
If you store things on other people’s servers, they are on other people’s servers. I don’t see how this statement is a resignation. If you want your posts to be private in the fediverse, encrypt them. If you want your emails, posts, etc. to be private, encrypt them.
I was not talking about @mercer’s article: as you said, it can be pretty useful for novices.
What scares me is that we could design something better, but there is not much research about the topic.
No one really tries to challenge the status quo with original engineering solutions; it’s a sort of resignation.
At best, people are waiting for mathematicians to create a cheap fully homomorphic encryption scheme.
But I’m afraid it’s not laziness, but lack of vision, interest, and hope.
Vision, interest, and hope are not valid inputs to compilers.
I think a reasonable compromise in new system design (taken in some side projects of mine) is to assume that the channels of communication are compromised by hostile actors, that storage exists in the datacenters of hostile actors who are actively trying to munge through the contents, and that mere possession of encrypted material is of significant interest to the hostile actors.
You end up with a sort of “I am Spartacus” setup for communication systems under those constraints, where everybody by definition has open-access to all communications but all communications are also encrypted such that if you have a key you can read it and otherwise you are just providing storage–and because everybody has copies of the content, the metadata of how it moves through the system is not super interesting. Of course, the flipside is that participation in such a system is almost always a red flag.
Well… vision alone gave UNIX pipelines. And stacks. And timesharing systems… ;-)
Interest gave us Linux. And hope gave us GNU.
But your system description looks interesting… can you share links to some free software designed that way?
If you can’t read the code on the server, and you can’t, then you can’t know it was actually encrypted. The only thing you can do is end to end encryption, which you can already do on top of all of these existing services. What we need is education of the tools that already exist and also improving ease of use. The moment you put the tech on the server you’ve already lost. Otherwise the tech you’re describing already exists.
I agree with you about education. I deeply agree.
But with fully homomorphic encryption you can know it’s encrypted even without seeing the code.
I’m not entirely sure that no other mitigation is possible: my insight is that too few have tried to challenge the http/dns/browser/javascript stack to get a chance to find a solution.
My bet is that we just need to open our minds.
Still, you are right: there’s no cloud, just another person’s computer… ;-)
Why not just use Helm?
I don’t believe I can disagree more. Tools matter, because some tools are better than others on some axis or axes. ‘Tools don’t really matter’ is the only argument one can make when one can’t argue ‘my tools are better.’ I don’t think any particular tool is better on all axes, but certainly some aren’t the best on any axis (e.g. pico).
And I don’t think it’s inappropriate for someone who recognises a tool’s actual excellence in a world which incorrectly fails to do so to make that recognition part of his identity. It’s part of how an oppressed minority perpetuates its existence.
This is a good point, but it’s also an argument against strongly identifying with your tools. If tomorrow you encounter a new tool that does something better than your current tool, it should be as easy as possible for you to abandon your current tools and take up the new ones, and that’s harder to do if being a user of a particular tool is part of your self-identity.
If you’ve ever been an emacs-user on a team of vim-users, you know what I mean. Or a Linux-user on a team of Mac-users…
No doubt the other way ’round is the same, too …
Argh, I want to upvote ‘The Commons Clause will destroy open source’ without upvoting Redis Labs’s action.
Upvote for the discussion. “Company Makes Business Decision” is rarely on-topic for Lobsters and often goes off the Rails; I’ve upvoted because I appreciate that we’re not rehashing well-worn licensing arguments here, though this announcement was poorly written.
This is the second time Lobsters has censored my articles by merging them into tangentially related discussions.
Nobody is censoring you. If anything, your visibility has been boosted by being on front page with another high-visibility thread. I didn’t even know about your article until I saw it here. To avoid clutter, the admins sometimes combine posts talking about same event or topic into a group. I can see someone not liking that decision. I’m not for or against it right now.
You weren’t censored, though. A mechanism bringing your writing to my attention is opposite of censorship.
I don’t think you have this exactly right. What happens is someone submits it as an independent post, which is freely ranked on its own merits. Then a moderator merges it with a post which is already growing stale, as a significant fraction of the site has already looked at it and has little reason to return, except to consider replies to their comments - and even then they have to notice more articles have been added. It also removes a dedicated space for discussing that particular article, which in this case is important because the second article is more about Commons than it is about Redis, making the discussions difficult to live in parallel.
The original, independent post was censored by merging it into here. On the previous occasion the new post was merged into a post which was then several days old, where I presume approximately zero people saw it. This is censorship, and a clear cut case of it too. I don’t consider myself entitled to exposure but it’s censorship all the same, and part of the reason I distanced myself from Lobsters except to participate where others have posted my content.
If your story disappeared, that scenario would be censorship since it was manipulated in a way that would nearly guarantee nobody would see it. The new one isn’t censorship because censorship is about people not seeing your stuff. The combined submission is 20+ votes on front page of a site with more readers than most blogs. Your views going up instead of down is opposite of censorship at least in general definition and effect it has.
“Taking measures which prevent others from seeing your stuff” is literally censorship. I don’t want to argue semantics with you any longer. All of the information is on the table in the comments here, let people read them and settle on an opinion themselves.
“Taking measures which prevent others from seeing your stuff is literally censorship”
I saw your stuff through Lobsters’ measures. So, it’s not censorship by your definition.
“let people read them and settle on an opinion themselves”
By all means.
I really appreciate your point of view normally, but in this case I think you’re incorrect: it would be nice to have the community’s take on @SirCmpwn’s article itself (which is well worth reading) rather than have the comments blended in with those on Redis Labs.
Upvoting doesn’t necessarily have to be approval of the content of the post. (though it usually should be)