I love the KISS launcher. Do you know if Odyssey ever resolved being able to play opus files? My quick read through their repo suggests not.
Just wanted to drop a big, heartfelt thank you to the Void crew. If you’re reading this: you folks make the modern linux experience pleasurable in the face of ever-encroaching bloat and complexity.
Not just bloat and complexity, but also general wrong-headedness.
Also, Void is interesting and innovative - involving a new package manager, using LibreSSL (rather than OpenSSL) by default and having a musl libc variant available, amongst other things.
This specific study is Nightly only. In general I’d expect and hope that similar studies would only be run against Nightly, for various reasons. However, as far as I can tell the current Mozilla privacy policy makes no distinction between Nightly and Beta, and therefore Mozilla could consider themselves to have permission to run these opt-out studies on Beta, just as they have with Nightly. Mozilla may clarify this in the future.
On a pragmatic level, doing something like this with Beta would probably be much more visible and produce many more annoyed people and bad publicity, so I think Mozilla probably has good reasons to avoid it unless they have a study that they consider really important. Beta also has enough visibility that news about such an opt-out study would probably be widely distributed and well known, so you’d hear of it enough in advance to do something.
(I’m the author of the article.)
Thanks for this. I’ve been running Nightly on my mobile, and I’ve switched to Beta. (Newer Firefox versions than Stable on Android seem quite a bit faster; that’s why I care.)
There’s lots of fun stuff in some of the shaving forums on both safety-razor blades and straight/cutthroat razors sometimes also involving electron microscopes.
Good bittorrent daemons are hard to find. rTorrent is common but it tacks on this difficult curses interface you have to deal with. Transmission is okay but it tends to get buggy and break down at scale. Deluge is buggy as hell. btpd is too bare bones, a lot of important features are missing. All of these options also have really poor RPC protocols that use a lot of network, are annoying to write clients for, and don’t scale.
synapse focuses only on being a good daemon and delivers only that. The UIs are offloaded to separate projects. If that doesn’t seem like much, that’s because it’s not - but surprisingly this is not easy to find. We made it because there were no good options.
Transmission is okay but it tends to get buggy and break down at scale.
I must not have used it at scale, because it always seems to work for me. What sort of failure modes do you observe? Corrupted downloads? Halted downloads with peers available? Other?
Transmission crawls to a halt if you have several hundred torrents. The RPC protocol also becomes unwieldy, because it polls for updates and has to resend large amounts of data on every refresh. Synapse is more performant with large torrents or a large number of torrents, and its RPC is push based, with subscriptions and differential updates.
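To make the polling vs. push distinction concrete, here is a toy sketch. The message shapes and function names are made up for illustration and are not synapse’s actual RPC protocol: a polling client refetches the whole state on every refresh, while a push client receives only the fields that changed.

```python
# Toy model of two RPC styles. All names and message shapes here are
# hypothetical, not synapse's real wire format.
state = {"t1": {"progress": 0.42, "peers": 8},
         "t2": {"progress": 1.00, "peers": 3}}

def poll():
    """Polling: the full state is resent on every refresh."""
    return {tid: dict(fields) for tid, fields in state.items()}

subscribers = []

def subscribe(callback):
    """Push: a client registers once and then receives diffs."""
    subscribers.append(callback)

def update(torrent, field, value):
    """Apply a change and push only the differential update."""
    state[torrent][field] = value
    for cb in subscribers:
        cb({torrent: {field: value}})

diffs = []
subscribe(diffs.append)
update("t1", "progress", 0.43)
print(diffs)  # [{'t1': {'progress': 0.43}}]
```

The push client transfers one small diff per change, while each `poll()` call retransmits every torrent’s state whether it changed or not; with several hundred torrents that difference dominates the RPC traffic.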
Has synapse been tested at scale then?
Everything I’ve tried has been horrible at scale except rTorrent, and most of the non-rTorrent choices can be pretty horrible even when at a modest amount of torrents (qBittorrent at a certain point ‘invisibly’ adds torrents etc.). With rutorrent as the frontend, I’ve been pretty happy with rTorrent.
Synapse looks interesting, though I’m not terribly enthusiastic about the node.js webclient (the node.js Flood client uses significantly more resources on my system than the PHP ruTorrent does).
Receptor is 100% frontend, pure static content. Node is just used for compiling and packaging it. You don’t even have to install it yourself - a hosted version is available at web.synapse-bt.org.
I’ve done load testing (though not particularly realistic) and it appears that both synapse and receptor perform reasonably at the order of 1000 torrents. One of the goals of the project is to perform well at scale and there’s been a fair amount of ongoing work to achieve that.
[Comment removed by author]
Funnily enough, I was just reading this article today: https://www.currentaffairs.org/2016/07/you-should-be-terrified-that-people-who-like-hamilton-run-our-country
I’ve been interested in Guix(SD), which is modeled on Nix but Scheme based. I haven’t had the time to properly dig into it, though.
I would love to be able to use something similar (org-mode instead of markdown) but so far I’ve never gotten satisfactory results. To me the ideal case would be that I am able to easily export the source document into multiple formats so that I can distribute it as a pdf and html document.
Where this fails most often is the interoperability of external tools, and to some extent my own laziness. A framework for citing references exists in org-mode (and I believe in pandoc through citeproc), but it is cumbersome to set up. Another issue is visualization: since the fonts are most often different, I would need to generate multiple versions of the same plot, which becomes even more work if I want to use TikZ in LaTeX and, for example, matplotlib for web publishing.
While the Tufte layout is great, what should I do with overlapping margin notes? They need to be manually corrected in LaTeX, at least as far as I know.
Another thing: how can I generate a glossary? For LaTeX the glossaries package exists, but you cannot use it straightforwardly in Markdown.
While this probably reads as very negative, I would love it if something like this existed! I just do not see that happening so far. Yet another point would be getting other people on board for collaboration.
My experience converting Markdown to LaTeX with Pandoc was not great. Pandoc is also a bit complicated to extend.
What I did was use a Markdown parser(1) that produces an AST(2), plug into the parsing to extend the Markdown syntax (to support, for instance, the glossary use case(3)), then write a LaTeX stringifier(4) for this AST. (In this Markdown AST -> LaTeX stringifier I only support standard Markdown syntax; the abbr plugin I wrote gets stringified through yet another plugins package(5) I wrote. It’s straightforward: 6.) Then I simply put the LaTeX doc into a LaTeX template using a custom class; in the template you’d have your tableofcontents, glossaries, etc.
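As a minimal sketch of that approach (the node shapes and the `*[KEY]: description` abbr syntax are my own simplification for illustration, not the actual packages referenced above): parse an abbreviation definition into an AST node, then stringify it as a glossaries-package entry.

```python
import re

# Hypothetical mini-pipeline: Markdown-ish source -> AST -> LaTeX.
ABBR = re.compile(r"^\*\[(?P<key>[^\]]+)\]:\s*(?P<desc>.+)$")

def parse(lines):
    """Build a tiny AST: 'abbr' nodes for definitions, 'para' for the rest."""
    ast = []
    for line in lines:
        m = ABBR.match(line)
        if m:
            ast.append({"type": "abbr", "key": m["key"], "desc": m["desc"]})
        elif line.strip():
            ast.append({"type": "para", "text": line})
    return ast

def to_latex(ast):
    """Stringify the AST; abbr nodes become glossaries-package entries."""
    out = []
    for node in ast:
        if node["type"] == "abbr":
            out.append(r"\newacronym{%s}{%s}{%s}"
                       % (node["key"].lower(), node["key"], node["desc"]))
        else:
            out.append(node["text"])
    return "\n".join(out)

doc = ["*[AST]: Abstract Syntax Tree", "Parsers build an AST first."]
print(to_latex(parse(doc)))
```

The emitted `\newacronym` lines can then be collected into the preamble of the LaTeX template, next to `\makeglossaries` and `\printglossaries`.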
I had a bunch of lecture slides written up in org that I was dumping to LaTeX, which was always a bit fragile, and then a change in org-mode broke everything. In the end I wasted more time fixing everything than if I’d just written LaTeX to start with.
I think the best thing is to leverage smart editing (e.g. AUCTeX) as much as possible, and not deal with trying to convert from something else into LaTeX.
In my experience, with anything of significant complexity, this sort of approach ends up being more work than straight LaTeX. On top of that, the conversion tools change too much between versions to guarantee longevity.
Having worked on a 1,200 page book I am going to concur. I started out using markdown+Pandoc and it became a time sink pretty quickly.
This seems really cool. I’d love to have email more under my own control. I also need 100% uptime for email though, so it’s hard to contemplate moving from some large hosted service like Gmail.
If email is that important to you (100% uptime requirement), then what’s your backup plan for a situation where Google locks your account for whatever reason?
Yeah, that’s true. I mean I do have copies of all my email locally, so at least I wouldn’t lose access to old email, but it doesn’t help for new email in that eventuality.
Email does have the nifty feature that (legit) mail servers will keep retrying SMTP connections to you if you’re down for a bit, so you don’t really need 100% uptime.
Source: ran a mail server for my business for years on a single EC2 instance; sometimes it went down, but it was never a real problem.
True. I rely on email enough that I’m wary of changing a (more or less) working system. But I could always transition piece by piece.
If you need 100% delivery, then you can just list multiple MX records. If your primary MX goes down (ISP outage, whatever), then your mail will just get delivered to the backup. My DNS registrar / provider offers backup MX service, and I have them configured to just forward everything to gmail. So when my self hosted email is unavailable, email starts showing up via gmail until the primary MX is back online. Provides peace of mind when the power goes out or my ISP has outages, or we’re moving house and everything is torn apart.
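For illustration, the setup might look like this in a zone file (all names here are placeholders, not my actual records); senders try the record with the lowest preference value first and fall back to the higher one only when the primary is unreachable:

```
; Lower MX preference value = tried first.
example.com.   3600  IN  MX  10 mail.example.com.           ; self-hosted primary
example.com.   3600  IN  MX  20 backupmx.provider.example.  ; registrar's backup, forwards to gmail
```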
Note that email resending works: if your server is unreachable, the sending mail server will actually try the secondary MX server, and if both are down, it will retry half an hour later, then a few more times up to 24 hours later (48 hours if you are lucky). The sender will usually receive a notification if the initial attempts fail (and a second one when the sending server gives up).
On the other hand, if your GMail spam filter randomly decides without a good reason that a reply to your email is too dangerous even to put into the spam folder, neither you nor the sender will be notified.
And I have had that issue with GMail, both as a sender and a receiver, of mail inexplicably going missing. Not frequently, but it occurs.
Linux didn’t lose its way. It always suffered from NIH syndrome.
Right, but this does seem to compound over time, and thus gets worse.
Also, NIH is more prevalent in certain Linux developers/communities than others. systemd adoption/rejection is, I think, at least somewhat representative of this issue: one of the issues with systemd is that it is not very portable, requiring glibc (rather like proprietary software…), and the integration of, for instance, GNOME with certain subcomponents of systemd creates a dependency of GNOME on systemd. That can currently be worked around, but it creates issues both for any port to the BSDs and for non-systemd Linux distros and non-glibc distros (Alpine, some Void).
dependency of GNOME on systemd
Yeah, this is really really weird. systemd publishes D-Bus interfaces. It’s just D-Bus, you should be able to use libdbus. But for some reason they created libsystemd which contains a new D-Bus client (and some other little things). And it’s installed as a part of systemd.
So far there are workarounds for these sorts of things. But I worry that they will get harder and harder, and that a distinct systemd/Linux platform will coalesce. I prefer as much interoperability and interchangeability between components as possible.
Why? I use Vim for almost everything. I wish I didn’t have to say almost. My usual workflow is to open Vim, write, copy the text out of my current buffer and paste it into whatever application I was just using. vim-anywhere attempts to automate this process as much as possible, reducing the friction of using Vim to do more than just edit code.
I don’t quite understand the rationale behind this. Why should one prefer vi-editing when, for example, writing prose? It would seem like all you’d get would be typing i before you start writing, and pressing the escape button when you’re done, plus maybe a more preferable color theme?
Personally, as a guy who generally uses Emacs (and for that reason structurally can’t relate to this issue ^^), I see vi being nice when working on code-like or in some sense structured data, which might or might not have some concept of words and paragraphs: configuration files, logs, scripts, etc., things you want to manipulate easily and quickly, on a regular basis. (Maybe that’s the reason I don’t use Vi(m): I see the editor as a kind of “sword”, with which you quickly strike once and change whatever you need, instead of having it open for extended periods of time and fully living within it, like Emacs. This is also why I don’t like extending Vim, since I want it to stay clean and fast.)
But back to this project: it seems to me that when I’m writing stuff outside of my editor or a shell, it isn’t the kind of text vi keybindings are good for. Maybe the author has a different experience, and if that’s the case, I’d be very interested in hearing what “reducing the friction of using Vim to do more than just edit code” is supposed to mean.
I don’t quite understand the rationale behind this. Why should one prefer vi-editing when, for example, writing prose? It would seem like all you’d get would be typing i before you start writing, and pressing the escape button when you’re done, plus maybe a more preferable color theme?
- `dis` deletes a sentence
- `''` reverses your last jump
- `c` is great for editing and revisions

Most of these tricks seem more like hypothetical advantages that look great in a list than real justifications. How often does one reverse their last jump? Or move the cursor through a text sentence by sentence? Most of the other tricks can be more or less easily emulated by a combination of the shift/control and arrow keys (or, on the Mac and in some GTK versions, the built-in Emacs keybindings). The “normal” text entry interfaces offered by operating systems are not to be underestimated, after all.
So unless one says “I don’t want to learn any other keybindings that vi’s”, one could understand why people would use this, but it still doesn’t appear to be a good reason to me.
How often does one reverse their last jump?
I go back to previous editing positions all the time in Vim, using g;
Some other Vim features I find useful for prose:
I use all of these tricks regularly when editing prose. And these were just the ones I immediately thought of when reading your thing. There are plenty of other commands I use all the time. None of them may be strictly necessary but it all adds up to a big quality of life improvement.
I don’t quite understand the rationale behind this. Why should one prefer vi-editing when, for example, writing prose? ….
I’m not a vi-user, as you might guess, but I can say that a proper text editor is a real boon for writing any sort of text, whether code or light fiction. Word processors are such hostile environments for real production of anything, in my experience.
You say you generally use Emacs - don’t you prefer it for non-code things too? I assume vi(m)-people must feel similarly about their paradigm.
(now deprecated) btrfs
Link to deprecation notice? I was under the impression that it was still under active development.
I assume @varjag is referring to this Red Hat doc, which states:
Btrfs has been deprecated. The Btrfs file system has been in Technology Preview state since the initial release of Red Hat Enterprise Linux 6. Red Hat will not be moving Btrfs to a fully supported feature and it will be removed in a future major release of Red Hat Enterprise Linux. The Btrfs file system did receive numerous updates from the upstream in Red Hat Enterprise Linux 7.4 and will remain available in the Red Hat Enterprise Linux 7 series. However, this is the last planned update to this feature.
SuSE still uses btrfs by default, AFAIR, so it’s not deprecated as such, but it also doesn’t have a lot to recommend it…
There is bcachefs, still in development; but even if it is successful, I would assume it would be at least a decade before it would be a real competitor for even present-day ZFS (which presumably would not stand still).
The author seems to be thinking that the Smalltalk programme had some merits that the current PLT programme doesn’t, which I find unthinkable. So it delivered as far as I’m concerned.
Most tantalizing question was:
…when and why did we start calling programming languages “languages”?
That seems neither unthinkable nor unanswerable. I mean, I don’t know the answer off hand, but there’s a finite number of papers one can read to find out.
Do you not think computer languages are formal languages? Do I need a citation if I say English is a natural language?
Ah, that link/term is helpful. Thanks!
I’m sorry I’m annoying you.
Do I need a citation if I say English is a natural language?
No, but the whole point under discussion is why our terminology connects formal languages with natural languages. When did the term “formal language” come to be? The history section in your Wikipedia link above mentions what the term can be applied to, but not when the term was coined. Was it coined by Chomsky at the dawn of the field of mathematical linguistics? That’s not before computers, in which case the causality isn’t quite as clear and obvious as you make it sound.
I’ll stop responding now, assuming you don’t find this as interesting as I do.
Edit: wait, clicking out from your profile I learn that you are in fact a linguist! In which case I take it back, I’m curious to hear what you know about the history of Mathematical Linguistics.
Was it coined by Chomsky at the dawn of the field of mathematical linguistics?
It’s at least older than that. The term “formal language theory” in the sense of regular languages, context-free grammars etc. does date to Chomsky. But the idea that one might want to invent a kind of “formal” language for expressing propositions that’s more precise than natural languages is older. One important figure making that argument was Gottlob Frege, who was also an early user of the term (I’m not sure if he actually coined it). He wrote an 1879 book entitled Begriffsschrift, eine der arithmetischen nachgebildete Formelsprache des reinen Denkens, which you could translate as something like, Concept-Script, a formal language modeled on that of arithmetic, for pure thought.
In general they’re all languages because they have a syntax (certain combinations are ‘ungrammatical’ or produce interpreter/compiler errors) and a (combinatorial) semantics (the basic symbols have meaning and there are rules for deriving the meaning of [syntactic] combinations of symbols).
Formal languages go back at least to Frege’s Begriffsschrift of 1879, which isn’t before Babbage described the Analytical Engine (1837), but is certainly before digital computers. And there are precursors like Boole’s logic; Leibniz also worked on something of the same sort, and there are yet earlier things like John Wilkins’ “philosophical language” and other notions of a similar kind.
For modern linguistic work on semantics, the work of Richard Montague is perhaps the most important, and there are connections to computer science from very early on - Montague employs Church’s lambda calculus (from the 1930s) which also underlies Lisp.
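As a small illustration of that lineage (the encoding below is my own toy example, not Montague’s actual notation): Church-style lambdas survive almost unchanged in Lisp and in Python’s `lambda`, and Montague-style semantics composes word meanings as curried functions, so sentence meaning falls out of function application.

```python
# Toy compositional semantics in the Montague spirit: each word's
# meaning is a function; applying them composes the sentence meaning.
loves = lambda obj: lambda subj: f"loves({subj}, {obj})"
every = lambda noun: lambda pred: f"forall x. {noun}(x) -> {pred('x')}"

# "every linguist loves logic"
sentence = every("linguist")(loves("logic"))
print(sentence)  # forall x. linguist(x) -> loves(x, logic)
```

This is exactly the combinatorial semantics mentioned above: basic symbols have meanings, and there are rules (here, plain function application) for deriving the meaning of their combinations.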
I’ve fairly recently switched from using “regular” Android (albeit usually with a custom ROM) to LineageOS without Google Play Services (with microg instead so most things are still accessible if I want them). I turn off notifications for most everything other than messages from my wife. So I’ve almost got a device which serves me rather than the other way round: essentially just a pocket sized computer. I’m quite happy with this setup and don’t feel any more compulsion to check it than I do any of my other computers.
i remember mr. poettering saying that bsds aren’t relevant anymore in 2011: https://bsd.slashdot.org/story/11/07/16/0020243/lennart-poettering-bsd-isnt-relevant-anymore
guess they are still here.
“Lennart explains that he thinks BSD support is holding back a lot of Free Software development”
I can think of something else which is holding back a lot of Free Software development.
Poettering’s approach to software development seems to make it clear that he doesn’t see any value in the continued existence of the BSDs. I think they are an important part of the larger open *Nix world/ecosystem, and that Linux benefits from their existence so long as there remains some degree of compatibility. I will say that I think the BSDs’ use of a permissive rather than reciprocal licence has been bad for them in the long run.
I don’t think it’s about the *Nix world/ecosystem, or that Poettering just doesn’t care about the BSDs. His attitude seems to be more that people and distros not wanting to buy in on systemd and/or PulseAudio, or in general his software, his designs, or approaches that aren’t compatible with his, are irrelevant. I think the incorrect statements he made, which uselessd disproved and OpenRC disproved a lot of as well, made that clear.
Now people have different opinions about systemd, but in my experience projects that ignore the rest of the world tend to turn out badly on multiple levels. Other than that, portability often (not always) is an indicator of code quality as well.
But I’m going a bit off topic. What I want to say is that even though the BSDs are mentioned, the statement also targets every distribution not relying on systemd. It’s just that most of them aren’t exactly “mainstream”, which is why I think they are ignored and not mentioned.
Two great tastes, now together?