Maybe that should be merged with https://lobste.rs/s/tsfnqd/abc_conjecture_has_not_been_proved
See e.g. https://cloud.google.com/beyondcorp. It’s a model that is suitable only if you have good operational tooling (to keep all your access proxies up to date).
Please use media.ccc.de instead of YouTube: https://media.ccc.de/v/34c3-9196-may_contain_dtraces_of_freebsd
Can you confirm that Firefox on Windows does not exhibit that problem? (I don’t know how I could check that myself.)
Not without digging a bit further.
What I can confirm is that Firefox doesn’t allow new windows (popups and pop-unders) below a minimum size of 100 × 100 px.
Why does it allow them at all? Even if you want JS to open windows (I don’t), it seems like a mistake not to force them into a background tab, let alone to allow JS to control placement.
I’d like to see popups disabled entirely by default.
Firefox does block them by default. Popups can be useful in the workflow of some web apps, and Firefox allows users to whitelist them by domain.
From that page:
Is the pop-up shown after a mouse click or a key press?
And
Certain events, such as clicking or pressing a key, can spawn pop-ups regardless of if the pop-up blocker is on. This is intentional, so that Firefox doesn’t block pop-ups that websites need to work.
Sketchy websites game that. Just disable popups entirely. Delete the code, and be done with it.
You’re breaking my workflow! Seriously though, I actually use a popup window to present slides driven by the main window.
People are already trained to answer permission requests ‘this site wants to send you notifications allow/deny’. Isn’t it a good time to block opening new windows/tabs/popups via JS by default and prompt the user for a decision?
The sad truth is that people are trained to click “Yes/Allow” for their thing to work. But yes, the popup blocker should disallow most new windows, unless provoked by a true user gesture (modulo bugs, of course).
The sad truth is that people are trained to click “Yes/Allow” for their thing to work.
Nothing sad about that. What you actually mean is that a majority of people do that, but that doesn’t justify not giving the minority a choice.
Is preact a viable alternative from a technical standpoint?
Preact is an excellent library. It does most of what React does for a fraction of the weight, and with a nicer license too.
I wonder whether the two major open source file systems/object stores, Ceph and Gluster, can offer an API similar to the object stores described in the article.
I wouldn’t say this is a replacement for ‘ls’ at all, because it doesn’t meet any of the POSIX requirements for ‘ls’.
http://pubs.opengroup.org/onlinepubs/009695399/utilities/ls.html
I would say this is an alternative way to list directory contents on some UNIX-y platforms, and it’s interesting to see such work done in Rust.
I had a similar mental response to the FAQ on Windows support: “Why would you want something which doesn’t play nice with the rest of the PowerShell ecosystem?”
In both Unix and Windows, we use ls/gci for two different reasons: Standalone and composed with other programs/cmdlets. As a developer of a “replacement,” it’s good to think about both.
ls is one of those tools I’ve never used in a script. Shell globbing or find are easier and more powerful, respectively; ls occupies a middle ground which is ideal for interactive use but seldom good for scripting.
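To make the glob/find point concrete, here is a sketch (the `*.log` pattern and paths are made-up examples, not from the thread):

```shell
# Iterate over files with a glob instead of parsing ls output;
# this stays correct for names containing spaces or other odd characters.
for f in ./*.log; do
    [ -e "$f" ] || continue   # skip the literal pattern when nothing matched
    printf 'found: %s\n' "$f"
done

# find covers the recursive/filtered cases that ls can't:
# regular files ending in .log, modified within the last 7 days.
find . -type f -name '*.log' -mtime -7
```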
While you aren’t wrong, you’d be surprised how many people reflexively reach for ls in shell scripts.
I’ve seen so much like: ls -l | awk '{print $3}' and similar as to make me want to become a monk.
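For what it’s worth, when the goal is (say) the owner column, `stat` can produce exactly that field without depending on the column layout of `ls -l` — a sketch assuming GNU coreutils:

```shell
# Fragile: column positions in ls -l vary between platforms and locales,
# and the "total" line produces a stray empty field.
ls -l | awk '{print $3}'

# More robust on GNU systems: ask stat for the owner directly.
stat -c '%U' ./*
# (the BSD/macOS equivalent would be: stat -f '%Su' ./*)
```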
I do this all the damn time. Probably because I’m thinking of what file I want before I think of the exact regex I want to search for. Not in scripts though.
The issue reminds me of differing grammar structures.
They mean the same thing, but in Japanese the word order is totally different. Maintaining structure, it’s something like “in the file, for a pattern, search.”
Also, my Japanese is not great, “ファイルにパターンを探せ” or something else may be more correct but the structure is the same.
Sorry to beat a dead horse, but if you do cat foo | grep bar often because you think of foo first, just use <foo grep bar.
Why is this a problem?
It’s not unclear; it’s unlikely to slow things down in any sort of meaningful way; it doesn’t do the wrong thing.
What makes it more than a purely stylistic preference?
Another favorite type of shell gem I’ve found along those lines:
cat blah | grep -v foo | grep -v bar | grep somethingorother
I also know someone running SuSE that found out a recent patch broke ksh backtick behavior and not $(). Which makes me all the happier when I can avoid shell.
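For the record, that whole chain can collapse into a single process; a sketch with awk (the file `blah` and its contents are invented here for illustration):

```shell
# Sample input standing in for the original "blah":
printf 'somethingorother\nfoo somethingorother\nbar\nkeep somethingorother\n' > blah

# Original: four processes, one of them a useless cat.
cat blah | grep -v foo | grep -v bar | grep somethingorother

# Equivalent filtering in one awk process:
awk '!/foo/ && !/bar/ && /somethingorother/' blah
```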
Another I just remembered:
cat something | grep foo | wc -l
vs
grep -c foo something
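To make the comparison concrete (the file contents here are invented for the example):

```shell
# Sample input standing in for "something":
printf 'foo\nbar\nfoo baz\n' > something

cat something | grep foo | wc -l   # three processes; prints 2
grep -c foo something              # one process; prints 2
```

One caveat: `grep -c` counts matching *lines*, not individual matches, which is exactly what `wc -l` on grep’s output counts too, so the two forms agree.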
At work we run tinc as a quasi-VPC clone in production and it’s been good to us so far.
The only complaint I have is that under heavy network load it eats up a good amount of processing power on a DO droplet.
It took some time to get the infrastructure in place to manage and hand out keys and configs – FWIW, I think if we had started with something more zeroconf-like, this might have been easier on us.
I usually recommend IPsec as a VPC between hosts. Do you have performance numbers? The only downside I saw with these setups was that they add some latency; I did not see unreasonably high CPU usage even under heavy load. How is tinc performing? Sparing one core for tinc will usually be OK if latency is improved.
In our testing across datacenters, tinc did not add any noticeable latency. In our tests with iperf, bandwidth capped out at about 150 Mb/s, whereas without it we’d hit line speed (1 Gb/s). We’re not network constrained, so that wasn’t a deal killer for us. You’re right about it eating up a core, but that’s still a core you’re paying for.
Prior to selecting tinc we looked at using IPsec, but its management burden seemed really high. There’s a good talk by Fran Garcia from HostedGraphite going into their problems with it: https://www.usenix.org/sites/default/files/conference/protected-files/srecon16europe_slides_garcia.pdf That presentation and some further reading pretty much steered us away from IPsec.
In the end I think we’ll probably end up switching to a provider that offers a VPC-like service, and then we’ll do site-to-site VPNs across providers, if only to relieve us of the management and overhead burdens of tinc.
Prior to selecting tinc we looked at using ipsec but the management burden of it seemed really high. There’s a good talk by Fran Garcia from hostedgraphite who went into their problems with it https://www.usenix.org/sites/default/files/conference/protected-files/srecon16europe_slides_garcia.pdf
Decent write-up. TL;DR: Don’t use Racoon.
For hosts you control yourself, IPsec with strongSwan and Libreswan using IKEv2 has always been a great experience for me. Connecting road warriors running old software versions on odd OSes has never been the best part, though.
I’m curious what he’s running instead of Android these days – I wonder why he didn’t say in the post.
I don’t see what the difference from LuaTeX is. Is there any, apart from SILE not being compatible? Is there any advantage?
It looks like SILE is more of a from-scratch implementation. It doesn’t contain any of the original Knuth TeX code in it, though it does borrow some of his algorithms (like the line-breaking algorithm). Luatex, by contrast, is pdftex extended with an embedded Lua engine, and pdftex is the original Knuth TeX extended with PDF output (among other things). The main advantage to not being TeX-heritage is much cleaner code that you’re not as afraid to touch, I would guess; the original TeX is very hairy ‘70s code written in WEB auto-translated to C.
Besides that, the main high-level difference I can see is that SILE is built around an InDesign-style model where text flows between frames.
LuaTeX is AFAIR not written in WEB anymore. It’s a nearly complete rewrite to support bidirectional and grid typesetting, and AFAICT it also deals pretty well with flows between frames; at least ConTeXt MkIV could do that pretty impressively. I’d appreciate an answer from someone who has worked on or looked deeply into LuaTeX and SILE, because I find it hard to form an opinion.
I’m not really familiar with luatex internals, but I did briefly browse its repository before posting and saw web2c still in there, which I took as a sign that vestiges of the old WEB code are still in there somewhere. Or is it just in the repo because nobody’s removed it?
The author also mentioned that he wanted to tackle some things that TeX cannot do, like matching up lines with those on the previous page. I believe he works/worked on Bible printing, where the thin paper demands alignment between the sheets.
Whilst the insight about using iOS as your daily driver and the downsides of its design are interesting, the idea that Linux users use Linux “for the challenge” is downright wrong. I use Linux because it makes my life easier (OS X would quite possibly also work for me, but I can’t afford a Mac), mainly because it is designed almost opposite to iOS.
I agree. The fundamental premise of this article, that “people use iOS as a challenge,” is wrong. There may be several reasons (convenience, cost, novelty), but “challenge” is hardly one of them. The author’s comparison to Linux fails for the same reason: there are a lot of reasons people might use Linux (convenience, cost, novelty, ideology), but few people use it as a challenge. To be sure, I’d bet more people use Linux as a challenge than iOS, but that’s still far from a main reason, let alone the main reason.
Here’s a better claim: iPad-only is the new dumb phone (after the invention of the smartphone). iPad-only is the new typewriter (after the invention of the PC). iPad-only is the new unplugged, the new Luddite, the new Amish. It’s people intentionally crippling their workflow in an attempt to improve their quality of life. It’s people who believe that simplicity is the key to productivity. “Doing it for a challenge” is the exact opposite of that. iPad-only is the rejection of challenge.
You are right. Other people use a free or open source OS because they care about the philosophical or security advantages it has.
Where’d we get this trend of libraries self-described as “minimal”? Is there even a definition of what that means? If a library exposes 100 functions, but they’re all used, is that “minimal”? Wouldn’t it be better to have a library described as “everything you need to solve X without filling in the gaps yourself”? Reminds me of “light” food… after all of human history desperately looking for calories, suddenly having *less* calories is the main selling point – “fat free half and half” being the worst example.
Code is different from food. Code is liability.
There is also an explanation in the email of why it is useful to have a minimal crypto library in the kernel. In short, you mostly don’t need flexible APIs when in the kernel; that might be different in userland.