even the $35 Raspberry Pi doesn’t need a particularly light OS
Well, it won’t run a heavy DE for sure. Never underestimate how much A53 cores suck. (And the RAM on the RPi specifically.)
But yeah, OS distributions optimized for tiny disk and RAM footprint are not very relevant.
(cross-posting my hn comment)
As a non-commercial user, I’m okay with using stuff that’s for non-commercial use only.
The problem is when I want to, say, copy a few functions into my own project which I’m publishing under completely unrestricted terms (say, Unlicense). Or use as a dependency, etc.
So, I’d say it’s okay-ish for “product-like” stuff, for user-facing applications. But please NEVER publish libraries everyone depends upon under these terms.
That’s a heavyweight solution for… the dubious benefit of being HTML at the top level just for HTML’s sake?
I just use ES Modules with lit-element. They’re natively supported; there’s no need for any tooling (apart from fixing import paths when serving, of course).
WARNING! YOU CAN TURN YOUR VERY EXPENSIVE CPU INTO SCRAP METAL IF YOU START MESSING WITH CPU OVERCLOCKING. OVERCLOCKING MEMORY VIA XMP IS RELATIVELY SAFE, BUT OVERCLOCKING A THREADRIPPER’s CORE CPU FREQUENCY CAN BE DANGEROUS AS HELL!
Keep calm :)
It’s not that easy to kill a CPU. Sure, if you let a literal monkey press random keys in the firmware setup, it might set Vcore to 2.0V and fry the cores (watch this video to see what happens — without the monkey of course :D) but if a person with a brain in their head operates the “bios”, they’re unlikely to unintentionally kill it — it’s not the 90s anymore, everything has safety protections all over the place. (Well, some vendors manage to screw some of it up, but still.)
XMP, by the way, is eXtremely Useless. XMP is a way of storing “profiles” on the DIMMs… and all the information these profiles have is something like “3200 14-14-14-24”. So, only primary timings. The secondary timings are left at “auto”. All boards are absolute trash at figuring out timings. Ryzen DRAM Calculator is much much better.
At full load after ~15 minutes I read around 55C on the VRM heatsink, 32C for the memory on the ingress side of the fans, and 55C for the memory on the egress side. ACPI shows the CPU at 52C. For me, being a dedicated overclocker, those temps are OK, but I don’t think I would want to O.C. the CPU much (if at all).
This is practically ice cold for a VRM at full load. I guess he must have one of the new TR4 boards.
Syncthing does not work in FreeBSD Jails virtualization
I’ve been running it in a jail for over a year, never heard of any problems o_0 All other instances connect to it just fine. Did you set up port forwarding properly? It needs tcp:22000 by default (you can specify any port when adding a device)
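For reference, if the jail shares the host’s network stack behind pf, forwarding Syncthing’s default sync port into the jail might look something like this (a sketch; the interface name and jail address are placeholders):

```
# /etc/pf.conf on the jail host -- em0 and 192.0.2.10 are illustrative
ext_if = "em0"
rdr pass on $ext_if proto tcp from any to ($ext_if) port 22000 -> 192.0.2.10 port 22000
```

With a vnet jail or a jail sharing the host’s IP directly, no rdr rule is needed at all, only that tcp/22000 isn’t blocked.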
I will have to test it again then …
I have tried to use two different FreeBSD hosts (all firewalls were disabled during the tests) and I was not able to synchronize/connect devices to Syncthing in a Jail …
I really, really want to like Nim but there are just too many oddities for me to seriously dig in. The biggest one is probably the story for writing tests: given its import and visibility rules, it’s really awkward to test functions that aren’t explicitly exported. The official position is, “don’t test those functions,” which I find somewhat naive.
The standard library is a bit unwieldy, but maybe it’s just a maturity issue. For instance, the “futures”-like systems for threads, processes, and coroutine-style processing are all mutually incompatible. What?
Finally (and maybe this is just taste), there’s heavy use of macros everywhere. The Nim web server (jester), for instance, is heavily macro-ized, which means error messages are odd and composability suffers pretty severely.
Don’t forget the partial case sensitivity. “Let’s solve camelCase vs snake_case by allowing you to write any identifier in either, at any time! Yay!”
it’s really awkward to test functions that aren’t explicitly exported. The official position is, “don’t test those functions,” which I find somewhat naive.
I don’t know if that’s the official position, but you can test internal functions in the module itself in the when isMainModule: block.
You can also include the module and test it that way. So there are definitely ways to do it. I don’t think there are any official positions on this.
Hmm, I seem to remember include getting kind of messy on larger code bases – wish I could be more specific, it was a while ago. By official position I meant the responses by the developers on the Nim forum, so maybe that was a bit heavy-handed.
“A patch should arrive soon to flush the L1 cache before vmenter”
The default in separation kernels from the early 2000s was partitioning and/or flushing caches, since shared resources could be covert channels. Now covert channels are being found in those shared resources. The patch will flush the L1 cache. Why not flush them all, just in case more attacks are found? Are there no instructions to do that for L2 or L3?
We don’t know about attacks involving the other caches. Flushing all of them would significantly harm performance.
That’s what I was thinking. It means they’re doing what maximizes performance instead of what minimizes security risk, then fixing the problems as they show up. That’s exactly what Intel is doing, with the additional benefit of cost reduction.
So, the fixes are great. It’s just hypocritical to me that Theo called them out before for not doing enough mitigation of potential side channels to boost performance or reduce development/runtime costs, when they’re doing the same thing for the same reasons w.r.t. other shared resources. Anyone wondering where the rest will come from to get started on preventative mitigations can check this out. Also, ditch Intel for simpler architectures where possible. Submitted my list here to help. It reduces the attack and research space for identifying remaining problems.
ditch Intel for simpler architectures where possible
That is, where you’re paranoid enough to prioritize simplicity and security over performance. Generally performance is very important. Trying to do any actual work (say, compiling code) on Cortex-A53s makes you really appreciate the performance hacks CPU designers are doing :)
I wonder if a good solution is to keep all the aggressive speculation, but avoid sharing any resources between mutually untrusted code. Imagine a machine with a ton of little cores (like this one) plus an OS where applications can completely reserve cores so that nothing else runs on that core – so e.g. a web browser could reserve one core per origin. And in that situation, cache should be flushed, data in RAM should be arranged to maximize space between these untrusted processes, etc. But trusted processes should be able to use the non-reserved resources normally, with maximum performance.
Yeah, I use Intel/AMD CPU’s for that reason. I also claim to run an intentionally insecure setup. I’m at least honest. ;)
As for your idea, Clive Robinson on Schneier’s blog always pushed something like that, called the Prison architecture. It would be a pile of little CPU’s (more like MCU’s) running individual functions and modules of the system. They’d have behavioral profiles, plus be designed so that was possible to begin with. Like Cell’s helper units, they’re restricted to running what they’re told, with some master CPU’s and a hypervisor controlling them. It would inspect them against the profile like a warden inspecting prison cells. Plus be metered.
Those are details I remember most. I was into security/separation kernels heavily at the time. We debated the two concepts a lot. Blog readers liked that topic the most that I remember.
https://unrelenting.technology more microblog than blog — I write long form articles with actual titles almost never :D
The user has to manually verify the checksum, and figure out how to do it on a phone, no less. A checksum isn’t a signature, by the way - if your government- or workplace- or abusive-spouse-installed certificate authority gets in the way they can replace the APK and its checksum with whatever they want. The app has to update itself, using a similarly insecure mechanism.
I don’t think that’s how app updates work on Android??
I’m pretty sure it’s TOFU. CAs are not involved. You install an APK from somewhere, Android remembers the key that signed it. You get an APK that updates an app you already have, Android checks if it’s signed by the same key.
Author here. I was referring to the process of downloading the APK from Moxie’s website. He provides an APK download and a SHA over HTTPS. Once you have the APK, there are no signatures involved afaik.
Oh, CA for the initial HTTPS download. Well, if you go to that level of paranoia, the same could be used to modify your initial download of the F-Droid client :)
Once you have the APK there’s no signatures involved
If the APK was signed, a signature from the same key will be required for an update.
You can easily see this when switching between F-Droid and PlayStore||site-download versions of the same app. (The official F-Droid repo signs APKs with the F-Droid key, while the app developer signs with their own key.) Trying to e.g. update a Play Store app from F-Droid will result in an error message.
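That pinning behaviour can be sketched in a few lines. This is a toy model with made-up names, not the real Android API; the actual check happens inside the package manager against the APK’s signing certificate:

```python
# Toy model of Android's trust-on-first-use update rule: the key that signed
# the first install is pinned, and any update must be signed by the same key.
pinned = {}  # package name -> signing-key fingerprint seen at first install

def install_apk(package, key_fingerprint):
    if package not in pinned:
        pinned[package] = key_fingerprint   # first install: trust and pin
        return "installed"
    if pinned[package] == key_fingerprint:  # same key: update accepted
        return "updated"
    return "signature mismatch"             # different key: update refused

# The F-Droid vs. Play Store conflict mentioned above, in miniature:
print(install_apk("org.example.app", "fdroid-key"))  # installed
print(install_apk("org.example.app", "fdroid-key"))  # updated
print(install_apk("org.example.app", "devel-key"))   # signature mismatch
```

The last call is exactly the error you hit when trying to update a Play Store install from the F-Droid repo.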
I don’t understand how/if webmentions are significantly different from the pingbacks everybody used to have on their WordPress blog (because I think they were enabled by default?) and then promptly disabled because of too much spam.
Webmentions have been modeled after pingback. They are basically a refinement.
Regarding spam: well, as always, when you are exposing write permission through the web, you are more or less vulnerable. This problem has been and is still discussed within the indieweb community. A protocol, Vouch, has been proposed to address it.
And as mentioned below, you still can moderate or simply not display your webmentions altogether.
Webmentions have been modeled after pingback. They are basically a refinement.
Did anyone ever care about pingbacks though? Even the non-spammy ones?
Did anyone ever care about pingbacks though? Even the non-spammy ones?
I can only speak for myself, but I did. It often gave me access to blogs by people with similar interests that I would otherwise never have known about. It used to be that a large percentage of bloggers would list their favourite blogs somewhere on every page (sidebar/footer), so finding one blog whose author shared similar interests could end up in a twenty-five-link binge.
I guess it was a different time, then again it was over a decade ago.
I was wondering that, unless these aren’t intended to be published verbatim and are instead used more as a notification for the author, only really published publicly as a counter?
If pingbacks had been used simply to list, in the author’s admin, places where their articles had been mentioned, and not published alongside the article’s comments, then there would have been a lot less spam.
Vouch was mentioned here already, but for now, just requiring a valid h-entry reply/like/repost/etc. instead of just a link works well enough. Of course spammers can start posting proper replies, but they haven’t yet.
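A stdlib-only sketch of that check (an assumption-laden toy; real receivers use a proper microformats2 parser, whereas this just looks for a link to the target inside an element with class h-entry):

```python
# Accept a webmention only if the source document contains an h-entry
# that actually links to the target URL.
from html.parser import HTMLParser

class HEntryLinkChecker(HTMLParser):
    def __init__(self, target):
        super().__init__()
        self.target = target
        self.depth_in_h_entry = 0  # element nesting depth inside an h-entry
        self.found = False

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        classes = attrs.get("class", "").split()
        if self.depth_in_h_entry:
            self.depth_in_h_entry += 1
        elif "h-entry" in classes:
            self.depth_in_h_entry = 1
        if self.depth_in_h_entry and tag == "a" and attrs.get("href") == self.target:
            self.found = True  # a link to the target, inside an h-entry

    def handle_endtag(self, tag):
        if self.depth_in_h_entry:
            self.depth_in_h_entry -= 1

def source_mentions_target(html, target):
    checker = HEntryLinkChecker(target)
    checker.feed(html)
    return checker.found
```

A bare link outside any h-entry fails the check, which is exactly what filters the classic pingback spam pattern.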
I dislike CloudFlare because they’re making the internet more centralized (the more small websites use them as a proxy, the less direct connections to small websites are made) and because of some infamous abuse handling incidents, but I would trust them 100000% more than my local ISP.
The local ISP knows where I live, the local ISP has to comply with local laws, the local ISP has monitoring installed by the local equivalent of the NSA. The local ISP didn’t even promise any privacy at all, which is worse than CloudFlare’s privacy policy for this resolver.
“the local ISP has monitoring installed by the local equivalent of the NSA”
You should assume CloudFlare does, too. They are a venture-funded, for-profit company operating in a surveillance state, in an ideal position to do surveillance. The NSA/FBI also pays for or coerces compliance, per the Core Secrets leaks. The real question to determine whether they won’t cooperate with the NSA is: “Will they turn down $30-$100+ million, go bankrupt, and/or go to prison for me?” If not, then they’ll likely cooperate. The cooperation also always mandates that they lie about cooperating. They can promise government-proof anything while relaying data to the government.
Key word being local. If you live in a country that’s not very friendly to the US, it’s better to have NSA surveillance than local surveillance :)
Excellent point! I argued something similar in an essay on using multiple, non-cooperative jurisdictions for security. :)
Couldn’t the opposite be just as true? If you live in a country that’s not friendly enough to the US, it may also be better to have local surveillance than NSA surveillance. If I know my government is out for my data, can’t easily access the stuff the US has, and isn’t sophisticated enough to upstream crypto algorithms into the Linux kernel or tap into underwater fibre cables, I’d pick local any day.
edit: plural
That’s true. However, your local ISP will still know where you connect. It will still see how much and if it’s unencrypted what you send/receive.
CloudFlare, being a big target, has to comply with some other countries’ laws; as a US company it has to comply with NSLs, which might or might not exist in your local country. CloudFlare, being a big company, might also comply with other countries’ laws - maybe not small ones, but look at the list of companies that comply with China, etc.
Also, this is actually not about your ISP vs CloudFlare. It’s about whatever you have configured vs CloudFlare. If Firefox starts making HTTPS requests to CF, then as a system administrator expecting DNS requests, you might even miss them.
I think the problem is not that Firefox allows this, but that it’s skipping your system-wide configuration, without asking. After all I can already use CloudFlare’s DNS servers if I want to do so.
And then: CloudFlare makes its money by selling CDN features (including analytics, etc.) to companies, while my ISP makes money by selling internet to me. If your ISP doesn’t promise any privacy (or has no privacy policy, as you make it sound like) maybe consider switching your ISP.
The main point however is: I don’t think “overwriting” things like resolving hostnames is something an application should do, unless it’s asking or by design made to do so. In this case it’s not.
By default it will skip whatever you, your system administrator, etc. might have done to secure you.
It’s totally fine that you trust CloudFlare more than your ISP/your local setup, but I don’t think it’s fine for a piece of software to silently dictate and override whom you trust, when you might already have consciously chosen someone else you trust.
If your ISP doesn’t promise any privacy (or has no privacy policy, as you make it sound like) maybe consider switching your ISP.
In most of the US, that isn’t feasible. Most places have at most two residential broadband providers: the phone company (typically AT&T), and the cable company (either Comcast or Spectrum, depending on location). And not counting MVNOs, there are, what, four mobile broadband providers?
I do basically agree with you that this may skip what your local sysadmin has done to secure you. But it’s making the trade-off that most people do not have a local sysadmin doing anything to secure them, and will never opt in to anything to secure themselves.
?? GitLab is moving from Azure (Microsoft) to Google Cloud, and they’re announcing the unavailability in these places as “because Google cloud”. What’s the difference between Azure and Google?
Azure was available there, even though Microsoft is also a US company?? How?
Microsoft and Google are government-level companies; they work with and for governments. That means that sometimes they will have some advantages somewhere and sometimes have to give up something else. Which probably explains the difference between them.
As a Cuban (not living there right now), I’m not really excited about this news. I used GitLab in the past, and still use it right now for personal projects. I experienced something similar with Bitbucket a few years ago when they went public; at least GitLab has posted some news about it, while Bitbucket closed access without any warning.
With JavaScript it seems the simpler the programming model gets, the more complex the build system and tooling becomes
No! We’re actually pretty close to not needing any build process!
The only thing browsers can’t do is resolve package-based import paths. (And I have a small on-the-fly rewriter for that.) Other than that, the only reason you “need” to have a bundler that calls babel which calls plugins and so on is backwards compatibility.
(Oh, also another reason is JSX. Consider lit-html/LitElement instead of JSX based… stuff!)
If you don’t have to support ancient browsers, you SHOULD ship raw ES6 with async/await directly to browsers without any compilation steps.
A better test bed. Although my work focuses on developing programs on Linux, I will try to compile and run applications on OpenBSD if it is possible.
I feel like the lack of valgrind does hurt OpenBSD as a testbed. I know there’s malloc.conf(5), but that doesn’t seem to help much in the case of, say, out of bounds access of a stack-allocated variable.
a) Patches. Although most of them are trivial modifications, they are still my contributions.
Don’t claim it’s just trivialities. The small things and adding polish is what really makes OpenBSD stand out (or any software project, really), and every “trivial” modification helps.
OpenBSD does have Valgrind.
aws kms create-key
I think that costs a little bit of money? IIRC I saw a dollar or so from KMS on my bill… Damn Bezos :D
Anyway, fetching from a service that always reveals the secrets to your machine is not that different from a local file on that machine. It’s good against random people trying disk recovery on a volume (though that’s not a problem on EC2, they do wipe all data) and accidental backups/snapshots (“your VM is cast into an AMI” as you mentioned), but fundamentally it’s still secrets-on-your-machine.
I think it might be reasonable for these services to support additional protection — not just based on secrets, but also, say, IP address whitelisting. So that if someone gets your secrets from an AMI you accidentally included secrets into and made public, they couldn’t access your accounts because they’re not accessing from your machines.
costs a little bit of money
Unfortunately, yes. As said below, it’s $1/mo. It’s unfortunate they charge for it, but most of the AWS accounts I’m privy to are spending a couple hundred a month, so it goes unnoticed.
but also, say, IP address whitelisting
Absolutely! I use an IAM instance role to allow retrieving and decrypting credentials, so the machine/container has to be launched in AWS and under the right role to allow retrieving credentials.
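In IAM terms, that kind of lockdown can be expressed as a condition on the policy. A sketch with placeholder values; note that `aws:SourceIp` applies to requests over public endpoints, and traffic through a VPC endpoint needs `aws:VpcSourceIp` or an endpoint policy instead:

```json
{
  "Version": "2012-10-17",
  "Statement": [{
    "Sid": "DenyDecryptFromOutsideOurNetwork",
    "Effect": "Deny",
    "Action": "kms:Decrypt",
    "Resource": "*",
    "Condition": {
      "NotIpAddress": { "aws:SourceIp": ["203.0.113.0/24"] }
    }
  }]
}
```

An explicit Deny like this overrides any Allow, so even a leaked credential can’t decrypt from an unexpected address.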
With clouds being so dynamic, IP address whitelisting is harder – containers and instances come online at different places by default. Elastic IPs, etc, make some room for lockdown. Fetching secrets at container start, process start, etc is a great way to keep the secrets off the file system, but it’s only one component of the broader picture of protecting your database credentials.
to allow retrieving credentials
I mean, require the right IP/machine to connect to the service the credential is for. I guess you already have that for AWS services :)
I’m interested in the pretty impressive performance delta – I wouldn’t have thought that Zen could outperform Broadwell quite so handily!
Me too! I’ll be completely honest: I have no idea what factors contributed here. Maybe things like no NUMA? a bit more cache? Something with Spectre / Meltdown? No idea – not my forte – but I am sure delighted by it.
EPYC is way more NUMA than Intel equivalents. EPYC has four dies on one package, and each die is a NUMA domain.
But Meltdown mitigations are indeed usually only turned on for Intel! :)
“In a world where I can rent a machine that tries billions of MD5 calls per second.” Wouldn’t the test for a successful hash operation involve using the hash to decrypt the data on each try? That would make MD5 cracking prohibitively expensive.
Would it really be prohibitive? AES is really fast, especially with hardware instructions… and you probably only have to try the first block to check for the private key’s header?
This is exactly what JtR does: https://github.com/magnumripper/JohnTheRipper/blob/bleeding-jumbo/src/opencl/ssh_kernel.cl#L225 Theoretically you could probably pipeline the AES into GPU processing too, which does slow the “raw” crack rate, but not all that much.
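The shape of that per-candidate check, sketched with a stand-in XOR “cipher” so it runs without third-party crypto libraries (real crackers do the MD5 key derivation plus one block of actual AES, as in the JtR kernel linked above; the header constant here is illustrative):

```python
import hashlib

KNOWN_HEADER = b"openssh-key-v1\x00\x00"  # stand-in 16-byte known plaintext

def xor_block(key16, block16):
    # stand-in for one AES block operation; XOR is its own inverse
    return bytes(a ^ b for a, b in zip(key16, block16))

def crack_first_block(ciphertext_block, candidates):
    for guess in candidates:
        key = hashlib.md5(guess).digest()   # 16-byte key from the passphrase
        if xor_block(key, ciphertext_block) == KNOWN_HEADER:
            return guess                    # one block decrypted, one compare
    return None
```

Per candidate, that’s one MD5 plus a single block operation and a 16-byte compare, so verifying correctness adds very little on top of the hash itself.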
This is really a non-issue as far as I’m concerned.
Browsers (either standalone or with plugins) let users turn off images, turn off Javascript, override or ignore stylesheets, block web fonts, block video/flash, and block advertisements and tracking. Users can opt-out of almost any part of the web if it bothers them.
On top of that, nobody’s twisting anybody’s arm to visit “heavy” sites like CNN. If CNN loads too much crap, visit a lighter site. They probably won’t be as biased as CNN, either.
Nobody pays attention to these rants because at the end of the day they’re just some random people stating their arbitrary opinions. Rewind 10 or 15 or 20 years and Flash was killing the web, or Javascript, or CSS, or the img tag, or table based layouts, or whatever.
Rewind 10 or 15 or 20 years and Flash was killing the web, or Javascript, or CSS, or the img tag, or table based layouts, or whatever
Flash and table based layouts really were and, to the extent that you still see them, are either hostile or opaque to people who require something like a screen reader to use a website. Abuse of javascript or images excludes people with low end hardware. Sure you can disable these things but it’s all too common that there is no functional fallback (apparently I can’t even vote or reply here without javascript being on).
Are these things “killing the web” in the sense that the web is going to stop existing as a result? Of course not, but the fact that they don’t render the web totally unusable is not a valid defense of abuses of these practices.
I wouldn’t call any of those things “abuses”, though.
Maybe it all boils down to where the line is drawn between supported hardware and hardware too old to use on the modern web, and everybody will have different opinions. Should I still be able to browse the web on my old 100 MHz Pentium with 8 MB of RAM? I could in 1996…
Should I still be able to browse the web on my old 100 MHz Pentium with 8 MB of RAM?
To view similar information? Absolutely. If what I learn after viewing a web page hasn’t changed, then neither should the requirements to view it. If a 3D visualization helps me learn fluid dynamics, ok, bring it on, but if it’s page of Cicero quotes, let’s stick with the text, shall we?
I wouldn’t call any of those things “abuses”, though.
I think table based layouts are really pretty uncontroversially an abuse. The spec explicitly forbids it.
The rest are tradeoffs; they’re not wrong 100% of the time. If you wanted to make YouTube in 2005 you presumably had to use Flash, and people didn’t criticize that; it was the corporate website that required Flash for no apparent reason that drew fire. The question that needs to be asked is whether the cost is worth the benefit. The reason people like to call out news sites is that they haven’t really seen meaningfully new features in two decades (they’re still primarily textual content, presented in a pretty similar style, maybe with images and hyperlinks, all things that 90s hardware could handle just fine), but somehow the basic experience requires 10? 20? 100 times the resources? What did we buy with all that bandwidth and CPU time? Nothing except user-hostile advertising, as far as I can tell.
If you wanted to make youtube in 2005 presumably you had to use flash and people didn’t criticize that
At the time (ok, 2007, same era) I had a browser extension that let people view YouTube without flash by swapping the flash embed for a direct video embed. Was faster and cleaner than the flash-based UI.
Maybe you would like this one https://github.com/thisdotvoid/youtube-classic-extension
I’d say text-as-images and text-as-Flash from the pre-webfont era are abuses too.
On top of that, nobody’s twisting anybody’s arm to visit “heavy” sites like CNN. If CNN loads too much crap, visit a lighter site.
Or just use http://lite.cnn.io
nobody’s twisting anybody’s arm to visit “heavy” sites like CNN
Exactly. It’s not a “web developers are making the web bloated” problem, it’s a “news organizations are desperate to make money and are convinced that personalized advertising and tons of statistics (Big Data!!) will help them” problem.
Lobsters is light, HN, MetaFilter, Reddit, GitHub, GitLab, personal sites/blogs, various wikis, forums, issue trackers, control panels… Most of the stuff I use is really not bloated.
If you’re reading general world news all day… stop :)
Huh. I’d been wondering why GNOME crashing tended to exit the whole session beginning in Ubuntu 17.10(ish); now I know. Is this true for all Wayland-backed desktops?
Is this true for all Wayland-backed desktops?
Of course not. It’s pretty common to run panels and whatever other bits of UI in separate processes. So e.g. if the panel crashes in Sway, the whole desktop won’t crash. I’m not sure about KDE, but probably kwin_wayland only does window management and not all the shell functionality? Also I don’t think it loads arbitrary user installed extensions that can do whatever with the shell…
The divide between the “continuous deployment” world and the embedded Linux “LTS” world is interesting. Would be amazing if some company decides to merge these worlds and make, say, a phone that updates to a new nightly kernel build every day. Purism could do this :)
What would be the point? Most people want their devices to be stable, right?
Most people probably, but pretty much all LineageOS users run nightlies. Which aren’t really “unstable” from my experience. The point is getting improvements fast.
I don’t believe in “stability by using old stuff” (like centos and debian stable). They’re not “stable”, just “outdated”.
CentOS and Debian are stable in the sense of not changing. It’s very useful to be able to install an OS on your computer(s), then know that it will stay the same for X years, with the exception of truly important (e.g. security) updates. It may not be “improving”, but then again, especially when it comes to UI, a lot of people just want it to stay the same so they can get on with their lives.