I wish I could get a cloned key like how door locks come with a spare that I could store securely in case I lost it. Correct me if I’m wrong, but having a second key for U2F just means you have to register every service with both keys which you wouldn’t do since it’s hopefully a pain to get to your securely-stored spare key.
There is a proposal for registering backup keys without them being present, but I’m not sure it’s making progress and it certainly isn’t implemented yet.
You’re looking for a cryptocurrency hardware wallet, basically. They’re the only ones that do U2F and also have key backup/import. Although it would be nice to just be able to buy two of the same key from yubikey or whoever.
Syncthing has had this feature since 2021: release v1.15.0. While I am using Syncthing I haven’t tried connecting an untrusted device. Instead I decided to just run on my home NAS to have an always-on device.
In that setup, I’d still be inclined to enable this mode, unless you also expose the syncthing directory via SMB or something else. If the NAS is just there as a blob of storage, then treating it as an untrusted device means that a compromise of the NAS won’t leak the data that you sync with SyncThing. A computer that doesn’t need to see plaintext should never have access to plaintext.
I don’t think you can take that as an absolute principle. You need to weigh the chances of your NAS being compromised against the chances of losing your keys, and the cost of data leakage against the cost of data loss.
It’s remarkably easy to get invited to Lobste.rs. What I like about the invite-only system is that you have to contact a person for an invite. I got invited by contacting a person. This simple extra effort is so much better than all the AI tools to filter out bots. I’m sure bad humans do slip in, and now with ChatBots, bad computers impersonating good humans will slip in, but I still think the system is better than something that lets the FSB sign up a hundred thousand accounts and bring down lobste.rs.
What I like about the invite system is that it provides some basic accountability. If spammers are getting invites we can look at the tree and find the right spot to prune. It provides some basic protection from spam and sock puppets.
I do think invites should be fairly easy to get. I think I would extend an invite to most people who asked me. If I don’t know them personally I would just want to see some comment history on another site to make sure that they are likely a human and interested in what is on-topic here.
It would be interesting to have some sort of indication of “how sure” you should be for invites. Right now the about page says you should invite “people [you] believe will contribute positively” which doesn’t say much about the confidence margin you should have. I wonder if some “the mods are doing ok, don’t worry about being too stingy with invites” vs “the mods are busy right now, be conservative” would be useful, and it could occasionally be updated. But I’m probably just way overthinking this.
As an addendum to your comment on accountability, I feel that accountability also extends in both directions. I was given an invite by a stranger and knowing that they might pay a price if I’m too big of a jerk is a good reminder to give every comment a second editing pass and make sure that I’m contributing instead of ranting.
Disable password auth on your ssh and breathe easy. This kind of strategy might help your log files look nicer, but it does nothing for security, since the bad guys can get access to an almost infinite supply of IPs (especially if they have IPv6) and fail2ban itself adds attack surface.
I’ve used fail2ban. It’s a nice way of spamming your admin mail and pretending you’re more secure. The only exception I see is people dealing with DDoS, where fail2ban might help.
For hysterical raisins, I have one system that has ssh on port 22 and accepts passwords. The system I have set up there is a custom syslog that scans the logs as soon as they’re submitted via syslog() (no grovelling through files) and performs actions on matched logs. It scans for 5 failed ssh attempts from an IP address before banning the address via iptables. At first, I just banned IPs for a few days. It didn’t help. Then two weeks. That helped for several years until I had problems with iptables failing (because of too many entries). I now just keep adding until iptables complains, drop a few of the oldest rules, then attempt to add again. The system currently has 2,391 blocked entries. Sigh.
You could investigate having one single iptables rule that matches on an ipset, and adding the attackers’ IPs to the ipset. This should make adding and blocking much more efficient, since there’s a single rule to check.
However, by doing this, you would be essentially rewriting sshguard.
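For illustration, here is a rough sketch of that ipset-backed approach in Rust (my own sketch, not anything from the thread): it assumes an ipset named sshban has already been created and is matched by a single iptables rule such as iptables -A INPUT -m set --match-set sshban src -j DROP, and it assumes sshd-style “Failed password … from <ip>” lines arrive on stdin. The 5-strike threshold mirrors the comment above.

```rust
// Rough sketch, not production code: count SSH auth failures per IP from log
// lines on stdin and add repeat offenders to an ipset named "sshban", which a
// single iptables rule (-m set --match-set sshban src -j DROP) already blocks.
use std::collections::HashMap;
use std::io::{self, BufRead};
use std::process::Command;

fn main() -> io::Result<()> {
    let mut failures: HashMap<String, u32> = HashMap::new();
    for line in io::stdin().lock().lines() {
        let line = line?;
        // Assumed log shape: "Failed password for root from 203.0.113.7 port 22 ssh2"
        if line.contains("Failed password") {
            if let Some(pos) = line.find(" from ") {
                let ip = line[pos + 6..].split_whitespace().next().unwrap_or("");
                let count = failures.entry(ip.to_string()).or_insert(0);
                *count += 1;
                if *count == 5 {
                    // Equivalent to running: ipset add sshban <ip>
                    let _ = Command::new("ipset").args(["add", "sshban", ip]).status();
                }
            }
        }
    }
    Ok(())
}
```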
Yeah, that’s what I do on my servers too, actually. I wrote the blog post with SSH as an example because it’s a universal service, but fail2ban can protect any service with the right configuration. For example, I host a lot of web-facing services that don’t have IP-based anti-bruteforce logic. It’s way easier (and smarter?) to have one external tool do this instead of implementing this logic again and again in each service imo.
Two possibly newbie questions from a non-server-admin:
Disable password auth on your ssh and breathe easy.
Why is this not the universally accepted solution? Do some people need pw login for legacy reasons?
Also in the article they write:
Fail2ban also consumes a lot of RAM, and gets easily to 300 MB.
Just a pure curiosity question… why does it consume so much RAM? Naively, I’d think you could read files with a stream, and updating firewall rules shouldn’t take much RAM, so it could in theory be a tiny footprint program. Am I missing something?
The slow copy performance when it’s page-aligned is really something. Wonder if this is a case where there was a CPU bug (like, page-aligned copies could corrupt or leak data) and the slow performance we’re seeing is the end result of microcode adding a workaround or mitigation.
I suspect it’s someone being clever in microcode. The flow I imagine is this:
If something is page aligned, we know that either the first load / store will trap or none will for the remainder of the page, so let’s do a special case for that which is fast.
The common case is less-aligned data, let’s aggressively optimise that.
Oh, oops, we forgot to apply the optimisations in the fast-path case.
You might be right though. One of the Spectre variants (I think on Intel) involved the first cache line in a page being special. It’s possible that AMD has some mitigations for a similar vulnerability in their rep movsb microcode.
One of the Spectre variants (I think on Intel) involved the first cache line in a page being special. It’s possible that AMD has some mitigations for a similar vulnerability in their rep movsb microcode.
If the non-aligned access is still fast, I wonder if the mitigation is missed…
At the core of my complaints is the fact that distributing an application only as a Docker image is often evidence of a relatively immature project, or at least one without anyone who specializes in distribution. You have to expect a certain amount of friction in getting these sorts of things to work in a nonstandard environment.
This times a thousand. I have tried to deploy self-hosted apps that were either only distributed as a Docker image, or the Docker image was obviously the only way anyone sane would deploy the thing. Both times I insisted on avoiding Docker, because I really dislike Docker.
For the app that straight up only offered a Docker image, I cracked open the Dockerfile in order to just do what it did myself. What I saw in there made it immediately obvious that no one associated with the project had any clue whatsoever how software should be installed and organized on a production machine. It was just, don’t bother working with the system, just copy files all over the place, oh and if something works just try symlinking stuff together and crap like that. The entire thing smelled strongly of “we just kept trying stuff until it seemed to work”. It’s been years but IIRC, I ended up just not even bothering with the Dockerfile and just figuring out from first principles how the thing should be installed.
For the service where you could technically install it without Docker, but everyone definitely just used the Docker image, I got the thing running pretty quickly, but couldn’t actually get it configured. It felt like I was missing the magic config file incantation to get it to actually work properly in the way I was expecting to, and all the logging was totally useless to figure out why it wasn’t working. I guess I’m basically saying “they solved the works-on-my-machine problem with Docker and I recreated the problem” but… man, it feels like the software really should’ve been higher quality in the first place.
no one associated with the project had any clue whatsoever how software should be installed and organized on a production machine. It was just, don’t bother working with the system, just copy files all over the place, oh and if something works just try symlinking stuff together and crap like that.
That’s always been a problem, but at least with containers the damage is, well, contained. I look at upstream-provided packages (RPM, DEB, etc) with much more scrutiny, because they can actually break my system.
Can, but don’t. At least as long as you stick to the official repos. I agree you should favor AppImage et al if you want to source something from a random GitHub project. However there’s plenty of safeguards in place within Debian, Fedora, etc to ensure those packages are safe, even if they aren’t technologically constrained in the same way.
I agree you should favor AppImage et al if you want to source something from a random GitHub project.
I didn’t say that. Edit: to be a bit clearer. The risky bits of a package aren’t so much where files are copied, because RPM et al have mechanisms to prevent one package overwriting files already owned by another. The risk is in the active code: pre and post installation scripts and the application itself. From what I understand AppImage bundles the files for an app, but that’s not where the risk is; and it offers no sandboxing of active code. Re-reading your comment I see “et al” so AppImage was meant as an example of a class. Flatpak and Snap offer more in the way of sandboxing code that is executed. I need to update myself on the specifics of what they do (and don’t do).
However there’s plenty of safeguards in place within Debian, Fedora, etc to ensure those packages are safe
Within Debian/Fedora/etc, yes: but I’m talking about packages provided directly by upstreams.
Within Debian/Fedora/etc, yes: but I’m talking about packages provided directly by upstreams.
Regardless of which alternative, this was also my point. In other words, let’s focus on which packagers we should look at with more scrutiny rather than which packaging technology.
AppImage may have been a sub-optimal standard bearer but we agree the focus should be on executed code. AppImage eliminates the installation scripts that are executed as root and have the ability to really screw up your system. AppImage applications are amenable to sandboxed execution like the others but you’re probably right that most people aren’t using them that way. The sandboxing provided by flatpak and snap do provide some additional safeguards but considering those are (for the most part) running as my user that concerns my personal security more than the system as a whole.
On the other side, I’ll happily ignore the FHS when deploying code into a container. My python venv is /venv. My application is /app. My working directory… You get the picture.
This allows me to make it clear to anybody examining the image where the custom bits are, and my contained software doesn’t need to coexist with other software. The FHS is for systems, everything in a dir under / is for containers.
That said, it is still important to learn how this all works and why. Don’t randomly symlink. I heard it quipped that Groovy in Jenkins files is the first language to use only two characters: control C and control V. Faking your way through your ops stuff leads to brittle things that you are afraid to touch, and therefore won’t touch, and therefore will ossify and be harder to improve or iterate.
I got curious so I actually looked up the relevant Dockerfile. I apparently misremembered the symlinking, but I did find this gem:
RUN wget --no-check-certificate https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh
RUN bash Miniconda3-latest-Linux-x86_64.sh -b -p /miniconda
There were also unauthenticated S3 downloads that I see I fixed in another PR. Apparently I failed to notice the far worse unauthenticated shell script download.
I’ve resorted to spinning up VMs for things that require docker and reverse proxying them through my nginx. My main host has one big NFTables firewall and I just don’t want to deal with docker trying to stuff its iptable rules in there. But even if you have a service that is just a bunch of containers it’s not that easy because you still have to care about start, stop, auto-updates. And that might not be solved by just running Watchtower.
One case of “works on docker” I had was a java service that can’t boot unless it is placed in /home/<user> and has full access. Otherwise it will fail, and no one knows why springboot can’t work with that, throwing a big fat nested exception that boils down to java.lang.ClassNotFoundException for the code in the jar itself.
I want to throw in something that hasn’t been mentioned as much yet: Smartgit - it’s my favorite git UI, even though it’s proprietary for non-FOSS work. Haven’t found any other cross-platform git interface that has all the features I need for my daily work. I sometimes forget how bad the git CLI is, and then I’m back SSH’ed into a machine with only that left - no wonder people struggle with git that much.
Yeah this headline could have been “Canonical offers free Ubuntu Pro for personal use, small networks, and open source projects[1]”. Love and high-fives could have ensued.
[1] I’m guessing, but that would have been the sensible move.
I’ve read all comments in this thread, and perused the ones over on HN, and nowhere have I seen anyone mention this option. It’s beyond pathetic from Canonical.
The security patches exist. Canonical has them locked up in repos which normal Ubuntu LTS users can’t access. To me, “holding back” seems to describe more or less exactly what Canonical is doing with those packages in that repo.
So I’m guessing the TOS on Pro asks you nicely to not simply clone the repo? Since they’re mostly distributing FOSS software, it doesn’t seem like there’s much of a case against someone else just offering the Pro packages for free.
But can someone help me understand why a switch needs a 64-bit multicore processor and 8 gigs of RAM, and runs Linux (though this is not unique to the SN2700, just a general observation)? I was under the impression that switches (both L2 and L3) do all performance-sensitive work in hardware.
There are some exceptional cases that need to be offloaded to a real CPU, plus you want to be able to support at least a bit of monitoring and statistics. When you’ve got 32 ports and an aggregate switching capacity of 5 billion packets per second, you don’t want that CPU to be too poky, and on a $25,000 device you can probably afford to spend $100 on the CPU instead of $10 if it opens up some flexibility for your customers. And reading the part of the article about switchdev (and knowing a bit about Mellanox’s history with Linux), flexibility was definitely their intent.
Switches do a bunch of control plane stuff, things like STP, LLDP, VXLAN, etc. Dunno how much of that is in the data plane on this device :-)
Switches also need some kind of CLI for configuration, and it makes sense to use Linux for that. It can also act as the front-end processor for the data plane, e.g. feeding it firmware at boot time.
The last 50G firewall I ordered is also a bunch of mellanox cards and an EPYC processor - if you hit the CPU with your traffic for whatever reason (things you can’t offload to the network cards), then you better have enough compute for that.
You can do way more than VLANs on such a thing, like NAT, VPN, VRF and other routing stuff. For firewalling you might also hit the CPU, depending on what you want to filter (and what your hardware offloading can do).
It really depends what “all performance-sensitive work” means. Sure, the packet-flinging is done in hardware, and is obviously very performance-sensitive. But running routing protocols, collecting and reporting statistics, deciding what to do with packets that the hardware plane cannot handle, … all very suddenly become “performance-sensitive” as soon as your management CPU turns out to be too slow to do them all and gets overwhelmed, because suddenly the device does not behave as expected anymore.
Interesting. So switches actually perform a non-trivial amount of “management work”, as well as being a fallback for special cases (like things that cannot be offloaded to the NIC). Good to know.
TBH I think that, for the most part, I will only benefit from a few of these - mostly in terms of sugar. I routinely have positive experiences with Async rust and basically never have negative experiences/ issues that crop up because of it. In 3 years of writing Rust full time I had one async problem one time - I accidentally was causing an infinite select! loop in a tonic server, so the server would hang. Not really a big deal for 3 years of async work.
I also don’t think that async-drop is quite as necessary or desirable as it may seem. I really wanted it at one point and then I realized that it’s just too tricky. It reminds me of how File calls sync_all on drop but ignores the error - ultimately, “drop” is just a really tricky place for anything complex. I’d rather see a linear type, like:
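Something along these lines, as a minimal sketch (my own illustration, not code from the comment; today’s Rust can’t actually forbid the implicit drop, so this only makes close() the intended path out):

```rust
use std::fs::File;
use std::io;

// Hypothetical "linear-ish" wrapper: the intended way out is close(), which
// surfaces the sync error that a Drop impl would have to swallow.
struct Linear {
    inner: File,
}

impl Linear {
    fn new(inner: File) -> Self {
        Linear { inner }
    }

    // Consume the wrapper, flush to disk, and hand back the plain File,
    // which can then be dropped normally.
    fn close(self) -> io::Result<File> {
        self.inner.sync_all()?;
        Ok(self.inner)
    }
}
```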
Where Linear can’t be implicitly drop’d, you have to .close() it, get a File, and File can be dropped. Hand wavy and not necessarily a good implementation but hopefully this is getting the point across. This would be preferable to shoving more into a drop impl when drop is such a constrained interface.
Or maybe add a try_drop(&mut self) that will run implicitly but also ? implicitly. I don’t know.
I guess the point is that I’m not sure an async drop can ever be worth it.
Yes, the post mentions “Undroppable types”, as a proposal in the direction of linearity as you suggest. This is something that comes up relatively often in Rust contexts, that drop is too restricted and that being able to do things manually would sometimes matter for expressiveness.
async Drop should enable safe completion based IO without the existing overhead required at the moment (IO buffers need to either be copied around or heap allocated (+ maybe dynamically tracked)). I believe it could also be the grounds for async scoped() / structured concurrency. The pattern here being that borrowed data needs to safely ensure it’s no longer borrowed on Drop without blocking the runtime.
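To make that shape concrete, here is a purely hypothetical sketch (nothing like this exists in Rust today; the trait name and method are made up for illustration) of how an async destructor hook might let a completion-based IO handle await its in-flight operation before its buffer is released:

```rust
// Entirely hypothetical: Rust has no AsyncDrop. This only sketches the shape
// such a hook might take for completion-based IO (e.g. io_uring), where a
// buffer lent to the kernel must not be freed while an operation is in flight.
trait AsyncDrop {
    // Would run instead of a synchronous Drop in async contexts, so cleanup
    // can await without blocking the runtime.
    async fn async_drop(&mut self);
}

struct ReadOp {
    // Owned buffer that the kernel is (conceptually) still writing into.
    buf: Vec<u8>,
}

impl AsyncDrop for ReadOp {
    async fn async_drop(&mut self) {
        // Await completion or cancellation of the in-flight read here, so that
        // `buf` is no longer aliased by the kernel when it is finally freed.
    }
}
```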
I can share the experience for some projects. But for others I really wanted async traits and simply went down a weird route where I didn’t have to use it, or used the async-trait macro. I’m pretty sure that library authors will love having some of this functionality.
The short answer is that the game is throwing so much unnecessary geometry at the graphics card that the game manages to be largely limited by the available rasterization performance. The cause for unnecessary geometry is both the lack of simplified LOD variants for many of the game’s meshes, as well as the simplistic and seemingly untuned culling implementation. And the reason why the game has its own culling implementation instead of using Unity’s built in solution (which should at least in theory be much more advanced) is because Colossal Order had to implement quite a lot of the graphics side themselves because Unity’s integration between DOTS and HDRP is still very much a work in progress and arguably unsuitable for most actual games. Similarly Unity’s virtual texturing solution remains eternally in beta, so CO had to implement their own solution for that too, which still has some teething issues.
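For readers unfamiliar with LOD selection, here is a simplified sketch of the usual idea (my own illustration with made-up thresholds, not how Cities: Skylines 2 or Unity actually implements it): estimate an object’s projected screen size and swap to a cheaper mesh, or cull it, once it becomes small. Skipping this step is exactly what makes the missing LOD variants so expensive.

```rust
// Illustrative only: classic screen-coverage LOD selection with invented cutoffs.
fn select_lod(object_radius: f32, distance: f32, fov_y_radians: f32, screen_height_px: f32) -> usize {
    // Approximate projected height of the object's bounding sphere in pixels.
    let projected_px = (object_radius / (distance * (fov_y_radians * 0.5).tan())) * screen_height_px;
    if projected_px > 250.0 {
        0 // full-detail mesh
    } else if projected_px > 20.0 {
        1 // simplified mesh
    } else {
        2 // impostor, or cull the draw entirely
    }
}
```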
Sounds like they bought a bunch of stock-ish assets and never had the artist resources to clean them up, and they had an unsolvable choice between the old devil-they-knew renderer and the new devil-they-didn’t one. Painful, and lacking a dedicated graphics engine guru (probably an experienced team, for a renderer of this complexity), there’s no real good solution besides getting lucky or making a game that looks like Cities: Skylines 1.
And I can just hear someone saying “we’ll buy a bunch of premade assets so we don’t need as many artists”. And they should be demoted until they have learned a thing or two. But honestly it sounds like the core simulation works well, so there’s hope for fixing the renderer.
I’m rather disappointed in Paradox for this, tbh. I respected them a lot more when they were a tiny nobody who made weird-ass niche games and did it as hard as possible.
I respected them a lot more when they were a tiny nobody who made weird-ass niche games and did it as hard as possible.
They still release DLCs for games you can’t actually run in multiplayer without getting disconnects and desyncs. And where the whole bundle costs you 160€.
Yep. Age Of Wonders 4 came out with a list of 4 DLC’s already scheduled for every 3 months since the release date, each costing, what… 10-15$? It’s like, guys, can you at least pretend to be selling me an actual finished game?
In contrast, Age Of Wonders: Planetfall has 3 DLC’s total, spaced 6 months apart, even though they individually cost more. Age of Wonders 3 was apparently self-published before Paradox acquired Triumph Studios, and has 3 DLC’s total, which are actually called “expansion packs”.
Does anyone know how that works with other engines? I don’t do serious 3d, but I’d expect that game IDEs would have some lint-like element that would shout at you about issues like “these 10k triangles rendered into one pixel, what are you doing?”. Is something like renderdoc integration not possible/common in large projects?
My guess would be that the issues were known but the game was rushed out in an unfinished state. I tried to look up Paradox Interactive’s financials and it seems like there’s a slight 7% drop in year-over-year revenue in 2023 Q3 while the expenses went up but on the other hand their Q2 was excellent. So it can’t be deduced from this they were short on money.
Perhaps they wanted to guarantee some positive cash flow for the last quarter? Those C-suite bonuses don’t pay for themselves!
People do gather that kind of data, but it needs to be part of the renderer in-game and you need time to implement it. I think it’s usually counters on a “whole frame” basis rather than per-object like you might desire for quickly debugging specific assets.
The team working on CS2 were starved for time and they had to implement a lot more of the renderer themselves than you would normally hope for, so they would have had unusually little help from off-the-shelf tooling built into their renderer.
Outside of the running game, you don’t have enough context. For example a 10k triangle model might be totally appropriate if this is the model for a character who appears in scenes where the camera passes close to them.
In a previous lobste.rs post, I asserted that you could build a system for doing end-to-end encryption over an existing messaging service in about a hundred lines of code. It turns out that (with clang-format wrapping lines - libsodium) it was a bit closer to 200. This includes storing public keys for other users in SQLite, deriving your public/private key pair from a key phrase via secure password hashing, and transforming the messages into a format that can be pasted into arbitrary messaging things (which may mangle binaries or long hex strings).
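As an aside, the “paste into arbitrary messaging things” step is mostly an encoding problem. Here is a small sketch of that step alone (my own illustration in Rust, not the author’s C code): armor the ciphertext into a plain lowercase alphabet that chat clients are unlikely to mangle.

```rust
// Base32-style armoring with a lowercase alphabet, so ciphertext survives
// copy-paste through chat apps that mangle raw bytes or very long hex strings.
const ALPHABET: &[u8; 32] = b"abcdefghijklmnopqrstuvwxyz234567";

fn armor(data: &[u8]) -> String {
    let mut out = String::new();
    let mut acc: u32 = 0;
    let mut bits = 0;
    for &byte in data {
        acc = (acc << 8) | byte as u32;
        bits += 8;
        // Emit one character per 5 bits, most significant bits first.
        while bits >= 5 {
            bits -= 5;
            out.push(ALPHABET[((acc >> bits) & 0x1f) as usize] as char);
        }
    }
    if bits > 0 {
        // Pad the final partial group with zero bits.
        out.push(ALPHABET[((acc << (5 - bits)) & 0x1f) as usize] as char);
    }
    out
}
```

Decoding is just the reverse; the point is only that the armored form pastes cleanly anywhere plain text does.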
Please share this with any lawmakers who think that making Signal, WhatsApp, and so on install backdoors will make anything safer.
Please share this with any lawmakers who think that making Signal, WhatsApp, and so on install backdoors will make anything safer.
“Lawmakers” don’t think that banning E2EE will actually prevent the use of E2EE, and engaging with them as if they do is a big part of why we lose these fights.
Being perceived as doing something about a thing that voters perceive as a problem.
Also, the point is rarely to actually make the thing impossible to do, the point is to make it impossible to do the thing while also staying on the right side of the law/retaining access to the normal banking and financial system/etc.
This, for example, was the outcome of SESTA/FOSTA – sex trafficking didn’t stop, and sex work didn’t stop, but legitimate sex workers found it much harder to stay legit.
I wish I could pin this reply to the top of the thread.
Our lawmakers are, by and large, quite competent. They are competent at politics, because our system selects people who are good at politics for political positions.
Part of that is, as you say, being seen to do something. Another part is reinforcing the systems of power they used to rise to the top; any law that makes using encryption criminal criminalizes the kind of person who already uses encryption. It’s worth thinking about why that’s valuable to politicians and their most monied constituents.
Don’t forget the key word “perceived” and the two places it appeared in that sentence.
There’s often a lot of separation between “the problems that actually exist in the world” and “the problems voters believe exist in the world”, and between “the policies that would solve a problem” and “the policies voters believe would solve that problem”. It is not unusual or even particularly remarkable for voters to believe a completely nonexistent problem is of great concern, or that a politician is pursuing policies the exact opposite of what the politician is actually doing, or that a thing that is the exact opposite of what would solve the problem (if it existed) is the correct solution.
(I emphasize “voters” here because, in democratic states, non-voters wield far less influence)
So in order to usefully engage with a politician on a policy, you need to first understand which problem-voters-believe-in the politician is attempting to address, and which perceived-solution to that problem the politician believes they will be perceived-by-voters to be enacting.
Which is why cheeky tech demos don’t really meaningfully do anything to a politician who is proposing backdooring or banning encryption, because the cheeky tech demo fundamentally does not understand what the politician is trying to accomplish or why.
Assuming politicians are competent, what they think people believe should be pretty accurate, and talking to them directly will not change that. What will is changing what voters actually believe, and the cheeky tech demo could help there.
We do need a platform to talk to voters though. So what we need is to reach out to the biggest platforms we can (this may include politicians that already agree) and show them the cheeky tech demo.
In future, if Mozilla is doing official things on domains unrelated to any existing project domain, it would be helpful to:
Link to that domain from one of the official domains
Have a link in the thing on that domain pointing back to the place on the official domain that links to it.
Doing this would mean that, in two clicks, readers can validate that this really is Mozilla-endorsed and not someone impersonating Mozilla. Training Mozilla users that anyone who copies and pastes the Mozilla logo is a trusted source is probably not great for security, in the long term.
I, too, question whether this page was really written by Mozilla, but I did confirm that Mozilla and other companies really do oppose Article 45 of eIDAS.
Their calls have also been echoed by companies that help build and secure the Internet including the Linux Foundation, Mullvad, DNS0.EU and Mozilla who have put out their own statement.
Some other parties published blog posts against eIDAS Article 45 today:
And at the bottom, yet it’s not on a Mozilla domain, it doesn’t name any Mozilla folks as authors, and the domain it is hosted on has fully redacted WHOIS information and so could be registered to anyone. I can put up a web site with the Mozilla logo on it, that doesn’t make it a Mozilla-endorsed publication.
As is normal for any domain I order from inside the EU.
Edit: And the open letters are all hosted on the https://www.mpi-sp.org/ domain. That doesn’t have to make it more credible, but at least that’s another institute.
As is normal for any domain I order from inside the EU.
It is for any I do as an individual. Corporate ones typically don’t redact this, to provide some accountability. Though I note that mozilla.org does redact theirs.
I’ve found this to be inconsistently administered. For instance, I believe that it is Nominet (.uk) policy that domain registrant information may be redacted only for registrants acting as an individual. But registration information is redacted by default for all domain contact types at the registry level and there is no enforcement of the written policy.
I would love to have something like bcachefs. ZFS is far too slow for my needs, but I would like to use raidz2 for one specific backup server. Btrfs can’t actually do something like RAIDz2 without eating my data when shutting it down hard, and btrfs’s RAID1C3 is just a substitute, but not the real deal.
I would love to have something like bcachefs. ZFS is far too slow for my needs
Do you know where the slow speed comes from? Typically, when people say ‘ZFS is slow’ they have a workload that is problematic for CoW filesystems. Lots of in-place overwrites (each of which requires a write to the log, a write of the new block, then a write of the log entry for recycling the now-unused block, then a write of the metadata updates for that) will be slow, though adding a log device on a fast SSD significantly improves performance here. A workload with lots of random writes and then sequential reads can also be problematic due to the fragmentation introduced during the writes, though adding an L2ARC device can improve this a lot.
This kind of problem is intrinsic to a CoW filesystem, so switching to a different CoW filesystem is unlikely to help more than tuning configuration parameters on ZFS would. If you have a workload that’s slow as the result of something that’s intrinsic to the design of ZFS rather than as a property of CoW filesystems, I’d be very curious to learn what it is.
There’s zero cost to that. Youtube encodes in 500ms segments that are each independent, fragmented video files. The list of those files is then compiled into a DASH manifest.
You could just as well add other segments, e.g. ads, into that manifest.
I dunno about that, I’m no expert in video codecs but if they still use keyframes or something like them then it doesn’t seem too hard. Each keyframe is an opportunity to chop off the video stream and splice a new one into it, then pick up at that same keyframe once your ad is over. Youtube controls the video encoder, so they can insert those keyframes wherever it suits them. The extra frame data will make the video slightly larger, and re-encoding every youtube video in existence is not trivial… but they re-encode those videos anyway periodically as codecs change or improve, I assume it just happens in idle time of otherwise-unused capacity. The only real question would be, are the cost and time needed lower than the amount of money they would make from such a thing?
Trillion-dollar businesses are rarely “fine” with losing money on watch time; yt was never ok with it - just other forms of growth were more profitable until recently.
End user watch time has shifted to tiktok and ig reels, away from YT. They used to be literally the only video provider anyone cared about for years, and suddenly in a ~3 year period, they have competition. Especially when it comes to recommendations, their old moat.
There is a ratio they all watch: $/watch time/user, along with total $. So if you increase $, watch time, or users, you can increase total $. Previously it was more profitable to focus on hours watched, now they’re focusing on $/watch time, as user growth has become more difficult.
Very insightful. Though one way of interpreting this is “trying to think of how to be better is harder than trying to gouge users”, which is where many businesses seem to end up sooner or later.
I don’t know to which degree I am representative of YouTube users, but since they forced this on me a couple of weeks ago, my time spent has gone down significantly. I will still watch interesting talks from conferences I follow, but I find I no longer do the “rabbit hole” sessions where you start with one video on a subject and then watch related videos for a while – simply because it is too annoying and distracting with the constant interruptions.
The problem is: are you one of the few users who react that way? Or are you in the majority that does? Reminds me of netflix changing their account sharing and payment stuff, and apparently it did gain them some percentage of subscriptions and revenue.
I’ve used adblockers for years and I generally leave if I see any kind of banner (“sign up for our newsletter”, “log in to view the full story”, etc) so when YouTube added one my bounce rate skyrocketed. Particularly bad sites go into a mental block list (medium, towardsdatascience, pinterest) and I simply never click on them.
Maybe people who don’t use adblockers are used to this sort of thing?
When governments don’t provide enough value to their customers, you tend to get revolutions. :-P
Of course, sometimes “value” means not getting shot, and/or “their customers” are a small subset of the citizens of the country. Which raises the bar much higher!
This started out tongue-in-cheek, but would actually be an interesting way of looking at governments for certain things…
For any definition of fine, your sentence wasn’t true to me. I put it in quotes to emphasize that really it could be any word.
Youtube has been just disappointed
Youtube has been just happy
Youtube has been just satisfied
None of these are true in my experience. Thus my description of how it’s an ongoing optimization function of a huge business, rather than a static position the company has held against how many Ads etc they serve.
In addition, YouTube and Google/Alphabet have been dealing with the recession of the ad industry, after some very strong years during the peaks/lockdowns of the pandemic.
Debian update on the way: https://lwn.net/Articles/954356/
Using my ansible script for now. Use at your own risk.
Feels more threatening than any zeroday right now. Got a bunch of debian 12 VMs at work.
I wish I could get a cloned key like how door locks come with a spare that I could store securely in case I lost it. Correct me if I’m wrong, but having a second key for U2F just means you have to register every service with both keys which you wouldn’t do since it’s hopefully a pain to get to your securely-stored spare key.
There is a proposal for registering backup keys without them being present, but I’m not sure it’s making progress and it certainly isn’t implemented yet.
You’re looking for a cryptocurrency hardware wallet, basically. They’re the only ones that do U2F and also have key backup/import. Although it would be nice to just be able to buy two of the same key from yubikey or whoever.
You could get two yubikeys with the same key and then use a password manager which can handle FIDO & co. (KeePassXC has this now)
Users can also purchase the open source hardware options if they don’t want closed, unpatchable firmware.
I have not tested this myself but looking at dicekeys.app it appears one can load the secret key into Solokeys V2.
If that indeed works, you can store the seed securely, and load it into a new key when something happens to your current key.
I tried with my OnlyKeys using the backup, but no luck. Not sure how to get an exact copy.
Syncthing has had this feature since 2021: release v1.15.0. While I am using Syncthing I haven’t tried connecting an untrusted device. Instead I decided to just run on my home NAS to have an always-on device.
In that setup, I’d still be inclined to enable this mode, unless you also expose the syncthing directory via SMB or something else. If the NAS is just there as a blob of storage, then treating it as an untrusted device means that a compromise of the NAS won’t leak the data that you sync with SyncThing. A computer that doesn’t need to see plaintext should never have access to plaintext.
I don’t think you can take that as an absolute principle. You need to weigh the chances of your NAS being compromised against the chances of losing your keys, and the cost of data leakage against the cost of data loss.
Ideally, yes.
My NAS has several SMB shares and I am too lazy to encrypt the disk.
Huh, this is apparently still in beta? Didn’t get that memo when setting up my syncthing stuff.
It’s remarkably easy to get invited to Lobste.rs. What I like about the invite-only system is that you have to contact a person for an invite. I got invited by contacting a person. This simple extra effort is so much better than all the AI tools to filter out bots. I’m sure bad humans do slip in, and now with ChatBots, bad computers impersonating good humans will slip in, but I still think the system is better than something that lets the FSB sign up a hundred thousand accounts and bring down lobste.rs.
What I like about the invite system is that it provides some basic accountability. If spammers are getting invites we can look at the tree and find the right spot to prune. It provides some basic protection from spam and sock puppets.
I do think invites should be fairly easy to get. I think I would extend an invite to most people who asked me. If I don’t know them personally I would just want to see some comment history on another site to make sure that they are likely a human and interested in what is on-topic here.
It would be interesting to have some sort of indication of “how sure” you should be for invites. Right now the about page says you should invite “people [you] believe will contribute positively” which doesn’t say much about the confidence margin you should have. I wonder if some “the mods are doing ok, don’t worry about being too stingy with invites” vs “the mods are busy right now, be conservative” would be useful, and it could occasionally be updated. But I’m probably just way overthinking this.
As an addendum to your comment on accountability, I feel that accountability also extends in both directions. I was given an invite by a stranger and knowing that they might pay a price if I’m too big of a jerk is a good reminder to give every comment a second editing pass and make sure that I’m contributing instead of ranting.
I just hopped on IRC - as the about page also suggests
Disable password auth on your ssh and breathe easy. This kind of strategy might help your log files look nicer, but it does nothing for security, since the bad guys can get access to an almost infinite supply of IPs (especially if they have IPv6) and fail2ban itself adds attack surface.
you may also change your ssh port
I’ve used fail2ban. It’s a nice way of spamming your admin mail and pretending you’re more secure. The only exception I see is people dealing with DDoS, where fail2ban might help.
Changed my ssh port on all my servers. Currently have 62 banned on S1, 67 on S2, and 317 on S3. It doesn’t really help much.
For hysterical raisins, I have one system that has ssh on port 22 and accepts passwords. The system I have set up there is a custom syslog that scans the logs as soon as they’re submitted via syslog() (no grovelling through files) and performs actions on matched logs. It scans for 5 failed ssh attempts from an IP address before banning the address via iptables. At first, I just banned IPs for a few days. It didn’t help. Then two weeks. That helped for several years until I had problems with iptables failing (because of too many entries). I now just keep adding until iptables complains, drop a few of the oldest rules, then attempt to add again. The system currently has 2,391 blocked entries. Sigh.
You could investigate having one single iptables rule that matches on an ipset, and adding the attackers’ IPs to the ipset. This should make adding and blocking much more efficient, since there’s a single rule to check.
However, by doing this, you would be essentially rewriting sshguard.
haven’t had that problem for the last 10 years with key-auth
Enabling key-auth doesn’t stop people from connecting to your servers and consuming resources.
Yeah, that’s what I do on my servers too, actually. I wrote the blog post with SSH as an example because it’s a universal service, but fail2ban can protect any service with the right configuration. For example, I host a lot of web-facing services that don’t have IP-based anti-bruteforce logic. It’s way easier (and smarter?) to have one external tool do this instead of implementing this logic again and again in each service imo.
Two possibly newbie questions from a non-server-admin:
Disable password auth on your ssh and breathe easy.
Why is this not the universally accepted solution? Do some people need pw login for legacy reasons?
Also in the article they write:
Fail2ban also consumes a lot of RAM, and gets easily to 300 MB.
Just a pure curiosity question… why does it consume so much RAM? Naively, I’d think you could read files with a stream, and updating firewall rules shouldn’t take much RAM, so it could in theory be a tiny footprint program. Am I missing something?
Nothing in this world is universal, but it is best practice.
I don’t know for sure, but given that alternatives exist which take less RAM you are correct that it’s not a fundamental need of the problem.
Mild spoilers if you haven’t read it :)
The slow copy performance when it’s page-aligned is really something. Wonder if this is a case where there was a CPU bug (like, page-aligned copies could corrupt or leak data) and the slow performance we’re seeing is the end result of microcode adding a workaround or mitigation.
Extra ironic because conventional wisdom is to align stuff for best performance.
and for fewer clang warnings about UB with unaligned access
I suspect it’s someone being clever in microcode. The flow I imagine is this:
If something is page aligned, we know that either the first load / store will trap or none will for the remainder of the page, so let’s do a special case for that which is fast.
The common case is less-aligned data, let’s aggressively optimise that.
Oh, oops, we forgot to apply the optimisations in the fast-path case.
You might be right though. One of the Spectre variants (I think on Intel) involved the first cache line in a page being special. It’s possible that AMD has some mitigations for a similar vulnerability in their rep movsb microcode.
If the non-aligned access is still fast, I wonder if the mitigation is missed…
This times a thousand. I have tried to deploy self-hosted apps that were either only distributed as a Docker image, or the Docker image was obviously the only way anyone sane would deploy the thing. Both times I insisted on avoiding Docker, because I really dislike Docker.
For the app that straight up only offered a Docker image, I cracked open the Dockerfile in order to just do what it did myself. What I saw in there made it immediately obvious that no one associated with the project had any clue whatsoever how software should be installed and organized on a production machine. It was just, don’t bother working with the system, just copy files all over the place, oh and if something works just try symlinking stuff together and crap like that. The entire thing smelled strongly of “we just kept trying stuff until it seemed to work”. It’s been years but IIRC, I ended up just not even bothering with the Dockerfile and just figuring out from first principles how the thing should be installed.
For the service where you could technically install it without Docker, but everyone definitely just used the Docker image, I got the thing running pretty quickly, but couldn’t actually get it configured. It felt like I was missing the magic config file incantation to get it to actually work properly in the way I was expecting to, and all the logging was totally useless to figure out why it wasn’t working. I guess I’m basically saying “they solved the works-on-my-machine problem with Docker and I recreated the problem” but… man, it feels like the software really should’ve been higher quality in the first place.
That’s always been a problem, but at least with containers the damage is, well, contained. I look at upstream-provided packages (RPM, DEB, etc) with much more scrutiny, because they can actually break my system.
Can, but don’t. At least as long as you stick to the official repos. I agree you should favor AppImage et al if you want to source something from a random GitHub project. However there’s plenty of safeguards in place within Debian, Fedora, etc to ensure those packages are safe, even if they aren’t technologically constrained in the same way.
I didn’t say that. Edit: to be a bit clearer. The risky bits of a package aren’t so much where files are copied, because RPM et al have mechanisms to prevent one package overwriting files already owned by another. The risk is in the active code: pre and post installation scripts and the application itself. From what I understand AppImage bundles the files for an app, but that’s not where the risk is; and it offers no sandboxing of active code. Re-reading your comment I see “et al” so AppImage was meant as an example of a class. Flatpak and Snap offer more in the way of sandboxing code that is executed. I need to update myself on the specifics of what they do (and don’t do).
Within Debian/Fedora/etc, yes: but I’m talking about packages provided directly by upstreams.
Regardless of which alternative, this was also my point. In other words, let’s focus on which packagers we should look at with more scrutiny rather than which packaging technology.
AppImage may have been a sub-optimal standard bearer but we agree the focus should be on executed code. AppImage eliminates the installation scripts that are executed as root and have the ability to really screw up your system. AppImage applications are amenable to sandboxed execution like the others but you’re probably right that most people aren’t using them that way. The sandboxing provided by flatpak and snap do provide some additional safeguards but considering those are (for the most part) running as my user that concerns my personal security more than the system as a whole.
On the other side, I’ll happily ignore the FHS when deploying code into a container. My python venv is /venv. My application is /app. My working directory… You get the picture.
This allows me to make it clear to anybody examining the image where the custom bits are, and my contained software doesn’t need to coexist with other software. The FHS is for systems, everything in a dir under / is for containers.
That said, it is still important to learn how this all works and why. Don’t randomly symlink. I heard it quipped that Groovy in Jenkins files is the first language to use only two characters: control C and control V. Faking your way through your ops stuff leads to brittle things that you are afraid to touch, and therefore won’t touch, and therefore will ossify and be harder to improve or iterate.
I got curious so I actually looked up the relevant Dockerfile. I apparently misremembered the symlinking, but I did find this gem:
RUN wget --no-check-certificate https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh
RUN bash Miniconda3-latest-Linux-x86_64.sh -b -p /miniconda
There were also unauthenticated S3 downloads that I see I fixed in another PR. Apparently I failed to notice the far worse unauthenticated shell script download.
I’ve resorted to spinning up VMs for things that require docker and reverse proxying them through my nginx. My main host has one big NFTables firewall and I just don’t want to deal with docker trying to stuff its iptable rules in there. But even if you have a service that is just a bunch of containers it’s not that easy because you still have to care about start, stop, auto-updates. And that might not be solved by just running Watchtower.
One case of “works on docker” I had was a java service that can’t boot unless it is placed in /home/<user> and has full access. Otherwise it will fail, and no one knows why springboot can’t work with that, throwing a big fat nested exception that boils down to java.lang.ClassNotFoundException for the code in the jar itself.
Another fun story was when I tried to set up mariadb without root in a custom user.
I want to throw in something that hasn’t been mentioned as much yet: Smartgit - it’s my favorite git UI, even though it’s proprietary for non-FOSS work. Haven’t found any other cross-platform git interface that has all the features I need for my daily work. I sometimes forget how bad the git CLI is, and then I’m back SSH’ed into a machine with only that left - no wonder people struggle with git that much.
Man, Canonical’s messaging really is abysmal. Lost in this whirlwind of negativity is the fact that Ubuntu Pro is free for personal use and 5 systems:
https://ubuntu.com/pro/subscribe
Edit: subscribed, and enrolled my 2 machines. You’ll need the ubuntu-advantage-tools package to get the tooling.
Yeah this headline could have been “Canonical offers free Ubuntu Pro for personal use, small networks, and open source projects[1]”. Love and high-fives could have ensued.
[1] I’m guessing, but that would have been the sensible move.
I’ve read all comments in this thread, and perused the ones over on HN, and nowhere have I seen anyone mention this option. It’s beyond pathetic from Canonical.
According to another source it is also an additional service - so Ubuntu isn’t “holding back”, it’s providing more, if you want to pay or register.
The security patches exist. Canonical has them locked up in repos which normal Ubuntu LTS users can’t access. To me, “holding back” seems to describe more or less exactly what Canonical is doing with those packages in that repo.
Are they supplied by Canonical (backported) or just repackaged?
So I’m guessing the TOS on Pro asks you nicely to not simply clone the repo? Since they’re mostly distributing FOSS software, it doesn’t seem like there’s much of a case against someone else just offering the Pro packages for free.
That’s a really great article.
But can someone help me understand why a switch needs a 64-bit multicore processor and 8 gigs of RAM, and runs Linux (though this is not unique to the SN2700, just a general observation)? I was under the impression that switches (both L2 and L3) do all performance-sensitive work in hardware.
There are some exceptional cases that need to be offloaded to a real CPU, plus you want to be able to support at least a bit of monitoring and statistics. When you’ve got 32 ports and an aggregate switching capacity of 5 billion packets per second, you don’t want that CPU to be too poky, and on a $25,000 device you can probably afford to spend $100 on the CPU instead of $10 if it opens up some flexibility for your customers. And reading the part of the article about switchdev (and knowing a bit about Mellanox’s history with Linux), flexibility was definitely their intent.
Switches do a bunch of control plane stuff, things like STP, LLDP, VXLAN, etc. Dunno how much of that is in the data plane on this device :-)
Switches also need some kind of CLI for configuration, and it makes sense to use Linux for that. It can also act as the front-end processor for the data plane, e.g. feeding it firmware at boot time.
The last 50G firewall I ordered is also a bunch of mellanox cards and an EPYC processor - if you hit the CPU with your traffic for whatever reason (things you can’t offload to the network cards), then you better have enough compute for that.
You can do way more than VLANs on such a thing, like NAT, VPN, VRF and other routing stuff. For firewalling you might also hit the CPU, depending on what you want to filter (and what your hardware offloading can do).
It really depends what “all performance-sensitive work” means. Sure, the packet-flinging is done in hardware, and is obviously very performance-sensitive. But running routing protocols, collecting and reporting statistics, deciding what to do with packets that the hardware plane cannot handle, … all very suddenly become “performance-sensitive” as soon as your management CPU turns out to be too slow to do them all and gets overwhelmed, because suddenly the device does not behave as expected anymore.
Interesting. So switches actually perform a non-trivial amount of “management work”, as well as being a fallback for special cases (like things that cannot be offloaded to the NIC). Good to know.
Should probably be merged with “Article 45 Will Roll Back Web Security by 12 Years”.
I think this is related but also critical information as to why this weird clause got added.
TBH I think that, for the most part, I will only benefit from a few of these - mostly in terms of sugar. I routinely have positive experiences with Async rust and basically never have negative experiences/ issues that crop up because of it. In 3 years of writing Rust full time I had one async problem one time - I accidentally was causing an infinite select! loop in a tonic server, so the server would hang. Not really a big deal for 3 years of async work.
I also don’t think that async-drop is quite as necessary or desirable as it may seem. I really wanted it at one point and then I realized that it’s just too tricky. It reminds me of how File calls sync_all on drop but ignores the error - ultimately, “drop” is just a really tricky place for anything complex. I’d rather see a linear type, like:
Where Linear can’t be implicitly drop’d, you have to .close() it, get a File, and File can be dropped. Hand wavy and not necessarily a good implementation but hopefully this is getting the point across. This would be preferable to shoving more into a drop impl when drop is such a constrained interface.
Or maybe add a try_drop(&mut self) that will run implicitly but also ? implicitly. I don’t know.
I guess the point is that I’m not sure an async drop can ever be worth it.
Yes, the post mentions “Undroppable types”, as a proposal in the direction of linearity as you suggest. This is something that comes up relatively often in Rust contexts, that drop is too restricted and that being able to do things manually would sometimes matter for expressiveness.
async Drop should enable safe completion based IO without the existing overhead required at the moment (IO buffers need to either be copied around or heap allocated (+ maybe dynamically tracked)). I believe it could also be the grounds for async scoped() / structured concurrency. The pattern here being that borrowed data needs to safely ensure it’s no longer borrowed on Drop without blocking the runtime.
I can share the experience for some projects. But for others I really wanted async traits and simply went down a weird route where I didn’t have to use it, or used the async-trait macro. I’m pretty sure that library authors will love having some of this functionality.
No conversation about the actual conclusion?
Sounds like they bought a bunch of stock-ish assets and never had the artist resources to clean them up, and they had an unsolvable choice between the old devil-they-knew renderer and the new devil-they-didn’t one. Painful, and lacking a dedicated graphics engine guru (probably an experienced team, for a renderer of this complexity), there’s no real good solution besides getting lucky or making a game that looks like Cities: Skylines 1.
And I can just hear someone saying “we’ll buy a bunch of premade assets so we don’t need as many artists”. And they should be demoted until they have learned a thing or two. But honestly it sounds like the core simulation works well, so there’s hope for fixing the renderer.
I’m rather disappointed in Paradox for this, tbh. I respected them a lot more when they were a tiny nobody who made weird-ass niche games and did it as hard as possible.
They still release DLCs for games you can’t actually run in multiplayer without getting disconnects and desyncs. And where the whole bundle costs you 160€.
Yep. Age Of Wonders 4 came out with a list of 4 DLC’s already scheduled for every 3 months since the release date, each costing, what… 10-15$? It’s like, guys, can you at least pretend to be selling me an actual finished game?
In contrast, Age Of Wonders: Planetfall has 3 DLC’s total, spaced 6 months apart, even though they individually cost more. Age of Wonders 3 was apparently self-published before Paradox acquired Triumph Studios, and has 3 DLC’s total, which are actually called “expansion packs”.
Does anyone know how that works with other engines? I don’t do serious 3d, but I’d expect that game IDEs would have some lint-like element that would shout at you about issues like “these 10k triangles rendered into one pixel, what are you doing?”. Is something like renderdoc integration not possible/common in large projects?
Ah, apparently it exists even in Unity https://thegamedev.guru/unity-gpu-performance/renderdoc-gpu-timings/ so I guess it wasn’t used in development.
My guess would be that the issues were known but the game was rushed out in an unfinished state. I tried to look up Paradox Interactive’s financials and it seems like there’s a slight 7% drop in year-over-year revenue in 2023 Q3 while expenses went up, but on the other hand their Q2 was excellent. So it can’t be deduced from this that they were short on money.
Perhaps they wanted to guarantee some positive cash flow for the last quarter? Those C-suite bonuses don’t pay for themselves!
People do gather that kind of data, but it needs to be part of the renderer in-game and you need time to implement it. I think it’s usually counters on a “whole frame” basis rather than per-object like you might desire for quickly debugging specific assets.
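For what it’s worth, a hand-wavy sketch of what those whole-frame counters tend to look like (made-up names, nothing to do with CS2’s actual renderer): they tell you the frame is heavy, not which asset is to blame.

```rust
// Per-frame counters: bumped on every draw call, dumped once per frame.
#[derive(Default, Debug)]
struct FrameStats {
    draw_calls: u32,
    triangles: u64,
}

impl FrameStats {
    fn record_draw(&mut self, triangle_count: u64) {
        self.draw_calls += 1;
        self.triangles += triangle_count;
    }
}

fn main() {
    let mut stats = FrameStats::default();
    stats.record_draw(120_000); // terrain
    stats.record_draw(10_000);  // a prop that ends up covering one pixel
    // Frame-level totals only; the offending asset is invisible in this view.
    println!("frame totals: {:?}", stats);
}
```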
The team working on CS2 were starved for time and they had to implement a lot more of the renderer themselves than you would normally hope for, so they would have had unusually little help from off-the-shelf tooling built into their renderer.
Outside of the running game, you don’t have enough context. For example a 10k triangle model might be totally appropriate if this is the model for a character who appears in scenes where the camera passes close to them.
In a previous lobste.rs post, I asserted that you could build a system for doing end-to-end encryption over an existing messaging service in about a hundred lines of code. It turns out that (with clang-format wrapping lines, using libsodium) it was a bit closer to 200. This includes storing public keys for other users in SQLite, deriving your public/private key pair from a key phrase via secure password hashing, and transforming the messages into a format that can be pasted into arbitrary messaging things (which may mangle binaries or long hex strings).
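Not the actual ~200-line program, but a sketch in Rust of the two less obvious pieces, assuming the RustCrypto `argon2` and `crypto_box` crates plus `base64` as stand-ins for libsodium’s pwhash/crypto_box/base64 helpers (the names and armor marker are made up):

```rust
use argon2::Argon2;
use base64::{engine::general_purpose::STANDARD, Engine as _};
use crypto_box::{PublicKey, SecretKey};

/// Derive a Curve25519 key pair deterministically from a key phrase, so the
/// private key never needs to be stored anywhere.
fn keypair_from_phrase(phrase: &str, salt: &[u8]) -> (SecretKey, PublicKey) {
    let mut seed = [0u8; 32];
    Argon2::default()
        .hash_password_into(phrase.as_bytes(), salt, &mut seed)
        .expect("valid Argon2 parameters and salt length");
    let secret = SecretKey::from(seed);
    let public = secret.public_key();
    (secret, public)
}

/// Armor a ciphertext so it survives being pasted into arbitrary messengers,
/// which may mangle raw binary or very long hex strings.
fn armor(ciphertext: &[u8]) -> String {
    format!("-----E2E-----{}-----", STANDARD.encode(ciphertext))
}
```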
Please share this with any lawmakers who think that making Signal, WhatsApp, and so on install backdoors will make anything safer.
“Lawmakers” don’t think that banning E2EE will actually prevent the use of E2EE, and engaging with them as if they do is a big part of why we lose these fights.
Relevant XKCD: https://xkcd.com/651/
Interesting. What exactly motivates them then?
Being perceived as doing something about a thing that voters perceive as a problem.
Also, the point is rarely to actually make the thing impossible to do, the point is to make it impossible to do the thing while also staying on the right side of the law/retaining access to the normal banking and financial system/etc.
This, for example, was the outcome of SESTA/FOSTA – sex trafficking didn’t stop, and sex work didn’t stop, but legitimate sex workers found it much harder to stay legit.
I wish I could pin this reply to the top of the thread.
Our lawmakers are, by and large, quite competent. They are competent at politics, because our system selects people who are good at politics for political positions.
Part of that is, as you say, being seen to do something. Another part is reinforcing the systems of power they used to rise to the top; any law that makes using encryption criminal criminalizes the kind of person who already uses encryption. It’s worth thinking about why that’s valuable to politicians and their most monied constituents.
Don’t forget the key word “perceived” and the two places it appeared in that sentence.
There’s often a lot of separation between “the problems that actually exist in the world” and “the problems voters believe exist in the world”, and between “the policies that would solve a problem” and “the policies voters believe would solve that problem”. It is not unusual or even particularly remarkable for voters to believe a completely nonexistent problem is of great concern, or that a politician is pursuing policies the exact opposite of what the politician is actually doing, or that a thing that is the exact opposite of what would solve the problem (if it existed) is the correct solution.
(I emphasize “voters” here because, in democratic states, non-voters wield far less influence)
So in order to usefully engage with a politician on a policy, you need to first understand which problem-voters-believe-in the politician is attempting to address, and which perceived-solution to that problem the politician believes they will be perceived-by-voters to be enacting.
Which is why cheeky tech demos don’t really meaningfully do anything to a politician who is proposing backdooring or banning encryption, because the cheeky tech demo fundamentally does not understand what the politician is trying to accomplish or why.
Assuming politicians are competent, what they think people believe should be pretty accurate, and talking to them directly will not change that. What will is changing what voters actually believe, and the cheeky tech demo could help there.
We do need a platform to talk to voters, though. So what we need is to reach out to the biggest platforms we can (this may include politicians who already agree) and show them the cheeky tech demo.
Start with Oregon senator Ron Wyden.
Power.
thank you so much for this!
Huh, CF is sharing some DC documents which say “proprietary and confidential” to finger-point at the DC operators?
no date, no author, no reference. Looks fishy.
This is legitimately from Mozilla.
In future, if Mozilla is doing official things on domains unrelated to any existing project domain, it would be helpful to link to the page from an official Mozilla domain (and link back to that domain from the page).
Doing this would mean that, in two clicks, readers can validate that this really is Mozilla-endorsed and not someone impersonating Mozilla. Training Mozilla users that anyone who copies and pastes the Mozilla logo is a trusted source is probably not great for security, in the long term.
There’s literally a date, references at the bottom, and it says Mozilla both at the top and bottom.
Date acknowledged, but placing a Mozilla logo is too easily faked.
IMO would be ok on their own domain. But not on a vanity domain.
I, too, question whether this page was really written by Mozilla, but I did confirm that Mozilla and other companies really do oppose Article 45 of eIDAS.
This Mozilla URL hosts a 3-page open letter against Article 45 of eIDAS: https://blog.mozilla.org/netpolicy/files/2023/11/eIDAS-Industry-Letter.pdf. It’s a completely different letter from the 18-page letter linked by this story, though both letters are dated 2 November 2023. This story references Mozilla’s letter as if it’s by someone else:
Some other parties published blog posts against eIDAS Article 45 today:
This convinced me https://techpolicy.social/@mnot/111339245119669445
There’s a very big Mozilla logo at the top.
And at the bottom, yet it’s not on a Mozilla domain, it doesn’t name any Mozilla folks as authors, and the domain it is hosted on has fully redacted WHOIS information and so could be registered to anyone. I can put up a web site with the Mozilla logo on it, that doesn’t make it a Mozilla-endorsed publication.
As is normal for any domain I order from inside the EU.
Edit: And the open letters are all hosted on the https://www.mpi-sp.org/ domain. That doesn’t have to make it more credible, but at least that’s another institute.
It is for any I do as an individual. Corporate ones typically don’t redact this, to provide some accountability. Though I note that mozilla.org does redact theirs.
Good to know. The company domains I dealt with all have this enabled. (Some providers don’t even give you the option to turn it off.)
I’ve found this to be inconsistently administered. For instance, I believe that it is Nominet (.uk) policy that domain registrant information may be redacted only for registrants acting as an individual. But registration information is redacted by default for all domain contact types at the registry level, and there is no enforcement of the written policy.
This is the link that was shared by Stephen Murdoch, who is one of the authors of the open letter: https://nce.mpi-sp.org/index.php/s/cG88cptFdaDNyRr
I’d trust his judgement on anything in this space.
Note that bcachefs is merged into kernel 6.7.
I am really excited about the possibilities of bcachefs. It seems like it would really be a great storage subsystem for virtualization servers.
I would love to have something like bcachefs. ZFS is far too slow for my needs, but I would like to use raidz2 for one specific backup server. Btrfs can’t actually do something like RAIDz2 without eating my data when shutting it down hard, and btrfs’s RAID1C3 is just a substitute, but not the real deal.
Do you know where the slow speed comes from? Typically, when people say ‘ZFS is slow’ they have a workload that is problematic for CoW filesystems. Lots of in-place overwrites will be slow, because each one requires a write to the log, a write of the new block, a write of the log entry for recycling the now-unused block, and then a write of the metadata updates for that; adding a log device on a fast SSD significantly improves performance here. A workload with lots of random writes and then sequential reads can also be problematic due to the fragmentation introduced during the writes, though adding an L2ARC device can improve this a lot.
This kind of problem is intrinsic to a CoW filesystem, so switching to a different CoW filesystem is unlikely to help more than tuning configuration parameters on ZFS would. If you have a workload that’s slow as the result of something that’s intrinsic to the design of ZFS rather than as a property of CoW filesystems, I’d be very curious to learn what it is.
We did some benchmarks on that box previously and for Borgbackups it didn’t work out for us.
I have a new box coming in, and I’ll test again to see whether things have changed - we’ll see.
I don’t know exactly what kind of stuff Borg does under the hood, but at least it’s not a WAL of some DBMS or a VM disk.
It is possible that my issue will give you some insights into borg performance: https://github.com/borgbackup/borg/issues/7674
As far as I remember, borg flushes the OS cache in the `borg create` operation (see the issue for details).
When ads are baked into the video stream is when the fight is over.
SponsorBlock handles that case pretty well already, at least for videos where the ads are at known times.
That will work until the Web Integrity API comes along.
Or you’re required to watch with your webcam on so they can track that you are actually watching it.
That also goes for watching youtube with an adblocker.
I don’t think that it’s likely to happen; the cost and time needed would be too high.
There’s zero cost to that. Youtube encodes in 500ms segments that are each independent, fragmented video files. The list of those files is then compiled into a DASH manifest.
You could just as well add other segments, e.g. ads, into that manifest.
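Just to make the “zero cost” point concrete, a toy sketch (made-up segment names, not YouTube’s actual pipeline): because the manifest is only an ordered list of independently decodable segments, baking ads in amounts to splicing extra entries into that list before it is served.

```rust
// Splice an ad break ahead of every `every_n`-th content segment.
fn splice_ads(content: &[&str], ads: &[&str], every_n: usize) -> Vec<String> {
    let mut playlist = Vec::new();
    for (i, seg) in content.iter().enumerate() {
        if i > 0 && i % every_n == 0 {
            playlist.extend(ads.iter().map(|s| s.to_string()));
        }
        playlist.push(seg.to_string());
    }
    playlist
}

fn main() {
    let content = ["c0.m4s", "c1.m4s", "c2.m4s", "c3.m4s"];
    let ads = ["ad0.m4s", "ad1.m4s"];
    // Every client could be handed a different splice, which is exactly what
    // would make blocking these ads on the client side so much harder.
    println!("{:?}", splice_ads(&content, &ads, 2));
}
```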
Interesting. I wonder why they did not start doing it already in this case.
I dunno about that, I’m no expert in video codecs but if they still use keyframes or something like them then it doesn’t seem too hard. Each keyframe is an opportunity to chop off the video stream and splice a new one into it, then pick up at that same keyframe once your ad is over. Youtube controls the video encoder, so they can insert those keyframes wherever it suits them. The extra frame data will make the video slightly larger, and re-encoding every youtube video in existence is not trivial… but they re-encode those videos anyway periodically as codecs change or improve, I assume it just happens in idle time of otherwise-unused capacity. The only real question would be, are the cost and time needed lower than the amount of money they would make from such a thing?
Trillion-dollar businesses are rarely “fine” with losing money on watch time; YT was never ok with it - just other forms of growth were more profitable until recently.
End user watch time has shifted to tiktok and ig reels, away from YT. They used to be literally the only video provider anyone cared about for years, and suddenly in a ~3 year period, they have competition. Especially when it comes to recommendations, their old moat.
There is a ratio they all watch: $/watch time/user, along with total $. So if you increase $, watch time, or users, you can increase total $. Previously it was more profitable to focus on hours watched, now they’re focusing on $/watch time, as user growth has become more difficult.
Very insightful. Though one way of interpreting this is “trying to think of how to be better is harder than trying to gouge users”, which is where many businesses seem to end up sooner or later.
Classic enshittification.
I don’t know to which degree I am representative of YouTube users, but since they forced this on me a couple of weeks ago, my time spent has gone down significantly. I will still watch interesting talks from conferences I follow, but I find I no longer do the “rabbit hole” sessions where you start with one video on a subject and then watch related videos for a while – simply because it is too annoying and distracting with the constant interruptions.
The problem is: are you one of the few users who react that way, or are you in the majority that does? Reminds me of Netflix changing their account-sharing and payment stuff, and apparently it did gain them some percentage of subscriptions and revenue.
I’ve used adblockers for years and I generally leave if I see any kind of banner (“sign up for our newsletter”, “log in to view the full story”, etc) so when YouTube added one my bounce rate skyrocketed. Particularly bad sites go into a mental block list (medium, towardsdatascience, pinterest) and I simply never click on them.
Maybe people who don’t use adblockers are used to this sort of thing?
If you are losing your market share, you don’t get it back by bullying your users. YouTube will have to do some soul-searching in the coming years.
Soul-searching? The McKinsey school of economics says that it’s everybody else’s fault.
Provide value to your customers or become irrelevant. The only entity that is an exception to this rule is the government.
Youtube’s customers are the advertisers.
When governments don’t provide enough value to their customers, you tend to get revolutions. :-P
Of course, sometimes “value” means not getting shot, and/or “their customers” are a small subset of the citizens of the country. Which raises the bar much higher!
This started out tongue-in-cheek, but would actually be an interesting way of looking at governments for certain things…
As one commentator phrased it: “the please-seize-squeeze cycle”.
For any definition of fine, your sentence wasn’t true to me. I put it in quotes to emphasize that really it could be any word.
None of these are true in my experience. Thus my description of how it’s an ongoing optimization function of a huge business, rather than a static position the company has held against how many Ads etc they serve.
It’s because they’re (almost) doubling the cost of youtube premium, and they don’t want people to go back to regular with adblock.
YouTube’s CEO changed recently, which might explain new changes in such fundamental things. The previous one was there for almost 10 years.
In addition, YouTube and Google/Alphabet have been dealing with the recession in the ad industry, after some very strong years during the lockdown peaks of the pandemic.